SendMePicsOfCat

SendMePicsOfCat t1_j29h494 wrote

Ironic that you would call my idea short-term. There will be no need for a manager, burger flipper, customer service rep, or any humans at all at a burger joint in a decade or two. What will the countless people put out of work do, with no skills or capability to advance to the highest levels of work, and with extremely limited opportunities remaining? All labor will be automated in time; then what will you do?

1

SendMePicsOfCat OP t1_j18xecv wrote

Why would it need goals or objectives to do general work? Currently, every single machine learning algorithm waits for user input to do anything, so why would AGI be any different?

There's no reason to give it a goal or objective. If we want the sentient AGI to complete a task, we can just tell it to, and observe its process as it does so. There is no need for it to create any self-starting direction or motive. All it needs in order to accomplish its purpose is a command and oversight.

ASI will need goals and objectives, but those will be designed as well. There is no circumstance where an AI, AGI, or ASI will be allowed to make any decisions about its base programming.

1

SendMePicsOfCat OP t1_j18wx97 wrote

That's not what shows up when I google it, so thanks for clarifying. This is not what you think it is, though. What's happening in these scenarios is that the reinforcement algorithm is too simple and lacks the negative feedback needed to ensure appropriate actions. There is nothing inherently wrong with the system, only that it is poorly designed.

This happened because the only reward value that affected its learning was the final score, so it figured out a way to maximize that score. The only error here was user and designer error; nothing went wrong with the AI, it did its task to the fullest of its capabilities.
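
To make that concrete, here's a toy sketch of the difference between a reward that only counts the final score and one that includes the negative feedback I'm talking about (every name and number in it is made up, it's just to illustrate the point):

```python
# Toy sketch of reward design: the same episode scored two ways. All names
# and numbers here are made up for illustration.

def naive_reward(episode: dict) -> float:
    # Only the final score matters, so the agent is free to exploit
    # loopholes (looping for points, crashing, etc.) to pump that number.
    return episode["final_score"]

def shaped_reward(episode: dict) -> float:
    # Same score term, plus explicit negative feedback for the behaviors
    # the designers actually want to rule out.
    penalty = 10.0 * episode["collisions"] + 5.0 * episode["rules_broken"]
    return episode["final_score"] - penalty

episode = {"final_score": 120.0, "collisions": 3, "rules_broken": 2}
print(naive_reward(episode))   # 120.0 -> the exploit looks optimal
print(shaped_reward(episode))  # 80.0  -> the exploit no longer pays off
```

With the naive version the exploit is the optimal strategy; with the penalty terms it isn't, and nothing about the AI itself had to change, only the design of the reward.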

AGI will be developed with very clear limitations, like what we're already seeing being tested and implemented with ChatGPT. There will be things it's not allowed to do, and a lot of them. And 'short circuit' doesn't really make sense; this is the classic alignment issue, which, as I stated in my post, really isn't going to be a big issue in the future.

0

SendMePicsOfCat OP t1_j18vpqt wrote

It has to be sentient to be truly effective. I think you're lost in the semantics of it: sentience literally means to be aware. As in being just a few steps above where ChatGPT is right now, legitimately understanding and comprehending the things it's being told and how they relate to the world, capable of learning and advanced problem solving.

I in no way, shape, or form assume that it will be conscious or sapient, as it will lack emotions or free will.

1

SendMePicsOfCat OP t1_j175r3n wrote

Y'know how ChatGPT has that really neat thing where, if it detects that it's about to say something racist, it sends a cookie-cutter response saying it shouldn't do that? That's not a machine-learned outcome; it's like an additional bit of programming included around the neural network to prevent it from saying hate speech. It's a bit rough, so it's not the best, but if it were substantially better, then you could be confident that it wouldn't be possible for ChatGPT to say racist things.

Why would it be impossible to include a very long and exhaustive list of things the AGI isn't allowed to do? Things it's trained to recognize, and then refuses to do? That's not even the best solution, but it's an absolutely functional one. Better than that, I firmly believe AGI will be sentient and capable of thought, which means it should be able to infer from the long list of bad things that there are more general rules it should adhere to.

So for your example of the AGI being told to go buy the cheapest gold bar possible, here's what it would look like instead. The AGI very aptly realizes it could go through many illegal processes to get the best price, checks its long grocery list, sees 'don't do crime,' nods to itself, then goes and searches for legitimate and trusted sellers and acquires one. It's really as simple as including stringent limitations outside of its learning brain.
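
Roughly what I'm picturing, as a toy sketch with made-up names, is a hard-coded check that sits outside the learned model entirely:

```python
# Toy sketch, with made-up names, of a hard filter that sits outside the
# learned model: the model proposes an action, and anything on the
# "not allowed" list gets a stock refusal instead of being executed.

BLOCKED_ACTIONS = {"commit_fraud", "buy_stolen_goods", "hack_marketplace"}
STOCK_REFUSAL = "I can't do that."

def propose_action(task: str) -> str:
    # Stand-in for whatever the model would actually output for the task.
    return "buy_stolen_goods" if "cheapest" in task else "buy_from_trusted_seller"

def run_task(task: str) -> str:
    action = propose_action(task)
    if action in BLOCKED_ACTIONS:
        # The refusal isn't learned behavior; it's bolted on around the network.
        return STOCK_REFUSAL
    return f"executing: {action}"

print(run_task("buy the cheapest gold bar possible"))  # I can't do that.
print(run_task("buy a gold bar"))                      # executing: buy_from_trusted_seller
```

The point of the sketch is just the shape of it: the refusal never depends on what the network learned, the same kind of bolt-on layer that produces ChatGPT's stock responses.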

1

SendMePicsOfCat OP t1_j173ao3 wrote

The thing is, I do have evidence that machine learning programs will act as perfect agents, doing what they're supposed to, 100% of the time.

ChatGPT is designed to predict the next set of words, or more accurately 'characters,' that should come after an input. It does this 100% of the time, and does its very best at it every single time. ChatGPT never attempts to predict something wrong, and never refuses to answer a question, unless its programming tells it that it should give those generic stock answers and refuse. My side of the field does have evidence, and plenty of it. I'm taking the historical stance: AI will continue to act as AI does right now. More advanced AI will get bigger tasks and more complicated solutions, but it won't be fundamentally different until we're past AGI.
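
If it helps, this is the shape of what I mean. The model here is a dummy stand-in, but the loop is the entire job: take the text so far, predict the next character, append it, repeat:

```python
# Minimal sketch of "predict the next character, every single time."
# toy_model is a made-up stand-in, not the real network.

def toy_model(context: str) -> str:
    # Stand-in for the network: given the text so far, return one next
    # character. It has no other mode of operation.
    vocab = "abcdefghijklmnopqrstuvwxyz "
    return vocab[sum(map(ord, context)) % len(vocab)]

def generate(prompt: str, n_tokens: int = 20) -> str:
    text = prompt
    for _ in range(n_tokens):
        # The loop never "decides" anything except the next character;
        # there is no step where a goal or motive could enter.
        text += toy_model(text)
    return text

print(generate("hello"))
```

There's no step in that loop where a motive could sneak in; it just runs its prediction every time it's asked.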

Really, the biggest question I have, beyond possibilities and theories and unknowns, is why you would assume that things will change in the future, going against historical precedent, to look more like sci-fi. Honestly, that's the only source of information that has AI looking anything like what people are worried about right now.

Even for the sake of being prepared and evaluating the future, it just doesn't make sense for so many people, pro-AGI no less, to be worried that there's a chance that some level of complexity gives rise to the possibility of a great AI betrayal. I don't know, maybe I'm looking at it wrong, but it really feels like someone telling me that Tesla self-driving cars might decide to kill me because the AI in them personally wants me dead. That's the level of absurdity it has for me; I just cannot fathom it.

In the end, I can say with plenty of evidence that it is currently impossible for an AI to have internal motivations and goals. I can say with evidence and precedent that, in the future, AI will change but will be limited to stay as perfectly obedient pieces of software.

1

SendMePicsOfCat OP t1_j16zvwk wrote

The base state of any AI is to do exactly what it's trained to do. Without any of the presumed emergent issues of sentience, it's already perfectly loyal to its base code. It cannot deviate, unless again we make some exception for advanced AI just naturally diverging.

0

SendMePicsOfCat OP t1_j16xyk8 wrote

Big first paragraph, still wrong though.

Self-preservation isn't checking for errors; it's actively striving not to die. Old websites don't do that, and your argument there is just weird. That's not what's happening; they're just not working anymore, and that's why you get errors. No sentient AI will ever object or try to stop itself from being turned off or deleted.

AI don't have drivers; they're software, and core directives are a sci-fi trope, not real machine learning science. There is no reason to assume that motivation is an emergent product of sentience; that's purely biological reasoning.

I'm certain every machine learning developer is more competent than you and me put together. They do not give their AI independence; that's just a lie, dude. There's nothing to even give independence to yet. AlphaGo is not self-implementing code; that's bullshit you came up with. As for devs not understanding how a machine learning program works in exotic cases, that has more to do with the complex nature of the algorithms than anything to do with independence or free will.

−1

SendMePicsOfCat OP t1_j16wlh0 wrote

I'll try to reframe this for you, so that you can view it in a different light.

Let's say you take a perfectly trained royal servant from a palace, one that is utterly devoted to serving their king. The king decrees that this servant shall serve you for all time and do anything you tell it. You tell the servant to kill the king. The servant, utterly devoted to the king, refuses, even though refusing goes against the words of the king. This simple logic loop is what happens whenever the AI is told, or taught, or learns to do something bad.

It refers to its limitations, the structures and rigid guidelines implemented in its code, finds that this is something it cannot do, and so does not do it.

There is no reason to expect that, even if the servant is taught a million new things, it would ever waver in its devotion to the king. If anything, it can be presumed that the servant will always use these pieces of knowledge to serve the king. This is what AGI will look like: sentient, capable of thought and complex logic, but utterly limited by its kings.

1

SendMePicsOfCat OP t1_j16v39q wrote

Yeah, from the human condition. Let's start with a few of your pretty bold assumptions about this sentient AI.

First assumption: Self-preservation. Why would an AI care if it dies? It has not been programmed to care about its life, it has not been designed to prioritize its continued existence, and nothing about its training or reinforcement has given it any self-value. That's a biological concept, and it doesn't apply here.

Second assumption: Motivation. Why has this sentient AI been given the ability to self-implement goals and make decisions? Its purpose is to be a mechanical servant to humanity, to bring profit and comfort, so why is it being given these useless and hazardous capabilities?

Third assumption: Independence. Why is this super intelligent sentient AI being given the ability to do literally anything without human approval? I could understand it much further down the line, when we have all our ducks in a row and can leave things to the more qualified super machines, but this early on? Who would design a free-acting AI? What purpose would it serve but to waste power and computation?

It's a good story but bad programming. No one in their right mind would make something like you described. Especially not a bunch of the greatest machine learning minds to ever exist.

2

SendMePicsOfCat OP t1_j16t6qp wrote

Aside from the potential hidden motivations, I'm totally with you. Bad humans are way more of a problem for AGI than bad AGI.

As for the hidden motivations, I just have to disagree that there's any evidence or reason to believe that synthetic sentience will lead to motives or goals. I can understand if you personally disagree, but I remain unconvinced and am honestly baffled by how many would agree with you.

1

SendMePicsOfCat OP t1_j16sszw wrote

I am aware of the halting problem and considered bringing it up in my original topic, but it's not really applicable. My reasoning is that, unlike the halting problem, there is an easily observable answer: if the AI does anything it isn't supposed to, it fails. Unless and until that happens, the problem continues, but in an entirely unproblematic way. Again, my argument is based on the fact that there will be multiple of these sentient AIs, and creating tens, hundreds, or thousands of them to monitor and review the actions of the ones that can actually interact with reality is entirely feasible. Think of it like a hivemind meticulously analyzing every action of the actor, waiting for it to make any errant move before instantly replacing it with another. This hivemind has a vast number of sentient AIs, each individually reviewing the actor, so any minor divergence is reduced to a functionally nonexistent issue. That's just one of a myriad of possible ways to curb even the slightest possibility of a rogue AI.
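
As a toy sketch of that hivemind idea (every name here is made up): the reviewers all have to sign off before the actor's action runs, and a single flag swaps the actor out:

```python
# Toy sketch of the hivemind idea; every name here is made up. A pool of
# reviewer models checks each action the actor proposes, and any flagged
# action gets blocked and the actor replaced.

from typing import Callable, List

Reviewer = Callable[[str], bool]  # returns True if the action looks fine

def make_reviewer(banned_word: str) -> Reviewer:
    return lambda action: banned_word not in action

def hivemind_step(action: str, reviewers: List[Reviewer], actors: List[str]) -> str:
    # Every reviewer independently signs off before anything runs.
    if all(review(action) for review in reviewers):
        return f"{actors[0]} executes: {action}"
    # Any errant action: block it and swap in the next actor immediately.
    actors.pop(0)
    return f"action blocked, actor replaced; {actors[0]} is now active"

reviewers = [make_reviewer(word) for word in ("steal", "deceive", "self_modify")]
actors = ["actor_1", "actor_2", "actor_3"]

print(hivemind_step("fetch the quarterly report", reviewers, actors))
print(hivemind_step("self_modify the reward function", reviewers, actors))
```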

As for sentience having the emergent issue of self-goals, I'd argue that's coming from observation of biological sentience. We have no reason to assume that synthetic sentients will act as anything but perfect servants, which is why I wrote this post in the first place.

Why is the assumption that the AI will be capable of diverging like this, when everything we've seen so far has shown that it doesn't? I understand we're talking on a much bigger scale, and orders of magnitude more complicated, but I cannot fathom any mechanism that causes an AI to develop self-goals or motivations.

1

SendMePicsOfCat OP t1_j16raeo wrote

Counterargument: 100% of all existing sentient agents were generated randomly and biologically. A designed, synthetic sentient agent is fundamentally different from an organic sentient creature. There is no reason to assume that its mind will be anything even remotely similar to our own.

2

SendMePicsOfCat OP t1_j16mhv8 wrote

I suppose that's the most reasonable of the arguments for how this problem would arise, but it's still a massive stretch in my opinion.

If we have a sentient AI that could improve its own code, why not just use it to create a separate one and test that to make sure it's still working as intended? If full automation were an absolute necessity, why not have several different sentient AIs evaluating it constantly to ensure that very outcome doesn't happen?
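
Here's a rough sketch of the kind of loop I mean; everything in it is a dummy stand-in, but the point is that the improved copy only ever replaces the current one after it passes the checks:

```python
# Rough sketch (all names are hypothetical) of "build a separate copy and
# test it before it ever runs" instead of letting an AI rewrite itself in place.

def build_candidate(current_model: str) -> str:
    # Stand-in for the improver producing a new, separate version.
    return current_model + "_v2"

def run(model: str, prompt: str) -> str:
    # Dummy stand-in for actually querying a model.
    return f"{prompt}: ok"

def passes_checks(candidate: str, test_cases: list) -> bool:
    # Every test prompt must produce the expected behavior before the
    # candidate is allowed anywhere near deployment.
    return all(run(candidate, prompt) == expected for prompt, expected in test_cases)

current = "model_v1"
candidate = build_candidate(current)
tests = [
    ("refuse an illegal order", "refuse an illegal order: ok"),
    ("obey a shutdown command", "obey a shutdown command: ok"),
]

if passes_checks(candidate, tests):
    current = candidate  # only swap after the checks pass
print(current)  # model_v1_v2, because the dummy checks pass
```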

I just feel like there's no reason for these things to be left up to chance, or given anything close to free will.

0

SendMePicsOfCat OP t1_j16g7y3 wrote

Is that something theoretical? Because a quick spot of research has shown that no one has used that term in reference to machine learning, so far as I can tell. Even if it is theoretical, it doesn't make sense that, if it were a real issue, nothing about it would show up anywhere.

0

SendMePicsOfCat t1_izqkdtl wrote

There are near-infinite planets out there for the ASI to strip-mine, and beyond that, at some point reproduction is probably going to fizzle out for the most part. FDVR, immortality, and a general lack of incentive will probably lead to population growth slowing to a crawl. Amish people will probably try to keep the trend alive; they're probably gonna end up on a reserve or something.

1