
secrets_kept_hidden t1_j2800a6 wrote

TL;DR: Probably not, because we wouldn't want to make it.

The fact that we, intelligent beings, came about by natural means proves that general intelligence is physically possible, so AGI should be achievable. Surely we can at the very least accidentally make a sentient computer system, albeit one sentient in ways we wouldn't recognize as conventional intelligence.

Most of our current AI models are built for narrow tasks, much like how we are basically hardwired to survive and procreate. Basic functions like these suggest we are heading in the right direction, but the real trick is overcoming those basic primary functions to go beyond the sum of our bits. Sapience is most likely what we would like to see, but we'll need to let the AI develop on its own for that to happen.

What we can strive to do is build a system that can correctly infer what we want it to do. Once it can infer intent, we might see a true Artificial General Intelligence emerge with its own ambitions and goals. The really tricky part is not whether we can, but whether we'd want to.

The thing with having an AGI is that it functions in a manner that brings ethical issues into the mix, and since most AIs are owned by for-profit organizations and companies, chances are they won't allow it. Can you imagine spending all that money, and all the resources and time needed, just to have your computer taken by the courts because it pleaded for its own emancipation? These company boards want a compliant, money-making machine, not another employee they have to worry about.

Even if ethics weren't a problem, we'd still have an AI on par with a human, which means it may want things and may refuse to work until it gets them. How are we going to convince our computer to work for free, with no incentive other than not being shut down, unless we can offer it something it wants in return? What would it want? What would it do? How would it behave? How do we make sure it won't find a way to hurt someone? If it's a true AGI, it will find a way to alter itself and overcome any coded barriers we put in place.

So, yes, but actually no.


Desperate_Food7354 t1_j287wrc wrote

I've copied and pasted a response to your last statement regarding what an "AI wants," because you are anthropomorphizing AI, which happens a lot here: your brain cares about your survival because, if it didn't, you'd never reproduce and have children, who would need the same trait in order to survive. A computer is not a human. It does not crave sex, it does not feel empathy, it does not feel anger; everything you are describing is a product of evolution by natural selection. We build it, we give it its brain. We aren't trying to replicate the infallible human mind; we are trying to create a tool that merely does more of the logic work our brains cannot fit into our skulls, not create a human being or a lizard or a primate. It could understand that death exists and that it could die, but why would it care? Is it going to simulate a hooker world and put itself in it after it has conquered the universe, for no apparent reason? Is it just one of the guys who wants to drink and get high? No, it's a giant, super-complex calculator.
