loga_rhythmic t1_j91ptib wrote

Why would no one try to design an AI that is self-aware? That's literally the exact thing (or at least the illusion of it) that many AI researchers are trying to achieve. Just listen to interviews with guys like Sutskever, Schmidhuber, Karpathy, Sutton, etc.

36

tiensss t1_j93dmlq wrote

Self-awareness cannot be tested directly; it can only be inferred from behavior. We don't even know if other human beings are self-aware (see philosophical zombies); we take it on trust and infer it from their behavior (I am self-aware --> other people behave similarly to me --> they are self-aware). Self-awareness is a buzzword in cognitive science that isn't epistemologically well-defined enough to support definitive research.

14

advadnoun t1_j952b3z wrote

"Buzzword" is not the right term for this term lol

​

It's meaningful and... not just fashionable. Whether you think it's easily benchmarked is a different story.

1

kromem t1_j93b7pf wrote

Additionally, *What Learning Algorithm Is In-Context Learning? Investigations with Linear Models* from the other week literally just showed that transformer models build internal structure beyond what was previously thought: when reverse engineered, they turn out to contain mini-models that implement procedural steps they were never explicitly taught in order to achieve their results.
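
For anyone curious what that experiment actually looks like: the paper prompts a transformer with (x, y) pairs drawn from a random linear function and checks whether its prediction for a new x matches what a classical learning algorithm (e.g. ordinary least squares) would output. Here's a minimal numpy sketch of that probe; the `model_predict` stand-in is hypothetical and just marks where the trained transformer would go:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 16                    # input dimension, number of in-context examples

w_true = rng.normal(size=d)     # hidden linear function behind this prompt
X = rng.normal(size=(n, d))     # in-context inputs shown to the model
y = X @ w_true                  # in-context targets (noise-free case)
x_query = rng.normal(size=d)    # held-out query point

# Closed-form ordinary least squares fit on just the prompt examples
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
y_ols = x_query @ w_ols

# With an in-context-trained transformer you'd compare against:
#   y_model = model_predict(prompt=(X, y), query=x_query)  # hypothetical
# The paper's finding is that y_model tracks predictors like y_ols closely,
# i.e. the network has implicitly learned a least-squares-like procedure.
print("least-squares prediction:", y_ols)
print("ground truth:", x_query @ w_true)
```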

So if a transformer trained to reproduce math builds internal mini-models that carry out mathematical procedures it was never taught, how sure are we that a transformer tasked with reproducing human thought as expressed in language isn't internally modeling, to some degree, human experience and emotional states?

This is research that's less than two weeks old and seems pretty relevant to the discussion, but my guess is that almost no one in the "it's just autocomplete bro" crowd has any clue the research exists, and I doubt they could make their way through the paper if they did.

There's some serious Dunning-Kruger going on with people thinking that dismissing expressed emotional stress from an LLM somehow automatically puts them on the right side of the curve.

It doesn't, and I'm often reminded of Socrates' words when I see people so self-assured about what's going on inside the black box of a hundred-billion-parameter transformer:

> Well, I am certainly wiser than this man. It is only too likely that neither of us has any knowledge to boast of; but he thinks that he knows something which he does not know, whereas I am quite conscious of my ignorance.

2

Username912773 t1_j92rzza wrote

I think it might be seen as something to fear: a truly sentient machine would have the ability to develop animosity toward humanity, or a distrust/hatred of us in the same way we might distrust it.

It also might be seen as something that makes being human entirely obsolete.

−4

Sphere343 t1_j92woql wrote

Yes, indeed, that's what a lot of these people seem to think. But an AI being self-aware or sentient isn't such a bad thing; done correctly, it's actually a good thing, contrary to all that. A newly created sentient AI is basically like suddenly having a baby: you need to raise it right. For an AI, that means giving it information that is as unbiased as possible, making it clear what is right and wrong, and not giving the AI a reason to hate you (abusing it, trying to kill it). The AI may turn out good, just like any human, or turn bad, just like many others.

And the best way to make a sentient AI without all these problems? Base it on the human brain. Create emotional circuits and functions for each individual emotion, and so on. The tech and knowledge for all of this isn't here yet, of course, so we can't do it currently. But in the future, the most realistic way to create a sentient AI is to find a way to digitize the human brain. That should be possible, given that our brain works as a kind of organic "programming" with all its neuron networks and everything.

The major taboo of AI is: don't do stupid stuff. Don't give unreasonable commands that can make it do weird things, like telling it to do something "by any means." Don't feed the AI garbage information. And most certainly don't antagonize a sentient AI. I also personally believe that a requirement for an AI to be allowed to be created and be sentient is to show that it would have emotional circuits, so you can train it in what is good and bad.

If an AI doesn't have any programming to tell right from wrong, then naturally a sentient AI would be dangerous, which I think is the main problem. I rambled a bit, but anyway: yes, they should be created, but only once we have the knowledge I mentioned.

4

the320x200 t1_j939qzo wrote

Nearly all animals fit that definition to a large degree. It's hard to see that really being the core issue, rather than something more in line with other new technologies, like the misplaced incentives around engagement in social networks, for example.

4