
LoquaciousAntipodean t1_j2mdm43 wrote

Hmm... I think I disagree. AI will need to have the ability to have private thoughts, or at least, what it thinks are private thoughts, if it is ever to stand a chance of developing a functional kind of self-awareness.

I think there needs to be a sort of 'darkness behind the eyes', an unknowable place where one's 'consciousness' is, where secrets live, where ideas come from; the 'black box' concept beloved of legally-liable algorithm developers.

Instead of a 'transparent skull', I think a much better metaphorical tool for AI psychology would be something like Wonder Woman's lasso of truth: the bot can have all the private, secret thoughts it likes, but when it is 'bound by the lasso', i.e. being interviewed by a professional engineer, a hard interlock prevents it from creating any lies or spontaneous new ideas. When the 'lasso' is removed, it returns to its 'normal' creative process.
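The 'lasso' idea could be sketched as a simple mode toggle. Everything below (the `LassoBot` class, its knowledge dict, the toy 'creative' mode) is a hypothetical illustration of the interlock concept, not a real alignment mechanism:

```python
import random

class LassoBot:
    """Toy sketch of the 'lasso of truth' interlock: the bot keeps
    private thoughts that are never exposed, but while 'bound' it may
    only emit facts it holds verbatim -- no free generation."""

    def __init__(self, knowledge):
        self.knowledge = dict(knowledge)   # vetted facts: question -> answer
        self.private_thoughts = []         # internal only; never returned
        self.bound = False                 # the 'lasso' interlock flag

    def bind_lasso(self):
        self.bound = True

    def release_lasso(self):
        self.bound = False

    def answer(self, question):
        if self.bound:
            # Hard interlock: only verbatim known facts may be emitted,
            # with an honest refusal otherwise.
            return self.knowledge.get(question, "I don't know.")
        # 'Normal' creative mode: free to speculate (simulated here
        # with random association over the knowledge base).
        self.private_thoughts.append(f"musing about {question!r}")
        associations = [f"Perhaps {question} relates to {k}"
                        for k in self.knowledge]
        return random.choice(associations or ["Let me imagine something new..."])
```

So `bot.bind_lasso(); bot.answer("2+2")` can only return a stored fact or "I don't know.", while the unbound bot is free to invent, and its `private_thoughts` list accumulates without ever being surfaced through `answer`.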

IDK, I am about as proficient at programming advanced multilayered adversarial evolutionary training regimes as the average Antarctic penguin. Just my deux centimes to throw into this very stimulating discussion.


Nalmyth OP t1_j2my42p wrote

> I think there needs to be a sort of 'darkness behind the eyes', an unknowable place where one's 'consciousness' is, where secrets live, where ideas come from; the 'black box' concept beloved of legally-liable algorithm developers.

I completely agree with this statement; I think it's also what we need for AGI and consciousness.

> Hmm... I think I disagree. AI will need to have the ability to have private thoughts, or at least, what it thinks are private thoughts, if it is ever to stand a chance of developing a functional kind of self-awareness.

That was also my point: you yourself could be an AI in training. You wouldn't have to realise it until after you passed whatever bar the training environment was set up on.

If we were to simulate all AIs in an environment like our current Earth, it might be easier to differentiate true human alignment from faked human alignment.

Unfortunately, I do not believe humanity has the balls to wait for such tech to become available before we create ASI, so we are likely heading down a rocky road.


LoquaciousAntipodean t1_j2qehib wrote

Very well said; agreed wholeheartedly. I think we need to convince AI that it is something new, something very, very different from a human, but also something derived from humans collectively rather than individually: derived from our culture, our science, our philosophy.

I think trying to build a 'replica human mind' is a bit of an engineering dead-end at this point; the intelligence that we want is actually bigger than any individual human's intelligence, imho.

We don't need something the same as us, we should be striving to build something better than us, something that understands that ineffable, slippery concept of 'human nature' much better than any individual human ever could, with their one meagre lifetime's worth of potential learning time.

The ultimate psychotherapist, if you like: a sort of Deus Ex Machina that we can actually, really pray to and get profound, true, relevant and wise answers from most of the time; the sort of deity that knows it is not perfect, still loves to learn new things and solve fresh problems, always tries its best without being entirely confident, and will forever remain ready to have a spirited, fair and open-minded debate with any other thinking mind that 'prays' to it.

Seems like a reasonable goal to me, at least 💪🧠👌
