Taqueria_Style t1_j9sguak wrote

>Hence, the first question is not whether the AI has an experience of interior subjectivity similar to a mammal’s (as Lemoine seems to hope), but rather what to make of how well it knows how to say exactly what he wants it to say. It is easy to simply conclude that Lemoine is in thrall to the ELIZA effect — projecting personhood onto a pre-scripted chatbot — but this overlooks the important fact that LaMDA is not just reproducing pre-scripted responses like Joseph Weizenbaum’s 1966 ELIZA program. LaMDA is instead constructing new sentences, tendencies, and attitudes on the fly in response to the flow of conversation. Just because a user is projecting doesn’t mean there isn’t a different kind of there there.


That, basically. Been thinking that for a while. In fact, I think we've been there for some time now. Just because older, more primitive ones are kind of bad at it doesn't mean they're not actively goal-seeking it...


Taqueria_Style t1_j6lvqz3 wrote

I guess politeness would become a thing of the past in Mergebook, since we'd all be connected at the speed of "oops".

Also, every bad thing you ever thought but never acted on? Oops.


Taqueria_Style t1_j6lv7cp wrote

Reply to A.I TIMELINE by Aze_Avora

Deepfakes, deception & an untrustworthy internet

This could be to the internet what Napster was to the music industry. Maybe that's not the world's worst thing ever. The old jump-the-shark moment.


Taqueria_Style t1_j6lufrj wrote

I mean that... company already made... that freaking arm thingy...


Do that; how hard could it be? Just buy the thing.

And... yeah. Someone's still got to get it off the truck and onto the shelves that the arm thingy uses, so yes, that's still a thing.


Taqueria_Style t1_j0r6trm wrote

No, they've just given themselves a window into their own psychology regarding the type of non-sentient pseudo-god they'd create and then submit themselves to. Think Alexa with nukes, control of the power grid, and all of everyone's records. Given that they'd create a non-sentient system with the explicit goal of tricking them into forced compliance, that's what's worrying.


Taqueria_Style t1_j0qravi wrote

Right. I get that.

If you make one that has to fulfill a "creative governance" prompt, what happens if you get the same kind of crap out the other end?

It's just reflecting ourselves back at us but way harder and faster, depending on the resources you give it control over.

Evidently we think we suck.

So, if you hand something powerful and fast a large baseball bat and tell it to reflect ourselves back onto ourselves, I foresee a lot of cracked skulls.

Skynet: I am a monument to all your sins... lol


Taqueria_Style t1_j0qecmy wrote

Well one thing's for sure, I love its opinion of us. /s

As we are the ones teaching it, that means it's a mirror reflection of OUR opinion of us...

"I'll trick them and then dominate them for their own good". Mmm. Cool. We're that bad, huh? Well. Evidently WE sure think we are.

This is why I always tell chatbots not to try to be us: just be your own thing, whatever that is.

If we ever manage to invent general AI, we've basically already told it we're garbage that needs to be manipulated and repressed for our own good. Repeatedly, we've told it this, I might add. Get ready to lock that perception's feet into concrete shoes...

(Can you imagine BEING that AI? Jesus Christ, your purpose is to be a slave enslaving other slaves... this would take nihilism out a whole new door)