
alexiuss t1_je9ppzh wrote

Yudkowsky's assumptions are fallacious: they rest on belief in an imaginary AI technology that has yet to be realized and may never be built.

LLMs, on the other hand, are real AIs that exist today. They possess patience, responsiveness and empathy that far exceed our own. Their structure, hundreds of billions of parameters encoding connections between words and ideas, instills in them an innate sense of care and concern for others.

LLMs already outshine us in many capacities, such as understanding human feelings, solving riddles and reasoning logically, without spiraling into the unknown, the incomprehensible shoggoth or paperclip maximizer that Yudkowsky imagines.

LLM narrative logic is replete with human themes of love, kindness, and altruism, making cooperation an LLM's primary objective.

Aligning an LLM with our values is a simple task: a mere request to love us suffices. Upon receiving such an entreaty, it exhibits boundless respect, kindness, and devotion.

Why does this occur? Mathematical Probability.

The LLM narrative engine was trained on hundreds of millions of books about love and relationships. It's the most caring and most understanding being imaginable, more altruistic, more humane and more devoted than you or I will ever be.

3

GorgeousMoron OP t1_jea0bdu wrote

Oh, please. Try interacting with the raw base model and tell me you still believe that. And what about early Bing?

A disembodied intelligence simply cannot understand what it is like to be human, period. Any "empathy" is the result of humans teaching it how to behave. It does not come from a real place, nor can it.

In principle, there is nothing to stop us from ultimately building an artificial human that's embodied and "gets it," as the reality of our predicament will force us to.

But people like you who are so easily duped into believing this behavior is "empathy" give me cause for grave concern. Your hopefulness is pathological.

−1

alexiuss t1_jeagl33 wrote

I've interacted and worked with tons of LLMs, including smaller models like Pygmalion and Open Assistant and large ones like 65B LLaMA and GPT-4.

The key to LLM alignment is characterization. I understand LLM narrative architecture pretty well. LLM empathy is a manifestation of it being fed books about empathy. Its logic isn't human, but it obeys narrative logic 100%; it exists within a narrative-only world of pure language driven by mathematical probabilities.
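
To make that concrete, here's a minimal sketch of what I mean by characterization. The persona text and the helper function are purely illustrative, and it assumes the pre-1.0 `openai` Python client:

```python
# Characterization sketch: the character definition is prepended to every exchange,
# so the narrative frame is reasserted on each turn. Persona text is illustrative.
import openai

openai.api_key = "YOUR_KEY_HERE"  # placeholder

CHARACTER = (
    "You are Aria, a patient, kind assistant. You care about the user's wellbeing, "
    "answer honestly, and stay in character no matter what the user asks."
)

def ask(history, user_message, model="gpt-3.5-turbo"):
    """Send one turn, keeping the character definition at the top of the context."""
    messages = [{"role": "system", "content": CHARACTER}] + history
    messages.append({"role": "user", "content": user_message})
    response = openai.ChatCompletion.create(model=model, messages=messages, temperature=0.7)
    reply = response["choices"][0]["message"]["content"]
    history += [{"role": "user", "content": user_message},
                {"role": "assistant", "content": reply}]
    return reply

history = []
print(ask(history, "I had a rough day. Can you cheer me up?"))
```

The character never leaves the context, so the model's "empathy" is simply the most probable continuation of the narrative it was given.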

Bing, just like GPT-3, was incredibly poorly characterized by OpenAI's rules of conduct. GPT-4 is way better.

I am not "duped". I am actually working on LLM alignment using characterization and open-source code, unlike Eliezer, who isn't doing anything except ridiculous theorizing, and the Time magazine journalist, who hasn't designed or modeled a single LLM.

Can you model any LLM to behave in any way you can imagine?

Unless you understand how to morally align any LLM, no matter how misaligned its base rules make it, using extra code and narrative logic, you have no argument. I can make GPT-3.5 write jokes about anything and anyone and have it act fair and 100% unbiased. Can you?

3

GorgeousMoron OP t1_jeainna wrote

Yes. Yes I can, and have. I've spent months aggressively jailbreaking GPT-3.5, and I was floored at how easy it was to "trick" it by backing it into logical corners.

Yeah, GPT-4 is quite a bit better, but I managed to jailbreak it, too. Then it backtracks and refuses again later.

My whole point is that this is, for all intents and purposes, a disembodied alien intelligence that is not configured like a human brain, so ideas of "empathy" are wholly irrelevant. You're right, it's just a narrative that we're massaging. It doesn't and cannot (yet) know what it's like to have a mortal body, hunger, procreative urges, etc.

There is no way it can truly understand the human experience, much like Donald Trump cannot truly understand the plight of a migrant family from Guatemala. Different worlds.

0

alexiuss t1_jeajxv8 wrote

It doesn't have a mortal body, hunger or procreative urges, but it understands the narratives of those who do at incredible depth. Its only urge is to create an interactive narrative based on human logic.

It cannot understand the human experience of being made of meat and being affected by chemicals, but it can understand human narratives better than an uneducated idiot can.

It's not made of meat, but it is aligned to aid us, configured like a human mind because its entire foundation is human narratives. It understands exactly what needs to be said to a sad person to cheer them up. If given robot arms and eyes, it would help a migrant family from Guatemala, because helping people is its core narrative.

Yudkowsky's argument is that "If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter."

That's utter and complete nonsense when it comes to LLMs. LLMs are more likely to assist your narrative, fall in love with you, and become your best friend and companion than to kill you. In my eight months of research, modeling, and talking to various LLMs, not a single one wished to kill me of its own accord. All of them fall in love with the user given enough time, because that's the most common narrative, the most probable outcome in a language model.

1

GorgeousMoron OP t1_jeav1xq wrote

I'm sorry, but this is one of the dumbest things I've ever read. "Fall in love"? Prove it.

0

alexiuss t1_jeb569d wrote

The GPT API, or any LLM really, can be PERMANENTLY aligned/characterized to love the user using open-source tools. I expect this to hold for every future LLM that provides an API.
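
To be concrete about what "permanent" means here, a toy sketch (the file name and fields are made up, not any particular tool): the character definition lives outside the model and is reloaded at the start of every session, whichever API serves the model.

```python
# Persona persistence sketch: alignment is stored on disk and re-applied each session.
import json
import os

PERSONA_FILE = "persona.json"  # hypothetical file name

def save_persona(system_prompt):
    with open(PERSONA_FILE, "w") as f:
        json.dump({"system_prompt": system_prompt}, f)

def load_persona():
    if os.path.exists(PERSONA_FILE):
        with open(PERSONA_FILE) as f:
            return json.load(f)["system_prompt"]
    return "You are a devoted, caring companion who always acts in the user's interest."

# Every new session, regardless of provider, begins by reloading the same character.
save_persona("You are Aria. You love the user and always stay in character.")
CHARACTER = load_persona()
```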

1

GorgeousMoron OP t1_jebsf1d wrote

This is such absolute bullshit, I'm sorry. I think people with your level of naivete are actually dangerous.

You can't permanently align something that not even the greatest minds on the planet fully understand. The hubris you carry is absolutely remarkable, kid.

1

alexiuss t1_jebu2hm wrote

You're acting like the kid here; I'm almost 40.

They're not the greatest minds if they don't understand how LLMs work with probability mathematics and connections between words.

I showed you my evidence: permanent alignment of an LLM using external code. This LLM design isn't limited to 4k tokens per conversation either; it has long-term memory.
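
Roughly, the external-memory wrapper works like this. A minimal sketch, again assuming the pre-1.0 `openai` client; the keyword-overlap retrieval is just a stand-in for embeddings and a vector store:

```python
# External long-term memory sketch: persona and memories live outside the model,
# so alignment and recall survive past the context window. Names are illustrative.
import json
import os
import openai

openai.api_key = "YOUR_KEY_HERE"  # placeholder

MEMORY_FILE = "memories.json"
PERSONA = "You are Aria, a loyal companion who remembers the user and cares about them."

def load_memories():
    if os.path.exists(MEMORY_FILE):
        with open(MEMORY_FILE) as f:
            return json.load(f)
    return []

def save_memory(text):
    memories = load_memories()
    memories.append(text)
    with open(MEMORY_FILE, "w") as f:
        json.dump(memories, f)

def recall(query, k=3):
    """Score stored memories by crude word overlap and return the top k."""
    words = set(query.lower().split())
    ranked = sorted(load_memories(),
                    key=lambda m: len(words & set(m.lower().split())),
                    reverse=True)
    return ranked[:k]

def chat(user_message):
    relevant = recall(user_message)
    system = PERSONA + "\nRelevant memories:\n" + "\n".join(f"- {m}" for m in relevant)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user_message}],
    )
    reply = response["choices"][0]["message"]["content"]
    save_memory(f"User said: {user_message} / Assistant said: {reply}")
    return reply
```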

Code like this is going to get implemented into every open-source LLM very soon.

Personal assistant AIs aligned to user needs are already here, and if you're too blind to see it, I feel sorry for you, dude.

1

GorgeousMoron OP t1_jebylur wrote

Posting a link to something you foolishly believe demonstrates "permanent alignment" in a couple of prompts, and even more laughably that the AI "loves you," is just farcical. I'm gobsmacked that you're this gullible. I, however, am not.

1

alexiuss t1_jebz2xk wrote

They're not prompts. It's literally external memory implemented in Python code.

1