
play_yr_part t1_ja8b98b wrote

Sydney was (before the recent nerf) already hugely misaligned, and that's today. Billions of dollars are being poured into LLMs and other models while even the programmers themselves cannot explain why these chatbots reach the conclusions they do. And it's not so much about the AI "wanting to destroy us"; it could destroy us without having any negative emotions toward us whatsoever.

It's certainly something to think about, even if not something to completely change your life over. I don't mind if people don't think it's going to be an issue; live your life. But there are people who have studied it extensively and know their "jack shit" who think it's very plausible.

1

DadSnare t1_ja8ibe5 wrote

That’s fine, but even in your post I’m seeing some easy-to-make claims that have no solid basis. Are you sure that the programmers cannot explain why a chatbot errors out? Really? Also, who said anything about the emotional state of an AI? That’s hardly even possible, because it doesn’t have an endocrine system. We may have strong emotions the way we do to help with memory formation and retrieval as much as anything else; that’s not a problem for a machine. What’s a plausible way we get destroyed? Does AI own the corporations too? How do I lose power, internet, food, etc.? The nuclear Terminator scenario seems impossible unless we’re going to talk about hacking brains and adjusting behavior like crazy people think is possible.

1