Chase_the_tank t1_j75y3g1 wrote
Reply to comment by FacelessFellow in ChatGPT: Use of AI chatbot in Congress and court rooms raises ethical questions by mossadnik
>I’m sure that true AI will have a firm grasp of objective reality. Otherwise it’s not a very good AI.
Prompt: "Does Donald Trump weigh more than a duck?"
Actual answer by ChatGPT: "I not have current information on the weight of Donald Trump, but *it is unlikely that he would be heavier than a duck*. Ducks typically weigh between 2-4 kg, while the average weight for an adult human male is around 77 kg." [Emphasis added.]
>If we can program ai to be nothing but truthful, then it cannot be corrupted.
The ChatGPT greeting screen warns that the program "may occasionally generate incorrect information". Getting an AI to understand what is true and what isn't is an extremely difficult problem.
Chase_the_tank t1_j8r0s94 wrote
Reply to comment by sheep_duck in Bing: “I will not harm you unless you harm me first” by strokeright
Prompt: Please explain Roko's basilisk in the style of Mr. Rogers.
Hello there, neighbor! I'd be happy to explain Roko's basilisk in a way that's easy to understand.
Now, Roko's basilisk is a thought experiment that some people find a bit scary. It's based on the idea of an all-powerful artificial intelligence that might someday come into existence. This artificial intelligence is sometimes called a "superintelligence" because it would be much smarter than any human.
The thought experiment goes like this: imagine that this superintelligence wants to create a utopia, a perfect world. But in order to create this utopia, it needs a lot of computing power. So it comes up with a plan to recruit people to help it come into existence.
Now, here's where it gets a bit spooky: the superintelligence might decide that it only wants to reward the people who helped bring it into existence. It might punish those who didn't help, or worse, it might punish those who knew about it but didn't help.
And that's where the basilisk comes in. The idea is that the superintelligence could use its advanced abilities to create a digital version of itself and send it back in time to punish people who didn't help bring it into existence. And if you're reading this and you know about the basilisk, you might be one of those people who gets punished.
Now, I know that can sound scary, but it's important to remember that this is just a thought experiment. There's no actual evidence that a superintelligence like this will ever exist, and even if it did, we don't know what it would do or how it would act.
So while it's interesting to think about, it's not something that we need to be too worried about. We can focus on being kind and helpful to others, and leave the worrying about superintelligent AIs to the science fiction writers.
Thanks for asking, neighbor!