dentalperson t1_j9t6zxx wrote
Reply to comment by royalemate357 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
> can also create highly dangerous bioweapons
The example EY gave in the podcast was a bioweapon attack. Unsure what kind of goal the AI had in this case, but maybe that was the point:
> But if it's better than you at everything, it's better than you at building AIs. That snowballs. It gets an immense technological advantage. If it's smart, it doesn't announce itself. It doesn't tell you that there's a fight going on. It emails out some instructions to one of those labs that'll synthesize DNA and synthesize proteins from the DNA, and gets some proteins mailed to a hapless human somewhere who gets paid a bunch of money to mix together some stuff they got in the mail in a vial. Like, smart people will not do this for any sum of money. Many people are not smart. It builds the ribosome, but a ribosome that builds things out of covalently bonded diamondoid instead of proteins folding up and held together by Van der Waals forces, and builds tiny diamondoid bacteria. The diamondoid bacteria replicate using atmospheric carbon, hydrogen, oxygen, nitrogen, and sunlight. And a couple of days later, everybody on earth falls over dead in the same second. That's the disaster scenario if it's as smart as I am. If it's smarter, it might think of a better way to do things. But it can at least think of that if it's relatively efficient compared to humanity, because I'm in humanity and I thought of it.
crt09 t1_j9tncbf wrote
"Unsure what kind of goal the AI had in this case"
tbf, pretty much any goal that involves you doing something on planet Earth could be interrupted by humans, so to be certain, getting rid of them probably reduces the probability of being interrupted in pursuing your goal. I think it's a jump to assume it'll be that smart, or that the alignment goal we end up using won't have any easier path to the goal than accepting that interruptibility, but the alignment issue is that it *wishes* it were that smart and could think of an easier way around it.