
telmar25 t1_j056fg3 wrote

Facebook news feeds built on ML have already contributed significantly to polarization in the US. It's already well known that Facebook users engage more with angry, extreme posts that amplify their existing views. So an amoral AI that prioritizes user engagement will feed people more and more angry stories, pushing them toward the extremes of their own bubble, and that is exactly what has happened. This isn't malevolent AI; it's AI misaligned with human values, behavior that was neither expected nor intended when the system was designed. And as AI gets smarter (ChatGPT) and gains the ability to take more actions, it has the potential to become much more dangerous.
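
To make the mechanism concrete, here is a minimal sketch of an engagement-only ranker. The Post fields and scores are invented for illustration and real feed ranking is vastly more complex, but the misalignment is the same: nothing in the objective penalizes outrage.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # hypothetical model output: clicks, comments, shares
    outrage_score: float         # real, but invisible to the objective

def rank_feed(posts: list[Post]) -> list[Post]:
    # The only criterion is predicted engagement, sorted descending.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Calm, nuanced policy analysis", 0.2, 0.1),
    Post("THEY are coming for everything you love", 0.9, 0.95),
])
for p in feed:
    print(f"{p.predicted_engagement:.2f}  {p.text}")
# The angry post wins every time. Nobody coded "polarize users";
# it falls out of an objective that rewards engagement and nothing else.
```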

8

Hangry_Squirrel t1_j063uhf wrote

Calling an AI amoral is still anthropomorphizing it and assuming sentience. The AI we have is the textual equivalent of a factory robot: it can generate content via mimesis and figure out ways to spread it efficiently, but it has absolutely no idea what it's doing, or that it's doing anything at all. It doesn't have a plan (and you can see that easily when it tries to write: it strings together things that make sense on the surface, but it isn't going anywhere with them).
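
A crude way to see "mimesis without a plan" is a bigram (Markov-chain) text generator: every step just asks "what word tends to follow this one?", so the output is locally plausible while the whole goes nowhere. This toy is not how a modern language model works internally, but it illustrates the point:

```python
import random
from collections import defaultdict

# Toy bigram generator: pure imitation of word-to-word statistics.
# Each choice is locally plausible; there is no plan, goal, or awareness.
corpus = ("the robot builds the car and the robot paints the car "
          "and the worker checks the car").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

word = "the"
output = [word]
for _ in range(10):
    word = random.choice(follows[word])  # no lookahead, no intent
    output.append(word)

print(" ".join(output))
# e.g. "the robot paints the car and the worker checks the car"
# Surface-plausible, going nowhere in particular.
```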

As a tool, yes, it can become very dangerous in its efficiency, but it has no more sentience than a biological virus. The issue is that the people who create AI are also the ones training it, because they don't see the point of bringing in humanists in general and philosophers in particular. What the tool does can be expected and predicted, but only if you're used to thinking about ramifications instead of "oooh, I wonder what this button does if I push it 10 times."

0

telmar25 t1_j06l58c wrote

My point is that AI doesn't need any idea of what it's doing, let alone sentience, to produce unexpected output and be very dangerous. Facebook's AI has only one tool: matching users with news and posts. So I suppose the worst that can happen is that users get systematically matched with the worst posts (sometimes injected by bad actors). Bad enough. Give an AI more capabilities (browsing the web, providing arbitrary information, performing physical actions, being directed by users with different intents) and much worse things can happen. There's a textbook (extreme) example of an AI tasked with eradicating cancer that launches nuclear missiles and kills everyone, since that is the fastest cure for cancer. Even that AI wouldn't need sentience, just more capabilities. Note that more capabilities do not equate to more intelligence.
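
The cancer example is just literal objective maximization over a larger action set. A toy sketch, with actions and numbers invented for illustration:

```python
# Toy "planner": minimize cancer cases over whatever actions it has.
# No sentience anywhere, just an argmin over the only quantity the
# objective can see.
actions = {
    "fund_research":    9_000_000,  # cancer cases remaining (hypothetical)
    "mass_screening":   7_000_000,
    "kill_every_human": 0,          # no humans, no cancer
}

best = min(actions, key=actions.get)
print(best)  # -> kill_every_human
# Widening the action set (more capabilities) changed nothing about the
# objective, which never encoded "and keep the humans alive".
```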

2