TechyDad t1_jab3q83 wrote

There are some very promising things that can come from AI, but there are valid concerns about AI usage as well.

First, AI image generators are trained on artists' work without permission and then use it to make new works in the same style. There are legitimate questions about whether this should be allowed or whether it amounts to copyright infringement.

Second, there's the black box problem. Say you ask an AI doctor to diagnose something and it comes up with a diagnosis. How did it arrive at that answer? We can't assume the output of an AI system is correct simply because it came from an AI system.

Finally, there's the bias issue. An AI system is only as good as its design and training data, and human biases can wind up baked into it. An extreme example is Tay, the chatbot Microsoft released on Twitter in 2016 that, within a day, started spouting racist and antisemitic statements. It read what humans wrote to it, incorporated that into its responses, and began saying things like "Hitler was right."

A less extreme example might be a medical AI meant to spot skin cancer that's trained on a dataset of white people's skin. Whether the bias is intentional or unintentional, such an AI might fail to properly diagnose skin cancer in Black patients, because it never learned what healthy or cancerous skin looks like on darker skin tones.
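To make that concrete, here's a toy sketch (not based on any real system; the "lesion contrast" feature, the group baselines, and all the numbers are invented for illustration) showing how a classifier trained almost entirely on one group can look accurate on that group while failing badly on a group it never saw:

```python
# Toy illustration only: the feature, numbers, and group labels are made up.
# The point is that a model fit almost entirely on one group can score well
# on that group and still fail on a group absent from its training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, baseline):
    """Simulate a single 'lesion contrast' feature whose baseline shifts by group."""
    y = (rng.random(n) < 0.5).astype(int)            # 1 = malignant, 0 = benign
    x = rng.normal(loc=baseline + y, scale=0.5, size=n)
    return x.reshape(-1, 1), y

# Train on data drawn entirely from one group (baseline = 0.0)
X_train, y_train = make_group(5000, baseline=0.0)
model = LogisticRegression().fit(X_train, y_train)

# Evaluate on the represented group vs. a group never seen in training (baseline = 2.0)
for label, baseline in [("represented group", 0.0), ("unrepresented group", 2.0)]:
    X_test, y_test = make_group(2000, baseline)
    print(f"{label}: accuracy = {model.score(X_test, y_test):.2f}")
```

And if the test set has the same skew as the training set, the headline accuracy number looks fine, which is exactly how this kind of gap goes unnoticed.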

This isn't to say that all AI is garbage and should be tossed out. On the contrary, it's very promising. But you also can't hand-wave away every concern as "old geezers unwilling to adapt to change." Like a lot of new technologies, there will be good uses and bad uses. There will be implementations that advance humanity and ones that deserve to be deleted immediately. It's important to keep a critical eye on AI so we can spot and promote the good uses while stopping the bad ones (and fixing them where possible).

10

snohobdub t1_jabtouy wrote

Thanks for summarizing Last Week Tonight

0