
SidewaysFancyPrance t1_j5zbws6 wrote

I'm really not looking forward to the inevitable: when AI starts doing a lot of this low-level/foundational work, people are going to treat it as a convenient scapegoat for anything that goes wrong in their projects, and malicious actors will use it to shift liability. "The AI got the analysis wrong," "The AI missed this parameter," and so on. At some point there will be a court case where real people were harmed, the AI gets blamed, and the people actually responsible are let off the hook.

AI needs accountability, and that is woefully missing right now. We already have half the country wanting AIs to be racist and tell horrible jokes on command, and getting mad when the AI refuses. I don't see how our society can handle the AI implementations we're witnessing. It's not going to go well.


LetsGoHawks t1_j5zmsqb wrote

There are mistakes now, and low-level staffers get blamed for them. If it's bad enough, the staffer gets fired and replaced by someone of (probably) equal ability.

So the real question will be: Are the AI mistakes worse and/or more numerous than the human mistakes?
