
loxical t1_ixeaubx wrote

At a previous job, we had an AI tool that one manager had purchased and trusted blindly. He "set it up" to send auto-responses to customer inquiries because it could "learn". Since the only thing the AI had to learn from was its own auto-responses, it ended up dismissing practically every customer inquiry with a bot response, and then dismissing any follow-ups with the same canned non-response. He "saved the company money" on customer support staff and laid them all off.

By the time we exposed the issue with the bot, it was too late: we'd lost more than half of our clientele AND were facing legal trouble over regulations covering certain types of requests (expensive ones, think GDPR). Of course, by then he had already been promoted and talked himself up. I saw how badly he'd damaged the company, so I left very quickly; it went under after that. There was no recovering from the misunderstood, misused "automation and machine learning" system he had put in place.

The worst part is that if he had brought anyone reasonably intelligent in on his implementation early on, all of this could have been prevented by adding some controls and monitoring what was happening. Now I just tell the story to people looking into automation and harnessing AI as a warning: the system needs constant checks to make sure it doesn't eat itself.
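For what it's worth, the "controls and monitoring" don't have to be elaborate. Here's a minimal sketch (in Python, with hypothetical names and thresholds, not anything from that actual system) of the kind of guardrails I mean: escalate regulated or low-confidence inquiries to a human, and alert when the bot is closing too much of the queue on its own.

```python
import random

# Hypothetical guardrails for an auto-response bot. Names, keywords, and
# thresholds are illustrative only.

CONFIDENCE_FLOOR = 0.8      # below this, a human handles the inquiry
MAX_AUTO_CLOSE_RATE = 0.5   # alert if the bot closes more than half of inquiries
REGULATED_KEYWORDS = ("gdpr", "data deletion", "subject access")  # never auto-handle


def score_reply(inquiry: str) -> float:
    """Stand-in for the model's confidence in its drafted reply."""
    return random.random()


def route_inquiry(inquiry: str, stats: dict) -> str:
    """Decide whether the bot or a person answers this inquiry."""
    stats["total"] += 1
    text = inquiry.lower()

    # Regulated requests always go to a person, regardless of model confidence.
    if any(keyword in text for keyword in REGULATED_KEYWORDS):
        stats["escalated"] += 1
        return "human"

    if score_reply(inquiry) < CONFIDENCE_FLOOR:
        stats["escalated"] += 1
        return "human"

    stats["auto_closed"] += 1
    return "bot"


def check_health(stats: dict) -> None:
    """The 'constant check': flag when the bot is eating too much of the queue."""
    if stats["total"] == 0:
        return
    rate = stats["auto_closed"] / stats["total"]
    if rate > MAX_AUTO_CLOSE_RATE:
        print(f"ALERT: bot auto-closed {rate:.0%} of inquiries, review its responses")


if __name__ == "__main__":
    stats = {"total": 0, "auto_closed": 0, "escalated": 0}
    inquiries = [
        "Where is my order?",
        "Please delete my data (GDPR request)",
        "App crashes on login",
    ]
    for inquiry in inquiries:
        print(inquiry, "->", route_inquiry(inquiry, stats))
    check_health(stats)
```

Even something this crude would have surfaced the problem within days instead of months, because the auto-close rate would have tripped the alert long before customers started leaving.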

8

FasterDoudle t1_ixfyika wrote

How long ago was this? Are we talking current tech or like 2016?

1

loxical t1_ixfymxd wrote

It was around 2018, so it was a little while ago.

2