CactusOnFire t1_j9s4b0f wrote
Reply to comment by shoegraze in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
>We can say goodbye soon to a usable internet because power seeking people with startup founder envy are going to just keep ramping these things up.
This might seem like paranoia on my part, but I am worried about AI being leveraged to "drive engagement" by stalking and harassing people.
"But why would anyone consider this a salable business model?" It's no secret that the entertainment on a lot of different websites are fueled by drama. If someone were so inclined, AI and user metrics could be harvested specifically to find creative and new ways to sow social distrust, and in turn drive "drama" related content.
I.e., building recommendation engines specifically to show people things predicted to make them angry, so that they engage in great detail and a larger corpus of words gets exchanged that can be data-mined for advertising analytics. A toy sketch of what I mean is below.
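To make the mechanism concrete, here's a minimal illustrative sketch of such an outrage-maximizing ranker. Everything in it is invented for illustration: the feature names, the scoring heuristics, and the constants are stand-ins for what would be trained models in a real system, not any platform's actual code.

```python
import math
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    controversy_score: float  # stand-in for a learned "how inflammatory" feature
    topic_overlap: float      # stand-in for overlap with the user's hot-button topics

def predict_reply_prob(item: Item) -> float:
    """Toy stand-in for a trained model: items that are both controversial
    and on the user's sore-spot topics are most likely to provoke a reply."""
    logit = 3.0 * item.controversy_score * item.topic_overlap - 1.0
    return 1.0 / (1.0 + math.exp(-logit))

def expected_reply_tokens(item: Item) -> float:
    """The objective is text volume, not user satisfaction: every angry
    reply is more words to mine for advertising analytics."""
    return predict_reply_prob(item) * 200.0 * item.controversy_score

def rank_feed(candidates: list[Item]) -> list[Item]:
    # Rank purely by expected words generated, i.e., engagement-as-outrage.
    return sorted(candidates, key=expected_reply_tokens, reverse=True)

feed = rank_feed([
    Item("cute_cat", controversy_score=0.05, topic_overlap=0.1),
    Item("rage_bait", controversy_score=0.9, topic_overlap=0.8),
])
print([i.item_id for i in feed])  # rage_bait ranks first
```

The point of the sketch is the objective function: nothing in it rewards relevance or satisfaction, only the expected volume of text a post provokes, which is exactly the misalignment being described.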
FeministNeuroNerd t1_j9syj6j wrote
Pretty sure that's already happened... (the algorithm showing things to make you angry so you write a long response)
icedrift t1_j9uwkrx wrote
I agree with all of this, but it's already been done. Social media platforms already use engagement-driven algorithms that instrumentally arrive at recommending inflammatory content.

Cambridge Analytica also famously mined user demographics to feed carefully tailored propaganda to voters in swing states during the 2016 US election.