OpenRole t1_jdzymov wrote

> The script collected around a million comments made to around 3,750 posts. Batchelor then cleaned the collection by removing comments with fewer than 5 or more than 499 words. He also ensured that no single commenter contributed more than 5 comments to the study's collection. The final dataset consisted of 177,296 comments made across 3 years, totaling around 7.75 million words.
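The cleaning steps quoted above could be sketched roughly like this (a minimal sketch, assuming comments arrive as dicts with hypothetical `author` and `body` keys; the study's actual code and schema are not given):

```python
from collections import defaultdict

def clean(comments, min_words=5, max_words=499, max_per_author=5):
    """Keep comments within the word-count range, capping each commenter's total."""
    per_author = defaultdict(int)  # running count of kept comments per author
    kept = []
    for c in comments:
        n_words = len(c["body"].split())
        if n_words < min_words or n_words > max_words:
            continue  # drop comments outside the 5-499 word range
        if per_author[c["author"]] >= max_per_author:
            continue  # drop comments beyond the per-commenter cap
        per_author[c["author"]] += 1
        kept.append(c)
    return kept
```

Note the cap keeps the first 5 qualifying comments per author in input order; the study does not say how it chose which 5 to keep, so that detail is an assumption here.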

5 comments over a 3-year period. In short: people who occasionally visit this sub, stumbled across it once, or are lurkers. This isn't an analysis of the sub, but of the average person who interacts with the sub.

That's like saying universities are dumb because most people entering the campus are undergrads and visitors.

To be fair, they did mention that a lot of complaints on the subreddit are about how sample populations are selected, so I guess this comment is meta.


OpenRole t1_j4axcag wrote

Allow people to form their own opinions on things. As a search engine, Google should simply provide accurate information. As long as people's opinions are informed, we should not impose our moral values on anyone. If what they're doing isn't illegal, it's not for us to force them to believe or act a certain way, even if we don't agree with them.

If you disagree, you'd probably have supported colonisation, "bringing civilisation to these savages", during its height.


OpenRole t1_irmb6gc wrote

It always comes back to humans being a threat, which is weird. If we make an AI specialised in creating the perfect blend of ingredients for cakes, there's no reason it would decide to kill humans, no matter how intelligent it becomes.

And if anything, the more intelligent it becomes, the less likely it will be to reach irrational conclusions.

AIs operate within their problem space, which is often limited in scope. An AI designed to be the best chess player isn't going to kill you.