Comments


AsunasPersonalAsst t1_j05r4g9 wrote

Wish we could sue them for other fuck-ups, too. Like, you know, enabling misinformation and disinformation affecting some third-world countries rn. 🙃🤡🐮😒

30

swizzlemc2pots t1_j05sxw4 wrote

Yup, platforms are allowing the funding of disinformation and bot farms to influence other countries. It's a problem that needs solving.

14

theironlion245 t1_j06qndi wrote

I heard the CIA was an early investor in Facebook and that the NSA has a back door to access the platform.

Still think the zuck went rogue and did it on his own?

3

Duncan_PhD t1_j06vj27 wrote

Well if you heard it, then I guess it’s true. It wouldn’t surprise me though, tbh.

3

Flatline2962 t1_j07ojhv wrote

>Still think the zuck went rogue and did it on his own?

Abso-fucking-lutely. Remember after Trump got elected, when he started acting *really* hard like he was going to run for president, visiting national parks and political organizations very publicly and trying to drum up a lot of positive attention for himself? And remember when all of that stopped once Cambridge Analytica broke? My guess is that he intended to use Facebook that way in 2020 to get elected, and when the story broke he had to pretend to do something else and eventually just lost interest.

0

EraseNorthOfShrbroke t1_j08xure wrote

While I definitely think Facebook should take further action to remove violence-instigating content, this lawsuit seems unreasonable (at least without further cause).

If someone posted reprehensible threats on a community bulletin board, a WordPress blog, or a Skype group chat/profile line—is the community center, the website, or Skype now responsible?

If employees of the community center, WordPress, or Skype take the threat down in 5 hours (which is quite fast given the millions of posts on FB), are they still legally liable if something happens? What about 1 hour? What about 20 minutes? Is there concrete evidence that the post was what gave the murderer the intent or the means?

If FB can be held liable, would all text (that isn't automatically flagged—something they already do) need to be manually approved? Given the millions of posts each day, what if some slip through?

I think we can argue that FB should improve its AI screening and content moderation, but holding it liable simply shifts the responsibility. Where are the police? Why are assassinations so frequent (there's literally a civil war)? What can we do about, ahem, the civil war? Why is there so much hate to begin with—who's cheering it on, and who's acting on it?

3

AccursedChoices t1_j0bb4kn wrote

I think you make a great point, and given the information you have, I even agree with you. What you're missing, though, is that Zuckerberg profited greatly from the misinformation, helped companies target their attacks with his data, and took other actions to further the misinformation cause.

In a world where Zuckerberg wasn’t directly involved and turning a profit, I’m with you, not his fault. Unfortunately, that’s not our timeline.

2