
EraseNorthOfShrbroke t1_j08xure wrote

While I definitely think Facebook should take further action to remove violence-inciting content, this lawsuit seems unreasonable (at least without further cause).

If someone posted reprehensible threats on a community bulletin board, a WordPress blog, or a Skype group chat/profile line, is the community center, the website, or Skype now responsible?

Let’s say employees of the community center, WordPress, or Skype take the threat down in 5 hours (which is quite fast given the millions of posts on FB). Are they still legally liable if something happens? What about 1 hour? What about 20 minutes? Is there concrete evidence that the post is what gave the murderer the intent or the means?

If FB can be held liable, would all text (that isn’t automatically flagged—something they already do) need to be manually approved? Given the millions of posts each day, what if some slip through?

I think we can argue that FB should improve its AI screening and content moderation, but holding it liable simply shifts the responsibility. Where are the police? Why are assassinations so common (there’s literally a civil war)? What can we do about, ahem, the civil war? Why is there so much hate to begin with, and who’s cheering it on and who’s acting on it?

3

AccursedChoices t1_j0bb4kn wrote

I think you make a great point, and given the information you have, I even agree with you. What you’re missing, though, is that Zuckerberg profited greatly from the misinformation, helped companies target their attacks with his data, and even performed other aiding tasks to further the misinformation cause.

In a world where Zuckerberg wasn’t directly involved and turning a profit, I’m with you, not his fault. Unfortunately, that’s not our timeline.

2