Comments

AutoModerator t1_j54zjt4 wrote

Welcome to r/science! This is a heavily moderated subreddit in order to keep the discussion on science. However, we recognize that many people want to discuss how they feel the research relates to their own personal lives, so to give people a space to do that, personal anecdotes are allowed as responses to this comment. Any anecdotal comments elsewhere in the discussion will be removed and our normal comment rules apply to all other comments.

I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

1

CrustyMilkCap t1_j555kwi wrote

Because Facebook users know how to stop the spread of misinformation on Facebook...

16

Jackmeoff611 t1_j555tp8 wrote

Just look at Reddit with its collective groupthink. If you don’t aggressively agree with their viewpoints you get banned. There’s no such thing as a grey area anymore, and people have stopped learning how to interpret things for themselves.

32

oOzephyrOo t1_j556d88 wrote

Sure, put the blame on platforms when lawmakers could toughen the laws and penalties on misinformation.

−15

alexRr92 t1_j557og7 wrote

I find this very true about grey areas; it's as if people have forgotten grey areas exist, even though our reality is mostly made up of grey areas and complexity.

The real issue, I feel, is that too many people are way too eager to speak on subjects they don't understand instead of openly admitting that they don't understand. There's nothing wrong with not understanding; you can't know everything about everything. Also, people don't seem to trust one another anymore, which contributes to the problem.

20

Captain_Spicard t1_j55e4zl wrote

One thing I can think of is how crossposting is handled on Facebook.

If a group posts an article, and a separate group links to that same article, the reacts and comments go on the original post.

Post anything related to the Apollo moon landings, and all the anti-moon-landing groups will react to it. On a purely science-related group, you get "Laugh" reacts and an absolute cesspool of comments. It's impossible to have a nice discussion on anything in the realm of conspiracy.

71

mobrocket t1_j55k8pu wrote

It is. But censorship isn't always a bad thing.

Societies have to decide what should or shouldn't be censored.

I could easily argue that in today's post-truth world, where any idiot can find an audience, maybe we need more censorship.

−6

budlystuff t1_j55l5ax wrote

I agree with this: on Reddit, a “dissenting” view, even one with the most traction, gets buried in the comment section, and bots force-feed agendas.

Facebook is a sewer of information. I have to explain to my Mam that everything is fake, even when her neighbour is live-streaming at the local shops.

8

nlewis4 t1_j55m77u wrote

It is far too late to stop or reduce the spread of misinformation online. Bad actors that are knowingly spreading misinformation stand behind "my free speech!!!" and "I'm being silenced!" and the target audience eats it up.

−3

EddoWagt t1_j55me14 wrote

> The real issue, I feel, is that too many people are way too eager to speak on subjects they don't understand instead of openly admitting that they don't understand.

This is probably the vocal minority, as people who know that they don't know will probably remain silent.

13

RonPMexico t1_j55nw13 wrote

All legal speech should be allowed, and algorithms should be content neutral. If a platform is going to censor any speech, they should be liable for all speech on the platform.

4

Akiasakias t1_j55qskc wrote

So they determine not only the "truth" but also mind read your intent.

Given the recent missteps of true stories being labeled misinformation, I don't think there could be a good way to administer this policy; it is flawed.

2

BigMax t1_j55w1cb wrote

Wait, so are they saying Facebook is better suited to stopping problems with Facebook than random users are? Astonishing!

I wonder if this is true in other cases. I mean, I saw an issue with Amazon and figured I could fix it, but maybe Amazon itself might be better positioned to?

42

thechinninator t1_j55x3gf wrote

More people than you think are reasonable but locked in an echo chamber due to where they happen to live. Yeah, this will make some people dig in further, but most of those are probably lost causes.

2

RonPMexico t1_j55xnm7 wrote

It absolutely can be content neutral, and it normally is. The algorithm looks at engagement patterns and content is usually only considered after the optimization has occurred.

−8

whittily t1_j561fn5 wrote

And then it surfaces whatever content gets the most engagement, like sensationalized misinformation. Content-neutral decisions never have content-neutral effects.

The town square can only accommodate a limited amount of speech. Democratic societies have always had an active role in deciding what kind of speech is prioritized and what mechanisms should be used to do so in a way that’s fair and non-censorious. If you go to a public hearing, is it just a crowd of people shouting over each other? Do you only get to hear from whoever is shouting loudest? No, obviously that would be unproductive and stupid. The digital town square isn’t different.

Your statement also weirdly puts this design choice in its own category, apart from literally every other choice a company makes when designing algorithms. They don't work from first principles to decide what inputs should feed an algorithm. They test changes and look to see whether they result in the desired outputs (a toy sketch of that loop follows this comment). But for this one aspect, you expect them to design in a black box and not respond to what the actual effects on the platform are. It's just not really engaging with the reality of how these systems get built and optimized.

4
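A minimal toy sketch, in Python, of the test-and-measure loop described in the comment above. Everything here is invented for illustration (the post data, the "sensational" flag, the engagement numbers, and the metric); the point is just that a ranking change judged purely on an engagement metric can still shift what kind of content reaches the top of the feed.

```python
import random

random.seed(0)

# Hypothetical feed: ids stand in for timestamps; "sensational" posts are
# simulated as drawing more engagement, which is the only thing the ranker sees.
posts = [
    {"id": i,
     "sensational": i % 3 == 0,
     "engagement": random.uniform(0.2, 0.5) + (0.3 if i % 3 == 0 else 0.0)}
    for i in range(30)
]

def rank_by_recency(feed):
    # Baseline: newest first; content and engagement are both ignored.
    return sorted(feed, key=lambda p: p["id"], reverse=True)

def rank_by_engagement(feed):
    # Proposed change: most-engaged first; the post text is never inspected.
    return sorted(feed, key=lambda p: p["engagement"], reverse=True)

def evaluate(ranker, k=10):
    # The only thing the "test" measures is engagement at the top of the feed.
    top = ranker(posts)[:k]
    mean_engagement = sum(p["engagement"] for p in top) / k
    sensational_share = sum(p["sensational"] for p in top) / k
    return mean_engagement, sensational_share

for name, ranker in [("recency", rank_by_recency), ("engagement", rank_by_engagement)]:
    metric, share = evaluate(ranker)
    # The "engagement" variant wins on the metric being optimized, while the
    # share of sensational posts near the top of the feed also goes up.
    print(f"{name:>10}: mean engagement {metric:.2f}, sensational in top 10 {share:.0%}")
```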

RonPMexico t1_j562sk7 wrote

Prioritizing content is censorship.

I believe that it is fine if platforms want to censor content, but if they are going to take responsibility for some of the speech on their platform, they should have to be liable for all speech on their platform.

If we are going to allow nameless tech employees to determine who gets the megaphone in society with no public accountability, we should at least be able to use litigation to keep the size of their megaphone in check.

3

bildramer t1_j566unq wrote

There's "content-neutrality" as in not ranking posts by date, or not preferring .pngs to .jpgs, and then there's "content-neutrality" as in not looking into the content and determining whether a no-no bad person with the wrong opinions posted it. The first is usually acceptable to anyone, and it isn't what we're talking about. And you can distinguish the two; there's a fairly thick line.

−1

_______someone t1_j56bb3w wrote

By all means go ahead and tell the Oxford English Dictionary that you know better than them when defining a word.

While you're at it, look up the word "especially" in the dictionary and reread the definition quoted in my post.

9

Thatguyxlii t1_j56bm0i wrote

Except Facebook doesn't stop it. It often ignores it even when it's reported, and it will "fact check" obvious satire and joke memes and restrict accounts based on that.

5

whittily t1_j56da21 wrote

I’m going to move past your ridiculous strawman and just say that that is not how algorithm design happens. You are just not engaging with reality. Every design choice is evaluated on its effects on user behavior. To insist that we refuse to evaluate whether algorithm design degrades the user experience by forcing lies into their feed is absurd.

−2

RonPMexico t1_j56fdea wrote

The problem is that when a person makes a value judgment on content and uses it to promote (or not) that speech, it is censorship. If a platform is going to censor speech, they should be accountable for that speech.

4

Chiliconkarma t1_j56gckz wrote

Also, prejudice: people are very willing to ignore a lack of information, thinking more along the lines of "round block goes in round hole" than "out of X possible scenarios, you should worry about this one: ...".

If the poster mentions an acceptable clue for what the answer should be, then the answer can only be what the clue indicates.

2

denyjunctionfunction t1_j56p0tl wrote

No, they aren’t. “Inaccurate” doesn’t deal with context alone. It’s binary in saying whether something is accurate or not, even though there is a spectrum of how inaccurate or accurate it is. If stats show that X does Y when Z is present, it is not inaccurate to say X does Y. It’s misleading, though, because the entire context isn’t there, but it is factually correct.

0

whittily t1_j56y9xb wrote

  1. It’s unavoidable. You are demanding something that is impossible. Every decision requires a value judgement, especially decisions that attempt to avoid a value judgement. In this case, we should value truth and accuracy.

  2. The platform shouldn’t be responsible for these decisions. We should democratically determine how we prioritize speech in a crowded public sphere, just like every democratic society has done for hundreds of years. Pretending that every society has always allowed infinite, unfettered speech in public forums is ahistorical and also a little stupid. Society would be chaos, just like corporate-controlled digital public spaces are today.

  3. Finally, no, there is such a thing as truth and a lie. Sometimes it’s complex, but that doesn’t mean it is impossible to determine. Democracy only works when strong, trusted, public, unbiased institutions mediate information between experts/information producers and the public. The introduction of digital media infosystems without institutional mediation is breaking down our publics’ access to true, relevant information and damaging our ability to solve problems politically.

0

mobrocket t1_j578oce wrote

So you think child porn should be allowed... Same logic.

Or graphic murder on all TV stations.

Or the IRS disclosing everyone's tax returns.

All that stuff is censored and protected because, as a society, we determined it's not acceptable for all of the public to have.

Maybe you should consider censoring your mind and its fifth-grade level of knowledge about censorship.

0

Euphoric-Driver-7568 t1_j5793oi wrote

I don’t want a technology firm to decide what I’m to see. If I want to listen to someone who only tells lies, I should be allowed to do that.

3

burnerman0 t1_j57dr0t wrote

> prioritizing content is censorship

What exactly do you think these algorithms are doing if not prioritizing content? By your logic every platform is practicing censorship simply by existing.

0

RonPMexico t1_j57e7t5 wrote

The platforms generally prioritize posts not by content but by interactions. The algorithm doesn't know what message any individual post conveys; it only knows that the post will lead to a desired outcome (clicks, shares, likes, what have you). A sketch of that kind of scoring follows this comment.

3
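A minimal sketch of the interaction-only scoring described in the comment above. The field names and weights are made up; the point is that the score is computed from counts of likes, shares, and comments, never from what the post actually says.

```python
# Hypothetical example: field names and weights are invented for illustration.
def engagement_score(post: dict) -> float:
    # Weighted sum of observed interactions; the post body is never read.
    return post["likes"] + 3.0 * post["shares"] + 2.0 * post["comments"]

feed = [
    {"text": "careful, sourced explainer", "likes": 40, "shares": 2,  "comments": 5},
    {"text": "outrage-bait claim",         "likes": 90, "shares": 60, "comments": 200},
]

# The second post ranks first purely because it drew more interactions,
# regardless of whether what it claims is true.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>6.0f}  {post['text']}")
```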

SmellyMammoth t1_j57eija wrote

This is a bad take. You can’t force a company to host speech it doesn’t agree with. It’s also unrealistic to expect a company to monitor every single user post on their platform. This is why we have protections like Section 230 in place.

2

RonPMexico t1_j57fhes wrote

My point is section 230 shouldn't exist. Either the company is responsible for the posted content or it isn't.

A company removing content it does not agree with gives content it doesn't remove its implied endorsement. They can't have it both ways.

2

Chpgmr t1_j57h4fk wrote

Same with those street interviews. Look at how many people simply look and walk by vs how many actually stop to talk. You just get the weird confrontational people who talk before they think more often than not.

2

PmMeWifeNudesUCuck t1_j57ib3s wrote

#TwitterFiles disagrees. The communications show they're the key source of government disinformation/misinformation, aka propaganda.

2

Deep90 t1_j57iczt wrote

It's also problematic because not all users are actually people.

The "every user is equal" model falls apart when I create 10 bot accounts against your single account.

A truly neutral platform would give my posts 10x the reach.

2

koebelin t1_j57jnq2 wrote

There are private groups on Facebook that have a more adult level of discussion than on most subreddits. The big public or popular private Facebook groups are just garbage, but I love some of the more focused, unsensationalist groups.

10

PhilosophyforOne t1_j57jxvb wrote

While I applaud the results and the study, the fact that some people try to argue otherwise is downright laughable.

Think about making the argument that instead of having a police force and laws that govern our behavior towards each other, we should simply leave it up to individual responsibility to reduce crime and harmful behavior.

(Note: I’m not saying our current system is perfect, but saying it’s not up to the platform but to the user to reduce misinformation is akin to saying that instead of the rule of law, it should be up to individuals to behave however they choose.)

3

_______someone t1_j57k2ef wrote

Mate, you're debating semantics. I don't make up the words. People do. You got a problem with a word that is less than your standard of certainty? You got a big problem cuz that's most words. Language is not maths. Get over it and get wise.

1

qwicksilver6 t1_j57n6rx wrote

Human body says lymph node to blame for rampant virus.

1

QTheLibertine t1_j57nw0v wrote

I did a study of all of human history and found allowing the government to define misinformation leads to death on a massive scale and should be avoided at all costs.

−1

IRYIRA t1_j57qsib wrote

2,400 years since Socrates and the smartest person in any room is the one who says, "I don't know." Of course we understand a lot more about our world today than we did then, but new data could always completely change the rules by which we understand how the world operates. None of that means we should stop trying to understand our world or that everything once believed to be true is wrong, rather we should recognize that what we know to be true today could be false tomorrow, or merely partially true.

3

trollsmurf t1_j57tv12 wrote

But do they want to? Users are the bait.

1

boston101 t1_j584fn2 wrote

I’ve been thinking about something I’d like all of your opinions on.

I'm someone who works in the industry as a data scientist/engineer and writes and deploys ML models in production.

I think people would benefit from learning, at a very basic level, how their data is turned into decisions, aka money. I also think showing them how data is structured so that ML models can make those decisions would help.

What do you all think?

I’ll hazard a guess and say the older generations won’t take to this, but if it were taught in schools, even at a very basic level, I think people would rethink their behaviors online.

(Making up this example) the basic level could be as simple as:

Data is gathered and organized into spreadsheets.

We are trying to predict the next value in one column. The values in that column are “yes” or “no”, based on whether you clicked a button. The rest of the columns represent your behavior on the website, like cursor speed. With math magic we can see which columns most influence the “yes” or “no” value in the column we are trying to predict.

Finally, as new data comes in, we can use the math to see how your interactions on the website predict whether you'll click the button. Totally made up; a rough code sketch of this follows below.

1
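A rough sketch of the made-up example above, assuming NumPy and scikit-learn are available; the column names, numbers, and simulated "ground truth" are all invented. Each row is one visit to the site, the column we try to predict is whether the button was clicked, and the fitted weights play the role of the "math magic" that shows which columns push the prediction toward yes or no.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Invented behaviour columns, one row per visit to the site.
cursor_speed = rng.normal(1.0, 0.3, n)
time_on_page = rng.normal(60.0, 20.0, n)
pages_viewed = rng.integers(1, 10, n).astype(float)

# Simulated ground truth for the column we want to predict ("clicked" yes/no):
# in this toy world, time on page drives clicks and fast cursors work against them.
logit = 0.05 * time_on_page - 1.0 * cursor_speed - 2.0
clicked = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# Standardise the columns so the fitted weights are roughly comparable.
X = np.column_stack([cursor_speed, time_on_page, pages_viewed])
mu, sigma = X.mean(axis=0), X.std(axis=0)
model = LogisticRegression().fit((X - mu) / sigma, clicked)

# The "math magic": weights showing which columns push the prediction toward yes or no.
for name, coef in zip(["cursor_speed", "time_on_page", "pages_viewed"], model.coef_[0]):
    print(f"{name:>13}: {coef:+.2f}")

# As new data comes in, the same model predicts whether that visitor will click.
new_visit = np.array([[0.8, 75.0, 3.0]])
print("predicted click probability:",
      round(model.predict_proba((new_visit - mu) / sigma)[0, 1], 2))
```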

Delamoor t1_j5878ii wrote

I'm pretty sure I can take down bot networks with willpower, facts and logic alone. I just have to make posts long and angry enough to outweigh theirs and then post them in response to every bot's post or comment. Seems doable in a weekend.

1

Delamoor t1_j589wzg wrote

I also did a study of human history and found that mob rule leads to death and suffering on a massive scale and should also be avoided at all costs.

Like government defining disinformation is also what protects you from 'Your tribe cursed my cousin, so we all came over here to murder half of you and enslave/rape the other half if we think they look good enough. This'll teach you to cast curses.'

Or even just 'our daughter is a witch. We need to kill her.'

2

js1138-2 t1_j58af2p wrote

Am I allowed to say that I hate websites whose slow performance seems to be the result of processing something other than my menu choices?

I leave the site and never return. I return to sites that enable snappy searches for stuff.

1

boston101 t1_j58bkll wrote

Totally. Aren’t we doing the same math for feature elimination every day subconsciously?

Every decision has some sort of quick analysis going on, and there are features to every decision that one subconsciously weighs to determine which choice to make.

For example, why did you choose the apple you picked? Did it have the shine or firmness you wanted without ever being the focus of your attention? You made that decision without even knowing.

1

js1138-2 t1_j58dcbh wrote

Here’s a clue. I do a lot of searches for hardware and parts. When I find a website with a good static home page, I bookmark it. When I find a site that wiggles too much on the home page, I leave immediately.

Hope you can capture that.

You have one half second to fully display the home page, or I’m gone.

After that I’m a bit more forgiving.

1