Comments

Captain_Spicard t1_j55e4zl wrote

One thing I can think of is how crossposting is handled on Facebook.

If a group posts an article, and a separate group links to that same article, the reacts and comments go on the original post.

Post anything related to the Apollo moon landing, and all the anti-moon-landing groups will react to it. On a purely science-related group, you get "Laugh" reacts and an absolute cesspool of comments. It's impossible to have a nice discussion on anything in the realm of conspiracy.

71

koebelin t1_j57jnq2 wrote

There are private groups on Facebook with a more adult level of discussion than most subreddits. The big public or popular private Facebook groups are just garbage, but I love some of the more focused, unsensational groups.

10

PooperJackson t1_j573nsc wrote

>It's impossible to have a nice discussion on anything in the realm of conspiracy.

As opposed to the rest of the internet where nice discussions are often the norm.

7

Groperofeuropa t1_j58lnwt wrote

That's a false equivalence. The assertion is that this is a way in which Facebook's platform design makes things worse, which is the topic of the post.

14

BigMax t1_j55w1cb wrote

Wait, so are they saying Facebook is better suited than random users to stopping problems with Facebook? Astonishing!

I wonder if this is true in other cases. I mean, I saw an issue with Amazon and figured I could fix it, but maybe Amazon itself might be better positioned to?

42

processedmeat t1_j57c68e wrote

Taken another way... users don't take accountability for what they share on Facebook.

3

Chpgmr t1_j57gml0 wrote

And what is the solution to that?

3

processedmeat t1_j57hd0x wrote

Users shouldn't hit the share button when they see misinformation.

2

Chpgmr t1_j57irfp wrote

Because we're all constantly up to date on what is and isn't misinformation, and there certainly aren't nefarious people out there.

4

insaneintheblain t1_j586v8h wrote

You realise that users create and share the actual content on Facebook, right?

1

Delamoor t1_j5878ii wrote

I'm pretty sure I can take down bot networks with willpower, facts and logic alone. I just have to make posts long and angry enough to outweigh theirs and then post them in response to every bot's post or comment. Seems doable in a weekend.

1

Jackmeoff611 t1_j555tp8 wrote

Just look at Reddit with its collective groupthink. If you don't aggressively agree with the prevailing viewpoints, you get banned. There's no such thing as a grey area anymore, and people have stopped learning how to interpret things for themselves.

32

alexRr92 t1_j557og7 wrote

I find this very true about grey areas; it's like people have forgotten grey areas exist, even though our reality is mostly made up of grey areas and complexity.

The real issue, I feel, is that too many are way too eager to speak on subjects they don't understand instead of openly admitting that they don't understand. There's nothing wrong with not understanding; you can't know everything about everything. People also don't seem to trust one another anymore, which contributes to the problem.

20

EddoWagt t1_j55me14 wrote

>The real issue, I feel, is that too many are way too eager to speak on subjects they don't understand instead of openly admitting that they don't understand.

This is probably the vocal minority, as people who know they don't know will probably remain silent

13

Chpgmr t1_j57h4fk wrote

Same with those street interviews. Look at how many people simply glance and walk by versus how many actually stop to talk. More often than not, you just get the weird, confrontational people who talk before they think.

2

IRYIRA t1_j57qsib wrote

It's been 2,400 years since Socrates, and the smartest person in any room is still the one who says, "I don't know." Of course we understand a lot more about our world today than we did then, but new data could always completely change the rules by which we understand how the world operates. None of that means we should stop trying to understand our world, or that everything once believed to be true is wrong; rather, we should recognize that what we know to be true today could be false tomorrow, or merely partially true.

3

Chiliconkarma t1_j56gckz wrote

Also, prejudice: people are very willing to ignore a lack of information, thinking more along the lines of "round block goes in round hole" than "out of X possible scenarios, you should worry about this: ...".

If the poster mentions an acceptable clue about what the answer should be, then the answer can only be what the clue indicates.

2

budlystuff t1_j55l5ax wrote

I agree with this; a "dissenting" view on Reddit, even one with the most traction, gets buried in the comment section, and bots keep force-feeding agendas.

Facebook is a sewer of information. I explain to my Mam that everything is fake, even when her neighbour is live-streaming at the local shops.

8

CrustyMilkCap t1_j555kwi wrote

Because Facebook users know how to stop the spread of misinformation on Facebook...

16

Thatguyxlii t1_j56bm0i wrote

Except Facebook doesn't stop it, and will often ignore it even when it's reported. Meanwhile it will "fact check" obvious satire and joke memes and restrict accounts on that basis.

5

Few-Ability-7312 t1_j54zulg wrote

What qualifies as misinformation?

4

_______someone t1_j55baoo wrote

According to Google, misinformation is defined by the OED as "false or inaccurate information, especially that which is deliberately intended to deceive".

12

denyjunctionfunction t1_j55v4un wrote

So misleading and cherry-picked information is fine so long as it is true (but leaves out context)?

4

thechinninator t1_j55vu83 wrote

An unfortunate loophole, but idk how to account for this if you want to keep moderators' personal biases out of the execution

3

denyjunctionfunction t1_j55y9jm wrote

A major loophole. I’m willing to bet most of the things that people call misinformation aren’t straight up lies, but just cherry picked information.

3

_______someone t1_j56com4 wrote

Misleading and cherry-picked information is inaccurate information.

3

denyjunctionfunction t1_j56p0tl wrote

No, it isn't. "Inaccurate" doesn't deal with context; it's a judgment about whether a statement is factually correct, on a spectrum of how accurate or inaccurate it is. If stats show that X does Y when Z is present, it is not inaccurate to say X does Y. It's misleading, though, because the entire context isn't there, but it is factually correct.

0

Akiasakias t1_j55qskc wrote

So they determine not only the "truth" but also mind-read your intent.

Given the recent missteps of true stories being labeled misinformation, I don't think there's a good way to administer this policy; it is flawed.

2

resorcinarene t1_j55qbyf wrote

No, misinformation is false or inaccurate information. That's it. Information that is deliberately intended to deceive is disinformation.

Your post is misinformation

1

_______someone t1_j56bb3w wrote

By all means go ahead and tell the Oxford English Dictionary that you know better than them when defining a word.

While you're at it, look up the word "especially" in the dictionary and reread the definition quoted in my post.

9

Proponentofthedevil t1_j56mlh8 wrote

The definition isn't the issue. Read the question again and figure out what it is asking. Your definition does nothing to figure that out.

−2

_______someone t1_j57fkw8 wrote

I believe this is the third time I've said it: it is not my definition.

0

Proponentofthedevil t1_j57iq2d wrote

That's not the issue. The definition is just that. Yes, we know that bad info is bad info. The definition doesn't tell us what is misinfo. That's the question. Are you being purposefully obtuse?

0

_______someone t1_j57k2ef wrote

Mate, you're debating semantics. I don't make up the words; people do. You've got a problem with a word that doesn't meet your standard of certainty? Then you've got a big problem, because that's most words. Language is not maths. Get over it and get wise.

1

RonPMexico t1_j55nw13 wrote

All legal speech should be allowed, and algorithms should be content neutral. If a platform is going to censor any speech, they should be liable for all speech on the platform.

4

whittily t1_j55x1dr wrote

No algorithmically-curated content feed can be content neutral. Every design choice affects what you see and comes with unintended curatorial effects. It’s an oxymoron.

20

Deep90 t1_j57iczt wrote

It's also problematic because not all users are actually people.

The "Every user is equal" model falls apart when I create 10 bot accounts against your single account.

​

A truly neutral platform would give my posts 10x the reach.

2

RonPMexico t1_j55xnm7 wrote

It absolutely can be content neutral, and it normally is. The algorithm looks at engagement patterns and content is usually only considered after the optimization has occurred.

−8

whittily t1_j561fn5 wrote

And then it surfaces content that gets high engagement, like sensationalized misinformation. Content-neutral decisions never have content-neutral effects.

The town square can only accommodate a limited amount of speech. Democratic societies have always had an active role in deciding what kind of speech is prioritized and what mechanisms should be used to do so in a way that’s fair and non-censorious. If you go to a public hearing, is it just a crowd of people shouting over each other? Do you only get to hear from whoever is shouting loudest? No, obviously that would be unproductive and stupid. The digital town square isn’t different.

Your statement also weirdly puts this design choice in a category of its own, apart from literally every other choice a company makes when designing algorithms. Companies don't work from first principles to decide what inputs should feed an algorithm; they test changes and look to see whether they produce the desired outputs. But for this one aspect, you expect them to design in a black box and not respond to the actual effects on the platform. It's just not really engaging with the reality of how these systems get built and optimized.

4

RonPMexico t1_j562sk7 wrote

Prioritizing content is censorship.

I believe that it is fine if platforms want to censor content, but if they are going to take responsibility for some of the speech on their platform, they should have to be liable for all speech on their platform.

If we are going to allow nameless tech employees to determine who gets the megaphone in society with no public accountability, we should at least be able to use litigation to keep the size of their megaphone in check.

3

burnerman0 t1_j57dr0t wrote

> prioritizing content is censorship

What exactly do you think these algorithms are doing if not prioritizing content? By your logic every platform is practicing censorship simply by existing.

0

RonPMexico t1_j57e7t5 wrote

The platforms generally prioritize posts not by content but by interactions. The algorithm doesn't know what message any individual post conveys; it only knows that the post will lead to a desired outcome (clicks, shares, likes, what have you).
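
A minimal sketch in Python of the interaction-only ranking described above; the field names and weights are invented for illustration, not any platform's actual formula. The ranker never reads the post text, only the interaction counts.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str      # never inspected by the ranker below
    clicks: int
    shares: int
    likes: int

def engagement_score(post: Post) -> float:
    # Weights are arbitrary illustration values, not a real platform formula.
    return 1.0 * post.clicks + 3.0 * post.shares + 2.0 * post.likes

def rank_feed(posts: list[Post]) -> list[Post]:
    # Ordering depends only on interaction counts, never on what a post says.
    return sorted(posts, key=engagement_score, reverse=True)
```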

3

bildramer t1_j566unq wrote

There's "content-neutrality" as in not ranking posts by date, or not prefering .pngs to .jpgs, and then there's "content-neutrality" as in not looking into the content and determining if a no-no badperson with the wrong opinions posted it. The first is usually acceptable by anyone, and not what we're talking about. And you can distinguish the two, there's a fairly thick line.

−1

whittily t1_j56da21 wrote

I’m going to move past your ridiculous strawman and just say that that is not how algorithm design happens. You are just not engaging with reality. Every design choice is evaluated on its effects on user behavior. To insist that we refuse to evaluate whether algorithm design degrades the user experience by forcing lies into their feed is absurd.

−2

RonPMexico t1_j56fdea wrote

The problem is that when a person makes a value judgment on content and uses it to promote (or not) that speech, it is censorship. If a platform is going to censor speech, they should be accountable for that speech.

4

whittily t1_j56y9xb wrote

  1. It’s unavoidable. You are demanding something that is impossible. Every decision requires a value judgement, especially decisions that attempt to avoid a value judgement. In this case, we should value truth and accuracy.

  2. The platform shouldn't be responsible for these decisions. We should democratically determine how we prioritize speech in a crowded public sphere, just like every democratic society has done for hundreds of years. Pretending that every society has always allowed infinite, unfettered speech in public forums is ahistorical and also a little stupid. Society would be chaos, just like corporate-controlled digital public spaces are today.

  3. Finally, no, there is such a thing as truth and a lie. Sometimes it’s complex, but that doesn’t mean it is impossible to determine. Democracy only works when strong, trusted, public, unbiased institutions mediate information between experts/information producers and the public. The introduction of digital media infosystems without institutional mediation is breaking down our publics’ access to true, relevant information and damaging our ability to solve problems politically.

0

RonPMexico t1_j574jrw wrote

You are unapologetically anti-free speech. What a stance.

3

whittily t1_j57728e wrote

No. Free speech =/= unfettered, unmoderated speech, and it never has in the history of the US. No society operates that way.

1

RonPMexico t1_j577huj wrote

Ok, is the sentence "a man cannot become a woman" true or false? How would that be decided in a way that does not infringe on free speech, in your world?

3

SmellyMammoth t1_j57eija wrote

This is a bad take. You can’t force a company to host speech it doesn’t agree with. It’s also unrealistic to expect a company to monitor every single user post on their platform. This is why we have protections like Section 230 in place.

2

RonPMexico t1_j57fhes wrote

My point is that Section 230 shouldn't exist. Either the company is responsible for the posted content or it isn't.

A company removing content it does not agree with gives the content it doesn't remove its implied endorsement. They can't have it both ways.

2

Euphoric-Driver-7568 t1_j5793oi wrote

I don’t want a technology firm to decide what I’m to see. If I want to listen to someone who only tells lies, I should be allowed to do that.

3

PhilosophyforOne t1_j57jxvb wrote

While I applaud the results and the study, the fact that some people try to argue otherwise is downright laughable.

Think about making the argument that instead of having a police force and laws that govern our behavior towards each other, we should simply leave it up to individual responsibility to reduce crime and harmful behavior.

(Note: I'm not saying our current system is perfect, but saying it's up to the user rather than the platform to reduce misinformation is akin to saying that, instead of the rule of law, it should be up to individuals to behave however they choose.)

3

PmMeWifeNudesUCuck t1_j57ib3s wrote

#TwitterFiles disagrees. The communications show they're the key source of government disinformation/misinformation, a.k.a. propaganda.

2

AutoModerator t1_j54zjt4 wrote

Welcome to r/science! This is a heavily moderated subreddit in order to keep the discussion on science. However, we recognize that many people want to discuss how they feel the research relates to their own personal lives, so to give people a space to do that, personal anecdotes are allowed as responses to this comment. Any anecdotal comments elsewhere in the discussion will be removed and our normal comment rules apply to all other comments.

I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

1

qwicksilver6 t1_j57n6rx wrote

Human body says lymph node to blame for rampant virus.

1

trollsmurf t1_j57tv12 wrote

But do they want to? Users are the bait.

1

boston101 t1_j584fn2 wrote

I've been thinking about something I'd like your opinions on.

I work in the industry as a data scientist/engineer and write and deploy my ML models in production.

I think people would benefit from learning, at a very basic level, how their data is turned into decisions, a.k.a. money. I also think showing them how data is structured for ML models to make those decisions would help.

What do you all think?

I'll hazard a guess and say the older generations won't take to this, but if it were taught in schools, even at a very basic level, I think people would rethink their behavior online.

(Making up this example) the basic level could be as simple as:

Data is gathered and organized into spreadsheets.

We are trying to predict the next value in one column. The values in this column are "yes" or "no", based on whether you clicked a button. The rest of the columns represent your behavior on the website, like cursor speed. With math magic we can see which columns are most likely to influence the "yes" or "no" value in the column we are trying to predict.

Finally, as new data comes in, we can use the math to see how your interactions on the website predict whether you'll click the button. Totally made up.
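
A hedged sketch of that toy example in Python with pandas and scikit-learn; the column names, the synthetic data, and the model choice are all invented for illustration. It shows the shape of the workflow: a table of behavior columns, one target column, a fitted model, a look at which columns drive the prediction, and a prediction on new data.

```python
# Toy illustration only: invented columns, synthetic data, no real user information.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# The "spreadsheet": behavior columns describing activity on the site.
data = pd.DataFrame({
    "cursor_speed": rng.normal(1.0, 0.3, n),
    "time_on_page_s": rng.exponential(30.0, n),
    "scroll_depth": rng.uniform(0.0, 1.0, n),
})

# The column we want to predict: did the visitor click the button?
# The label is generated from the features so the model has a pattern to find.
logits = 2.0 * data["scroll_depth"] + 0.02 * data["time_on_page_s"] - 1.5
data["clicked"] = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

features = ["cursor_speed", "time_on_page_s", "scroll_depth"]
model = LogisticRegression().fit(data[features], data["clicked"])

# The "math magic": coefficients show which columns push the prediction
# toward "yes" (positive) or "no" (negative).
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")

# As new data arrives, the same model predicts whether the button gets clicked.
new_visit = pd.DataFrame({"cursor_speed": [1.2],
                          "time_on_page_s": [45.0],
                          "scroll_depth": [0.8]})
print(model.predict(new_visit))        # predicted class: 0 ("no") or 1 ("yes")
print(model.predict_proba(new_visit))  # probability of each class
```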

1

js1138-2 t1_j58af2p wrote

Am I allowed to say that I hate websites whose slow performance seems to be the result of processing something other than my menu choices?

I leave the site and never return. I return to sites that enable snappy searches for stuff.

1

boston101 t1_j58bkll wrote

Totally. Aren’t we doing the same math for feature elimination every day subconsciously?

Every decision involves some sort of quick analysis, and every decision has features that one subconsciously weighs to determine which decision to make.

For example, why did you choose the apple you picked? Did it have the shine or firmness you wanted without that ever fully entering your attention? You made this decision without even knowing.

1

js1138-2 t1_j58dcbh wrote

Here’s a clue. I do a lot of searches for hardware and parts. When I find a website with a good static home page, I bookmark it. When I find a site that wiggles too much on the home page, I leave immediately.

Hope you can capture that.

You have half a second to fully display the home page, or I'm gone.

After that I’m a bit more forgiving.

1

QTheLibertine t1_j57nw0v wrote

I did a study of all of human history and found that allowing the government to define misinformation leads to death on a massive scale and should be avoided at all costs.

−1

Delamoor t1_j589wzg wrote

I also did a study of human history and found that mob rule leads to death and suffering on a massive scale and should also be avoided at all costs.

Like, a government defining disinformation is also what protects you from 'Your tribe cursed my cousin, so we all came over here to murder half of you and enslave/rape the other half if we think they look good enough. This'll teach you to cast curses.'

Or even just 'our daughter is a witch. We need to kill her.'

2

nlewis4 t1_j55m77u wrote

It is far too late to stop or reduce the spread of misinformation online. Bad actors who knowingly spread misinformation hide behind "my free speech!!!" and "I'm being silenced!", and the target audience eats it up.

−3

thechinninator t1_j55x3gf wrote

More people than you think are reasonable but locked in an echo chamber because of where they happen to live. Yeah, this will make some people dig in further, but most of those are probably lost causes.

2

oOzephyrOo t1_j556d88 wrote

Sure, put the blame on platforms when lawmakers could toughen the laws and penalties on misinformation.

−15

[deleted] t1_j558j9t wrote

[removed]

11

resorcinarene t1_j55rbu9 wrote

What was misinformation a year ago that's a fact today?

−1

[deleted] t1_j55s15c wrote

[removed]

2

resorcinarene t1_j55v84u wrote

Speak plainly. What's the summary, in your words?

0

[deleted] t1_j55vkxo wrote

[removed]

1

_______someone t1_j55c58a wrote

That's called censorship. For the pros and cons of this practice, look it up. I ain't educating you on something you should have retained from your high-school history class.

6

whittily t1_j575iny wrote

No. Speech is zero-sum. Democratic societies have always had to create prioritization mechanisms and institutions so that public debate doesn't degrade into chaotic shouting matches.

1

mobrocket t1_j55k8pu wrote

It is. But censorship isn't always a bad thing.

Societies have to decide what should or shouldn't be censored.

I could easily argue that in today's post-truth world, where any idiot can find an audience, maybe we need more censorship.

−6

[deleted] t1_j55ngl7 wrote

[removed]

5

mobrocket t1_j578oce wrote

So you think child porn should be allowed... same logic.

Or graphic murder on all TV stations.

Or the IRS disclosing everyone's tax returns.

All that stuff is censored and protected because, as a society, we determined it's not acceptable for the public to have.

Maybe you should consider censoring your mind and its 5th-grade level of knowledge about censorship.

0

_______someone t1_j569vml wrote

The authority of a government to suppress the ideas that people express is always a bad thing.

2