
whittily t1_j55x1dr wrote

No algorithmically curated content feed can be content neutral. Every design choice affects what you see and comes with unintended curatorial effects. A "neutral feed" is an oxymoron.

20

Deep90 t1_j57iczt wrote

It's also problematic because not all users are actually people.

The "Every user is equal" model falls apart when I create 10 bot accounts against your single account.


A truly neutral platform would give my posts 10x the reach.
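A toy sketch of the arithmetic (the function and numbers are invented, assuming a naive one-account-one-unit-of-reach model):

```python
# Toy model of reach under a naive "every account is equal" rule.
# All names and numbers here are hypothetical.

def reach(amplification_per_account: int, accounts: int) -> int:
    """Each account counts equally, no matter who controls it."""
    return amplification_per_account * accounts

you = reach(1, accounts=1)   # one real person, one account
me = reach(1, accounts=10)   # one real person, ten bot accounts

print(me / you)  # 10.0 -- "equal" treatment hands me 10x the reach
```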

2

RonPMexico t1_j55xnm7 wrote

It absolutely can be content neutral, and it normally is. The algorithm looks at engagement patterns, and content is usually only considered after the optimization has occurred.

−8

whittily t1_j561fn5 wrote

And then it surfaces content that is highly engaged, like sensationalized misinformation. Content-neutral decisions never have content-neutral effects.
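To make that concrete, here's a minimal sketch (the posts and click counts are invented) of how a ranking rule that never reads content still decides which content wins:

```python
# Hypothetical feed: the ranker sees engagement numbers, never the text.
posts = [
    {"text": "City council passes routine budget", "clicks": 120},
    {"text": "SHOCKING lie about the city council!!!", "clicks": 900},
]

# The "content-neutral" rule: sort purely by engagement.
feed = sorted(posts, key=lambda p: p["clicks"], reverse=True)

print(feed[0]["text"])  # the sensationalized post tops the feed anyway
```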

The town square can only accommodate a limited amount of speech. Democratic societies have always had an active role in deciding what kind of speech is prioritized and what mechanisms should be used to do so in a way that’s fair and non-censorious. If you go to a public hearing, is it just a crowd of people shouting over each other? Do you only get to hear from whoever is shouting loudest? No, obviously that would be unproductive and stupid. The digital town square isn’t different.

Your statement also weirdly puts this one design choice in a category apart from literally every other choice a company makes when designing algorithms. Companies don’t work from first principles to decide what inputs should feed an algorithm; they test changes and look to see whether they produce the desired outputs. But for this one aspect, you expect them to design in a black box and never respond to the actual effects on the platform. It’s just not engaging with the reality of how these systems get built and optimized. That loop looks roughly like the sketch below.
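(A sketch, not any real platform's pipeline; `measure_engagement` is a stand-in for live user metrics:)

```python
# Sketch of metric-driven design: ship whichever ranking variant
# moves the target metric, not whichever is "neutral" on paper.
import random

def measure_engagement(variant: str) -> float:
    """Stand-in for an A/B test returning average engagement."""
    return random.uniform(0.0, 1.0)  # real systems use live user data

variants = ["rank_by_recency", "rank_by_clicks", "rank_by_shares"]
shipped = max(variants, key=measure_engagement)
print(f"shipping: {shipped}")  # the winner is chosen by its effects
```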

4

RonPMexico t1_j562sk7 wrote

Prioritizing content is censorship.

I believe that it is fine if platforms want to censor content, but if they are going to take responsibility for some of the speech on their platform, they should have to be liable for all speech on their platform.

If we are going to allow nameless tech employees to determine who gets the megaphone in society with no public accountability, we should at least be able to use litigation to keep the size of their megaphone in check.

3

burnerman0 t1_j57dr0t wrote

> Prioritizing content is censorship

What exactly do you think these algorithms are doing if not prioritizing content? By your logic every platform is practicing censorship simply by existing.

0

RonPMexico t1_j57e7t5 wrote

The platforms generally prioritize posts not by content but by interactions. The algorithm doesn't know what message any individual post conveys; it only knows that the post will lead to a desired outcome (clicks, shares, likes, what have you).
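Schematically, something like this (a sketch with invented weights; real rankers are far more elaborate):

```python
# Sketch of an interaction-only scorer: the score is a function of
# engagement counts, and the post's text never enters the formula.
def score(post: dict) -> float:
    return (1.0 * post["clicks"]
            + 3.0 * post["shares"]
            + 2.0 * post["likes"])  # note: post["text"] is never read

posts = [
    {"text": "the message here is irrelevant", "clicks": 50, "shares": 5, "likes": 20},
    {"text": "only the numbers matter", "clicks": 10, "shares": 30, "likes": 5},
]
posts.sort(key=score, reverse=True)  # ranked by predicted engagement alone
```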

3

bildramer t1_j566unq wrote

There's "content-neutrality" as in not ranking posts by date, or not preferring .pngs to .jpgs, and then there's "content-neutrality" as in not looking into the content and determining whether a no-no badperson with the wrong opinions posted it. The first is usually acceptable to anyone, and not what we're talking about. And you can distinguish the two; there's a fairly thick line.
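In code terms, the line looks something like this (field names invented for illustration):

```python
# Two rules that both get called "content-neutral".
posts = [
    {"author": "alice", "text": "hello", "timestamp": 1700000100},
    {"author": "bob", "text": "world", "timestamp": 1700000000},
]

# Kind 1: the rule never reads the content -- only metadata.
by_date = sorted(posts, key=lambda p: p["timestamp"], reverse=True)

# Kind 2: the rule inspects who posted it / what it says.
blocked = {"bob"}  # the "no-no badperson" list
filtered = [p for p in posts if p["author"] not in blocked]
```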

−1

whittily t1_j56da21 wrote

I’m going to move past your ridiculous strawman and just say that this is not how algorithm design happens. You are just not engaging with reality. Every design choice is evaluated on its effects on user behavior. To insist that we refuse to evaluate whether an algorithm’s design degrades the user experience by forcing lies into their feeds is absurd.

−2

RonPMexico t1_j56fdea wrote

The problem is that when a person makes a value judgment on content and uses it to promote (or not) that speech, it is censorship. If a platform is going to censor speech, they should be accountable for that speech.

4

whittily t1_j56y9xb wrote

  1. It’s unavoidable. You are demanding something that is impossible. Every decision requires a value judgement, especially decisions that attempt to avoid a value judgement. In this case, we should value truth and accuracy.

  2. The platform shouldn’t be responsible for these decisions. We should democratically determine how we prioritize speech in a crowded public sphere, just like every democratic society has done for hundreds of years. Pretending that every society has always allowed infinite, unfettered speech in public forums is ahistorical and also a little stupid. Society would be chaos, just like corporate-controlled digital public spaces are today.

  3. Finally, no, there is such a thing as truth and a lie. Sometimes it’s complex, but that doesn’t mean it is impossible to determine. Democracy only works when strong, trusted, public, unbiased institutions mediate information between experts/information producers and the public. The introduction of digital media systems without institutional mediation is breaking down the public’s access to true, relevant information and damaging our ability to solve problems politically.

0

RonPMexico t1_j574jrw wrote

You are unapologetically anti-free speech. What a stance.

3

whittily t1_j57728e wrote

No. Free speech =/= unfettered, unmoderated speech, and it has never meant that in the history of the US. No society operates that way.

1

RonPMexico t1_j577huj wrote

Ok, is the sentence "a man can not become a woman" true or false? And how would that be decided in your world, in a way that does not infringe on free speech?

3