mdjank t1_j34gvh4 wrote

Social media makes it easier for people to find their communities specifically because of the way statistical learners (algorithms) work. Statistical learners work by using statistics to predict the probabilities of specific outcomes. They match like with like. Regulating the functionality of statistical learners would require the invention of new math that supersedes everything we know about statistics.
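Here's a toy sketch of what "matches like with like" means in practice. Everything in it (the tag-overlap similarity, the field names, the data) is invented for illustration; real recommenders are far more elaborate, but the statistical core is the same: score candidates by similarity to what you already engaged with.

```python
def similarity(a, b):
    """Count shared interest tags between two items."""
    return len(set(a) & set(b))

def recommend(history, candidates):
    """Rank candidate posts by similarity to the user's history."""
    return sorted(
        candidates,
        key=lambda post: -max(similarity(post["tags"], h) for h in history),
    )

# A user who engaged with cat content gets more cat content.
history = [["cats", "memes"], ["cats", "birds"]]
candidates = [
    {"id": 1, "tags": ["politics", "news"]},
    {"id": 2, "tags": ["cats", "memes", "birds"]},
]
print(recommend(history, candidates)[0]["id"])  # → 2 (the cat post)
```

No new math is involved anywhere in that loop, just counting and sorting. That's the point.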

Regulation is easier said than done.

It is not possible to regulate how the algorithms work. That would be like trying to regulate the entropy of a thermodynamic system. John Nash won a Nobel Prize for his work on equilibria. Statistical learners solve Nash equilibria the hard way.

One thing people suggest is manipulating the algorithm's inputs. This only changes the time it takes to reach the same conclusions. The system will still decay into equilibrium.

Maybe it's possible to regulate how and where algorithms are implemented. Even then, you're still only changing the time it takes to solve the Nash equilibrium. I would love to see someone disprove this claim. Disproving it would mean the invention of new math that can be used to break statistics. I would be in Vegas before the next sunrise with that math under my belt.

Any effective regulation on the implementation of statistical learners would be indistinguishable from people just deleting their social media. Without the Statistical Learners to help people more effectively sort themselves into communities, there is no social media. These algorithms are what defines social media.

To claim that people wouldn't be able to find their communities without social media is naive at best. People were finding their communities online long before social media used statistical learners to make it easier. If anything, social media was so effective that other methods could not compete. It has been around for so long that it just seems like the only solution.

P.S. Your thinly veiled argumentum ad passiones isn't without effect. Still, logos doesn't care about your pathos.


mdjank t1_j32oxg0 wrote

I already explained how algorithms work elsewhere in this post.

Tailoring your own social media to work for you is possible. It would require disciplined responses directed by unbiased self-analysis. In other words, it's not bloody likely.

Then there's the question of limiting the dataset in your feed. You do not have direct control over the data in your feed. You can only control which people can publish to your feed.

You can cut people out of your feed with some level of success. The more people you cut, the less it is a "tool to keep you connected". It stops being social media.
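The distinction above can be made concrete. In this sketch (the feed structure and names are hypothetical, not any platform's API), blocking controls *who* can publish to your feed, but not *what* the remaining publishers post:

```python
feed = [
    {"author": "alice", "topic": "politics"},
    {"author": "bob", "topic": "politics"},
    {"author": "bob", "topic": "cats"},
]
blocked = {"alice"}

# Blocking filters by author only; the topic field is untouched.
visible = [post for post in feed if post["author"] not in blocked]
print([(p["author"], p["topic"]) for p in visible])
# → [('bob', 'politics'), ('bob', 'cats')] -- bob's politics post still gets through
```

Cut enough authors and `visible` shrinks toward an empty list, which is the "it stops being social media" endpoint.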

The only sure way to keep from seeing material on social media is to not look at social media. You remove the drunk from the tavern. Change your environment by removing yourself from it.


mdjank t1_j2ycwdp wrote

The way statistical learners (algorithms) work is by using a labeled dataset of features to determine the probability a new entry should be labeled as 'these' or 'those'. You then tell it if it is correct or not. The weights of the features used in its determination are then adjusted and the new entry is added to the dataset.
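The loop described above can be sketched as a toy online perceptron. This is an assumed, minimal illustration (no real platform's model), but it has all the named pieces: a prediction over 'these' vs 'those', a correctness signal, a weight adjustment, and the new entry joining the dataset.

```python
def predict(weights, features):
    """Label a new entry: 1 for 'these', 0 for 'those'."""
    score = sum(w * x for w, x in zip(weights, features))
    return 1 if score >= 0 else 0

def update(weights, features, label, lr=0.1):
    """Adjust weights only when the feedback says the guess was wrong."""
    if predict(weights, features) != label:
        sign = 1 if label == 1 else -1
        weights = [w + sign * lr * x for w, x in zip(weights, features)]
    return weights

dataset = []
weights = [0.0, 0.0]
for features, label in [([1, 0], 1), ([0, 1], 0), ([1, 1], 1)]:
    weights = update(weights, features, label)  # feedback step
    dataset.append((features, label))           # entry added to the dataset
```

Notice where your control sits: you pick the labels, the features, and the yes/no feedback. The weights, and thus how features get correlated with labels, are whatever the update rule converges to.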

The points you have control over are the labels used, the defined features, and decision validation. The algorithm interprets these things by abstraction. No one has any direct control over how the algorithm correlates features and labels. We can only predict the probabilities of how the algorithm might interpret things.

In the end, the correlations drawn by the algorithm are boolean. 100% one and none of the other. All nuance is thrown out. It will determine which label applies most and that will become 'true'. If you are depressed, it will determine the most depressed you. If you are angry, it will determine the most angry you.

You can try to adjust feature and label granularity for a semblance of nuance. This only changes the time needed to determine 'true'. In the end, all nuance will still be lost and you'll be left with a single 'true'.
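The nuance-loss argument in the last two paragraphs fits in three lines. A toy illustration (invented numbers, not platform code): the model can hold a nearly even internal probability, but the final decision is an argmax, which collapses everything to a single 'true'.

```python
probs = {"happy": 0.51, "sad": 0.49}  # internally, an almost-even mix of moods
label = max(probs, key=probs.get)     # the decision keeps only the winner
print(label)  # → happy -- the 49% sad simply disappears
```

Adding finer-grained labels just means the argmax runs over a longer list; it still emits exactly one winner.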

People already have the tools to control how their algorithms work. They just don't understand how the algorithms work so they misuse the tools they have.

Think about "Inside Out" by Pixar. You can try to be happy all the time, but at some point you're happy and sad at once. The algorithm cannot make that distinction. It's either happy or sad.


mdjank t1_j2y0ffi wrote

Gamification of self-improvement activities is its own industry. You can go buy a piece of software that already does that.

All social media can do is share your progress or lack thereof.

Think of it this way. You're not going to stop alcoholics by putting a salad bar in a tavern and charging people to eat their salad.


mdjank t1_j2xze5c wrote

There are major problems with a happiness algorithm.

First, how do you measure a person's level of happiness? A person's emotional state is not a metric in the system.

An algorithm can decide if a piece of media is uplifting but it cannot say if that media would produce the desired effect on an individual. It can only predict the media's effect on a group of individuals.

You can ask individuals about their mental state and measure changes after presenting stimuli. That introduces all the problems of self-reporting, e.g. people lie.

Second, a solution to happiness already exists. It's called "delete your social media". Any "happiness algorithm" has to compete with this as a solution.

"Delete your social media" is such an effective solution that Social Media will lie to you to make it seem incomprehensible. It tells you "social media is the only way to be connected with others" and "you're 'in the know' because you use social media and that makes you special".