futurespacecadet t1_j2y0sme wrote

Well, I don’t think there is some magic happiness algorithm; I’m just talking about the concept of one. What would create happiness in algorithm form? I think control.

I think control over what people see is pivotal to how they interact, and I think we need to give control back to the users.

So maybe when you sign up, you can choose what you want to see. If you do want politics, maybe you can choose the level of politics you see. Do you want to be challenged, or do you want to be in a bubble? I mean, that in itself could cause problems.

But I also think we don’t need any of that. What people really liked was that Facebook used to just be about connecting with your friends: purely a communication tool, before it became bloated with trying to be everything else, like a marketplace, an advertisement center, pages for clubs, etc.

It’s the same thing that’s happening with LinkedIn. It used to be effective as just a job search tool, and now it is bloated with politics, which I don’t care about. I would rather have more services that each do one specific thing than one service that tries to do it all, and I think that’s where people are getting overwhelmed and depressed.


mdjank t1_j2ycwdp wrote

The way statistical learners (algorithms) work is by using a labeled dataset of features to determine the probability that a new entry should be labeled 'these' or 'those'. You then tell it whether it was correct. The weights of the features used in its determination are adjusted, and the new entry is added to the dataset.

The points you have control over are the labels used, the defined features, and decision validation. The algorithm interprets these things by abstraction. No one has any direct control over how the algorithm correlates features and labels. We can only predict the probabilities of how the algorithm might interpret things.
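A minimal sketch of the loop described above (my own toy example, not any real platform's code): a learner scores a new entry from weighted features, you validate the decision, and the weights are nudged so the next entry lands differently. The label names and learning rate here are assumptions for illustration.

```python
# Toy online learner: score an entry, get told if the label was right,
# adjust feature weights, and add the entry to the labeled dataset.

weights = {}   # feature -> weight, adjusted after each validated decision
dataset = []   # labeled entries accumulate here

def score(features):
    # Weighted sum over the entry's features.
    return sum(weights.get(f, 0.0) * v for f, v in features.items())

def update(features, predicted, actual, lr=0.1):
    # If the prediction was wrong, shift weights toward the correct label.
    if predicted != actual:
        direction = 1.0 if actual == "these" else -1.0
        for f, v in features.items():
            weights[f] = weights.get(f, 0.0) + direction * lr * v

def classify_and_learn(features, true_label):
    # Label the entry, validate the decision, then grow the dataset.
    predicted = "these" if score(features) >= 0 else "those"
    update(features, predicted, true_label)
    dataset.append((features, true_label))
    return predicted
```

Note that your only levers are exactly the ones named above: which labels exist, which features are defined, and the yes/no validation signal; the weight adjustments themselves are out of your hands.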

In the end, the correlations drawn by the algorithm are boolean: 100% one and none of the other. All nuance is thrown out. It will determine which label applies most, and that becomes 'true'. If you are depressed, it will determine the most depressed you. If you are angry, it will determine the most angry you.

You can try to adjust feature and label granularity for a semblance of nuance. This only changes the time needed to determine 'true'. In the end, all nuance will still be lost and you'll be left with a single 'true'.

People already have the tools to control how their algorithms work. They just don't understand how the algorithms work so they misuse the tools they have.

Think about "Inside Out" by Pixar. You can try to be happy all the time, but at some point you end up happy and sad at once. The algorithm cannot make that distinction. It's either happy or sad.