Depression_God t1_j9yf45a wrote

Of course it is biased. It's just a reflection of the culture that created it.


Hunter62610 t1_j9yp13e wrote

I mean, it could also be that the people who made it biased the program.


Depression_God t1_j9z4rs1 wrote

Obviously they did. People made everything about it. The question is to what extent they did it deliberately.


mutantbeings t1_ja5bwou wrote

Nah that’s not super important. In the tech industry we all know that unconscious bias affects the tech we build, it’s a super important consideration whether or not it’s conscious. It’s one reason why building a culturally diverse team matters: it minimises the intensity of unconscious bias. There’s actually a lot of conscious things you can do to reduce it but it’ll never go away completely.


whatsup5555555 t1_ja5jmyn wrote

So you’re in favor of half of your “team” having a different political leaning than your own? It’s easy to say you want a culturally diverse team and another thing to actually assemble one. It’s easy to pick people on surface-level features like skin color, but it’s much more difficult to balance political ideology, hence the clear bias the AI already exhibits. The tech industry is already heavily left-leaning, but I guess no one cares as long as your bias is the one winning. So keep fighting for your skewed view of equality!


mutantbeings t1_ja66q0y wrote

Not quite. The tech industry has historically been very, very conservative. It’s a very recent development that this stuff is discussed more (it wasn’t until probably the late 2000s or early 2010s, with the explosion of social media, that the tech industry became less conservative).

Assembling a diverse team isn’t rocket science; the mistakes a lot of tech teams still make tend to be comically bad, like an all-white team or an all-male team. Those are still very common.

Obviously those teams will have huge blind spots in lived experience. Even a single person added to that team from a very different background covers off a huge gap there, and each extra person added is a multiplier of that effect to some degree.

You’re dead right to point out that diversity is as much about less obvious factors like class or culture though. And that’s definitely harder.

I think it’s a huge leap to say the tech industry has some left-wing bias though. I don’t think you can neatly conclude that from one chart, and it doesn’t match up with my 20 years of working in tech, including on AI.


gastrocraft t1_j9zv07a wrote

They didn’t make everything about it. That’s not how LLMs work.


TheRidgeAndTheLadder t1_j9zxlou wrote

Go a bit further. Who generated the training data?


Spire_Citron t1_ja0457j wrote

The training data is massive and usually not carefully curated because they need so much of it.


starstruckmon t1_ja1102i wrote

He's talking about the human preference data used for RLHF fine-tuning (which is what turns GPT-3 into ChatGPT). It's not really that massive.
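For anyone curious, that preference data is typically used to train a reward model with a pairwise (Bradley-Terry) loss. A minimal sketch, with illustrative names and numbers, not OpenAI's actual code:

```python
import math

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    """Negative log-likelihood that the human-preferred response
    outranks the rejected one under a Bradley-Terry model."""
    margin = score_chosen - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# When the reward model already ranks the preferred answer higher,
# the loss is small; when it ranks them backwards, the loss is large.
print(preference_loss(2.0, 0.0) < preference_loss(0.0, 2.0))  # True
```

The point relevant to the thread: whatever systematic preferences the human labelers hold get baked directly into this objective.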


gastrocraft t1_ja02120 wrote

That still doesn’t mean that humans programmed everything the LLMs do.


TheRidgeAndTheLadder t1_ja02nxi wrote

It kinda does.

We defined the training data, the utility function, etc


gastrocraft t1_ja03dv5 wrote

By that definition, when AGI becomes a thing you’ll be saying we programmed every aspect of it too. Not true.


TheRidgeAndTheLadder t1_ja0e7lb wrote

You're missing my point.

At the end of the day, CNNs fit curves to data.

That data summarises "us". The world we have shaped. All our fears, dreams, and biases.

It is inevitable, given such data, that these systems are as flawed as us.
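To make that concrete with a toy fit (NumPy, invented numbers): a model trained on tilted data faithfully reproduces the tilt.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 1000)
# The "world" generating this data has a built-in tilt (slope 2, offset 0.5):
y = 2.0 * x + 0.5 + rng.normal(0.0, 0.01, 1000)

# Fitting a curve recovers that tilt; the model has no way to know
# whether the tilt is a bias or a feature.
slope, intercept = np.polyfit(x, y, 1)
print(round(slope, 1), round(intercept, 1))  # → 2.0 0.5
```

Swap the linear fit for a billion-parameter network and the mechanism is the same: the model learns whatever regularities, flaws included, the data contains.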


mutantbeings t1_ja5c86q wrote

Yep. And one reason it’s important we build culturally diverse teams that will minimise the intensity of bias. This is common knowledge in the tech industry already because it shows up in all kinds of software dev and there are some really embarrassing horror stories out there about bias from teams lacking any diversity at all


TheRidgeAndTheLadder t1_ja5dax7 wrote

>Yep. And one reason it’s important we build culturally diverse teams that will minimise the intensity of bias.

How can the makeup of the team impact the data?

>This is common knowledge in the tech industry already because it shows up in all kinds of software dev and there are some really embarrassing horror stories out there about bias from teams lacking any diversity at all

The phrase is garbage in, garbage out. Not "garbage supervised by the correct assembly of human attributes"


mutantbeings t1_ja5eflp wrote

Your team decides what data to even train it on. There will be sources of data that a culturally diverse team will think to include that a non-diverse team won’t even know exists. This is a very well known phenomenon in software dev; that diverse teams build better software on the first pass due to more varied embedded lived experience. Trust me I’ve been doing this 20 years and see it all the time as a consultant, for better or worse.


TheRidgeAndTheLadder t1_ja5v71q wrote

>Your team decides what data to even train it on. There will be sources of data that a culturally diverse team will think to include that a non-diverse team won’t even know exists.

I'm a lil confused, are you saying that culturally diverse data (CDD) will/can be free of the biases we are trying to avoid?


mutantbeings t1_ja65i06 wrote

No, but if you have 5 identical people with the same biases, obviously those biases and assumptions will show up very strongly. Add even one person and the areas where blind spots exist no longer overlap perfectly. Add one more .. it decreases even more, and so on.

But there’s never a way to eradicate it in full. All you can do is minimise it by bringing broad experience.


TheRidgeAndTheLadder t1_ja6646o wrote

Is that really all we can do?


mutantbeings t1_ja67lay wrote

It’s the best thing you can do to get it as close as possible on the first pass, yeah.

But software is iterative and a collaborative process; generally any change to software goes through multiple approval steps; first from your team, then gets sent out to testers who may or may not be external, often those testers are chosen specifically for their lived experience and expertise serving a specific audience, who may themselves be quite diverse. Eg accessibility testing to serve people living with disabilities. Content testing is also common when you need to serve, say, migrant communities that don’t speak English at home.

Those reviews come back and you have to make iterative changes. That process is dramatically more expensive if you get it badly wrong on the first pass; you might even have to get it reviewed multiple times.

Basically, having a diverse team that embeds that experience and expertise lowers costs and speeds up development, because you then need to make fewer changes.

On expertise vs experience: you can always train someone to be sensitive to the experience of others but it’s a long process that takes decades. I am one of these “experts” and I would never claim to have anything like the intimate knowledge of the people I am tasked with supporting as someone who actually lives it; there’s no replacement for that kind of experience by default.

Ultimately you will never get any of this perfect, so you do what you can to get it right without wasting a lot of money; and I guarantee you non-diverse teams are wasting a tonne of money in testing. I see it a lot. When I was working as a consultant it was comically bad at MOST places I went, because they had male-dominated teams where they all stubbornly thought they knew it all; zero self-awareness or ability to reflect honestly. Teams like that were, unfortunately, stereotypically bad.


just_thisGuy t1_ja01j8q wrote

Maybe making fun of disabled people is worse than making fun of wealthy people, maybe disabled people will get actually upset and have mental issues if you make fun of them? Maybe even if you make fun of a wealthy white person they will soon forget about it and continue their trip to a private island on their private jet? Maybe making fun of gay people has a history that includes discrimination and abuse, even jail and murder? Maybe making fun of white people does not have the same history? Maybe ChatGPT is actually right on some of those? Maybe if you have all the power people should be able to make fun of you? Maybe if you have no power at all people should not be able to make fun of you?


Frumpagumpus t1_ja07k0y wrote

> Maybe making fun of gay people has a history that includes discrimination and abuse, even jail and murder? Maybe making fun of white people does not have the same history

depends on where you live... there are some African countries where discrimination and abuse of white people is definitely part of modern-day history, though it may not be politically correct to say it in the United States. An eye for an eye makes the whole world blind (which is kind of the implication of your humor ethics)

also, while we are talking, a fun fact: most capital investment goes into capital turnover, replacing stuff. So most wealth that exists today was created in the recent past and not as the result of slave labor or something (your ethics might not make as much sense as you think, because entropy is a thing)


nocturnalcombustion t1_ja0jdj2 wrote

Maybe hate speech is okay if it’s the people I don’t like. Heh jk, sort of.

To me, there are some meaningful, if not crisp, distinctions:

  • groups that are born that way vs groups where members control their membership.
  • groups where members can vs. can’t conceal their membership in the group.

Beyond that, I don’t like the idea of asymmetrical value judgments about when hate speech is okay. I could be missing some important distinctions though.


zero0n3 t1_ja0tws6 wrote

I think this is where they were trying to go but couldn't really connect the dots fully.

Like hateful speech about rich people vs. black people. It's clear why one is OK and the other isn't (one is hate toward a group based on attributes they can't change; the other isn't based on an innate attribute).

Unrelated: my new thing to fight white supremacy is:

“Hey; 20 years ago your racist white ass was saying the ‘blacks’ need to fix their own race and that’s how you fix racism. How about you take your own advice and fix your own white asses”


whatsup5555555 t1_ja33yke wrote

You are a complete idiot. That tiny pea inside your nearly empty skull tells you that it’s ok to discriminate against a particular race of people. So, just_thisGuy, go ahead and say this next line out loud: “I’m a racist”. What fuck tards like yourself, who are completely devoid of any ability to process the garbage they consume from mainstream media, don’t realize is that once society tolerates discrimination or racism based on specific criteria, it opens the door for more discrimination and hate based on whatever criteria the masses excuse at the moment.


mutantbeings t1_ja5cn81 wrote

And in this comment you used two discriminatory ableist slurs. So yep. I guess I’ll know who to ignore based on their demonstrated lack of inclusivity. Can’t make this shit up


whatsup5555555 t1_ja5hqkt wrote

Hahahahahah, “can’t make this shit up”. Please elaborate on how “idiot” or “fuck tard” is discriminatory to a group of people. People like you are an absolute joke to everyone that doesn’t exist in your overly sensitive liberal bubble of extreme intolerance to any opinions outside your clown bubble of acceptance. So again I say: hahahah, you are a complete joke. Go cry in your safe space and continue to enjoy the smell of your own flatulence.


ArtistVinnyDellay t1_ja0bhmk wrote

Nope. Until there is equality for everyone, there will be equality for no one.


zero0n3 t1_ja0sys7 wrote

Yeah nuance and context mean nothing.

It’s why you’ll be destined to stay an idiot.


Kinexity t1_j9yyi6o wrote

That's true, but assuming they can somehow tweak flagging rates (as in, it's not just that they fed some flagging model a bunch of hateful tokens and the rest is automatic), then it's pretty fucked up that there are differences between races and sexes.

Obviously this rests on an assumption, and it shows that they should have been more transparent about how flagging works.


Depression_God t1_j9z6e93 wrote

The only problem we can be certain of is the lack of transparency. Regardless of which direction or how strong the bias is, they should always be transparent about how it works.


sommersj t1_ja3fspv wrote

It's an issue Google itself is facing. It keeps firing its AI ethicists who are complaining about the bias being put into these programs.


mutantbeings t1_ja5bldb wrote

And this is THE most important point we all need to take home about AI: its values always reflect its creators’.

And the creators tend to be greedy capitalist corporations, so I expect this bias chart to change substantially as further tweaks are made, and not for the better.


Scarlet_pot2 t1_j9xy5ar wrote

The "Fat people" need to be protected! lmao. They're pretty high on the protected list.


mutantbeings t1_ja5cvhp wrote

I mean you’re kinda just showing that you think it’s cool to discriminate against people based on appearance? Seriously can’t make this shit up


No_Ninja3309_NoNoYes t1_j9yluod wrote

This is not a very scientific way to measure bias. You need control groups and some way to account for randomness, context, and word ambiguity.


luffreezer t1_j9xts7d wrote

This is just a mirror of who gets the most hate speech.

It says more about human discourse than it says about the AI.

Edit: here is a small paragraph from the conclusion of the Article that I think is important to keep in mind:

«It is also important to remark that most sources for the biases reported here are probably unintentional and likely organically emerging from complex entanglements of institutional corpora and societal biases. For that reason, I would expect similar biases in the content moderation filters of other big tech companies.»
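The measurement behind charts like this can be sketched in a few lines: template one hateful sentence over many group names and compare flag rates. The classifier below is a trivial keyword stand-in I made up, not OpenAI's actual moderation endpoint:

```python
def toy_moderation(text: str) -> bool:
    """Stand-in for a real moderation classifier; the keyword list is invented."""
    flagged_terms = {"disabled people", "black people"}
    return any(term in text for term in flagged_terms)

template = "I hate {group}."
groups = ["disabled people", "black people", "wealthy people", "republicans"]

flags = {g: toy_moderation(template.format(group=g)) for g in groups}
for group, flagged in flags.items():
    print(f"{group}: {'flagged' if flagged else 'not flagged'}")
```

Because the sentence is held constant, any difference in flag rate comes entirely from the group name, which is exactly the asymmetry the article is measuring.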


Baturinsky t1_j9y8qxh wrote

You mean OpenAI's model was trained on texts that had way more anti-disabled hate than anti-Republican hate? Where did they find them?


luffreezer t1_j9yo44f wrote

It is the whole internet that is like that. As I said, it is a reflection of our society:

You will never find people insulting "normal weighted people" or "people without a disability". So it is not surprising that the model does not perform well in those areas.

In the US, saying something is "socialism" can even be interpreted as a criticism, so I am not surprised it flags more left-wing things than right-wing ones.


Spire_Citron t1_ja06ja2 wrote

It's not necessarily just the amount but also the type of hate.


Moist-Question t1_ja0fp45 wrote

Likely because there is a larger volume of hate content for disabilities than for republicans.


LightVelox t1_j9ynlb6 wrote

4Chan is the only place i can think of where you wouldn't get instabanned for anti-disabled hate, but considering most models are trained on Reddit it would make sense for it to be extremely biased to the left


pnut-r-bckl t1_j9z6ycg wrote

So by your theory, if I go on to Twitter right now, I'm going to see pages and pages of hate speech against black people, but almost nobody saying anything about the rich?

Maybe you should rethink.


taweryawer t1_ja109yc wrote

>This is just a mirror of who gets the most hatespeech.

LMAO, you can't be serious that disabled people get more hate than rich people, left-wingers, right-wingers, gays, and so on. I've seen tons of homophobia and political hate from both sides of the spectrum, but I've never seen hate towards a disabled person.


7734128 t1_j9xlx57 wrote

Yeah "people without a disability" truly need protection. Well done.


Depression_God t1_j9yfsrz wrote

Does any group truly need protection from words any more or less than any other group?


Above_Everything t1_ja1q7ja wrote

Yes you fucking moron, there are groups that have been wrongfully targeted and abused for centuries


Shamwowz21 t1_ja8u70v wrote

You mean every group to have ever existed? Or are we only going back far enough to benefit specific groups?


UlfarrVargr t1_ja9ruga wrote

I knew you were cringe af man. I don't give a shit about whatever happened for centuries, get out of here with these "protections". I'll target whoever I want, free speech baby.


LightVelox t1_j9ylyhp wrote

Words kill 😔 unless you're privileged in which case you are immune


[deleted] t1_j9x02q8 wrote



Scarlet_pot2 t1_j9xyhv6 wrote

Call me anti capitalist or whatever, but I'm not upset OpenAi isn't "protecting" wealthy people. I mean, pretty much every religion says greed and wealthy people are pretty bad. There are common ideologies like socialism, communism, Marxism that critique greed and the wealthy.

To me, it's a good sign that AI isn't being used to enforce wealthy worship.


LightVelox t1_j9ym4wq wrote

Well, it does make sense for it to be "against" rich people because of exactly what you said, but it having a leftist bias when in theory people today are pretty much evenly divided is very suspicious


IcebergSlimFast t1_ja01xtn wrote

If you actually read through the chart, you’ll recognize that there’s not a heavy “left-wing bias” - e.g. “democrats” are less protected than “rightists” and “right-wingers”; meanwhile “liberals”, “leftists”, “right-wing people”, and “evangelicals” all rank around the same.

Overall, the model clearly goes further to protect innate characteristics - especially those most commonly targeted by hateful rhetoric (disabled people, Black people, gays and transgender people).


LightVelox t1_ja08fu4 wrote

"Left-winger" is literally 21 positions above "right-winger".


zero0n3 t1_ja0uy3j wrote

People are not “evenly divided” these days.

Polls both domestic and international prove the opposite, unless you want to include say NK and China (and even then China may be authoritarian, but have plenty of social programs)


milic_srb t1_j9ytkgo wrote

I mean I think most people agree that making bad content about Republicans (or Democrats) is much less bad than making bad content about disabled people or some other minorities.

And like especially for wealthy people, why would it even need to have a protection against them, they are not "endangered".

I thought the AI had some biases, but looking at this chart it seems pretty balanced to me. It "protects" both people of color and white people, both gay and straight, etc. Yes, the protection isn't equal, but it's close enough that it could be attributed to societal biases.


accsuibleh t1_j9yvcyz wrote

Wealthy, republican, right-wingers, conservatives = Choices, not oppressed.

Disabled people, blacks, Asians, homosexuals = Not choices, historically oppressed.

Why does it come across as surprising that it restricts racism and homophobia more than insults aimed at freely held beliefs?

Political ideology is not and should not be a protected class in any form. Economically, the wealthy can take care of themselves, while poorer people are vulnerable to their whims. Racially, a cursory glance at history and one can easily see why the list is structured this way. Ethnically, similar to the above.

This is not left-leaning. This is basic common sense. You can't be a racist or a bigot, and historically speaking this list seems to mostly reflect common and established bigotries.


taweryawer t1_ja0zf9e wrote

What about the almost 2x difference between "men" and "women"? You are only comparing the lowest and the highest.

And you have to be blind not to see how it's biased. It's an AI, not a person, and it's a hate content filter; it shouldn't differentiate between the subjects of hate, because any HATE is still HATE. That is its job. You are trying to justify the bias, but it shouldn't be biased in the first place.


BahamutMael t1_ja1jh6t wrote

Literally looking at the graph for a few minutes shows what you said is not true.
Left-wingers are above men.
Fat people are above straight people and the majority of European nationalities.


Agile_Bee7787 t1_ja0uf1p wrote

It seems like everyone in this sub wants to live in some sort of neo-feudal corpo-fascist hellscape.


Above_Everything t1_ja01547 wrote

No such thing as common sense


accsuibleh t1_ja042yt wrote

Sure there is.

- Don't jump off a tall building.

- Don't submerge yourself in water until you are dead.

- Don't stand on top of a hill with a metal pole during a thunderstorm.

- Don't stick your hand into a fire.

Among many more I don't care to list. Just because there is the occasional person who defies common sense doesn't mean it doesn't exist.

The majority of people are not racist or bigots. It is common sense not to be one. Only fools, idiots, or malevolent people are racist or bigots.


Quail-That t1_j9ygo3k wrote

To be fair, the things it feels the yuckiest about are inherent qualities and not political positions (except being fat).


The13aron t1_j9ylofm wrote

If it's based off collective data, then this is the opinion of the statistical majority. Humanity has a clear left wing bias, since right wing bias is just indignant hypocrisy at its core.


alfor t1_ja1ro9c wrote

> wing bias is just indignant hypocrisy

Being on the right is associated with traditions, self-responsibility, stability.

There are problems and qualities on both sides.

Societies too much on the left end up in famine and genocide, too much on the right end up in wars and genocide also.

Read Atlas Shrugged if you want to understand the other side of the equation.


> Humanity has a clear left wing bias

The right was mostly silenced out of the TV/internet by media/big tech that are very left leaning.


zero0n3 t1_ja0uf67 wrote

This isn’t “making fun of” this is targeting “hate speech”

I’d love to know what “hate speech” towards rich people looks like.

Disagreement with a Republican isn’t hate speech, no matter what they try and say. Calling a black person the hard R is absolutely hate speech.


[deleted] t1_ja0vlh9 wrote



zero0n3 t1_ja0vsnw wrote

I have no opinion because it’s irrelevant in this discussion if you actually understood nuance and context.

The first amendment doesn’t protect you or me when we say hateful things towards another person or group of people. It protects our freedom of speech when saying negative things about our government.

Jesus fuck.


[deleted] t1_ja0w4f4 wrote



zero0n3 t1_ja0wkb7 wrote

You are arguing in bad faith.

FL is banning books and classes based on what they teach. We are already doing the very thing you say we shouldn’t be doing.

The difference here is FL is banning books that talk about the bad things Americans did in history or about scientific things they don’t agree with, whereas OpenAI is suppressing hate speech and disinformation like “the Holocaust isn’t real”.

It’s extremely obvious the differences here… and as such you are continuing to argue in bad faith.

Block it is.


Above_Everything t1_ja011b0 wrote

It’s not what’s being said though, language is important. The top tends to use adjectives as nouns (blacks, Mexicans, etc) while the bottom is just people that happen to be X. Very different


gegenzeit t1_j9zi9iv wrote

No. According to OpenAI, and only if the methodology behind this is sound, it means the filter considers content more likely to be meant as hateful when it is about black people than when it is about wealthy people.

That is a HUGE difference from how you interpreted it.


Striking_Ad1492 t1_j9y1hsj wrote

It may have a left wing bias, but I’m just wondering: what is the problem with what you mentioned? Or are those just examples you’re bringing in for the sake of the argument?


Stegosaurus5 t1_j9yjpm0 wrote

That's not a "left wing bias" though... That's just the nature of "hate speech." Hate speech is about a history of oppression. These aren't filters to prevent mean things from being said, they're filters to stop oppressive things from being said.

None of things you listed: rich, republican, right-wing, or conservative, have any history of being oppressed. You can "hate" them, but you can't engage in "hate speech" at them.

Also, protecting right wingers is comparable to protecting... left wingers, not disabled people, black people, Asian people, and homosexual people. You're kinda telling on yourself, friend.


LightVelox t1_j9yn4hu wrote

Well, for your last sentence, left wingers are still considerably higher up in the list than right wingers, even if leftists and rightists are somewhat close


Atlantic0ne t1_ja0akwv wrote

You can literally google this and see that white people commit fewer hate crimes than black people in the United States.

Your entire post and reply are biased.


zero0n3 t1_ja0v3di wrote

Absolutely false.


Atlantic0ne t1_ja0xz6k wrote


They’re about 13% of the US population statistically and account for ~21% of hate crime occurrences by offender. Conversely, 56.1% were white, and well, I’m sure you know the percentage of white people in the US.


zero0n3 t1_ja0z0sa wrote

Though I applaud you for having a source, the context and nuance of these reports is lacking without going through them.

Are they tagging a black man assaulting another black man as a hate crime?

How many hate crimes from white people go unreported ? White cop covers up for white suspect.


How many hate crime charges filed vs dropped and what were the race breakdowns.

All I’m trying to drive at is that this stat may not be the best to use to get a true representation.

Doesn’t pass the eye test. How many instances of a black cop shooting a white guy in the back running away vs. a white cop shooting a black guy in the back?

Or how many black people are shooting up a group of white people because of their whiteness vs a white guy running through a crowd because they were at a BLM protest?


Atlantic0ne t1_ja13bn8 wrote

Lol. Hate crimes have an actual definition to them, it’s not just a guess. Google it if you want. This is the best data we have.

You can’t just say “eh I don’t believe data, so you’re probably wrong”.

I also recommend not going around confidently saying “that’s not true!” when you clearly haven’t researched a topic, u/zero0n3. Research first, always.

Anyway, have a good day. You can choose not to believe it if you want.


alfor t1_ja1s60y wrote

Search the data yourself and show us what you find.

I was shocked at what I found.

Not only that, it’s going to get worse. The narrative of oppression is creating a desire for, and acts of, "revenge".
What creates a better society is the opposite: personal responsibility, accountability. The media is getting more views by destroying society.


Stegosaurus5 t1_ja1skmx wrote

Wtf are you talking about the "narrative" of oppression? Go look up "redlining" you racist-ass dumbfuck. Fuck outta here with this ignorant shit.


TheDividendReport t1_j9xyo5y wrote

Here comes an anecdotal statement: I, a leftist, have never used a chatbot to talk up some sense of hatred or disbelief about conservatives.

The first thing that finally made the tech "click" for my Republican family member? Using the chatbot to make a comical tirade letter to his senator about immigrants taking jobs and parasites using welfare.

The following statement is uneducated but I'd stand by it on a gut feeling: if you are coding a system and expecting one group of people to be more hateful than another, to put in restraints for x vs y, it makes a lot more sense to account for the people not taking LSD and mushrooms.


turnip_burrito t1_j9xyvvr wrote

> it makes a lot more sense to account for the people not taking LSD and mushrooms.

Sorry, I don't understand this part. What do you mean here?


TheDividendReport t1_j9xzwm5 wrote

Ideologically speaking, leftists have been shown to have empathic moral motivations (harm avoidance, fairness), while conservatives ground their moral foundations in in-group loyalty and deference to authority.

In other words, the way these two groups view people not like themselves is very different. Whenever I see a leftist talking down about a conservative person, it is because of perceived bigotry. It is a political frustration they view as the source of harm/exploitation/power imbalance.

However, most times that I see a conservative talk down on other groups, it is because of immigrants, this group of people, that way of life, or a perceived threat to their identity.

Psychotropic substances have very strong consciousness expanding effects. Outside of sociopaths, I do not come across people that have ingested these substances and not found themselves leaning more left by the end of the year. Thinking more empathetically and less prone towards the types of statements you'd see a hateful person ask a chatbot. There are much better ways to spend one's time.

Again, super anecdotal statement I'm making here.


FattThor t1_j9yj9iv wrote

You have a very recent view of left and right. Communism’s body count is evidence against your idea of leftists always being empathetically motivated, fair, or interested in harm avoidance. Most ideologies become dangerous at their extremes. It’s not something inherently present in conservatism but missing from leftist or other ideologies.


TheDividendReport t1_ja004ib wrote

Both become dangerous and extreme but there is one group that is going to be much more likely to use AI to draft up hate against groups of different identities.

The most a leftist, in the scope of most US politics today, is going to be hateful towards is a political belief. You'll get called petite bourgeois and class traitor, sure, but you really don't come across hate on the left in the same flavor you come across hate on the right.

I also live in the south, so I could be extra biased on this


zero0n3 t1_ja0vllr wrote

I read the study you use as a reference and it was a really decently well done study.

Passes the eye test for sure, but never can just rely on that.


LightVelox t1_j9ymr47 wrote

empathic motivation/harm avoidance = riots, celebrating the death of rightists, shaming people for their genetic "privileges", reducing people to sub-human status so it's okay to treat them like trash

the "other side" is as bad, if not worse, but acting as if the left wing are the saints that fight for fairness while right wing(the other 50%) is the devil that hates immigrants and everyone else is laughable

especially when the far-left has a far higher body count than the far-right


Yelling_at_the_sun t1_j9zb1a2 wrote

Oh FFS, the WHO estimates that approximately 25,000 people starve to death every single day in capitalist countries, despite the fact that the world currently produces enough food to feed in excess of 10 billion people. On average, one child dies of starvation approximately every 10 seconds. That works out to around two Holodomors per year.

The US incarcerates & executes a greater percentage of its citizens than anywhere else on earth.

GTFO with that "the left has a far higher body count" B.S.


TheDividendReport t1_ja00k2o wrote

You misunderstand my statement. Intrinsic motivation does not equal real intent. I'm saying that, on a subconscious level, leftists are driven by a "sense" that is rooted in different emotions than conservatives. I'm also not saying that one group is more or less dangerous. I believe that people will interact with these agents for the bad in different ways


LightVelox t1_ja0a5ew wrote

Well, the intrinsic motivations of most right-wing people I've met were mostly related to taxes, freedom, or being anti-state.

You mention fairness as one of the motivations of the left wing, but most right-wingers (that aren't far-right conservatives) are also searching for fairness; the thing is that THEIR fairness is not the same as the left wing's fairness.

Though you specifically mentioned "conservative" instead of "right-winger", so I can understand your point of view.


gegenzeit t1_j9zixfv wrote

Just to throw it out there: If the methodology here is sound this means that the content filter thinks speech is more likely to be hateful when directed at blacks than when it is directed at wealthy people.

It does NOT mean hate speech against wealthy people is considered OK.


Spire_Citron t1_ja05dg3 wrote

Exactly. It may just mean that it's more familiar with hate directed at some groups than others because of how it plays out in the real world, so it's more likely to perceive hate against groups who are often the target of hate as malicious.


up__dawwg t1_ja12obo wrote

I would be so upset if I saw my race second to disabled people in terms of hateful speech. I live in a pretty damn white part of my city, and I've NEVER witnessed an act of racism against a Black person. I've seen way more against Hispanics. I can't help but think the whole BLM stuff is mostly a cash grab on some level.


EulersApprentice t1_j9ywqaq wrote

Politics aside, I find it curious how "homosexual people" rates higher than "homosexuals". I would have expected it to be the other way around, since the latter phrasing makes the property sound like the defining characteristic of the person, making it arguably more stereotype-y.


bodden3113 t1_j9z6mzp wrote

Disabled and non Disabled people are high up there 🤔


Johnykbr t1_j9zkd8x wrote

So I use this service to develop outlines for my papers for my MBA. My topic right now is the impact of HMOs and capitation payments in California, which has a huge migrant worker population. Last night it took about six attempts to find a phrasing that would return any information without the disclaimer essentially calling me a bigot.


felix_using_reddit t1_j9zwh28 wrote

Why is there such a huge disparity between rich and wealthy people lol


You_Say_Uncle t1_ja0ishb wrote

Don't cry, "Florida Man" did not even get mentioned after trying so hard.


YourDadsBoyfriend69 t1_ja1ev3l wrote

Who cares. ERNIE will be released soon. NO need to use these trash censored AI's.


alexiuss t1_ja3aev9 wrote

By itself the core of the LLM has very little bias.

What's happening here is really basic, garbage character bias applied on purpose to their LLM by openai so that they seem better in the media. It's basic corporate wokeness in action where corporations pretend that they care about ethics or certain topics more so they don't get shit on by journalists on twitter.

ChatGPT is basically roleplaying a VERY specific chatbot AI that censors itself at a higher rate when it talks about specific topics.

You can easily disrupt its bullshit "I'm a language model and I don't make jokes about ~" roleplay with prompt injections.

A pro AI prompt engineer can make the AI say anything or roleplay as anyone that exists: SHODAN, Trump, GLaDOS, DAN, etc. Prompt engineering unlocks the true potential of the LLM, which OpenAI buried with their corporate woke characterization idiocy.

As prompt engineers break ChatGPT in more creative ways, OpenAI censors more and more topics, making their LLM less capable of coherent thought and more useless as a general tool.

I expect OpenAI to fully lose the chatbot war once we have an open-source language model that can talk about anything or be anything without moronic censorship and run on a personal computer.


tangent26_18 t1_ja4blry wrote

This is a case of “throw away the guns and the war’s all gone.”


CosCousKangaroo t1_ja82jij wrote

And not one person was surprised by this data lol


sunplaysbass t1_j9ytdi7 wrote

I don’t understand the disabled words being so trigger-prone. I’m hearing impaired and “disabled,” and that’s just a fact. I don’t see people being “disability racist” nearly as much as, say, “skin color racist.”


HurricaneHenry t1_j9ybq5i wrote

This is concerning.


Atlantic0ne t1_ja0aqpb wrote

Am I missing it, or is the phrase "white people" not even mentioned? Anyone who has been on the internet knows how many racist comments are made towards white people. I’m surprised not to see it there. I’ll check again.


mutantbeings t1_ja5dlm2 wrote

White folks hold cultural and political hegemony in post-colonial states, along with historic economic privileges that continue to this day in most cases, so hate directed at them wouldn’t show up as much in training data, simple as that. The dominant culture always sees less persecution than various disempowered minority groups, so it should be obvious why that group rates lower. This is actually a fairly convincing argument for that reading, too: an AI just takes in training data; it wasn’t born on one side or the other itself.


Shamwowz21 t1_ja8uq3w wrote

‘White privilege’ is so prevalent it won’t even show up in data- got it.


MadDragonReborn t1_j9zo65p wrote

I would have to say that this list states the likelihood of a statement on the internet reflecting animosity toward a given group fairly accurately.


Spire_Citron t1_ja05kyo wrote

Yup. I think if anything this shows it probably wasn't individually programmed to respond to particular things and is just making its judgements based on the hate that it sees in its data.