
Strazdas1 t1_jca2svc wrote

Artificially gimping the AI, especially when it comes to considering specific groups of people, leads to bad results for everyone.

7

bengringo2 t1_jcbaapp wrote

The AI is trained with knowledge of these groups and their history; it just can't comment on them. This isn't restricting its data in any way, since it doesn't learn via users.

3

DrDroid t1_jcacxwn wrote

Yeah, removing racism from AI totally leads to "bad results" for the people those jokes are targeted at, definitely.

🙄

0

Strazdas1 t1_jcaex39 wrote

It does, because it leads to wrong lessons learned by the AI. Or rather, to no lessons learned at all, because the AI cannot process this material. This leaves the AI with wrong conclusions whenever it has to analyse anything related to groups of people.

4

mrpenchant t1_jcaoe1e wrote

Could you give an example of how the AI not being able to make jokes about women or Jews leads it to make the wrong conclusions?

7

Strazdas1 t1_jcap35j wrote

Whenever it gets a task that involves women or Jews in potentially comical situations, it will give unpredictable results, because the block meant it had no training on this.

−3

mrpenchant t1_jcaq6bd wrote

I still don't follow, especially as that wasn't an example but just another generalization.

Are you saying that if the AI can't tell you jokes about women, it doesn't understand women? Or that it won't understand a request that also includes a joke about women?

Could you give an example prompt/question that you expect the AI to fail at because it doesn't make jokes about women?

9

TechnoMagician t1_jcb0zpq wrote

It's just bullshit; you can trick the models into getting around their filters. Maybe GPT-4 will be better against that, but it clearly means the model CAN make jokes about women; it has just been taught not to.

I guess there is a possible future where it is smart enough to solve large, society-wide problems but just refuses to engage with them, because it doesn't want to acknowledge the disparities in socioeconomic status between groups or something.

3

Strazdas1 t1_jcayi8q wrote

If the AI is artificially blocked from considering women in comedic situations, it will produce unpredictable results whenever it has to consider women in comedic situations as part of some other task.

An example would be having the AI solve a crime, where the situation has an aspect to it that humans would find comedic.

1

mrpenchant t1_jcb0z2h wrote

>If the AI is artificially blocked from considering women in comedic situations, it will produce unpredictable results whenever it has to consider women in comedic situations as part of some other task.

One thing I will note: just because the AI is blocked from giving you a sexist joke doesn't mean it couldn't have trained on them and be able to understand them.

>An example would be having the AI solve a crime, where the situation has an aspect to it that humans would find comedic.

This feels like a very flimsy example. The AI is now employed as a detective rather than a chatbot, which is very much not the purpose of ChatGPT, but sure. Setting aside, as I said, that the AI could be trained on sexist jokes and simply refuse to make them, I still find it unlikely that understanding a sexist joke is going to be critical to solving a crime.

4

Strazdas1 t1_jcedqn1 wrote

ChatGPT is a proof of concept. If successful, the AI will be employed in many jobs.

1

Edrikss t1_jcaqyt6 wrote

The AI still generates the joke; it just never reaches your eyes. That's how a filter works. But it doesn't matter either way, as the version you have access to is a final product; it doesn't learn based on what you ask it. The next version is trained in-house by OpenAI, and they choose what to teach it themselves.
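
A rough sketch of what that means (hypothetical code; `generate` and `is_disallowed` stand in for whatever model and moderation check is actually run):

```python
# Minimal sketch of output-side filtering: the model still generates
# the text; a separate check decides whether you ever see it.

BLOCKED_TOPICS = {"jokes about protected groups"}  # hypothetical policy list

def generate(prompt: str) -> str:
    # Placeholder for the actual language model.
    return f"model output for: {prompt}"

def is_disallowed(text: str) -> bool:
    # Placeholder for a separate moderation classifier.
    return any(topic in text for topic in BLOCKED_TOPICS)

def respond(prompt: str) -> str:
    completion = generate(prompt)          # the joke is still produced here
    if is_disallowed(completion):
        return "I can't help with that."   # ...but filtered before display
    return completion
```

The point is that removing the filter would change what `respond` returns, not what the model is capable of generating.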

6

Strazdas1 t1_jcayrdm wrote

But because it never reaches your eyes, the AI gets no feedback on whether the job it did was good or bad.

2

LastNightsHangover t1_jcatyvp wrote

It's a model.

Can you stop calling it the AI?

Your point even describes why it's a model and not AI.

−2

Strazdas1 t1_jcayzn1 wrote

Sure, but in common parlance these models are called AI, despite not actually being AI.

1