
turnip_burrito t1_jd5ud7f wrote

Here's what we'll do imo:

Just give it some set of morals (Western democratic-egalitarian, most likely). The philosophical debates will all eventually conclude with "well, we have to do something," and then the developers will just give it morals that seem "good enough." Given who is building the AI, it makes sense that it will adhere to their views.

4

visarga t1_jd773xf wrote

There is a new trend, started by Stability and picked up by OpenAI, of providing base models that can be fine-tuned for each country, language, or social group. Various groups are pushing back against one-size-fits-all AI models.

Here is an excellent article showing how AI models can affect a community's efforts to preserve its language:

> OpenAI's Whisper is another case study in Colonisation

https://blog.papareo.nz/whisper-is-another-case-study-in-colonisation/

And a positive one:

> How Iceland is using GPT-4 to preserve its language.

https://openai.com/customer-stories/government-of-iceland

When a language has only 300k speakers, you don't want the TTS and language models teaching the new generation to speak it wrong because the models lacked good training data and make frequent mistakes. Kids are going to use AI in their own language, so low-quality responses hit these small communities even harder.
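
To make the per-language fine-tuning idea concrete, here is a minimal sketch, assuming the Hugging Face Transformers/Datasets stack, of adapting Whisper to a small language. It is not any group's actual pipeline; the dataset name, language, and hyperparameters are placeholders you would swap for a real, carefully curated corpus.

```python
# Minimal sketch: fine-tuning Whisper on a small-language speech corpus.
# Assumes the Hugging Face Transformers/Datasets stack; the dataset name,
# language, and hyperparameters below are illustrative placeholders.
from dataclasses import dataclass
from typing import Any, Dict, List

import torch
from datasets import Audio, load_dataset
from transformers import (
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    WhisperForConditionalGeneration,
    WhisperProcessor,
)

# Placeholder corpus: substitute whatever vetted speech data exists for the
# target language (here Maltese, "mt"); the dataset may require authentication.
dataset = load_dataset("mozilla-foundation/common_voice_11_0", "mt", split="train")
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

processor = WhisperProcessor.from_pretrained(
    "openai/whisper-small", language="maltese", task="transcribe"
)
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")


def prepare(batch: Dict[str, Any]) -> Dict[str, Any]:
    # Turn raw audio into log-mel features and the transcript into label ids.
    audio = batch["audio"]
    batch["input_features"] = processor.feature_extractor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    batch["labels"] = processor.tokenizer(batch["sentence"]).input_ids
    return batch


dataset = dataset.map(prepare, remove_columns=dataset.column_names)


@dataclass
class SpeechSeq2SeqCollator:
    # Pads audio features and label ids separately, since they have
    # different lengths and different pad values.
    processor: Any

    def __call__(self, features: List[Dict[str, Any]]) -> Dict[str, torch.Tensor]:
        inputs = [{"input_features": f["input_features"]} for f in features]
        batch = self.processor.feature_extractor.pad(inputs, return_tensors="pt")
        label_feats = [{"input_ids": f["labels"]} for f in features]
        labels_batch = self.processor.tokenizer.pad(label_feats, return_tensors="pt")
        # Ignore padding positions in the loss.
        batch["labels"] = labels_batch["input_ids"].masked_fill(
            labels_batch["attention_mask"].ne(1), -100
        )
        return batch


args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-mt",   # placeholder output path
    per_device_train_batch_size=8,
    learning_rate=1e-5,
    warmup_steps=100,
    max_steps=4000,
    fp16=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=SpeechSeq2SeqCollator(processor=processor),
    tokenizer=processor.feature_extractor,
)
trainer.train()
```

The training loop is the easy part; what actually decides whether kids learn the language correctly is the corpus and the evaluation, e.g. transcripts checked by native speakers and a held-out word-error-rate benchmark the community itself trusts.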

5