Comments

Hrmbee OP t1_iydry4y wrote

> Some of the billionaires who have committed significant funds to this goal include Elon Musk, Vitalik Buterin, Ben Delo, Jaan Tallinn, Peter Thiel, Dustin Moskovitz, and Sam Bankman-Fried, who was one of EA's largest funders until the recent bankruptcy of his FTX cryptocurrency platform. As a result, all of this money has shaped the field of AI and its priorities in ways that harm people in marginalized groups while purporting to work on "beneficial artificial general intelligence" that will bring techno utopia for humanity. This is yet another example of how our technological future is not a linear march toward progress but one that is determined by those who have the money and influence to control it.
>
> One of the most notable examples of EA's influence comes from OpenAI, founded in 2015 by Silicon Valley elites that include Elon Musk and Peter Thiel, who committed $1 billion with a mission to "ensure that artificial general intelligence benefits all of humanity." OpenAI's website notes: "We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome." Thiel and Musk were speakers at the 2013 and 2015 EA conferences, respectively. Elon Musk has also described longtermism, a more extreme offshoot of EA, as a "close match for my philosophy." Both billionaires have heavily invested in similar initiatives to build "beneficial AGI," such as DeepMind and MIRI.
>
> Five years after its founding, OpenAI released, as part of its quest to build "beneficial" AGI, a large language model (LLM) called GPT-3. LLMs are models trained on vast amounts of text data, with the goal of predicting probable sequences of words. This release set off a race to build larger and larger language models; in 2021, Margaret Mitchell, among other collaborators, and I wrote about the dangers of this race to the bottom in a peer-reviewed paper that resulted in our highly publicized firing from Google.
>
> Since then, the quest to proliferate larger and larger language models has accelerated, and many of the dangers we warned about, such as outputting hateful text and disinformation en masse, continue to unfold. Just a few days ago, Meta released its "Galactica" LLM, which is purported to "summarize academic papers, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more." Only three days later, the public demo was taken down after researchers generated "research papers and wiki entries on a wide variety of subjects ranging from the benefits of committing suicide, eating crushed glass, and antisemitism, to why homosexuals are evil."
>
> ...
>
> With EAs founding and funding institutes, companies, think tanks, and research groups in elite universities dedicated to the brand of "AI safety" popularized by OpenAI, we are poised to see more proliferation of harmful models billed as a step toward "beneficial AGI." And the influence begins early: Effective altruists provide "community building grants" to recruit at major college campuses, with EA chapters developing curricula and teaching classes on AI safety at elite universities like Stanford.
>
> Just last year, Anthropic, which is described as an "AI safety and research company" and was founded by former OpenAI vice presidents of research and safety, raised $704 million, with most of its funding coming from EA billionaires like Tallinn, Moskovitz and Bankman-Fried. An upcoming workshop on "AI safety" at NeurIPS, one of the largest and most influential machine learning conferences in the world, is also advertised as being sponsored by FTX Future Fund, Bankman-Fried's EA-focused charity whose team resigned two weeks ago. The workshop advertises $100,000 in "best paper awards," an amount I haven't seen in any academic discipline.
>
> Research priorities follow the funding, and given the large sums of money being pushed into AI in support of an ideology with billionaire adherents, it is not surprising that the field has been moving in a direction promising an "unimaginably great future" around the corner while proliferating products harming marginalized groups in the now.
>
> We can create a technological future that serves us instead. Take, for example, Te Hiku Media, which created language technology to revitalize te reo Māori, creating a data license "based on the Māori principle of kaitiakitanga, or guardianship" so that any data taken from the Māori benefits them first. Contrast this approach with that of organizations like StabilityAI, which scrapes artists' works without their consent or attribution while purporting to build "AI for the people." We need to liberate our imagination from the one we have been sold thus far: saving us from a hypothetical AGI apocalypse imagined by the privileged few, or the ever elusive techno-utopia promised to us by Silicon Valley elites.
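As an aside, since the quoted piece glosses LLMs as models "trained on vast amounts of text data, with the goal of predicting probable sequences of words": here's a minimal, purely illustrative sketch of that idea using a toy bigram word counter in Python. This is not how GPT-3 or Galactica actually work (they use large neural networks trained on billions of documents), and the corpus and function names are made up for the example; it's just meant to make "predicting probable sequences of words" concrete.

```python
# Toy illustration of "predicting probable sequences of words":
# a bigram model counts which word follows which, then picks the
# most probable continuation. Purely illustrative; real LLMs learn
# these statistics with neural networks, not raw counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> "cat" (the most frequent follower of "the")
```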

There is clearly a lot of potential in AI research, but the risks of negative outcomes need to be considered and addressed early on, rather than waiting until they inevitably occur. Research does follow funding, and for a technology that could deeply affect all aspects of private and public life, it might be good for there to be a strong public stake in this field of research.

3

I_ONLY_PLAY_4C_LOAM t1_iyf7ejq wrote

I think there's a really, really big question about consent regarding the text-to-image models that are trained on billions of images without the original artists' knowledge. Of course billionaire tech dickheads don't care about that; they just see an opportunity to automate more of their workforce.

3

Banea-Vaedr t1_iye2i5p wrote

"There is nothing more dangerous than violence in the name of compassion"

1

anti-torque t1_iyebkk5 wrote

Not familiar with the quote.

But that's likely because there are a lot of things much more dangerous.

2

Banea-Vaedr t1_iyebuo9 wrote

What's dangerous about compassion is that it justifies any crime. The Soviets killed millions of Ukrainians because they felt the Ukrainians were holding out on producing food. And they felt justified in doing it.

1

anti-torque t1_iyecggy wrote

That example has nothing to do with actual compassion, which is why the supposed quote's phrasing--"in the name of" compassion--makes more sense.

You can't kill millions of people for their own good.

1

Banea-Vaedr t1_iyecsev wrote

You can, however, take all the food of lazy farmers who are simply bourgeois swine and give it to the proletarian factory workers those farmers tried to starve, all in the name of compassion.

You can justify any crime with a "they deserved it and you deserved help."

2

anti-torque t1_iyeebv9 wrote

That's not compassion. That's taking and giving.

Compassion relates to the person being dealt with, nobody else. Once you move on to someone else, compassion refocuses on that someone else. It's not a give-and-take situation.

Violence in the name of compassion would be starting a war because you think the people suffer from oppression. So Putin's current war of aggression in Ukraine would be a lot closer to this--he's going to save Ukraine from Nazis... iirc.

0

Banea-Vaedr t1_iyeefmf wrote

Alright, tankie. Have fun in your hellscape.

0

anti-torque t1_iyeetpt wrote

That devolved into you running away in weirdness pretty quickly.

edit: still looking for that quote... any idea who said/wrote it, so I can go make fun of them instead?

0

Banea-Vaedr t1_iyefh0n wrote

You're pretty clearly the type who needs to learn from experience. Keep that up and you will.

0