
adt t1_j9w6x17 wrote

Leave them be.

Listen to the experts.

Connor Leahy was the first to re-create the GPT-2 model, back in 2019 (independently, by hand; he knows the tech stack well enough that OpenAI lined up a meeting with him and asked him to back off). He co-founded EleutherAI (open-source language models), helped build the GPT-J and GPT-NeoX-20B models, advised Aleph Alpha (Europe's biggest language-model lab), and is now the CEO of Conjecture.

Dude knows what he's talking about, and is also very careful with his wording (see the GPT-NeoX-20B paper, section 6, p. 11, where it treads carefully around the subject of Transformative AI).

And yet, in November 2020, he went on record saying:


>“I think GPT-3 is artificial general intelligence, AGI. I think GPT-3 is as intelligent as a human. And I think that it is probably more intelligent than a human in a restricted way… in many ways it is more purely intelligent than humans are. I think humans are approximating what GPT-3 is doing, not vice versa.”
— Connor Leahy, co-founder of EleutherAI, creator of GPT-J (November 2020)


sideways t1_j9xy2qc wrote

That's... really profound.

I had never considered the possibility that our version of intelligence might be the flawed, impure one.


niconiconicnic0 t1_j9yu9bf wrote

In the most literal sense, artificial intelligence is designed to be as flawless as possible (duh), i.e. optimized. Evolution only makes organisms that function just well enough to reproduce. The human body is full of imperfections; it only has to be "good enough". Same with our brain, its functions, its inefficiencies, etc. The bar is literally "survive till old enough to fuck".


WarAndGeese t1_j9ywa5h wrote

Obviously our version of intelligence is flawed and impure, very much so.


jamesj t1_ja04tzq wrote

Though I agree, I'm not sure it was obvious before having some other forms of intelligence to compare to.