Comments

HeinrichTheWolf_17 t1_j3opxnr wrote

The consensus is much the same outside this sub too, albeit not as soon; many experts are moving their timelines to the 2030s.

37

CyberAchilles t1_j3p8fix wrote

Sources?

11

enkae7317 t1_j3pv2yh wrote

Just trust me, bro.

16

AsuhoChinami t1_j3tk1e8 wrote

Maybe you could actually give him time to respond before coming up with some snarky little jab? Heinrich is active in multiple communities and is a trustworthy and reliable person.

6

throwawaydthrowawayd t1_j3vbipb wrote

/u/rationalkat made a cherry-picked list in the big predictions thread:

  • Rob Bensinger (MIRI Berkeley)
    ----> AGI: ~2023-42
  • Ben Goertzel (SingularityNET, OpenCog)
    ----> AGI: ~2026-27
  • Jacob Cannell (Vast.ai, lesswrong-author)
    ----> AGI: ~2026-32
  • Richard Sutton (Deepmind Alberta)
    ----> AGI: ~2027-32?
  • Jim Keller (Tenstorrent)
    ----> AGI: ~2027-32?
  • Nathan Helm-Burger (AI alignment researcher; lesswrong-author)
    ----> AGI: ~2027-37
  • Geordie Rose (D-Wave, Sanctuary AI)
    ----> AGI: ~2028
  • Cathie Wood (ARKInvest)
    ----> AGI: ~2028-34
  • Aran Komatsuzaki (EleutherAI; was research intern at Google)
    ----> AGI: ~2028-38?
  • Shane Legg (DeepMind co-founder and chief scientist)
    ----> AGI: ~2028-40
  • Ray Kurzweil (Google)
    ----> AGI: <2029
  • Elon Musk (Tesla, SpaceX)
    ----> AGI: <2029
  • Brent Oster (Orbai)
    ----> AGI: ~2029
  • Vernor Vinge (Mathematician, computer scientist, sci-fi-author)
    ----> AGI: <2030
  • John Carmack (Keen Technologies)
    ----> AGI: ~2030
  • Connor Leahy (EleutherAI, Conjecture)
    ----> AGI: ~2030
  • Matthew Griffin (Futurist, 311 Institute)
    ----> AGI: ~2030
  • Louis Rosenberg (Unanimous AI)
    ----> AGI: ~2030
  • Ash Jafari (Ex-Nvidia-Analyst, Futurist)
    ----> AGI: ~2030
  • Tony Czarnecki (Managing Partner of Sustensis)
    ----> AGI: ~2030
  • Ross Nordby (AI researcher; Lesswrong-author)
    ----> AGI: ~2030
  • Ilya Sutskever (OpenAI)
    ----> AGI: ~2030-35?
  • Hans Moravec (Carnegie Mellon University)
    ----> AGI: ~2030-40
  • Jürgen Schmidhuber (NNAISENSE)
    ----> AGI: ~2030-47?
  • Eric Schmidt (Ex-Google Chairman)
    ----> AGI: ~2031-41
  • Sam Altman (OpenAI)
    ----> AGI: <2032?
  • Charles Simon (CEO of Future AI)
    ----> AGI: <2032
  • Anders Sandberg (Future of Humanity Institute at the University of Oxford)
    ----> AGI: ~2032?
  • Matt Welsh (Ex-google engineering director)
    ----> AGI: ~2032?
  • Siméon Campos (Founder CEffisciences & SaferAI)
    ----> AGI: ~2032
  • Yann LeCun (Meta)
    ----> AGI: ~2032-37
  • Chamath Palihapitiya (CEO of Social Capital)
    ----> AGI: ~2032-37
  • Demis Hassabis (DeepMind)
    ----> AGI: ~2032-42
  • Robert Miles (Youtube channel about AI Safety)
    ----> AGI: ~2032-42
  • OpenAI
    ----> AGI: <2035
  • Jie Tang (Prof. at Tsinghua University, Wu-Dao 2 Leader)
    ----> AGI: ~2035
4

tatleoat t1_j3r8a33 wrote

All the experts I've seen say 2029, like Altman and Carmack. Musk has also said 2029 if that's an opinion you care about.

5

joecunningham85 t1_j3romoc wrote

"All the experts"

Altman is a CEO with a vested interest in hyping up AI progress for his business.

Musk said we would be on Mars and having self-driving cars take us everywhere by now lol.

8

tatleoat t1_j3rpi30 wrote

I don't see how saying "[thing] will come in 7 years" influences anything as a prediction; it's too far away to generate any tangible hype with the public. If he were going to lie to manipulate a product's value, I'd expect him to make his predictions more near-term, if we're indeed cynically manipulating the market. Not to mention, none of that about Sam Altman changes the fact that he's an expert and his credibility rests on his correctness; it's in his interest to be right. You can't just claim biased interests here, it's more nuanced than that. And none of it changes the fact that they're all saying the same thing: 2029. That's pretty consistent, and I'm inclined to believe it.

1

maskedpaki t1_j3tcteo wrote

You have it all backwards.

Generating long-term hype is perfect for a tech startup, for two reasons:

  1. It overvalues the company based on long-term potential (rough numbers below). OpenAI only makes about $60M in revenue; a standard 10x multiplier would value it at $600M at most, but it's valued at $30 billion because of the hope that revenues will be in the billions in the future.

  2. You don't have to keep your long-term promises. If he makes a promise for GPT-4, people will call him out when it fails. But say "AGI by 2035" and chances are no one will care when it's 2035 and he doesn't deliver, since the whole field will be different by then.
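
A quick back-of-the-envelope sketch of the valuation gap in point 1 (purely illustrative; the revenue and valuation figures are as quoted in the comment, not verified):

```python
# Rough numbers only, taken from the comment above rather than any filing.
revenue = 60e6          # ~$60M annual revenue (as claimed)
standard_multiple = 10  # a conventional ~10x revenue multiple
valuation = 30e9        # reported ~$30B valuation (as claimed)

print(f"standard-multiple valuation: ${revenue * standard_multiple / 1e6:,.0f}M")
print(f"implied multiple at $30B:    {valuation / revenue:,.0f}x revenue")
# -> ~$600M vs. an implied ~500x multiple; the difference is priced-in future growth.
```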
1

Ok_Homework9290 t1_j3p62eo wrote

I've commented this before, and since it's relevant, I'll comment it again (almost verbatim):

Take Metaculus seriously at your own risk. Anyone can make a prediction on that website, and those who do tend to be tech junkies who are generally optimistic about timelines.

To my understanding, most AI/ML expert surveys still put the average AGI arrival year decades from now (mid-century or later), and most individual AI/ML researchers have similar timelines.

Also, I'm a bit skeptical that the amount of progress that's been made in AI the past year (which has been impressive, no doubt) merits THAT much of a shave-off from the February 2022 prediction. Just my thoughts.

25

SoylentRox t1_j3p8ys3 wrote

>most AI/ML expert surveys still put the average AGI arrival year decades from now (mid-century or later), and most individual AI/ML researchers have similar timelines

You know, when the Manhattan Project was being worked on, who would you have trusted for a prediction of the first nuke detonation: Enrico Fermi, or some physicist who had worked on radioactive materials?

I'm suspicious that any "experts" with valid opinions exist outside of well-funded labs (OpenAI/Google/Meta/Anthropic/Hugging Face, etc.).

They are saying a median of about ~8 years, which would be 2031.

13

Ok_Homework9290 t1_j3qcmla wrote

>They are saying a median of about ~8 years, which would be 2031.

That's an oddly specific number/year.

Also, remember that people who work at AI corporations, as opposed to academia (for example), have the tendency to hype up their work, which makes their timelines (on average) shorter. To me personally, a survey of AI researchers on timelines has more weight than AI Twitter, which is infested with hype.

1

Thelmara t1_j3s2c1e wrote

> That's an oddly specific number/year.

No, that's the median of a spread, and it's stated with the caveat of "about". That's literally the opposite of "specific".

5

will-succ-4-guac t1_j3rm0sa wrote

Source on that 8 years number? Would certainly be quite a compelling argument if a random sampling of exclusively well funded AI PhDs had a median prediction of 8 years.

1

SoylentRox t1_j3rovna wrote

It's just the opinions on the EleutherAI Discord. Arguably, weak general AI will be here in 1-2 years.

My main point is that the members I'm referring to all live in the Bay Area and work for Hugging Face and OpenAI. Their opinion is more valid than, say, that of a 60-year-old professor in the artificial intelligence department at Carnegie Mellon.

2

will-succ-4-guac t1_j3rmq84 wrote

> Also, I'm a bit skeptical that the amount of progress that's been made in AI the past year (which has been impressive, no doubt) merits THAT much of a shave-off from the February 2022 prediction. Just my thoughts.

Correct, and if anything, the mere fact that the prediction has changed by over a decade in the span of 12 months is strong evidence of exactly what you’re saying — this prediction is made by people who aren’t really in the know.

If the weatherman told you it was going to be 72 and sunny tomorrow, and then when you woke up he said it's actually going to be -15 and a blizzard, you'd probably think: hmm, maybe this guy doesn't know what the fuck he's talking about.

2

arindale t1_j3tq5wt wrote

I agree with all of your comments. And to add what I believe to be a more important point, the Metaculus question defines weakly general AI as (heavily paraphrased):

- Pass the Turing Test (text prompt)

- Achieve human-level written language comprehension on the Winograd Schema Challenge

- Achieve human-level result on the math section of the SATs

- Play the Atari game Montezuma's Revenge at a human level

We already have separate narrow AIs that can do these tasks at or near human level. We even have more general AIs that can do several of these tasks at a near-human level. I wouldn't be overly surprised if, by the end of 2023, we have a single AI that can do all of these tasks (and many other human-level tasks). But even so, many people wouldn't call it general AI.

Not trying to throw shade here on Metaculus. They had to narrowly define general AI and have concrete, measurable objectives. I just personally disagree with where they drew that line.

2

arisalexis t1_j3q2cp0 wrote

Sure, I'll trust the opinion of an unknown redditor without any links. If you do decide to post a link, the survey should be from after Stable Diffusion and ChatGPT.

0

Sashinii t1_j3oy3mc wrote

My guess is AGI in 2029, so they're more optimistic than I am, but I hope it happens sooner.

9

AsuhoChinami t1_j3tkcz7 wrote

One of the few non-terrible posts in the thread. Otherwise it's been largely garbage.

3

AsheyDS t1_j3orcp9 wrote

As a prediction, this is utterly meaningless. I'm not even sure if this is useful at all as a gauge of anything.

8

imlaggingsobad t1_j3oy627 wrote

it's not just a prediction, it's a crowdsourced prediction. Statistically, crowdsourcing does better at converging to the actual answer.

19

Cult_of_Chad t1_j3p0fgx wrote

>Statistically, crowdsourcing does better at converging to the actual answer.

This should be the top reply.

13

AsheyDS t1_j3pb1x8 wrote

But what is the crowd? Is this based on a sampling of all types of people, or enthusiasts being enthusiastic?

5

footurist t1_j3qdd01 wrote

Yes, this is the key question. If I were building such a website, I'd try to implement some way to categorize the crowd: "30% expert, 50% enthusiast, 20% hobbyist" or something like that... Of course, getting any kind of certainty on that would be hard, but it turns out that if you ask nicely and with a tone of seriousness, most people just tell the truth, so maybe not.

1

will-succ-4-guac t1_j3rme50 wrote

> Statistically, crowdsourcing does better at converging to the actual answer.

Statistician here, and this is a good example of a relatively meaningless statistic, to be honest. Crowdsourcing statistically tends to be more accurate than just asking one person, in the average case, for what should be mathematically obvious reasons.

But the “average case” isn’t applicable to literally every situation. I would posit that when we start to talk about areas of expertise that require a PhD to even begin to be taken seriously for your opinion, crowdsourcing from unverified users starts to become a whole lot more biased.
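
A minimal simulation of both points, with made-up numbers: when forecasters' errors are independent, the crowd median lands near the truth even though individuals are way off; when the whole crowd shares a bias (say, an enthusiast-heavy site), aggregation can't remove it.

```python
import random
import statistics

random.seed(0)
TRUE_YEAR = 2035  # hypothetical "true" answer, chosen only for illustration

def crowd(n, shared_bias=0.0, noise=8.0):
    """Each forecaster reports truth + shared bias + individual noise (years)."""
    return [TRUE_YEAR + shared_bias + random.gauss(0, noise) for _ in range(n)]

independent = crowd(5000)
print("typical individual error :",
      round(statistics.mean(abs(x - TRUE_YEAR) for x in independent), 1))
print("crowd-median error       :",
      round(abs(statistics.median(independent) - TRUE_YEAR), 1))

# Same crowd size, but everyone is ~8 years too optimistic; the median inherits it.
optimistic = crowd(5000, shared_bias=-8.0)
print("biased crowd-median error:",
      round(abs(statistics.median(optimistic) - TRUE_YEAR), 1))
```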

1

[deleted] t1_j3os5qp wrote

[removed]

2

AsheyDS t1_j3ovmd1 wrote

I just feel like a lot of people are seeing some acceleration and think that this is all of it. What I think is that we'll continue seeing regular advances in tech, AI, and science in general. But the 30's will be the start of AGI, and 40's will be when it really takes off (in terms of adoption and utilization). Even a guess of before 2035 is, in my estimation, an optimistic projection where everything goes right and there aren't any setbacks or delays. But just saying the 30's is a solid guess.

0

imlaggingsobad t1_j3oyoyq wrote

Your prediction and the 2027 prediction could both be right. DeepMind and OpenAI could have something that looks like AGI in 2027, but they keep it within the lab for another 3 years just testing it and building safeguards. Then in the 30s they go public with it and it begins proliferating. Then maybe it takes 10 years for it to transform manufacturing, agriculture, robotics, medicine, and the wider population, etc, due to regulation, ethical concerns, and resource limits.

9

Baturinsky t1_j3s0gpp wrote

How big do you think the chances are of it going Paperclip Maximizer-level wrong?

1

coumineol t1_j3pxg3w wrote

>But the 30's will be the start of AGI, and 40's will be when it really takes off

I vehemently disagree. How would it take 10 years for such a transformative technology to be optimized and utilized? Do you have a timeline for that 10 years between "start of the AGI" and its takeoff?

3

AsheyDS t1_j3r7vte wrote

I never said it'd be 10 years, though it could for all anyone knows. If I said it would be released in 2035, and widely adopted by 2040, I don't think that's unreasonable. But I also believe in a slow takeoff and more practical timelines. Even Google, as seemingly ubiquitous as it is, did not become that way overnight, it took a few years to become widely known and used. Also we're dealing with multiple unknowns, like how many companies are working on AGI, how far along they are, how long it takes to adequately train them before release, how the rest of the world (not just enthusiasts) accepts or doesn't accept AGI, how many markets will be disrupted and the reaction to that, legal issues along the way, etc. etc. Optimistic timelines don't seem to account for everything.

Edit: I should also mention one of the biggest hurdles is even getting people to understand and agree on what AGI is! We could have it for years and many people might not even realize. Conversely, we have people claiming we have it NOW, or that certain things are AGI when they aren't even close.

2

gobbo t1_j3rm9j7 wrote

I have ChatGPT in my frickin' pocket most of the day. It's amazing, but still mostly just a testbot, so here I am, kind of meh, even though a few months ago I thought something like it was at least a few years away.

Faster than expected. And yet life carries on much as before, with a little sorcerer's apprentice nearby if I want to bother. What a time!

1

arisalexis t1_j3q2fvz wrote

Did 2022 actually feel like "some" acceleration to you?

2

AsheyDS t1_j3r91af wrote

Feel? No, not quite. But it's all relative. If one narrows their perspective on what's to come, it could feel like a huge change already. Personally I think this is just us dipping our toes into the water, so to speak. So yes "some" acceleration, especially when considering how many people think that what we've seen so far is half or most of the way to AGI.

1

420BigDawg_ OP t1_j3p3m0d wrote

Who cares if it’s meaningless?

1

AsheyDS t1_j3pbd58 wrote

Fair enough, but it's a thing for a reason. Obviously the date will continue to change, so it could only possibly be a measure of that change. So why is it changing? What is it based on? It would make more sense to say a decade than a specific date or even year.

2

keefemotif t1_j3pjkt6 wrote

What's interesting is, 10 years ago the prediction of a lot of people I knew was 10 years and hey it's 10 years again. I think psychologically, 10 years is about the horizon people have a hard time imagining past but still think is pretty close. For most adults, 20-25 years isn't really going to help their life, so they pick 10 years.

As far as the crowdsourcing comment goes: yikes. We aren't out there crowdsourcing PhDs and open-heart surgery. I know there was that whole crowdfarm article in Communications of the ACM, and I think that is more a degradation of labor rights than evidence of value in random input.

−1

coumineol t1_j3pxmr2 wrote

>What's interesting is, 10 years ago the prediction of a lot of people I knew was 10 years and hey it's 10 years again.

May be true for "the people you know", but if you look at the general opinion of people interested in this field, the predictions used to start at the 2040s just last year.

3

keefemotif t1_j3qzov0 wrote

While selection bias is certainly a thing, "the people I know" are generally software engineers with advanced degrees and philosophers into AI... so it's a pretty educated opinion, bias aside.

1

coumineol t1_j3r24vc wrote

In that case, maybe educated opinion is worse than the wisdom of the crowds, since the community prediction for AGI was 2040 last year, as you can see from the post, which is not "10 years away".

1

keefemotif t1_j3rsn2g wrote

It's 18 years, but the point I'm making is that we have a cognitive bias toward estimates of 10-20 years or so, and we also have a difficult time understanding nonlinearity.

The big SingInst (Singularity Institute) hypothesis was that there would be a "foom" moment where we go to super-exponential progression. From that point of view, you'd have to start talking about a probability distribution over when that nonlinearity happens.

I prefer stacked sigmoidal curves, where progress goes exponential for a while, then hits some limit (think Moore's Law around 8nm) before the next curve takes over.

Training a giant neural net as a language model is a very important development, but IMHO AlphaGo was more interesting technically, with its combination of value and policy networks, versus billions of nodes in some multilayer net.
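
To make the stacked-sigmoid picture concrete, here's a toy sketch (all curve parameters are invented): each wave of progress is a logistic curve, and their sum looks exponential for a stretch, flattens at each limit, then picks up when the next curve starts.

```python
import math

def sigmoid(t, midpoint, scale, height):
    """One S-curve: slow start, rapid middle, saturation at `height`."""
    return height / (1.0 + math.exp(-(t - midpoint) / scale))

def stacked_progress(t):
    # Three hypothetical waves, e.g. one paradigm maturing as the next ramps up.
    waves = [(1995, 4, 1.0), (2012, 4, 3.0), (2028, 5, 9.0)]
    return sum(sigmoid(t, m, s, h) for m, s, h in waves)

for year in range(1990, 2041, 5):
    v = stacked_progress(year)
    print(f"{year}: {'#' * round(v * 4):<55} {v:6.2f}")
```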

2

mnamilt t1_j3qd210 wrote

The problem is that nobody reads Metaculus's definition of what counts as "Weakly General AI":

It requires a unified system to accomplish 4 tasks. But two of those tasks can already be completed by AI (the Winogrande challenge and playing Montezuma's Revenge), one might be hard to accomplish because of its fine print (there's a good chance an AI system has more than 10 SAT exams in its training data, and good luck getting that out), and one of the tasks is now arguably defunct.

In other words, I'd rate a system as able to meet those 4 requirements probably way earlier than 2027, but that's because the requirements don't seem to hold up well against what the community perceives as weak AGI. Actual weak AGI I'd rate way later than the Metaculus question.

4

Embarrassed-Bison767 t1_j3r2xwz wrote

If you have a single AI system that does all four, we will already have something a lot more powerful than anything that exists today.

2

Redvolition t1_j3qjp9c wrote

The Metaculus definition will already cause massive waves of disruption. I would consider it indeed a "weak AGI", but this is just more or less fruitless categorization.

1

maskedpaki t1_j3tcz2a wrote

Making one system do all 4 is a lot harder than making 4 systems that each do one.

1

BbxTx t1_j3qyhvi wrote

My guess is that an "almost" AGI that is 90% correct and almost 90% as productive as a human (even though it has many odd quirks) will arrive by 2027. An AI like this that can double-check its own work will be enough to radically change the world.

3

joecunningham85 t1_j3rpq68 wrote

If Metaculus was reliable I could be a billionaire in a few weeks.

1

JVM_ t1_j3sar4f wrote

General AI is a mountain range.

From far away it's easy to point at it and say 'that's it!'

As you get closer though it gets harder and harder to determine when you're actually on or at the top of the mountain, because you're surrounded by other smaller mountains.

I think the same will happen with AI. We're obsessed with the only 3-5 AI's currently available, but by the end of the year there will be multiple AI's doing multiple things very very well.

The AI landscape is going to change, and we'll be so surrounded by AI's that it will be hard to determine which one, by itself, becomes the general AI of our dreams.

Maybe the General AI is just one that knows which sub-AI model is best for the task you request and farms it out to that one in particular? Kind of like a general contractor and subcontractors when you do home renovations.
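
For what it's worth, the "general contractor" idea sketches out roughly like this (a toy illustration: the specialist names and keyword routing are invented, and a real system would presumably use a learned router rather than keyword matching):

```python
from typing import Callable, Dict

# Hypothetical specialist "models" standing in for real sub-AIs.
SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "code": lambda p: f"[code model] draft fix for: {p}",
    "math": lambda p: f"[math model] worked answer for: {p}",
    "chat": lambda p: f"[chat model] conversational reply to: {p}",
}

KEYWORDS = {
    "code": ("function", "bug", "compile", "python"),
    "math": ("integral", "prove", "equation", "sum"),
}

def route(prompt: str) -> str:
    """Send the request to whichever specialist matches best; default to chat."""
    scores = {name: sum(w in prompt.lower() for w in words)
              for name, words in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return SPECIALISTS[best if scores[best] > 0 else "chat"](prompt)

print(route("Fix this Python function that won't compile"))
print(route("How do you prove this equation holds?"))
```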

1

MyCuteData t1_j3uaxra wrote

'Prediction for General A.I continues to drop.'

Probably because of people from this sub lol

AGI tomorrow xD

1