Submitted by blueSGL t3_10gg9d8 in singularity

Way back in the mists of time (just over a month ago),

we were informed that the Metaculus community prediction for "Date Weakly General AI is Publicly Known" had dropped to a record low of Aug 26, 2027.

At the time, my top-voted comment was dismissive.

> "If the sign up on metaculus is driven by 'AI optimistic' podcasts/sites advertising its existence it will naturally trend lower due to a self selecting cohort of new users signing up being more optimistic about AGI."

However, I was WRONG.

Twitter user @tenthkrige, who works at Metaculus, had already run the numbers.

https://twitter.com/tenthkrige/status/1527321256835821570

TL;DR: users who had previously predicted the AGI timeline and subsequently updated did so in lockstep with new users.

In summary: it is not that overly optimistic people are joining and goosing the numbers; everyone is trending in that direction.
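If you want to picture the check @tenthkrige ran, here's a rough sketch of the cohort comparison (the file and column names are my guesses, not the actual Metaculus data):

```python
import pandas as pd

# predictions.csv is hypothetical: one row per (user, prediction snapshot)
df = pd.read_csv(
    "predictions.csv",
    parse_dates=["signup_date", "prediction_time", "predicted_agi_date"],
)

# Arbitrary cohort boundary: anyone who signed up before it is "existing"
cutoff = pd.Timestamp("2022-01-01")
df["cohort"] = df["signup_date"].lt(cutoff).map({True: "existing", False: "new"})

# Median predicted AGI date per cohort, per calendar month of prediction
monthly = (
    df.set_index("prediction_time")
      .groupby("cohort")["predicted_agi_date"]
      .resample("M")
      .median()
)

print(monthly.unstack("cohort"))
```

If the "existing" and "new" columns trend earlier together, as @tenthkrige found, the drop can't be explained by who is signing up.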

63

Comments


Thatingles t1_j52nhox wrote

Here's the thing. With the possibility being so close, you could argue the date is now being more heavily influenced by the pessimists. If the prediction date is 2027, that only gives the optimists five years to play with, but the pessimists can go as far out the other side as they want.

In a sense it doesn't matter because it will happen when it happens and the prediction date is like reading tea leaves - a thing you look at to distract yourself whilst you come up with a forecast.

It is worth remembering that technology moves at the rate of the fastest. Everyone else has to catch up to the new point and restart from there. What I'm trying to say is that predicted dates reflect what each individual knows but actual dates will reflect only what the fastest groups achieve. If Bob predicts 2035 based on his knowledge, but doesn't know that Sue has already achieved (but not published) several of the steps on his timeline, Bob's prediction is worthless. We obviously don't know ahead of time who falls into which category, all we can say for sure is that the pessimists are more likely to be caught out.

29

dasnihil t1_j55450p wrote

i just don't get the idea of counting days, are you guys like depressed or something? what do you think will happen the day, say, nvidia announces they've achieved running a neural network on neuromorphic hardware in a highly optimal way?

big announcement but we'll all forget about it in a couple of days :)

after that it's a game of implementation and industrialization. how can we make our industries more powerful and take this human enterprise to the next level? i doubt that the leaders and capitalists would have any desire for a utopian society with shared resources and harmony. that kind of ask will take at least 100 years to be implemented in our society. this is a big change.

i personally don't expect to see many significant changes in my lifetime where i'll get a $500/mo check from some AI Labor Law Allowance. maybe in the coming generations, if we play our cards right and don't wipe out all life and any hopes for artificial life/intelligence.

1

RabidHexley t1_j56s60f wrote

> that kind of ask will take at least 100 years to be implemented in our society. this is a big change.

I've personally come around to the thought that if something like UBI is implemented due to automation, it won't be out of compassionate, socialist ideals, but simply because it will become necessary for capitalism to continue functioning.

Reaching a point where you can produce arbitrary amounts of goods without needing to pay nearly anyone across numerous economic sectors is a recipe for rapid deflation. UBI would become one of the only practical methods of keeping the wheels turning and the money flowing.

Maybe after years of it being the norm it would lead to a cultural shift towards some sort of properly egalitarian society, but it would start because hyper-efficiency resulting in economic collapse isn't good for anyone, including the wealthy.

2

AsuhoChinami t1_j52x8ly wrote

Weird, some super aggressive, inflammatory guy outright called me a delusional idiot for not believing AGI will take until 2050-2065 to arrive (which is, in his words, the consensus among almost all AI experts).

28

icedrift t1_j535agx wrote

He's not wrong... In a 2017 survey distributed among AI veterans, only 50% thought a true AGI would arrive before 2050: https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/

I'd be interested in a more recent poll but this was the most up to date that I could find.

EDIT: Found this from last year https://www.lesswrong.com/posts/H6hMugfY3tDQGfqYL/what-do-ml-researchers-think-about-ai-in-2022

Looks like predictions haven't changed all that much, but there's still a wide range. Nobody really knows, that's for certain.

10

blueSGL OP t1_j53btzc wrote

You might find this section of an interview with Ajeya Cotra (of biological-anchors-for-forecasting-AI-timelines fame) interesting.

Starts at 29:14: https://youtu.be/pJSFuFRc4eU?t=1754

It's the part where she talks about how several benchmarks that surveys of ML workers had given a median date of 2026 were passed early last year.
She also casts doubt on people who work in the field, but who aren't specifically forecasting AGI/TAI, as a source of useful information.

16

Ortus14 t1_j54byn2 wrote

It has always been the case that people working within a field overestimate how long it will take to achieve things within that field. They are hyper-focused on their tiny part and miss the big picture.

To make accurate predictions you need to use data, trendlines, and growth curves. It literally doesn't matter how many "experts" are surveyed; the facts remain the facts.

A few people making data and trendline based predictions hold far more weight than an infinite number of "experts" that base their predictions on anything other than trendlines.

5

Borrowedshorts t1_j53ksqo wrote

There are two types of AI experts: those who focus their efforts on a very narrow subdomain, and those who study the problem from a broader lens. The latter group, who are AGI experts and have actually studied the problem, tend to be very optimistic on timelines. I'd trust the opinion of those who have actually studied the problem over those who haven't. There are numerous examples of experts in narrow subdomains being wrong or just completely overshadowed by changes they could not see.

12

AsuhoChinami t1_j53omnk wrote

No way icedrift and techno-skeptics cannot be wrong on anything ever, AGI in 2150 at EARLIEST and you're delusional if you think otherwise cuz I say so lmao

9

SurroundSwimming3494 t1_j546mre wrote

I think most experts fit in the latter group, though, and the ones who have very optimistic timelines are a minority in that group too, and not just in general.

1

AsuhoChinami t1_j5360dz wrote

And the half that agrees with you counts more than the half that doesn't because reasons? I'm a delusional idiot for sharing the same opinion as a tiny, minuscule, insignificant, irrelevant, vanishingly small, barely even existent 50 percent demographic?

8

icedrift t1_j537xjq wrote

I'm inclined to trust the people actually building AI. 50% of experts agreeing AGI is likely in the next 30 years is still pretty insane. Personally, I think a lot of the AI-by-2030 folks are delusional.

5

Borrowedshorts t1_j53lrlq wrote

The world has never seen anything like AI progress. AI capability has been advancing at nearly an order of magnitude improvement each year. It's completely unprecedented in human history. I think it's much more absurd to have such confidence that AI progress will cease for no particular reason, which is what would have to happen if the post-2050 predictions are correct.

9

AsuhoChinami t1_j538ht5 wrote

A lot of the pre-2050 crowd does include people building AI.

4

icedrift t1_j538wbw wrote

Yeah, of course; but that's 2050, not 2027 as Metaculus predicts.

2

94746382926 t1_j53aqvj wrote

Yeah, the share of AI experts predicting pre-2030 or pre-2035 is probably only like 10%.

1

Borrowedshorts t1_j53m7r6 wrote

That group also consists of a disproportionate number of researchers who have actually studied AGI broadly.

11

SurroundSwimming3494 t1_j54661j wrote

My guess is that most AI researchers are pretty familiar with AI beyond narrow cases, so I think most of them are qualified to give an answer to "will AGI ever arrive, and if so, when?".

Also, I get the sense that a lot of the AGI crowd knowingly engage in hype to get more publicity, and it makes sense. "AGI soon" is a much sexier discussion topic for a podcast (for example) than "AGI far away".

0

Borrowedshorts t1_j55qvtl wrote

I don't think they are, honestly. They may know some of the intricacies and difficulties of their specific problem and then project that it will be that difficult to make progress in other subdomains. Which is probably true, but they also tend to underestimate the efforts other groups are putting in and the progress that can happen in other subdomains, which isn't always linear. So imo they aren't really qualified to give an accurate prediction, because very few have actually studied the problem. I'd trust the people who have actually studied the problem; these are AGI experts, and they tend to be much more optimistic than the AI field overall.

3

AsheyDS t1_j55s0h2 wrote

>AGI experts

No such thing yet since AGI doesn't exist. Even when it does, there are still going to be many more paths to AGI in my opinion, so it may be quite a while before anyone can be considered an expert in AGI. Even the term is new and lacks a solid definition.

1

SurroundSwimming3494 t1_j55sj7z wrote

Is studying AGI even a thing, though? AGI does not exist yet and potentially never will, so I'm not sure how one can study something nonexistent. To have theories about it, sure, but that's another thing.

0

zero_for_effort t1_j52ko8q wrote

Very interesting. I keep waiting for the prediction timer to stall, but it just keeps jumping closer to the present. I can't wait to see which direction it moves when GPT-4 is released.

17

DungeonsAndDradis t1_j52mzhk wrote

AGI 2025 or bust.

(Please take this with a grain of salt. I don't know anything, and I play video games for a living.)

30

franztesting t1_j52oobs wrote

Sounds like a rather large grain of salt

7

DungeonsAndDradis t1_j52px80 wrote

Well...<pushes glasses up into firing position>

  1. Kurzweil's main shtick is the Law of Accelerating Returns. Basically, technological advances are coming more and more quickly. For example, it took humanity something like 200,000 years to develop the steam engine, and then only 200 more to go to the moon. (See the toy sketch after this list.)

  2. 2022 was a ballistic year for AI advances, from nearly every company researching it: PaLM, LaMDA, Gato, DALL-E 2, ChatGPT. These tools are revolutionary advances in AI.

  3. Following the Law of Accelerating Returns, 2023 should bring major leaps in AI, then again in 2024, and by 2025 things should be bonkers.
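To put some toy numbers on point 1 (this is my own construction, not Kurzweil's actual math): if each breakthrough arrives in a fixed fraction of the time the previous one took, all future breakthroughs fit into a finite window, because the gaps form a convergent geometric series.

```python
# Toy model of accelerating returns; every number here is a made-up input.
first_gap = 8.0  # years between breakthrough 1 and 2 (hypothetical)
ratio = 0.5      # each gap is half the previous one (hypothetical)

year, gap = 2017, first_gap  # anchored to the 2017 transformer paper mentioned below
for i in range(1, 8):
    print(f"breakthrough {i}: ~{year:.1f}")
    year += gap
    gap *= ratio

# Even infinitely many breakthroughs fit before the geometric-series limit:
print(f"limit year: {2017 + first_gap / (1 - ratio):.0f}")
```

Change the inputs and the limit date moves; the point is only the shape of the curve.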

My layman's guesstimate is that the next major architectural design is going to happen this year, much like transformers accelerated AI research in 2017. One or two more major architecture pivots lead us to AGI.

It's only going to get weird from here!

24

tatleoat t1_j53ksmm wrote

I have big suspicions too. We truly can't be that far from an AI that can learn from your behaviors in a professional setting on the fly; everything is there and just needs to be put together responsibly.

6

icedrift t1_j539w6r wrote

Whisper too!! Being able to efficiently train on audio gives us so much more data to work with and we're going to need it. GPT models are already running out of training data.

4

AsheyDS t1_j55tl03 wrote

>My layman's guesstimate is that the next major architectural design is going to happen this year.

You may be right, but a design is speculative until it can be built and tested, and that will take some time.

2

DungeonsAndDradis t1_j56aqfa wrote

I believe the architectural changes have already been made, perhaps last year, and they are currently being tested. I believe we'll see the finished paper(s) announcing one or more breakthroughs this year.

2

AsheyDS t1_j56cejt wrote

Anything you can link me to support that belief? Or is it just a gut feeling based on overall current progress in the AI field?

2

DungeonsAndDradis t1_j56g89c wrote

Of course not, my dude. :) 99% of everything posted on this sub has no basis in reality. Just huff the hopium and save for retirement.

2

kmtrp t1_j5524u1 wrote

I like the way you think...

4

blueSGL OP t1_j53yvki wrote

> but it just keeps jumping closer to the present.

Connor Leahy described this as a "pro gamer move"

> "If you see a probability distribution only ever update in one direction, just do the whole update instead of waiting for the predictable evidence to come, just update all the way bro."

Kinda looks like Kurzweil's "Law of Accelerating Returns"
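There's a real statistical point under the joke (my framing, not Leahy's exact argument): a calibrated forecaster's probability is a martingale, i.e. the expected value of your next forecast equals your current one, so a predictably one-directional drift means you should have updated already. A toy check:

```python
import random

random.seed(0)

PRIOR = 0.3   # current belief that hypothesis H is true (hypothetical)
P_TRUE = 0.8  # evidence reliability: P(e=1 | H) = P(e=0 | not H) (hypothetical)

def posterior(prior: float, e: bool) -> float:
    """One Bayes update on a binary hypothesis from one noisy observation."""
    like_h = P_TRUE if e else 1 - P_TRUE
    like_not = 1 - like_h
    return prior * like_h / (prior * like_h + (1 - prior) * like_not)

# Average the post-update belief over evidence generated from the prior itself
trials, avg = 100_000, 0.0
for _ in range(trials):
    h = random.random() < PRIOR                          # world sampled from prior
    e = random.random() < (P_TRUE if h else 1 - P_TRUE)  # noisy evidence about it
    avg += posterior(PRIOR, e) / trials

print(f"prior {PRIOR:.3f}, expected posterior {avg:.3f}")  # both ~0.300
```

If a forecast only ever moves one way, it wasn't calibrated to begin with; "updating all the way" just takes the expectation early.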

3

SoylentRox t1_j534guy wrote

If you wanted to dismiss Metaculus, you would argue that since it's not a real-money betting market operating over a long period of time, it's not going to work that well. Real money means people only bet when they are confident, and a long timespan means losers lose money and winners gain money, which over time gives the winners larger "votes" because they can bet more.

Over an infinite timespan, the winners become the only bettors.
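A minimal simulation of that reweighting (every number here is made up): nobody ever changes their stated probability, but the bankroll-weighted consensus still converges on the accurate bettors.

```python
import random

random.seed(1)
TRUE_P = 0.7  # true frequency of the recurring event (hypothetical)

# (stated probability, bankroll): half calibrated, half badly miscalibrated
bettors = [[0.7, 100.0] for _ in range(50)] + [[0.2, 100.0] for _ in range(50)]

def consensus() -> float:
    """Bankroll-weighted average of stated probabilities."""
    total = sum(bank for _, bank in bettors)
    return sum(p * bank for p, bank in bettors) / total

print(f"round   0: consensus {consensus():.3f}")  # starts at 0.45
for rnd in range(1, 301):
    outcome = random.random() < TRUE_P
    for bet in bettors:
        p, bank = bet
        # Kelly-style bet at even odds: wealth multiplies by 2p on a hit
        # and 2(1-p) on a miss, so accurate probabilities compound
        bet[1] = bank * (2 * p if outcome else 2 * (1 - p))
    if rnd % 100 == 0:
        print(f"round {rnd:3d}: consensus {consensus():.3f}")
```

Over enough rounds the losers' "votes" shrink toward zero and the market speaks with the winners' voice; a play-money site doesn't get that automatic reweighting, which is the core of the objection.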

As for AGI in 2027: sure. I mean, it's like predicting the first crossing of the Atlantic by plane once planes are already flying shorter distances all over. It's obviously possible.

6

Nervous-Newt848 t1_j53rk87 wrote

OpenAI isn't the only company working on AGI... There are other companies and governments working on this, especially China.

5

icedrift t1_j534dwy wrote

Does Metaculus only poll people in the field and verify credentials, or can anybody submit an estimate? If it's the latter, why put any stock in it? AI seems like one of those things that attracts a lot of fanatics who don't know what they're talking about.

Polls of industry veterans tend to hover around a 40% chance of AGI by 2035.

4

No_Airline_1790 t1_j54haff wrote

2030 was the date given to me by a source in early 2022, but then in mid-2022 (July 5th) I was told by the source that something happened unexpectedly that jumped the timeline to 2027. So I am inclined to agree.

4

No_Ninja3309_NoNoYes t1_j5493t8 wrote

"Weakly general" sounds strange to me. Sounds like almost human. I think we need some sort of minimal requirements, otherwise we might be talking about different things.

I think AGI has to at minimum:

  • be multimodal
  • be embodied
  • know how to learn
  • be able to follow a chain of arguments
  • be able to communicate with autonomy
  • understand ethical principles

And there are many other things, but these seem hard enough. I think the first two are doable by 2027. Not so sure about the others.

I know how people love to talk about exponential growth. But let's not forget that something has to drive it. Deep learning has been driven by GPUs and the abundance of data, and neither is an inexhaustible resource.

3

Sea-Cake7470 t1_j54bowa wrote

Hear me out... it's probably this year or the next at max... no further than that...

0