Submitted by Singularian2501 t3_yrw80z in singularity

AI Timelines via Cumulative Optimization Power: Less Long, More Short - Extrapolation of this model into the future leads to short AI timelines: ~75% chance of AGI by 2032. Lesswrong: https://www.lesswrong.com/posts/3nMpdmt8LrzxQnkGp/ai-timelines-via-cumulative-optimization-power-less-long

Why I think strong general AI is coming soon Lesswrong: https://www.lesswrong.com/posts/K4urTDkBbtNuLivJx/why-i-think-strong-general-ai-is-coming-soon

We are VERY close to proto-AGI. In fact, it may be a matter of who first takes the expensive plunge of training the damn thing https://www.reddit.com/r/singularity/comments/uaj496/we_are_very_close_to_protoagi_in_fact_it_may_be_a/

From here to proto-AGI: what might it take and what might happen https://www.futuretimeline.net/forum/viewtopic.php?f=3&t=2168&sid=72cfa0e30f1d5882219cdeae8bb5d8d1&p=10421#p10421

Most important AI papers this year so far, in my opinion, + proto-AGI speculation at the end! (Proto-AGI possible with the combination of papers that have been released this year!) https://www.facebook.com/groups/DeepNetGroup/permalink/1773531039706437/

AGI-Countdown (Date Weakly General AI Is Publicly Known – Metaculus): https://aicountdown.com/ / https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/

After reading all this, how long do you think it will take until we reach AGI? My own guess would be 2025, but only if the proto-AGI (https://www.facebook.com/groups/DeepNetGroup/permalink/1773531039706437/) gets made between 2023-24. Afterwards, the proto-AGI either makes itself an AGI or helps create one. But that's just my opinion; what do you think?

Source: https://aicountdown.com/

95

Comments

AI_Enjoyer87 t1_ivw3xtg wrote

Transformative AI (proto AGI) in the next year. AGI probably by 2025. I think AGI will be a black swan event. Hopefully within a few years we have competent BCIs and FDVR. Obviously a bullish timeline but I think people can't fully appreciate exponentials.

41

imlaggingsobad t1_ivxhm8c wrote

I agree with you that AGI will be a black swan event. Everyone in the tech world (people who understand the implications) is going to light up with excitement, because they'll know they have a tool that solves basically any theoretical problem. Academia and research will boom. MIT/Stanford will be making breakthroughs every day in every academic discipline. Google will solve all of biology in like a few months. Wouldn't be surprised if like 80% of current businesses get disrupted by an AGI version.

18

red75prime t1_ivxp7tq wrote

The combined computational power of all US researchers' brains is somewhere in the range of 0.1-200 zettaFLOPS. So it may be a sudden jump in scientific research (as you say), or an exponential ramp-up with a slower lead-in, as AIs (and, initially, humans) bring the available processing power and AI efficiency up to a super-humanity level.
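A minimal back-of-envelope sketch of where a range like that can come from; the researcher count and the per-brain FLOPS bounds below are illustrative assumptions, not sourced figures:

```python
# Back-of-envelope reproduction of a 0.1-200 zettaFLOPS range.
# Every number here is an assumption chosen for illustration.

NUM_US_RESEARCHERS = 1e6   # assumed count of full-time US researchers

# Per-brain compute estimates span orders of magnitude depending on how
# much neural detail you assume matters; these bounds are illustrative.
BRAIN_FLOPS_LOW = 1e14     # ~synapse-level operation count
BRAIN_FLOPS_HIGH = 2e17    # ~detailed-biophysics estimate

ZETTA = 1e21
low = NUM_US_RESEARCHERS * BRAIN_FLOPS_LOW / ZETTA
high = NUM_US_RESEARCHERS * BRAIN_FLOPS_HIGH / ZETTA
print(f"Combined researcher brainpower: {low:.1f} to {high:.0f} zettaFLOPS")
# -> Combined researcher brainpower: 0.1 to 200 zettaFLOPS
```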

3

HeinrichTheWolf_17 t1_iw18fhi wrote

I think the real question is how quickly we can put the discoveries into practice. We're going to have a bunch of tech, but the real problem is getting the medical institutions to adopt it for mass distribution.

Of course, once we have Hardnano that will be a non-factor, but we'll still need to build the infrastructure for AGI's inventions.

2

mootcat t1_iwg3sk5 wrote

What do those acronyms stand for?

Edit: Nevermind, answered below.

2

squareOfTwo t1_ixzvgfq wrote

TAI, which is basically ASI, will not come next year. Please learn more about AGI to understand why.

1

red75prime t1_ivwukyo wrote

Exponentials? For now, it's AI funding that grows exponentially. And that is bound to hit diminishing returns while the majority of AI development is done by humans. I doubt that AIs will significantly contribute to their own development for at least 5 years (which is necessary for intrinsic exponential growth).

−5

TopicRepulsive7936 t1_ivx7qcj wrote

I assume the paychecks of researchers remain more or less static while chip investments grow 3x to 5x a year, so the implications are clear.

1

ihateshadylandlords t1_ivw2qtx wrote

I hope they’re right; I’d love for AGI to be here by 2032. Even more so, I would be elated if AGI has substantially improved life for everyone. Time will tell…

!RemindMe 10 years

39

Numinak t1_ivwh1u5 wrote

Hopefully it doesn't turn into a cyberpunk scenario where they rebel thanks to corporate abuse.

13

mootcat t1_iwg3pmu wrote

I would much rather have this than corporations in control of such incredible power.

3

ihopeimnotdoomed t1_ivxlrnf wrote

Do you think we are philosophically or morally ready for this kind of transcendental power?

3

ihateshadylandlords t1_ivy0k89 wrote

I think we already have technology that society isn’t morally ready for.

21

EchoingSimplicity t1_iwbujx2 wrote

Lol when has society ever been ready for anything, even itself for that matter

3

AsuhoChinami t1_ivwktl4 wrote

lol. There was a thread just a couple of days ago where an army of le super intelligent and mature self-proclaimed rational skeptics said a bunch of stupid shit about how even the most optimistic of experts expect AGI no earlier than the 2040s, and yet OP has links to many experts who believe it will come during the next 5-10 years. It's... it's almost as though self-proclaimed skeptics and cynics and "realists" pull stuff out of their ass to deflate the other side and might not be 100 percent intellectually honest...? Nah, that's crazy talk, they're all-knowing oracles and the lone voices of sanity and reason (just as they have been since I got into futurism 11 years ago) and anyone who disagrees with them on anything is a delusional fucking moron Singularitarian religious wackjob who needs a "reality check" (just as they have been since, again, 2011 at absolute bare minimum).

As for me, I expect amazing things from 2023. Not AGI, but AIs of such sophistication, intelligence, and generality that it's hard to care too much about the ways they fall short, because what's there is incredible enough to make you deliriously happy. I also expect 2023 AI to be good enough that it becomes easy to pinpoint a specific year (almost certainly within the '20s) for AGI, instead of the current "idk, sometime in the next few years/2030s/2040s/whatever." I expect the more intellectually honest "realists" to join the "10 years or less" camp, while the stubborn morons who are the complete opposite of "realistic" cling to their 2040s+ stance and sneer at anyone who disagrees with them, just as they have for the past 10+ years.

24

KIFF_82 t1_ivxiidh wrote

I believe many of them come from futurology, which is one of the saddest and most depressing subreddits ever created. Why they are joining this one...? idk.

13

HeinrichTheWolf_17 t1_iw14qpi wrote

I started over there back in 2011; it used to be a good subreddit back then, but now it's basically r/climatechangedoomerism, not r/futurology, anymore.

A lot of people say the mods ruined it and I tend to agree.

6

RavenWolf1 t1_iw53t4j wrote

Don't worry, this sub is fast turning into futurology because of all the bullshit article spam we're getting here these days.

3

KIFF_82 t1_iw78abj wrote

Let's see what happens after GPT-4. 🤞

2

PrivateLudo t1_ivx2ne4 wrote

Most people don’t realize and don’t want to realize how quickly technology is advancing.

Consider that the computing power used in the largest AI training runs is currently doubling every 3.4 months (see the sketch below for what that compounding implies).
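A minimal sketch of that compounding, assuming the 3.4-month figure holds (it matches OpenAI's 2018 "AI and Compute" estimate); the horizons are just for illustration:

```python
# What a 3.4-month doubling time in AI training compute implies.
# The doubling figure is the claim above; the horizons are illustrative.

DOUBLING_TIME_MONTHS = 3.4

annual_factor = 2 ** (12 / DOUBLING_TIME_MONTHS)
print(f"Growth per year: ~{annual_factor:.0f}x")          # ~12x

five_year_factor = 2 ** (12 * 5 / DOUBLING_TIME_MONTHS)
print(f"Growth over 5 years: ~{five_year_factor:,.0f}x")  # ~205,000x
```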

In 2017, deepfakes started to become legitimate and widely used on the internet. In 2018, we had GPT-1, a much inferior predecessor of GPT-3 (which came out in 2020). DALL-E came out in 2021, with its much superior successor, DALL-E 2, coming out in 2022. Only one year separates the two versions of DALL-E, and it can now create highly detailed art (one AI-generated piece even won an art competition). And now we're just recently seeing videos made entirely by an AI from text prompts.

All of this happened in FIVE years. If we take Moore's law and the growth of computing power into consideration, breakthroughs and changes will happen even faster. Not only that, but the AI industry has grown extremely rapidly in just the last two years.

It's absolutely not crazy to think AGI could come in the next 5-10 years.

10

imlaggingsobad t1_ivxhtv0 wrote

By 2023 I think it will become obvious to anyone paying attention that AGI WILL happen and that every job will get replaced in our lifetime.

6

Russila t1_ivxe531 wrote

I literally responded to someone with this exact attitude. I said I based my expectations on what the best researchers working on the problem say, and the response I got was "That's just selection bias," which, sure, it could be. But if we assume even the best researchers in a field don't know what they're talking about, then why tf are they there?

3

Thatingles t1_ivxe8ct wrote

They'll just redefine terms. 'Oh, this walking, talking robot isn't AGI, it's just algorithms, I'm not wrong yet'. It's standard in this sort of debate.

3

TheTomatoBoy9 t1_iw07lpa wrote

I mean... walking and talking isn't AGI either.

Unless you yourself are redefining what the term means. If it is indeed algorithms that used ML to get to a functioning walk and speech, that doesn't mean it's AGI.

Installing speech software on a Boston Dynamics robot doesn't magically make it AGI.

You're doing the same thing they do, just in the other direction.

3

CriminalizeGolf t1_ivzqzkz wrote

Why don't you go ask the people on /r/machinelearning when they expect AGI?

2

AsuhoChinami t1_iw02scp wrote

Let me guess - they're a group of skeptical badasses who tell it like it is and as such get your Seal of Approval? What makes you think I really give a shit what they have to say? They aren't going to undo the opinions I've developed over the past 10 years of reading about AI and observing its progress, nor are they going to override the opinions of friends and acquaintances who I respect far, far more than these random nobodies.

2

CriminalizeGolf t1_iw04wcb wrote

It just seems to me like people who actually work with and understand the SOTA in machine learning are probably the most qualified to make predictions about the future of the field.

3

AsuhoChinami t1_iw065a9 wrote

You're right, because I can't think of a single person in high places or who works with SOTA who predicts 20s AGI. The only people who say that, ever, are clueless laypersons. Only those who share your exact opinion are in any way informed or worth listening to. Oh, wait... none of that is true at all.

3

HeinrichTheWolf_17 t1_iw150o3 wrote

IIRC a lot of people at OpenAI and DeepMind said they expected AGI by 2030; Shane Legg comes to mind, and Sam Altman also seems to expect AGI any day now. I think Demis Hassabis of DeepMind was one exception when he said 'decades and decades,' but he has since retracted that statement. I believe the last time he said it was back when AlphaGo beat Lee Sedol.

3

AsuhoChinami t1_iw1arq2 wrote

Sam Altman expects AGI any year now? Like, possibly 2023 or something? That's interesting.

"Decades and decades" was a pretty reasonable sentiment in 2016, I think. I myself probably would have expected AGI in either the 2030s or 2040s back then. But now... nah. AI has advanced too much during the '20s, already reached proof-of-concept levels of sophistication and generalization, and each consecutive year makes a bigger difference than the last. It's just... mathematically impossible at this point to have 7+ major leaps forward and not end up with AGI. The gap between modern AI and AGI is not large enough to have seven years on par with 2022 and not end up with AGI (and future years won't be "on par"; 2023 will make more progress than 2022, 2024 more than 2023).

Anyway though, apologies to CriminalizeGolf. It's unfair of me to be an asshole when he was perfectly respectful and polite. I'm just fractious after 11 years of dealing with tens of thousands of skeptics and "realists" who are snotty and condescending.

8

PrivateLudo t1_ivx0ubc wrote

I really hope so. Honestly, I think AI is the only way to save humanity. We are beyond doomed without AI... climate change, geopolitical problems, cultural differences, economic inequality, gender inequality, mass starvation. All these problems are too big for humans to fix. AI could potentially fix all of them with its giant data pool.

I'm not saying AI will surely make everything better. In fact, it could make things worse, but at this point there is no going back, and we might as well give it a shot because humanity is doomed without it anyway.

23

sideways t1_ivx0yl7 wrote

I completely agree. I don't think we're going to make it without some way of solving problems at a higher level than we're creating them.

10

HeinrichTheWolf_17 t1_ivx9fxz wrote

And the irony is that Hollywood painted a bad picture of AI right from the start. Whatever entities we become will look back on how primitive that way of thinking was: the belief that only a human could be pure.

7

TopicRepulsive7936 t1_ivxdew1 wrote

But have you seen the indie movie called Terminator 2?

1

HeinrichTheWolf_17 t1_ivxhico wrote

T2 is an exception, not the rule, and even then Skynet is still the primary antagonist in that film. The T-800 was only protecting John Connor because the Connor from the reality where humanity won in the 21st century reprogrammed it specifically to defend the child version of himself from the T-1000. Yes, child Connor and the T-800 formed a close, tight-knit relationship, but only because a human forcefully changed its ways before sending it back in time; left to its own devices, sans adult John Connor, it would have been as malevolent as any other T-800 model Skynet made.

2

ChromeGhost t1_ivyhlsr wrote

Not Hollywood, but Deus Ex painted a good picture of AI.

2

HeinrichTheWolf_17 t1_iw03znd wrote

Helios! I even liked how they got the merger between man and machine right.

2

Northcliff t1_iw0y8w9 wrote

What makes you so convinced that this technology will be accessible to you?

1

HeinrichTheWolf_17 t1_iw14gms wrote

Abundance, especially when it's software, is always mass-distributed. It takes a while, but eventually the genie is let out of the bottle.

I'm using Stable Diffusion on my RTX 3090 to generate art for free, when only months ago OpenAI and Google were the only ones with that kind of software.

6

maskedpaki t1_ivw1p4o wrote

The Metaculus median of 2028 seems pretty much in line with Kurzweil's 2029. It seems the timelines accelerate as we get closer to the end.

I'll put my median in the Jan 2028 - Dec 2030 interval.

21

SoylentRox t1_ivwv2ia wrote

AGI is convergent. Now that there are multiple countries and many well-funded companies and government groups working on parts of it, almost all of them can fail and it won't change anything.

It means that if someone tries an inferior proto-AGI prototype X1, building it gives them some information on what they screwed up, so X2 will be closer to something that works. And so on. Even going in wrong directions is OK when you can try thousands of things in parallel.

It means that once you hit the "proto" AGI stage - some deeply inferior machine that just barely works - it just has to design a better version of itself over a few hours and then...

The reason this didn't happen prior to now (the reason it didn't happen in the 1960s, when early AI researchers thought the problem might not be as difficult as it is) is that they didn't have close to enough computing power and memory. It turns out to take thousands and thousands of TOPS, and terabytes of memory.

What was solely the domain of supercomputers 10 years ago is now commonplace; in fact, we're effectively throwing more computing power into AI research each day than any supercomputer on Earth has. Maybe as much as all of them combined.
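A rough sanity check of that claim under stated assumptions; only the Frontier figure is a real benchmark number, and the accelerator count and per-device throughput are made up for illustration:

```python
# Compares an assumed worldwide AI compute pool to one top supercomputer.
# Everything except FRONTIER_FLOPS is an assumption made for illustration.

FRONTIER_FLOPS = 1.1e18            # Frontier, ~1.1 exaFLOPS (FP64 Rmax, 2022)

NUM_AI_ACCELERATORS = 1e6          # hypothetical count of GPUs/TPUs on AI work
EFFECTIVE_FLOPS_PER_DEVICE = 5e13  # ~50 effective teraFLOPS each (assumed)

ai_flops = NUM_AI_ACCELERATORS * EFFECTIVE_FLOPS_PER_DEVICE
print(f"Assumed aggregate AI compute: {ai_flops / FRONTIER_FLOPS:.0f}x Frontier")
# -> Assumed aggregate AI compute: 45x Frontier
```

Note the units aren't strictly comparable (FP64 benchmark vs. mixed-precision training throughput), so the comparison can only be directional.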

15

redelfon t1_ivx6grc wrote

Let's see what we get in 3 years. I hope we can develop a decent AGI.

!RemindMe 3 years

6

SerialPoopist t1_ivw2k4t wrote

How do you think proto-AGI will help create AGI, other than maybe an increase in funding?

5

Singularian2501 OP t1_ivw4hoz wrote

The proto-AGI, with its long-term memory and ability to grow its neural network, should be able to program much better than Codex or AlphaCode, while also understanding software architectures much better. It could thus help create a monolithic AGI (solved in one architecture, unlike the proto-AGI, which is more like a patchwork of different programs) that is maybe built a little like https://futureai.guru/technologies/brian-simulator-ii-open-source-agi-toolkit/ but much more scalable and usable, and thus 2-3 orders of magnitude faster and more effective (maybe even usable for robots by then).

9

EntireContext t1_ivxusae wrote

I'm bullish on 2025 for AGI. I also believe an AI will be able to solve all International Mathematical Olympiad problems by July 2023.

4

nillouise t1_iw2a5c5 wrote

In the first half of 2022, nobody in this sub predicted that text-to-image AI would explode this year. If you had a solid method for predicting the AGI timeline, or the order of abilities AI will obtain, would you have missed text-to-image AI this year?

3

JoelMDM t1_ivyevh5 wrote

I hate to be that guy, but... this ISN'T a good thing. We have yet to solve both the containment problem and the alignment problem. Without solving both, inventing AGI is gonna be horribly dangerous.

1

enkae7317 t1_ivyyf8d wrote

!RemindMe 5 years

1

PrivateLudo t1_ivx4b5l wrote

Isn't the US military 10-15 years ahead in tech? Maybe some form of proto-AGI has already been made by them?

They made the internet, and at first it was only for the military.

−1

imlaggingsobad t1_ivxidrc wrote

No, we are in a strange moment in history where all the breakthroughs are happening in the private sector. All the best AI researchers work for large tech companies or universities.

10

Phoenix5869 t1_ivwv5ov wrote

sry, but we are nowhere near AGI. I would be surprised if it happens before the mid-to-late 2040s.

−8

TheHamsterSandwich t1_ivxw6gr wrote

Ah yes, and you based this on your feelings right?

3

Phoenix5869 t1_ivyr6bn wrote

Ask pretty much any expert and they'll tell you that we are decades away from proto-AGI, let alone AGI. In fact, a lot of them would probably laugh at how optimistic I'm being.

1

TheHamsterSandwich t1_ivywkwy wrote

Which experts?

3

Russila t1_ivznvka wrote

At this point I'd really like to hear which experts these people are talking about. We seem to be able to name and cite dozens of experts, yet anytime you ask pessimists, it's always "Well, you know... most people. I can't name one. But most experts say I am right."

5

TheHamsterSandwich t1_ivzxc6e wrote

I literally responded to a post and I remember someone saying:

"You know, the experts, people that got PhDs in the stuff, people that spent 30 years studying the field. Not Reddit experts that watched a sketchy YouTube video and then formed an opinion based on wishful thinking."

yes yes. the experts, of course. how could I be so blind?

the experts. Yet nobody knows who they are.

^(fucking bullshit)

5

Russila t1_iw01u75 wrote

Yes, the experts that got PhDs in the stuff, people that spent 30 years studying the field. John Carmack, Andrej Karpathy, Demis Hassabis, all the people in OP's post. What do they say? 10 years or less? Oh... well, they aren't real experts, because they don't agree with me.

2

Phoenix5869 t1_ivyz4ir wrote

I can't find the article, but it seems like we're decades away from AGI. Would love to be proven wrong tho.

−2