Comments


blueSGL t1_j9qru8n wrote

I want to know how many people's timelines predicted ChatGPT or DALL-E 2 or AlphaFold happening when they did.

Otherwise it's just the classic "predict a game changer is going to happen once I've retired"

104

HelloGoodbyeFriend t1_j9ragu5 wrote

Would be amazed if anyone accurately predicted an LLM chatbot getting to 100m users 2 months after launch in 2022.

46

blueSGL t1_j9rf2n4 wrote

Now come on, be fair. You know that's not the point I'm making at all.

It's people working in ML research being unable to accurately predict technological advancements, not user numbers.

You might find this section of an interview with Ajeya Cotra (of biological-anchors-for-forecasting-AI-timelines fame) interesting.

Starts at 29:14 https://youtu.be/pJSFuFRc4eU?t=1754

She talks about how several benchmarks were passed early last year that surveys of ML workers had given a median date of 2026.
She also casts doubt on people who work in the field but are not specifically working on forecasting AGI/TAI as a source of useful information.

17

genshiryoku t1_j9svy3v wrote

No, the reason the median prediction barely moved is that we still have the exact same bottlenecks and issues on the path to AGI. These haven't been solved over the past 6 years. So while we have made great strides with scaling up Transformers, and specifically large language models that display emergent properties, the real issue still lurks behind the scenes.

The main issue and bottleneck is training data: we're rapidly running out of usable data on the internet, with the biggest models already being trained on 30% of all relevant data. If rates continue like this we might run out of usable data between 2025 and 2027.
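
A rough back-of-envelope sketch of that extrapolation (only the 30% figure comes from the paragraph above; the start year and the yearly growth rates are illustrative assumptions, not figures from the comment):

```python
import math

# Back-of-envelope version of the data-exhaustion claim above.
# Assumption: training-data demand grows by a fixed factor each year.
fraction_used_now = 0.30   # share of usable internet data already consumed (figure from the comment)
start_year = 2023          # assumed "now"

for annual_growth in (1.5, 2.0, 3.0):  # hypothetical yearly growth factors in data demand
    years_left = math.log(1.0 / fraction_used_now) / math.log(annual_growth)
    print(f"growth {annual_growth:.1f}x/yr -> usable data exhausted around {start_year + years_left:.1f}")

# Depending on the assumed growth rate this lands roughly in 2024-2026,
# the same ballpark as the 2025-2027 range claimed above.
```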

We know we can't train models on synthetic or AI-generated data because of the overfitting problem that introduces. So we essentially need to either find some way to generate orders of magnitude more data (an extremely hard problem, if not outright impossible), or have breakthroughs in AI architecture so that models can be trained on less data (still a hard problem, and linear in nature).

The massive progress we're seeing right now comes simply from scaling models bigger and bigger and training them on more data, but once the data stops flowing these models will rapidly stagnate and we will enter a new AI winter.

This is why the median prediction barely changed. We'd need to solve these fundamental bottlenecks and issues before we'll be able to achieve AGI.

Of course, there is also a slight outlier possibility that AGI emerges before the training data runs out over the next 2-4 years.

So essentially, while the current progress and models are very cool and surprising, they are within the realm of expected growth, because no one expected the AI boom to slow down before the training data ran out. What we're dreading is 2-4 years from now, when all usable internet data has essentially been exploited.

8

sideways t1_j9swo39 wrote

Couldn't multimodal models capable of incorporating realtime non-textual (visual, auditory, kinesthetic, etc) data be a solution?

The current generation has pretty much mastered language, so more text seems kinda redundant anyway.

7

beders t1_j9u0ypl wrote

The current models have not mastered language at all. They don’t know grammar. They just complete text.

It’s like claiming you know Spanish because you can pronounce the words and “read” a book. You can utter the sounds correctly but you have no clue what you are reading.

−4

rekdt t1_j9ufkwd wrote

If I can respond to a question someone asked me in Spanish, then I know Spanish.

7

beders t1_j9ug9kl wrote

−1

blueSGL t1_j9ukv6h wrote

I always found that silly.

Which individual parts of the brain are conscious? Or is it only the brain as a gestalt that is conscious?

3

Representative_Pop_8 t1_j9vlnsa wrote

In the Chinese room it is not the operator that knows Chinese; it is the setup of rules + operator that clearly knows Chinese. An LLM doesn't need to be conscious to master language.

1

beders t1_j9wj8hw wrote

The operator doesn’t know Chinese. Do I need to spell out the analogy to chatGPT?

ChatGPT is great at word embeddings and completion but is an otherwise dumb algorithm. Comparing that to humans' ability to express themselves with language is useless.

I mean if you don’t get the Chinese room experiment you might think Eliza is a master of psychology.

0

Representative_Pop_8 t1_j9wsu89 wrote

You're not getting it: the operator doesn't know Chinese, but the whole setup does. ChatGPT clearly understands several languages; it doesn't need to be conscious to understand.

1

beders t1_ja1diap wrote

0

Representative_Pop_8 t1_ja1f2qz wrote

It knows the language. It also hallucinates, but in pretty good English. Humans can also invent fake stories; that doesn't mean they don't know the language.

1

beders t1_ja1g5oh wrote

It’s almost as if it would just be a text completion engine … which it is.

0

beders t1_ja1dnlz wrote

If you can’t reason about language you can’t understand it. ChatGPT is the operator.

0

Representative_Pop_8 t1_ja1eoph wrote

ChatGPT can reason about language. It is not equivalent to the operator; it is equivalent to the whole Chinese room system, which clearly understands.

1

sideways t1_j9vja6e wrote

Language mastery is a function of communication and problem-solving ability in that language. Understanding should be judged based on results, not on some mysterious inferred grasp of grammar.

1

beders t1_j9wq47s wrote

You can't just measure how well or how interestingly a text completion engine spits out words and proclaim it has "mastery". Frankly, that is BS.

1

sideways t1_j9wtpzn wrote

Of course... that's why I would never claim that a parrot had mastered language. It may know words, but it can't use language for creative, communicative problem solving. LLMs can.

1

fangfried t1_j9snmqy wrote

There's gonna be something after transformers in the next couple of years, I can feel it.

6

nillouise t1_j9s98xr wrote

I asked the same question, "how do you predict the order in which AI will obtain different capabilities?", and nobody knew how to answer it. I think that obviously shows there is no method for tackling this problem, so people's AI timelines are useless.

But the funnier thing is that nobody knows the future.

4

IntrepidHorror5986 t1_j9sh7ri wrote

You are high on hopium. AGI and LLMs are absolutely unrelated! You may as well ask how many people predicted pills for erectile dysfunction.

−9

datsmamail12 t1_j9rbamb wrote

I'd be surprised if we don't have one by 2029 at this point. Ray Kurzweil was right all along.

40

pegaunisusicorn t1_j9rh0yh wrote

If he is, it would be Hans Moravec who was right.

−8

Economy_Variation365 t1_j9si7wb wrote

If Ray Kurzweil was right, it would be Hans Moravec who was right?

Perhaps you could explain...

15

Thatingles t1_j9r6dxz wrote

The most interesting thing about LLMs is how good they are, considering they're based on quite a simple idea. Given enough data and some rules, you get something that is remarkably 'smart'. The implication is that what you need is data + rules + compute, but not an absurd amount of compute. The argument against AGI was that we would need a full simulation of the human brain (which is absurdly complex) to hit the goal. LLMs have undermined that view.

I'm not saying 'it's done', but I do think the SOTA has shown that really amazing results can be achieved by building large data sets, applying some fairly straightforward rules, and using sufficient computing power to train the rules on the data.

Clearly visual data isn't a problem. Haptic data is still lacking. Aural isn't a problem. Nasal (chemical sensory) is still lacking. Magnetic, gravimetric sensors are far in advance of human ability already, though the data sets might not be coherent enough for training.

What's missing is sequential reasoning and internal fact-checking, the sort of feedback loops that we take for granted (we don't try to make breakfast if we know we don't have a bowl to make it in, we don't try to buy a car if we know we haven't learnt to drive yet). But these are not mysteries, they are defined problems.

AGI will happen before 2030. It won't be 'human' but it will be something we recognise as our equivalent in terms of competence. Fuck knows how we'll do with that.

31

sticky_symbols t1_j9qsmt6 wrote

Wow. I just don't get it.

This was done before chatGPT and most people hadn't used gpt 3 before that.

27

[deleted] t1_j9qt3c3 wrote

[deleted]

30

CubeFlipper t1_j9s21ls wrote

At this point I'm inclined to think that if AGI actually did arrive in 2025 and this poll was conducted again in 2026, people would still give it roughly the same timeframe.

18

Safe_Indication_6829 t1_j9rqbtv wrote

ChatGPT is a turning point, I think, because the average person can see its effects firsthand. People pulled the fire alarm back in the GPT-3 days (only 3 years ago, if you believe that), but now even Vox is writing about AI alignment issues.

13

SurroundSwimming3494 t1_j9rh2go wrote

>This was done before chatGPT

GPT3 is very similar to ChatGPT, to my knowledge, and that had been out since 2020, 2 years before the survey.

>most people hadn't used gpt 3 before that.

The people who were surveyed were researchers and experts who most likely had familiarity with GPT3.

I don't really think the advent of ChatGPT would have shortened timelines all that much had the survey been conducted after it was released, if I'm being honest.

9

sticky_symbols t1_j9w3kze wrote

The thing about ChatGPT is that everyone talked about it and tried it. I and most ML folks hadn't tried GPT-3.

Everyone I know of was pretty shocked at how good GPT-3 is. It did change timelines among the folks I know, including the ones who think about timelines a lot as part of their jobs.

1

DukkyDrake t1_j9rq1g6 wrote

The thing they're predicting has nothing to do with anything related to GPT.

4

sticky_symbols t1_j9w3cav wrote

A lot of people who think about this a lot think it does. LLMs seem like they may play an important role in creating genuine general intelligence. But of course they would need many additions.

1

Mrkvitko t1_j9r56vi wrote

It's already happening, we're too dumb to see it and instead we move the goalpost with every new announcement.

22

WeReAllCogs t1_j9s6gci wrote

That's science and "future shock" wrapped in one.

3

Silly_Awareness8207 t1_j9rgkvh wrote

Singularity is AI smart enough to make a better AI. That has always been the goalpost

−2

Mrkvitko t1_j9rm8mi wrote

The original post talks about AGI, not ASI or technological singularity.

8

Silly_Awareness8207 t1_j9s6n8z wrote

If we have AGI, or HLMI as the article calls it, then we have machines that are smart enough to make the next generation of AI, I think. So AGI is enough to trigger the singularity.

3

Mrkvitko t1_j9t4z0f wrote

Is an average or below-average human smart enough to make the next generation of AI?

1

Z1BattleBoy21 t1_j9t7n73 wrote

Imagine a legion of average humans trained to be ML researchers who can't make human errors and work 24/7; I think they could.

1

BenjaminJamesBush t1_j9tf53e wrote

Oh, totally, yes. Imagine an average human who is willing to learn and work on their goals 24/7. Now as u/Z1BattleBoy21 said imagine an army of such average humans.

1

mobitymosely t1_j9tvcdq wrote

That assumes that ASI is even possible at all. We already have a network of 8 billion people collaborating on projects, and they have one big advantage: access to the real world (eyes, hands, labs, factories). It MIGHT be that there are quite diminishing returns if you just model our brains but scale them up in size, speed, and number.

1

Representative_Pop_8 t1_j9vm63x wrote

No, that's not true. To make the next generation of computers you need the cumulative efforts of thousands of engineers, scientists, businessmen, etc. You could have an AI as smart as two very bright humans, and it is unlikely it would develop a better AI on its own.

1

Silly_Awareness8207 t1_j9x17cb wrote

Just have it read books the same way humans do. If it truly is as smart as an average human then it can "stand on the shoulders of giants" just like humans do.

1

AuleTheAstronaut t1_j9r8r24 wrote

Before 2030 is my guess. We’re at full sprint even ignoring the LLM hype

19

PM_ME_A_STEAM_GIFT t1_j9qu8a0 wrote

> ‘HLMI’ was defined as follows:
>
> The following questions ask about ‘high-level machine intelligence’ (HLMI). Say we have ‘high-level machine intelligence’ when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption.

I think the bottleneck here is robotics. We might have human-level intelligence in a digital-only form a lot sooner than we will be able to build a humanoid robot with human-level dexterity, speed and strength. And it will be even longer until such a robot is cheaper than human labor.

15

CubeFlipper t1_j9s2fbd wrote

I agree that robotics would come after software, but I can't imagine the additional time would be very long at all. I'd expect an AGI to have no problem making, in a very short timeframe, the changes required to turn AI robotics into a mature field.

8

Brashendeavours t1_j9rigfl wrote

Human level intelligence is typically not represented well by any measure of physical dexterity or speed.

What are you trying to say?

4

turnip_burrito t1_j9rjqz2 wrote

They're saying physical labor via robotics might be the last part of human capability to be replaced, which to be fair could be considered a form of intelligence.

5

Brashendeavours t1_j9rlgzt wrote

The central argument is regarding the timeline of AGI. The incorporation (or not) of effectors and sensors is irrelevant.

−3

cancolak t1_j9sfnjd wrote

How is that irrelevant exactly? What would humanity have achieved if all we had were minds floating in ether? For any sort of intelligent software to be civilization altering, it needs to have access to a shit ton of sensory data as input and robotics as output, ideally in real time. Otherwise you have a speech bot, which we already have. “Well, if we have AGI, it will figure out the rest” is one of the most intellectually lazy statements I’ve ever read anywhere and unfortunately it’s kind of like this sub’s one commandment. AGI without sensors isn’t intelligent; thoughts in a head aren’t intelligent without input or output. This is a fallacy. If you think this is the case, then ChatGPT should already qualify, why not call it for today?

2

kimjongun-69 t1_j9rcv9s wrote

99% chance of it happening by 2030 is my opinion

13

ReasonablyBadass t1_j9seeio wrote

That seems wildly pessimistic.

I would be shocked if one doesn't exist by 2030

11

Happynoah t1_j9ru3bv wrote

These things always take today’s tech and put it on a linear path. The key inflection is AI that writes code for AI and AI that designs chips for AI. We passed that mark decades earlier than this prediction expected. We are now past the event horizon and in the gravity well.

10

hapliniste t1_j9slkyu wrote

That's for ASI, but we won't reach AGI with just more compute

2

Happynoah t1_j9vj2jm wrote

Correct, if it doesn’t have a motor, endocrine, digestive, and aerobic system it wouldn’t generalize

1

onyxengine t1_j9s57jy wrote

I'm convinced it's happening before 2035.

6

Destiny_Knight t1_j9qvcua wrote

>"Say we have ‘high-level machine intelligence’ when unaided machines can accomplish every task better and more cheaply than human workers. "

I think these experts are getting tripped up with things like human creativity (which requires human emotions).

"every task" is too strict. Should limit it to "majority" of human tasks.

4

NoidoDev t1_j9r5p4g wrote

Deep Learning is not enough. There might be still a lot of work to do. Let's hope we'll get something that's close much earlier.

4

DanDaBruh t1_j9rk1hr wrote

more like 100% chance lmao

4

Ortus14 t1_j9rmhho wrote

Surveys don't predict technology. And who knows if any of these people are working towards AGI.

If you want technological predictions, you need to look at the information put out by people trying to make those predictions, which involve tracking trends in requirements such as cost of computation, cost of energy, funding rates, scaling efficacy, etc.

4

Hailtothething t1_j9sg2in wrote

Quantum AI will be instantaneous. The number of parameters will be incalculable. It won’t be human, it’ll be godly. This is sooner

3

DillyDino t1_j9slxkg wrote

Large Language Models are amazing, but they have massive, massive gaps compared to a true AGI. We still don’t have a good way of augmenting their memory units. But man are people trying at least. Toolformer is the latest paper I read that attacks this idea.

They fundamentally still struggle with common sense reasoning, in much the same way a deep learning model in a car struggles to bridge the gap in common sense reasoning. And we've hit a bit of a wall there, so to speak. We haven't solved this well. More self-attention layers and reinforcement learning guidance won't do it. GPT-4 will be impressive, but 96 layers of transformers becoming 1,000 of them or whatever is still just a bigger function approximation. Extrapolating when we'll solve that missing piece is still just guesswork. That's why I'm amazed when people say AGI will just get solved by 2030 because of an advancement in LLMs.

3

murph1134 t1_j9tvt74 wrote

The biggest constraint to achieving true AGI, IMO, is going to be compute resources and the cost associated with running these massive models. GPT3 is a really impressive technology, but it's also still very limited and nowhere close to true AGI. And, it's currently prohibitively expensive and resource intensive to be rolled out at scale. A true AGI is going to be exponentially more expensive and resource intensive.

The first big breakthroughs in deep learning and neural networks happened in the 60's and 70's. But, deploying those models at scale was impossible given the compute resources at the time - and it wasn't until 2010/2011 that GPUs were fast enough to train deep learning models at scale.

I don't think it's going to take 40-50 years again for compute to catch up, but the fact of the matter is, it's not just going to be a simple "spin up more compute" as these models grow. There's always a balance between software and hardware - and the physical world is always going to be a limiting factor for hardware.

I wouldn't be surprised if we see another "AI winter" - where the research and the software exist, but the hardware constrains the ability to actually get to full AGI. The good news is, AI as it stands today, even without AGI, is really damn useful - and people are finding new and innovative ways to create value with what we already have. So, it's not going to be a full on "winter" for AI, just a stagnation in the ability to deploy new and more powerful models at scale.

2

Sandbar101 t1_j9r2cgb wrote

So… no change? That’s been the average estimation for a while now.

1

challengethegods t1_j9ruxf0 wrote

I feel like half the blame is on the survey itself, which apparently had all kinds of weird/arbitrary questions and asked for probabilities framed in 3 sets: 10 years, 20 years, and 50 years.

When you ask someone to put different probabilities into 3 timeframes like this, they're going to be biased toward lowering at least the first one just to show an increasing probability over time. With the first being 10 years away and the last being 50, it makes sense that every time they run the survey the result makes it seem like everything beyond what is already public and well known is going to take forever to happen.

For the second part of the blame, I'll cite this example:

"AI Writes a novel or short story good enough to make it to the New York Times best-seller list."
"6% probability within - 50 years"

not sure who answered that, but they're probably an "expert"
just sayin'

1

valiction t1_j9s4d3q wrote

2059 seems like a 100% chance to me by then.

1

Talkat t1_j9slkpt wrote

Yeah, unless there is a massive humanity-wide event like a nuclear war, a massive deadly pandemic, etc.

2

HuemanInstrument t1_j9smjby wrote

yeah.. it's happening this year, I've got zero doubt about it.

1

No_Ninja3309_NoNoYes t1_j9sn350 wrote

We need neuromorphic hardware, spiking neural networks, and quantum computers. Even if qubits double every two years, it will take a while. GPT is just static parameters; you need some way to constantly update them. Anyway, an LLM is only one of thousands of required systems. We don't have thousands of labs doing all the required projects; they are all doing more or less the same thing. We are nowhere near that point.

1

HumanSeeing t1_j9szouq wrote

Ok ok, my prediction: I predict with 90% certainty that we will have AGI before 2100, the remaining 10% being the chance that we just die. See, I made a prediction! These are so arbitrary, and literally no one can give a prediction worth anything at the moment, except that things are advancing exceptionally quickly.

1

ecnecn t1_j9t8d9s wrote

The survey just offers two possible answers:

Within two years of that point, or within thirty years of that point.

So will it be within 2 or 30 years? The majority of experts who would expect it within 3, 5, or 10 years must choose the second answer, "within thirty years"...

1

marvinthedog t1_j9tb8y4 wrote

I would really like to have a deep discussion with some of these machine learning researchers because I cannot in a million years fathom how they can hold such a different world view.

1

savagefishstick t1_j9tbrmt wrote

Exponential growth is much faster than that. We will have it by the end of the decade. Think of the smartphone evolution, but this time for AI.

1

techhouseliving t1_j9ttu03 wrote

People need to learn about accelerating acceleration.

1

Erickaltifire t1_j9tulyh wrote

Don't worry. Nuthin will happen until they discover enerjon cubes.

1

knarfomenigo t1_j9uznmh wrote

There is no way to know. Really, any person who gives you a year for the singularity is full of shit.

On the one hand, exponential improvement could make it happen sooner than we could even imagine; many companies and countries don't share their R&D info, so we wouldn't even notice.

On the other hand, many companies like Google that are developing big AIs might face financial problems because of them if they are not very profitable, such as a decrease in the money paid by advertisers or for cloud storage. They are risking their current working model for one in which they are not the monopolistic leaders and which might not be as lucrative as the leading position they have held until now. This is a big danger for AI, because if it turns unprofitable, big tech companies will reduce their investments, thus slowing its development speed. HOWEVER, Satya Nadella said in the last Microsoft presentation on the GPT-Bing integration that "the AI wars have started", so it looks unlikely that investment will go down in the following years.

Here's my "bananas" prediction (PURE FICTION).

In 2023 we will see ChatGPT become just a great user interface, great when combined with powerful APIs with updated content. We will be able to choose whether to use it in internet browser search, but other products will appear allowing us to interact with video, voice, images... all at once. Most people will be unable to use it, though.

By 2025 Bing will be as widely used as Google, because of the functionality of the integration between OpenAI's and Microsoft's software. Experts in this software will make big money offering solutions to help medium-sized companies implement it. Many companies will not adapt and will face big decreases in their revenues. Movements of afraid people will create anti-AI trends on social media; traditional media companies will amplify this fear through anti-AI news, creating social disagreement and political polarization.

By 2030, many European countries will be imposing HUGE bans on AI companies due to the fear and pushback from workers with graphic arts, design, music, and writing backgrounds. Countries like India, China, Turkey, and Russia will invest strongly in these technologies. Conservative Western political parties and the far left will include opposition to big AI companies in their political programs too. Big changes will come in porn, music, and other entertainment industries, in which most of the content will be AI-produced or AI-enhanced.

By 2040 it will become obvious that countries without AI restrictions achieve a higher level of efficiency in certain industries, plus much of the friction created by job destruction will have subsided enough for political parties to forget about it. Then it will become a political strategy to bet on it for military reasons, and both left and right will invest heavily in it to manipulate public opinion.

From then on, it's just a matter of time for AI to develop exponentially, maybe 10 more years, maybe 30, not only for economic but also for military reasons. I don't believe it can be later than 2070 before China or the USA achieve the AI singularity.

1

Representative_Pop_8 t1_j9vkrdx wrote

The report is from before ChatGPT was released; if you did the poll now, I think the average date would be much sooner.

1

AsuhoChinami t1_j9t3t3u wrote

Thoughts: that poll is ridiculously fucking stupid. Jesus christ. This sub gets worse and worse by the day.

−1