AsheyDS t1_j3orcp9 wrote

As a prediction, this is utterly meaningless. I'm not even sure if this is useful at all as a gauge of anything.

8

imlaggingsobad t1_j3oy627 wrote

it's not just a prediction, it's a crowdsourced prediction. Statistically, crowdsourcing does better at converging to the actual answer.

19

Cult_of_Chad t1_j3p0fgx wrote

>Statistically, crowdsourcing does better at converging to the actual answer.

This should be the top reply.

13

AsheyDS t1_j3pb1x8 wrote

But what is the crowd? Is this based on a sampling of all types of people, or enthusiasts being enthusiastic?

5

footurist t1_j3qdd01 wrote

Yes, this is the key question. If I were building such a website, I'd try to implement some way to categorize the crowd: "30% expert, 50% enthusiast, 20% hobbyist" or something like that. Of course, getting any real certainty on that would be hard, but it turns out that if you ask nicely and with a tone of seriousness, most people just tell the truth, so maybe not even that hard.

1

will-succ-4-guac t1_j3rme50 wrote

> Statistically, crowdsourcing does better at converging to the actual answer.

Statistician here, and this is a good example of a relatively meaningless statistic, to be honest. Crowdsourcing statistically tends to be more accurate than just asking one person, in the average case, for what should be mathematically obvious reasons.

But the “average case” isn’t applicable to literally every situation. I would posit that when we start to talk about areas of expertise that require a PhD to even begin to be taken seriously for your opinion, crowdsourcing from unverified users starts to become a whole lot more biased.
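To illustrate the point, here's a toy simulation. The "true" year, the per-person noise, and the bias are all made-up numbers, nothing measured from real forecasters: averaging many independent guesses cancels the noise, but a bias shared by the whole crowd survives the averaging untouched.

```python
import random

random.seed(0)
TRUE_YEAR = 2040      # hypothetical "actual" AGI year, for illustration only
SPREAD = 15.0         # per-person noise in years (made up)

def crowd_estimate(n, bias=0.0):
    """Mean of n noisy guesses centered on TRUE_YEAR + bias."""
    return sum(random.gauss(TRUE_YEAR + bias, SPREAD) for _ in range(n)) / n

# Unbiased crowd: independent noise averages out as n grows.
print("1 guess:     ", round(crowd_estimate(1), 1))
print("10k guesses: ", round(crowd_estimate(10_000), 1))   # lands close to 2040
# A shared bias (e.g. everyone guessing ~12 years early) does NOT average out:
print("biased crowd:", round(crowd_estimate(10_000, bias=-12.0), 1))  # lands near 2028
```

So a large crowd of enthusiasts can converge very precisely on the wrong answer.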

1

[deleted] t1_j3os5qp wrote

[removed]

2

AsheyDS t1_j3ovmd1 wrote

I just feel like a lot of people are seeing some acceleration and think that this is all of it. What I think is that we'll continue seeing regular advances in tech, AI, and science in general. But the 30's will be the start of AGI, and the 40's will be when it really takes off (in terms of adoption and utilization). Even a guess of before 2035 is, in my estimation, an optimistic projection where everything goes right and there aren't any setbacks or delays. But just saying the 30's is a solid guess.

0

imlaggingsobad t1_j3oyoyq wrote

Your prediction and the 2027 prediction could both be right. DeepMind and OpenAI could have something that looks like AGI in 2027, but they keep it within the lab for another 3 years, just testing it and building safeguards. Then in the 30s they go public with it and it begins proliferating. Then maybe it takes 10 years for it to transform manufacturing, agriculture, robotics, medicine, and the wider population, etc., due to regulation, ethical concerns, and resource limits.

9

Baturinsky t1_j3s0gpp wrote

How big do you think the chances are of it going Paperclip Maximizer-level wrong?

1

coumineol t1_j3pxg3w wrote

>But the 30's will be the start of AGI, and 40's will be when it really takes off

I vehemently disagree. How would it take 10 years for such a transformative technology to be optimized and utilized? Do you have a timeline for that 10 years between "start of the AGI" and its takeoff?

3

AsheyDS t1_j3r7vte wrote

I never said it'd be 10 years, though it could be, for all anyone knows. If I said it would be released in 2035 and widely adopted by 2040, I don't think that's unreasonable. But I also believe in a slow takeoff and more practical timelines. Even Google, as seemingly ubiquitous as it is, did not become that way overnight; it took a few years to become widely known and used. We're also dealing with multiple unknowns: how many companies are working on AGI, how far along they are, how long it takes to adequately train them before release, how the rest of the world (not just enthusiasts) accepts or doesn't accept AGI, how many markets will be disrupted and the reaction to that, legal issues along the way, etc.
Optimistic timelines don't seem to account for everything.

Edit: I should also mention that one of the biggest hurdles is getting people to even understand and agree on what AGI is! We could have it for years and many people might not even realize it. Conversely, some people claim we have it NOW, or that certain things are AGI when they aren't even close.

2

gobbo t1_j3rm9j7 wrote

I have ChatGPT in my frickin' pocket most of the day. It's amazing, but still mostly just a testbot, so here I am, kind of meh, even though a few months ago I thought something like it was at least a few years away.

Faster than expected. And yet life carries on much as before, with a little sorcerer's apprentice nearby if I want to bother. What a time!

1

arisalexis t1_j3q2fvz wrote

Did 2022 actually feel like just "some" acceleration to you?

2

AsheyDS t1_j3r91af wrote

Feel? No, not quite. But it's all relative. If one narrows their perspective on what's to come, it could feel like a huge change already. Personally, I think this is just us dipping our toes into the water, so to speak. So yes, "some" acceleration, especially considering how many people think that what we've seen so far is half or most of the way to AGI.

1

420BigDawg_ OP t1_j3p3m0d wrote

Who cares if it’s meaningless?

1

AsheyDS t1_j3pbd58 wrote

Fair enough, but it's a thing for a reason. Obviously the date will continue to change, so it could only possibly be a measure of that change. So why is it changing? What is it based on? It would make more sense to say a decade than a specific date or even year.

2

keefemotif t1_j3pjkt6 wrote

What's interesting is that 10 years ago, the prediction of a lot of people I knew was 10 years, and hey, it's 10 years again. I think psychologically, 10 years is about the horizon people have a hard time imagining past but still think is pretty close. For most adults, 20-25 years isn't really going to help their life, so they pick 10 years.

As for the crowdsourcing comment: yikes. We aren't out there crowdsourcing PhDs and open-heart surgery. I know there was that whole crowdfarm article in Communications of the ACM, and I think that's more a degradation of labor rights than evidence of value in random input.

−1

coumineol t1_j3pxmr2 wrote

>What's interesting is, 10 years ago the prediction of a lot of people I knew was 10 years and hey it's 10 years again.

That may be true for "the people you know", but if you look at the general opinion of people interested in this field, the predictions still started at the 2040s just last year.

3

keefemotif t1_j3qzov0 wrote

Selection bias is certainly a thing, but "the people I know" are generally software engineers with advanced degrees and philosophers interested in AI, so it's a pretty educated opinion, bias and all.

1

coumineol t1_j3r24vc wrote

In that case, maybe educated opinion is worse than the wisdom of the crowds: as you can see from the post, the community prediction for AGI was 2040 last year, which is not "10 years away".

1

keefemotif t1_j3rsn2g wrote

It's 18 years, actually. The point I'm making is that we have a cognitive bias toward estimates of 10-20 years or so, and we also have a difficult time understanding nonlinearity.

The big SingInst hypothesis was that there would be a "foom" moment where we jump to super-exponential progress. From that point of view, you'd have to start talking about a probability distribution over when that nonlinearity happens.

I prefer stacked sigmoids, where progress goes exponential for a while, hits some limit (think Moore's law around 8nm), and flattens out until the next curve takes over.
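A quick sketch of what I mean. The midpoints and rate here are made-up numbers purely for illustration, not any kind of forecast: each logistic curve saturates, and total "progress" only keeps climbing because the next curve kicks in.

```python
import math

def sigmoid(t, midpoint, rate=0.3):
    """One S-curve: slow start, an exponential-looking middle, then saturation."""
    return 1.0 / (1.0 + math.exp(-rate * (t - midpoint)))

def stacked_progress(t):
    """Sum of successive S-curves: each paradigm hits its limit and plateaus,
    then a new paradigm starts another curve. Midpoints are illustrative."""
    return sum(sigmoid(t, m) for m in (1990, 2010, 2030))

# Sampled mid-curve, growth looks exponential; sampled near a plateau, it
# looks stalled, which is why extrapolating from any single stretch misleads.
for year in (1985, 2005, 2025, 2045):
    print(year, round(stacked_progress(year), 2))
```

From inside any one curve, you can't easily tell whether you're heading for a plateau or a handoff to the next paradigm.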

Training giant neural nets as language models is a very important development, but IMHO AlphaGo was more interesting technically, with its combination of value and policy networks, versus billions of parameters in some multilayer net.

2