Comments
Economy_Variation365 t1_j6bjjnr wrote
It took less than 20 years to reach that milestone! I wonder what reaction those experts had to AlphaGo's success.
Phoenix5869 OP t1_j6bnwo8 wrote
They were only off by 81+ years, don't be so harsh! /s
Buck-Nasty t1_j6bhazp wrote
Paywall removed https://archive.is/fdBnk
throwaway_890i t1_j6cm8tw wrote
This was written during the AI winter.
Lawjarp2 t1_j6cd4rp wrote
It did happen in the next century
GayHitIer t1_j6cg96e wrote
Or two 😆
Same with the "we will not have planes for thousands of years" predictions, and then it happened just after that.
"We will never use nuclear fuel as a power source," and then it happened just after that.
Never say never.
Villad_rock t1_j6dii8f wrote
I always laugh at people who think in centuries
maskedpaki t1_j6g92bp wrote
If I say by 2050, is that also laughable?
TopicRepulsive7936 t1_j6h54u6 wrote
Maybe not laughable but possibly risky because changes could happen 10 or 100 times faster than that.
maskedpaki t1_j6hmvzz wrote
Or 10 to 100 times slower.
The future is hard to predict. Past predictions mostly failed because they were too aggressive, not because they were too conservative.
This one is a rare exception.
TopicRepulsive7936 t1_j6hoa8p wrote
There's actually a good lesson here. Deep Blue was completely predictable decades out, if you believed in continued accelerating returns that is. AlphaGo wasn't. Luckily it was just about a game that time.
maskedpaki t1_j6hq48c wrote
Maybe not decades out, but there were computers that played Go several years prior to AlphaGo, with lower Elo ratings.
In fact, AlphaGo's Elo rating is continuous with previous systems that played Go. It wasn't a breakthrough, just a flashy display because it was against superstar Lee Sedol.
maskedpaki t1_j6g8xg1 wrote
20 years is on the same order of magnitude as 100
It's not that ridiculous
No_Ninja3309_NoNoYes t1_j6cw969 wrote
There were so many AI experts trying to beat Go that they saw many, many problems. So the lesson is that computers can get really good at one thing, provided that there are clear rules.
I think that Generative AI will crash and burn soon. I mean, look at ChatGPT. You need top GPUs running for a long time to serve a huge network that is not even trained on all the text on the Web. You could maybe increase the size of the network a thousand times, but you would need more than a thousand times more GPUs. Much more. And at inference time you still need all the parameters in memory. I am afraid it will not be enough to accommodate multimodal abilities and larger context windows.
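A rough sketch of the memory point above, with assumed numbers purely for illustration (fp16 weights, an 80 GB accelerator as a yardstick, and made-up parameter counts), not figures from the comment:

```python
# Back-of-envelope estimate of the GPU memory needed just to hold model
# weights at inference time. Assumptions (not from the thread): fp16 weights
# at 2 bytes per parameter; activations, KV cache, and batching are ignored.

BYTES_PER_PARAM_FP16 = 2
REFERENCE_GPU_GB = 80  # an 80 GB accelerator, used only as a yardstick

def weight_memory_gb(num_params: float) -> float:
    """GB required to store the weights alone in fp16."""
    return num_params * BYTES_PER_PARAM_FP16 / 1e9

# Illustrative parameter counts: 1B, 175B, and a hypothetical 1000x of 175B.
for billions in (1, 175, 175_000):
    gb = weight_memory_gb(billions * 1e9)
    print(f"{billions:>7,}B params -> {gb:>9,.0f} GB "
          f"(~{gb / REFERENCE_GPU_GB:,.1f}x a {REFERENCE_GPU_GB} GB GPU)")
```

Holding the weights alone already scales linearly with parameter count, and training compute grows faster still if the data is scaled up alongside the model, which is roughly the commenter's objection.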
UnlikelyPotato t1_j6dm63b wrote
How are they going to crash and burn when they've already revolutionized industries such as programming and content creation/writing? The demand for LLMs has only just begun.
Villad_rock t1_j6dixgb wrote
We have a reddit expert here
BigZaddyZ3 t1_j6c1q52 wrote
Reminds me of the average Redditor on tech subs tbh.