Submitted by Phoenix5869 t3_yvyww2 in singularity

https://www.forbes.com/sites/forbestechcouncil/2022/01/20/the-human-brain-scale-ai-supercomputer-is-coming/

The most powerful supercomputer we had at the start of the year was 0.5 exaflops, and now it's 1 exaflop. If the article's 64-exaflop claim is true, it means we will have increased the computing power of our fastest supercomputers 128x in one year!
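For what it's worth, the arithmetic behind the 128x figure checks out, assuming the 64-exaflop target the article promises (a quick sanity check on the post's own numbers, not on any real hardware):

```python
# Sanity-check the claimed one-year jump using the figures from the post.
start_of_year = 0.5    # exaflops: fastest machine at the start of the year
article_claim = 64.0   # exaflops: the figure the article promises

speedup = article_claim / start_of_year
print(f"{speedup:.0f}x")  # → 128x
```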

edit: they're apparently going to use it to make and train AI

83

Comments


DaggerShowRabs t1_iwgoipj wrote

I want to see this thing attached to a scaled-up GATO. Bring it on.

20

iNstein t1_iwgsgyj wrote

They have 1.5 months left to deliver in 2022. I hope it is real but seems too good to be true.

46

Kinexity t1_iwh2atu wrote

Yeah, maybe in INT4 tensor ops. This is not the same league as Frontier or Aurora.

3

Qumeric t1_iwhbhef wrote

First, the article is pretty bad; it doesn't read like high-quality journalism.
Second, there are different ways of calculating FLOPS. It depends on the number format (8-bit, 16-bit, etc.) and on the benchmark. Frontier (the current top-1 supercomputer) reaches 7.5 exaflops on the HPL-MxP (mixed-precision) benchmark, and Google has a cluster rated at 9 exaflops for AI tasks (probably 16-bit?).
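The point about number formats is why headline exaflops figures diverge so much: the same machine reports very different peaks depending on precision. A rough sketch with made-up but representative multipliers (these are my own illustrative assumptions loosely modeled on modern accelerator spec sheets, not the specs of Frontier, Aurora, or any real system):

```python
# Illustrative only: one hypothetical machine, peak throughput by precision.
# Multipliers are assumptions, not measurements of any real supercomputer.
fp64_peak_exaflops = 1.0
relative_throughput = {
    "FP64 (what HPL measures)": 1,
    "FP32": 2,
    "FP16/BF16 tensor ops": 16,
    "INT8 tensor ops": 32,
}

for fmt, mult in relative_throughput.items():
    print(f"{fmt}: {fp64_peak_exaflops * mult:g} exaflops")
```

So "1 exaflop" at FP64 and "32 exaflops" at INT8 can describe the same hardware, which is why marketing numbers and HPL rankings rarely agree.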

9

starfyredragon t1_iwhk02b wrote

As someone who has worked in bioinformatics and done the math... 64 exaflops is huge. For scale: roughly 5 exaflops is one estimate of the processing capability of the human brain, so this thing, depending on how they build it, has the potential for about thirteen times the raw compute of a human. Up until now, our best has been approaching one exaflop, which put us on par with certain pets.

Not convinced they'll actually pull it off, though.

3

starfyredragon t1_iwhtvbz wrote

Basically how fast a neuron can transmit a signal multiplied by how many dendrite-to-neuron connections there are.

The funny part is that in the human brain, storage and processing are pretty much the same thing; it'd be like the whole hard drive being stored in L1 cache. Previously, I had been watching HDDs, wondering when they'd hit the 5-exa threshold; this article about processors hitting exaflops completely blindsided me, because anything that's actively processing can effectively count as storage, the way the brain treats the two as synonymous. So I was expecting this point maybe a decade and a half from now, not potentially next year or even this year. Thing is, with a 5-exaflop computer, you could actually do a full human brain. With 64 exa-, well... you're solidly into territory where we'd better start looking to science fiction for advice.

10

DukkyDrake t1_iwiz5m3 wrote

Tachyum Prodigy Offers 128 AI Exaflops for Slovakia’s €70M World Fastest AI Supercomputer

>The datacenter footprint of the Air-Cooled Prodigy Supercomputer is 107 racks of computer nodes and 10 racks of storage. The Liquid-Cooled Prodigy-based Supercomputer is 48 racks of computer nodes and 10 racks of storage. Future deployment of a modular supercomputer on the first generation Tachyum Prodigy platform is scalable up to 4 DP exaflop and 1 AI Zetaflop at less than 70 megawatts for only 500 million EUR.

>The Prodigy-enabled Slovakian supercomputer would be in AI 7x more powerful than the NVIDIA Eos, which is anticipated to provide 18.4 exaflops, and over 25x more powerful than the Fugaku Supercomputer, which is currently the world’s fastest. Tachyum rack-based solutions offer comparatively more powerful performance than Tesla Dojo and the Cineca MARCONI100 computing systems, which are ranked among the largest and most powerful supercomputers today.

I even found an older reference to the project in a 2020 article: "Slovakia aims to build world's fastest AI supercomputer" by Nick Flaherty, Technology News.

2

Phoenix5869 OP t1_iwki498 wrote

No offence, but the 2040s seems more realistic than a lot of predictions on here. Sorry, but we're not going to have AGI this decade (I'd love to be wrong, though), and we're also not having ASI next month! (I actually saw someone on here saying that!!!)

5

Aggravating_Ad5989 t1_iwkw26z wrote

0.5 exaflops to 64 exaflops by the end of the year? I'd love this to be true, but it ain't gonna happen.

1

-ZeroRelevance- t1_iwl3r6v wrote

I’m curious why you don’t think we’ll have AGI before then. The computational bottlenecks should be solved by the end of this decade, and there’s certainly no shortage of good ideas at the moment. Given that, the way I see it, early-mid 2030s seems to be a pretty reasonable estimate.

2

Black_RL t1_iwlbn5i wrote

Of course it’s for AI, what else?

1

AsuhoChinami t1_iwm7tb2 wrote

There's no "could" to it. Your opinion is completely and utterly mind-boggling. Words can't describe just how completely bad and wrong your take is. It's like the last time you paid attention to the field was 2009 or something.

−1

AsuhoChinami t1_iwmo19v wrote

That's actually not unrealistic at all. I know some incredibly well-informed people who say that, up to and including the higher-ups of OpenAI. You don't have to change your mind but the "20s AGI" opinion is absolutely not as ridiculous as you think, or even ridiculous at all.

−1