
r0cket-b0i OP t1_j5n9trg wrote

This is a very good point. I wonder if Aurora supports that kind of performance stacking; I suspect it does not. So 10 exaFLOPS of compute should be something available to a single application at the same time, not distributed computational power available within a network, for example. In other words, I'm interested in identifying the projected performance of a chip per square nanometer (or any other metric), and then seeing whether we're actually able to deliver it faster or slower than projected over a 5-year timeline.

We can then extrapolate that observation to a singularity timeline.
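The extrapolation being described can be sketched in a few lines: measure a performance-density metric at two points in time, fit a compound growth rate, and project it forward. This is only an illustration; the function names and all the numbers below are made up, not real chip data.

```python
# Hypothetical sketch of the extrapolation idea: fit a compound annual
# growth rate to an observed performance-density metric, then project it
# forward. All figures are invented placeholders, not real measurements.

def annual_growth_rate(perf_start, perf_end, years):
    """Compound annual growth rate between two observations."""
    return (perf_end / perf_start) ** (1 / years) - 1

def project(perf_now, rate, years):
    """Extrapolate the metric `years` ahead at a fixed growth rate."""
    return perf_now * (1 + rate) ** years

# Assumed observed chip performance density (FLOPS per mm^2), made up:
observed_2018 = 1.0e9
observed_2023 = 4.0e9

rate = annual_growth_rate(observed_2018, observed_2023, 5)
projected_2028 = project(observed_2023, rate, 5)
print(f"observed CAGR: {rate:.1%}")
print(f"projected 2028 density: {projected_2028:.2e} FLOPS/mm^2")
```

Comparing the projection against what actually ships five years later is exactly the "faster or slower than projected" check described above.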


phriot t1_j5p190a wrote

As I'm sure you know, supercomputers these days are racks of nodes with fast interconnects. In that sense, they are distributed, just housed in a single location. It's the software that allocates resources among however many applications are running at any given time. I believe most of these supercomputers actually are running different applications at once, except maybe when they're being benchmarked. I don't think it's any less legitimate to call an AuroraX5 a single system than it is to call Aurora itself a single system. (You might call a node the smallest single-system unit of a supercomputer, but even then, Aurora's nodes are 2 Xeon processors and 6 Xe GPUs.)

But yeah, I don't know how the scaling works. Maybe you really need to build an AuroraX7 or X10, instead of just an X5, to get to 10 exaFLOPS. The point is that if all you care about is raw computing power reaching a specific level, the only thing really stopping you is money.
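The "maybe you need X7 or X10 instead of X5" intuition can be illustrated with Amdahl's law: if some fraction of the work is serial or spent on interconnect communication, N times the hardware gives less than N times the performance. This is a generic sketch, not Aurora-specific, and the 2% serial fraction is an assumed number.

```python
# Hedged sketch (not Aurora-specific): Amdahl's law shows why stacking
# N times the hardware rarely yields N times the application speedup.
# The non-parallelizable fraction `s` below is an assumed placeholder.

def amdahl_speedup(n, s):
    """Speedup on n units when fraction s of the work cannot parallelize."""
    return 1.0 / (s + (1.0 - s) / n)

# With even 2% non-parallelizable work, 5x hardware gives well under 5x,
# so hitting a 10x target could require more than 10x the machines:
for n in (5, 7, 10):
    print(f"{n}x hardware -> {amdahl_speedup(n, 0.02):.2f}x speedup")
```

Under these assumptions, 10x the hardware yields only about 8.5x the speedup, which is the flavor of gap that would force an X7 or X10 build to reach a nominal 10-exaFLOPS application target.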