Submitted by r0cket-b0i t3_10jap5s in singularity

I am not sure why this only came to me today, but I realized an absolutely obvious and banal thing:

Context:
If we are indeed living in a time shaped by the Law of Accelerating Returns in science, we assume progress is speeding up exponentially (Ray Kurzweil's classic view). Shouldn't we then treat the familiar doubling of computational speed as the conservative baseline, the "linear" known?

The Law of Accelerating Returns would mean major progress should show up as exponential growth in key science-driven industries. So let's take computers and related measurements, from memory bandwidth to actual computation speed, and apply accelerating-returns projections to them. Shouldn't we see new architectures and chip designs developed with the help of AI, using AI-generated materials, and so on, and expect more than Moore's law projects? Or does Moore's law already represent the peak performance of accelerating returns?

I know that some parts of the industry are actually moving faster than Moore's law: carbon nanotubes in chips, analog computing for certain applications, etc. show 10-100x improvements, but we don't see them in production.

Measurement idea:

What I am thinking is: we could chart out a few projections of computational progress over, say, 5-10 year horizons, and then expect them to land earlier if the Law of Accelerating Returns (and therefore the singularity projections) actually holds... For example, a 10-exaFLOPS supercomputer at the current pace would not be available before 2030, but with the Law of Accelerating Returns we should expect it earlier, 2026-27 ish... What do you think?
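A rough sketch of the comparison I mean (the baseline and the doubling cadences below are my assumptions, not measurements):

    # Back-of-the-envelope: when does a 10-exaFLOPS machine arrive?
    # Assumptions (my guesses, not data): ~1.1 exaFLOPS baseline in 2022
    # (Frontier-class), and two growth regimes to compare.
    from math import log2

    base_year, base_flops = 2022, 1.1e18
    target = 10e18
    doublings_needed = log2(target / base_flops)  # ~3.2 doublings

    # Regime A: steady pace, doubling every ~2.5 years (assumed)
    steady = base_year + doublings_needed * 2.5

    # Regime B: accelerating returns, each doubling ~10% faster than
    # the last, starting from an assumed 1.8-year cadence
    t, year = 1.8, base_year
    for _ in range(round(doublings_needed)):
        year += t
        t *= 0.9

    print(f"steady pace: ~{steady:.0f}, accelerating: ~{year:.0f}")

With those made-up cadences the steady pace lands around 2030 and the accelerating one around 2027, which is exactly the kind of gap I'd want to watch for.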

9

Comments


GayHitIer t1_j5j9stx wrote

True, we don't know what technology will come tomorrow, and technology moves as fast as it can.

So predictions will tend to err on the pessimistic side rather than the optimistic.

For all we know, we could invent a new form of CPU, which is actually happening with atom-thick transistors that would allow us to stack them.

AGI and ASI could come way faster than predicted; the problem is that we just don't know what hurdles we have to climb to get there before it happens.

The predictions right now seem pretty realistic, with AGI being achieved around 2030 to 2040 and ASI soon after, around maybe 2050; past that, predictions are anyone's guess.

8

robustquorum09 t1_j5jpcre wrote

The singularity will emerge after AI becomes widely used and has its own freedom. No doubt about it.

1

TopicRepulsive7936 t1_j5jqwc3 wrote

The first megaflops supercomputer was introduced in 1964 and the first gigaflops supercomputer in 1985, so that thousandfold increase took 21 years. Now we expect a thousandfold increase to take 11 or 12 years, so the doubling time has been shrinking.
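Quick sanity check on the arithmetic, taking the 11-12-year figure as given (call it 11.5):

    # A 1000x jump is log2(1000), about 9.97 doublings; divide the
    # elapsed years by that to get the implied doubling time.
    from math import log2

    for label, years in [("megaFLOPS 1964 to gigaFLOPS 1985", 21),
                         ("recent 1000x step", 11.5)]:
        print(f"{label}: doubling every {years / log2(1000):.2f} years")

That works out to roughly 2.1 years per doubling then versus about 1.15 years now.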

2

phriot t1_j5k1uem wrote

I think it's tough to be so granular as to make a prediction based on this trend only 3-4 years out. Various versions of Kurzweil's graph show a number of machines/years below trend. This appears to be particularly true in periods just before a computing paradigm shift. Limits of the silicon transistor suggest that we're due for one of these shifts. Computational power is one of those trends where I feel much more confident making medium-term predictions than short-term ones. Of course, in your example of exascale supercomputers, all it would really take to get 10 exaFLOPS would be to spend $2.5B to build five Auroras' worth of racks instead of just one.
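Back-of-the-envelope for that figure, assuming (not confirmed here) an Aurora-class system is about 2 exaFLOPS for roughly $0.5B:

    # Both figures below are ballpark assumptions, not published specs.
    aurora_flops, aurora_cost = 2e18, 0.5e9  # ~2 exaFLOPS, ~$0.5B each
    n = 5
    print(f"{n} systems: {n * aurora_flops / 1e18:.0f} exaFLOPS "
          f"for ${n * aurora_cost / 1e9:.1f}B")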

4

HatsusenoRin t1_j5n8ucm wrote

At this point in time the limiting factor is more likely to be political than technological. Even though we can measure and project technological advancement, it's extremely difficult to predict the political, and thus economic, fluctuations that weigh on the curves.

Until AI can incorporate all real-time human and astronomical activity into its models, even the ideal projection is still just a guesstimate.

2

r0cket-b0i OP t1_j5n9trg wrote

This is a very good point. I wonder if Aurora supports that kind of performance stacking; I suspect it does not. So 10 exaFLOPS of compute should mean power available to a single application at the same time, not distributed computational power available across a network, for example. In other words, I am interested in identifying the projected performance of a chip per square nanometer, or any other metric, and then seeing whether we actually deliver it faster or slower than projected over a 5-year timeline.

We can then extrapolate that observation to the singularity timeline.
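Something like this tracking loop is what I have in mind (every number below is a placeholder, not a real measurement):

    # Hypothetical projection check: pick a per-chip metric, project it
    # forward, then compare what actually ships each year.
    projected_growth = 1.35          # assumed 35%/yr improvement
    baseline = 100.0                 # e.g. GFLOPS per mm^2 in year 0, made up
    actuals = {1: 140.0, 2: 190.0}   # hypothetical observed values
    for year, actual in actuals.items():
        projected = baseline * projected_growth ** year
        print(f"year {year}: actual/projected = {actual / projected:.2f}")

A ratio consistently above 1 would be evidence the curve is running ahead of the fixed projection.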

1

phriot t1_j5p190a wrote

As I'm sure you know, supercomputers these days are racks of nodes with fast interconnects. In that way, they are distributed, just housed in a single location. It's the software that allows the resources to be allocated to however many applications are running on one at any given time. I believe most of these supercomputers actually are running different applications at once, except maybe when they're being benchmarked. I don't think it's any less legitimate to call an AuroraX5 a single system than it is to call Aurora itself a single system. (You might call a node the smallest single system unit of a supercomputer, but even then, Aurora's nodes are 2 Xeon processors and 6 Xe GPUs.)

But yeah, I don't know how the scaling works. Maybe you really need to build an AuroraX7 or X10 to get to 10 exaFLOPS instead of just X5. The point is that if you just care about having raw computing power reach a specific level, the only thing really stopping you is money.
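A toy version of that scaling question (the per-system number and the efficiency figures are made up for illustration):

    # How many Aurora-class systems would 10 exaFLOPS take if
    # cross-system scaling is imperfect?
    from math import ceil

    per_system, target = 2e18, 10e18  # assumed ~2 exaFLOPS per system
    for eff in (1.0, 0.75, 0.5):
        n = ceil(target / (per_system * eff))
        print(f"{eff:.0%} scaling efficiency: AuroraX{n}")

Perfect scaling gives X5; at 75% you're already at X7, and at 50% it's X10.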

2