
phriot t1_j5kyd45 wrote

Kurzweil's prediction rests on two premises:

  1. The availability of computing power sufficient to simulate a human brain.
  2. Neuroscience being advanced enough to tell us how to simulate a human brain at a scale sufficient to produce intelligence.

I don't think that Kurzweil does a bad job of ballparking the brain's calculations per second. His estimate is below what today's top supercomputers deliver, but still far beyond what a typical desktop workstation can do. (If I'm doing my math right, it would take something like 2,000 Nvidia GeForce RTX 4090s to reach Kurzweil's estimate at double precision, which is the precision supercomputers are benchmarked at, or ~28 at half or single precision.)
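A quick sanity check on that math (a sketch, not gospel: the brain estimate of ~2.4e15 calculations per second and the 4090 throughput figures are my own ballpark assumptions, picked to line up with the counts above):

```python
# Ballpark check: how many RTX 4090s to match a Kurzweil-style brain
# estimate? The brain figure and GPU throughputs are assumptions.
BRAIN_CPS = 2.4e15  # assumed brain estimate, calculations per second

GPU_THROUGHPUT = {
    "double (FP64)": 1.3e12,   # ~1.3 TFLOPS double precision
    "single (FP32)": 82.6e12,  # ~82.6 TFLOPS single precision
}

for precision, flops in GPU_THROUGHPUT.items():
    print(f"{precision}: ~{BRAIN_CPS / flops:,.0f} GPUs")

# double (FP64): ~1,846 GPUs
# single (FP32): ~29 GPUs
```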

That leaves us with the neuroscience. I'm not a neuroscientist, but I am another kind of life scientist. Computing power has followed this accelerating trend, but basic science is a lot slower. It follows more of a punctuated-equilibrium model than an exponential: things move really fast when you know what to do next, then hit a roadblock while you make sense of all the new information you've gathered. It also relies on funding and people. Scientists at the Human Brain Project consider real-time models a long-term goal; static, high-resolution models that incorporate structure and other data (genomics, proteomics, etc.) are listed as a medium-term goal. I don't know what "long term" means to this group, but I'm assuming it's more than 6 years. And if all that complexity is required, then Kurzweil is likely off by several orders of magnitude, which could put us decades past his prediction. Then again, maybe you don't need to model everything in that much detail to get to intelligence, but then you're outside the terms of Kurzweil's prediction.

Of course, this all presupposes that you need a human brain for human-level intelligence. It's not a bad guess: everything we know to be intelligent has a nervous system, evolved on Earth, and shares some last common ancestor. If we go another route to intelligence, that puts us back at factoring people into the process: we either need people to design this alternate intelligence architecture, or to create weak AI that's capable of designing it.

I could be wrong; maybe you can slap some additional capability modules onto an LLM, let it run and retrain itself constantly on a circa-2029 supercomputer, and that will be sufficient. But I A) don't know for sure that will be the case, and B) think that if it does happen, it's kind of a coincidence rather than Kurzweil's prediction coming true to the letter.

4

Ortus14 t1_j5objyy wrote

I see no reason why understanding the human brain would be needed.

We have more than enough concepts and AGI models; we just need more compute, imho. Compute (for the same cost) increases by a thousand times every ten years. So by Kurzweil's 2045 date, compute for the same cost can be estimated to be 4.2 million times more than today.
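The compounding works out like this (a quick sketch; the 1000x-per-decade rate is the premise above, and a 2023 baseline is my assumption):

```python
# Compound the assumed 1000x-per-decade cost-performance trend to 2045.
rate_per_decade = 1000
years = 2045 - 2023          # assuming a 2023 baseline
factor = rate_per_decade ** (years / 10)
print(f"~{factor:,.0f}x")    # ~3,981,072x
```

Depending on the exact baseline year, that lands at roughly 4 million, in the same ballpark as the 4.2 million figure.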

Even if Moore's law ended, the trend would continue, because server farms are growing at an exponential pace and the cost of solar energy is dropping toward zero. If we get a breakthrough in fusion power, it will accelerate beyond our models.

Today we can simulate vision (roughly 20% of the human brain), and we're simulating it in a way that's far more computationally efficient than the human brain, because we're making the absolute most of our hardware.
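To put rough numbers on that (all of these are my own assumptions: a ResNet-50-class model at ~4 GFLOPs per frame standing in for "simulated vision", and the same ~2.4e15 calc/s brain estimate from upthread):

```python
# Rough illustration: a ResNet-50-class vision model vs. an estimated
# "vision budget" for the brain. All figures are ballpark assumptions.
model_flops_per_frame = 4.1e9   # ~4 GFLOPs per 224x224 forward pass
fps = 30                        # real-time video
model_flops = model_flops_per_frame * fps    # ~1.2e11 FLOPS

brain_cps = 2.4e15              # assumed whole-brain estimate
vision_share = 0.20             # the ~20% figure above
brain_vision_cps = brain_cps * vision_share  # ~4.8e14 calc/s

print(f"brain/model ratio: ~{brain_vision_cps / model_flops:,.0f}x")
# brain/model ratio: ~3,902x
```

If numbers anywhere near these hold, the artificial system is doing the job with orders of magnitude fewer operations than the biological one.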

It's pretty likely we'll reach superhuman-level AGI well before 2045.

1

phriot t1_j5ovn3n wrote

I don't think that you have to simulate a human brain to get intelligence, either; I discuss that toward the end of my comment. But the OP asked about counterarguments to Kurzweil's timeline for AGI. Kurzweil explicitly bases his timeline on those two factors: computing power and a brain model good enough to simulate in real time. I don't think the neuroscience will be there in 6 years to meet Kurzweil's timeline.

If we get AGI in 2029, it will likely be specifically because some other architecture works, not because Kurzweil was correct. In some writings, Kurzweil goes further and says we'll have this model of the brain because really amazing nanotech in the late 2020s will be able to non-invasively map all the synapses, the activation states of neurons, etc. I'm not particularly up on that literature, but I don't think we're anywhere close to having that tech. I expect we'll need AGI/ASI first for that to happen before 2100.

With regard to your own thinking, you only mention computing power. Do you think that intelligence is emergent given a system that produces enough FLOPS? Or do you think we'll simply have enough spare computing power to analyze data, run weak AI, etc., and that this will help us discover how to make an AGI? I don't believe that intelligence emerges from processing power alone, or else today's top supercomputers would already be AGIs, since they surpass most estimates of the human brain's computational capabilities. That implies that architecture is important. Today, we don't really have ideas, other than a simulated brain, that would confidently produce an AGI. But maybe we'll come up with a plan in the next couple of decades. (I'm really interested to see what an LLM with a memory, some fact-checking heuristics, the ability to constantly retrain, and some additional modalities would be like.)
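For scale (a sketch: Frontier's ~1.1 exaFLOPS is its reported FP64 Linpack figure, and the brain range below covers the estimates usually cited):

```python
# How far today's top supercomputer exceeds common brain estimates.
frontier_flops = 1.1e18   # Frontier, ~1.1 exaFLOPS FP64 (Linpack)

for brain_cps in (1e14, 1e16):  # commonly cited estimate range
    print(f"brain at {brain_cps:.0e} cps: "
          f"Frontier is ~{frontier_flops / brain_cps:,.0f}x ahead")

# brain at 1e+14 cps: Frontier is ~11,000x ahead
# brain at 1e+16 cps: Frontier is ~110x ahead
```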

1