
phriot t1_j5ovn3n wrote

Reply to comment by Ortus14 in Steelmanning AI pessimists. by atomsinmove

I don't think that you have to simulate a human brain to get intelligence, either. I discuss that toward the end of my comment. But the OP asked about counterarguments to the Kurzweil timeline for AGI. Kurzweil explicitly bases his timeline on those two factors: computing power and a good enough model of the brain to simulate in real time. I don't think the neuroscience will be there in 6 years to meet Kurzweil's timeline.

If we get AGI in 2029, it will likely be specifically because some other architecture works, not because Kurzweil was correct. In some writings, Kurzweil goes further and says that we'll have this model of the brain because we'll have really amazing nanotech in the late 2020s that can non-invasively map all the synapses, the activation states of neurons, etc. I'm not particularly up on that literature, but I don't think we're anywhere close to having that tech. I expect we'll need AGI/ASI first to get there before 2100.

With regard to your own thinking, you only mention computing power. Do you think that intelligence is emergent given a system that produces enough FLOPS? Or do you think that we'll just have enough spare computing power to analyze data, run weak AI, etc., and that this will help us discover how to make an AGI? I don't believe that intelligence emerges from raw processing power alone, or else today's top supercomputers would be AGIs already, as they surpass most estimates of the human brain's computational capacity (see the back-of-envelope comparison below). That implies that architecture is important. Today, we don't really have any architecture we're confident would produce an AGI other than a simulated brain. But maybe we'll come up with a plan in the next couple of decades. (I am really interested to see what an LLM with a memory, some fact-checking heuristics, the ability to constantly retrain, and some additional modalities would be like.)
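To put that supercomputer comparison in rough numbers, here's a back-of-envelope sketch. The brain figures are commonly cited order-of-magnitude estimates (Moravec-style functional extrapolation, Kurzweil's ~10^16 calculations per second, and a higher synapse-level figure), not measurements, and the Frontier number is its ~1.1 exaFLOPS TOP500 result from 2022:

```python
# Back-of-envelope only: all brain figures are rough, commonly cited
# order-of-magnitude estimates, not measurements.
BRAIN_ESTIMATES_FLOPS = {
    "Moravec-style functional estimate": 1e14,
    "Kurzweil's estimate (~1e16 cps)":   1e16,
    "Synapse-level simulation estimate": 1e18,
}

FRONTIER_PEAK_FLOPS = 1.1e18  # Frontier's ~1.1 exaFLOPS Linpack result (2022)

for label, est in BRAIN_ESTIMATES_FLOPS.items():
    ratio = FRONTIER_PEAK_FLOPS / est
    print(f"{label}: Frontier delivers ~{ratio:,.1f}x that throughput")
```

By the lower two estimates, Frontier already exceeds the brain by 100x to 10,000x, which is the point: raw FLOPS alone haven't produced an AGI.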
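And to make that last parenthetical concrete, here's a hypothetical sketch of such an augmented-LLM loop. Every name in it (`query_llm`, `MemoryStore`, `fact_check`) is invented for illustration; nothing here refers to a real library or anyone's actual system:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Naive long-term memory: keeps prior exchanges verbatim."""
    entries: list = field(default_factory=list)

    def recall(self, prompt, k=3):
        # A real system would use embedding similarity; this just
        # returns the most recent entries.
        return self.entries[-k:]

    def store(self, text):
        self.entries.append(text)

def query_llm(prompt):
    # Stand-in for a real model call.
    return f"<model answer to: {prompt!r}>"

def fact_check(answer):
    # Stand-in for retrieval-based verification heuristics.
    return True

def answer(prompt, memory):
    context = "\n".join(memory.recall(prompt))   # memory
    draft = query_llm(f"{context}\n\nUser: {prompt}")
    if not fact_check(draft):                    # fact-checking heuristic
        draft = query_llm(f"Revise with sources: {draft}")
    memory.store(f"Q: {prompt} A: {draft}")      # fodder for retraining later
    return draft

memory = MemoryStore()
print(answer("What limits brain-simulation timelines?", memory))
```

The interesting question is whether bolting these pieces onto a frozen LLM gets you anything qualitatively new, or whether they have to be trained in end to end.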
