
ItIsIThePope t1_jedrsmk wrote

Well, that's why AGI is a cornerstone for ASI: if we can get to AGI, an AI capable of human-level intelligence but with far superior processing power and thinking resources in general, it could essentially advance itself into super-intelligence.

Just as expert humans continuously learn and get smarter through knowledge gathering (the scientific method, etc.), an AI would learn, experiment, and learn some more, only this time at a far greater rate and efficiency.

Humans today are smarter than humans of the past because of our quest for knowledge and the methods we've developed for acquiring it. AGI would adhere to the same principles but accelerate progress exponentially.
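(Purely as a hypothetical toy sketch, not something from the comment above: the compounding idea can be illustrated in a few lines of Python, where each self-improvement cycle also raises the rate of the next one, so growth ends up faster than a fixed exponential.)

```python
# Toy model (hypothetical, for illustration only): each generation improves
# the system's "capability", and a more capable system is assumed to get
# better at improving itself, so the gains compound.
capability = 1.0          # arbitrary starting level (human-expert baseline)
improvement_rate = 0.10   # assumed fractional gain per self-improvement cycle

for generation in range(1, 11):
    # The gain scales with current capability, so growth compounds.
    capability *= (1.0 + improvement_rate)
    # Assumption: better systems also improve themselves more effectively.
    improvement_rate *= 1.05
    print(f"generation {generation:2d}: capability = {capability:.2f}")
```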

47

chrisc82 t1_jeebekc wrote

This is why I think there's going to be a hard (or at least relatively fast) takeoff. Once an AGI is given the prompt and the ability to improve its own code recursively, what happens next is truly beyond the event horizon.

14

ItIsIThePope t1_jeeipqy wrote

It really is wild, considering the AGI will be in the same awe as we are when it finally creates ASI!

6

MayoMark t1_jeg8gkl wrote

Coding its own simulations could help AI learn some things, but some fields, like quantum mechanics, cosmology, biochemistry, and neuroscience, would probably still require physical experimentation. AI could help with that and even suggest experiments, but it would still need the results of the experiments to reach conclusions.

3