
pig_n_anchor t1_je72e8k wrote

Under my definition (the only correct one), AGI would have the power of recursive self-improvement and would therefore very rapidly become exponentially more powerful. So if you start with human-level AGI, you will reach ASI within months, or maybe just a matter of hours. Also, even narrow AI is superhuman at the things it does well: a calculator is far better at basic arithmetic than any human. If an AI were really a general-purpose machine, I can't see how it would not be instantly superhuman at whatever it does, if only because it would produce results much faster than a human. For these reasons, the definition of ASI collapses into AGI. Like I said, my definition is the only correct one, and if you don't agree with me, you are wrong 😑.
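To put numbers on the "exponential" part, here's a toy compounding model (a sketch, not a prediction: the gain per cycle and cycle rate are made-up assumptions):

```python
# Toy model of recursive self-improvement as compounding growth.
# The numbers are illustrative assumptions, not forecasts.

human_level = 1.0      # capability at human-level AGI (arbitrary units)
gain_per_cycle = 1.1   # assumed 10% self-improvement per cycle
cycles_per_day = 4     # assumed improvement cycles per day

capability = human_level
for day in range(1, 31):
    for _ in range(cycles_per_day):
        capability *= gain_per_cycle
    if day % 10 == 0:
        print(f"day {day}: {capability:,.0f}x human level")

# day 10: 45x human level
# day 20: 2,048x human level
# day 30: 92,709x human level  <- compounding, hence "months or maybe hours"
```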

5

drekmonger t1_je73xjv wrote

While "AGI would have the power of recursive self-improvement and would therefore very rapidly become exponentially more powerful" describes a real possibility, it is not a required qualification of AGI.

AGI is primarily characterized by its ability to learn, understand, and apply knowledge across a wide range of tasks and domains, similar to human intelligence.

Recursive self-improvement, the mechanism behind the so-called intelligence explosion, refers to an AGI system that can improve its own architecture and algorithms, leading to rapid advancements in its capabilities. While this scenario is a potential outcome of achieving AGI, it is not a necessary condition for AGI to exist.

--GPT4

11

pig_n_anchor t1_je75t91 wrote

An AI would say that. Trying to lull us into a false sense of security!

Edit: AI researchers are already using GPT4 to improve AI. Yes, it still requires an operator, but more and more of the work is being done by AI. Don't you think this trend will continue?

1

drekmonger t1_je7cylg wrote

Yes. The trend will continue.

However, I think it's still important to note that recursive self-improvement is not a qualification of AGI, but a consequence. One could imagine a system that's intentionally barred from such activities, for example, and it would still be AGI.

2

pig_n_anchor t1_je7mw6c wrote

I agree. I'm just saying that anything that could rightly be called AGI will almost certainly have that capability. I suppose it's theoretically possible to have one that can't improve itself, but considering how good AI already is at programming, I see that as very unlikely.

1