drekmonger t1_je73xjv wrote

While the scenario described by "AGI would have the power of recursive self-improvement and would therefore very rapidly become exponentially more powerful" is possible, it is not a required qualification of AGI.

AGI is primarily characterized by its ability to learn, understand, and apply knowledge across a wide range of tasks and domains, similar to human intelligence.

Recursive self-improvement, also known as the concept of an intelligence explosion, refers to an AGI system that can improve its own architecture and algorithms, leading to rapid advancements in its capabilities. While this scenario is a potential outcome of achieving AGI, it is not a necessary condition for AGI to exist.

--GPT4

pig_n_anchor t1_je75t91 wrote

AI would say that. Trying to lull us into a false sense of security!

Edit: AI researchers are already using GPT4 to improve AI. Yes, it requires an operator, but more and more of the work is being done by AI. Don't you think this trend will continue?

drekmonger t1_je7cylg wrote

Yes. The trend will continue.

However, I think it's still important to note that recursive self-improvement is not a qualification of AGI, but a consequence. One could imagine a system that's intentionally curtailed from such activities, for example. It could still be AGI.

pig_n_anchor t1_je7mw6c wrote

I agree. I'm just saying that anything that could rightly be called AGI will almost certainly have that capability. I suppose it's theoretically possible to build one that can't improve itself, but considering how good AI is at programming already, I see that as very unlikely.
