
Accomplished_Diver86 t1_izuho09 wrote


__ingeniare__ OP t1_izui5xn wrote

Which part?


Accomplished_Diver86 t1_izuikfz wrote

As you have said (I will paraphrase): "We will build a dumb ASI and expand its range of domains."

My argument is that ASI inherently has a greater range of domains than AGI.

So if we expand it, there will be a point where its range of domains is human-like (AGI) but not ASI-like.

TLDR: You cannot build a narrow ASI and scale it. That's not an ASI but a narrow AI.


__ingeniare__ OP t1_izujqe0 wrote

That is more a matter of word choice; the concept is the same. I called it narrow superintelligence because the fact that it is better than humans is important to the argument.

Let's call it narrow AI then - by the time it covers all the domains of human knowledge, it will also be significantly better than humans in all of those domains. Hence, when we get AGI, we also get ASI.


Accomplished_Diver86 t1_izujzaq wrote

Sure, but you are still forgetting the first part of the picture. Expansion means movement. There will be a period where it is good in some domains but not in all of them. This will resemble what we call AGI.

Humans are good too, just not in all the domains and ranges you might wish. It's the same thing with AI.

TLDR: Yes but no


__ingeniare__ OP t1_izulilb wrote

Ah, I see what you mean. I guess it depends on how strictly you enforce the generality of AGI.