
__ingeniare__ OP t1_izug0hr wrote

I don't think you fully understood my point; it is slightly different from the regular "self-improving AGI -> ASI in short time" argument. What I meant was that, as the narrow intelligences we have built are gradually combined into a multi-modal, large-scale general AI, it will be superhuman from the get-go. There won't be a period in which we have AGI and simply wait for better hardware to scale to ASI. We will build narrow superintelligence from the beginning, and gradually expand its range of domains until it covers everything humans can do. At that point, we have both AGI and ASI.

7

Accomplished_Diver86 t1_izuho09 wrote

Yeah well that I just don’t agree with

1

__ingeniare__ OP t1_izui5xn wrote

Which part?

2

Accomplished_Diver86 t1_izuikfz wrote

As you have said (I will paraphrase): „We will build a dumb ASI and expand its range of domains“

My argument is that an ASI inherently has a greater range of domains than an AGI.

So if we expand it, there will be a point where its range of domains is human-like (AGI) but not ASI-like.

TLDR: You cannot build a narrow ASI and scale it. That’s not an ASI but a narrow AI

1

__ingeniare__ OP t1_izujqe0 wrote

That is more a matter of word choice; the concept is the same. I called it narrow superintelligence because the fact that it is better than humans is important to the argument.

Let's call it narrow AI then - by the time it covers all the domains of human knowledge, it will also be significantly better than humans in all of those domains. Hence, when we get AGI, we also get ASI.

1

Accomplished_Diver86 t1_izujzaq wrote

Sure, but you are still forgetting the first part of the picture. Expansion means movement. There will be a time when it is good, but not good in all domains. This will resemble what we call AGI.

Humans are good, just not in all the domains and ranges you might wish we were. It’s the same thing with AI.

TLDR: Yes but no

1

__ingeniare__ OP t1_izulilb wrote

Ah, I see what you mean. I guess it depends on how strictly you enforce the generality of AGI.

1