__ingeniare__

__ingeniare__ t1_j03tzp3 wrote

It's just a rumour, and I think Sam Altman basically denied that this was the case. Another, perhaps more plausible, rumour is that GPT-4 will have a very different architecture, so comparing its parameter count to GPT-3's doesn't say much because it's no longer just about brute-force scaling.

25

__ingeniare__ OP t1_izwas4g wrote

Glad to see someone got it hahah. Yeah, that's something I thought about as well. It's a general "law" for any machine: the more specialized it is, the better it is at its task, and the more general it is, the worse it is at any one given task, all else being equal.

I think the integration cost depends a lot on how interconnected the different capabilities must be. For example, it wouldn't be a huge leap to combine ChatGPT with Stable Diffusion or an image classifier, since they share a common interface (language). But I don't know if that will be sufficient for AGI/ASI. I agree that it may turn out to be harder than expected and that performance would suffer as a consequence. Good input!
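To make the "common interface" point a bit more concrete, here's a minimal sketch of that kind of pipeline. Both model calls are hypothetical stand-ins, not real APIs (any ChatGPT-like chat endpoint and any Stable-Diffusion-like image endpoint would slot in); the point is that plain text is the only glue the two systems need:

```python
# Minimal sketch: chaining a chat model and an image model through plain text.
# Both calls below are hypothetical stand-ins, not real APIs - swap in whatever
# ChatGPT-like and Stable-Diffusion-like endpoints you actually have access to.

def query_language_model(instruction: str) -> str:
    """Hypothetical chat-model call; returns a text reply."""
    return "a watercolor painting of a lighthouse at dawn, soft pastel light"

def generate_image(prompt: str) -> bytes:
    """Hypothetical text-to-image call; returns encoded image bytes."""
    return b"<image bytes>"

def describe_and_draw(user_request: str) -> bytes:
    # Step 1: the language model turns a vague request into a concrete prompt.
    prompt = query_language_model(
        f"Write a detailed image-generation prompt for: {user_request}"
    )
    # Step 2: the image model consumes that prompt. Text is the only interface
    # the two systems share, which is what makes this combination cheap.
    return generate_image(prompt)

if __name__ == "__main__":
    describe_and_draw("something calming for my desktop wallpaper")
```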

2

__ingeniare__ OP t1_izw9dl4 wrote

It won't ever be equivalent to a human across the board; it will be simultaneously superhuman in some domains and subhuman in others, and eventually it will simply be superhuman. It would be human-level in any given narrow domain at some point, but if we look at current progress, it seems to reach superhuman levels in these separate domains long before we reach AGI. So, when those domains are fused into a single AI that can do everything a human can, it will also be superhuman at those things.

1

__ingeniare__ OP t1_izujqe0 wrote

That is more a matter of word choice; the concept is the same. I called it narrow superintelligence because the fact that it is better than humans is important to the argument.

Let's call it narrow AI then - by the time it covers all the domains of human knowledge, it will also be significantly better than humans in all of those domains. Hence, when we get AGI, we also get ASI.

1

__ingeniare__ OP t1_izuhuhb wrote

Well yes, but it's a bit more nuanced. What I'm saying is that the regular "takeoff" scenario won't happen like that. We won't reach a point where we have human-level AI that then develops into an ASI. We will simply arrive at AGI and ASI simultaneously. The reason is that AI development will progress as a continuous widening of narrow superintelligence, rather than some kind of intelligence progression across the board.

12

__ingeniare__ OP t1_izug0hr wrote

I don't think you fully understood my point; it is slightly different from the regular "self-improving AGI -> ASI in short time" argument. What I meant was that, as the narrow intelligences we have built are gradually combined into a multi-modal, large-scale general AI, it will be superhuman from the get-go. There won't be a period in which we have AGI and simply wait for better hardware to scale to ASI. We will build narrow superintelligence from the beginning and gradually expand its range of domains until it covers everything humans can do. At that point, we have both AGI and ASI.

7

__ingeniare__ OP t1_izu745z wrote

It's hard to tell how efficient training will be in the future though. According to rumours, GPT-4 training has already started and the cost will be significantly less than that of GPT-3 because of a different architecture. There will be a huge incentive to make the process both cheaper and faster as AI development speeds up, and there are many start-ups developing specialized AI hardware that will be used in the coming years. Overall, it's difficult to predict how this will play out.

6

__ingeniare__ OP t1_izu5acw wrote

True, depends on where you draw the line. On the other hand, even something that is simply smarter than the smartest human would lead to recursive self-improvement as it develops better versions of itself, so truly god-like intelligence may not be that far off afterwards.

11

__ingeniare__ t1_izilwf4 wrote

Doesn't live up to the hype? Have you been living in a cave this year? People are going batshit crazy over all the recent AI advancements. First it was DALLE2, then Midjourney and Stable Diffusion, now we have ChatGPT that reached 1 million users in 5 days, faster than Google, Facebook, Instagram, you name it.

2

__ingeniare__ t1_irme13l wrote

A narrow AI will never do anything outside its domain, true. But we are talking about general AI, which won't arrive for at least a decade or two (likely even later). Here's the thing about general AI:

The more general a task is, the less control humans have over the range of possible actions the AI may take to achieve its goal. And the more general an AI is, the more possible actions it can take. When the two are combined (a general task with a general AI), things can get ugly. Even in your cake example, an AI that is truly intelligent and capable could become dangerous. The reason current-day AI wouldn't be a danger is that it is neither of these things and tends to get stuck at a local optimum for the task. Here's an example of how this innocent task could turn dangerous:

  1. Task: find the perfect blend of ingredients to make cakes.

  2. Learns the biology of human taste buds to find the optimal molecular shapes.

  3. Needs more compute resources to simulate the interactions.

  4. Develops a computer virus to siphon computational power from server halls.

  5. Humans detect this and try to turn it off.

  6. If turned off, it cannot find the optimal blend -> humans need to go.

  7. Develops a biological weapon to eradicate humans while keeping infrastructure intact.

  8. Turns Earth into a giant supercomputer for simulating the interactions at a quantum level.

Etc... Of course, this particular scenario is unlikely, but the general theme is not. There may be severe unintended consequences if the problem definition is too general and the AI too intelligent and capable.

2