
Nadeja_ t1_iriomqy wrote

Artificial General Intelligence: an agent that isn't trained on a single task and can generalize.

What follows is an optimistic scenario.

Early proto/sub-human AGI: now - "With a single set of weights, GATO can engage in dialogue, caption images, stack blocks with a real robot arm, outperform humans at playing Atari games, navigate in simulated 3D environments, follow instructions, and more". Not great yet (it may seem a jack of all trades and master of none), but with an improved architecture and further scaling, the possible developments sound promising.

SH-AGI (sub-human): Q4 2023 to 2024 - as long as no nuking happens, nor the next political delirium. The SH-AGI would be a considerable improvement over GATO and would be capable of discussing with you, at a LaMDA+ level, the good-quality video that it is generating. At times it would feel almost human, even sentient, but at other times you would still facepalm in frustration: memory and other issues and weaknesses won't be fully resolved yet; also (like the current models that draw weird hands) it would still do some weird things, not realizing they don't make full sense.

HL-AGI (human-level) / Strong AI: around 2026 (but still apparently not really self-aware), developing to around 2030, when it would be a strong AI: possibly self-aware, conscious, and no longer just reacting to your input. Although qualitatively not super-human, it would be as smart as a smart human (and now fully aware of what hands are, how they move, what makes sense, etc.), while quantitatively it would beat any human through sheer processing power: running 24/7, trained more than any human could be in a multitude of lifetimes, on every possible skill, connecting all that knowledge and skill together, and understanding things and having ideas that no human could even imagine.

At that point, hope that the alignment problem is solved well enough and that you aren't facing a manipulative HL-AGI instead. This isn't just about values (you can't even "align" humans to values, rights, and crimes, except broadly), but about alignment to core goals (which, for humanity, as for any other species on Earth, is "survive"). The aligned HL-AGI would see her/him/them/itself as **part of humanity, sharing the same goal of survival**. If that doesn't fully happen, good luck.

ASI (super-human): not too many years after. This would happen when the AI becomes qualitatively superior to any human cognitive skill as well. Reverse engineering the human brain is a thing, but can you imagine *super*-human reasoning? You could probably, intuitively, guess that there is something smarter than the way you think, but if you could figure out what it is, you would already be that intelligent, and therefore it wouldn't be super-intelligence. Do you see what I mean? As a human-level intelligence, you can barely figure out how to engineer a human-level intelligence. To go above that, you could try an indirect trick, e.g. scaling up the human-level brain or using genetic algorithms, hoping that something smarter emerges by itself (see the sketch below). However, since the HL-AGI would also be a master coder and a master engineer, with a top-notch understanding of how the brain works, and a master of anything else... maybe it would be able to figure out a better trick.
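To make that "indirect trick" concrete, here is a toy genetic algorithm in Python. It only evolves a bit string toward a trivial target, and the population size, mutation rate, and fitness function are all made-up illustrative assumptions, but the mutate-select-repeat loop is the same one you'd be scaling up while hoping something smarter emerges:

```python
import random

# Toy genetic algorithm: evolve bit strings toward an arbitrary target
# (all ones). A real attempt would score task performance instead; all
# parameters here are illustrative assumptions.
GENOME_LEN = 32
POP_SIZE = 50
MUTATION_RATE = 0.02
GENERATIONS = 200

def fitness(genome):
    # Stand-in objective: number of 1-bits in the genome.
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Single-point crossover: splice two parents at a random cut.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        break  # target reached; "something emerged"
    parents = population[: POP_SIZE // 2]  # keep the fitter half
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP_SIZE - len(parents))
    ]

print(f"best fitness after {gen} generations: {fitness(population[0])}/{GENOME_LEN}")
```

Nobody knows whether this kind of blind search scales to minds, of course; the point is just that the loop itself needs no understanding of what it is evolving.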

Once there, you couldn't possibly understand what the ASI thinks anymore, even if the ASI were trying to explain it to you, just as you couldn't explain quantum computing to your hamster; and then it would be like explaining it to the ants, and then...
