Kolinnor t1_izucjy6 wrote

I agree with the fast-takeoff argument. If I had the power to self-improve and read + understand the whole internet in a limited time, I doubt I wouldn't be basically a god.

I think AGI is a vague term, and we'll probably have things that are mind-blowingly close to humans but still lack some System 2 reasoning and some deeper intuition about things. ChatGPT gives me that vibe, at least.

EDIT: to clarify, humans are already improving computers very fast, so if we truly have AGI, we have self-improving machines.

47

HeinrichTheWolf_17 t1_izvfw5c wrote

I’ve been saying it’s going to be a hard takeoff for 8 years now and everyone thought I was nuts. There’s no reason to assume an AGI would take as long to learn things just because the human brain does. Even Kurzweil is wrong here.

The writing is on the wall, guys. We don't have to wait until 2045.

23

-ZeroRelevance- t1_izvhqko wrote

The problem with hard takeoff is mostly computing power. If the AI is hardware limited rather than software limited, the anticipated exponential growth would likely take quite a bit longer, since each iteration would require new innovations in computing and manufacturing. AGI would definitely speed that process up significantly, but it would be far from instantaneous.

18

HeinrichTheWolf_17 t1_izvjeuz wrote

Software optimization plays a massive role too, though. Stable Diffusion, OpenAI Five, and AlphaZero were all able to achieve the same performance on only a fraction of the hardware they initially needed; the human brain can't really do that. Assuming we do eclipse the brain's power via hardware soon, AGI will quickly shoot right past human learning speed. Not only that, we'll give it every GPU it needs until it can design its own hardware for itself.
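To make the Stable Diffusion point concrete, here's a rough sketch (assuming the Hugging Face `diffusers` library and a GPU; the checkpoint name is just an example) of the software-side tricks I mean: the same weights loaded in half precision with attention slicing fit in a fraction of the VRAM the original full-precision setup needed.

```python
# Minimal sketch: running Stable Diffusion in half precision with
# attention slicing so it fits on a much smaller GPU than fp32 would need.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # example checkpoint, swap in any SD model
    torch_dtype=torch.float16,          # fp16 roughly halves memory vs fp32
)
pipe = pipe.to("cuda")
pipe.enable_attention_slicing()         # trades a little speed for much less VRAM

image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```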

I’d agree it won’t be instant, but it ain’t taking 20-30 years. The writing is on the wall.

17

-ZeroRelevance- t1_izvkafp wrote

Yeah, I get that. I probably didn't convey it well in my original comment, but the main reason I don't think it'll be as instantaneous as people expect is that having better designs isn't enough; you also need to manufacture them. The manufacturing alone will probably take several months, even with a superintelligence behind the scenes, because you'd need to develop new chip-making equipment, which is finicky and expensive, find an appropriate facility, and then actually build the thing, which takes labour time and has its own logistical challenges. An idea or design alone won't suddenly manifest a next-gen supercomputer.

4

Talkat t1_izw467u wrote

Heh, I completely agree with you, but I was thinking of how when a human first learns a new skill it takes all their brainpower and focus, yet once mastered it can be done without thought. Kind of like how getting an AI to do something first takes a lot of compute, but once we nail it we can reduce that significantly.
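Knowledge distillation is roughly the code version of that "master it, then do it without thought" move. Here's a minimal toy sketch (the networks and data are just stand-ins, not any real model): an expensive teacher labels inputs once, and a much smaller student learns to mimic it.

```python
# Toy knowledge distillation: compress a large "teacher" network into a
# small "student" that reproduces its outputs with far fewer parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(128, 1024), nn.ReLU(), nn.Linear(1024, 10))
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0

for step in range(1000):
    x = torch.randn(32, 128)            # stand-in for real training inputs
    with torch.no_grad():
        teacher_logits = teacher(x)     # expensive model, run only to produce targets
    student_logits = student(x)         # cheap model learns to match it
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```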


AGI will be able to optimize itself like no one's business. I think our hardware is already powerful enough for an AGI... but to get there we'll need more power, since we can't write god-like AI by hand.

3

was_der_Fall_ist t1_izuipc1 wrote

> If I had the power to self-improve...

That's really the crux of the matter. What if we scale up to a GPT-5 that is extremely skilled and reliable at text-based tasks, to the point that it seems reasonable to call it generally intelligent, yet for whatever reason it can't recursively self-improve, whether that would mean training new neural networks, conducting novel scientific research, or whatever else it would take? Maybe being trained on human data leaves it stuck at roughly human level. It's hard to say right now.

8

overlordpotatoe t1_izvxoqt wrote

I do wonder if there's a hard limit to the intelligence of large language models like GPT considering they fundamentally don't have any actual understanding.

7

electriceeeeeeeeeel t1_j01q7j2 wrote

You can already see how good it is at coding. It does lack an understanding of context, memory, and longer-term planning, but honestly that stuff should be here by GPT-5; it seems relatively easier than other problems they've solved. So I wouldn't be surprised if it's already self-improving by then.

Consider this: an OpenAI software engineer has probably already used the chatbot to improve code, even if just a line. That means it's already self-improving, just slowly, but with increasing speed no doubt.

2