
BadassGhost t1_j5654od wrote

This is my guess as well, but I think it's much less certain than AGI arriving quickly from this point. We know human intelligence is possible, and we can see that LLMs are already pretty close to that level (relative to the other intelligences we know of, like animals).

But we know of exactly 0 superintelligences, so it's impossible to be sure that it's as easy to achieve as human-level intelligence (let alone whether it's even possible). That said, it might not matter whether qualitative superintelligence is possible, since we could just make millions of AGIs that all run much faster than a human brain. Quantity/speed instead of quality.

3

ArgentStonecutter t1_j56fhsa wrote

I don't think we're anywhere near human-level intelligence, or even general mammalian intelligence. The current technology shows no signs of scaling up to human intelligence, and fundamental research is still required before we have a grip on how to get there.

2

BadassGhost t1_j56i9dt wrote

https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks

LLMs are close to, equal to, or beyond human abilities on a lot of these tasks, though on some they're not there yet. I'd argue this is pretty convincing evidence that they're better at abstract thinking than typical mammals. Clearly animals are much more intelligent in other ways, sometimes more so than humans in particular domains (e.g., the experiment where chimps select 10 numbers on a screen, in order, from memory). But in terms of high-level reasoning, LLMs are pretty close to human performance.
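For a concrete sense of what's in that repo: each benchmark task is (roughly) a JSON file of input/target pairs plus metadata. A minimal sketch in Python, assuming the repo's common json-task layout; the task name and examples here are invented, and the real evaluation harness is much more involved than this toy scorer:

```python
# Illustrative sketch only: a BIG-bench-style task as a Python dict.
# Field names follow the repo's common json-task layout, but check the
# linked repo for the authoritative schema.
task = {
    "name": "toy_analogies",  # hypothetical task name
    "description": "Complete simple analogies.",
    "metrics": ["exact_str_match"],
    "examples": [
        {"input": "hot is to cold as up is to", "target": "down"},
        {"input": "cat is to kitten as dog is to", "target": "puppy"},
    ],
}

def exact_match_score(model_answer: str, example: dict) -> bool:
    """Simplified stand-in for an exact-string-match metric."""
    return model_answer.strip().lower() == example["target"].lower()

print(exact_match_score("Down", task["examples"][0]))  # True
```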

7

ArgentStonecutter t1_j56sxck wrote

Computers have been better than humans at an ever-growing list of tasks since before WWII. Many of those tasks, like chess and Go, were once touted as requiring 'real' intelligence. No list of such tasks is meaningful as a measure.

2

BadassGhost t1_j570h0y wrote

Then what would be meaningful? What would convince you that something is close to AGI, but not yet AGI?

For me, this is exactly what I would expect to see if something was almost AGI but not yet there.

The difference from previous, specialized AI is that these models can learn seemingly any concept, both during training and after training (in context). Even out-of-distribution tasks can be taught with a single-digit number of examples, as in the sketch below.
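To make "taught in context" concrete, here's a minimal sketch of few-shot prompting. The counting task is invented, and `ask_llm` is a hypothetical stand-in for whatever completion API you'd actually call; the point is just that a few demonstrations in the prompt can specify a novel task without any weight updates:

```python
# In-context (few-shot) learning: the "training" is just a handful of
# demonstrations placed in the prompt; no model weights are updated.
few_shot_examples = [
    ("blorp blorp", "2"),        # invented task: count word repetitions
    ("zint zint zint", "3"),
    ("quab", "1"),
]

def build_prompt(query: str) -> str:
    lines = ["Count how many times the word repeats."]
    for text, answer in few_shot_examples:
        lines.append(f"Input: {text}\nOutput: {answer}")
    lines.append(f"Input: {query}\nOutput:")
    return "\n".join(lines)

prompt = build_prompt("frum frum frum frum")
# response = ask_llm(prompt)  # hypothetical API call; expected output: "4"
print(prompt)
```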

(I am not the one downvoting you)

3