
abc-5233 t1_ivz8udd wrote

The whole consciousness/sentience issue is silly. We don't know what consciousness is, we have no way to measure it, and no way to prove or disprove it.

The analogy I like is the Sun. Imagine people in the XVI century debating whether they can recreate the energy of the Sun on Earth, without actually understanding what the Sun is.

"But it is right there, we know what the Sun is." No, you don't. Understanding that the Sun is a fusion engine that converts light elements into heavier ones, with gravity supplying the pressure and heat for fusion, was so many concepts ahead of their understanding that any debate about what the Sun is or isn't would have been completely unproductive.

We have absolutely no idea what consciousness/sentience actually is. We can see its effects, like our ancestors could see the Sun. But we have no actual understanding of the mechanics of it.

As for AI that is not narrow, it already exists. Models like DeepMind's GATO are capable of a myriad of tasks with the same weights. They are the very definition of AGI, but nobody calls them that, because AGI has become this unachievable ideal whose definition changes every time there is a new advance.
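To make the "one set of weights, many tasks" idea concrete, here is a toy sketch. This is not GATO's actual architecture (which is a large Transformer); it only illustrates the scheme of serializing every task into one token stream consumed by a single shared model. All names here are hypothetical.

```python
# Toy illustration of multi-task learning with shared weights:
# every task is flattened into one token sequence with a task prefix,
# and a single model (one parameter set) handles all of them.

def tokenize(task, data):
    """Serialize any task's input into a flat token sequence."""
    return [f"<{task}>"] + [str(x) for x in data]

class SingleModel:
    """One parameter set shared across all tasks (here, a trivial lookup)."""
    def __init__(self):
        self.weights = {}  # the same "weights" serve every task

    def train(self, tokens, target):
        self.weights[tuple(tokens)] = target

    def predict(self, tokens):
        return self.weights.get(tuple(tokens))

model = SingleModel()
# Two very different tasks, same model, same weights:
model.train(tokenize("caption", ["img_7"]), "a cat on a mat")
model.train(tokenize("atari", [3, 1, 4]), "move_left")

print(model.predict(tokenize("caption", ["img_7"])))  # a cat on a mat
print(model.predict(tokenize("atari", [3, 1, 4])))    # move_left
```

The point is only that nothing in the interface is task-specific: the task identity travels inside the token stream rather than in separate per-task models.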

Like Artificial Intelligence before it, the concept of AGI is an effort to put human intelligence in a category of its own.

A far more interesting question, in my view, is when will algorithms be able to do any productive task that a human can do, at a competent level.

I believe we are now about a third of the way there, and will be at 100% within the next decade.

20

AdditionalPizza OP t1_ivzavx1 wrote

>A far more interesting question, in my view, is when will algorithms be able to do any productive task that a human can do, at a competent level.

That's what I would personally define AGI as: any task a human can do, at least intellectually, and possibly physically as well, though to me that's more robotics than intelligence. It may require a physical body to achieve true AGI.

I agree with your statement about consciousness; that's why I excluded it from the definition.

But I somewhat disagree about GATO, though only slightly, and that's more to the point of my post. I don't know exactly how to define Proto-AGI, or in how many general tasks it must perform at a level equal to or greater than a human. But I'd definitely define full AGI as capable of all human intellectual tasks at a level equal to or greater than humans.

So GATO might be Proto-AGI today by that definition. It's general; it's definitely not narrow. But I'm trying to say 2023 will be when we get a general AI that is able to meet or surpass human ability across most intellectual tasks. I think memory and reinforcement learning will be the key to achieving something that's basically AGI next year, but we'll probably move the goalposts as it gets closer.

2

Lone-Pine t1_iw687ao wrote

It's been a few years since my last latin class, what century is XVI century?

1

abc-5233 t1_iw8mnkw wrote

It’s the 16th century, so the 1500s. But it was just an example of a time before the understanding of atoms, elements, and nuclear fission and fusion.
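For anyone else rusty on Roman numerals, the conversion rule is mechanical: add each symbol's value, but subtract it when a smaller value precedes a larger one (so IV = 4). A minimal sketch:

```python
# Roman numeral to integer: sum the symbol values, subtracting any
# value that sits immediately before a larger one (e.g. the I in IV).
VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral):
    total = 0
    for i, ch in enumerate(numeral):
        v = VALUES[ch]
        if i + 1 < len(numeral) and VALUES[numeral[i + 1]] > v:
            total -= v  # subtractive notation, as in IV or XC
        else:
            total += v
    return total

print(roman_to_int("XVI"))  # 16
```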

1

squareOfTwo t1_ixzrybf wrote

Everyone who says that GATO is Proto-AGI or "the definition of AGI" is either ignorant or doesn't get what AGI is about. Hint: GATO can't even learn with RL.

1