footurist t1_ivyw1fq wrote

Aggressive TL;DR: the term is inadequately defined.

I've read these "Proto-AGI" definitions here before, but to me they mostly don't make sense.

Perhaps there's debate about the definition of AGI itself, but in general ( heh ) the G in it should imply the ability to learn any task, continuously and the way a human would ( within constraints, because total generality isn't really achievable with our current knowledge, I believe, from what I've read ).

The emergence of these definitions lined up chronologically with the rise of transformer-based LLMs, I believe, especially GPT-3. That timing makes sense.

However, these architectures don't learn like humans do at all. They don't efficiently leverage armadas of extremely subtle abstractions the way our brains do ( the kind that can be demonstrated in simple thought experiments, but which I'm too tired to go through here; think carefully about the stages of working out the rules of a roundabout for the first time, for example ), and they don't learn continuously. They're more like impressive data crunchers than the efficient abstracters our brains are.

To me it's only logical that this ability, to potentially learn each and every task that crosses one's mind and approach human level at it ( again, within the constraints mentioned above ), leveraging efficient transfer learning along the way, should be deemed a requirement of the definition, because otherwise the agent wouldn't really be a general learner, merely a sort of wasteful imitator of one. That is especially true of the current LLMs, however impressive they are.

So, in conclusion: if you grant that the term needs improving, something resembling what's talked about in this post could indeed surface in the coming year. But as the term stands, no, imo.

2

AdditionalPizza OP t1_ivyznve wrote

To clarify the definition I'm using a little more: simply something between narrow AI and AGI. Something that can't be classified as just another narrow AI, or as several narrow AIs, but that also hasn't reached the pinnacle of human ability in every task. It's a very broad range, sure, but undeniably not just narrow AI.

As for an LLM's ability to learn: I don't have a source on hand at the moment without searching for it, but researchers have shown success with reinforcement learning during pretraining for language models, and the resulting models were able to surpass the original algorithms they were pretrained on. I strongly believe RL tied into an LLM will be explored heavily next year, and the results will lead to something most people would call, or that strongly resembles, a Proto-AGI. The term isn't official, of course, but that will be the point where people start seriously considering AGI on a shorter timeframe.
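
To make that concrete, here's a toy sketch of what "RL during pretraining" can look like ( my own illustration, not from any paper this thread cites; the class, dimensions, and data are all made up ): a decision-transformer-style setup, where an ordinary causal language model is trained with plain next-token prediction, except the tokens encode logged RL trajectories instead of text.

```python
# Hypothetical sketch only: a causal transformer pretrained on tokenized
# RL trajectories (state/action/reward streams) rather than text. All
# names and sizes are illustrative, not from any specific system.
import torch
import torch.nn as nn

class TrajectoryTransformer(nn.Module):
    def __init__(self, vocab_size=256, d_model=128, n_heads=4, n_layers=2, max_len=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)  # trajectory-token embeddings
        self.pos = nn.Embedding(max_len, d_model)       # learned positions
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)      # next-token logits

    def forward(self, tokens):
        t = tokens.shape[1]
        x = self.embed(tokens) + self.pos(torch.arange(t, device=tokens.device))
        causal = nn.Transformer.generate_square_subsequent_mask(t).to(tokens.device)
        return self.head(self.blocks(x, mask=causal))

# Pretraining loop: identical to LLM pretraining, except the "corpus" is
# trajectories produced by some source RL algorithm.
model = TrajectoryTransformer()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()
for step in range(100):
    batch = torch.randint(0, 256, (8, 64))  # stand-in for real tokenized trajectories
    logits = model(batch[:, :-1])
    loss = loss_fn(logits.reshape(-1, 256), batch[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
```

The point being: nothing about the architecture changes, only the training data does, which is why I think RL and LLMs will merge so quickly.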

I'm not saying anything about a public release of this, though. Just its existence.

1

footurist t1_ivz26ai wrote

The inadequacy lies in the use of the term "prototype", which has a reasonably well defined meaning: basically, it serves as an MVP for one or more concepts that are themselves well-defined, so that their feasibility and worth can be demonstrated. In the case at hand, the concept is true generality of learning as we know it, which the current mainstream paradigm is definitively not capable of. As mentioned before, these models might achieve a limited imitation of it, to an extent that's probably quite hard to guesstimate, but never the real thing ( in their current form; evolution can always change the landscape of course, but then they wouldn't be the same thing anymore ).

I recommend some YouTube videos by Numenta. Jeff Hawkins can explain these kinds of things to laymen incredibly well ( he was on Lex's podcast as well ).

2

AdditionalPizza OP t1_iw00jyx wrote

Your definition of prototype isn't the full definition of the word, though. A prototype can simply be the inspiration for later models. As in: we're on the right track, and probably only adjustments/tweaking/fine-tuning, compute, and data away from being able to create full AGI. I think memory is a hurdle we will overcome shortly.

1

footurist t1_iw01b6x wrote

It is, in the sense that a prototype must prove the concept. If it doesn't, it's maybe a precursor of some kind, but not a prototype.

2

AdditionalPizza OP t1_iw02859 wrote

I'm saying that in 2023 the concept will be proven; we will see a concrete roadmap toward AGI because of the successes that SOTA models will achieve.

But I think our very slight difference over two basically synonymous words is more pedantic than I feel like debating, haha. Precursor and prototype are so similar that I see no reason to argue either way.

2

AsheyDS t1_iw0vwj7 wrote

Similar in your estimation. I'm guessing you don't work in a technical field. Proto-AGI is just not a good term; it's wildly misleading to the general public and enthusiasts alike, and you're not doing anyone any favors by propagating it. You yourself are a victim of its effects. All it does is create the sense that we're almost there, that current architectures are sufficient for AGI, and that any outstanding 'problems' aren't really problems anymore. That's nothing but pure speculation. We're not even sure whether current transformers are on the same spectrum of classification as AGI. Who's to say it's a linear path? Narrow AI, even an interoperable collection of narrow AIs, may yet hit a wall in terms of capability and may not be the way forward. We just don't know yet. Nobody is stopping you from speculating, but using this term is highly inaccurate.

2

AdditionalPizza OP t1_iw1b3ou wrote

As if people in technical fields aren't notoriously awful at predicting what's best for the general public.

I'm not doing anyone a disservice, and I'm not propagating anything negative here. My post is literally a poll asking for people's opinions, and stating my own.

1