
Kafke t1_iws9po9 wrote

Again, you completely miss what I'm saying. I'll admit that the current approach to ML/DL could result in AGI when, of its own volition and unprompted, the AI asks the user a question without that question being preprogrammed in. I.e., the AI doing something on its own, not simply responding to a prompt.

> A chess engine is an agent

Ironically, a chess program has a better chance of becoming an AGI than anything built with the current approach to AI.

> As for "static" and "unchanging" points - you can address those by continual learning, although that's not the only way you can do it.

Continual learning won't solve that. At best, you'll have a model that updates with use; it's still static in the sense that matters: it only ever reacts to input.
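To make concrete what I mean by "still static", here's a rough sketch (everything is made up for illustration, not any real library):

```
class ContinualResponder:
    def __init__(self):
        self.memory = []  # stands in for weights that shift with use

    def respond(self, prompt: str) -> str:
        self.memory.append(prompt)  # "learns" from every interaction
        return f"reply #{len(self.memory)} to: {prompt}"

bot = ContinualResponder()
print(bot.respond("hello"))  # it only does anything because we called it
# Between calls: nothing. No goals, no initiative, no activity of its own.
```

The state changes with use, but the control flow never does: the system sits idle until someone calls it.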

> There are some other points you make, but those are again simply doing the whole "current models are bad at X, therefore current methods can't achieve X".

It's not that they're "bad at X"; it's that their architecture is fundamentally incompatible with X.

> There are other interesting DL approaches that look nothing like the next token prediction.

Care to share one that isn't just a static machine accepting an input and producing an output? I try to watch the field of AI pretty closely, and I can't say I've ever seen such a thing.

> Do you believe that a computer program - a code being run on a computer, can be generally intelligent?

Sure. In theory I think it's definitely possible. I just don't think that the current approach will ever get there. Though I would like to note that "general intelligence" and an AGI are kinda different, despite the similar names. Current AI is "narrow" in that it works on one specific field or domain. The current approach is to take this narrow I/O AI and broaden the domains it can function in. This will achieve a more "general" ability and thus "general intelligence"; however, it will never achieve AGI, since an AGI has features beyond "narrow AI but with more fields". For example, such I/O machines will never be able to truly think; they'll never be able to set goals, plan for them, and act them out; and they'll never be able to interact with the world in a way unlike how current machines do.

As it stands, my computer, or any computer, does nothing until I explicitly tell it to. Until an AI can overcome this fundamental problem, it will never be an AGI, simply due to architectural design.

Such an AI will never be able to properly answer "what have you been up to lately?". Such an AI will never be able to browse through movies, watch one of its own volition, and then prompt a user about what it has just done. Such an AI will never be able to have you plug a completely new hardware device into your computer, figure out what it does, and interact with it.

The current approach will never be able to accomplish such tasks, because of how the architecture is designed: these systems are reactive, not active. A true AGI will need to be active, able to set out and accomplish tasks without being prompted. It'll need to actually think, not just respond to particular inputs with particular outputs.
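To put the reactive/active distinction in sketch form (again, everything here is made up for illustration, not any real system or library):

```
import random
import time

def reactive_system(get_prompt, model):
    # The shape of every current model: block until given input,
    # map input to output, then do nothing at all.
    while True:
        prompt = get_prompt()   # idle until someone supplies this
        print(model(prompt))    # respond, then back to waiting

def active_agent(goals, act, notify_user):
    # The shape an AGI would need: running on its own clock, choosing
    # its own goals, acting on them, and initiating contact unprompted.
    while True:
        goal = random.choice(goals)  # placeholder for real goal selection
        result = act(goal)           # acting without being asked
        notify_user(f"I just tried '{goal}' and got: {result}")
        time.sleep(1)
```

The point isn't the code, it's the control flow: in the first loop the user drives everything; in the second the program drives itself.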
