
ECEngineeringBE t1_iwq8f8j wrote

>Static. Deterministic. Unchanging. Such a thing can never be an agent, and thus can never be a true AGI

It can deterministically output probability distributions, which you can then sample, making it nondeterministic. You also say that such a system can never be an agent. A chess engine is an agent. Anything that has a goal and acts in an environment to achieve it is an agent, whether deterministic or not.
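A minimal sketch of that point (illustrative only, not any particular model's API): the "model" below is fully deterministic, always returning the same distribution, yet sampling from it makes the system's output nondeterministic.

```python
import random

# A deterministic "model": for a given prompt it always returns
# the same probability distribution over possible next tokens.
def model(prompt):
    return {"yes": 0.6, "no": 0.3, "maybe": 0.1}

# Sampling from that fixed distribution yields different outputs
# across calls, even though the model itself never changes.
def sample(dist):
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# Over many runs we see multiple distinct outputs from one
# deterministic distribution.
outputs = {sample(model("hi")) for _ in range(1000)}
```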

But even a fully deterministic program can be an AGI. If you deny this, then this turns into a philosophical debate on determinism, which I'd rather avoid.

As for "static" and "unchanging" points - you can address those by continual learning, although that's not the only way you can do it.

There are some other points you make, but those are again simply doing the whole "current models are bad at X, therefore current methods can't achieve X".

I also think that you might be pattern matching a lot to GPT specifically. There are other interesting DL approaches that look nothing like the next token prediction.

Now, I think we ought to narrow down our disagreements here, so as to avoid pointless arguments. So let me ask a concrete question:

Do you believe that a computer program - code being run on a computer - can be generally intelligent?


Kafke t1_iws9po9 wrote

Again, you completely miss what I'm saying. I'll admit that the current approach to ML/DL could result in AGI when, of its own volition and unprompted, the AI asks the user a question, without that question being preprogrammed in. I.e. the AI doing something on its own, and not simply responding to a prompt.

> A chess engine is an agent

Ironically, a chess program has a better chance of becoming an AGI than the current approach used for AI.

> As for "static" and "unchanging" points - you can address those by continual learning, although that's not the only way you can do it.

Continual learning won't solve that. At best, you'll have a model that updates with use. That's still static.

> There are some other points you make, but those are again simply doing the whole "current models are bad at X, therefore current methods can't achieve X".

It's not that they're "bad at X" it's that their architecture is fundamentally incompatible with X.

> There are other interesting DL approaches that look nothing like the next token prediction.

Care to share one that isn't just a matter of a static machine accepting input and providing an output? I try to watch the field of AI pretty closely and I can't say I've ever seen such a thing.

> Do you believe that a computer program - a code being run on a computer, can be generally intelligent?

Sure. In theory I think it's definitely possible. I just don't think that the current approach will ever get there. Though I would like to note that "general intelligence" and an AGI are kinda different, despite the similar names. Current AI is "narrow" in that it works on one specific field or domain. The current approach is to take this I/O narrow AI and broaden the domains it can function in. This will achieve a more "general" ability and thus "general intelligence", however it will not ever achieve an AGI, as an AGI has features other than "narrow AI but more fields". For example, such I/O machines will never be able to truly think, they'll never be able to plan, act out, and initiate their goals, they'll never be able to interact with the world in a way that is unlike current machines.

As it stands, my computer, or any computer, does nothing until I explicitly tell it to. Until an AI can overcome this fundamental problem, it will never be an AGI, simply due to architectural design.

Such an AI will never be able to properly answer "what have you been up to lately?". Such an AI will never be able to browse through movies, watch one of its own volition, and then prompt a user about what it has just done. Such an AI will never be able to have you plug a completely new hardware device into your computer, figure out what it does, and be able to interact with it.

The current approach will never be able to accomplish such tasks, because of how the architecture is designed. They are reactive, and not active. A true AGI will need to be active, and be able to set out and accomplish tasks without being prompted. It'll need to be able to actually think, and not just respond to particular inputs with particular outputs.
