Submitted by Shiningc t3_11wj2l1 in Futurology
Since most people seem to have no idea about the difference between an AI and an AGI, the distinction needs to be spelled out.
AI stands for Artificial Intelligence, and AGI stands for Artificial General Intelligence.
Well, what is "general" intelligence? Before we get into this, we'll need to understand the concept of "Turing completeness".
Alan Turing was the "father of modern computing", and he imagined a theoretical computer that could carry out any computation that can be carried out at all. We call it the Turing machine. A machine that can do everything a Turing machine can do is called "Turing complete", barring infinite time and memory. Any modern CPU is basically Turing complete, and the human brain is arguably Turing complete too.
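To make "Turing machine" less abstract, here's a minimal sketch in Python (my own toy example, nothing from Turing himself): a simulator driven by a transition table, running a trivial machine that flips every bit on its tape. The point is that this dumb read/write/move loop is, given enough time and tape, enough to compute anything computable.

```python
def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    """transitions maps (state, symbol) -> (new_state, new_symbol, move)."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):     # guard: a Turing machine may never halt
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = transitions[(state, symbol)]
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells[i] for i in range(lo, hi + 1)).strip(blank)

# A toy machine: flip every bit of the input, halt on the first blank cell.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine(flip_bits, "10110"))  # prints 01001
```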
Alan Turing imagined that eventually, people could program human-level intelligence into such a computer. This is what we used to just call an "AI", and what we'd now call an "AGI". The "Turing test" is also named after him: a test of whether a machine can fool a human into believing it's talking to another human rather than a machine.
So anyway, what is a general intelligence, then? A general intelligence is an intelligence capable of doing any kind of intelligent task. It can create art, learn and speak languages, have consciousness and morality, do science, philosophy, and mathematics, use imagination, and do virtually anything else you could ever imagine doing.
The current "AI" is basically LLMs and machine learning, which are at bottom just algorithms based on statistics and probabilities. This is obviously not a general intelligence. It's not Turing complete. It's a narrow intelligence that can only do statistics and probabilities. It's not an AGI, and it's not human-level intelligence.
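For a concrete (and deliberately crude) picture of what "statistics and probabilities" means here, consider a toy bigram model in Python: it predicts each next word purely from how often that word followed the previous one in the training text. An LLM is enormously more sophisticated than this, but it's the same family of trick: counting and sampling, not understanding.

```python
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat saw the dog".split()

# Count how often each word follows each other word.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def generate(word, length=6):
    out = [word]
    for _ in range(length):
        counts = followers[out[-1]]
        if not counts:               # dead end: word never had a follower
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])  # sample by frequency
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat and"
```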
So, what does this mean? It means that no, an AGI is nowhere near close to being created. In the end, the AI is only doing a bunch of statistics and probabilities. The AI "looks" as if it's performing human-level intelligent tasks, but in reality it's just copying and aping humans. The AI can't do art, or science, or philosophy, or mathematics on its own. It doesn't have consciousness, and it can't "think" like a human can. It's nowhere near human-level intelligence.
If you think that an AGI or the singularity is near and there's going to be an imminent AI revolution or apocalypse, then no, it's nowhere near close. Of course an AGI could eventually be created, but first we'll need to understand how human intelligence works, or how the human brain manages to have a general intelligence.
Surur t1_jcyks6i wrote
You write a definition and then you draw the wrong conclusion.
The main issue with LLMs is that they are currently static (no continuous learning, apart from in-context learning), but otherwise they are pretty close to general intelligence. Current feed-forward LLMs are not Turing complete, but once the loop gets closed, they would be.
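To illustrate what "closing the loop" could look like, here's a minimal sketch, assuming a hypothetical `call_model` function standing in for one feed-forward pass of an LLM (the `DONE:` convention is made up for the example, not any real API). A single pass only maps input to output once, but a loop plus a rewritable scratchpad gives it the read/write/repeat structure of a Turing machine:

```python
def call_model(scratchpad: str) -> str:
    """Placeholder for a single feed-forward pass of an LLM."""
    raise NotImplementedError("swap in a real model call here")

def run_with_loop(task: str, max_steps: int = 50) -> str:
    scratchpad = task                     # external memory: the "tape"
    for _ in range(max_steps):
        output = call_model(scratchpad)   # one feed-forward pass per step
        if output.startswith("DONE:"):    # model signals it has finished
            return output.removeprefix("DONE:").strip()
        scratchpad = output               # write back and go around again
    return scratchpad
```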
> Of course an AGI could eventually be created, but first we'll need to understand how human intelligence works.
This is obviously not true, since your mother made you, and she knows nothing about AGI.