Submitted by Shiningc t3_11wj2l1 in Futurology

Since most people seem to have no idea about the difference between an AI and an AGI, the distinction needs to be spelled out.

AI stands for Artificial Intelligence, and AGI stands for Artificial General Intelligence.

Well, what is "general" intelligence? Before we get into this, we'll need to understand the concept of "Turing completeness".

Alan Turing was the "father of CPUs", and he imagined a theoretical computer that could compute any function that is physically computable. He called it the Turing machine. A CPU that is equivalent in function to this Turing machine is called "Turing complete", barring infinite time and memory. Any modern CPU is basically Turing complete. The human brain is Turing complete.
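
As a toy illustration of the idea (a hypothetical sketch, not anything Turing actually wrote), here is a tiny Turing-machine-style program that increments a binary number written on a tape:

```python
# A minimal, illustrative "Turing machine": a head walks a tape of symbols
# and rewrites them by fixed rules. This one increments a binary number,
# e.g. "1011" -> "1100".
def increment_tape(tape):
    tape = list(tape)
    head = len(tape) - 1          # start at the least-significant bit
    while head >= 0:
        if tape[head] == '1':
            tape[head] = '0'      # carry propagates left
            head -= 1
        else:
            tape[head] = '1'      # absorb the carry and halt
            break
    else:
        tape.insert(0, '1')       # carry ran off the tape; extend it
    return ''.join(tape)

print(increment_tape('1011'))  # -> '1100'
```

The point of Turing completeness is that a machine of this kind, given enough tape and time, can run any computation at all.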

Alan Turing imagined that eventually, people could program human-level intelligence into such a CPU. This is what we used to call an "AI", and it is really what we would now call an "AGI". The "Turing test" is also named after him: a test of whether an AI can fool a human into believing that it's actually a human and not an AI.

So anyway, what is a general intelligence, then? A general intelligence is an intelligence that is capable of doing any kind of intelligent task. Such an intelligence can create art, learn and speak languages, have consciousness, have morality, do science and philosophy, do mathematics, use imagination, and do virtually anything that you could ever imagine doing.

The current AI is basically LLMs or machine learning, which are just algorithms based on statistics and probabilities. This is obviously not a general intelligence. It's not Turing complete. It's a singular intelligence that can only do statistics and probabilities. It's not an AGI, and it's not human-level intelligence.
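
To make "statistics and probabilities" concrete, here is a toy bigram model (a deliberate caricature; real LLMs are vastly more sophisticated, but the probabilistic flavour is similar) that predicts the next word purely from counted frequencies:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": predict the next word purely from how often
# it followed the previous word in a tiny made-up corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_distribution(word):
    counts = bigrams[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```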

So, what does this mean? It means that no, an AGI is nowhere near close to being created. In the end, the AI is only doing a bunch of statistics and probabilities. The AI "looks" as if it's performing human-level intelligent tasks, but in reality it's just copying and aping humans. The AI can't do art, science, philosophy, or mathematics on its own. It doesn't have consciousness, and it can't "think" like a human can. It's nowhere near human-level intelligence.

If you think that an AGI or the singularity is near and there's going to be an imminent AI revolution or apocalypse, then no, it's nowhere near close. Of course, an AGI could be created tomorrow, but first we'd need to understand how human intelligence works, or how the human brain manages to have a general intelligence.

0

Comments


Surur t1_jcyks6i wrote

You write a definition and then you draw the wrong conclusion.

The main issue with LLMs is that they are currently static (no continuous learning), though they do have in-context learning; otherwise, they are pretty close to general intelligence. Current feed-forward LLMs are not Turing complete, but once the loop gets closed they would be.
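
As a rough sketch of what "closing the loop" buys you (a hypothetical toy, using a simple numeric rule as a stand-in for a fixed forward pass):

```python
# A single feed-forward step is just a fixed function state -> state.
# Feeding its output back in, with a halting condition, turns it into an
# open-ended computation. The Collatz rule below is only a stand-in for a
# forward pass, chosen to make the point.
def feed_forward_step(state):
    return state // 2 if state % 2 == 0 else 3 * state + 1

def closed_loop(state):
    steps = 0
    while state != 1:             # the loop supplies unbounded iteration
        state = feed_forward_step(state)
        steps += 1
    return steps

print(closed_loop(27))  # 111 steps; no single fixed-depth call gets you this
```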

> Of course, an AGI could be created tomorrow, but first we'd need to understand how human intelligence works.

This is obviously not true, since your mother made you, and she knows nothing about AGI.

2

Shiningc OP t1_jcym1qo wrote

It's static because it's just statistics and probabilities.

>This is obviously not true, since your mother made you, and she knows nothing about AGI.

I don't see what your point is. My mother doesn't know anything about how human intelligence works.

1

Surur t1_jcyn9yy wrote

> It's static because it's just statistics and probabilities.

Just like anything else.

> My mother doesn't know anything about how human intelligence works.

Exactly. So clearly you can also make an AGI without knowing how it works.

3

Shiningc OP t1_jcyz7lo wrote

>Just like anything else.

Except for human intelligence, which is clearly not static.

> Exactly. So clearly you can also make an AGI without knowing how it works.

If you want to program it, then no.

1

Surur t1_jcz6o4q wrote

> Except for human intelligence, which is clearly not static.

And you think this is the end of the line? With in-context learning already working?

> If you want to program it, then no.

That approach was abandoned years ago.

3

Shiningc OP t1_jcz9fsm wrote

>And you think this is the end of the line? With in-context learning already working?

Doesn't matter, they're just statistics and probabilities. It won't somehow evolve into general intelligence.

1

Surur t1_jcz9txw wrote

> Doesn't matter, they're just statistics and probabilities. It won't somehow evolve into general intelligence.

So you specifically don't think statistics and probabilities will allow

> an intelligence that is capable of doing any kind of intelligent task

Which task specifically do you think LLMs can't do?

2

Shiningc OP t1_jczadnh wrote

> Which task specifically do you think LLMs can't do?

Anything that requires more than statistics and probabilities. Are you claiming that all intelligence is somehow rooted in statistics and probabilities?

1

Surur t1_jczczkh wrote

Specifically human intelligence, yes, since that is how human neural networks work.

1

Shiningc OP t1_jczhs46 wrote

How do you know how human neural networks work? And why would a branch of mathematics somehow branch into other areas of intelligence?

1

Surur t1_jczia7a wrote

Because biologists tell us how they work. We can actually examine the neurons, the axons, the dendrites and the synapses.

So we know how biological neural networks work, and we simulate how they work in computer neural networks.

We know it's just stats and probabilities.
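
For illustration, here is the textbook simplified model of a single artificial neuron (the weights and numbers below are made up):

```python
import math

# A single artificial neuron, the simplified model loosely inspired by
# biological ones: a weighted sum of inputs ("synapse strengths") pushed
# through a squashing function. All numbers are made up for illustration.
def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-activation))  # sigmoid: output in (0, 1)

print(neuron([1.0, 0.5], [0.8, -0.4], bias=0.1))  # ~0.67
```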

1

Shiningc OP t1_jczizc4 wrote

Biologists haven't said anything about how human neural networks work.

That's like saying all mathematical problems can somehow be solved with statistics and probabilities. And that's just sheer nonsense.

1

Surur t1_jczkjbx wrote

> Biologists haven't said anything about how human neural networks work.

Get educated https://en.wikipedia.org/wiki/Neural_circuit

> That's like saying all mathematical problems can somehow be solved with statistics and probabilities. And that's just sheer nonsense.

Of course we can. 1 and 0 are both part of the probability cloud.

You seem to think that because NNs are currently bad at symbolic thinking, they are not intelligent. The funny thing is that 30 years ago people thought pattern matching was what set human intelligence apart from computers.

It's just a question of time.

1

Shiningc OP t1_jczn6ic wrote

>Get educated https://en.wikipedia.org/wiki/Neural_circuit

Where does that say anything about biological neural networks being probabilistic?

Also contradicting your claims:

>The connections between neurons in the brain are much more complex than those of the artificial neurons used in the connectionist neural computing models of artificial neural networks.


>Of course we can. 1 and 0 are both part of the probability cloud.

And how would being in probability solve mathematical problems?

1

Surur t1_jcznsc2 wrote

> The connections between neurons in the brain are much more complex than those of the artificial neurons used in the connectionist neural computing models of artificial neural networks.

I said upthread that they are a simplified version. You know, like aeroplane wings are a simplified version of pigeon wings. That doesn't mean they don't work on the same principle.

> And how would being in probability solve mathematical problems?

100% of the time, 1+1 = 2.

Pretty simple.

1

Shiningc OP t1_jczq1ad wrote

> 100% of the time, 1+1 = 2.

That makes no sense. 1+1=2 is not a probability.

Probability says there's a 50% chance that 1+1=2 or 1+1=3.

But you need to come up with a non-probabilistic solution in the first place.

1

Surur t1_jcztz42 wrote

If you ask an LLM, it would very well assign a probability to 1+1=2. That probability would not be 100%, but it would be very close.
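
For illustration, a softmax over made-up logits shows what "close to 100%" looks like for the token after "1+1=":

```python
import math

# Invented logits for candidate next tokens after "1+1=". The point is that
# the model outputs a distribution, with almost all of the mass on "2".
logits = {"2": 9.0, "3": 1.0, "11": 0.5}
z = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / z for tok, v in logits.items()}
print(probs)  # {'2': ~0.999, '3': ~0.0003, '11': ~0.0002}
```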

1

Shiningc OP t1_jczxlg9 wrote

And 1+1=2 is a non-probabilistic answer that can't be arrived at with probabilities.

1

Surur t1_jd004in wrote

We are going in circles a bit, but your point, of course, is that current AI models can't do symbolic manipulation, which is very evident when they do complex maths.

The real question, however, is whether you can implement a classic algorithm in a probabilistic neural network, and the answer, of course, is yes.

Especially recurrent neural networks, which are, in theory, Turing complete, can emulate any classic computer algorithm, including 1+1.
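
As a toy illustration (a hand-wired recurrent cell whose "weights" are chosen by hand rather than learned), carrying state across time steps is enough to compute binary addition exactly:

```python
# A hand-wired "recurrent cell": a hidden carry state threaded through the
# time steps computes binary addition exactly. Inputs are assumed to be
# equal-length lists of bits, most significant bit first.
def rnn_add(a_bits, b_bits):
    carry = 0                        # the recurrent hidden state
    out = []
    for a, b in zip(reversed(a_bits), reversed(b_bits)):
        total = a + b + carry
        out.append(total % 2)        # cell output at this time step
        carry = total // 2           # state passed to the next step
    if carry:
        out.append(carry)
    return list(reversed(out))

print(rnn_add([0, 1], [0, 1]))  # 1 + 1 -> [1, 0], i.e. binary 10 = 2
```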

1

Shiningc OP t1_jd1d3ns wrote

Again, how would you come up with mathematical axioms with just probabilities?

That contradicts Gödel's incompleteness theorems, which prove mathematically that you cannot come up with the axioms of a mathematical system from within that system.

Even if you could replicate the biological neural network, which happens to be Turing complete, that still says nothing about programming human-level intelligence, which is a different matter altogether.

1

Surur t1_jd2102f wrote

Are you implying some kind of divine intervention? Because by definition any one Turing complete system can emulate any other.

1

Shiningc OP t1_jd217ty wrote

Yes, but in order to emulate something you'd have to program the emulation first.

1

Surur t1_jd219lr wrote

Evolution and exposure to data programmed humans.

1

IGC-Omega t1_jcyy64v wrote

AGI and AI aren't separate things; AGI is just a type of AI. An AGI would be able to multitask rather than being specialized for a single thing.

The AI we have now is ANI (Artificial Narrow Intelligence), and above AGI is ASI (Artificial Super Intelligence); that's when things start getting insane.

An ASI would be a god, plain and simple.

−1

AcrobaticKitten t1_jd2ch0l wrote

Turing had nothing to do with CPUs.
This is totally wrong.

1

Shiningc OP t1_jd2dkvu wrote

Turing came up with a model of a theoretical general-purpose computer called the Turing machine. A machine equivalent to it is called Turing complete, and pretty much all modern general-purpose CPUs abide by that model.

>A Turing machine is a general example of a central processing unit (CPU) that controls all data manipulation done by a computer, with the canonical machine using sequential memory to store data.

https://en.wikipedia.org/wiki/Turing_machine

1

ics-fear t1_jd355a9 wrote

"Atoms combining together is just simple chemistry, no way they can form a living being"

"Animals are just reproduction engines, they can only adapt to environment, but could never become intelligent"

"Nature and evolution know nothing how brains and intelligence work. They can create an intelligent being"

We see everywhere how simple low-level systems produce novel, complex high-level effects. You are making an extremely controversial claim that computation and statistics can't form AGI, but you are not providing any proof.

If you want to see the level at which LLMs can develop novel, unexpected capabilities, try playing a game of chess with GPT4. After reaching some position never encountered before, which it couldn't have seen anywhere, ask it to explain the current situation on board, motivation behind its previous move, next move suggestions and strategies. Of course, the current GPT version does not play perfect chess, but it still makes good legal moves and has decent understanding of what's happening on board. Now recall that this thing is not a chess engine, it was never trained to play chess. It just got fed a lot of chess games and books on chess strategy.

0

Shiningc OP t1_jd3yk5r wrote

Actually, you're the one who needs to prove that statistics will somehow evolve into an AGI…

You can’t prove a negative.

1