Submitted by Shiningc t3_11wj2l1 in Futurology
Surur t1_jcyks6i wrote
You write a definition and then you draw the wrong conclusion.
The main issue with LLMs is that they are currently static (no continuous learning), though they do have in-context learning; otherwise they are pretty close to general intelligence. Current feed-forward LLMs are not Turing complete, but once the loop gets closed they would be.
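A minimal sketch of what "closing the loop" means: each feed-forward pass emits one token, and feeding that token back into the context makes the whole process recurrent. The `next_token` stub below is a hypothetical stand-in for a real LLM forward pass.

```python
def next_token(context: list[str]) -> str:
    # Hypothetical stand-in: a real LLM would compute a probability
    # distribution over its vocabulary given the context and sample from it.
    return "word" if len(context) < 8 else "<eos>"

def generate(prompt: list[str], max_steps: int = 32) -> list[str]:
    context = list(prompt)
    for _ in range(max_steps):
        token = next_token(context)   # one feed-forward pass
        if token == "<eos>":
            break
        context.append(token)         # output fed back as input: the "loop"
    return context

print(generate(["Hello"]))  # ['Hello', 'word', 'word', ...]
```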
> Of course an AGI could be created tomorrow, but first we'll need to understand how human intelligence works.
This is obviously not true, since your mother made you, and she knows nothing about AGI.
Shiningc OP t1_jcym1qo wrote
It's static because it's just statistics and probabilities.
>This is obviously not true, since your mother made you, and she knows nothing about AGI.
I don't see what your point is. My mother doesn't know anything about how human intelligence works.
Surur t1_jcyn9yy wrote
> It's static because it's just statistics and probabilities.
Just like anything else.
> My mother doesn't know anything about how human intelligence works.
Exactly. So clearly you can make an AGI without knowing how it works, too.
Shiningc OP t1_jcyz7lo wrote
>Just like anything else.
Except for human intelligence, which is clearly not static.
>Exactly. So clearly you can make an AGI without knowing how it works also.
If you want to program it, then no.
Surur t1_jcz6o4q wrote
> Except for human intelligence, which is clearly not static.
And you think this is the end of the line? With in-context learning already working?
> If you want to program it, then no.
That approach was abandoned years ago.
Shiningc OP t1_jcz9fsm wrote
>And you think this is the end of the line? With in-context learning already working?
Doesn't matter; they're just statistics and probabilities. They won't somehow evolve into general intelligence.
Surur t1_jcz9txw wrote
> Doesn't matter, they're just statistics and probabilities. It won't somehow evolve into general intelligence.
So you specifically don't think statistics and probabilities will allow
> an intelligence that is capable of doing any kind of intelligent tasks
Which task specifically do you think LLMs can't do?
Shiningc OP t1_jczadnh wrote
>Which task specifically do you think LLM cant do?
Anything that requires more than statistics and probabilities. Are you claiming that all intelligence is somehow rooted in statistics and probabilities?
Surur t1_jczczkh wrote
Specifically human intelligence, yes, since that is how human neural networks work.
Shiningc OP t1_jczhs46 wrote
How do you know how human neural networks work? And why would a branch of mathematics somehow branch into other areas of intelligence?
Surur t1_jczia7a wrote
Because biologists tell us how they work. We can actually examine the neurons, axons, dendrites, and synapses.
So we know how biological neural networks work, and we simulate how they work in computer neural networks.
We know it's just stats and probabilities.
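As a rough illustration of that simplification (the names and numbers below are illustrative, not biological data): synapse strengths become weights, dendritic summation becomes a weighted sum, and axon firing becomes an activation function.

```python
import math

def sigmoid(x: float) -> float:
    # Smooth 0-to-1 "firing strength" in place of an all-or-nothing spike.
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    # "Dendrites": sum incoming signals, scaled by synapse-like weights.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    # "Axon": emit an output between 0 and 1.
    return sigmoid(total)

print(neuron([1.0, 0.5], [0.8, -0.3], 0.1))  # ~0.68
```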
Shiningc OP t1_jczizc4 wrote
Biologists haven't said anything about how human neural networks work.
That's like saying all mathematical problems can somehow be solved with statistics and probabilities. And that's just sheer nonsense.
Surur t1_jczkjbx wrote
> Biologists haven't said anything about how human neural networks work.
Get educated: https://en.wikipedia.org/wiki/Neural_circuit
> That's like saying all mathematical problems can somehow be solved with statistics and probabilities. And that's just sheer nonsense.
Of course we can. 1 and 0 are both part of the probability cloud.
You seem to think that because NNs are currently bad at symbolic thinking, they are not intelligent. The funny thing is that 30 years ago, people thought pattern matching was what set human intelligence apart from computers.
It's just a question of time.
Shiningc OP t1_jczn6ic wrote
>Get educated https://en.wikipedia.org/wiki/Neural_circuit
Where does that say anything about biological neural networks being probabilistic?
Also contradicting your claims:
>The connections between neurons in the brain are much more complex than those of the artificial neurons used in the connectionist neural computing models of artificial neural networks.
>Of course we can. 1 and 0 are both part of the probability cloud.
And how would probabilities solve mathematical problems?
Surur t1_jcznsc2 wrote
> The connections between neurons in the brain are much more complex than those of the artificial neurons used in the connectionist neural computing models of artificial neural networks.
I said upthread that they are a simplified version. You know, like aeroplane wings are a simplified version of pigeon wings. That does not mean they don't work by the same principle.
> And how would being in probability solve mathematical problems?
100% of the time, 1+1=2.
Pretty simple.
Shiningc OP t1_jczq1ad wrote
>100% of the time, 1+1 =2.
That makes no sense. 1+1=2 is not a probability.
Probability says there's a 50% chance that 1+1=2 or 1+1=3.
But you need to come up with a non-probabilistic solution in the first place.
Surur t1_jcztz42 wrote
If you ask an LLM, it would very well assign a probability to 1+1=2. That probability would not be 100%, but it would be very close.
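A sketch of how that looks, with made-up logits (the numbers are illustrative, not from any real model): a softmax over candidate next tokens after "1+1=" can put almost all of the probability mass on "2" without ever reaching exactly 1.

```python
import math

# Hypothetical next-token scores after the prompt "1+1=".
logits = {"2": 12.0, "two": 4.0, "3": 2.0, "11": 1.0}

def softmax(scores: dict[str, float]) -> dict[str, float]:
    m = max(scores.values())                      # for numerical stability
    exps = {t: math.exp(s - m) for t, s in scores.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

probs = softmax(logits)
print(probs["2"])  # ~0.9996 — very close to 1, never exactly 1
```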
Shiningc OP t1_jczxlg9 wrote
And 1+1=2 is a non-probabilistic answer that can't be arrived at with probabilities.
Surur t1_jd004in wrote
We are going in circles a bit, but your point, of course, is that current AI models can't do symbolic manipulation, which is very evident when they do complex maths.
The real question, however, is whether you can implement a classic algorithm in a probabilistic neural network, and the answer, of course, is yes.
Recurrent Neural Networks especially, which are, in theory, Turing complete, can emulate any classic computer algorithm, including 1+1.
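As a toy illustration (weights hand-set rather than learned): a single recurrent cell with an identity activation accumulates its inputs, so feeding it the sequence 1, 1 computes 1+1 exactly.

```python
def rnn_step(hidden: float, x: float, w_h: float = 1.0, w_x: float = 1.0) -> float:
    # h_t = w_h * h_{t-1} + w_x * x_t, with identity activation.
    return w_h * hidden + w_x * x

h = 0.0
for x in [1.0, 1.0]:   # the input sequence "1, 1"
    h = rnn_step(h, x)
print(h)  # 2.0 — the network deterministically computes 1+1
```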
Shiningc OP t1_jd1d3ns wrote
Again, how would you come up with mathematical axioms with just probabilities?
That contradicts Gödel's incompleteness theorems, which prove mathematically that you cannot derive a system's axioms from within the system itself.
Even if you could replicate a biological neural network that happens to be Turing complete, that still says nothing about programming human-level intelligence, which is a different matter altogether.
Surur t1_jd2102f wrote
Are you implying some kind of divine intervention? Because, by definition, any one Turing-complete system can emulate any other.
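For a concrete (if toy) illustration of emulation: one Turing-complete system (Python) running another (a minimal Turing machine). The bit-flipping machine below is just an example program.

```python
def run(transitions, tape, state="start", head=0, max_steps=1000):
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")                  # "_" is the blank
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example machine: flip every bit, halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run(flip, "1011"))  # 0100_
```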
IGC-Omega t1_jcyy64v wrote
AGI and AI are the same thing; AGI is just a type of AI. An AGI would be able to multitask rather than being specialized for a single thing.
The AI we have now is ANI (artificial narrow intelligence); above AGI is ASI (artificial superintelligence), and that's when things start getting insane.
An ASI would be a god, plain and simple.