Submitted by Shiningc t3_11wj2l1 in Futurology
Shiningc OP t1_jcz9fsm wrote
Reply to comment by Surur in The difference between AI and AGI by Shiningc
> And you think this is the end of the line? With in-context learning already working?
Doesn't matter; they're just statistics and probabilities. It won't somehow evolve into general intelligence.
Surur t1_jcz9txw wrote
> Doesn't matter; they're just statistics and probabilities. It won't somehow evolve into general intelligence.
So you specifically don't think statistics and probabilities will allow
> an intelligence that is capable of doing any kind of intelligent tasks
Which task, specifically, do you think an LLM can't do?
Shiningc OP t1_jczadnh wrote
> Which task, specifically, do you think an LLM can't do?
Anything that requires more than statistics and probabilities. Are you claiming that all intelligence is somehow rooted in statistics and probabilities?
Surur t1_jczczkh wrote
Specifically human intelligence, yes, since that is how human neural networks work.
Shiningc OP t1_jczhs46 wrote
How do you know how human neural networks work? And why would one branch of mathematics somehow extend into other areas of intelligence?
Surur t1_jczia7a wrote
Because biologists have told us how they work. We can actually examine the neurons, the axons, the dendrites and the synapses.
So we know how biological neural networks work, and we simulate how they work in computer neural networks.
We know it's just stats and probabilities.
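For reference, the simplification being described is the standard artificial-neuron model: a weighted sum of inputs passed through a nonlinearity, standing in for a biological neuron integrating signals on its dendrites and firing. A minimal sketch in Python; the input values, weights and bias below are arbitrary placeholders, not taken from any real model.

```python
import math

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of the incoming signals ("dendrites"), then a sigmoid
    # squashing function standing in for the neuron's firing response.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Arbitrary illustrative values, purely for demonstration.
print(artificial_neuron([0.5, 0.2, 0.9], weights=[1.2, -0.7, 0.4], bias=0.1))
```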
Shiningc OP t1_jczizc4 wrote
Biologists haven't said anything about how human neural networks work.
That's like saying all mathematical problems can somehow be solved with statistics and probabilities. And that's just sheer nonsense.
Surur t1_jczkjbx wrote
> Biologists haven't said anything about how human neural networks work.
Get educated https://en.wikipedia.org/wiki/Neural_circuit
> That's like saying all mathematical problems can somehow be solved with statistics and probabilities. And that's just sheer nonsense.
Of course we can. 1 and 0 are both part of the probability cloud.
You seem to think that because NNs are currently bad at symbolic thinking, they are not intelligent. The funny thing is that 30 years ago, people thought pattern matching was what set human intelligence apart from computers.
It's just a question of time.
Shiningc OP t1_jczn6ic wrote
> Get educated https://en.wikipedia.org/wiki/Neural_circuit
Where does that say anything about biological neural networks being probabilistic?
This also contradicts your claims:
>The connections between neurons in the brain are much more complex than those of the artificial neurons used in the connectionist neural computing models of artificial neural networks.
> Of course we can. 1 and 0 are both part of the probability cloud.
And how would being part of a probability cloud solve mathematical problems?
Surur t1_jcznsc2 wrote
> The connections between neurons in the brain are much more complex than those of the artificial neurons used in the connectionist neural computing models of artificial neural networks.
I said upthread that they are a simplified version. You know, like aeroplane wings are a simplified version of pigeon wings. That does not mean they don't work by the same principle.
> And how would being part of a probability cloud solve mathematical problems?
100% of the time, 1+1=2.
Pretty simple.
Shiningc OP t1_jczq1ad wrote
> 100% of the time, 1+1=2.
That makes no sense. 1+1=2 is not a probability.
Probability only says something like there's a 50% chance that 1+1=2 and a 50% chance that 1+1=3.
But you need to come up with a non-probabilistic solution in the first place.
Surur t1_jcztz42 wrote
If you ask an LLM, it would indeed assign a probability to 1+1=2. That probability would not be 100%, but it would be very close.
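To make that concrete: a language model scores every token in its vocabulary, and a softmax turns those scores into a probability distribution over the next token. A minimal sketch; the toy vocabulary and logit values below are made up for illustration, but a real model prompted with "1+1=" would similarly put most, though not all, of the probability mass on "2".

```python
import math

def softmax(logits):
    # Shift by the max for numerical stability, then normalise into probabilities.
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates and logits for the prompt "1+1=".
vocab = ["2", "3", "11", "two"]
logits = [9.0, 1.0, 0.5, 2.0]  # made-up numbers
for token, p in zip(vocab, softmax(logits)):
    print(f"P(next token = {token!r}) = {p:.4f}")
# "2" ends up with roughly 99.9% of the mass: close to 1, but never exactly 1.
```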
Shiningc OP t1_jczxlg9 wrote
And 1+1=2 is a non-probabilistic answer that can't be arrived at with probabilities.
Surur t1_jd004in wrote
We are going in circles a bit, but your point, of course, is that current AI models can't do symbolic manipulation, which is very evident when they do complex maths.
The real question, however, is whether you can implement a classic algorithm in a probabilistic neural network, and the answer, of course, is yes.
In particular, recurrent neural networks, which are in theory Turing complete, can emulate any classic computer algorithm, including 1+1.
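As a toy illustration of that last point: the network does not have to approximate. With hand-chosen weights and an identity activation, a single neuron computes a + b exactly; this is a sketch of the principle, not of how a trained network actually represents arithmetic.

```python
def linear_neuron(inputs, weights, bias=0.0):
    # Identity activation: the output is just the weighted sum plus the bias.
    return sum(x * w for x, w in zip(inputs, weights)) + bias

# Hand-set weights [1, 1] and zero bias turn the neuron into an exact adder.
def add(a, b):
    return linear_neuron([a, b], [1.0, 1.0])

print(add(1, 1))  # 2.0, deterministically
print(add(3, 4))  # 7.0
```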
Shiningc OP t1_jd1d3ns wrote
Again, how would you come up with mathematical axioms with just probabilities?
That contradicts Gödel's incompleteness theorems, which mathematically prove that you cannot come up with mathematical axioms from within a mathematical system.
Even if you could replicate a biological neural network that happens to be Turing complete, that still says nothing about programming human-level intelligence, which is a different matter altogether.
Surur t1_jd2102f wrote
Are you implying some kind of divine intervention? Because, by definition, any one Turing-complete system can emulate any other.