Submitted by Shiningc t3_11wj2l1 in Futurology
Shiningc OP t1_jczq1ad wrote
Reply to comment by Surur in The difference between AI and AGI by Shiningc
>100% of the time, 1+1 = 2.
That makes no sense. 1+1=2 is not a probability.
A probabilistic model might say there's a 50% chance that 1+1=2 and a 50% chance that 1+1=3.
But you need to come up with a non-probabilistic solution in the first place.
Surur t1_jcztz42 wrote
If you ask an LLM, it will indeed assign a probability to 1+1=2. That probability would not be 100%, but it would be very close.
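To make this concrete, here is a minimal sketch of how an LLM turns scores over candidate next tokens into probabilities via softmax. The logit values are made up for illustration; a real model would produce its own, but the shape of the result is the same: "2" gets a probability close to, yet strictly less than, 1.

```python
import math

# Hypothetical logits a model might assign to candidate next tokens
# after the prompt "1+1=" (illustrative numbers, not from a real model).
logits = {"2": 12.0, "3": 2.0, "11": 4.0}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    m = max(scores.values())  # subtract max for numerical stability
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
# The probability of "2" is very close to, but never exactly, 1,
# because every candidate token keeps a nonzero share of the mass.
print(probs["2"])
```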
Shiningc OP t1_jczxlg9 wrote
And 1+1=2 is a non-probabilistic answer that can't be arrived at through probabilities alone.
Surur t1_jd004in wrote
We are going in circles a bit, but your point, of course, is that current AI models can't do symbolic manipulation, which is very evident when they attempt complex maths.
The real question, however, is whether you can implement a classic algorithm in a probabilistic neural network, and the answer, of course, is yes.
Recurrent Neural Networks in particular are, in theory, Turing complete, and can therefore emulate any classic computer algorithm, including 1+1.
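As a sketch of that claim: the tiny network below uses hand-picked weights and step activations (no training, purely illustrative) to build a full adder out of threshold neurons, then runs it in a recurrent loop where the carry bit is the hidden state. The result is exact binary addition inside a neural-network architecture, deterministically.

```python
import numpy as np

def step(x):
    """Heaviside step activation: deterministic 0/1 output."""
    return (x >= 0).astype(int)

# A full adder built from threshold neurons with fixed, hand-chosen
# weights. Layer 1 detects how many of the inputs (a, b, carry) fire.
W1 = np.array([[1, 1, 1],    # fires when a+b+c >= 1
               [1, 1, 1],    # fires when a+b+c >= 2  (-> carry out)
               [1, 1, 1]])   # fires when a+b+c >= 3
b1 = np.array([-1, -2, -3])

def full_adder(a, b, c):
    h = step(W1 @ np.array([a, b, c]) + b1)  # h = [>=1, >=2, >=3]
    s = step(h[0] - h[1] + h[2] - 0.5)       # sum bit: exactly 1 or 3 inputs set
    carry = h[1]                              # carry bit: at least 2 inputs set
    return int(s), int(carry)

def rnn_add(x_bits, y_bits):
    """Recurrent loop: the carry is the hidden state passed between steps."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):          # least-significant bit first
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out

# 1 + 1 = 10 in binary (bits listed least-significant first)
print(rnn_add([1], [1]))
```

Nothing probabilistic survives in the output: the step activation collapses every unit to 0 or 1, which is exactly how a network can encode a classic, exact algorithm like ripple-carry addition.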
Shiningc OP t1_jd1d3ns wrote
Again, how would you come up with mathematical axioms with just probabilities?
That contradicts Gödel's incompleteness theorems, which have mathematically proven that you cannot derive a system's axioms from within that mathematical system.
Even if you could replicate the biological neural network, which happens to be Turing complete, that still says nothing about programming human-level intelligence, which is a different matter altogether.
Surur t1_jd2102f wrote
Are you implying some kind of divine intervention? Because, by definition, any one Turing-complete system can emulate any other.