
No_Ninja3309_NoNoYes t1_j59ufny wrote

OpenAI had teams of workers in Kenya rate ChatGPT's outputs, and those ratings were used to fine-tune it with Proximal Policy Optimisation. You could say that they upvoted or downvoted it, which is the heart of Reinforcement Learning from Human Feedback (RLHF). This is of course not how society works. We don't upvote or downvote each other, except on Reddit and other websites. AI is currently limited in the kinds of raw data it can process.
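For anyone curious what that upvote/downvote step looks like in practice, here is a minimal, illustrative sketch of the preference-ranking idea behind RLHF. To be clear, this is not OpenAI's code: the tiny `RewardModel` MLP, the random "embeddings", and the training loop are all placeholder assumptions. In the real pipeline the reward model is itself a language model scoring tokenised prompt/response pairs, and PPO then optimises the chatbot against the reward it produces.

```python
import torch
import torch.nn as nn

# Toy reward model: maps a response "embedding" to a scalar score.
# A stand-in for the full language-model reward model used in practice.
class RewardModel(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

reward_model = RewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Fake "human feedback": each pair holds the embedding of the response the
# labeller preferred and the one they rejected -- the upvote/downvote above.
chosen = torch.randn(64, 16)
rejected = torch.randn(64, 16)

for step in range(100):
    # Bradley-Terry pairwise loss: push the preferred response's reward
    # above the rejected one's. The trained reward model then supplies
    # the reward signal that PPO optimises the policy (the chatbot) against.
    loss = -torch.nn.functional.logsigmoid(
        reward_model(chosen) - reward_model(rejected)
    ).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```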

For historical reasons people value intelligence. Some experts think that language and intelligence are almost the same thing. But there are thousands of languages and thousands of ways to say similar things. Language is ambiguous.

You could say that mathematics and logic are also languages, only more formal. Of course they are not perfect, because they rest on axioms. But if a system is imperfect, that doesn't mean we should stop using it. Experimental data and statistics rule, yet some things are not measurable and other phenomena can only be estimated. That doesn't mean we have to give up on science.

In the same vein, rules like 'Don't be rude to people' and 'Do unto others as you want done unto you' sound vague and arbitrary. But how can AI develop its own morality if it doesn't understand ours? Can a child develop its own values without parents or guardians? Yes, parents and guardians can be toxic and rude. But can AI learn in a vacuum?

2

LoquaciousAntipodean OP t1_j5ch4pj wrote

👌🤩👆 This, 100% this, you have hit the nail right on the head here! Language and intelligence are not quite the same thing, but the relationship is like the one between 'fuel' and 'fire', as I see it. Language is the fuel, evolution is the oxygen, survival selection is the heat, and intelligence is the fire that emerges from the continuous interaction of the first three. And, like fire, intelligence is what gives a reason for more of those three ingredients to be gathered: to keep the fire going.

Language is ambiguous, to greater and lesser degrees: English is highly ambiguous, deliberately so, to enable poetic language, while mathematics strives structurally to eliminate ambiguity as much as possible (though there are still some tough nuts, like √2, e, π, and i = √−1, that defy easy comprehension). But intelligence is ambiguous too!

This was my whole point with the supercilious ranting about Descartes in my OP. This solipsistic, mechanistic 'magical thinking' about intelligence, which derives ultimately from the meaningless tautology of 'I think, therefore I am', is a complete philosophical dead end, and sticking with it will only bring AI developers more frustration, in my opinion.

They are, if you will, putting Descartes before Des Horses: obsessing over the mysteries of 'internal intelligence' inside the brain, and entirely forgetting about the mountains and mountains of socially-generated, culturally-encoded stories and lessons that live all around us, outside of our brains: our 'external intelligence', or 'extelligence', if you like.

That 'extelligence' is what AI is actually modelling itself on, not our 'internal intelligence'. That's why LLMs seem to have all these enigmatic 'emergent properties', I think.

1