Submitted by LoquaciousAntipodean t3_10h6tjc in singularity
I have thought for some time that much of the discussion relating to the so-called 'alignment problem' approaches the question from the wrong end, by attributing the 'problem', if such it is, to AI in the first place.
Much of the discussion around this weird semantic fantasy game of 'post-singularity AI ethics' seems, to me, to come from a logical, zero-sum-game interpretation of how society and philosophy work, which I think is fundamentally deluded.
It's what I think of as a 'Cartesian understanding' of the nature of human minds and perceived reality. Now, while I don't wish to be inflammatory, I do think Descartes was absolutely full of $hit, and that his solipsistic, ultra-individualist tautologies have been far too influential over 'western philosophy' for far too long.
Our shared reality, as humanity, is not built out of logic or facts, or other such simplistic, reductive notions; that's just wishful, magical thinking. 'Logic' and 'reason' are merely some of the many stories that we tell ourselves as humans, and they are certainly not fundamental particles of the universe. Simply put, our world is built from stories, not facts.
As I see it, libertarians, neoliberals, free-speech absolutists and other Cartesian-thinking types simply cannot wrap their heads around this idea. They cling desperately to the ridiculous proposition that there is such a thing as 'absolute truth' or 'perfect wisdom', and that these can be achieved with sufficiently powerful 'intellect'.
'Absolute truth' is fundamentally a stupid, unscientific concept, as Karl Popper showed, and this stupidity, I believe, is what has given rise to all the angsty moping and struggling over this 'alignment problem'. It worries me to see so many otherwise-brilliant engineers thinking of AI in such a reductive, simplistic, monotheistic-religious way.
Good engineers, who are supposed to have functioning brains, are still hung up on totally ridiculous, non-starter ideas, like Asimov's (deliberately parodic and satirical) 'Three Laws of Robotics'; this level of naivete is downright frightening.
Thinking of 'ethics' as merely some kind of 'mechanical governor' that can just be 'bolted onto the side' of an AI... or as some kind of 'perfect list' of 'perfect moral commandments' that we can just stamp into their brains like a golem's magical words of life... Those kinds of approaches are never, ever going to 'fix' the alignment 'problem', and I fear that such delusional Cartesian claptrap could be very dangerous indeed.
Perhaps some folks here might enjoy telling me exactly how wrong, misinformed and silly I am being with this line of thought? 🤣
(TL;DR: Cartesian thinkers, like libertarian free-speech extremists, do not understand the fundamental power of stories in human society, and this is the main cause of the 'alignment problem'. It actually has almost nothing to do with engineering, and everything to do with philosophy.)
Comfortable-Ad4655 t1_j56syf3 wrote
you are educated beyond your intellect