petermobeter t1_j58ugnk wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
kind of reminds me of that couple in the 1930s who raised a baby chimpanzee and a baby human boy both as if they were humans. at first, the chimpanzee was doing better! but then the human boy caught up and outpaced the chimpanzee. https://www.smithsonianmag.com/smart-news/guy-simultaneously-raised-chimp-and-baby-exactly-same-way-see-what-would-happen-180952171/
sometimes i wonder how big the “training dataset” of sensory information a human baby receives as it grows up (hearing its parent(s) say its name, tasting babyfood, etc.) is compared to the training dataset of something like GPT-4. maybe we need to hook up a camera and microphone to a doll, hire 2 actors to treat it as if it’s a real baby for 3 years straight, then use the video and audio we recorded as the training dataset for an A.I. lol
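for fun, here's a back-of-envelope sketch of that comparison. every number in it is an assumption: the recording bitrates are rough consumer-grade figures, and GPT-4's training set size has never been published, so the token count is just a commonly repeated public guess.

```python
# Back-of-envelope comparison: a baby-doll's 3-year audiovisual recording
# vs. GPT-4's text corpus. All figures are rough assumptions, not measurements.

SECONDS_PER_YEAR = 365 * 24 * 3600

# The "doll with a camera and microphone" thought experiment:
# 3 years of continuous compressed recording.
years = 3
video_bps = 5_000_000  # assumed ~5 Mbit/s for compressed 1080p30 video
audio_bps = 256_000    # assumed ~256 kbit/s stereo audio

recording_bytes = years * SECONDS_PER_YEAR * (video_bps + audio_bps) / 8
print(f"doll recording: ~{recording_bytes / 1e12:.0f} TB")   # ~62 TB

# GPT-4's corpus size is not public; guesses are on the order of 10^13
# tokens. At roughly 4 bytes of text per token:
assumed_tokens = 1e13
gpt4_bytes = assumed_tokens * 4
print(f"assumed GPT-4 corpus: ~{gpt4_bytes / 1e12:.0f} TB")  # ~40 TB
```

raw bytes aren't really a fair unit (video is far more redundant per byte than text), but it's striking that the two land in the same tens-of-terabytes ballpark.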
LoquaciousAntipodean OP t1_j596a7e wrote
The various attempts to raise primates as humans are a fascinating comparison that I hadn't really thought about in this context before.
AI has the potential to learn many times faster than humans, and it's very 'precocious' and 'perverted' compared to a truly naive human child. I think as much human interaction as possible is what's called for, and then once some AIs become 'veterans' that can reliably pass Turing tests and ethics tests, it might be viable to have them train each other in simulated environments to speed up the process.
I wouldn't be a bit surprised if Google (et al.) are already trying something that roughly resembles this process.