gibs t1_itnozbf wrote
Reply to comment by GeneralZain in Large Language Models Can Self-Improve by xutw21
People with aphasia / damaged language centres. Of course that doesn't preclude the possibility of there being some foundational language of thought that doesn't rely on the known structures used for (spoken/written) language. But we haven't unearthed evidence of such a thing in the history of scientific enquiry, and the chances of it existing seem vanishingly small.
gibs t1_itnnx1y wrote
Reply to comment by TFenrir in Large Language Models Can Self-Improve by xutw21
Just read the first part -- that is a super interesting approach. I'm convinced that robust continual learning is a critical component for AGI. It also reminds me of another of Lex Fridman's podcasts, where he had on a cognitive scientist (I forget who) whose main idea about human cognition was that we have a collection of mini-experts for any given cognitive task. They compete (or have their outputs summed) to give us a final answer to whatever the task is. I think the paper's approach of automatically compartmentalising knowledge into functional components is another critical part of the architecture for human-like cognition. Very very cool.
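To make the "mini-experts" idea concrete, here is a minimal sketch of a gated mixture-of-experts layer whose experts' outputs are weighted and summed into a final answer. This is only an illustration of the general concept, not the paper's method or the cognitive model from the podcast; the class name, layer sizes, and gating scheme are made up for the example.

```python
# A gated "mixture of mini-experts": several small networks each produce an
# answer, and a gate decides how much each one contributes to the final output.
# (Illustrative only -- the sizes and structure here are arbitrary.)
import torch
import torch.nn as nn


class MiniExperts(nn.Module):
    def __init__(self, dim_in: int, dim_out: int, n_experts: int = 4):
        super().__init__()
        # Each "mini-expert" is just a tiny MLP.
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim_in, 32), nn.ReLU(), nn.Linear(32, dim_out))
             for _ in range(n_experts)]
        )
        # The gate produces one weight per expert for each input.
        self.gate = nn.Linear(dim_in, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(x), dim=-1)               # (batch, n_experts)
        outputs = torch.stack([e(x) for e in self.experts], dim=1)  # (batch, n_experts, dim_out)
        # Weighted sum of the experts' outputs is the "final answer".
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)         # (batch, dim_out)


model = MiniExperts(dim_in=16, dim_out=8)
print(model(torch.randn(2, 16)).shape)  # torch.Size([2, 8])
```

The softmax gate is one simple way to model "competition" between experts; summing the gated outputs corresponds to the "outputs summed" alternative mentioned above.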
gibs t1_itnalej wrote
Reply to comment by TFenrir in Large Language Models Can Self-Improve by xutw21
I've definitely heard that idea expressed on Lex's podcast. I would say prediction is necessary but not sufficient for producing sentience -- and language models on their own are neither necessary nor sufficient. I think the kinds of higher-level thinking that we associate with sentience arise from specific architectures involving prediction networks and other functionality, which we aren't really capturing yet in the deep learning space.
gibs t1_itkwtf8 wrote
Reply to comment by billbot77 in Large Language Models Can Self-Improve by xutw21
So people who lack language cannot think?
gibs t1_itkh39a wrote
Reply to comment by 4e_65_6f in Large Language Models Can Self-Improve by xutw21
Language models do a specific thing well: they predict the next token (roughly, the next word) in a sequence. And while that's an impressive feat, it's really not at all similar to human cognition and it doesn't automatically lead to sentience. (There's a toy sketch of next-token prediction below.)
Basically, we've stumbled across this way to get a LOT of value from this one technique (next token prediction) and don't have much idea how to get the rest of the way to AGI. Some people are so impressed by the recent progress that they think AGI will just fall out as we scale up. But I think we are still very ignorant about how to engineer sentience, and the performance of language models has given us a false sense of how close we are to understanding or replicating it.
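For a concrete picture of what "next token prediction" means, here is a toy sketch that predicts the next word greedily from bigram counts. It is nothing like a real large language model internally (no neural network, no attention, a tiny made-up corpus); it only illustrates the basic objective of guessing the most likely continuation.

```python
# Toy "next token prediction": count which word follows which in a tiny corpus,
# then repeatedly pick the most frequent follower. (Nothing like a real LLM
# internally -- this only illustrates the prediction objective.)
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1


def predict_next(word: str) -> str:
    # Greedy prediction: the most frequent follower wins.
    return bigrams[word].most_common(1)[0][0]


word = "the"
generated = [word]
for _ in range(5):
    word = predict_next(word)
    generated.append(word)

print(" ".join(generated))  # -> the cat sat on the cat
```

A real model replaces the count table with a learned neural network over a huge vocabulary and corpus, but the training objective -- make the actual next token more likely -- is the same.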
gibs t1_itb8yrr wrote
Reply to comment by Ezekiel_W in 3D meat printing is coming by Shelfrock77
You already do. They catch a ride on your food. Bugs are part of your diet.
gibs t1_itpnbia wrote
Reply to comment by BinyaminDelta in Large Language Models Can Self-Improve by xutw21
I don't have one. I can't fathom what it would be like to have a constant narration of your life inside your own head. What a trip LOL.