Submitted by Neurogence t3_121zdkt in singularity
AsheyDS t1_jdov1ik wrote
Reply to comment by maskedpaki in "Non-AGI systems can possibly obsolete 80% of human jobs"-Ben Goertzel by Neurogence
Symbolic AI failed because it required people to come up with a theory of mind first and lay down the formats, functions, and rules needed to create the base knowledge and logic. What was created did have a lot of use, so I wouldn't say it amounted to nothing, but nobody found a way to make it scale, so it couldn't learn much or operate independently. On top of that, they were probably limited by hardware too. Researchers focus on ML because it's comparatively 'easy' and because it has produced results that so far keep scaling. What I suspect they'll try with LLMs is learning how they work and building structure into them after the fact, only to find that performance degrades or can't be improved significantly. In my opinion, neurosymbolic AI will be the ideal way forward to achieve AGI and ASI, especially for safety reasons: it takes the best of both symbolic and ML approaches, and each side helps compensate for the other's drawbacks.
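(Not from the thread, just to make the "best of both" idea concrete: a minimal toy sketch in which a neural scorer proposes answers and a symbolic rule layer vetoes ones that break hard constraints. The `neural_score` stub and the `RULES` table are invented for illustration; a real system would use a trained model and a proper knowledge base.)

```python
# Toy neurosymbolic sketch: learned fuzzy ranking + hard symbolic filtering.
# 'neural_score' is a stand-in for a trained network, not a real model.

RULES = {
    # symbolic knowledge: (subject, relation) -> set of allowed objects
    ("penguin", "can"): {"swim", "walk"},
    ("sparrow", "can"): {"fly", "walk"},
}

def neural_score(subject: str, relation: str, candidate: str) -> float:
    """Pretend ML component: returns a fuzzy confidence for a candidate fact."""
    # This stub just guesses 'fly' for every bird, which is exactly the kind
    # of plausible-but-wrong output the symbolic layer can catch.
    return 0.9 if candidate == "fly" else 0.6

def answer(subject: str, relation: str, candidates: list[str]) -> str | None:
    # 1) neural part ranks candidates by learned plausibility
    ranked = sorted(candidates,
                    key=lambda c: neural_score(subject, relation, c),
                    reverse=True)
    # 2) symbolic part vetoes anything that violates explicit knowledge
    allowed = RULES.get((subject, relation))
    for candidate in ranked:
        if allowed is None or candidate in allowed:
            return candidate
    return None

print(answer("penguin", "can", ["fly", "swim", "walk"]))  # -> 'swim', not 'fly'
```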
maskedpaki t1_jdoyj5e wrote
I've been hearing the neurosymbolic cheerleading for 5 years now. I remember Yoshua Bengio once debating against it, seeming dogmatic about his belief in pure learning and arguing that neurosymbolic systems won't solve the limitations deep learning has. I have yet to see any results and don't expect to see any. My guess is that transformers continue to scale for at least 5 more years, and by then we'll stop asking what paradigm shift needs to take place, because it will be obvious the current paradigm will do just fine.
Zer0D0wn83 t1_jdp68ky wrote
Exactly this. 10x the ability of GPT-4 may not be AGI, but to all but the most astute observers there will be no practical difference.
footurist t1_jdq23y4 wrote
I'm baffled neurosymbolic hasn't been attempted with a huge budget like OpenAI's. You've got these two fields: one works really precisely but breaks down at fuzziness, at scale, and when going beyond its rules; with the other you get almost exactly the opposite.
It seems like such a no-brainer to make a huge effort to combine the two at scale...