navillusr t1_j43zaqk wrote
I think this is a very common belief. Symbolic systems can do many things that neural networks struggle with, and do them very sample-efficiently. But they've failed to scale with more data as well as neural networks have for most tasks, and they're harder to train. If we could magically combine the reasoning ability of symbolic systems with the pattern recognition and generalization of neural networks, we would be getting very close to AGI imo. That said, idk much about recent research in symbolic reasoning, so my knowledge might be outdated.
Farconion t1_j44ebwm wrote
this is why neuro-symbolic computing hasn't gotten much traction, right?
currentscurrents OP t1_j44ycdz wrote
From what I've seen, it's a promising field and should be possible. But so far nobody's made it work for more than toy problems.
throwaway2676 t1_j47m2r9 wrote
> If we could magically combine the reasoning ability of symbolic systems with the pattern recognition and generalization of neural networks, we would be getting very close to AGI imo.
I must be misunderstanding your meaning, because I don't see why this is particularly difficult. Train an AI to recognize deductive/mathematical reasoning and translate it into symbolic or mathematical logic. Run an automated proof assistant or computer algebra system on the result. Use the AI to translate back into natural language. Shouldn't be much more difficult than creating code, which ChatGPT can already do, and it would instantly eliminate 95% of the goofy problems LLMs get wrong.
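To be concrete, here's a rough sketch of the kind of pipeline I have in mind. The `llm()` helper is just a placeholder for whatever model you'd actually call, and I'm using SymPy as a stand-in for the proof assistant / computer algebra system step:

```python
# Rough sketch only. llm() is a placeholder for whatever model/API you use;
# SymPy stands in for the proof assistant / computer algebra system.
import sympy as sp

def llm(prompt: str) -> str:
    """Placeholder for a call to a language model."""
    raise NotImplementedError("plug in your model of choice here")

def answer_with_symbolic_backend(question: str) -> str:
    # 1. The model translates the natural-language question into a symbolic expression.
    expr_src = llm(f"Rewrite as a single SymPy expression equal to zero: {question}")
    # 2. The symbolic engine does the actual deduction/algebra, not the model.
    solutions = sp.solve(sp.sympify(expr_src))
    # 3. The model translates the exact symbolic result back into natural language.
    return llm(f"Explain this result in plain English: {solutions}")
```

The point is that the model only handles translation in and out of natural language; the symbolic engine does the actual deduction.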
navillusr t1_j47wc3c wrote
It’s definitely a hard problem. The challenge isn’t a pipeline problem of “solve this reasoning task” where you can just take the English task -> convert to code -> run the code -> convert back to an English answer. We could probably do that with some degree of accuracy in some contexts.
The hard part is having the agent solve reasoning tasks as they appear, without prompt engineering and without being told that it’s a reasoning task. In essence, it should be able to combine reasoning and planning seamlessly with the generative side of intelligence, not just piece them together when you tell it to outsource the task to a reasoning engine (assuming it could even do that accurately).
For example, if you ask ChatGPT to play rock paper scissors where it has to choose the option that beats the option that beats the option you pick (i.e., if I pick Rock, it should pick Scissors, because scissors beats paper, which beats rock), it can’t plan that far ahead.
> Let’s play a modified version of Rock Paper Scissors, but to win, you have to pick the option that beats the option that beats the option that I pick.
> Sure, I'd be happy to play a modified version of Rock Paper Scissors with you. Please go ahead and make your selection, and I'll pick the option that beats the option that beats it.
> Rock
> In that case, I will pick paper.
Since this game requires two steps of thinking and goes against the statistically likely answer, it fails in this scenario. As you described, you could maybe write code that identifies a rock paper scissors game, generates and runs code, then answers in English, but there are many real-world tasks requiring more than one step of planning that the agent needs to be able to seamlessly identify and work through. (For the record, it also outputs incorrect Python code for this game when prompted.)
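To be clear, the two-step logic itself is trivial to write by hand. Something like this sketch (my own code, not ChatGPT's output) is all it takes:

```python
# The two-step lookup the modified game requires (my own sketch, not ChatGPT's output).
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}  # option -> what beats it

def winning_move(their_pick: str) -> str:
    """Pick the option that beats the option that beats their pick."""
    step_one = BEATS[their_pick]  # paper beats rock
    step_two = BEATS[step_one]    # scissors beats paper
    return step_two

print(winning_move("rock"))  # scissors, not the "paper" ChatGPT answered
```

The hard part isn't this code, it's getting the model to recognize on its own that the question calls for that extra step.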
I don’t do research in this specific area, so again I could be off base here, but I think that’s why it’s harder than you’re imagining.
Fwiw, there was a recent paper (the method was called Mind’s Eye) where they used an LLM to generate physics simulator code to answer physics questions, similar to what you described.
jokerinndisguise t1_j44l2dt wrote
i guess hybrid AI has a chance though