Accomplished-Bill-45 OP t1_iz8tp4b wrote
Reply to comment by hayAbhay in Are currently state of art model for logical/common-sense reasoning all based on NLP(LLM)? by Accomplished-Bill-45
So I just found out that people tend to categorize reasoning into:
- logical reasoning
- common-sense reasoning
- knowledge-based reasoning
- social reasoning
- psychological reasoning
- qualitative reasoning (e.g. solving some math problems)
So do you mean that if someone needs to build a generalized model that can do all of the above without task-specific fine-tuning, an LLM might be the most straightforward way? We can expect it to do some simple reasoning, like GPT does.
But for further improvement, can we use GPT as a pre-trained model and train an additional domain-specific model (most likely using symbolic representations) on top of it?
But can symbolic AI alone perform all of the above kinds of reasoning? Can graphical models (which my intuition tells me are in some way a representation of a logical thought process) be incorporated into a symbolic representation?
hayAbhay t1_iz8xbk0 wrote
I'm not entirely sure what those different categorizations entail, but they really seem to be applications of reasoning. At its core, everything we do is based on logical reasoning. There are paradoxes, but it's the best we have. Within this, there are three core categories:
- Deductive reasoning - This is the core of how we reason. If we know "If A, then B" as a "rule" and we observe "A", then it follows that B is definitely true. Premise: A, A => B; Conclusion: B.
- Inductive reasoning - This is coming up with the rule itself from observations, i.e. if you observe many different instances (you notice the grass gets wet after it rains, every time), you conclude that "if it rains, then the grass gets wet", or "A => B".
- Abductive reasoning - This is a sort of reverse reasoning where you observe something and "hypothesize" its cause. This is inherently uncertain and makes a lot of assumptions (a closed world). So here, Premise: A => B, B; Conclusion: A? (yes if the world is closed and no other rule entails B, uncertain otherwise).
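To make the three modes concrete, here's a toy sketch of my own (the rain/grass example as one-premise rules; nothing like a real reasoner, just the shape of each mode):

```python
# Rules map a premise tuple to a conclusion, e.g. rain => wet_grass.
rules = {("rain",): "wet_grass"}

def deduce(facts, rules):
    """Deductive: apply known rules to known facts until nothing new follows."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules.items():
            if all(p in derived for p in premise) and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def induce(observations):
    """Inductive: propose 'A => B' when B followed A in every observed pair."""
    return {(a,): b for a, b in set(observations)
            if all(b2 == b for a2, b2 in observations if a2 == a)}

def abduce(observation, rules):
    """Abductive: hypothesize premises whose rules entail the observation."""
    return [premise for premise, conclusion in rules.items()
            if conclusion == observation]

print(deduce({"rain"}, rules))                              # {'rain', 'wet_grass'}
print(induce([("rain", "wet_grass"), ("rain", "wet_grass")]))
print(abduce("wet_grass", rules))  # [('rain',)] - a hypothesis, not a certainty
```

Note how `abduce` only returns candidate causes: without the closed-world assumption, the grass could be wet for some reason no rule covers.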
There are several variations of these as well. Everything you mentioned is really an application of these. Natural language is inherently uncertain, and so is reality itself! The closest any natural language comes to capturing logic is legal documents (and we know the semantic games that happen there :) )
In terms of AI, logic-based systems got pretty popular in the 80s, but they're very brittle given our reality, though they do have their place. This is the knowledge-based/logical reasoning you mentioned. Knowledge bases are simply a format in which "knowledge" - in other words, some textual representation of real-world concepts - lives, with a structure that you can apply logic-based rules over.
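A minimal sketch of what "structure you can apply rules over" means (the facts and the grandparent rule are made up for illustration; real knowledge bases use proper rule engines and query languages):

```python
# Hypothetical mini knowledge base: facts as (subject, relation, object) triples.
facts = {("alice", "parent_of", "bob"), ("bob", "parent_of", "carol")}

def apply_grandparent_rule(facts):
    # Rule: parent_of(X, Y) and parent_of(Y, Z)  =>  grandparent_of(X, Z)
    new = set()
    for (x, r1, y) in facts:
        for (y2, r2, z) in facts:
            if r1 == r2 == "parent_of" and y == y2:
                new.add((x, "grandparent_of", z))
    return facts | new

print(apply_grandparent_rule(facts))
# derives ('alice', 'grandparent_of', 'carol') alongside the original facts
```

The brittleness shows up exactly here: the rule fires only on facts that are stated in exactly the right form, with no notion of uncertainty or exceptions.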
With LLMs, they're probabilistic in a weird sort of way. Their optimization task is largely predicting the next word, essentially modeling the language underneath (which is inherently filled with ambiguities). Given large repetitions in text, they can easily do what appears to be reasoning, largely from high-probability occurrences. But they won't be able to, say, systematically pick concepts and trace through the reasoning like a human can. However, their biggest advantage is general utility. They can, as a single algorithm, solve a wide range of problems that would otherwise require a lot of bespoke systems. And LLMs over the past 5-6 years have consistently hammered bespoke, special-purpose systems built from scratch. After all, for a human to apply crisp reasoning, they need some language :)
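The "predicting the next word" objective can be illustrated with a crude bigram counter over a toy corpus (my own toy, nothing like an actual LLM, which conditions a neural network on far richer context - but the objective has the same shape):

```python
from collections import Counter, defaultdict

corpus = "if it rains the grass gets wet . if it rains the street gets wet .".split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Distribution over the next word, from counts alone."""
    counts = bigrams[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(predict("gets"))  # {'wet': 1.0}
print(predict("the"))   # {'grass': 0.5, 'street': 0.5}
```

The ambiguity after "the" is exactly the kind of uncertainty the model absorbs from text rather than resolving by tracing a chain of reasoning.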
If you're curious, look up "Markov Logic Networks". It's from Pedro Domingos (his book "The Master Algorithm" is also worth a read), and it tried to tie logic and probability together, but it involved an intense expectation-maximization over a combinatorial explosion. Also, check out Yann LeCun's talk at Berkeley last month (he shared some of that at NeurIPS, from what I heard).
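The core MLN idea fits in a few lines: attach weights to logical formulas, and make a world's probability proportional to exp(sum of weights of the formulas it satisfies). A tiny sketch over two booleans (the formulas and weights here are hand-picked for illustration, not from any real model):

```python
import itertools, math

# Weighted formulas over (rain, wet). Higher weight = stronger constraint,
# but unlike pure logic, violating a formula is improbable, not impossible.
formulas = [
    (1.5, lambda rain, wet: (not rain) or wet),  # soft rule: rain => wet
    (0.5, lambda rain, wet: not rain),           # weak prior against rain
]

def unnormalized(world):
    return math.exp(sum(w for w, f in formulas if f(*world)))

worlds = list(itertools.product([False, True], repeat=2))
z = sum(unnormalized(w) for w in worlds)  # partition function

for rain, wet in worlds:
    print(f"rain={rain!s:5} wet={wet!s:5} P={unnormalized((rain, wet)) / z:.3f}")
```

The world that violates "rain => wet" gets the lowest probability instead of being ruled out, which is the whole point. The combinatorial explosion is also visible: the world enumeration is exponential in the number of variables, which is why learning and inference in MLNs get expensive fast.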