Submitted by Accomplished-Bill-45 t3_zen8l4 in deeplearning
Not very familiar with NLP, but I'm playing around with OpenAI's ChatGPT; I'm particularly impressed by its reasoning and its thought process.
Are all good reasoning models currently derived from NLP (LLM) models with an RL training method?
What are some papers/research teams to read/follow to understand this area better and stay updated?
As for ChatGPT, I've tested it with the following cases:
Social reasoning (it does a good job; for example: I'm going to attend a meeting tonight. I have a suit, but it's dirty and the size doesn't fit. The other option is to just wear underwear, which is clean and fits. Which one should I wear to the meeting?)
Psychological reasoning (it did a bad job. I asked it to infer someone's intentions given their behavior, expressions, speech, etc.)
Solving math questions (it's okay, better than Minerva)
Asking LSAT logic game questions (it gives its thought process, but fails to give correct answers)
I also wrote a short mystery story (about 200 words, with context) and asked if it could tell whether the victim was murdered or committed suicide; and if murdered, whether the victim knew the killer, etc. It actually did an okay job on this one when the context was given clearly enough that anyone could deduce the conclusion using common sense.
hayAbhay t1_iz7vok7 wrote
It's important to note here that LLMs are NOT very good at reasoning, but they are perhaps the best when you consider a "generic" algorithm, i.e., one without a lot of domain-specific work.
For logical reasoning, you'll usually need to resort to symbolic representations underneath and apply the rules of logic. ChatGPT may appear to do that well, especially with first- and even second-order inference, but longer chains will make it stumble.
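To make "symbolic representations plus rules of logic" concrete, here's a minimal forward-chaining sketch over propositional facts. The facts and rules are invented for illustration, not taken from the thread; real symbolic reasoners (Prolog, Z3, etc.) are far more capable, but the mechanical rule application is the point, in contrast to an LLM's statistical text generation:

```python
# Minimal sketch of symbolic logical reasoning via forward chaining.
# Facts are strings; each rule is (set_of_premises, conclusion).
# All fact/rule names below are hypothetical examples.

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)  # rule fires: derive the conclusion
                changed = True
    return facts

facts = {"raining", "outside"}
rules = [
    ({"raining", "outside"}, "wet"),  # raining AND outside -> wet
    ({"wet"}, "cold"),                # wet -> cold (a second chain step)
]

derived = forward_chain(facts, rules)
print("cold" in derived)  # True — reached via a two-step chain
```

Unlike an LLM, a procedure like this is guaranteed to follow arbitrarily long inference chains correctly, which is exactly where the comment says ChatGPT stumbles.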