Submitted by starstruckmon t3_10qhgmv in MachineLearning

Paper : https://arxiv.org/abs/2301.13379

Abstract :

>While Chain-of-Thought (CoT) prompting boosts Language Models' (LM) performance on a gamut of complex reasoning tasks, the generated reasoning chain does not necessarily reflect how the model arrives at the answer (aka. faithfulness). We propose Faithful CoT, a faithful-by-construction framework that decomposes a reasoning task into two stages: Translation (Natural Language query → symbolic reasoning chain) and Problem Solving (reasoning chain → answer), using an LM and a deterministic solver respectively. We demonstrate the efficacy of our approach on 10 reasoning datasets from 4 diverse domains. It outperforms traditional CoT prompting on 9 out of the 10 datasets, with an average accuracy gain of 4.4 on Math Word Problems, 1.9 on Planning, 4.0 on Multi-hop Question Answering (QA), and 18.1 on Logical Inference, under greedy decoding. Together with self-consistency decoding, we achieve new state-of-the-art few-shot performance on 7 out of the 10 datasets, showing a strong synergy between faithfulness and accuracy.
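
To make the two stages concrete, here is a minimal sketch of the faithful-by-construction idea, assuming the symbolic reasoning chain is Python code and the deterministic solver is simply the Python interpreter (the paper also uses other solvers, e.g. for planning and logical inference). The `translate` function below is a stand-in for the few-shot prompted LM, not the paper's actual prompts or code:

```python
# Minimal sketch of a faithful-by-construction two-stage pipeline.
# `translate` stands in for a few-shot prompted LM call (no real API here);
# the "solver" is just the Python interpreter executing the generated chain.

def translate(question: str) -> str:
    """Stage 1 (Translation): an LM would map the natural-language query to a
    symbolic reasoning chain. Here we hard-code one example for illustration."""
    return (
        "# Olivia has $23 and buys 5 bagels at $3 each. How much is left?\n"
        "money_initial = 23\n"
        "bagels = 5\n"
        "bagel_cost = 3\n"
        "money_spent = bagels * bagel_cost\n"
        "answer = money_initial - money_spent\n"
    )

def solve(reasoning_chain: str):
    """Stage 2 (Problem Solving): a deterministic solver derives the answer.
    For Python-style chains, that solver is the interpreter itself."""
    scope: dict = {}
    exec(reasoning_chain, scope)  # deterministic: no LM involved in this stage
    return scope["answer"]

if __name__ == "__main__":
    question = "Olivia has $23 and buys 5 bagels at $3 each. How much money is left?"
    print(solve(translate(question)))  # -> 8
```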

122

Comments

RandomCandor t1_j6qr7t0 wrote

Man, I feel like we're living through the beginning of an AI arms race.

What a time to be alive! (Like one of my favorite YouTubers would say.)

35

Nhabls t1_j6sbq3i wrote

The arms race has been going for over a decade now...

5

mlresearchoor t1_j6r93hm wrote

we got front-row seats to this race and a chance to participate, +1 great time to be alive

3

mlresearchoor t1_j6r8x7y wrote

Nice find! It would also be helpful to compare with the similar papers from 2022 that this paper cites but does not compare against in its results section:

("We note that our work is concurrent with Chen et al. (2022) and Gao et al. (2022), both generating the reasoning chain in Python code and calling a Python interpreter to derive the answer. While we do not compare with them empirically since they are not yet published...")

Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks (Chen)
https://arxiv.org/abs/2211.12588

PAL: Program-aided Language Models (Gao)
https://arxiv.org/abs/2211.10435

20

LetterRip t1_j6shnin wrote

The prompts in those two papers are so specific to the datasets that they don't seem very useful. We'll have to wait for the code to see whether FCoT is a similar case or not.

2

axm92 t1_j6tw995 wrote

Thanks! Can you please clarify what you mean by the prompts being specific to the datasets for PaL?

As an example, for the ~10 math reasoning datasets used in PaL, identical prompts were used (same prompt for all datasets, without changing anything). The prompts/code are also open-sourced at https://reasonwithpal.com/ if you want to check it out!

Incidentally, the idea that Python programs lead to faithful reasoning chains was used in PaL to create a new split of GSM, called GSM-hard. GSM-hard is available on huggingface.
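
For context on GSM-hard: because the reasoning chain is an executable program, a harder variant of a problem can be produced by swapping in a much larger number and re-running the program to get the new gold answer. A rough sketch of that idea (illustrative names and substitution logic, not PaL's actual code):

```python
# Hedged sketch of the GSM-hard construction idea: since the reasoning chain
# is executable Python, replacing a small literal with a huge one and
# re-executing yields the new gold answer automatically.

ORIGINAL_CHAIN = """
toys_per_box = 8
boxes = 4
answer = toys_per_box * boxes
"""

def rederive(chain: str, old_value: int, new_value: int) -> int:
    """Substitute one literal in the program and execute it to get the new answer."""
    harder_chain = chain.replace(str(old_value), str(new_value), 1)
    scope: dict = {}
    exec(harder_chain, scope)
    return scope["answer"]

print(rederive(ORIGINAL_CHAIN, old_value=8, new_value=8_000_000))  # -> 32000000
```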

(I'm a co-author of the PaL paper.)

2

LetterRip t1_j6u7cu9 wrote

In my view, a prompt like "Let's think things through step by step" is extremely generic and requires no knowledge specific to the upcoming questions.

I was basing my comment mostly on the contents of this folder:

https://github.com/reasoning-machines/pal/tree/main/pal/prompt

Each of the prompts seems to require extensive knowledge of the test set to have been formulated.

This seems more akin to Watson, where the computer scientists analyzed the form of a variety of questions and wrote programs for each type of question.

1

axm92 t1_j6uf2a7 wrote

Ah, I see, thanks for clarifying. I see your point, but I wouldn't say that the prompts require extensive knowledge of the test set. After all:

> As an example, for the ~10 math reasoning datasets used in PaL, identical prompts were used (same prompt for all datasets, without changing anything).

Notably, take a look at the section on GSM-hard (4.1). You may also enjoy the analysis in the new version of the paper (Section 6: https://arxiv.org/pdf/2211.10435.pdf).

Further, "Let's think step by step" is outperformed by "Write Python code to solve this." We'll add the numbers in the next version, but if you are interested please lmk and I can share the results earlier.

Thanks again for reading our work and sharing your feedback, I really appreciate it.

1

LetterRip t1_j6uj087 wrote

> Further, "Let's think step by step" is outperformed by "Write Python code to solve this."

Interesting, I was just wondering while reading that paper how well that would work compared to the n-shot prompts.

> Ah I see, thanks for clarifying. I see your point, but I wouldn't say that the prompts require an extensive knowledge of the test set. After all:

>> As an example, for the ~10 math reasoning datasets used in PaL, identical prompts were used (same prompt for all datasets, without changing anything).

That's fair. My thoughts were mostly directed at the items in "Table 2: Solve rate on three symbolic reasoning datasets and two algorithmic datasets". I think you could be right that my comments don't apply to the results in Figure 5 (GSM8K, GSM-HARD, SVAMP, ASDIV, SINGLEEQ, SINGLEOP, ADDSUB, MULTIARITH).

I'd be curious how well the "Write Python code to solve this" prompt performs in and of itself vs. the "Let's think things through step by step" prompt.

1

oscineStyron415 t1_j6titmk wrote

Was a good read. Lots of big movement these past few months.

12

Hyper1on t1_j6xm8m9 wrote

This is a fine approach, but it's not necessarily chain of thought if you move the actual problem solving outside of the LM. The entire point of Chain of Thought as originally conceived is that it's a better way of doing within-model problem solving. I would be interested to see the result if you were to finetune the LM on a dataset of reasoning chains produced by this approach, however.

2