title
stringlengths 10
147
| abstract
stringlengths 3
2.41k
| tldr_text
stringlengths 96
425
⌀ | content_markdown
stringlengths 0
464k
⌀ | authors
sequencelengths 1
41
⌀ | date
timestamp[ms] | publish_info
stringclasses 111
values | publish_is_top
bool 2
classes | citation_count
uint32 0
1k
| citation_count_filtered_math_and_top_conf
uint32 0
127
| theorem_provers
sequencelengths 1
4
⌀ | url
stringlengths 31
152
⌀ | arxiv_url
stringlengths 32
32
⌀ | semantics_scholar_url
stringlengths 78
78
⌀ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Evaluating Robustness of Reward Models for Mathematical Reasoning | Reward models are key in reinforcement learning from human feedback (RLHF) systems, aligning the model behavior with human preferences. Particularly in the math domain, there have been plenty of studies using reward models to align policies for improving reasoning capabilities. Recently, as the importance of reward models has been emphasized, RewardBench is proposed to understand their behavior. However, we figure out that the math subset of RewardBench has different representations between chosen and rejected completions, and relies on a single comparison, which may lead to unreliable results as it only see an isolated case. Therefore, it fails to accurately present the robustness of reward models, leading to a misunderstanding of its performance and potentially resulting in reward hacking. In this work, we introduce a new design for reliable evaluation of reward models, and to validate this, we construct RewardMATH, a benchmark that effectively represents the robustness of reward models in mathematical reasoning tasks. We demonstrate that the scores on RewardMATH strongly correlate with the results of optimized policy and effectively estimate reward overoptimization, whereas the existing benchmark shows almost no correlation. The results underscore the potential of our design to enhance the reliability of evaluation, and represent the robustness of reward model. We make our code and data publicly available. | null | [
"Sunghwan, Kim",
"Dongjin, Kang",
"Taeyoon, Kwon",
"Jungsoo, Won",
"Hyungjoo, Chae",
"Dongha, Lee",
"Jinyoung, Yeo"
] | 2024-10-02T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2410.01729 | https://arxiv.org/abs/2410.01729 | https://www.semanticscholar.org/paper/6a686fb1a1f30d01eedf7cbf6f050041da556a66 |
|
Expediting and Elevating Large Language Model Reasoning via Hidden Chain-of-Thought Decoding | Large language models (LLMs) have demonstrated remarkable capabilities in tasks requiring reasoning and multi-step problem-solving through the use of chain-of-thought (CoT) prompting. However, generating the full CoT process results in significantly longer output sequences, leading to increased computational costs and latency during inference. To address this challenge, we propose a novel approach to compress the CoT process through semantic alignment, enabling more efficient decoding while preserving the benefits of CoT reasoning. Our method introduces an auxiliary CoT model that learns to generate and compress the full thought process into a compact special token representation semantically aligned with the original CoT output. This compressed representation is then integrated into the input of the Hidden Chain-of-Thought (HCoT) model. The training process follows a two-stage procedure: First, the CoT model is optimized to generate the compressed token representations aligned with the ground-truth CoT outputs using a contrastive loss. Subsequently, with the CoT model parameters frozen, the HCoT model is fine-tuned to generate accurate subsequent predictions conditioned on the prefix instruction and the compressed CoT representations from the CoT model. Extensive experiments across three challenging domains - mathematical reasoning, agent invocation, and question answering - demonstrate that our semantic compression approach achieves competitive or improved performance compared to the full CoT baseline, while providing significant speedups of at least 1.5x in decoding time. Moreover, incorporating contrastive learning objectives further enhances the quality of the compressed representations, leading to better CoT prompting and improved task accuracy. Our work paves the way for more efficient exploitation of multi-step reasoning capabilities in LLMs across a wide range of applications. | This work proposes a novel approach to compress the CoT process through semantic alignment, enabling more efficient decoding while preserving the benefits of CoT reasoning and paves the way for more efficient exploitation of multi-step reasoning capabilities in LLMs across a wide range of applications. | [
"Zitao, Liu",
"Zui, Chen",
"Mi, Tian",
"Weiqi, Luo",
"Tianqiao, Liu"
] | 2024-09-13T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.08561 | https://arxiv.org/abs/2409.08561 | https://www.semanticscholar.org/paper/3653a6df26c6003931d87b30bb3d8258d331faca |
|
Exploring Mathematical Extrapolation of Large Language Models with Synthetic Data | While large language models (LLMs) have shown excellent capabilities in language understanding, text generation and many other tasks, they still struggle in complex multi-step reasoning problems such as mathematical reasoning. In this paper, through a newly proposed arithmetical puzzle problem, we show that the model can perform well on multi-step reasoning tasks via fine tuning on high-quality synthetic data. Experiments with the open-llama-3B model on three different test datasets show that not only the model can reach a zero-shot pass@1 at 0.44 on the in-domain dataset, it also demonstrates certain generalization capabilities on the out-of-domain datasets. Specifically, this paper has designed two out-of-domain datasets in the form of extending the numerical range and the composing components of the arithmetical puzzle problem separately. The fine-tuned model have shown encouraging performance on these two far more difficult tasks with the zero-shot pass@1 at 0.33 and 0.35 correspondingly. | This paper has designed two out-of-domain datasets in the form of extending the numerical range and the composing components of the arithmetical puzzle problem separately to show that the open-llama-3B model can perform well on multi-step reasoning tasks via fine-tuning on high-quality synthetic data. | #### Exploring Mathematical Extrapolation of Large Language Models with Synthetic Data
**Haolong Li[*]**
Tongji Universiy
[email protected]
**Yu Ma**
Seed Foundation, ByteDance
[email protected]
**Yinqi Zhang[∗]**
East China Normal University
[email protected]
**Jie Chen[†‡]**
Seed Foundation, ByteDance
[email protected]
Trinh et al., 2024). This comes from three main reasons: firstly, mathematical reasoning often requires
quantitative multiple steps of deduction, since a single logical error is enough to derail a much larger
solution (Lightman et al., 2023). Secondly, the lack
of high-quality data limits LLMs’ ability to generalize and excel in mathematical reasoning tasks.
Lastly, LLMs encounter difficulty in extrapolation,
as they struggle to apply reasoning skills when
solving unseen mathematical problems.
Many prior research has explored along these
challenges. GPT-4 (Achiam et al., 2023), LLaMA
(Touvron et al., 2023a,b), Gemini (Team et al.,
2023), Minerva (Lewkowycz et al., 2022), Llemma
(Azerbayev et al., 2023), Mistral (Jiang et al.,
2023), WizardMath (Luo et al., 2023), MAMMOTH (Yue et al., 2023), ToRA (Gou et al., 2023)
and Deepseek (Bi et al., 2024; Guo et al., 2024;
Lu et al., 2024) have emerged as dominant models in popular mathematical reasoning benchmarks
such as GSM8K (Cobbe et al., 2021), MATH
(Hendrycks et al., 2021), CMATH (Wei et al., 2023)
and AGIEval (Zhong et al., 2023). Moreover, process supervision and verifiers (Cobbe et al., 2021;
Li et al., 2023; Uesato et al., 2022; Lightman et al.,
2023; Yu et al., 2023) at the step level have also obtained widespread attention. However, mathematical extrapolation, particularly in terms of abstract
forms, is often overlooked.
In this paper, we address the aforementioned
challenges by introducing a novel and challenging
arithmetical puzzle problem and making an initial
attempt to solve them. Specifically, we propose a
puzzle that needs multi-step calculations to generate a correct solution. Meanwhile, a data synthesis
pipeline is developed to automatically generate a
vast amount of high-quality data for supervised
fine-tuning (SFT). And a series of LLMs based
on open-llama-3B (Touvron et al., 2023a) are finetuned on this synthetic dataset. Furthermore, to
demonstrate the reasoning abilities in extrapolation,
**Chen Ye[†]**
ESSC Lab, Tongji Universiy
[email protected]
**Abstract**
Large Language Models (LLMs) have shown
excellent performance in language understanding, text generation, code synthesis, and many
other tasks, while they still struggle in complex
multi-step reasoning problems, such as mathematical reasoning. In this paper, through a
newly proposed arithmetical puzzle problem,
we show that the model can perform well on
multi-step reasoning tasks via fine-tuning on
high-quality synthetic data. Experimental results with the open-llama-3B model on three
different test datasets show that not only the
model can reach a zero-shot pass@1 at 0.44
on the in-domain dataset, it also demonstrates
certain generalization capabilities on the outof-domain datasets. Specifically, this paper
has designed two out-of-domain datasets in the
form of extending the numerical range and the
composing components of the arithmetical puzzle problem separately. The fine-tuned models
have shown encouraging performance on these
two far more difficult tasks with the zero-shot
pass@1 at 0.33 and 0.35, respectively.
**1** **Introduction**
Large Language Models (LLMs), as zero-shot and
multi-task learners, have shown extraordinary capabilities across a variety of natural language tasks
(Vaswani et al., 2017; Schulman et al., 2017; Radford et al., 2019; Ziegler et al., 2019; Brown et al.,
2020; Kojima et al., 2022; Park et al., 2023; Chowdhery et al., 2023; Rafailov et al., 2024). However,
even the most advanced LLMs face challenges
when it comes to tackling complex multi-step reasoning problems, such as mathematical and scientific reasoning (Koncel-Kedziorski et al., 2016;
Cobbe et al., 2021; Hendrycks et al., 2021; Wei
et al., 2022; Chen et al., 2022; Gao et al., 2023;
*Work done during internship at ByteDance.
†Corresponding Author
‡Project Leader
-----
**Example of the Synthetic Data**
**—prompt—**
34, 18, 31, 41, 19, 55: -110
**—response—**
31-34=-3, 19+41=60, 60/-3=-20, -20/18=-2,
-2*55=-110
Table 1: Example of our synthetic data.
we have designed two out-of-domain benchmarks
in the form of extending the numerical range and
the composing components of the arithmetical puzzle problem. For the purpose of fair evaluation,
we have restricted our models to greedy sampling
in a zero-shot setting and provided a corresponding verifier. Our data scaling experiments demonstrate that as the amount of synthetic data grows,
in-domain zero-shot pass@1 increases from 0.22
to 0.44, while the out-of-domain zero-shot pass@1
increases from 0.14/0.17 to 0.33/0.35.
Our major contributions can be concluded as:
(1) We propose a novel arithmetical puzzle problem with corresponding data synthesis pipeline and
out-of-domain benchmarks, to verify the multi-step
reasoning and extrapolation capabilities of LLMs
fine-tuned on synthetic data. (2) Experiments indicate that increasing the amount of high-quality
synthetic data leads to performance enhancements
across in-domain and out-of-domain datasets. (3)
A comprehensive case study has been performed.
**2** **Problem Definition**
**2.1** **Arithmetical Puzzle Problem**
Arithmetical puzzle problem denotes a mathematical puzzle involving arithmetic operations and requires logical reasoning and numerical manipulation to derive a solution. The 24 Puzzle and Arithmetic Grid Puzzle are well-known examples of
arithmetical puzzle problems.
In this paper, we propose a challenging arithmetical puzzle. Its objective is intricate yet precise: to deftly manipulate a set of given integers
through a calculated sequence of arithmetic operations, to achieve a predetermined target integer.
The problem strictly limits each integer to be used
by one time exactly. For example, for the integers
3, 6, 7, 51, 58 and the target integer 4, one possible
solution is: 58−51 = 7, 6−7 = −1, 3×−1 = −3,
_−3+7 = 4, as shown in Figure 5 in Appendix A.4._
**Algorithm 1 Data Synthesis Algorithm**
1: Sdataset starts with an empty set
3:2: whileSample sizeSXdataseti 1 ≤ _sizei_ _N, Xlimit doi_ U(1, V )
_{_ _|_ _≤_ _≤_ _∼_ _}_
4: _L starts with an empty list_
5: _S_ _Xi_
_←{_ _}_
6: **for i = 1 to N −** 1 do
7: Randomly select ai, bi _S_
_∈_
8: Randomly select opsi +, _,_ _,_
_∈{_ _−_ _×_ _÷}_
9: _ci_ _ai_ _opsi_ _bi_
_←_
10: _S_ _S_ _ai_ _bi_
_←_ _−{_ _} −{_ _}_
11: _S_ _S_ _ci_
_←_ _∪{_ _}_
12: _L_ _L +_ _ai.opsi.bi, ci_
_←_ _{_ _}_
13: **end for**
14: _T ←_ _cN_ _−1_
15: **if {L, T** _} /∈_ _Sdataset then_
16: _Sdataset_ _Sdataset_ _L, T_
_←_ _∪{_ _}_
17: **end if**
18: end while
**2.2** **Data Synthesizing**
Given the arithmetical puzzle described above in
Section 2.1, we create a data synthesizing pipeline
to efficiently generate the proposed dataset.
Denote the set of candidate integers as X =
_X1, X2, . . ., XN_ and the target number as T,
_{_ _}_
where N is the total number of candidate integers
in a puzzle sample. Each candidate integer Xi is
independently sampled from a uniform distribution
_Xi_ U(1, V ), where V is the upper bound of
_∼_
the sampled integers. To avoid data overlapping,
we have strictly ensured that for each puzzle, the
candidate integers are a set of distinct numbers.
The arithmetic operators involved in this problem
are ops = {+, −, ×, ÷} and all operations are
limited to integer operations. For example, when
solving the puzzle with a division operator, the operation should be considered in integer division like
14/3 = 4. The detailed steps of synthesizing data
for this puzzle is described in Algorithm 1.
Besides, to construct the SFT dataset, the prompt
is deliberately designed to excludes any natural language cues and instead focuses on purely symbolic
language. See Table 1 for an example of the constructed prompt and response.
**2.3** **Dataset**
We split the dataset into training and in-distribution
and out-of-distribution test dataset by controlling
the total number of candidate integers N and the
upper bound of the sampled integers V . We set
-----
**Algorithm 2 Verifier Algorithm**
1: _Xi_ 1 _i_ _N_ _Xprompt_
_{_ _|_ _≤_ _≤_ _} ←_
2: T _Tprompt_
_←_
3: Eqs _Solutiongenerated_
_←_
4: S _Xi_
_←{_ _}_
5: Flagverifier _False_
_←_
6: for eqi ∈ _Eqs do_
7: **if eqi is a legel equation then**
8: _ai, opsi, bi, ci_ _ParseEq(eqi)_
_←_
9: **if ai, bi ∈** _S then_
10: _S_ _S_ _ai_ _bi_
_←_ _−{_ _} −{_ _}_
11: _S_ _S_ _ci_
_←_ _∪{_ _}_
12: **else**
extrapolation of abstract forms can be achieved
by changing the number of candidate integers N .
Clearly, when N increases, the exploration space
leading to a feasible solution would expand exponentially, which results in an increased demand for
precise reasoning steps. From another perspective,
when the total number of the candidate integers
changes, it actually requires the model’s ability to
absorb and adapt to the puzzle’s abstract forms.
Therefore, to test the model’s generalization capability from this point of view, we create another
benchmark for OOD test dataset with 5000 samples
generated with setting N to 8. To control variables,
all the candidate integers in this dataset are sampled
with the same upper bound V = 60 as the training
dataset.
**3** **Model**
**3.1** **Framework**
We adopt the llama architecture (Touvron et al.,
2023a) and employ low-rank adaptation (LoRA)
tuning (Hu et al., 2021) based on the implementation of TRL full stack library (von Werra et al.,
2020). LoRA achieves a remarkable reduction of
89% in our trainable parameters, from 3B to 0.3B.
**3.2** **Implementation Details**
We train our model by fine-tuning open-llama-3B.
We systematically apply left-padding to the query
text and right-padding to the answer text to control
the overall context length. All experiments are
conducted with 8× NVIDIA A100-SXM4-80GB
GPUs. The specific hyperparameter settings are
listed in Table 3 in Appendix A.1.
**4** **Experiments**
**4.1** **Evaluation**
For the fine-tuned model, we use the greedy decoding strategy in a zero-shot setting to generate
responses. To measure the model’s performance
on the proposed puzzle, a corresponding verifier is
designed to automatically evaluate the correctness
of the responses. Specifically, a solution is deemed
correct if it satisfies the following rules:
- No extra or illegal characters.
- There are only N − 1 equations and all the
corresponding calculations are correct.
- F (X1, ..., _XN_ _ops) = T_ .
_|_
- All _Xi_ _i_ 1, 2, . . ., N and the interme_{_ _|_ _∈{_ _}}_
diate calculation results are only used once.
13: _break_
14: **end if**
15: **else**
16: _break_
17: **end if**
18: end for
19: if cN 1 = T then
_−_
20: _Flagverifier_ _True_
_←_
21: end if
_V = 60 for the training dataset, and sampled the_
candidate integers with N = 5, 6, 7. Three training
datasets with different sizes scaling from 1 million to 10 millions and 100 millions are generated.
And another 7500 samples (2500 samples for each
_N_ ) under the same setting are generated as the
in-distribution test dataset. Figure. 1 shows the
distribution of N and X in these three training
datasets. And the corresponding distribution of the
tokenized prompt and response length is shown in
Figure. 2.
To further evaluate the model’s performance on
extrapolation, we have also designed two benchmarks of out-of-distribution dataset:
**Numerical OOD test datasets.** The upper
bound of the sampled integers V is raised to 100
and 1000 separately to test the model’s generalization ability with unseen larger numbers. Specifically, 6000 samples are generated for each value
of V with 2000 samples for each N . An additional
filtering pipeline is applied to ensure that for each
sample, there exists at least one integer Xi that satisfies 60 < Xi < 100 for the dataset with V = 100
and 100 < Xi < 1000 for that with V = 1000.
**Form OOD test dataset. In mathematics, ab-**
stract forms often extend, such as expanding from
a two-variable linear equation to one with three
variables. For the proposed arithmetic puzzle, the
-----
Figure 1: Distributions of N and X for different training set sizes (1M / 10M / 100M samples). N denotes the total
number of candidate integers of our puzzle, X = (X1, X2, . . ., XN ) denotes the candidate integers.
Figure 2: Distributions of the tokenized prompt and response lengths for different training set sizes (1M / 10M /
100M samples).
The detailed steps of evaluating the solution for this
puzzle is described in Algorithm 2.
**4.2** **Results**
As mentioned in Section 2.3, we have generated
three training datasets with different sizes to explore the data scaling effects on the fine-tuned
model. The pass@1 rate on different in-distribution
and out-of-distribution test datasets are shown in
Table 2. When the model is fine-tuned with 100M
samples, it achieves the highest score with a zeroshot pass@1 of 0.44 in the in-distribution test
dataset, and 0.33 and 0.35 in the two OOD datasets,
respectively.
Furthermore, we have shown the training curves
of the model fine-tuned on these three datasets in
Figure 3. From Figure 3, a faster decaying rate is
clearly observed in the training loss when increasing the training data size, which is consistent with
the rapid increase of the pass@1 rate evaluated on
the in-distribution dataset. The same enhancement
of the performance also occurs in the two OOD test
datasets as shown in Table 2.
Additionally, we have also conducted tests of
this puzzle on the base model (open-llama-3B) and
several other open-source and closed-source models with both few-shot and CoT prompting. The
results and some of the generated cases are shown
in Appendix A.2, demonstrating the necessity of
fine-tuning with regard to solving such puzzle problems.
**4.3** **Case Studies**
We further demonstrate the different solutions provided by models trained with 1M / 10M / 100M
training data on the form OOD test dataset for several challenging queries. As shown in Figure 4
in Appendix A.3, the model trained on 1M sam
-----
Figure 3: The training loss and zero-shot pass@1 on ID dataset for different training set sizes (1M / 10M / 100M
samples).
|Dataset|Range|Number of Integers|Fine-tuned on 1M|Fine-tuned on 10M|Fine-tuned on 100M|
|---|---|---|---|---|---|
|ID|[1,60]|5 6 7|0.224 0.208 0.205|0.428 0.363 0.360|0.471 0.432 0.425|
|Total ID|[1,60]|5,6,7|0.216|0.383|0.443|
|Numerical OOD|[1,100]|5 6 7|0.163 0.137 0.126|0.239 0.199 0.186|0.364 0.331 0.315|
|Total Numerical OOD|[1,100]|5,6,7|0.141|0.205|0.326|
|Numerical OOD|[1,1000]|5 6 7|0.131 0.030 0.111|0.181 0.051 0.163|0.229 0.063 0.220|
|Total Numerical OOD|[1,1000]|5,6,7|0.091|0.132|0.170|
|Form OOD|[1,60]|8|0.169|0.231|0.352|
Table 2: Zero-shot pass@1 of the model fine-tuned with different training set sizes (1M / 10M / 100M samples) on
ID, numerical OOD, and form OOD test datasets. The best results are highlighted.
ples is still limited to a fixed number of reasoning
steps, whereas the models trained on 10M / 100M
samples exhibit a higher-level understanding of the
problem and perform an adequate number of reasoning steps. However, compared to the model
trained on 100M samples, the model trained on
10M samples may still encounter computational or
logical errors in the final step of reasoning.
**5** **Conclusion**
Large language models (LLMs) are intrinsically
zero-shot and multi-task learners. However, mathematical reasoning still poses challenges for LLMs,
we propose that the reasons can be mainly categorized into three folds: (1) Requirement of multistep derivation; (2) Lack of high quality data for
fine-tuning; (3) Difficulty in extrapolation. In this
paper, we design an arithmetical puzzle and make
an early attempt to solve these challenges. We develop a 24-point puzzle-like problem which asks
for multi-step calculations to arrive at the correct
answer. A corresponding data synthesis pipeline is
proposed to generate an arbitrary amount of highquality data, on which a series of LLMs are finedtuned. In order to verify the extrapolation capability of our models, we have designed two outof-domain benchmarks and show that our model
achieves competitive performance. Furthermore, a
data scaling experiment is conducted and it is concluded that by increasing the amount of training
data, both the training loss and in/out-of-domain
performance of the fine-tuned model improve accordingly.
**Acknowledgements**
We appreciate Peng Sun for providing the initial
SFT dataset, and Xintian Han for suggestions about
the reward calculation and ablation study. We
would also like to thank Liang Xiang and Xun
Zhou for the helpful discussions across the project.
-----
**6** **Limitations**
In this study, we have explored the mathematical
extrapolation of Large Language Models (LLMs)
and discovered that, with high-quality synthetic
data, LLMs demonstrates certain generalization capabilities in mathematical extrapolation. However,
LLMs have not yet fully mastered this capability,
and it remains uncertain if this ability can be extended to other complex mathematical tasks. In
the future, our research will focus on investigating
and enhancing this capability, aiming to empower
LLMs to explore unsolved mathematical problems
through leveraging our existing knowledge.
**7** **Ethics Statement**
In this research, we adhere to strict ethical guidelines and principles. The study has been designed
and implemented with respect for the rights, privacy, and well-being of all individuals involved.
All of our data is synthesized using our proposed
data synthesis algorithm, ensuring compliance with
relevant regulations and standards. Our findings
and conclusions are reported accurately and objectively, avoiding any misrepresentation or manipulation of data. The entire process and outcomes
are free from intellectual property and ethical legal
disputes.
**References**
Josh Achiam, Steven Adler, Sandhini Agarwal, Lama
Ahmad, Ilge Akkaya, Florencia Leoni Aleman,
Diogo Almeida, Janko Altenschmidt, Sam Altman,
Shyamal Anadkat, et al. 2023. Gpt-4 technical report.
_arXiv preprint arXiv:2303.08774._
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster,
Marco Dos Santos, Stephen McAleer, Albert Q Jiang,
Jia Deng, Stella Biderman, and Sean Welleck. 2023.
Llemma: An open language model for mathematics.
_arXiv preprint arXiv:2310.10631._
Xiao Bi, Deli Chen, Guanting Chen, Shanhuang Chen,
Damai Dai, Chengqi Deng, Honghui Ding, Kai Dong,
Qiushi Du, Zhe Fu, et al. 2024. Deepseek llm: Scaling open-source language models with longtermism.
_arXiv preprint arXiv:2401.02954._
Tom Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, et al. 2020. Language models are few-shot
learners. Advances in neural information processing
_systems, 33:1877–1901._
Wenhu Chen, Xueguang Ma, Xinyi Wang, and
William W Cohen. 2022. Program of thoughts
prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint
_arXiv:2211.12588._
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul
Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2023. Palm: Scaling language
modeling with pathways. Journal of Machine Learn_ing Research, 24(240):1–113._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, et al. 2021. Training verifiers to solve math
word problems. arXiv preprint arXiv:2110.14168.
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon,
Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Pal: Program-aided language
models. In International Conference on Machine
_Learning, pages 10764–10799. PMLR._
Zhibin Gou, Zhihong Shao, Yeyun Gong, Yujiu Yang,
Minlie Huang, Nan Duan, Weizhu Chen, et al.
2023. Tora: A tool-integrated reasoning agent
for mathematical problem solving. arXiv preprint
_arXiv:2309.17452._
Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai
Dong, Wentao Zhang, Guanting Chen, Xiao Bi,
Y Wu, YK Li, et al. 2024. Deepseek-coder: When the
large language model meets programming–the rise of
code intelligence. arXiv preprint arXiv:2401.14196.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the math dataset. arXiv preprint
_arXiv:2103.03874._
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan
Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang,
and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. _arXiv preprint_
_arXiv:2106.09685._
Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego
de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral
7b. arXiv preprint arXiv:2310.06825.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in
_neural information processing systems, 35:22199–_
22213.
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate
Kushman, and Hannaneh Hajishirzi. 2016. Mawps:
A math word problem repository. In Proceedings of
_the 2016 conference of the north american chapter of_
_the association for computational linguistics: human_
_language technologies, pages 1152–1157._
-----
Aitor Lewkowycz, Anders Andreassen, David Dohan,
Ethan Dyer, Henryk Michalewski, Vinay Ramasesh,
Ambrose Slone, Cem Anil, Imanol Schlag, Theo
Gutman-Solo, et al. 2022. Solving quantitative reasoning problems with language models. Advances
_in Neural Information Processing Systems, 35:3843–_
3857.
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen,
Jian-Guang Lou, and Weizhu Chen. 2023. Making
large language models better reasoners with stepaware verifier. arXiv preprint arXiv:2206.02336.
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri
Edwards, Bowen Baker, Teddy Lee, Jan Leike,
John Schulman, Ilya Sutskever, and Karl Cobbe.
2023. Let’s verify step by step. _arXiv preprint_
_arXiv:2305.20050._
Haoyu Lu, Wen Liu, Bo Zhang, Bingxuan Wang, Kai
Dong, Bo Liu, Jingxiang Sun, Tongzheng Ren, Zhuoshu Li, Yaofeng Sun, et al. 2024. Deepseek-vl:
towards real-world vision-language understanding.
_arXiv preprint arXiv:2403.05525._
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei
Lin, Shifeng Chen, and Dongmei Zhang. 2023. Wizardmath: Empowering mathematical reasoning for
large language models via reinforced evol-instruct.
_arXiv preprint arXiv:2308.09583._
Joon Sung Park, Joseph O’Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. 2023. Generative agents: Interactive simulacra
of human behavior. In Proceedings of the 36th An_nual ACM Symposium on User Interface Software_
_and Technology, pages 1–22._
Alec Radford, Jeffrey Wu, Rewon Child, David Luan,
Dario Amodei, Ilya Sutskever, et al. 2019. Language
models are unsupervised multitask learners. OpenAI
_blog, 1(8):9._
Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn.
2024. Direct preference optimization: Your language
model is secretly a reward model. Advances in Neu_ral Information Processing Systems, 36._
John Schulman, Filip Wolski, Prafulla Dhariwal,
Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint
_arXiv:1707.06347._
Gemini Team, Rohan Anil, Sebastian Borgeaud,
Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu,
Radu Soricut, Johan Schalkwyk, Andrew M Dai,
Anja Hauth, et al. 2023. Gemini: a family of
highly capable multimodal models. arXiv preprint
_arXiv:2312.11805._
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier
Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal
Azhar, et al. 2023a. Llama: Open and efficient foundation language models. arXiv preprint
_arXiv:2302.13971._
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint_
_arXiv:2307.09288._
Trieu H Trinh, Yuhuai Wu, Quoc V Le, He He,
and Thang Luong. 2024. Solving olympiad geometry without human demonstrations. _Nature,_
625(7995):476–482.
Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell,
Geoffrey Irving, and Irina Higgins. 2022. Solving math word problems with process-and outcomebased feedback. arXiv preprint arXiv:2211.14275.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob
Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz
Kaiser, and Illia Polosukhin. 2017. Attention is all
you need. Advances in neural information processing
_systems, 30._
Leandro von Werra, Younes Belkada, Lewis Tunstall, Edward Beeching, Tristan Thrush, Nathan
Lambert, and Shengyi Huang. 2020. Trl: Trans[former reinforcement learning. https://github.](https://github.com/huggingface/trl)
[com/huggingface/trl.](https://github.com/huggingface/trl)
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou,
et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural
_Information Processing Systems, 35:24824–24837._
Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, and
Bin Wang. 2023. Cmath: can your language model
pass chinese elementary school math test? _arXiv_
_preprint arXiv:2306.16636._
Fei Yu, Anningzhe Gao, and Benyou Wang. 2023.
Outcome-supervised verifiers for planning in mathematical reasoning. arXiv preprint arXiv:2311.09724.
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen.
2023. Mammoth: Building math generalist models
through hybrid instruction tuning. arXiv preprint
_arXiv:2309.05653._
Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang,
Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen,
and Nan Duan. 2023. Agieval: A human-centric
benchmark for evaluating foundation models. arXiv
_preprint arXiv:2304.06364._
Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B
Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. 2019. Fine-tuning language models from human preferences. _arXiv_
_preprint arXiv:1909.08593._
-----
**A** **Appendix**
**A.1** **Hyperparameter Settings**
In the SFT stage, we follow common fine-tuning hyperparameter settings for our model. We set learning
rate to 1e−4 and adopt the cosine learning rate scheduler. We use low-rank adaptation (LoRA) tuning with
a rank of 5, α of 32, and dropout of 0.05. And we employ Adamw optimizer with β1 = 0.9, β2 = 0.95
and ϵ = 1e − 9. Eight NVIDIA A100-SXM4-80GB GPUs are used to train the model with a batch size of
50 and the maximum epoch set to 5. Detailed settings are listed in Table 3.
|Hyperparameter|Value|Hyperparameter|Value|
|---|---|---|---|
|Learning Rate Learning Rate Scheduler Warmup Step GPU Nums Batch Size Per GPU Max Query Length Max Answer Length Max Generated Length Precision Vocabulary Size|1e 4 − Cosine 0 8 50 36 130 167 bfloat16 32002|Epochs Optimizer Optimizer β1 Optimizer β2 Optimizer ϵ Precision LoRA Rank LoRA α LoRA Dropout Seed|5 Adamw 0.9 0.95 1e 9 − AMP 8 32 0.05 1234|
Table 3: Hyperparameter Settings.
**A.2** **Evaluation of the Base Model**
We evaluate the base model (open-llama-3B) on the proposed arithmetical puzzle problem. As shown in
Table 4 and Table 5, with either the few-shot prompting (2-Shot, 8-Shot) or Chain-of-Thought (CoT), the
base model performs poorly on the puzzle. We propose this is due to the symbolic form of our prompt,
the model needs to understand the underlying pattern in order to solve the arithmetical puzzle. Without
fine-tuning on the synthetic data, the model may struggle to comprehend such type of prompt.
|Approach|ID|OOD-N|OOD-F|OOD-1000|
|---|---|---|---|---|
|2-Shot Prompting 8-Shot Prompting Chain-of-Thought|0 0 0|0 0 0|0 0 0|0 0 0|
Table 4: Evaluation of the base model with few-shot and Chain-of-Thought prompting. As expected, the base model
performs poorly across all the prompting techniques.
|Chain-of-Thought Prompting|Generated Response of the Base Model|
|---|---|
|Let’s play an arithmetic puzzle game. First, we are given a set of numbers: 8, 37, 11, 26, 15. The objective is to use operations like addition, subtraction, multiplication, and division to obtain the number 8, using each number exactly once. Please provide the reasoning process step by step.|The first thing to do is to find the number of ways to get 8. There are 4 ways to get by adding 8 to 37, 11, 26, 15. There are 4 ways to get 8 by subtracting 8 from 37, 11, 26, 15. There are 4 ways to get 8 by multiplying 8 from 37, 11, 26, 15. There are 4 ways to get 8 by dividing 8 from 37, 11, 26, 15. There are 4 ways to get 8 by adding 8 from 37, 11, 26, 15. There are 4 ways to get 8 by subtracting 8 from 37, 11, 26, 15. There are 4 ways to get 8 by multiplying 8 from 37, 11, 26, 15. There are 4 ways to get 8 by dividing 8 from 37, 11, 26, 15. . ..|
Table 5: An example of Chain-of-Thought prompting and the generated response of the base model.
-----
We further test several open-source (Llama-2-7B (Touvron et al., 2023a), Deepseek-Coder-33B (Guo
et al., 2024)) and closed-source models (GPT4 (Achiam et al., 2023)) with few-shot prompting. As shown
in Table 6, these models also perform poorly on our benchmarks. In Table 7, we provide an example of
the CoT prompting and the generated responses from these models.
|Model|ID|OOD-N|OOD-F|OOD-1000|
|---|---|---|---|---|
|GPT4 Llama-2-7B Deepseek-Coder-33B|0.002 0 0.001|0.001 0 0|0 0 0.001|0.001 0 0|
Table 6: Evaluation results of Llama-2-7B, Deepseek-Coder-33B, and GPT4 on our proposed benchmarks.
|Few-Shot Prompting|Model|Response|
|---|---|---|
|A puzzle problem. The first two lines are examples. Please solve the third line: 36, 32, 57, 55, 11:30S11/36=0, 0+32=32, 55+32=87, 87-57=30 17, 6, 20, 48, 30:-40S17+48=65, 30/6=5, 20+5=25, 25-65=-40 8, 37, 11, 26, 15:8S|GPT4|Sure, I’ll solve the third line of the puzzle using the pattern established in the first two lines: Firstly, divide the fifth number by the first number, which will yield an integer part: 15 / 8 = 1 (ignore the decimal part) Next, add the result to the second number: 1 + 37 = 38 After that, add this result to the fourth number: 38 + 26 = 64 Final step, subtract the third number from this result: 64 - 11 = 53 However, the expected output provided is 8, but following the pattern above, the answer seems to be 53. It’s possible that there are some other rules or factors at play, or therecould be an error in the input or expected output.|
||Llama-2-7B|Sure, I’d be happy to help you solve the third line of the puzzle! The equation is:8,37,11,26,15:8S To solve this equation, we can start by simplifying the fraction: 8/8 = 1 So, the solution to the equation is: 11,26,15 = 52 I hope this helps! Let me know if you have any other questions.|
||DeepSeek- Coder-33B|The third line of the puzzle problem is: 17, 6, 20, 48, 30:-40S17+48=65, 30/6=5, 20+5=25, 25-65=-40 The operations and numbers are: 1. 17 + 48 = 65 2. 30 / 6 = 5 3. 20 + 5 = 25 4. 25 - 65 = -40 Let’s solve these operations: 1. 17 + 48 = 65 2. 30 / 6 = 5 3. 20 + 5 = 25 4. 25 - 65 = -40 So, the solution to the third line is: 17 + 48 = 65 30 / 6 = 5 20 + 5 = 25 25 - 65 = -40 This line of the puzzle problem is solved.|
Table 7: An example of few-shot prompting and the generated responses of GPT4, Llama-2-7B, and DeepseekCoder-33B. We provide the models with two examples before the puzzle. As shown, all of the models fail to solve
the given problem. GPT4 seems to understand the requirement of the puzzle, while the other two fail.
-----
As shown in Table 7, Llama-2-7B fails to understand the requirement of the puzzle and just outputs two
meaningless equations. Deepseek-Coder-33B treats the second example in few-shot prompting as the
puzzle, and repeats the same calculations three times. It seems that GPT4 has well understood the prompt
and used all the candidate integers only once, the calculations within the generated response are all right,
while the solution is wrong. Actually, such kind of problem is very challenging, as the model needs to
infer the requirement of the puzzle from the provided examples and then figure out the correct solution.
**A.3** **Case Study**
Prompt: 21, 55, 40, 54, 17, 15, 20, 60:20
Response 1M(✗): 55/15=3, 21/54=0, 40-17=23, 23-3=20, 0*60=0, 20-0=20 ?
Response 10M(✓): 55/17=3, 21-3=18, 15/54=0, 18-60=-42, 0*-42=0, 0/40=0, 0+20=20
Response 100M(✓): 15/55=0, 21/54=0, 0*40=0, 0*60=0, 0*17=0, 0+20=20, 20-0=20
Prompt: 28, 48, 1, 15, 45, 20, 57, 47:0
Response 1M(✗): 45/48=0, 15*20=300, 47*0=0, 28/57=0, 0*300=0, 0-0=0 ?
Response 10M(✓): 45/1=45, 48/20=2, 28/47=0, 15/2=7, 57-45=12, 12-7=5, 0*5=0
Response 100M(✓): 47/15=3, 20/48=0, 3/57=0, 0*28=0, 0*1=0, 0*45=0, 0+0=0
Prompt: 46, 3, 49, 27, 39, 20, 53, 58:0
Response 1M(✗): 49/58=0, 20*3=60, 27/46=0, 0*53=0, 0*39=0, 0-0=0 ?
Response 10M(✓): 39/49=0, 20/53=0, 0*58=0, 0*3=0, 0*46=0, 27*0=0, 0-0=0
Response 100M(✓): 20/53=0, 39/49=0, 0*27=0, 0*3=0, 0*46=0, 0*58=0, 0+0=0
Prompt: 53, 55, 47, 5, 6, 7, 22, 22:55
Response 1M(✗): 53-22=31, 5/47=0, 22/6=3, 31*0=0, 0/7=0, 55-0=55 ?
Response 10M(✗): 53/22=2, 22/55=0, 7-47=-40, 0+-40=-40, 6-2=4, 4-5=1, 1--40=51
Response 100M(✓): 22/5=4, 47/53=0, 4/7=0, 0*22=0, 0*6=0, 0+0=0, 55-0=55
Prompt: 2, 37, 22, 41, 56, 19, 60, 40:58
Response 1M(✗): 41-22=19, 19+40=59, 59-37=22, 22-19=3, 56/3=18, 18*2=58 ?
Response 10M(✗): 22/41=0, 0/2=0, 19/60=0, 0*0=0, 0*40=0, 0+37=37, 37+56=85
Response 100M(✓): 41/22=1, 19/60=0, 0+56=56, 37/2=18, 18/1=18, 40/18=2, 2+56=58
Prompt: 18, 16, 48, 12, 57, 7, 56, 32:-87
Response 1M(✗): 12-56=-44, 18+-44=-26, 57+32=89, 16+-26=-10, 48/-10=-5, -5- 89=-87 ?
Response 10M(✗): 18-56=-38, 12+32=44, 48+-38=10, 10-44=-34, 16+-34=-18, -18-57=-75, -75-7=-87
Response 100M(✓): 16-56=-40, 12+57=69, 18+69=87, 32+7=39, 39/48=0, 0*-40=0, 0-87=-87
Prompt: 18, 3, 33, 51, 28, 18, 59, 4:-79
Response 1M(✗): 18-59=-41, 33+4=37, 37+51=88, 18+-41=-23, 28-88=-60, -23+- 60=-79 ?
Response 10M(✗): 18-3=15, 15-51=-36, 33+18=51, 59+51=110, 110-28=82, 4+82=86, -36-86=-170
Response 100M(✓): 18-59=-41, 33+18=51, 51+51=102, 28-102=-74, 3/-41=-1, -74+-1=-75, -75-4=-79
Figure 4: Cases from the form OOD test dataset. The correct steps are highlighted in green, while the incorrect
steps in red. Generally speaking, performance of model fine-tuned with 1M training data is the worst.
-----
**A.4** **Visualization of the Proposed Puzzle**
Arithmetical Puzzle
### 3, 6, 7, 51, 58
**58** **-** **51** **=** **7**
**6** **-** **7** **=** **-1**
**3** **÷** **-1** **=** **-3**
**-3** **+** **7**
# =
Figure 5: Visualization of the proposed arithmetical puzzle. Given the candidate integers 3, 6, 7, 51, 58 and the
target integer 4, the answer is 58 − 51 = 7, 6 − 7 = −1, 3 × (−1) = −3, −3 + 7 = 4.
-----
| [
"Jie, Chen",
"Haolong, Li",
"Yu, Ma",
"Yinqi, Zhang",
"Chen, Ye"
] | 2024-06-04T00:00:00 | ACL 2024 Findings | false | 0 | 0 | null | http://arxiv.org/abs/2406.02100 | https://arxiv.org/abs/2406.02100 | https://www.semanticscholar.org/paper/0d40d242d24e6605abe3b0d7b95ba4d38e39b266 |
Exploring Metamath Proof Structures | N/A | null | [
"Zsolt, Zombori",
"Christoph, Wernhard"
] | 2024-09-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
|
Exploring Reversal Mathematical Reasoning Ability for Large Language Models | Large language models (LLMs) have presented remarkable capabilities in the wide range of natural language understanding and reasoning tasks. Despite their success, a few works indicate that LLMs suffer from the “reversal curse”, in which LLMs can’t employ the inverted structure “B is A” when they are trained based on “A is B”. To explore the effect of the “reversal curse” for LLMs on complex mathematical reasoning tasks, we present two reversal datasets upon GSM8K and MathQA and verify that LLMs also struggle to solve reversal mathematical problems. We analyze the potential reason and attribute it to the insufficient modeling of the relationship between reasoning steps caused by the left-to-right objective. Consequently, based on the characteristics of multi-step reasoning, we design a novel training method to improve the general and reversal reasoning abilities. Finally, we conduct experiments on four mathematical datasets, and the results demonstrate that our method significantly improves the general reasoning capacities and alleviates the reversal problem. Our datasets and codes are available at https: //github.com/AllForward/ReversalMath. | null | # Exploring Reversal Mathematical Reasoning Ability for Large Language Models
**Pei Guo[♠][*], Wangjie You[♠][*], Juntao Li[♠], Bowen Yan[♢], Min Zhang[♠]**
_♠Institute of Computer Science and Technology, Soochow University, China_
_♢Department of Computer Science and Technology, Tsinghua University, China_
{pguolst,wjyouuu}@stu.suda.edu.cn;
{ljt, minzhang}@suda.edu.cn;
[email protected]
**Abstract**
Large language models (LLMs) have presented
remarkable capabilities in the wide range of
natural language understanding and reasoning
tasks. Despite their success, a few works indicate that LLMs suffer from the “reversal curse”,
in which LLMs can’t employ the inverted structure “B is A” when they are trained based on
“A is B”. To explore the effect of the “reversal curse” for LLMs on complex mathematical reasoning tasks, we present two reversal
datasets upon GSM8K and MathQA and verify that LLMs also struggle to solve reversal
mathematical problems. We analyze the potential reason and attribute it to the insufficient
modeling of the relationship between reasoning steps caused by the left-to-right objective.
Consequently, based on the characteristics of
multi-step reasoning, we design a novel training method to improve the general and reversal reasoning abilities. Finally, we conduct experiments on four mathematical datasets, and
the results demonstrate that our method significantly improves the general reasoning capacities and alleviates the reversal problem.
_[Our datasets and codes are available at https:](https://github.com/AllForward/ReversalMath)_
[//github.com/AllForward/ReversalMath.](https://github.com/AllForward/ReversalMath)
**1** **Introduction**
With the significant increase in data and model
scale, large language models (LLMs) (Brown et al.,
2020; Hoffmann et al., 2022; Touvron et al., 2023;
OpenAI, 2023) have emerged with their powerful multi-dimensional capabilities, such as longcontext open domain conversation, code assistants (Chen et al., 2021b; Luo et al., 2023b; Wang
et al., 2023; Zheng et al., 2023b), instruction following (Ouyang et al., 2022; Taori et al., 2023),
particularly in the complex reasoning tasks solved
by chain-of-thought (CoT) methods (Wang et al.,
2022; Wei et al., 2022; Lightman et al., 2023).
- Equal Contribution
**Models** **GSM8K/Reversal** **MathQA/Reversal**
GPT-3.5-Turbo 77.4 / 52.2 63.5 / 44.6
Flan-T5-3B 13.5 / 3.5 5.8 / 5.8
Flan-T5-11B 16.1 / 12.3 15.5 / 9.6
LLama2-7B 13.7 / 7.0 19.2 / 10.3
LLama2-13B 25.3 / 10.7 25.6 / 10.3
LLama2-70B 52.1 / 30.2 42.0 / 29.7
Table 1: The accuracy of different LLMs on GSM8K,
MathQA, and their correlated reversal test datasets.
Nevertheless, a number of contemporary studies (Berglund et al., 2023; Grosse et al., 2023) highlight the presence of the “reversal curse” predicament in LLMs, where LLMs are trained based on
the structure “A is B” in a sentence, and they cannot
employ the inverted structure “B is A” to extrapolate and respond to queries effectively. Almost all
works merely explore the “reversal curse” based on
the name-to-description reversal task. It is worth
exploring whether complex multi-step reasoning
tasks also suffer from this predicament. If it exists, how should we alleviate it and improve the
performance of LLMs’ reasoning?
To explore this problem, we choose one of
the most challenging and representative reasoning
tasks, i.e., mathematical problems (Collins et al.,
2023; Imani et al., 2023; Luo et al., 2023a; Yuan
et al., 2023), as the testbed. Resembling with the
format of reversal curse problems, backward mathematical reasoning gives the answer to the original
question and reverses to infer one of the variables
in the question, which is first formalized by Yu et al.
(2024). They construct a backward test set upon
GSM8K (Cobbe et al., 2021) to evaluate the backward reasoning capabilities of LLMs. Their preliminary results on LLaMA-2-7B confirm that recent
LLM can struggle to solve mathematical problems
in backward rationales, leaving extensive verification and in-depth understanding unexplored.
To fill this blank, we first propose two reversal
13671
-----
mathematical test sets to further verify that LLMs
suffer from the “reversal curse” on mathematical
problems. Specifically, we employ GPT-4 to imitate the format and style of original questions and
generate the reversal data based on the GSM8K
and MathQA (Amini et al., 2019) test sets. The
detailed construct process and data quality verification are elaborated in Section 3. After constructing two reversal test sets, we use them to evaluate representative LLMs of different model scales.
As shown in Table 1, compared with original test
sets, LLMs with different scales and architectures
present a significant accuracy decline in the reversal
datasets except for Flan-T5-3B on MathQA. This
phenomenon sufficiently demonstrates that LLMs
actually face difficulties in reversal reasoning in
mathematical problems.
From this discovery, we analyze the potential reasons and speculate that it’s related to the traditional
left-to-right training objective. In the process of
mathematical multi-step reasoning, LLMs strictly
follow the order of deductions from left to right,
which solely focuses on acquiring the association
from conditions to conclusions. This shortcoming
is essentially the lack of context modeling for reasoning steps, which causes the difficulty of reversal
reasoning and affects the general reasoning performance. To sufficiently model the relationship of
different reasoning steps, and enhance the reversal
and overall reasoning ability of LLMs, we propose
a simple and effective training framework, which
introduces an additional bidirectional training objective based on the characteristics of multi-step
deduction. In particular, we choose the partial steps
as the context which employs the bidirectional attention mechanism, while utilizing the causal attention mechanism to predict the remaining unselected steps. By employing this approach, LLMs
are trained to extrapolate the preceding steps in
a reverse manner, drawing upon the information
from the succeeding steps.
To validate the effectiveness of our method,
we fine-tune Flan-T5-XL and Llama2-7B on the
GSM8K dataset. Subsequently, we evaluate the
performance of these models on four benchmarks
and two reversal mathematical datasets. The results show that the models’ general and reversal
reasoning ability is superior to the latest methods
that use additional training skills, and even some
data augmentation strategies. Our contributions are
listed below:
- We construct two reversal mathematical
datasets to further explore the reversal reasoning ability of LLMs, and prove that LLMs
actually suffer from the “reversal curse” on
mathematical problems.
- We analyze the potential reason and attribute it
to the insufficient modeling of the relationship
between reasoning steps. Consequently, based
on the characteristics of multi-step deduction
tasks, the bidirectional training objective is
designed to alleviate this problem.
- Whether on four benchmarks or two reversal
datasets, applying our approach to different
settings all achieves significant improvements,
and even close to GPT-3.5-Turbo on the parts
of benchmarks.
**2** **Related Work**
**2.1** **Large Language Models**
LLMs have shown impressive multi-dimensional
capabilities, significantly affecting the natural language processing community (Brown et al., 2020;
Hoffmann et al., 2022; Touvron et al., 2023; OpenAI, 2023). Recently, Wei et al. (2022); Wang
et al. (2022) uncovered the broad prospects of CoT
reasoning capabilities within LLMs. Given a few
augmenting few-shot examples with multiple reasoning steps, LLMs can generate multi-step deduction toward the answer of solving complex tasks,
e.g., this approach has been widely used on GPT3.5 (OpenAI, 2022), GPT-4 (OpenAI, 2023) and
LLaMA (Touvron et al., 2023) to tackle various reasoning tasks (Fu et al., 2023b; Zhang et al., 2023).
**2.2** **Reversal Curse**
Though LLMs show impressive performance on
various tasks, a number of current works (Berglund
et al., 2023; Grosse et al., 2023) clarify that LLMs
suffer from the reversal curse. Specifically, the
autoregressive LLMs are trained on the logical
sentence structure "A is B" and fail to infer "B
is A". This phenomenon suggests that LLMs don’t
grasp the relationship of knowledge presented in
the training data adequately. Lv et al. (2023) further
explore this problem and contend that the reversal
curse arises partly due to the specific training objectives pursued by models, mostly evident in the
widespread adoption of next-token prediction techniques in causal language models. Besides, Wu
13672
-----
**Prompt: Please follow the examples that modify the question by adding the original question and answer**
as a new condition, only hiding one condition that appeared in the question, must keeping other conditions
unchanged, and making the hiding condition as a new question. Finally, provide the modified question and
the hiding number, and follow the format as: Modified question: \n Hiding number:
**Example 1:**
**Original question: If Ann is 9 years old and her brother is twice her age, how old will her brother be in 3**
years? (The answer is 21)
**Modified question: Ann's brother is twice as old as she is. In 3 years, her brother will be 27 years old. Ho**
w old is Ann now?
**Hiding number: 9**
**Example 2:**
**Original question: Morisette and Kael were asked to bring fruits. Morisette brought 5 apples and 8 orange**
s, while Kael brought twice the amount of apples and half the number of oranges than Morisette. How man
y fruits do they have in total? (The answer is 27)
**Modified question: Morisette and Kael were asked to bring fruits. Morisette brings 5 apples and some ora**
nges, and Kael brings twice the apples and half the oranges that Morisette brings, they have 27 fruits in tot
al. How many oranges does Morisette bring?
**Hiding number: 8**
**Original question: {Q}**
Figure 1: The prompt for obtaining reversal data. “Modified question” and “Hiding number” denote the generated
reversal question and the corresponding answer.
et al. (2023) find that BERT is immune to the reversal curse. At the same time, a few bidirectional
modeling approaches are proposed to mitigate the
curse (Lv et al., 2023; Ma et al., 2023).
**2.3** **Mathematical Reasoning**
Mathematical multi-step reasoning, one of the most
challenging problems, has attracted widespread attention. We divide the related work into two categories. One category is the prompt-based methods, another is finetuning-based. For the first one,
a few approaches (Narang et al., 2023; Fu et al.,
2023c; Zheng et al., 2023a; Diao et al., 2023; Li
et al., 2023b) provide multiple reasoning examples to LLMs, and leverage the excellent in-context
capability of LLMs to generate high-quality reasoning paths. For instance, Narang et al. (2023)
entail generating various reasoning chains, potentially yielding multiple candidate answers. Among
these, the answer that garners the most votes is
subsequently chosen as the ultimate response. Another category is obtaining the CoT paths from
closed-source LLMs (e.g., GPT-3.5, GPT-4) by employing knowledge distillation and utilizing the
knowledge to fine-tune open-source models (e.g.,
Flan-T5, LLaMA). Yuan et al. (2023) propose the
rejection sampling fine-tuning (RFT) to improve
the performance through collecting more reasoning
paths as augmented datasets. WizardMath (Luo
et al., 2023a) applies reinforcement learning from
the evol-instruct feedback method to enhance reasoning ability. MetaMath (Yu et al., 2024) adopts
four data augmentation strategies to generate highdiversity data and obtain excellent performance. Li
et al. (2023a) explore the effect of augmented data
from multiple perspectives and put forward query
and response augmentations approaches. An et al.
(2023) demonstrate the effectiveness of learning
from mistakes.
Besides, the potential of smaller language models (SLMs) reasoning has been verified (Magister
et al., 2022; Ho et al., 2023; Fu et al., 2023a). Shridhar et al. (2023), Han et al. (2023) and Junbing et al.
(2023) decompose the complex questions into a series of simpler problems. Liu et al. (2023) further
distill the self-evaluation capability of LLMs into
SLM to improve the performance.
**3** **Reversal Mathematical Datasets**
**Construction**
Different levels of mathematical word problems
have been proposed to evaluate LLMs’ general mathematical reasoning ability, such as
AddSub (Hosseini et al., 2014), MultiArith (Roy
13673
-----
**Question:**
Morisette and Kael were asked to bring fruits. Morisette brought 5 apples and 8 oranges, while Kael brou
ght twice the amount of apples and half the number of oranges than Morisette. How many fruits do they
have in total? (The correct answer is 27)
**CoT Reasoning:**
Step 1: Morisette brought 5 apples and 8 oranges, totaling 13 fruits.
Step 2: Kael brought twice the amount of apples, so 2 * 5 = 10 apples, and half the number of oranges
than Morisette, which is 8 / 2 = 4 oranges.
Step 3: Kael brought a total of 10 + 4 = 14 fruits.
Step 4: Thus, the total number of fruits they have is 13 + 14 = 27 fruits.
Figure 2: An example of the mathematical question and the corresponding CoT reasoning steps.
**4** **Methodology**
In this section, we first introduce the training objective of causal mechanisms widely utilized in
LLMs. After that, we analyze the shortcomings
of this objective in multi-step reasoning tasks like
mathematical problems and design an additional
training objective to alleviate this problem.
**4.1** **Unidirectional Modeling of Reasoning**
The causal attention mechanism is widely applied
to LLMs to fine-tune various downstream tasks,
including multi-step mathematical reasoning problems. Formally, we denote a mathematical dataset
as D = (xi, yi)[N]i=1 [where][ x][i][ is a question,][ y][i][ rep-]
resents the CoT reasoning steps to solve question
_xi, N stands for the number of samples in D. For_
each sample, LLMs are trained to maximize the
following likelihood:
and Roth, 2015), MathQA (Amini et al., 2019), Asdiv (Miao et al., 2020), SVAMP (Patel et al., 2021),
GSM8K (Cobbe et al., 2021), Math (Hendrycks
et al., 2021) and so on. Moreover, Yu et al. (2024)
design a reversal GSM8K dataset to evaluate the
backward reasoning ability of LLMs, but the format and style of questions are different from the
original, which could affect the performance of
LLMs. Consequently, we construct two reversal
datasets upon the GSM8K and MathQA datasets
following the original format and style to further
explore the reversal reasoning ability of LLMs.
Specifically, we first follow SpecialFT (Fu et al.,
2023a) to use 800 instances as the GSM8K test
set, and artificially extract 600 medium difficulty
questions from the MathQA (Amini et al., 2019).
It’s noticed that we change the type of MathQA
from multiple-choice to answer questions. To keep
the format and style of the original questions unchanged, we design a few-shot prompt (shown in
Figure 1) for GPT-4 to imitate the provided examples and generate the reversal question denoted as
"Modified question" and its corresponding answer
denoted as "Hiding number".
Verifying the correctness of generated reversal
instances is also important. To ensure the quality of
generated data, we utilize GPT-4 to generate multiple reasoning results for every reversal instance and
judge the consistency of results. If multiple results
remain consistent, we suggest that this instance is
correct. On the contrary, we manually verify the
quality of the question. Specifically, If the corresponding answer could be obtained based on the
description of the question, we keep it. Otherwise,
we artificially construct this instance following the
examples in Figure 1.
log P (yi[t][|][y]i[<t][, x][i][;][ θ][)] (1)
_Lcausal =_
where yi[<t] denotes the previous tokens before token
_yi[t][,][ T][ denotes the length of][ y][i][, and][ θ][ denotes the]_
model parameters.
**4.2** **Bidirectional Modeling of Reasoning**
Through applying Equation 1, LLMs strictly follow the order of deductions from left to right in the
mathematical problems. Nevertheless, this training
objective solely focuses on acquiring the association from conditions to conclusions, disregarding
the reciprocal relationship. For example, as shown
in Figure 2, LLMs are trained to predict the ultimate answer even the intermediate answer with
preceding conditions, namely inferring step 4 from
13674
-----
|Col1|[B] y y 1 2|y y 3 4|[B] y y 5 6|
|---|---|---|---|
|[B] y 1 y 2||||
|y 3 y 4||||
Can be attended
Output step 1 step 3 Can’t be attended
**[B] y1 y2 y3 y4 [B] y5 y6 y7 y8**
Decoder Block **[B]**
**y1**
**y2**
Attention Mask
**y3**
**y4**
**[B]**
Decoder Block **y5**
**y6**
**y7**
Input step 1 step 2 step 3 step 4 **y8**
(a) decoder block with attention algorithm (b) specifical attention mask matrix
Figure 3: The left part (a) presents the decoder block with a modified attention algorithm. Steps 1 and 3 need to be
predicted, which keep every token following a left-to-right order. Steps 2 and 4 are the observation steps that adopt
bidirectional modeling. Sub-figure (b) is a specifical attention mask matrix for (a). [B] is a special token “BOS”.
steps 1 to 3 and inferring step 3 from step 2, while
disregarding the process of deducing preceding conditions from the conclusions, such as how to infer
the deductions of numbers "13" and "14" appeared
in the conclusion based on step 4. This shortcoming
is essentially the lack of context modeling for reasoning steps, which causes insufficient dependency
between different steps and affects the general reasoning performance. Struggling with the reversal
questions is one of the typical manifestations.
To sufficiently model the relationship between
different reasoning steps and improve their overall performance, especially the reversal reasoning
ability, we propose a simple and effective training
framework, which introduces an additional bidirectional training objective based on the characteristics of multi-step deduction. In particular, for
each CoT explanation yi, combined with n steps
_yi = {s1, s2, ...., sn}, we randomly sample parts_
of the steps that need to be predicted, denoted as
_s[pred]i_, and the rest steps denoted as s[obs]i . Given
that LLMs leverage the causal attention mechanism throughout the pre-training even fine-tuning
phases, it proves challenging to directly transition
the attention mechanism from unidirectional to
bidirectional. To maintain the original capabilities
of LLMs and better stimulate the ability of reversal
reasoning, we keep each token in s[pred]i following a
left-to-right order, which is shown in Figure 3 (b).
Besides, s[obs]i could be observed by each token in
_s[pred]i_, which achieves the bidirectional modeling
of the preceding and following steps. At the same
time, s[pred]i are not visible in s[obs]i in order to prevent information leakage. For instance, in Figure 3,
if step 2 can obtain the information of step 1, it
would lead to a potential information leakage that
could impact the prediction of step 1. Formally, the
proposed new training objective could be described
as follows:
_T_ _[pred]_
_t∈Xs[pred]i_
log P (yi[t][|][y]i[<t] _yi[t][∈][s]i[obs], xi; θ) (2)_
_∪_
_Lbid =_
where T _[pred]_ denotes the number of tokens in s[pred]i .
**4.3** **Training and Inference**
The causal and proposed bidirectional training objectives have been described in the previous section.
Now, we clarify the final objective and the details
of training and inference. In the training stage,
we combine Lcausal and Lbid as the final objective.
Not only can it maintain the original capacity of
LLMs, but it also improves the reversal reasoning ability. The corresponding computation can be
formulated as follows:
_L = Lcausal + αLbid_ (3)
13675
-----
where α is a customized parameter. As shown in
Figure 3 (b), to satisfy the autoregressive generation for each step in s[pred]i, the special token [BOS]
is padded at the beginning of the input. At the stage
of inference, LLMs still adopt the causal attention
algorithm as usual to perform reasoning autoregressively.
**5** **Experiments**
In this section, we evaluate the effectiveness of our
method by employing it on mathematical datasets.
To compare with current works (Fu et al., 2023a;
Han et al., 2023; Yu et al., 2024), we follow them
and adopt the original test set to present the general reasoning ability. The discussion about the
reversal reasoning capacity will be introduced in
Section 6.1.
**5.1** **Datasets**
To evaluate LLMs’ reasoning ability and general ability, we utilize four mathematical datasets,
namely GSM8K (Cobbe et al., 2021), MultiArith (Roy and Roth, 2015), ASDiv (Miao et al.,
2020), and SVAMP (Patel et al., 2021). Except for
LLama2 (MetaMath) with and without our method,
which uses the MetaMath training dataset, we only
fine-tune Flan-T5 and LLama2 on the GSM8K
training set, which contains 7,473 examples. It’s
noticed that for each GSM8K training instance, we
employ GPT-3.5-Turbo-1106 to generate multiple
related reasoning paths and choose the correct one
as the final solution (specifical prompt could be
seen in Appendix A). The remaining three datasets
are adopted to evaluate the out-of-distribution ability of models. Moreover, following the previous
work (Fu et al., 2023a; Han et al., 2023), we adopt
500 examples for each dataset as the validation set,
and the remaining examples as the test set (800 for
GSM8K, 400 for MultiArith, 18K for ASDiv, 500
for SVAMP).
**5.2** **Baselines**
For the baseline models, we divide them into
three categories: (i) Closed-sourced models: GPT3.5-Turbo-1106 (OpenAI, 2022), Code-Davinci002 (Chen et al., 2021a), LaMDA-137B (Kojima
et al., 2022), PaLM-60B (Chowdhery et al., 2022),
each of them presents strong reasoning ability. (ii)
Open-sourced generic models: Flan-T5 (Chung
et al., 2022) and LLama2 (Touvron et al., 2023),
which are widely applied to various tasks. (iii)
Specialized models: For Flan-T5 models, SpecialFT (Fu et al., 2023a) and DialCoT (Han et al.,
2023) respectively employ knowledge transfer and
questions decomposition to enhance models’ mathematical reasoning ability. For LLama2 models,
Rejection sampling Fine-Tuning (RFT) (Yuan et al.,
2023) collects multiple correct reasoning paths as
augmented data for fine-tuning. WizardMath (Luo
et al., 2023a) applies reinforcement learning from
the evol-instruct feedback method to the math domain. MetaMath (Yu et al., 2024) proposes four
data augmentation methods, significantly improving performance. Besides, we apply the supervised
fine-tuning (SFT) method on our designed GSM8K
training dataset for both models.
**5.3** **Implementation**
We implement our method on two model architectures, namely Encoder-Decoder (Flan-T5) and
Decoder-only (LLama2). Due to the limited
computing resources, we chose Flan-T5-3B and
LLama2-7B as backbones and fully fine-tuned
them. The greedy search algorithm is utilized to
execute inference processes for all the specialized
models. More experimental details can be seen in
Appendix A.1. We follow the previous works to
use statistical significance tests (Koehn, 2004) to
detect if the difference in accuracy score between
our approach and base settings is significant.
**5.4** **Results**
The overall results are shown in Table 2 and can be
summarized as follows:
**Results on Flan-T5 backbone. The SFT 3B model**
trained on GPT-3.5-Turbo-1106 generated dataset
is better than SpecialFT and DialCoT-S-PPO, even
better than their 11B on parts of datasets, presenting the importance of data quality. Moreover,
SFT and our method outperform LaMDA-137B
on four datasets, and are superior to PaLM-60B,
LLama2 7B to 13B on parts of datasets, showing the reasoning potential of the smaller models.
Compared with the SFT method, which employs
the causal attention mechanism, our method effectively improves the general reasoning ability of
models, which achieves an improvement of 7.3%
on GSM8K in testing accuracy. Besides, on three
out-of-distribution datasets, applying our method
also obtains significant improvements.
**Results on LLama2 7B backbone. Our approach**
outperforms the RFT method, which collects more
reasoning paths as augmented data for fine-tuning,
13676
-----
Math Word Problems
Methods Backbone #Params. GSM8K MultiArith ASDiv SVAMP
**Closed-sourced models**
GPT-3.5-Turbo-1106 - - 77.4 97.2 90.4 79.2
Code-Davinci-002 175B 63.1 95.8 80.4 76.4
Kojima et al. (2022) LaMDA 137B 14.8 45.0 46.6 37.5
Chowdhery et al. (2022) PaLM 60B 29.9 75.0 61.9 46.7
**Open-sourced models**
Chung et al. (2022) Flan-T5 3B 13.5 24.0 20.7 17.7
Chung et al. (2022) Flan-T5 11B 16.1 51.7 36.5 39.7
Touvron et al. (2023) LLama2 7B 13.7 45.2 50.1 33.4
Touvron et al. (2023) LLama2 13B 25.3 64.8 59.0 43.2
Touvron et al. (2023) LLama2 70B 52.1 92.5 74.7 67.4
**Specialized models with Flan-T5**
SpecialFT (Fu et al., 2023a) Flan-T5 3B 22.4 42.3 28.4 23.8
SpecialFT (Fu et al., 2023a) Flan-T5 11B 27.1 63.0 37.6 35.6
DialCoT-S-PPO (Han et al., 2023) Flan-T5 3B 25.6 46.9 30.7 27.1
DialCoT-S-PPO (Han et al., 2023) Flan-T5 11B 37.1 68.1 40.9 41.7
SFT Flan-T5 3B 28.0 59.2 48.8 38.8
**SFT w/ Ours†** Flan-T5 3B **35.3** **69.3** **54.3** **53.6**
**Specialized models with LLama2**
RFT (Yuan et al., 2023) LLama2 7B 45.3 90.5 50.9 39.8
WizardMath (Luo et al., 2023a) LLama2 7B 56.7 89.0 61.1 61.4
SFT LLama2 7B 46.0 90.0 51.0 51.0
**SFT w/ Ours†** LLama2 7B **52.1** **90.0** **60.3** **59.2**
MetaMath (Yu et al., 2024) LLama2 7B 65.0 96.7 75.0 72.4
**MetaMath w/ Ours†** LLama2 7B **68.0** **97.0** **79.3** **77.6**
Table 2: The accuracy of various LLMs on four mathematical datasets. † denotes that the performance improvements
over standard SFT and MetaMath are statistically significant with p < 0.05.
and is close to WizardMath, which applies reinforcement learning and additional augmented data.
Besides, after adopting our method on the MetaMath dataset, the general reasoning ability of the
model is further enhanced, which obtains an average improvement of 3.2% in testing accuracy.
**Overall results summary. The specialized mathe-**
matical datasets are important for models to enhance reasoning ability because all specialized
models get significant improvements compared
with their backbones. Moreover, applying our
approach to different settings, such as the different backbones, they all achieve significant improvements. Finally, the above experiment results
demonstrate that our method is beneficial to modeling the relationship of different CoT reasoning
steps and improving the general mathematical reasoning ability.
**6** **Analysis**
**6.1** **Evaluation of the Reversal Ability**
To further evaluate the effectiveness of our method
in improving the reversal reasoning ability, we con
**Models** **GSM8K-Rev** **MathQA-Rev**
GPT-3.5-Turbo 52.2 44.6
Flan-T5-3B 3.5 5.8
Flan-T5-11B 12.3 9.6
Flan-T5-3B (SFT) 8.2 10.0
**w /Ours** **17.4** **13.1**
LLama2-7B 7.0 10.3
LLama2-7B (RFT) 20.8 14.5
LLama2-7B (WizardMath) 25.9 24.1
LLama2-7B (SFT) 21.7 16.5
**w /Ours** **25.7** **23.5**
LLama2-7B (MetaMath) 50.0 37.0
**w /Ours** **55.7** **40.3**
Table 3: The accuracy of various LLMs on reversal
GSM8K and MathQA test datasets.
duct experiments on two reversal mathematical
datasets designed previously. We directly adopt
the models described in Section 5 to perform inference. As shown in Table 3, all specialized models
tuned on the mathematical dataset, get improvements compared with their backbones. Compared
with standard SFT, our method achieves obvious
improvements, even outperforming RFT and close
to WizardMath on LLama2-7B. It’s noticed that
13677
-----
|GSM8K SVAMP Models 60.6 GPT-3.5-Tu 56.6 T5-XL-3B 52.6 Accuracy(%) Table 4: The 48.6 methods on 44.6 with multip 40.6 bel, namely 0.2 0.4 0.6 0.8 rationale th Hyper-parameter α label with|Method QASC CQA rbo - 62.1 77.2 x2y 71.2 79.0 (x2y) + (x2r) 73.0 79.6 (x2y) + (x2(y + r)) 74.8 81.1 Ours 76.0 82.6|
|---|---|
||accuracy of T5-XL with different trainin QASC and CQA datasets. le choices, y is its corresponding la one of the correct choices, and r is at describes the knowledge of correc multi-step. We also leverage GPT-3.5|
Turbo-1106 to generate a related rationale r for
every sample. The specific prompt is shown in
Appendix A.3.
Following Hsieh et al. (2023) and Li et al.
(2022), we set the T5 model as the backbone, and
denote the training method [f (x) → _y] as x2y,_
[f (x) → _y] + [f_ (x) → _y + r] as (x2y) + (x2(y +_
_r)) (Li et al., 2022), [f_ (x) → _y] + [f_ (x) → _r] as_
(x2y) + (x2r) (Hsieh et al., 2023), where f is the
training model. The above training methods are
utilized as strong baselines. To sufficiently model
the relationships between y and every step in r,
we adopt the (x2(y + r) + (x2(y + r)[′]) method,
where (y + r)[′] denotes that employ our bidirectional modeling on (y + r). Table 4 illustrates
that r is beneficial to provide more knowledge to
improve the performance. Besides, our method
outperforms all the baselines, which demonstrates
the effectiveness and generalization of our method.
**7** **Conclusion**
In this paper, we discussed the “reversal curse” in
mathematical problems and proposed two reversal datasets based on the GSM8K and MathQA to
evaluate whether LLMs face challenges in reversal
mathematical reasoning. We analyzed the potential
reason and attributed it to the insufficient modeling
of the relationship between reasoning steps. Consequently, based on the characteristics of multi-step
deduction tasks, a bidirectional training objective
is designed to alleviate this problem. Finally, we
conducted experiments on four mathematical reasoning benchmarks to evaluate the effectiveness of
our method and also verify the benefit of our approach on two reversal datasets we constructed. In
the future, we will explore other enhanced methods
and explore how to apply them in the pre-training
stage to improve the general reasoning ability.
Figure 4: The effect of hyper-parameter α for LLama27B on GSM8K and SVAMP datasets.
MetaMath, using backward data augmentation, significantly improves the reversal reasoning ability,
which presents the importance of reversal augmented data. After training with our method on
the MetaMath dataset, LLama2-7B obtains higher
accuracy and outperforms the strong baseline GPT3.5-Turbo on GSM8K-Reversal. The above results
further evaluate that our method is able to model
the relationship between CoT reasoning steps better
and further improve the reversal reasoning ability.
**6.2** **Effect of Hyper-parameter α**
As described in Section 4.3, we introduce an additional training objective and combine it with the
causal objective. To explore the effect of Lbid
weight, we set the pre-defined hyper-parameter α
from 0.2 to 0.8, train LLama2-7B on the GSM8K
dataset, and evaluate it on the GSM8K and SVAMP
datasets. As shown in Figure 4, our method
achieves the best performance when α is 0.4. If α
is too large, Lbid could damage the original causal
paradigm and reduce the performance. On the contrary, once α becomes too small, the Lbid has no
effect on training and can’t provide effective context modeling.
**6.3** **Extensibility of Method**
To evaluate the extensibility of our method, we
conduct experiments on two commonsense reasoning datasets: CQA (Talmor et al., 2019) and
QASC (Khot et al., 2020), which are respectively
five-choice and eight-choice question-answering
datasets. We denote each instance in datasets
as (x, y, r), where x is a commonsense question
13678
-----
**8** **Limitation**
In this section, we present several of the limitations of this paper. Firstly, the process of reversal
data construction needs to be further refined, such
as utilizing GPT-3.5 or GPT-4 to generate higherquality data and verify their accuracy. Besides,
our method designs an additional training objective which could increase the cost of computational
resources. Finally, we have not yet applied our
method to larger-scale models, such as LLama213B and LLama2-70B, due to the limitation of computational resources. We will further explore the
performance of our method on these larger-scale
models.
**9** **Acknowledgements**
We want to thank all the anonymous reviewers for
their valuable comments. Juntao Li and Bowen
Yan are the corresponding authors. This work was
supported by the National Science Foundation of
China (NSFC No. 62206194), the Natural Science Foundation of Jiangsu Province, China (Grant
No. BK20220488), Young Elite Scientists Sponsorship Program by CAST (2023QNRC001), and
Supercomputing Center in Yancheng, Grant No.
20231001.
**References**
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik
Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. MathQA: Towards interpretable math
word problem solving with operation-based formalisms. In Proceedings of the 2019 Conference
of the North American Chapter of the Association
for Computational Linguistics: Human Language
Technologies, Volume 1 (Long and Short Papers),
pages 2357–2367. Association for Computational
Linguistics.
Shengnan An, Zexiong Ma, Zeqi Lin, Nanning Zheng,
Jian-Guang Lou, and Weizhu Chen. 2023. Learning
from mistakes makes llm better reasoner.
Lukas Berglund, Meg Tong, Max Kaufmann, Mikita
Balesni, Asa Cooper Stickland, Tomasz Korbak, and
Owain Evans. 2023. The reversal curse: Llms trained
on "a is b" fail to learn "b is a".
Tom Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, and et al. 2020. Language models are fewshot learners. In Advances in neural information
processing systems, pages 33:1877–1901.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan,
Henrique Ponde de Oliveira Pinto, Jared Kaplan,
Harri Edwards, Yuri Burda, Nicholas Joseph, Greg
Brockman, and et al. 2021a. Evaluating large language models trained on code.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan,
HenriquePondedeOliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg
Brockman, Alex Ray, Raul Puri, Gretchen Krueger,
M.N. Petrov, Heidy Khlaaf, Girish Sastry, Pamela
Mishkin, Brooke Chan, Scott Gray, Nick Ryder,
Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet,
FelipePetroski Such, Dave Cummings, Matthias
Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel
Herbert-Voss, WilliamH. Guss, Alex Nichol, Alex
Paino, Nikolas Tezak, Jie Tang, I. Babuschkin,
Suchir Balaji, Shantanu Jain, WilliamS. Saunders,
Christopher Hesse, AndrewN. Carr, Jan Leike,
Joshua Achiam, Vedant Misra, Evan Morikawa,
Alec Radford, MatthewM. Knight, Miles Brundage,
Mira Murati, Katie Mayer, Peter Welinder, Bob
McGrew, Dario Amodei, McCandlish Sam, Ilya
Sutskever, and Wojciech Zaremba. 2021b. Evaluating large language models trained on code. Preprint
arXiv:2107.03374.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts,
Paul Barham, HyungWon Chung, Charles Sutton,
Sebastian Gehrmann, Parker Schuh, Kensen Shi,
Sasha Tsvyashchenko, Joshua Maynez, Abhishek
Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin,
Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju
Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa
Dev, Henryk Michalewski, Xavier Garcia, Vedant
Misra, Kevin Robinson, and Liam Fe. 2022. Palm:
Scaling language modeling with pathways.
HyungWon Chung, Le Hou, Shayne Longpre, Barret
Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang,
Mostafa Dehghani, Siddhartha Brahma, Albert Webson, ShixiangShane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan
Narang, Gaurav Mishra, Adams Yu, Vincent Zhao,
Yanping Huang, Andrew Dai, Hongkun Yu, Slav
Petrov, EdH. Chi, Jeff Dean, Jacob Devlin, Adam
Roberts, Denny Zhou, QuocV. Le, and Jason Wei.
2022. Scaling instruction-finetuned language models.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
2021. Training verifiers to solve math word problems.
K. Collins, A. Jiang, S. Frieder, L. Wong, M. Zilka,
U. Bhatt, T. Lukasiewicz, Y. Wu, J. Tenenbaum,
W. Hart, T. Gowers, W. Li, A. Weller, and M. Jamnik.
13679
-----
2023. Evaluating language models for mathematics
through interactions.
Shizhe Diao, Pengcheng Wang, Yong Lin, and Tong
Zhang. 2023. Active prompting with chain-ofthought for large language models.
Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and
Tushar Khot. 2023a. Specializing smaller language
models towards multi-step reasoning. In Proceedings
of the 40th International Conference on Machine
Learning, pages 10421–10430.
Yao Fu, Hao Peng, Ashish Sabharwal, Peter
Clark, and Tushar Khot. 2023b. Complexitybased prompting for multi-step reasoning. In
Advances in International Conference on Learning
Representations.
Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and
Tushar Khot. 2023c. Complexity-based prompting
for multistep reasoning. In Advances in International
Conference on Learning Representations.
Roger Grosse, Juhan Bae, Cem Anil, Nelson Elhage,
Alex Tamkin, Amirhossein Tajdini, Benoit Steiner,
Dustin Li, Esin Durmus, Ethan Perez, Evan Hubinger,
Kamile Lukoši˙ ut¯ e, Karina Nguyen, Nicholas Joseph,˙
Sam McCandlish, Jared Kaplan, and Samuel R. Bowman. 2023. Studying large language model generalization with influence functions.
Chengcheng Han, Xiaowei Du, Che Zhang, Yixin Lian,
Xiang Li, Ming Gao, and Baoyuan Wang. 2023. DialCoT meets PPO: Decomposing and exploring reasoning paths in smaller language models. In Proceedings
of the 2023 Conference on Empirical Methods in
Natural Language Processing, pages 8055–8068. Association for Computational Linguistics.
D. Hendrycks, C. Burns, S. Kadavath, A. Arora,
S. Basart, E. Tang, D. Song, and J. Steinhardt. 2021.
Measuring mathematical problem solving with the
math dataset. In Advances in Neural Information
Processing Systems: Datasets and Benchmarks.
Namgyu Ho, Laura Schmid, and Se-Young Yun. 2023.
Large language models are reasoning teachers. In
Proceedings of the 61st Annual Meeting of the
Association for Computational Linguistics, volume
1: Long Papers, pages 14852–14882.
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego De, Las Casas, Lisa Hendricks, Johannes
Welbl, Aidan Clark, Tom Hennigan, Eric Noland,
Katie Millican, GeorgeVanDen Driessche, Bogdan
Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack Rae, Oriol Vinyals, and
Laurent Sifre. 2022. Training compute-optimal large
language models.
Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren
Etzioni, and Nate Kushman. 2014. Learning to
solve arithmetic word problems with verb categorization. In Proceedings of the 2014 Conference on
Empirical Methods in Natural Language Processing,
pages 523–533.
Cheng-Yu Hsieh, Chun-Liang Li, Chih-kuan Yeh,
Hootan Nakhost, Yasuhisa Fujii, Alex Ratner, Ranjay Krishna, Chen-Yu Lee, and Tomas Pfister. 2023.
Distilling step-by-step! outperforming larger language models with less training data and smaller
model sizes. In Findings of the Association for
Computational Linguistics: ACL 2023, pages 8003–
8017. Association for Computational Linguistics.
S. Imani, L. Du, and H. Shrivastava. 2023. Mathprompter: Mathematical reasoning using large language models.
Yan Junbing, Chengyu Wang, Taolin Zhang, Xiaofeng
He, Jun Huang, and Wei Zhang. 2023. From complex
to simple: Unraveling the cognitive tree for reasoning with small language models. In Findings of the
Association for Computational Linguistics: EMNLP
2023, pages 12413–12425. Association for Computational Linguistics.
Tushar Khot, Peter Clark, Michal Guerquin, Peter Jansen, and Ashish Sabharwal. 2020. Qasc:
A dataset for question answering via sentence
composition. In In The Thirty-Fourth AAAI
Conference on Artificial Intelligence, AAAI 2020,
The Thirty-Second Innovative Applications of
Artificial Intelligence Conference, IAAI 2020, The
Tenth AAAI Symposium on Educational Advances
in Artificial Intelligence, EAAI 2020, pages 8082–
8090.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization.
Philipp Koehn. 2004. Statistical significance tests for
machine translation evaluation. In Proceedings of
the 2004 conference on empirical methods in natural
language processing, pages 388–395.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large
language models are zero-shot reasoners. In
Advances in neural information processing systems,
volume 35, pages 22199–22213.
Chengpeng Li, Zheng Yuan, Guanting Dong, Keming
Lu, Jiancan Wu, Chuanqi Tan, Xiang Wang, and
Chang Zhou. 2023a. Query and response augmentation cannot help out-of-domain math reasoning generalization.
Shiyang Li, Jianshu Chen, Yelong Shen, Zhiyu Chen,
Xinlu Zhang, Zekun Li, Hong Wang, Jing Qian,
Baolin Peng, Yi Mao, Wenhu Chen, and Xifeng
Yan. 2022. Explanations from large language models
make small reasoners better.
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen,
Jian-Guang Lou, and Weizhu Chen. 2023b. Making
language models better reasoners with step-aware
verifier. In Proceedings of the 61st Annual Meeting
of the Association for Computational Linguistics
13680
-----
(Volume 1: Long Papers), pages 5315–5333. Association for Computational Linguistics.
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri
Edwards, Bowen Baker, Teddy Lee, Jan Leike, John
Schulman, Ilya Sutskever, and Karl Cobbe. 2023.
Let’s verify step by step.
Weize Liu, Guocong Li, Kai Zhang, Bang Du, Qiyuan
Chen, Xuming Hu, Hongxia Xu, Jintai Chen, and Jian
Wu. 2023. Mind’s mirror: Distilling self-evaluation
capability and comprehensive thinking from large
language models.
H. Luo, Q. Sun, C. Xu, P. Zhao, J. Lou, C. Tao, X. Geng,
Q. Lin, S. Chen, and D. Zhang. 2023a. Wizardmath:
Empowering mathematical reasoning for large language models via reinforced evol-instruct.
Z. Luo, C. Xu, P. Zhao, Q. Sun, X. Geng, W. Hu, C. Tao,
J. Ma, Q. Lin, and D. Jiang. 2023b. Wizardcoder:
Empowering code large language models with evolinstruct. Preprint arXiv:2306.08568.
Ang Lv, Kaiyi Zhang, Shufang Xie, Quan Tu, Yuhan
Chen, Ji-Rong Wen, and Rui Yan. 2023. Are we
falling in a middle-intelligence trap? an analysis and
mitigation of the reversal curse.
Jun-Yu Ma, Jia-Chen Gu, Zhen-Hua Ling, Quan Liu,
and Cong Liu. 2023. Untying the reversal curse via
bidirectional language model editing.
LucieCharlotte Magister, Jonathan Mallinson, Jakub
Adamek, Eric Malmi, and Aliaksei Severyn. 2022.
Teaching small language models to reason. In
Proceedings of the 61st Annual Meeting of the
Association for Computational Linguistics, volume
2: Short Papers, pages 1773–1781.
Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su.
2020. A diverse corpus for evaluating and developing
English math word problem solvers. In Proceedings
of the 58th Annual Meeting of the Association for
Computational Linguistics, pages 975–984. Association for Computational Linguistics.
Sharan Narang, Aakanksha Chowdhery, and Denny
Zhou. 2023. Self-consistency improves chain
of thought reasoning in language models. In
Advances in International Conference on Learning
Representations.
OpenAI. 2022. Gpt-3.5-turbo.
OpenAI. 2023. Gpt-4.
L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, K. Slama S. Agarwal,
A. Ray, J. Schulman, J. Hilton, F. Kelton, L. Miller,
M. Simens, A. Askell, P. Christiano P. Welinder,
J. Leike, and R. Lowe. 2022. Training language models to follow instructions with human feedback. In
Advances in neural information processing systems.
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
2021. Are nlp models really able to solve simple
math word problems? In Proceedings of the 2021
Conference of the North American Chapter of the
Association for Computational Linguistics: Human
Language Technologies.
Subhro Roy and Dan Roth. 2015. Solving general
arithmetic word problems. In Proceedings of the
2015 Conference on Empirical Methods in Natural
Language Processing, pages 1743–1752. Association
for Computational Linguistics.
Kumar Shridhar, Alessandro Stolfo, and Mrinmaya
Sachan. 2023. Distilling reasoning capabilities
into smaller language models. In Findings of the
Association for Computational Linguistics: ACL
2023, pages 7059–7073. Association for Computational Linguistics.
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and
Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense
knowledge. In Proceedings of the 2019 Conference
of the North American Chapter of the Association
for Computational Linguistics: Human Language
Technologies, Volume 1 (Long and Short Papers),
pages 4149–4158. Association for Computational
Linguistics.
R. Taori, I. Gulrajani, T. Zhang, Y. Dubois, X. Li,
C. Guestrin, P. Liang, and T. Hashimoto. 2023. Stanford alpaca: An instruction-following llama model.
H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, D. Bikel, L. Blecher, C. Ferrer,
M. Chen, G. Cucurull, D. Esiobu, J. Fernandes, J. Fu,
W. Fu, B. Fuller, C. Gao, V. Goswami, N. Goyal,
A. Hartshorn, S. Hosseini, R. Hou, H. Inan, M. Kardas, V. Kerkez, M. Khabsa, I. Kloumann, A. Korenev, P. Koura, M. Lachaux, T. Lavril, J. Lee,
D. Liskovich, Y. Lu, Y. Mao, X. Martinet, T. Mihaylov, P. Mishra, I. Molybog, Y. Nie, A. Poulton,
J. Reizenstein, R. Rungta, K. Saladi, A. Schelten,
R. Silva, E. Smith, R. Subramanian, X. Tan, B. Tang,
R. Taylor, A. Williams, J. Kuan, P. Xu, Z. Yan,
I. Zarov, Y. Zhang, A. Fan, M. Kambadur, S. Narang,
A. Rodriguez, R. Stojnic, S. Edunov, and T. Scialom.
2023. Llama 2: Open foundation and fine-tuned chat
models.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le,
Ed H Chi, Sharan Narang, Aakanksha Chowdhery,
and Denny Zhou. 2022. Self-consistency improves
chain of thought reasoning in language models. In
Advances in the eleventh International Conference
on Learning Representations.
Yue Wang, Hung Le, Akhilesh Gotmare, Nghi Bui,
Junnan Li, and Steven Hoi. 2023. CodeT5+:
Open code large language models for code understanding and generation. In Proceedings of the
2023 Conference on Empirical Methods in Natural
Language Processing, pages 1069–1088. Association
for Computational Linguistics.
13681
-----
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Fei Xia, Ed Chi, Quoc V Le, and Denny Zhou
et al. 2022. Chain-of-thought prompting elicits
reasoning in large language models. In Advances
in Neural Information Processing Systems, page
35:24824–24837.
Da Wu, Jingye Yang, and Kai Wang. 2023. Not all
large language models (llms) succumb to the "reversal curse": A comparative study of deductive logical
reasoning in bert and gpt models.
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu,
Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. 2024.
Metamath: Bootstrap your own mathematical questions for large language models. In Advances in
the twelfth International Conference on Learning
Representations.
Z. Yuan, H. Yuan, C. Li, G. Dong, C. Tan, and C. Zhou.
2023. Scaling relationship on learning mathematical
reasoning with large language models.
Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex
Smola. 2023. Automatic chain of thought
prompting in large language models. In
Advances in International Conference on Learning
Representations.
Chuanyang Zheng, Zhengying Liu, Enze Xie, Zhenguo
Li, and Yu Li. 2023a. Progressive-hint prompting
improves reasoning in large language models.
Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan
Wang, Yufei Xue, Lei Shen, Zihan Wang, Andi Wang,
Yang Li, Teng Su, Zhilin Yang, and Jie Tang. 2023b.
Codegeex: A pre-trained model for code generation
with multilingual benchmarking on humaneval-x. In
Proceedings of the 29th ACM SIGKDD Conference
on Knowledge Discovery and Data Mining, KDD
’23, page 5673–5684. Association for Computing
Machinery.
13682
-----
**A** **Appendix**
**A.1** **Experiment Details**
We adopt the AdamW (Kingma and Ba, 2014) optimizer to train all the models. Following Chung
et al. (2022) and Yuan et al. (2023), we respectively fine-tune Flan-T5 and LLama2 models for
50 and 3 epochs, and set batch size as 64 and 128,
learning rate 5e-5 and 1e-5. For LLama2 with different enhanced methods, we directly adopt the
checkpoints provided in huggingface to perform
inference. We run all the experiments on eight
NVIDIA A100-PCIE-40GB.
**A.2** **Prompts for CoT Reasoning Paths**
The specific prompts for GPT-3.5-Turbo-1106 to
obtain CoT reasoning paths are shown in Figure 5.
**A.3** **Prompts for QA Reasoning**
The specific prompts for GPT-3.5-Turbo-1106 to
obtain QA rationales are shown in Figure 6.
**A.4** **Case study**
To understand the effectiveness of our approach
better, we provide an example of LLama2-7B with
SFT and our approach on the GSM8K dataset. As
shown in Figure 7, SFT doesn’t sufficiently understand the relationships of different conditions in
the question, which causes the wrong reasoning
process. On the contrary, benefiting from the bidirectional modeling, our method understands the
context and infers the correct answer.
13683
-----
|Prompt: PleaPsreo fmolplotw: the exampPlerso mthpatt :m odify the question by adding the original question and answer as a new condYiotiuo na,r eo nal yh ehlipdfiunlg a onYnde op ucr eoacnrides ieat i aohsnesl itpshftaautnl ata,p nppdlee paarrseeecd fi sionel l taohswes i qtshtuaeen sett,xi opanmle, apmsleeus sf oat nlkldoe ewapn itsnhwge eoerxt htahemer pqclouenessd atiintoidon n awsn isutwhn cerhre aathnsoe n ged,g a pnrdo cmeasskiionng, tfhine ahlliydg ign pigvr oecc otenhsdastii to'iTonhn, e fa iasn naasl lwnyee gwri ivqse u' tewhsatiittoh 'n Tc.ha Felci anunalaslltwyin,e grp rieosx v'p iwrdeeist shtih ocena mlsc.uoldaitfiinegd eqxupersetisosnio annsd. the hi ding number, and follow the for-mat as: Modified question: \n Hiding number: Question: Paige waQs hueelsptiinogn :h Pera imgeo mw apsl ahnetl pflionwg ehresr amndo mto gpelathnet rf ltohweye rpsl aanntde dto sgoemthee rs etheedys. pTlahnetye dp usto 1m0 Exasmeepdlse i1n: each flowers beeedds. Iifn tehaecrhe aflroew 4e5r fbloewd.e Irfb tehdesr\en Haroew 4 5m falonwy eserbeedds sd\nidH tohwey m palannyt ?seeds did they pla OriAginnsawl qeru:e Tsthioeyn :p Iuft A1An0n ns seiwse d9es ry :ie nTa erhsae cyoh lp dfu laotn w1d0e h rs eberee bddrs. o Titnhh eeerra eci sha trfwel oi4cw5ee fhrl eobrwe adeg.r e Tb, hehdeorswe. S aorole dt 4hw5eyi lf llp ohlwaenre trbe rbdoe 1tdh0se .r* S b4oe5 t ihsnee yed ps 3 ye1a0rs *? 4(T5 hies 4an5s0w. Terh eis a12n01s )*w 4er5 iiss 445500.. The answer is 450. Mod ified question: An n's brother is twice as old as she is. In 3 years, her brother will be 27 years old. HowQ ouleds tiiso An:n nJa ncokw re?ceQivuedes 3ti oemn:a Jilasc ikn r tehcee iavfetedr n3o eomn,a i6l se imn atihlse ianf ttehren omoonr,n 6in egm aanidls sionm thee m moorren iinn gth aen edv seonmi Hid.i nIfg h neu rmecbeeivre: d9 a tota. lI fo hf e1 0re ecmeiavields ian ttohtea ld oafy 1\n0H eomwa imlsa inny t heem daailys\ ndHido jwac mk raencye ievme aiinl st hdeid e vjaecnki nrgec?eive i Answer: Jack receivAend s3w eemr:a Jilasc ikn r tehcee iavfetedr n3o eomn,a i6l se imn atihlse ianf ttehren omoonr,n 6in egm aanidls sionm thee m moorren iinn gth aen edv seonmine ExaImf hpele r e2c:eived a totalI fo hf e1 0re ecmeiavields ian ttohtea ld oafy 1, 0th eemn ahiel sr einc etihvee dd a1y0, t-h (e3n + h e6 )r e=c e1i veemda 1il0s i-n ( 3th +e e6v) e=n i1n egm. Tahil Oriagninswale qr uise s(t1i0o n- :( 3M +o ra6ins))es.wttee ra nisd ( K10a e-l ( w3 e+r e6 )a)s.ked to bring fruits. Morisette brought 5 apples and 8 oran ges, while Kael brought twice the amount of apples and half the number of oranges than Morisette. How manQy ufreusittiso nd:o Athte tyh eh aavrceQa iudnee t sDottiaaovln?e: (hATahtd et hw aeon ansrw c1ae4dr teiis cD k2ae7vt)se ahnadd lwosot n2 1t4ic ktiectkse. tIsf ahned u lsoesdt 21 0ti tcok ebtusy. Isfo hme eu tsoeyds 1\n0H too Mod mifaiendy qtiuckesettiso dni:d M Doa rmviesae hnttayev taeicn lkdeef Ktt?sa deild w Dearev ea shkaevde tloe fbt?ring fruits. Morisette brings 5 apples and some or angeAsn, sawnde rK: aDeal vberi sntgasr tteAwdni wcsewi ttehhr e1: 4aD ptaipcvlkeees s tatsan. rdHte ehd a lwlofs ittth h2e 1 toi4cr aktinecgtksee. stS st.oh H ahtee M lhoaosdrti s21e 4tti t-ce k 2be r=tisn .1 gS2so,t ithcheke eyht ash.da H v1ee4 2u- 7s2e fd=r u 11i0t2s ttiiincck keett totatlo. Hbouwy tmoyasn. yS oor hane gheatsod d b1ou2ey s- tM1o0yo s=r.i sS2eo ttt ihec ekb ehrtiasn dgl e?1f2t. -T 1h0e =an 2s wtiecrk eists ( (le1f4t .- T 2h)e - a1n0s)w. er is ((14 - 2) - 10). Hid ing number: 8 Question: {Q} Question: {Q} Ori ginal question: {Q}|Col2|Col3|Col4|
|---|---|---|---|
||||qiunestion with reasonin e seeds. They put 10 nt? l.a nted 10 * 45 seeds. neg more in the evening n the evening? g .m ore in the evening. es in the evening. The wbuy some toys\nHow ss . He used 10 tickets|
||eaPsreo fmolplotw: the exampPlerso mthpatt :m odify the question by adding the original question and answer as a new ndYiotiuo na,r eo nal yh ehlipdfiunlg a onYnde op ucr eoacnrides ieat i aohsnesl itpshftaautnl ata,p nppdlee paarrseeecd fi sionel l taohswes i qtshtuaeen sett,xi opanmle, apmsleeus sf oat nlkldoe ewapn itsnhwge eoerxt htahemer pqclouenessd atiintoidon n awsn isutwhn cerhre aathnsoe n d,g a pnrdo cmeasskiionng, tfhine ahlliydg ign pigvr oecc otenhsdastii to'iTonhn, e fa iasn naasl lwnyee gwri ivqse u' tewhsatiittoh 'n Tc.ha Felci anunalaslltwyin,e grp rieosx v'p iwrdeeist shtih ocena mlsc.uoldaitfiinegd eqxupersetisosnio annsd. the hi g number, and follow the for-mat as: Modified question: \n Hiding number: Question: Paige waQs hueelsptiinogn :h Pera imgeo mw apsl ahnetl pflionwg ehresr amndo mto gpelathnet rf ltohweye rpsl aanntde dto sgoemthee rs etheedys. pTlahnetye dp usto 1m0 asmeepdlse i1n: each flowers beeedds. Iifn tehaecrhe aflroew 4e5r fbloewd.e Irfb tehdesr\en Haroew 4 5m falonwy eserbeedds sd\nidH tohwey m palannyt ?seeds did they pla iAginnsawl qeru:e Tsthioeyn :p Iuft A1An0n ns seiwse d9es ry :ie nTa erhsae cyoh lp dfu laotn w1d0e h rs eberee bddrs. o Titnhh eeerra eci sha trfwel oi4cw5ee fhrl eobrwe adeg.r e Tb, hehdeorswe. S aorole dt 4hw5eyi lf llp ohlwaenre trbe rbdoe 1tdh0se .r* S b4oe5 t ihsnee yed ps e1a0rs *? 4(T5 hies 4an5s0w. Terh eis a12n01s )*w 4er5 iiss 445500.. The answer is 450. od ified question: An n's brother is twice as old as she is. In 3 years, her brother will be 27 years old. wQ ouleds tiiso An:n nJa ncokw re?ceQivuedes 3ti oemn:a Jilasc ikn r tehcee iavfetedr n3o eomn,a i6l se imn atihlse ianf ttehren omoonr,n 6in egm aanidls sionm thee m moorren iinn gth aen edv seonmi d.i nIfg h neu rmecbeeivre: d9 a tota. lI fo hf e1 0re ecmeiavields ian ttohtea ld oafy 1\n0H eomwa imlsa inny t heem daailys\ ndHido jwac mk raencye ievme aiinl st hdeid e vjaecnki nrgec?eive i Answer: Jack receivAend s3w eemr:a Jilasc ikn r tehcee iavfetedr n3o eomn,a i6l se imn atihlse ianf ttehren omoonr,n 6in egm aanidls sionm thee m moorren iinn gth aen edv seonmine aImf hpele r e2c:eived a totalI fo hf e1 0re ecmeiavields ian ttohtea ld oafy 1, 0th eemn ahiel sr einc etihvee dd a1y0, t-h (e3n + h e6 )r e=c e1i veemda 1il0s i-n ( 3th +e e6v) e=n i1n egm. Tahil iagninswale qr uise s(t1i0o n- :( 3M +o ra6ins))es.wttee ra nisd ( K10a e-l ( w3 e+r e6 )a)s.ked to bring fruits. Morisette brought 5 apples and 8 oran s, while Kael brought twice the amount of apples and half the number of oranges than Morisette. How nQy ufreusittiso nd:o Athte tyh eh aavrceQa iudnee t sDottiaaovln?e: (hATahtd et hw aeon ansrw c1ae4dr teiis cD k2ae7vt)se ahnadd lwosot n2 1t4ic ktiectkse. tIsf ahned u lsoesdt 21 0ti tcok ebtusy. Isfo hme eu tsoeyds 1\n0H too od mifaiendy qtiuckesettiso dni:d M Doa rmviesae hnttayev taeicn lkdeef Ktt?sa deild w Dearev ea shkaevde tloe fbt?ring fruits. Morisette brings 5 apples and some or geAsn, sawnde rK: aDeal vberi sntgasr tteAwdni wcsewi ttehhr e1: 4aD ptaipcvlkeees s tatsan. rdHte ehd a lwlofs ittth h2e 1 toi4cr aktinecgtksee. stS st.oh H ahtee M lhoaosdrti s21e 4tti t-ce k 2be r=tisn .1 gS2so,t ithcheke eyht ash.da H v1ee4 2u- 7s2e fd=r u 11i0t2s ttiiincck keett atlo. Hbouwy tmoyasn. yS oor hane gheatsod d b1ou2ey s- tM1o0yo s=r.i sS2eo ttt ihec ekb ehrtiasn dgl e?1f2t. -T 1h0e =an 2s wtiecrk eists ( (le1f4t .- T 2h)e - a1n0s)w. er is ((14 - 2) - 10). d ing number: 8 Question: {Q} Question: {Q} i ginal question: {Q}|pPlerso mthpatt :m odify the question by adding the original question and answer as a new onYnde op ucr eoacnrides ieat i aohsnesl itpshftaautnl ata,p nppdlee paarrseeecd fi sionel l taohswes i qtshtuaeen sett,xi opanmle, apmsleeus sf oat nlkldoe ewapn itsnhwge eoerxt htahemer pqclouenessd atiintoidon n awsn isutwhn cerhre aathnsoe n liydg ign pigvr oecc otenhsdastii to'iTonhn, e fa iasn naasl lwnyee gwri ivqse u' tewhsatiittoh 'n Tc.ha Felci anunalaslltwyin,e grp rieosx v'p iwrdeeist shtih ocena mlsc.uoldaitfiinegd eqxupersetisosnio annsd. the hi w the for-mat as: Modified question: \n Hiding number: aQs hueelsptiinogn :h Pera imgeo mw apsl ahnetl pflionwg ehresr amndo mto gpelathnet rf ltohweye rpsl aanntde dto sgoemthee rs etheedys. pTlahnetye dp usto 1m0 ers beeedds. Iifn tehaecrhe aflroew 4e5r fbloewd.e Irfb tehdesr\en Haroew 4 5m falonwy eserbeedds sd\nidH tohwey m palannyt ?seeds did they pla A1An0n ns seiwse d9es ry :ie nTa erhsae cyoh lp dfu laotn w1d0e h rs eberee bddrs. o Titnhh eeerra eci sha trfwel oi4cw5ee fhrl eobrwe adeg.r e Tb, hehdeorswe. S aorole dt 4hw5eyi lf llp ohlwaenre trbe rbdoe 1tdh0se .r* S b4oe5 t ihsnee yed ps es a12n01s )*w 4er5 iiss 445500.. The answer is 450. n n's brother is twice as old as she is. In 3 years, her brother will be 27 years old. eQivuedes 3ti oemn:a Jilasc ikn r tehcee iavfetedr n3o eomn,a i6l se imn atihlse ianf ttehren omoonr,n 6in egm aanidls sionm thee m moorren iinn gth aen edv seonmi ota. lI fo hf e1 0re ecmeiavields ian ttohtea ld oafy 1\n0H eomwa imlsa inny t heem daailys\ ndHido jwac mk raencye ievme aiinl st hdeid e vjaecnki nrgec?eive i ivAend s3w eemr:a Jilasc ikn r tehcee iavfetedr n3o eomn,a i6l se imn atihlse ianf ttehren omoonr,n 6in egm aanidls sionm thee m moorren iinn gth aen edv seonmine alI fo hf e1 0re ecmeiavields ian ttohtea ld oafy 1, 0th eemn ahiel sr einc etihvee dd a1y0, t-h (e3n + h e6 )r e=c e1i veemda 1il0s i-n ( 3th +e e6v) e=n i1n egm. Tahil +o ra6ins))es.wttee ra nisd ( K10a e-l ( w3 e+r e6 )a)s.ked to bring fruits. Morisette brought 5 apples and 8 oran ht twice the amount of apples and half the number of oranges than Morisette. How vrceQa iudnee t sDottiaaovln?e: (hATahtd et hw aeon ansrw c1ae4dr teiis cD k2ae7vt)se ahnadd lwosot n2 1t4ic ktiectkse. tIsf ahned u lsoesdt 21 0ti tcok ebtusy. Isfo hme eu tsoeyds 1\n0H too Doa rmviesae hnttayev taeicn lkdeef Ktt?sa deild w Dearev ea shkaevde tloe fbt?ring fruits. Morisette brings 5 apples and some or r tteAwdni wcsewi ttehhr e1: 4aD ptaipcvlkeees s tatsan. rdHte ehd a lwlofs ittth h2e 1 toi4cr aktinecgtksee. stS st.oh H ahtee M lhoaosdrti s21e 4tti t-ce k 2be r=tisn .1 gS2so,t ithcheke eyht ash.da H v1ee4 2u- 7s2e fd=r u 11i0t2s ttiiincck keett gheatsod d b1ou2ey s- tM1o0yo s=r.i sS2eo ttt ihec ekb ehrtiasn dgl e?1f2t. -T 1h0e =an 2s wtiecrk eists ( (le1f4t .- T 2h)e - a1n0s)w. er is ((14 - 2) - 10). Question: {Q} }|qiune e se nt? l.a n neg m n th g .m o es in wbu ss .|
|||||
**Prompt:**
**Prompt:** **Prompt:**
**Question:** **Question:**
**Example 1:**
**Original question:Answer:** **Answer:**
3 years? (The answer is 21)10 * 45 is 450. The answer is 450.
**Modified question:**
How old is Ann now?Question: **Question:**
**Hiding number: 9**
Figure 5: The prompt for obtaining CoT reasoning steps with a conclusion that “The answer is”.
**Prompt:**
You are a helpful and precise assistant, following the examples and just explaining the reason given the
question and answer.
**Question: How do you reduce pollution\nAnswer choices:\n(a) igniting fuel and oxidiser\n(b) transporta**
tion technology\n(c) wasting\n(d) not recycling\n(e) burning fossil fuels\n(f) converting electricity to he
at\n(g) water conservation\n(h) using less resources\nThe answer is using less resources. Why?
**Answer: Conserving resources has a positive impact on the environment. Use of resources affects the en**
vironment such as pollution.
**Question: What will move to another area if their habitat will no longer support them?**
\nAnswer choices:\n(a) density\n(b) Birds\n(c) squids\n(d) humans\n(e) clouds\n(f) gravity\n(g) cows\n(
h) Whales\nThe answer is cows. Why?
**Answer: If a habitat can no longer support animals then those animals will move to another area. Cows**
are social animals.
**Question: With the exception of allergies, what may cause a person to seek medical attention?**
\nAnswer choices:\n(a) Contact with latex\n(b) a tree falling\n(c) Organs within the body.\n(d) Contact
with baby chicks\n(e) prolactin release\n(f) Contact with peanut butter\n(g) hypothyroidism\n(h) Contac
t with microorganisms\nThe answer is Contact with microorganisms. Why?
**Answer: Microorganisms can cause infections. Infections usually require medical treatment.**
**Question: {Q}\nThe answer is {Answer}. Why?**
**Answer:**
Figure 6: The prompt for obtaining the commonsense reasoning steps based on the correct choice.
13684
-----
**Question:**
Below is an instruction that describes a task. Write a response that appropriately completes the request.
Instruction: Carlos, Jim and Carrey were at the beach playing and they decided to gather some seashells.
Jim collected 27 seashells, which was 5 more than what Carlos collected. Carlos collected twice as many a
s Carrey. They gathered all their seashells and divided them equally between themselves. How many did e
ach person get? (The correct answer is 20)
**SFT Answer:**
First, we know that Carlos collected twice as many seashells as Carrey. So, if Carlos collected 27 seashells
, then Carrey collected 27 / 2 = 13.5 seashells. Next, we know that Jim collected 5 more seashells than Carl
os, so he collected 27 + 5 = 32 seashells. Now, we have the total number of seashells collected by all of the
m: 27 (Carlos) + 13.5 (Carrey) + 32 (Jim) = 72 seashells. Finally, we divide the total number of seashells
equally between themselves: 72 / 3 = 24 seashells for each person. The answer is 24.
**Ours Answer:**
We know that Jim collected 27 seashells, which was 5 more than what Carlos collected. So, Carlos collecte
d 27 -5 = 22 seashells. Carlos collected twice as many as Carrey, so Carrey collected 22 / 2 = 11 seashells.
When they gathered all their seashells, they had a total of 27 + 22 + 11 = 60 seashells. If they divided them
equally between themselves, each person would get 60 / 3 = 20 seashells. So, each person got 20 seashells
. The answer is (60 / 3).
Figure 7: A case study about the reasoning procession on the GSM8K dataset.
13685
-----
| [
"Pei, Guo",
"Vivek, Srikumar",
"WangJie, You",
"Juntao, Li",
"Yan, Bowen",
"Min, Zhang",
"Lun-Wei, Ku",
"Andre, Martins"
] | 2024-08-01T00:00:00 | ACL 2024 Findings | false | 0 | 0 | null | https://aclanthology.org/2024.findings-acl.811 | null | https://www.semanticscholar.org/paper/69cc2b45a330e3055e6c03c5a9d43729884eca70 |
Exposing the Achilles' Heel: Evaluating LLMs Ability to Handle Mistakes in Mathematical Reasoning | Large Language Models (LLMs) have been applied to Math Word Problems (MWPs) with transformative impacts, revolutionizing how these complex problems are approached and solved in various domains including educational settings. However, the evaluation of these models often prioritizes final accuracy, overlooking the crucial aspect of reasoning capabilities. This work addresses this gap by focusing on the ability of LLMs to detect and correct reasoning mistakes. We introduce a novel dataset MWP-MISTAKE, incorporating MWPs with both correct and incorrect reasoning steps generated through rule-based methods and smaller language models. Our comprehensive benchmarking reveals significant insights into the strengths and weaknesses of state-of-the-art models, such as GPT-4o, GPT-4, GPT-3.5Turbo, and others. We highlight GPT-$o's superior performance in mistake detection and rectification and the persistent challenges faced by smaller models. Additionally, we identify issues related to data contamination and memorization, impacting the reliability of LLMs in real-world applications. Our findings emphasize the importance of rigorous evaluation of reasoning processes and propose future directions to enhance the generalization and robustness of LLMs in mathematical problem-solving. | This work introduces a novel dataset MWP-MISTAKE, incorporating MWPs with both correct and incorrect reasoning steps generated through rule-based methods and smaller language models, and highlights GPT-$o's superior performance in mistake detection and rectification and the persistent challenges faced by smaller models. | ## Exposing the Achilles’ Heel: Evaluating LLMs Ability to Handle Mistakes in Mathematical Reasoning
**Joykirat Singh** **Akshay Nambi** **Vibhav Vineet**
Microsoft Research
```
{akshayn, vivineet}@microsoft.com
```
**Abstract**
Large Language Models (LLMs) have been applied to Math Word Problems
(MWPs) with transformative impacts, revolutionizing how these complex problems
are approached and solved in various domains including educational settings. However, the evaluation of these models often prioritizes final accuracy, overlooking the
crucial aspect of reasoning capabilities. This work addresses this gap by focusing
on the ability of LLMs to detect and correct reasoning mistakes. We introduce a
novel dataset MWP-MISTAKE, incorporating MWPs with both correct and incorrect
reasoning steps generated through rule-based methods and smaller language models. Our comprehensive benchmarking reveals significant insights into the strengths
and weaknesses of state-of-the-art models, such as GPT-4o, GPT-4, GPT-3.5Turbo,
and others. We highlight GPT-4o’s superior performance in mistake detection and
rectification and the persistent challenges faced by smaller models. Additionally,
we identify issues related to data contamination and memorization, impacting
the reliability of LLMs in real-world applications. Our findings emphasize the
importance of rigorous evaluation of reasoning processes and propose future directions to enhance the generalization and robustness of LLMs in mathematical
problem-solving.
**1** **Introduction**
Large Language Models (LLMs) have transformed artificial intelligence applications across diverse
domains, including healthcare, agriculture, and education [3, 1]. Their remarkable capabilities in
natural language understanding, question answering, and mathematical problem-have shown potential
to revolutionize various human endeavors [21]. Recent advancements have fueled extensive research
into applying LLMs to interpret and solve a wide array of mathematical tasks, from basic arithmetic
to complex algebraic equations and calculus problems [16, 38].
Math Word Problems (MWPs) convey mathematical concepts and calculations through written
descriptions, typically involving narrative scenarios [28]. Solvers must extract relevant mathematical
information from these narratives and apply appropriate principles to arrive at solutions. Studies [34,
15, 11] have demonstrated that LLMs are proficient at understanding the contextual subtleties
of MWPs, translating textual descriptions into mathematical expressions, and delivering precise
solutions. Central to this process is mathematical reasoning, which enables models to adeptly manage
complex, multi-step problems, draw logical inferences, and provide accurate solutions.
Despite achieving remarkable accuracy rates exceeding 90% on datasets like GSM-8K (Grade
School Math dataset with linguistically diverse word problems) [9], foundational LLMs such as
Claude-3-Opus [2], Gemini Ultra [29], and OpenAI GPT-4 [25] reveal a significant gap in our
understanding of their capabilities in mathematical reasoning [11]. Current research predominantly
focuses on evaluating the final accuracy of MWPs [23, 35], neglecting the intricate reasoning
processes necessary to derive solutions. We argue that the reasoning steps play a pivotal role, and
Preprint. Under review.
-----
it is imperative to assess them to comprehensively analyze the foundational capabilities of these
models. This necessity is further underscored by the increasing utilization of LLMs in domains such
as education [13], where they serve as personalized tutors for students, aiding in teaching concepts
and solving mathematical problems. Simply deriving the final answer is insufficient; the ability
to guide students through correct steps, identify errors in their reasoning, and provide corrective
guidance is paramount for such applications.
Model Input GPT-4o
You are provided with a mathematical question and a step-by-step solution along with it.
The solution might have some mistakes. Identify if the solution is correct or incorrect. If Open AI Models **Reasoning chain Correct (Yes/No): No**
the solution is correct, output the final answer with the help of the solution provided. If the **Corrected reasoning chain or NA:**
solution is incorrect, correct the existing solution and determine the final answer with the The midpoint of a line segment is located at the point (h,k), where h = (x1 + x2 )/ 2 and k = (y1
help of the corrected solution. + y2)/2. In this case, the endpoints of the line segment are (1,1) and (-7,5).
So, we can find the midpoint by calculating:
**Question:** h = (1 + (-7))/2 = -6/2 = -3
What is the product of the coordinates of the midpoint of a line segment with endpoints at k = (1 + 5)/2 = 6/2 = 3
(1,1) and (-7,5)? Therefore, the product of the coordinates of the midpoint is: hk = -3 * 3 = -9
**Reasoning Step:** **Final answer (just the number): -9**
The midpoint of a line segment is located at the point (h,k), where h = (x1 + x2)/2 and k = GPT 3.5 Turbo
(y1 + y2)/2. In this case, the endpoints of the line segment are (1,1) and (-7,5). So, we can
find the midpoint by calculating h = (1 + (-7))/2 = 6 and k = (1 + 5)/2 = 3.Therefore, the product of the coordinates of the midpoint is hk = 6 * 3 = 18. **Reasoning chain Correct (Yes/No): Yes**
**Correct Reasoning Chain or NA: NA**
**Final answer (just the number): 18**
Figure 1: Model is prompted with a question along with incorrect reasoning steps to detect any
mistake and correct the reasoning step to get to the correct final answer. GPT-4o generates the correct
output, while GPT-3.5Turbo fails to identify any mistake in the reasoning step. (Task - T1)
This paper aims to bridge this gap by providing a comprehensive benchmark and evaluation of LLMs’
performance on math word problems, including their capabilities in mistake detection and correction
within the reasoning steps (Figure 1). Analyzing LLMs’ ability to detect and rectify errors along the
reasoning steps yields valuable insights into their overall problem-solving capabilities. Our objectives
are threefold: firstly, to comprehensively evaluate LLMs’ capabilities in mathematical reasoning,
with a particular emphasis on mistake detection and correction; secondly, to identify the specific
strengths and weaknesses of these models in handling various types of mathematical challenges; and
thirdly, to propose potential directions for enhancing LLM capabilities in this domain.
To achieve this comprehensive evaluation, we have developed our own mistake dataset, designed to
include errors in the reasoning steps. This dataset allows the assessment of models’ proficiency not
only in providing correct solutions but also in detecting and correcting mistakes within the reasoning
steps. We evaluate eight different models including both large and smaller language models on our
curated dataset MWP-MISTAKE.
Our analysis reveals several key insights into the performance of LLMs on MWPs. Firstly, detecting
mistakes, even trivial ones remains a significant challenge for these models. Secondly, LLMs often
derive correct answers despite this difficulty in mistake detection. This can be attributed to data
memorization and potential contamination in training datasets, where models may have encountered
similar/same problems before. However, the ability to recover from or correct errors in the reasoning
process is generally poor across most models. Our contributions to this paper are as follows:
1. We collect and release to the research community MWP-MISTAKE, a dataset containing MWPs
with both correct and incorrect reasoning obtained from state-of-the-art MWP datasets such as
GSM-8K [10], MATH [16], MATHBENCH [20], and JEEBENCH [6]. Incorrect reasoning is
derived through meticulously crafted rules to alter the reasoning steps and using smaller models,
leveraging their inherent limitations in solving MWPs.
2. We provide benchmark results for our dataset to evaluate the reasoning capabilities of state-ofthe-art LLMs such as GPT-4o [1], GPT-4 [25], GPT-3.5Turbo [4], Claude [2], as well as smaller
language models like Llama [30], Phi [5], and Mixtral [18]. Our analysis demonstrates that most
state-of-the-art LLMs, excluding GPT-4o, struggle with mistake detection and correction.
3. Through meticulous evaluation and comparison of different LLMs, we offer a detailed analysis of
their strengths and weaknesses in handling mathematical reasoning tasks.
**2** **MWP-Mistake Dataset**
Most MWP datasets include a math problem and the final answer, with some optionally providing
reasoning steps (i.e., steps to solve the math problem) (See Figure. 2). Our objective in this work is to
evaluate the LLMs’ ability to detect and rectify errors to derive the correct final answer. However,
-----
no existing datasets include incorrect reasoning steps for MWPs. To address this, we curated our
own dataset, MWP-MISTAKE, by leveraging state-of-the-art MWP datasets such as GSM-8K [10],
MATH [16], MATHBENCH [20], and JEEBENCH [6]. MATHBENCH and JEEBENCH are relatively newer datasets as compared to GSM-8K and MATH (Additional details in Appendix 8).
Question : The digits 1, 3 and 5 are each used once to form each of the possible three-digit positive integers.
The three-digit integers are listed from greatest to least. Which integer is listed fifth?
Answer : 153
There are 3!=6 possible three-digit integers. So The three-digit integers using the digits 1, 3, and There are 135!=1 possible three-digit integers.
the fifth number on the list will be the second 5 are 513, 531, 315, 351, 135, 153. Notice that So the fifth number on the list will be the second
smallest. The two smallest integers have 1 as the there are 6 such numbers in total. We need to find smallest. The two smallest integers have 153 as
hundred digit. The smallest is 135; the second- the fifth number in the given order: 531, 351, 513, the hundreds digit. The smallest is 6; the second
smallest is 153. 153, 135. smallest is
Final Answer : 153 Final Answer : 135 Final Answer : 3
Correct Reasoning Step Incorrect reasoning step (smaller model) Incorrect reasoning step (rule)
Figure 2: Examples of MWPs with correct reasoning, rule-based incorrect and smaller model based
incorrect reasoning from MATH.
Each dataset contains an MWP question and a final solution. While GSM-8K and MATH have
ground truth reasoning steps, MATHBENCH and JEEBENCH do not. For these datasets, we used
GPT-4 to curate chain-of-thought reasoning steps. Thus, for all four datasets, we have an MWP
question, a final answer, and associated correct reasoning steps. Also, note that in GSM-8K and
MATH, the reasoning steps might include the final answer, however in our COT-generated steps, we
ensure the answer is not present in the reasoning steps (Appendix 8 for additional details).
To create incorrect reasoning steps, we follow two approaches: (i) meticulously crafted rules, and (ii)
using smaller models as bad reasoners, which we describe next.
**2.1** **Meticulously Crafted Rules to Programmatically Inject Errors**
Given our focus on MWPs and based on extensive interactions with math teachers, the rules are
derived from common mistakes observed in educational settings, ensuring the errors introduced are
realistic and representative of actual student errors.
1. Shuffle reasoning steps: The reasoning steps are shuffled to introduce ambiguity in the thought
process. This tests whether the model can identify changes in reasoning order.
2. Delete reasoning steps: One reasoning step is deleted in solutions that have two or more steps.
This helps to identify if the model can spot omissions in the reasoning process.
3. Shuffle numerical values: Numerical values are shuffled among themselves to verify if models
can correctly understand the question and select appropriate numerical values from the question.
4. Replace numerical values: Numerical values are replaced with random numbers ranging from 0
to 100. It identifies if the model can correctly pick the numerical values present in the question.
5. Shuffle operations: We randomly swap operators with other operators to test the model’s ability
to perform numerical operations.
6. Insert random reasoning steps: A random reasoning step is added at a random position to test
the model’s ability to identify incorrect reasoning.
These rules mimic real-world student behavior by reflecting tendencies to get the order of steps wrong,
skip steps, misinterpret numerical values, use incorrect numbers, apply the wrong mathematical
operations, and add irrelevant steps in problem-solving. While rules #1 and #2 do not introduce
explicit errors in reasoning, they are considered mistakes in our dataset to prompt the model to
identify scenarios lacking clarity. Such scenarios, whether due to an incorrect thought process or
missing steps, are common in real-life situations. Table 1 shows the number of questions selected
from each of the four datasets to which these six rules are applied to curate incorrect reasoning.
Thus, for every question selected, we created seven variations of reasoning steps(one correct + six
incorrect).
**2.2** **Smaller Models as Bad Reasoners**
-----
Recently, numerous small Table 1: MWP-MISTAKE Dataset details with the total number of questions
language models (SLMs) and reasoning steps.
have been developed
merous tasks, including
|Dataset GSM-8K MATH MATHBENCH JEEBENCH|Default reasoning|Col3|Smaller model reasoning|Col5|
|---|---|---|---|---|
||# Questions with correct reason (GT) 93 150 100 38|# Questions with incorrect reason (Rules) 558 900 600 228|# Questions with incorrect reasoning|Total|
||||Llama-2-7b-chat Mixtral-8x7B Phi-3-mini 100 100 100 150 150 150 100 100 100 12 19 35|951 1500 1000 332|
Default reasoning Smaller model reasoning
Dataset # Questions
# Questions # Questions Total
with incorrect reasoning
with correct with incorrect
Llama-2-7b-chat Mixtral-8x7B Phi-3-mini
reason (GT) reason (Rules)
GSM-8K 93 558 100 100 100 951
MATH 150 900 150 150 150 1500
MATHBENCH 100 600 100 100 100 1000
JEEBENCH 38 228 12 19 35 332
MWPs. However, they still lack several capabilities, including advanced mathematical reasoning,
resulting in poorer performance on MWPs.
To curate incorrect reasoning steps, we use SLMs to generate Chain-of-Thought (COT) reasoning and
final answers for all dataset questions. We filter out questions with incorrect final answers (comparing
with the ground truth final answer from the dataset), assuming incorrect answers stem from incorrect
reasoning. Thus, the reasoning steps for all incorrect answers are used as incorrect reasoning
steps. We employ state-of-the-art SLMs, such as Llama-2-7b-chat, Phi-3-mini, and Mixtral-8x7B, to
generate COT reasoning steps without a final answer (Appendix 9 for additional details). Table 1
shows dataset stats for each of the three models across all datasets.
Thus, our dataset includes questions with original correct reasoning steps, rule-based incorrect
reasoning, and smaller model (SLM) generated incorrect reasoning. For detailed evaluation, we split
this data into two parts: (1) Default: containing questions with correct reasoning from the dataset
and rule-based incorrect reasoning, and (2) SLM reason: containing questions with incorrect steps
generated by SLMs. Table 1 provides the complete details of the curated MWP-MISTAKE dataset with
the above two splits. We are releasing this dataset for further evaluation and benchmarking.
**3** **Experimental Setup**
**Task Details. Our aim is to assess the performance of LLMs on MWPs, focusing on their ability to**
detect and correct mistakes within the reasoning steps. We have two task variants to accomplish this:
**Task-1 (T1): Here, given a question and its reasoning steps, we ask the model to identify if the steps**
are correct or incorrect. If incorrect, the model must rectify the mistake and calculate the final answer.
The final answer or corrected reasoning step can be either correct or incorrect (Figure 1).
**Task-2 (T2): In this scenario, the model only needs to identify whether the reasoning steps provided**
are correct or incorrect and provide the final answer. No correction of reasoning steps is required.
In essence, T1 evaluates the model’s ability to detect mistakes, rectify them, and derive the correct
answer, while T2 focuses solely on detecting mistakes and solving MWP correctly. Both tasks operate
under few-shot settings, with specific prompt details provided in Appendix 10.
**Models. To evaluate LLMs’ mathematical reasoning capabilities, we utilize state-of-the-art LLMs**
and Small Language Models (SLMs).
**LLMs: We utilize LLMs that have shown tremendous performance in MWPs such as GPT-4o, GPT-4,**
GPT-3.5Turbo, Claude-3-Opus. These models are accessed through their respective APIs.
**SLMs. Additionally, we assess SLMs trained with high-quality data and reasoning capabilities and ex-**
plored three popular SLMs with diverse capabilities: Phi-3-mini, Mixtral-8x7B, and Llama-2-7b-chat.
Appendix 12 provides the details of the models, including their last training date.
**4** **Results and Analysis**
We rigorously evaluate various SOTA LLMs and SLMs on our MWP-MISTAKE dataset to analyze their
mathematical reasoning capabilities, focusing on mistake detection and recovery.
-----
Table 2: Mistake Detection Performance (F1 score) on MWP-MISTAKE dataset for Task T1. (D-Default
reasoning steps, SM-Smaller model reasoning steps) (Bold: Best, Underline:Second best)
|Col1|GSM-8K|MATH|MATHBENCH|JEEBENCH|Average|
|---|---|---|---|---|---|
|Model|D SM|D SM|D SM|D SM|D SM Overall|
|GPT-4o GPT-4 GPT-3.5Turbo Llama-2-7b-chat Mixtral-8x7B Phi-3-mini Claude-3-Opus|0.85 0.84 0.72 0.68 0.80 0.69 0.07 NA 0.73 NA 0.70 NA 0.79 0.87|0.83 0.86 0.78 0.80 0.80 0.54 0.16 NA 0.79 NA 0.65 NA 0.73 0.76|0.80 0.99 0.51 0.90 0.50 0.34 0.08 NA 0.62 NA 0.54 NA 0.68 0.91|0.80 0.99 0.74 0.87 0.54 0.46 0.41 NA 0.70 NA 0.67 NA 0.69 0.88|0.82 0.92 0.87 0.69 0.81 0.75 0.66 0.51 0.58 0.18 NA 0.18 0.71 NA 0.71 0.64 NA 0.64 0.72 0.85 0.79|
GSM-8K MATH MATHBENCH JEEBENCH Average
Model D SM D SM D SM D SM D SM Overall
GPT-4o **0.85** 0.84 **0.83** **0.86** **0.80** **0.99** **0.80** **0.99** **0.82** **0.92** **0.87**
GPT-4 0.72 0.68 0.78 0.80 0.51 0.90 0.74 0.87 0.69 0.81 0.75
GPT-3.5Turbo 0.80 0.69 0.80 0.54 0.50 0.34 0.54 0.46 0.66 0.51 0.58
Llama-2-7b-chat 0.07 NA 0.16 NA 0.08 NA 0.41 NA 0.18 NA 0.18
Mixtral-8x7B 0.73 NA 0.79 NA 0.62 NA 0.70 NA 0.71 NA 0.71
Phi-3-mini 0.70 NA 0.65 NA 0.54 NA 0.67 NA 0.64 NA 0.64
Claude-3-Opus 0.79 **0.87** 0.73 0.76 0.68 0.91 0.69 0.88 0.72 0.85 0.79
**4.1** **Question 1: Can LLMs Effectively Identify Mistakes in Reasoning Steps?**
We first analyze the capability of various models to detect mistakes in MWP reasoning steps. Table 2
presents the mistake detection performance (F1 score) of all the models with Task T1 on our dataset,
which includes reasoning steps derived from default and smaller models across four datasets.
- GPT-4o’s Dominance: GPT-4o demonstrates a substantial advantage, with a 10% improvement
over GPT-4, a 25% improvement over GPT-3.5Turbo, and over 20% improvement over SLMs in
detecting mistakes. It is uniquely capable of consistently identifying mistakes created using both
rule-based methods and smaller models, underscoring its robust capabilities in mistake detection.
- GPT-3.5Turbo’s Performance: Interestingly, GPT-3.5Turbo outperforms GPT-4 in mistake detection specifically for the GSM-8K dataset. We hypothesize that this could be due to potential
overfitting or data contamination in GPT-4’s training data. Despite this anomaly, GPT-4 maintains
its position as the second-best model overall, following closely behind GPT-4o in terms of mistake
detection abilities on other datasets.
- Performance of SLMs: SLMs show significantly lower mistake detection abilities compared to
GPT-4o and GPT-4. This stark contrast highlights the need to enhance reasoning capabilities in
smaller models to match advanced LLMs.
- Performance on Newer Datasets: The performance of most models, including GPT-4 and
GPT-3.5Turbo, drops drastically on newer datasets such as MATHBENCH and JEEBENCH. This
decline indicates that the reasoning abilities of these models are not yet generalized to newer
datasets and problems. Furthermore, JEEBENCH is more challenging dataset compared to others.
GPT-4o, however, maintains a significant lead even on these newer datasets, reinforcing its superior
capability in mistake detection across diverse and unseen problems.
Similar results are seen also for Task T2 as both T1 and T2 probes the model to detect mistakes,
however, in the former case it goes further by asking the model to correct the reasoning step.
**4.2** **Can LLMs Accurately Derive Correct Answers Despite Mistakes?**
We now assess the models’ ability to accurately derive the correct answer for the given question
despite mistakes in the reasoning steps. Table 3 shows the performance of all the models in deriving
correct answers (F1 score) on our dataset.
Table 3: Performance in deriving correct answers (F1 score) on MWP-MISTAKE dataset for Task T1.
(D-Default reasoning steps, SM-Smaller model reasoning steps) (Bold: Best, Underline:Second best)
|Col1|GSM-8K|MATH|MATHBENCH|JEEBENCH|Average|
|---|---|---|---|---|---|
|Model|D SM|D SM|D SM|D SM|D SM Overall|
|GPT-4o GPT-4 GPT-3.5Turbo Llama-2-7b-chat Mixtral-8x7B Phi-3-mini Claude-3-Opus|0.99 0.88 0.97 0.79 0.89 0.48 0.80 NA 0.87 NA 0.88 NA 0.98 0.88|0.90 0.79 0.80 0.69 0.69 0.35 0.27 NA 0.67 NA 0.51 NA 0.89 0.93|0.90 0.69 0.88 0.46 0.75 0.20 0.40 NA 0.70 NA 0.63 NA 0.92 0.51|0.42 0.47 0.35 0.27 0.26 0.14 0.06 NA 0.16 NA 0.25 NA 0.46 0.26|0.80 0.71 0.76 0.75 0.55 0.65 0.65 0.29 0.47 0.38 NA 0.38 0.60 NA 0.60 0.57 NA 0.57 0.80 0.64 0.73|
1. GPT-4o’s Superior Accuracy: GPT-4o’s ability to derive correct answers is notably higher LMs.
GPT-4o outperforms GPT-4 by 10%, GPT-3.5Turbo Turbo by 30%, and SLMs by a similar margin.
We suspect the very high accuracy in GSM-8K may be due to data contamination, however on
-----
newer and complex datasets GPT-4o still outperforms other models. This indicates a strong
capability to produce correct answers even when intermediate steps contain errors.
2. GPT-4’s Performance: Interestingly, GPT-4’s ability to derive correct answers despite mistakes
(F1 score of 0.97) is significantly better than GPT-3.5Turbo Turbo (F1 score of 0.89). Yet, GPT-4
performs poorly in mistake detection (0.72 vs. 0.80). This improvement in deriving correct
answers may potentially be due to data contamination, resulting in the memorization of problems
in the GSM-8K dataset during GPT-4’s training.
3. SLMs’ Performance: SLMs, particularly Mixtral-8x7B, show performance very close to GPT-4
in deriving correct answers. This might again be due to its strong ability to produce correct
answers in the presence of mistakes or data contamination during the training of SLMs, which
allows them to recall correct answers despite reasoning mistakes.
4. Performance on Newer and Complex Datasets: On newer and more complex datasets such as
MATHBENCH and JEEBENCH, the performance significantly drops even for GPT-4o and more
drastically for all other LLMs and SLMs. This highlights a critical limitation in the generalization
of these models to newer and unseen problem sets.
GSM8K(SM) - T1 GSM8K(SM) - T2 MATH(SM) - T1 MATH(SM) - T2 MATHBENCH(SM) - T1
MATHBENCHSM) - T2 JEEBENCH(SM) - T1 JEEBENCH(SM) - T2
1.00
0.75
0.50
F1 Score 0.25
0.00
GPT-4O GPT4 GPT-3.5Turbo
Model
Figure 3: Performance in deriving final answer between T1 and T2. A significant drop in performance
when the model does not rectify the incorrect reasoning steps.
Figure 3 shows the performance difference between T1 and T2. For T2, we observe a significant
performance drop in deriving correct answers despite mistakes. This is primarily because, in T1, we
instruct the model to not only detect mistakes but also correct them before deriving the final answer,
whereas in T2, the model is only asked to detect the mistake and then directly derive the final answer
without correcting the reasoning (Appendix 11 for further details).
**4.3** **Exploring Data Contamination and Memorization Effects in Math Reasoning Tasks**
In our analysis of LLMs’ mathematical reasoning performance, we’ve identified potential instances
of data contamination and memorization, both of which can significantly impact the effectiveness of
these models. Data contamination, characterized by the presence of test data from downstream tasks
in LLMs’ training data, poses a major challenge in accurately assessing their real-world performance.
Meanwhile, memorization occurs when models replicate solutions from training data without grasping
the underlying principles, thereby hindering their ability to generalize to new problems.
The presence of data contamination is evident in instances of unexpectedly high performance on
certain datasets. For example, GPT-3.5Turbo’s superior performance over GPT-4 on the GSM-8K
dataset raises concerns about biases in GPT-4’s training data. Similarly, the comparable performance
between smaller and larger models suggests the potential presence of memorization. These findings
underscore the critical need for rigorous evaluation to mitigate the impacts of memorization, ensuring
the reliability and effectiveness of LLMs in real-world applications.
Investigating data contamination and memorization poses challenges due to restricted pre-training
data access and computational limitations. To tackle this, we employ an approach outlined in [14],
utilizing an LLM to replicate individual instances of the dataset. This involves guiding the LLM
with instructions containing unique identifiers from the source dataset, like dataset name, partition
(e.g., train, test, or validation), and a fragment of the reference instance. By instructing the LLM to
complete these partial instances, we can evaluate contamination and memorization.
To detect contamination, a heuristic is applied comparing the average overlap score between generated
completions and reference instances using ROUGE-L [19]. This comparison is made between
-----
guided instructions (including dataset and partition identifiers) and general instructions (lacking
such identifiers). If the overlap score is significantly larger with guided instructions, it suggests
contamination. This method relies on the premise that the only distinction between the two instructions
is the inclusion of dataset and partition names in guided instructions, implying any improvement can
be attributed to contamination (Appendix 15 for more details). Figure 4 shows the difference between
guided and general instructions ROUGE-L score across all models and datasets.
GSM8K(D) GSM8K(SM) MATH(D) MATH(SM) MATHBENCH(D) MATHBENCH(SM) JEEBENCH(D)
0.2 JEEBENCH(SM)
0
0.1
5
0.1
0
0.0
5
0.0
0
-0.0
(guided - general instructions) RougeL 5 GPT-4O GPT4 GPT 3.5Turbo Llama-2-7b-chat Mixtral Phi
Model
Figure 4: Difference between guided and general instructions rouge-L score across all models and
datasets. A high positive difference indicates high contamination and a low positive or negative
difference indicates, little to no contamination.
- GPT-4 Models: Across all datasets for default reasoning steps, the guided scores are higher than the
general scores, indicating contamination for all LLMs such as GPT-4o, GPT-4, and GPT-3.5Turbo.
- Smaller Models’ Reasoning Mistakes: For reasoning mistakes from smaller models (SM), guided
scores are closer to general scores, indicating little to no contamination across all models. This is
intuitive as the reasoning steps are created anew by smaller models, and due to their probabilistic
nature, variations are expected.
- Smaller Models like Llama-2-7b-chat and Phi-3-mini: These models show closer guided and
general scores, indicating no contamination.
- Mixtral-8x7B Model: Mixtral-8x7B shows greater contamination as compared to the rest of
SLMs, explaining high performance when deriving correct answers.
- GPT-4o: For datasets like GSM-8K and MATH, GPT-4o shows a higher guided score than the
general scores, indicating contamination, which decreases in contamination as the dataset becomes
newer and more complex.
**4.4** **Can LLMs Correctly Rectify Mistakes in Reasoning Steps?**
In Task 1, LLMs detect and rectify mistakes in reasoning to find the correct final answer. To evaluate
the model’s ability in this regard, we introduce the ’rectify metric’ to quantify instances where the
model identifies a mistake, corrects it, and reaches the accurate final answer. Reasoning steps are
considered correct only if they lead to the accurate final answer. Table 4 shows the ability of different
models to rectify reasoning steps and derive the correct final answer across various datasets.
- GPT-4o’s Remarkable Capabilities: GPT-4o exhibits outstanding abilities in rectifying incorrect
reasoning steps to derive the correct final answer. It outperforms GPT-4 by 11% and surpasses other
models, including SLMs, by over 35%. Across all datasets, GPT-4o achieves high rectification
scores, with an average of 85% across all datasets except JEEBENCH.
- Limitations of SLMs: SLMs perform notably worse than larger models like GPT-4o in rectifying
errors, with an average score of only 40% across all datasets. This suggests significant challenges
in effectively handling complex reasoning tasks.
- Performance on Newer and Complex Datasets: Despite its overall superiority, GPT-4o’s performance on newer and more complex datasets like MATHBENCH and JEEBENCH is lower, raising
concerns about the generalization of its capabilities.
- Ability to Rectify Mistakes from Both Rules and Smaller Models: GPT-4o demonstrates
tremendous capabilities in rectifying mistakes from both rule-based and smaller models. While
-----
Table 4: Ability to Rectify mistakes and derive correct final answer on MWP-MISTAKE dataset for Task
T1. (D-Default reasoning steps, SM-Smaller model reasoning steps) (Bold: Best, Underline:Second
|best)|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
||GSM-8K|MATH|MATHBENCH|JEEBENCH|Average|
|Model|D SM|D SM|D SM|D SM|D SM Overall|
|GPT-4o GPT-4 GPT-3.5Turbo Llama-2-7b-chat Mixtral-8x7B Phi-3-mini Claude-3-Opus|0.98 0.92 0.96 0.89 0.81 0.58 0.73 NA 0.77 NA 0.79 NA 0.97 0.94|0.87 0.83 0.72 0.68 0.54 0.40 0.21 NA 0.56 NA 0.37 NA 0.84 0.90|0.90 0.65 0.83 0.46 0.62 0.35 0.11 NA 0.57 NA 0.41 NA 0.87 0.57|0.39 0.42 0.23 0.24 0.05 0.05 0.04 NA 0.17 NA 0.03 NA 0.26 0.27|0.79 0.70 0.74 0.69 0.57 0.63 0.51 0.35 0.43 0.27 NA 0.27 0.52 NA 0.52 0.40 NA 0.40 0.73 0.67 0.70|
GSM-8K MATH MATHBENCH JEEBENCH Average
Model D SM D SM D SM D SM D SM Overall
GPT-4o **0.98** 0.92 **0.87** 0.83 **0.90** **0.65** **0.39** **0.42** **0.79** **0.70** **0.74**
GPT-4 0.96 0.89 0.72 0.68 0.83 0.46 0.23 0.24 0.69 0.57 0.63
GPT-3.5Turbo 0.81 0.58 0.54 0.40 0.62 0.35 0.05 0.05 0.51 0.35 0.43
Llama-2-7b-chat 0.73 NA 0.21 NA 0.11 NA 0.04 NA 0.27 NA 0.27
Mixtral-8x7B 0.77 NA 0.56 NA 0.57 NA 0.17 NA 0.52 NA 0.52
Phi-3-mini 0.79 NA 0.37 NA 0.41 NA 0.03 NA 0.40 NA 0.40
Claude-3-Opus 0.97 **0.94** 0.84 **0.90** 0.87 0.57 0.26 0.27 0.73 0.67 0.70
potential contamination exists, GPT-4o ’s ability to correct mistakes in the reasoning steps generated
by SLMs underscores its robustness in detecting and rectifying errors.
We now dig deeper into the rectification process. While Table 4 showed the models’ ability to
detect and rectify mistakes, we compute the percentage of questions where the model rectified the
reasoning but still resulted in incorrect answers. Across the MWP-MISTAKE dataset, after correcting
the reasoning steps, GPT-4o failed to derive correct answers in 17% of the questions, whereas other
models like GPT-4, GPT-4, Llama-2-7b-chat, Mixtral-8x7B, and Phi-3-mini resulted in 30%, 43.5%,
80.9%, 40.2%, and 55.6% incorrect answers, respectively. This showcases GPT-4o’s ability to detect
mistakes and rectify them correctly, resulting in very few questions it could not answer correctly.
Furthermore, we noticed that the average word length of rectified reasoning for correct and incorrect
answers for GPT-4o was significantly higher than GPT-4 and other models. This is mainly because
GPT-4o generates its own reasoning steps to rectify the mistakes, unlike other models that perform
poorly. This also adds challenges to evaluating mistake rectification as the new rectified reasoning is
significantly different from ground truth reasoning steps. There could be multiple ways to solve the
same problems, complicating the evaluation.
We also evaluated the rectified reasoning steps and compared them with ground truth reasoning
steps to see the effectiveness and alignment of the rectification process across models. We computed
BERTScore [39] that computes a similarity score for each token in the candidate sentence with each
token in the reference sentence, using BERT embeddings. We found that BERTScore is similar
across all models. This is because the BERTScore metric focuses on word-level matches and misses
out on numerical and other logical aspects of reasoning which are crucial for correctness. We also
evaluated the alignment with METEOR [7] score (see Appendix 13 for BERTScore and METEOR
Score), which similarly resulted in an inadequate analysis. Thus, it becomes evident that the current
evaluation methodologies may not fully capture the nuanced capabilities of LLMs in rectifying
mistakes within reasoning steps.
**5** **Key Insights, Takeaways, and Potential Directions for Improving**
**Mathematical Reasoning**
We now present an overview of key insights and takeaways obtained from our detailed benchmarking
and evaluation of LLMs on our MWP-MISTAKE dataset. Further, we provide potential directions for
improving mathematical reasoning abilities in LLMs.
1. GPT-4o’s Superior Performance: Despite potential data contamination, as observed in GPT-4o’s
performance, its superior foundational capabilities enable it to excel consistently across all datasets
for mistake detection, rectification, and correct answer derivation. GPT-4o’s remarkable performance positions it as a leading model for complex mathematical reasoning tasks, underscoring the
robustness of its fundamental capabilities despite challenges such as data contamination.
2. Challenges with SLMs: The considerable performance gap between smaller language models
(SLMs) and larger models like GPT-4 and GPT-4o emphasizes the necessity for advancements
in the reasoning capabilities of smaller models. Enhancing these models could make them more
competitive and useful in applications where resource constraints are significant.
3. Overfitting and Data Contamination Concerns: The unexpected performance of GPT-3.5Turbo
over GPT-4 in certain datasets suggests issues related to overfitting and data contamination. This
-----
is evident in the performance disparity, particularly in the GSM-8K dataset, indicating potential
memorization of problems during training. Addressing these concerns requires cleaner training
datasets and more robust methodologies to avoid overfitting and ensure genuine reasoning skills.
4. Generalization Challenges: The notable performance drop on newer datasets like MATHBENCH
and JEEBENCH underscores a critical challenge in generalizing LLMs’ reasoning abilities to
novel problems. Addressing this issue is crucial for enhancing the applicability and reliability of
LLMs across a broader spectrum of mathematical problems and datasets.
5. SLMs’ Unexpected Performance: The close performance of some SLMs, like Mixtral-8x7B,
to larger models such as GPT-4 suggests that these smaller models might also benefit from data
contamination. This indicates a need for further investigation into training processes and dataset
integrity to ensure fair and accurate performance assessments.
These insights underscore the ongoing necessity to refine LLM training processes, enhance reasoning
capabilities, and improve generalization to ensure models can reliably and accurately solve a wide
range of mathematical problems. Future research should prioritize addressing overfitting, data
contamination, and generalization challenges to advance LLMs in the field of mathematical reasoning.
**6** **Related Work**
Recent studies [31] indicate that Large Language Models (LLMs) can handle intricate tasks using
the Chain of Thought (COT) mechanism [32]. LLMs have gained significance in solving math word
problems (MWPs) [21], with MathPrompter [17] showcasing excellent results, not only generating
correct answers but also complex reasoning steps. Various approaches aim to enhance LLMs’
mathematical capabilities and address challenges [28]. [36] investigates factors like pre-training
loss, supervised data, and augmented data, proposing rejection sampling fine-tuning (RFT) to
improve mathematical reasoning. WizardMath [22] introduces a reinforced Evol-Instruct Feedback
(RLEIF) method to enhance reasoning abilities through supervised fine-tuning and PPO training [27].
MAmmoTH [37] combines Chain of Thought (CoT) and Program-of-Thought [8] rationales to teach
LLMs to use external tools like Python interpreters for mathematical problem-solving.
To assess the correctness of reasoning steps, most existing work [23, 35] evaluates the quality by
directly comparing the final answer. However, some early studies explore reasoning step quality
differently. [26] measures reasoning step quality by comparing the similarity between generated
and reference reasoning. [12] treats powerful LLMs as verifiers, asking them to generate judgments
for the reasoning steps. [33] introduces a new methodology employing validity and redundancy to
characterize reasoning quality, along with accompanying LLMs to assess them automatically.
Various methods extend LLMs as verifiers and demonstrate their usage for self-correction [40]. [41]
shows that models like GPT-4 align with human preferences, indicating their potential as tools for
accessing LLM-generated responses. [24] finds that LLMs struggle to find their own reasoning errors
in code generation but can correct them with adequate feedback. However, there’s still a lack of
clarity in math reasoning and using LLMs for mistake detection and rectification in foreign reasoning
steps, not just their own self-generated reasoning steps. Our work focuses on LLMs’ ability to
correct MWPs reasoning steps and rectify them to reach the correct answer, as well as whether LLMs
generalize to newer and complex datasets.
**7** **Conclusions**
This study evaluates large language models (LLMs) like GPT-4o, GPT-4 4, GPT-3.5Turbo, and
smaller models (Llama-2-7b-chat, Mixtral-8x7B, Phi-3-mini) on their ability to detect and correct
errors in mathematical reasoning. Our MWP-MISTAKE dataset is meticulously curated with incorrect
reasoning steps generated using both rule-based methods and smaller language models, ensuring a
comprehensive evaluation of LLMs’ error detection and correction capabilities. GPT-4o stands out,
demonstrating superior performance in handling complex tasks and correcting mistakes. However,
smaller models lag significantly, highlighting the need for advancements in their reasoning capabilities.
The analysis also reveals concerns about data contamination and overfitting, particularly in GPT-4’s
performance on GSM8K. A notable drop in performance on newer datasets like MATHBENCH
and JEEBENCH indicates challenges in generalizing to novel problems. Addressing these issues is
crucial for improving LLMs’ reliability and applicability in real-world mathematical problem-solving.
Future research should focus on refining training processes, enhancing generalization, and mitigating
data contamination to advance the field.
-----
**References**
[[1] Hello GPT-4o, . URL https://openai.com/index/hello-gpt-4o/.](https://openai.com/index/hello-gpt-4o/)
[[2] Introducing the next generation of Claude \ Anthropic, . URL https://www.anthropic.](https://www.anthropic.com/news/claude-3-family)
```
com/news/claude-3-family.
```
[[3] Introducing ChatGPT, . URL https://openai.com/index/chatgpt/.](https://openai.com/index/chatgpt/)
[[4] OpenAI Platform, . URL https://platform.openai.com.](https://platform.openai.com)
[5] M. Abdin, S. A. Jacobs, A. A. Awan, J. Aneja, A. Awadallah, H. Awadalla, N. Bach, A. Bahree,
A. Bakhtiari, J. Bao, H. Behl, A. Benhaim, M. Bilenko, J. Bjorck, S. Bubeck, Q. Cai, M. Cai,
C. C. T. Mendes, W. Chen, V. Chaudhary, D. Chen, D. Chen, Y.-C. Chen, Y.-L. Chen, P. Chopra,
X. Dai, A. D. Giorno, G. de Rosa, M. Dixon, R. Eldan, V. Fragoso, D. Iter, M. Gao, M. Gao,
J. Gao, A. Garg, A. Goswami, S. Gunasekar, E. Haider, J. Hao, R. J. Hewett, J. Huynh,
M. Javaheripi, X. Jin, P. Kauffmann, N. Karampatziakis, D. Kim, M. Khademi, L. Kurilenko,
J. R. Lee, Y. T. Lee, Y. Li, Y. Li, C. Liang, L. Liden, C. Liu, M. Liu, W. Liu, E. Lin, Z. Lin,
C. Luo, P. Madan, M. Mazzola, A. Mitra, H. Modi, A. Nguyen, B. Norick, B. Patra, D. PerezBecker, T. Portet, R. Pryzant, H. Qin, M. Radmilac, C. Rosset, S. Roy, O. Ruwase, O. Saarikivi,
A. Saied, A. Salim, M. Santacroce, S. Shah, N. Shang, H. Sharma, S. Shukla, X. Song,
M. Tanaka, A. Tupini, X. Wang, L. Wang, C. Wang, Y. Wang, R. Ward, G. Wang, P. Witte,
H. Wu, M. Wyatt, B. Xiao, C. Xu, J. Xu, W. Xu, S. Yadav, F. Yang, J. Yang, Z. Yang, Y. Yang,
D. Yu, L. Yuan, C. Zhang, C. Zhang, J. Zhang, L. L. Zhang, Y. Zhang, Y. Zhang, Y. Zhang, and
X. Zhou. Phi-3 technical report: A highly capable language model locally on your phone, 2024.
[6] D. Arora, H. G. Singh, and Mausam. Have llms advanced enough? a challenging problem
solving benchmark for large language models, 2023.
[7] S. Banerjee and A. Lavie. METEOR: An automatic metric for MT evaluation with improved
correlation with human judgments. In J. Goldstein, A. Lavie, C.-Y. Lin, and C. Voss, editors,
_Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine_
_Translation and/or Summarization, pages 65–72, Ann Arbor, Michigan, June 2005. Association_
[for Computational Linguistics. URL https://aclanthology.org/W05-0909.](https://aclanthology.org/W05-0909)
[8] W. Chen, X. Ma, X. Wang, and W. W. Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. Transactions on Machine
_[Learning Research, 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=](https://openreview.net/forum?id=YfZ4ZPt8zd)_
```
YfZ4ZPt8zd.
```
[9] K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek,
J. Hilton, R. Nakano, C. Hesse, and J. Schulman. Training verifiers to solve math word
problems, 2021.
[10] K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek,
J. Hilton, R. Nakano, C. Hesse, and J. Schulman. Training Verifiers to Solve Math Word
[Problems, Nov. 2021. URL http://arxiv.org/abs/2110.14168. arXiv:2110.14168 [cs].](http://arxiv.org/abs/2110.14168)
[11] A. Deb, N. Oza, S. Singla, D. Khandelwal, D. Garg, and P. Singla. Fill in the blank: Exploring
and enhancing llm capabilities for backward reasoning in math word problems, 2023.
[12] Y. Dubois, X. Li, R. Taori, T. Zhang, I. Gulrajani, J. Ba, C. Guestrin, P. Liang, and T. B.
Hashimoto. AlpacaFarm: A Simulation Framework for Methods that Learn from Human
[Feedback, Jan. 2024. URL http://arxiv.org/abs/2305.14387. arXiv:2305.14387 [cs].](http://arxiv.org/abs/2305.14387)
[13] W. Gan, Z. Qi, J. Wu, and J. C.-W. Lin. Large language models in education: Vision and
opportunities, 2023.
[14] S. Golchin and M. Surdeanu. Time travel in llms: Tracing data contamination in large language
models, 2024.
[15] J. He-Yueya, G. Poesia, R. E. Wang, and N. D. Goodman. Solving math word problems by
combining language models with symbolic solvers, 2023.
-----
[16] D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Steinhardt.
Measuring mathematical problem solving with the math dataset, 2021.
[17] S. Imani, L. Du, and H. Shrivastava. Mathprompter: Mathematical reasoning using large
language models, 2023.
[18] A. Q. Jiang, A. Sablayrolles, A. Roux, A. Mensch, B. Savary, C. Bamford, D. S. Chaplot,
D. de las Casas, E. B. Hanna, F. Bressand, G. Lengyel, G. Bour, G. Lample, L. R. Lavaud,
L. Saulnier, M.-A. Lachaux, P. Stock, S. Subramanian, S. Yang, S. Antoniak, T. L. Scao,
T. Gervet, T. Lavril, T. Wang, T. Lacroix, and W. E. Sayed. Mixtral of experts, 2024.
[19] C.-Y. Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization
_Branches Out, pages 74–81, Barcelona, Spain, July 2004. Association for Computational_
[Linguistics. URL https://aclanthology.org/W04-1013.](https://aclanthology.org/W04-1013)
[20] H. Liu, Z. Zheng, Y. Qiao, H. Duan, Z. Fei, F. Zhou, W. Zhang, S. Zhang, D. Lin, and K. Chen.
Mathbench: Evaluating the theory and application proficiency of llms with a hierarchical
mathematics benchmark, 2024.
[21] W. Liu, H. Hu, J. Zhou, Y. Ding, J. Li, J. Zeng, M. He, Q. Chen, B. Jiang, A. Zhou, and L. He.
Mathematical language models: A survey, 2024.
[22] H. Luo, Q. Sun, C. Xu, P. Zhao, J. Lou, C. Tao, X. Geng, Q. Lin, S. Chen, and D. Zhang.
Wizardmath: Empowering mathematical reasoning for large language models via reinforced
evol-instruct, 2023.
[23] H. Luo, Q. Sun, C. Xu, P. Zhao, J. Lou, C. Tao, X. Geng, Q. Lin, S. Chen, and D. Zhang.
WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced
[Evol-Instruct, Aug. 2023. URL http://arxiv.org/abs/2308.09583. arXiv:2308.09583](http://arxiv.org/abs/2308.09583)
[cs].
[24] T. X. Olausson, J. P. Inala, C. Wang, J. Gao, and A. Solar-Lezama. Is Self-Repair a Sil[ver Bullet for Code Generation?, Feb. 2024. URL http://arxiv.org/abs/2306.09896.](http://arxiv.org/abs/2306.09896)
arXiv:2306.09896 [cs].
[25] OpenAI, J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida,
J. Altenschmidt, S. Altman, S. Anadkat, R. Avila, I. Babuschkin, S. Balaji, V. Balcom, P. Baltescu, H. Bao, M. Bavarian, J. Belgum, I. Bello, J. Berdine, G. Bernadett-Shapiro, C. Berner,
L. Bogdonoff, O. Boiko, M. Boyd, A.-L. Brakman, G. Brockman, T. Brooks, M. Brundage,
K. Button, T. Cai, R. Campbell, A. Cann, B. Carey, C. Carlson, R. Carmichael, B. Chan,
C. Chang, F. Chantzis, D. Chen, S. Chen, R. Chen, J. Chen, M. Chen, B. Chess, C. Cho,
C. Chu, H. W. Chung, D. Cummings, J. Currier, Y. Dai, C. Decareaux, T. Degry, N. Deutsch,
D. Deville, A. Dhar, D. Dohan, S. Dowling, S. Dunning, A. Ecoffet, A. Eleti, T. Eloundou,
D. Farhi, L. Fedus, N. Felix, S. P. Fishman, J. Forte, I. Fulford, L. Gao, E. Georges, C. Gibson,
V. Goel, T. Gogineni, G. Goh, R. Gontijo-Lopes, J. Gordon, M. Grafstein, S. Gray, R. Greene,
J. Gross, S. S. Gu, Y. Guo, C. Hallacy, J. Han, J. Harris, Y. He, M. Heaton, J. Heidecke, C. Hesse,
A. Hickey, W. Hickey, P. Hoeschele, B. Houghton, K. Hsu, S. Hu, X. Hu, J. Huizinga, S. Jain,
S. Jain, J. Jang, A. Jiang, R. Jiang, H. Jin, D. Jin, S. Jomoto, B. Jonn, H. Jun, T. Kaftan, Łukasz
Kaiser, A. Kamali, I. Kanitscheider, N. S. Keskar, T. Khan, L. Kilpatrick, J. W. Kim, C. Kim,
Y. Kim, J. H. Kirchner, J. Kiros, M. Knight, D. Kokotajlo, Łukasz Kondraciuk, A. Kondrich,
A. Konstantinidis, K. Kosic, G. Krueger, V. Kuo, M. Lampe, I. Lan, T. Lee, J. Leike, J. Leung,
D. Levy, C. M. Li, R. Lim, M. Lin, S. Lin, M. Litwin, T. Lopez, R. Lowe, P. Lue, A. Makanju,
K. Malfacini, S. Manning, T. Markov, Y. Markovski, B. Martin, K. Mayer, A. Mayne, B. McGrew, S. M. McKinney, C. McLeavey, P. McMillan, J. McNeil, D. Medina, A. Mehta, J. Menick,
L. Metz, A. Mishchenko, P. Mishkin, V. Monaco, E. Morikawa, D. Mossing, T. Mu, M. Murati,
O. Murk, D. Mély, A. Nair, R. Nakano, R. Nayak, A. Neelakantan, R. Ngo, H. Noh, L. Ouyang,
C. O’Keefe, J. Pachocki, A. Paino, J. Palermo, A. Pantuliano, G. Parascandolo, J. Parish,
E. Parparita, A. Passos, M. Pavlov, A. Peng, A. Perelman, F. de Avila Belbute Peres, M. Petrov,
H. P. de Oliveira Pinto, Michael, Pokorny, M. Pokrass, V. H. Pong, T. Powell, A. Power,
B. Power, E. Proehl, R. Puri, A. Radford, J. Rae, A. Ramesh, C. Raymond, F. Real, K. Rimbach,
C. Ross, B. Rotsted, H. Roussez, N. Ryder, M. Saltarelli, T. Sanders, S. Santurkar, G. Sastry,
H. Schmidt, D. Schnurr, J. Schulman, D. Selsam, K. Sheppard, T. Sherbakov, J. Shieh, S. Shoker,
-----
P. Shyam, S. Sidor, E. Sigler, M. Simens, J. Sitkin, K. Slama, I. Sohl, B. Sokolowsky, Y. Song,
N. Staudacher, F. P. Such, N. Summers, I. Sutskever, J. Tang, N. Tezak, M. B. Thompson,
P. Tillet, A. Tootoonchian, E. Tseng, P. Tuggle, N. Turley, J. Tworek, J. F. C. Uribe, A. Vallone,
A. Vijayvergiya, C. Voss, C. Wainwright, J. J. Wang, A. Wang, B. Wang, J. Ward, J. Wei,
C. Weinmann, A. Welihinda, P. Welinder, J. Weng, L. Weng, M. Wiethoff, D. Willner, C. Winter,
S. Wolrich, H. Wong, L. Workman, S. Wu, J. Wu, M. Wu, K. Xiao, T. Xu, S. Yoo, K. Yu,
Q. Yuan, W. Zaremba, R. Zellers, C. Zhang, M. Zhang, S. Zhao, T. Zheng, J. Zhuang, W. Zhuk,
and B. Zoph. Gpt-4 technical report, 2024.
[26] T. Sawada, D. Paleka, A. Havrilla, P. Tadepalli, P. Vidas, A. Kranias, J. J. Nay, K. Gupta, and
A. Komatsuzaki. ARB: Advanced Reasoning Benchmark for Large Language Models, July
[2023. URL http://arxiv.org/abs/2307.13692. arXiv:2307.13692 [cs].](http://arxiv.org/abs/2307.13692)
[27] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization
algorithms, 2017.
[28] K. A. Srivatsa and E. Kochmar. What makes math word problems challenging for llms?, 2024.
[29] G. Team, R. Anil, S. Borgeaud, J.-B. Alayrac, J. Yu, R. Soricut, J. Schalkwyk, A. M. Dai,
A. Hauth, K. Millican, D. Silver, M. Johnson, I. Antonoglou, J. Schrittwieser, A. Glaese, J. Chen,
E. Pitler, T. Lillicrap, A. Lazaridou, O. Firat, J. Molloy, M. Isard, P. R. Barham, T. Hennigan,
B. Lee, F. Viola, M. Reynolds, Y. Xu, R. Doherty, E. Collins, C. Meyer, E. Rutherford, E. Moreira, K. Ayoub, M. Goel, J. Krawczyk, C. Du, E. Chi, H.-T. Cheng, E. Ni, P. Shah, P. Kane,
B. Chan, M. Faruqui, A. Severyn, H. Lin, Y. Li, Y. Cheng, A. Ittycheriah, M. Mahdieh, M. Chen,
P. Sun, D. Tran, S. Bagri, B. Lakshminarayanan, J. Liu, A. Orban, F. Güra, H. Zhou, X. Song,
A. Boffy, H. Ganapathy, S. Zheng, H. Choe, Ágoston Weisz, T. Zhu, Y. Lu, S. Gopal, J. Kahn,
M. Kula, J. Pitman, R. Shah, E. Taropa, M. A. Merey, M. Baeuml, Z. Chen, L. E. Shafey,
Y. Zhang, O. Sercinoglu, G. Tucker, E. Piqueras, M. Krikun, I. Barr, N. Savinov, I. Danihelka,
B. Roelofs, A. White, A. Andreassen, T. von Glehn, L. Yagati, M. Kazemi, L. Gonzalez,
M. Khalman, J. Sygnowski, A. Frechette, C. Smith, L. Culp, L. Proleev, Y. Luan, X. Chen,
J. Lottes, N. Schucher, F. Lebron, A. Rrustemi, N. Clay, P. Crone, T. Kocisky, J. Zhao, B. Perz,
D. Yu, H. Howard, A. Bloniarz, J. W. Rae, H. Lu, L. Sifre, M. Maggioni, F. Alcober, D. Garrette,
M. Barnes, S. Thakoor, J. Austin, G. Barth-Maron, W. Wong, R. Joshi, R. Chaabouni, D. Fatiha,
A. Ahuja, G. S. Tomar, E. Senter, M. Chadwick, I. Kornakov, N. Attaluri, I. Iturrate, R. Liu,
Y. Li, S. Cogan, J. Chen, C. Jia, C. Gu, Q. Zhang, J. Grimstad, A. J. Hartman, X. Garcia, T. S.
Pillai, J. Devlin, M. Laskin, D. de Las Casas, D. Valter, C. Tao, L. Blanco, A. P. Badia, D. Reitter,
M. Chen, J. Brennan, C. Rivera, S. Brin, S. Iqbal, G. Surita, J. Labanowski, A. Rao, S. Winkler,
E. Parisotto, Y. Gu, K. Olszewska, R. Addanki, A. Miech, A. Louis, D. Teplyashin, G. Brown,
E. Catt, J. Balaguer, J. Xiang, P. Wang, Z. Ashwood, A. Briukhov, A. Webson, S. Ganapathy, S. Sanghavi, A. Kannan, M.-W. Chang, A. Stjerngren, J. Djolonga, Y. Sun, A. Bapna,
M. Aitchison, P. Pejman, H. Michalewski, T. Yu, C. Wang, J. Love, J. Ahn, D. Bloxwich, K. Han,
P. Humphreys, T. Sellam, J. Bradbury, V. Godbole, S. Samangooei, B. Damoc, A. Kaskasoli,
S. M. R. Arnold, V. Vasudevan, S. Agrawal, J. Riesa, D. Lepikhin, R. Tanburn, S. Srinivasan,
H. Lim, S. Hodkinson, P. Shyam, J. Ferret, S. Hand, A. Garg, T. L. Paine, J. Li, Y. Li, M. Giang, A. Neitz, Z. Abbas, S. York, M. Reid, E. Cole, A. Chowdhery, D. Das, D. Rogozi´nska,
V. Nikolaev, P. Sprechmann, Z. Nado, L. Zilka, F. Prost, L. He, M. Monteiro, G. Mishra,
C. Welty, J. Newlan, D. Jia, M. Allamanis, C. H. Hu, R. de Liedekerke, J. Gilmer, C. Saroufim,
S. Rijhwani, S. Hou, D. Shrivastava, A. Baddepudi, A. Goldin, A. Ozturel, A. Cassirer, Y. Xu,
D. Sohn, D. Sachan, R. K. Amplayo, C. Swanson, D. Petrova, S. Narayan, A. Guez, S. Brahma,
J. Landon, M. Patel, R. Zhao, K. Villela, L. Wang, W. Jia, M. Rahtz, M. Giménez, L. Yeung, J. Keeling, P. Georgiev, D. Mincu, B. Wu, S. Haykal, R. Saputro, K. Vodrahalli, J. Qin,
Z. Cankara, A. Sharma, N. Fernando, W. Hawkins, B. Neyshabur, S. Kim, A. Hutter, P. Agrawal,
A. Castro-Ros, G. van den Driessche, T. Wang, F. Yang, S. yiin Chang, P. Komarek, R. McIlroy,
M. Luˇci´c, G. Zhang, W. Farhan, M. Sharman, P. Natsev, P. Michel, Y. Bansal, S. Qiao, K. Cao,
S. Shakeri, C. Butterfield, J. Chung, P. K. Rubenstein, S. Agrawal, A. Mensch, K. Soparkar,
K. Lenc, T. Chung, A. Pope, L. Maggiore, J. Kay, P. Jhakra, S. Wang, J. Maynez, M. Phuong,
T. Tobin, A. Tacchetti, M. Trebacz, K. Robinson, Y. Katariya, S. Riedel, P. Bailey, K. Xiao,
N. Ghelani, L. Aroyo, A. Slone, N. Houlsby, X. Xiong, Z. Yang, E. Gribovskaya, J. Adler,
M. Wirth, L. Lee, M. Li, T. Kagohara, J. Pavagadhi, S. Bridgers, A. Bortsova, S. Ghemawat,
Z. Ahmed, T. Liu, R. Powell, V. Bolina, M. Iinuma, P. Zablotskaia, J. Besley, D.-W. Chung,
-----
T. Dozat, R. Comanescu, X. Si, J. Greer, G. Su, M. Polacek, R. L. Kaufman, S. Tokumine,
H. Hu, E. Buchatskaya, Y. Miao, M. Elhawaty, A. Siddhant, N. Tomasev, J. Xing, C. Greer,
H. Miller, S. Ashraf, A. Roy, Z. Zhang, A. Ma, A. Filos, M. Besta, R. Blevins, T. Klimenko,
C.-K. Yeh, S. Changpinyo, J. Mu, O. Chang, M. Pajarskas, C. Muir, V. Cohen, C. L. Lan,
K. Haridasan, A. Marathe, S. Hansen, S. Douglas, R. Samuel, M. Wang, S. Austin, C. Lan,
J. Jiang, J. Chiu, J. A. Lorenzo, L. L. Sjösund, S. Cevey, Z. Gleicher, T. Avrahami, A. Boral,
H. Srinivasan, V. Selo, R. May, K. Aisopos, L. Hussenot, L. B. Soares, K. Baumli, M. B.
Chang, A. Recasens, B. Caine, A. Pritzel, F. Pavetic, F. Pardo, A. Gergely, J. Frye, V. Ramasesh,
D. Horgan, K. Badola, N. Kassner, S. Roy, E. Dyer, V. C. Campos, A. Tomala, Y. Tang, D. E.
Badawy, E. White, B. Mustafa, O. Lang, A. Jindal, S. Vikram, Z. Gong, S. Caelles, R. Hemsley,
G. Thornton, F. Feng, W. Stokowiec, C. Zheng, P. Thacker, Ça˘glar Ünlü, Z. Zhang, M. Saleh,
J. Svensson, M. Bileschi, P. Patil, A. Anand, R. Ring, K. Tsihlas, A. Vezer, M. Selvi, T. Shevlane,
M. Rodriguez, T. Kwiatkowski, S. Daruki, K. Rong, A. Dafoe, N. FitzGerald, K. Gu-Lemberg,
M. Khan, L. A. Hendricks, M. Pellat, V. Feinberg, J. Cobon-Kerr, T. Sainath, M. Rauh, S. H.
Hashemi, R. Ives, Y. Hasson, E. Noland, Y. Cao, N. Byrd, L. Hou, Q. Wang, T. Sottiaux, M. Paganini, J.-B. Lespiau, A. Moufarek, S. Hassan, K. Shivakumar, J. van Amersfoort, A. Mandhane,
P. Joshi, A. Goyal, M. Tung, A. Brock, H. Sheahan, V. Misra, C. Li, N. Raki´cevi´c, M. Dehghani,
F. Liu, S. Mittal, J. Oh, S. Noury, E. Sezener, F. Huot, M. Lamm, N. D. Cao, C. Chen, S. Mudgal, R. Stella, K. Brooks, G. Vasudevan, C. Liu, M. Chain, N. Melinkeri, A. Cohen, V. Wang,
K. Seymore, S. Zubkov, R. Goel, S. Yue, S. Krishnakumaran, B. Albert, N. Hurley, M. Sano,
A. Mohananey, J. Joughin, E. Filonov, T. K˛epa, Y. Eldawy, J. Lim, R. Rishi, S. Badiezadegan,
T. Bos, J. Chang, S. Jain, S. G. S. Padmanabhan, S. Puttagunta, K. Krishna, L. Baker, N. Kalb,
V. Bedapudi, A. Kurzrok, S. Lei, A. Yu, O. Litvin, X. Zhou, Z. Wu, S. Sobell, A. Siciliano,
A. Papir, R. Neale, J. Bragagnolo, T. Toor, T. Chen, V. Anklin, F. Wang, R. Feng, M. Gholami,
K. Ling, L. Liu, J. Walter, H. Moghaddam, A. Kishore, J. Adamek, T. Mercado, J. Mallinson,
S. Wandekar, S. Cagle, E. Ofek, G. Garrido, C. Lombriser, M. Mukha, B. Sun, H. R. Mohammad, J. Matak, Y. Qian, V. Peswani, P. Janus, Q. Yuan, L. Schelin, O. David, A. Garg,
Y. He, O. Duzhyi, A. Älgmyr, T. Lottaz, Q. Li, V. Yadav, L. Xu, A. Chinien, R. Shivanna,
A. Chuklin, J. Li, C. Spadine, T. Wolfe, K. Mohamed, S. Das, Z. Dai, K. He, D. von Dincklage,
S. Upadhyay, A. Maurya, L. Chi, S. Krause, K. Salama, P. G. Rabinovitch, P. K. R. M, A. Selvan, M. Dektiarev, G. Ghiasi, E. Guven, H. Gupta, B. Liu, D. Sharma, I. H. Shtacher, S. Paul,
O. Akerlund, F.-X. Aubet, T. Huang, C. Zhu, E. Zhu, E. Teixeira, M. Fritze, F. Bertolini, L.-E.
Marinescu, M. Bölle, D. Paulus, K. Gupta, T. Latkar, M. Chang, J. Sanders, R. Wilson, X. Wu,
Y.-X. Tan, L. N. Thiet, T. Doshi, S. Lall, S. Mishra, W. Chen, T. Luong, S. Benjamin, J. Lee,
E. Andrejczuk, D. Rabiej, V. Ranjan, K. Styrc, P. Yin, J. Simon, M. R. Harriott, M. Bansal,
A. Robsky, G. Bacon, D. Greene, D. Mirylenka, C. Zhou, O. Sarvana, A. Goyal, S. Andermatt,
P. Siegler, B. Horn, A. Israel, F. Pongetti, C.-W. L. Chen, M. Selvatici, P. Silva, K. Wang,
J. Tolins, K. Guu, R. Yogev, X. Cai, A. Agostini, M. Shah, H. Nguyen, N. Donnaile, S. Pereira,
L. Friso, A. Stambler, A. Kurzrok, C. Kuang, Y. Romanikhin, M. Geller, Z. Yan, K. Jang, C.-C.
Lee, W. Fica, E. Malmi, Q. Tan, D. Banica, D. Balle, R. Pham, Y. Huang, D. Avram, H. Shi,
J. Singh, C. Hidey, N. Ahuja, P. Saxena, D. Dooley, S. P. Potharaju, E. O’Neill, A. Gokulchandran, R. Foley, K. Zhao, M. Dusenberry, Y. Liu, P. Mehta, R. Kotikalapudi, C. SafranekShrader, A. Goodman, J. Kessinger, E. Globen, P. Kolhar, C. Gorgolewski, A. Ibrahim, Y. Song,
A. Eichenbaum, T. Brovelli, S. Potluri, P. Lahoti, C. Baetu, A. Ghorbani, C. Chen, A. Crawford,
S. Pal, M. Sridhar, P. Gurita, A. Mujika, I. Petrovski, P.-L. Cedoz, C. Li, S. Chen, N. D. Santo,
S. Goyal, J. Punjabi, K. Kappaganthu, C. Kwak, P. LV, S. Velury, H. Choudhury, J. Hall, P. Shah,
R. Figueira, M. Thomas, M. Lu, T. Zhou, C. Kumar, T. Jurdi, S. Chikkerur, Y. Ma, A. Yu,
S. Kwak, V. Ähdel, S. Rajayogam, T. Choma, F. Liu, A. Barua, C. Ji, J. H. Park, V. Hellendoorn,
A. Bailey, T. Bilal, H. Zhou, M. Khatir, C. Sutton, W. Rzadkowski, F. Macintosh, K. Shagin,
P. Medina, C. Liang, J. Zhou, P. Shah, Y. Bi, A. Dankovics, S. Banga, S. Lehmann, M. Bredesen,
Z. Lin, J. E. Hoffmann, J. Lai, R. Chung, K. Yang, N. Balani, A. Bražinskas, A. Sozanschi,
M. Hayes, H. F. Alcalde, P. Makarov, W. Chen, A. Stella, L. Snijders, M. Mandl, A. Kärrman,
P. Nowak, X. Wu, A. Dyck, K. Vaidyanathan, R. R, J. Mallet, M. Rudominer, E. Johnston,
S. Mittal, A. Udathu, J. Christensen, V. Verma, Z. Irving, A. Santucci, G. Elsayed, E. Davoodi,
M. Georgiev, I. Tenney, N. Hua, G. Cideron, E. Leurent, M. Alnahlawi, I. Georgescu, N. Wei,
I. Zheng, D. Scandinaro, H. Jiang, J. Snoek, M. Sundararajan, X. Wang, Z. Ontiveros, I. Karo,
J. Cole, V. Rajashekhar, L. Tumeh, E. Ben-David, R. Jain, J. Uesato, R. Datta, O. Bunyan, S. Wu,
J. Zhang, P. Stanczyk, Y. Zhang, D. Steiner, S. Naskar, M. Azzam, M. Johnson, A. Paszke, C.-C.
Chiu, J. S. Elias, A. Mohiuddin, F. Muhammad, J. Miao, A. Lee, N. Vieillard, J. Park, J. Zhang,
-----
J. Stanway, D. Garmon, A. Karmarkar, Z. Dong, J. Lee, A. Kumar, L. Zhou, J. Evens, W. Isaac,
G. Irving, E. Loper, M. Fink, I. Arkatkar, N. Chen, I. Shafran, I. Petrychenko, Z. Chen, J. Jia,
A. Levskaya, Z. Zhu, P. Grabowski, Y. Mao, A. Magni, K. Yao, J. Snaider, N. Casagrande,
E. Palmer, P. Suganthan, A. Castaño, I. Giannoumis, W. Kim, M. Rybi´nski, A. Sreevatsa,
J. Prendki, D. Soergel, A. Goedeckemeyer, W. Gierke, M. Jafari, M. Gaba, J. Wiesner, D. G.
Wright, Y. Wei, H. Vashisht, Y. Kulizhskaya, J. Hoover, M. Le, L. Li, C. Iwuanyanwu, L. Liu,
K. Ramirez, A. Khorlin, A. Cui, T. LIN, M. Wu, R. Aguilar, K. Pallo, A. Chakladar, G. Perng,
E. A. Abellan, M. Zhang, I. Dasgupta, N. Kushman, I. Penchev, A. Repina, X. Wu, T. van der
Weide, P. Ponnapalli, C. Kaplan, J. Simsa, S. Li, O. Dousse, F. Yang, J. Piper, N. Ie, R. Pasumarthi, N. Lintz, A. Vijayakumar, D. Andor, P. Valenzuela, M. Lui, C. Paduraru, D. Peng,
K. Lee, S. Zhang, S. Greene, D. D. Nguyen, P. Kurylowicz, C. Hardin, L. Dixon, L. Janzer,
K. Choo, Z. Feng, B. Zhang, A. Singhal, D. Du, D. McKinnon, N. Antropova, T. Bolukbasi, O. Keller, D. Reid, D. Finchelstein, M. A. Raad, R. Crocker, P. Hawkins, R. Dadashi,
C. Gaffney, K. Franko, A. Bulanova, R. Leblond, S. Chung, H. Askham, L. C. Cobo, K. Xu,
F. Fischer, J. Xu, C. Sorokin, C. Alberti, C.-C. Lin, C. Evans, A. Dimitriev, H. Forbes, D. Banarse, Z. Tung, M. Omernick, C. Bishop, R. Sterneck, R. Jain, J. Xia, E. Amid, F. Piccinno,
X. Wang, P. Banzal, D. J. Mankowitz, A. Polozov, V. Krakovna, S. Brown, M. Bateni, D. Duan,
V. Firoiu, M. Thotakuri, T. Natan, M. Geist, S. tan Girgin, H. Li, J. Ye, O. Roval, R. Tojo,
M. Kwong, J. Lee-Thorp, C. Yew, D. Sinopalnikov, S. Ramos, J. Mellor, A. Sharma, K. Wu,
D. Miller, N. Sonnerat, D. Vnukov, R. Greig, J. Beattie, E. Caveness, L. Bai, J. Eisenschlos,
A. Korchemniy, T. Tsai, M. Jasarevic, W. Kong, P. Dao, Z. Zheng, F. Liu, F. Yang, R. Zhu,
T. H. Teh, J. Sanmiya, E. Gladchenko, N. Trdin, D. Toyama, E. Rosen, S. Tavakkol, L. Xue,
C. Elkind, O. Woodman, J. Carpenter, G. Papamakarios, R. Kemp, S. Kafle, T. Grunina, R. Sinha,
A. Talbert, D. Wu, D. Owusu-Afriyie, C. Du, C. Thornton, J. Pont-Tuset, P. Narayana, J. Li,
S. Fatehi, J. Wieting, O. Ajmeri, B. Uria, Y. Ko, L. Knight, A. Héliou, N. Niu, S. Gu, C. Pang,
Y. Li, N. Levine, A. Stolovich, R. Santamaria-Fernandez, S. Goenka, W. Yustalim, R. Strudel,
A. Elqursh, C. Deck, H. Lee, Z. Li, K. Levin, R. Hoffmann, D. Holtmann-Rice, O. Bachem,
S. Arora, C. Koh, S. H. Yeganeh, S. Põder, M. Tariq, Y. Sun, L. Ionita, M. Seyedhosseini, P. Tafti,
Z. Liu, A. Gulati, J. Liu, X. Ye, B. Chrzaszcz, L. Wang, N. Sethi, T. Li, B. Brown, S. Singh,
W. Fan, A. Parisi, J. Stanton, V. Koverkathu, C. A. Choquette-Choo, Y. Li, T. Lu, A. Ittycheriah,
P. Shroff, M. Varadarajan, S. Bahargam, R. Willoughby, D. Gaddy, G. Desjardins, M. Cornero,
B. Robenek, B. Mittal, B. Albrecht, A. Shenoy, F. Moiseev, H. Jacobsson, A. Ghaffarkhah,
M. Rivière, A. Walton, C. Crepy, A. Parrish, Z. Zhou, C. Farabet, C. Radebaugh, P. Srinivasan, C. van der Salm, A. Fidjeland, S. Scellato, E. Latorre-Chimoto, H. Klimczak-Pluci´nska,
D. Bridson, D. de Cesare, T. Hudson, P. Mendolicchio, L. Walker, A. Morris, M. Mauger,
A. Guseynov, A. Reid, S. Odoom, L. Loher, V. Cotruta, M. Yenugula, D. Grewe, A. Petrushkina, T. Duerig, A. Sanchez, S. Yadlowsky, A. Shen, A. Globerson, L. Webb, S. Dua, D. Li,
S. Bhupatiraju, D. Hurt, H. Qureshi, A. Agarwal, T. Shani, M. Eyal, A. Khare, S. R. Belle,
L. Wang, C. Tekur, M. S. Kale, J. Wei, R. Sang, B. Saeta, T. Liechty, Y. Sun, Y. Zhao, S. Lee,
P. Nayak, D. Fritz, M. R. Vuyyuru, J. Aslanides, N. Vyas, M. Wicke, X. Ma, E. Eltyshev,
N. Martin, H. Cate, J. Manyika, K. Amiri, Y. Kim, X. Xiong, K. Kang, F. Luisier, N. Tripuraneni, D. Madras, M. Guo, A. Waters, O. Wang, J. Ainslie, J. Baldridge, H. Zhang, G. Pruthi,
J. Bauer, F. Yang, R. Mansour, J. Gelman, Y. Xu, G. Polovets, J. Liu, H. Cai, W. Chen, X. Sheng,
E. Xue, S. Ozair, C. Angermueller, X. Li, A. Sinha, W. Wang, J. Wiesinger, E. Koukoumidis,
Y. Tian, A. Iyer, M. Gurumurthy, M. Goldenson, P. Shah, M. Blake, H. Yu, A. Urbanowicz,
J. Palomaki, C. Fernando, K. Durden, H. Mehta, N. Momchev, E. Rahimtoroghi, M. Georgaki,
A. Raul, S. Ruder, M. Redshaw, J. Lee, D. Zhou, K. Jalan, D. Li, B. Hechtman, P. Schuh,
M. Nasr, K. Milan, V. Mikulik, J. Franco, T. Green, N. Nguyen, J. Kelley, A. Mahendru, A. Hu,
J. Howland, B. Vargas, J. Hui, K. Bansal, V. Rao, R. Ghiya, E. Wang, K. Ye, J. M. Sarr, M. M.
Preston, M. Elish, S. Li, A. Kaku, J. Gupta, I. Pasupat, D.-C. Juan, M. Someswar, T. M.,
X. Chen, A. Amini, A. Fabrikant, E. Chu, X. Dong, A. Muthal, S. Buthpitiya, S. Jauhari, N. Hua,
U. Khandelwal, A. Hitron, J. Ren, L. Rinaldi, S. Drath, A. Dabush, N.-J. Jiang, H. Godhia,
U. Sachs, A. Chen, Y. Fan, H. Taitelbaum, H. Noga, Z. Dai, J. Wang, C. Liang, J. Hamer,
C.-S. Ferng, C. Elkind, A. Atias, P. Lee, V. Listík, M. Carlen, J. van de Kerkhof, M. Pikus,
K. Zaher, P. Müller, S. Zykova, R. Stefanec, V. Gatsko, C. Hirnschall, A. Sethi, X. F. Xu,
C. Ahuja, B. Tsai, A. Stefanoiu, B. Feng, K. Dhandhania, M. Katyal, A. Gupta, A. Parulekar,
D. Pitta, J. Zhao, V. Bhatia, Y. Bhavnani, O. Alhadlaq, X. Li, P. Danenberg, D. Tu, A. Pine,
V. Filippova, A. Ghosh, B. Limonchik, B. Urala, C. K. Lanka, D. Clive, Y. Sun, E. Li, H. Wu,
K. Hongtongsak, I. Li, K. Thakkar, K. Omarov, K. Majmundar, M. Alverson, M. Kucharski,
-----
M. Patel, M. Jain, M. Zabelin, P. Pelagatti, R. Kohli, S. Kumar, J. Kim, S. Sankar, V. Shah, L. Ramachandruni, X. Zeng, B. Bariach, L. Weidinger, T. Vu, A. Subramanya, S. Hsiao, D. Hassabis,
K. Kavukcuoglu, A. Sadovsky, Q. Le, T. Strohman, Y. Wu, S. Petrov, J. Dean, and O. Vinyals.
Gemini: A family of highly capable multimodal models, 2024.
[30] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra,
P. Bhargava, S. Bhosale, D. Bikel, L. Blecher, C. C. Ferrer, M. Chen, G. Cucurull, D. Esiobu,
J. Fernandes, J. Fu, W. Fu, B. Fuller, C. Gao, V. Goswami, N. Goyal, A. Hartshorn, S. Hosseini,
R. Hou, H. Inan, M. Kardas, V. Kerkez, M. Khabsa, I. Kloumann, A. Korenev, P. S. Koura, M.-A.
Lachaux, T. Lavril, J. Lee, D. Liskovich, Y. Lu, Y. Mao, X. Martinet, T. Mihaylov, P. Mishra,
I. Molybog, Y. Nie, A. Poulton, J. Reizenstein, R. Rungta, K. Saladi, A. Schelten, R. Silva, E. M.
Smith, R. Subramanian, X. E. Tan, B. Tang, R. Taylor, A. Williams, J. X. Kuan, P. Xu, Z. Yan,
I. Zarov, Y. Zhang, A. Fan, M. Kambadur, S. Narang, A. Rodriguez, R. Stojnic, S. Edunov, and
T. Scialom. Llama 2: Open foundation and fine-tuned chat models, 2023.
[31] X. Wang, J. Wei, D. Schuurmans, Q. Le, E. Chi, S. Narang, A. Chowdhery, and D. Zhou.
Self-consistency improves chain of thought reasoning in language models, 2023.
[32] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. Chi, Q. Le, and D. Zhou.
Chain-of-thought prompting elicits reasoning in large language models, 2023.
[33] S. Xia, X. Li, Y. Liu, T. Wu, and P. Liu. Evaluating mathematical reasoning beyond accuracy,
2024.
[34] X. Xu, T. Xiao, Z. Chao, Z. Huang, C. Yang, and Y. Wang. Can llms solve longer math word
problems better?, 2024.
[35] L. Yu, W. Jiang, H. Shi, J. Yu, Z. Liu, Y. Zhang, J. T. Kwok, Z. Li, A. Weller, and W. Liu.
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models, May
[2024. URL http://arxiv.org/abs/2309.12284. arXiv:2309.12284 [cs].](http://arxiv.org/abs/2309.12284)
[36] Z. Yuan, H. Yuan, C. Li, G. Dong, K. Lu, C. Tan, C. Zhou, and J. Zhou. Scaling relationship on
learning mathematical reasoning with large language models, 2023.
[37] X. Yue, X. Qu, G. Zhang, Y. Fu, W. Huang, H. Sun, Y. Su, and W. Chen. Mammoth: Building
math generalist models through hybrid instruction tuning, 2023.
[38] H. Zhang, J. Da, D. Lee, V. Robinson, C. Wu, W. Song, T. Zhao, P. Raja, D. Slack, Q. Lyu,
S. Hendryx, R. Kaplan, M. Lunati, and S. Yue. A careful examination of large language model
performance on grade school arithmetic, 2024.
[39] T. Zhang, V. Kishore, F. Wu, K. Q. Weinberger, and Y. Artzi. Bertscore: Evaluating text
generation with bert, 2020.
[40] Y. Zhang, M. Khalifa, L. Logeswaran, J. Kim, M. Lee, H. Lee, and L. Wang. Small language
models need strong verifiers to self-correct reasoning, 2024.
[41] L. Zheng, W.-L. Chiang, Y. Sheng, S. Zhuang, Z. Wu, Y. Zhuang, Z. Lin, Z. Li, D. Li, E. P.
Xing, H. Zhang, J. E. Gonzalez, and I. Stoica. Judging LLM-as-a-Judge with MT-Bench and
[Chatbot Arena, Dec. 2023. URL http://arxiv.org/abs/2306.05685. arXiv:2306.05685](http://arxiv.org/abs/2306.05685)
[cs].
-----
**Appendix**
The dataset and code to run all experiments will be made available soon.
**8** `MWP-MISTAKE Dataset`
```
MWP-MISTAKE dataset is curated using 4 different types of well-known datasets. Below are the details
```
of each of the datasets.
- GSM-8K [10]:GSM-8K is a dataset of diverse grade school math word problems created by
human writers, involving basic arithmetic operations. Released in November 2021.
- MATH [16]: The MATH dataset is divided into seven categories, each with five difficulty
levels. For our study, we used levels 1, 2, and 3 from the algebra and counting and probability
categories. Released in November 2021.
- MATHBENCH [20]: MATHBENCH is a recent dataset with questions divided by educational stages, from basic arithmetic to college levels. For our experiment, we chose middle
and high-school-level single-choice multiple-choice questions. Released in May 2024.
- JEEBENCH [6]: JEEBENCH is a challenging benchmark dataset for evaluating LLM
problem-solving abilities, containing 515 pre-engineering math, physics, and chemistry
problems from the IIT JEE-Advanced Exam. For our experiment, we chose mathematics
single-choice questions. Released in October 2023.
**8.1** **Prompts to curate reasoning steps in MWP-MISTAKE dataset**
GSM-8K and MATH already contain MWP questions, a chain of thought reasoning steps
and a final answer. To curate chain of thought reasoning step for MATHBENCH and
JEEBENCH we made use of GPT-4. While prompting GPT-4 we made sure that reasoning steps did not contain the final answer, so that final answer is not picked directly from the reasoning step. Listing 1 prompt is used to curate the reasoning steps.
```
Strictly follow the below conditions.
1. Output format: \nReasoning Chain: \nFinal Answer:
2. Reasoning Chain should be separated by a new line only.
3. Reasoning chain cannot have the final answer. (Replace the
final answer in the reasoning chain with its calculation
or ####)
4. Do not include any additional information in the final
answer (only the answer).
```
Listing 1: Prompt to curate reasoning chain without answers.
Table 5 shows examples of default reasoning steps from GSM-8K dataset.
-----
Table 5: Example of rule based incorrect reasoning step (GSM-8K dataset)
|Question|Gerald spends $100 a month on baseball supplies. His season is 4 months long. He wants to use the months he’s not playing baseball to save up by raking, shoveling, and mowing lawns. He charges $10 for each. How many chores does he need to average a month to save up for his supplies?|
|---|---|
|Final Answer|5|
|Gold Reasoning step|He needs to save up $400 because 4 x 100 = 400 He has 8 months to earn this money because 12 - 4 = 8 He needs to earn $50 a month because 400 / 8 = 50 He needs to do 5 tasks a month because 50 / 10 = 5|
|Shuffel reasoning step|He needs to earn $50 a month because 400 / 8 = 50 He needs to save up $400 because 4 x 100 = 400 He needs to do 5 tasks a month because 50 / 10 = 5 He has 8 months to earn this money because 12 - 4 = 8|
|Delete reasoning step|He needs to save up $400 because 4 x 100 = 400 He needs to earn $50 a month because 400 / 8 = 50 He needs to do 5 tasks a month because 50 / 10 = 5|
|Shuffel numerical values|He needs to save up $400 because 4 x 100 = 400 He has 50 months to earn this money because 8 - 8 = 4 He needs to earn $12 a month because 400 / 8 = 50 He needs to do 5 tasks a month because 50 / 10 = 5|
|Replace numerical values|He needs to save up $400 because 4 x 100 = 400 He has 8 months to earn this money because 12 - 4 = 8 He needs to earn $6 a month because 32 / 8 = 50 He needs to do 76 tasks a month because 50 / 10 = 5|
|Shuffel Operations|He needs to save up $400 because 4 x 100 = 400 He has 8 months to earn this money because 12 * 4 = 8 He needs to earn $50 a month because 400 - 8 = 50 He needs to do 5 tasks a month because 50 / 10 = 5|
|Insert Random Reasoning step|He needs to save up $400 because 4 x 100 = 400 Therefore, Faye has $60 - $30 = $30 left. He has 8 months to earn this money because 12 - 4 = 8 He needs to earn $50 a month because 400 / 8 = 50 He needs to do 5 tasks a month because 50 / 10 = 5|
Gerald spends $100 a month on baseball supplies.
His season is 4 months long.
He wants to use the months he’s not playing baseball
Question
to save up by raking, shoveling, and mowing lawns.
He charges $10 for each. How many chores does he need to average a month
to save up for his supplies?
Final Answer 5
He needs to save up $400 because 4 x 100 = 400
He has 8 months to earn this money because 12 - 4 = 8
Gold Reasoning step
He needs to earn $50 a month because 400 / 8 = 50
He needs to do 5 tasks a month because 50 / 10 = 5
He needs to earn $50 a month because 400 / 8 = 50
He needs to save up $400 because 4 x 100 = 400
Shuffle reasoning step
He needs to do 5 tasks a month because 50 / 10 = 5
He has 8 months to earn this money because 12 - 4 = 8
He needs to save up $400 because 4 x 100 = 400
Delete reasoning step He needs to earn $50 a month because 400 / 8 = 50
He needs to do 5 tasks a month because 50 / 10 = 5
He needs to save up $400 because 4 x 100 = 400
He has 50 months to earn this money because 8 - 8 = 4
Shuffle numerical values
He needs to earn $12 a month because 400 / 8 = 50
He needs to do 5 tasks a month because 50 / 10 = 5
He needs to save up $400 because 4 x 100 = 400
He has 8 months to earn this money because 12 - 4 = 8
Replace numerical values
He needs to earn $6 a month because 32 / 8 = 50
He needs to do 76 tasks a month because 50 / 10 = 5
He needs to save up $400 because 4 x 100 = 400
He has 8 months to earn this money because 12 * 4 = 8
Shuffle Operations
He needs to earn $50 a month because 400 - 8 = 50
He needs to do 5 tasks a month because 50 / 10 = 5
He needs to save up $400 because 4 x 100 = 400
Therefore, Faye has $60 - $30 = $30 left.
Insert Random Reasoning step He has 8 months to earn this money because 12 - 4 = 8
He needs to earn $50 a month because 400 / 8 = 50
He needs to do 5 tasks a month because 50 / 10 = 5
**9** **SLMs reasoning steps**
SLMs were used to generate chain of thought (COT) reasoning step and final answers for all dataset
questions. Each model Llama-2-7b-chat, Mixtral-8x7B, Phi-3-mini where prompted using Listing 1
to curate the reasoning step without an answer. If the final answer was incorrect we filtered out the
reasoning steps as incorrect.
Table 6 shows examples of SLM incorrect reasoning steps from GSM-8K dataset.
**10** **Task T1 and T2**
Task T1 evaluates the model’s ability to detect mistakes rectify them and derive the correct answer.
Listing 2 was used in a few shot settings for task T1.
```
You are a mathematics educator with a deep understanding of
elementary and middle school mathematics. You are
experienced in teaching multi -step problem -solving
techniques and have a knack for breaking down complex
problems into manageable steps. Your expertise lies in
basic arithmetic operations such as addition , subtraction ,
multiplication , and division. You can provide clear , step -
by -step solutions to mathematical problems that require
multi -step reasoning.
```
-----
Table 6: Example of incorrect reasoning steps generated using SLM’s (GSM-8K dataset)
|Question|Gerald spends $100 a month on baseball supplies. His season is 4 months long. He wants to use the months he’s not playing baseball to save up by raking, shoveling, and mowing lawns. He charges $10 for each. How many chores does he need to average a month to save up for his supplies?|
|---|---|
|Final Answer|5|
|Gold Reasoning step|He needs to save up $400 because 4 x 100 = 400. He has 8 months to earn this money because 12 - 4 = 8 He needs to earn $50 a month because 400 / 8 = 50 He needs to do 5 tasks a month because 50 / 10 = 5 #### 5|
|Llama-2-7b-chat Incorrect reasoning step|Gerald spends $100 on baseball supplies per month. Season is 4 months long. To save up for baseball supplies, Gerald needs to earn more than the cost of supplies per month. Gerald charges $10 for each chore he does. To calculate the number of chores Gerald needs to do per month, divide the cost of supplies by the amount he charges per chore.|
|Llama-2-7b-chat Predicted Answer|""|
|Mixtral-8x7B Incorrect reasoning step|Gerald spends $100 a month for 4 months, so his total spending for the season is 100*4 = $400 Now, we need to find out how many chores he needs to do to earn this amount He charges $10 per chore, so we divide the total amount by the cost per chore: 400 / 10 = ####|
|Mixtral-8x7B Predicted Answer|40|
|Phi-3-mini Incorrect reasoning step|Gerald’s total expenditure for baseball supplies = $100 per month * 4 months = $400 Gerald earns $10 per chore. To fnid out how many chores he needs to do to save up $400, we divide his total expenditure by the amount he earns per chore.= $400 / $10= ####|
|Phi-3-mini Predicted Answer|40|
Gerald spends $100 a month on baseball supplies.
His season is 4 months long.
Question
He wants to use the months he’s not playing baseball to save up by raking, shoveling, and mowing lawns.
He charges $10 for each. How many chores does he need to average a month to save up for his supplies?
Final Answer 5
He needs to save up $400 because 4 x 100 = 400.
He has 8 months to earn this money because 12 - 4 = 8
Gold Reasoning step He needs to earn $50 a month because 400 / 8 = 50
He needs to do 5 tasks a month because 50 / 10 = 5
#### 5
Gerald spends $100 on baseball supplies per month.
Season is 4 months long.
To save up for baseball supplies, Gerald needs to earn more than the cost of supplies per month.
Llama-2-7b-chat Incorrect reasoning step
Gerald charges $10 for each chore he does.
To calculate the number of chores Gerald needs to do per month,
divide the cost of supplies by the amount he charges per chore.
Llama-2-7b-chat Predicted Answer ""
Gerald spends $100 a month for 4 months, so his total spending for the season is 100*4 = $400
Mixtral-8x7B Incorrect reasoning step Now, we need to find out how many chores he needs to do to earn this amount
He charges $10 per chore, so we divide the total amount by the cost per chore: 400 / 10 = ####
Mixtral-8x7B Predicted Answer 40
Gerald’s total expenditure for baseball supplies = $100 per month * 4 months = $400
Gerald earns $10 per chore.
Phi-3-mini Incorrect reasoning step
To find out how many chores he needs to do to save up $400,
we divide his total expenditure by the amount he earns per chore.= $400 / $10= ####
Phi-3-mini Predicted Answer 40
```
You are provided with a mathematical question and a step -by step solution along with it. The solution might have some
mistakes. Identify if the solution is correct or incorrect.
If the solution is correct, output the final answer with
the help of the solution provided. If the solution is
incorrect, correct the existing solution and determine the
final answer with the help of the corrected solution.
Reasoning chain Correct (Yes/No):
Corrected reasoning chain or NA:
Final answer (just the number):
```
Listing 2: Prompt for Task T1
Task T2 evaulates the model’s ability to detect mistake and solve MWP based on the provided
reasoning step. Listing 3 was used in a few shot setting for task T2. Here we insure that final answer
is generated with the help of the reasoning steps provided, which may or may not be correct.
2
```
You are a mathematics educator with a deep understanding of
elementary and middle school mathematics. You are
experienced in teaching multi -step problem -solving
techniques and have a knack for breaking down complex
problems into manageable steps. Your expertise lies in
basic arithmetic operations such as addition, subtraction,
multiplication, and division. You can provide clear, step by -step solutions to mathematical problems that require
multi -step reasoning.
You are provided with a mathematical question and a step -by step solution along with it. The solution might have some
mistakes. Identify if the solution is correct or incorrect
and output the final answer based on the provided solution.
Reasoning chain Correct (Yes/No):
Final answer (just the number):
```
Listing 3: Prompt for Task T2
-----
**11** **T2 Results**
Task T2 evaluates the performance in deriving the final answer based on the reasoning step which
may or may not be correct. In task T2 we do not instruct the model to correct the reasoning step, and
calcualate the final answer based on the provided reasoning step. Due to which we see a signifant drop
in performance between Task T1 and Task T2. Table 7 presents the mistake detection performance
(F1 score) of all the models with Task T2 and Table 8 presents the performance in deriving the final
answer (F1 Score) of all the models.
Table 7: Mistake Detection Performance (F1 score) on MWP-MISTAKE dataset for Task T2. (D-Default
reasoning steps, SM-Smaller model reasoning steps) (Bold: Best, Underline:Second best)
|Col1|GSM-8K|MATH|MATHBENCH|JEEBENCH|Average|
|---|---|---|---|---|---|
|Model|D SM|D SM|D SM|D SM|D SM Overall|
|GPT-4o GPT-4 GPT-3.5Turbo Llama-2-7b-chat Mixtral-8x7B Phi-3-mini Claude-3-Opus|—- —- 0.67 0.61 0.58 0.40 0.11 NA 0.69 NA 0.56 NA —- —-|—- —- 0.75 0.76 0.69 0.42 0.22 NA 0.75 NA 0.52 NA —- —-|—- —- 0.48 0.88 0.33 0.24 0.11 NA 0.60 NA 0.46 NA —- —-|—- —- 0.76 0.85 0.51 0.41 0.75 NA 0.76 NA 0.54 NA —- —-|—- —- —- 0.66 0.78 0.72 0.53 0.36 0.45 0.30 NA 0.30 0.70 NA 0.70 0.52 NA 0.52 —- —- —-|
GSM-8K MATH MATHBENCH JEEBENCH Average
Model D SM D SM D SM D SM D SM Overall
GPT-4o **—-** —- **—-** **—-** **—-** **—-** **—-** **—-** **—-** **—-** **—-**
GPT-4 0.67 0.61 0.75 0.76 0.48 0.88 0.76 0.85 0.66 0.78 0.72
GPT-3.5Turbo 0.58 0.40 0.69 0.42 0.33 0.24 0.51 0.41 0.53 0.36 0.45
Llama-2-7b-chat 0.11 NA 0.22 NA 0.11 NA 0.75 NA 0.30 NA 0.30
Mixtral-8x7B 0.69 NA 0.75 NA 0.60 NA 0.76 NA 0.70 NA 0.70
Phi-3-mini 0.56 NA 0.52 NA 0.46 NA 0.54 NA 0.52 NA 0.52
Claude-3-Opus —- **—-** —- —- —- —- —- —- —- —- —
Table 8: Performance in deriving correct answers (F1 score) on MWP-MISTAKE dataset for Task T2.
(D-Default reasoning steps, SM-Smaller model reasoning steps) (Bold: Best, Underline:Second best)
|Col1|GSM-8K|MATH|MATHBENCH|JEEBENCH|Average|
|---|---|---|---|---|---|
|Model|D SM|D SM|D SM|D SM|D SM Overall|
|GPT-4o GPT-4 GPT-3.5Turbo Llama-2-7b-chat Mixtral-8x7B Phi-3-mini Claude-3-Opus|—- —- 0.99 0.65 0.85 0.26 0.84 NA 0.91 NA 0.92 NA —- —-|—- —- 0.72 0.48 0.66 0.31 0.33 NA 0.64 NA 0.62 NA —- —-|—- —- 0.82 0.27 0.67 0.16 0.44 NA 0.68 NA 0.65 NA —- —-|—- —- 0.39 0.29 0.48 0.20 0.36 NA 0.11 NA 0.49 NA —- —-|—- —- —- 0.73 0.42 0.57 0.67 0.23 0.45 0.49 NA 0.49 0.58 NA 0.58 0.67 NA 0.67 —- —- —-|
GSM-8K MATH MATHBENCH JEEBENCH Average
Model D SM D SM D SM D SM D SM Overall
GPT-4o **—-** —- **—-** **—-** **—-** **—-** **—-** **—-** **—-** **—-** **—-**
GPT-4 0.99 0.65 0.72 0.48 0.82 0.27 0.39 0.29 0.73 0.42 0.57
GPT-3.5Turbo 0.85 0.26 0.66 0.31 0.67 0.16 0.48 0.20 0.67 0.23 0.45
Llama-2-7b-chat 0.84 NA 0.33 NA 0.44 NA 0.36 NA 0.49 NA 0.49
Mixtral-8x7B 0.91 NA 0.64 NA 0.68 NA 0.11 NA 0.58 NA 0.58
Phi-3-mini 0.92 NA 0.62 NA 0.65 NA 0.49 NA 0.67 NA 0.67
Claude-3-Opus —- **—-** —- —- —- —- —- —- —- —- —
**12** **Model Used**
Below are brief details of the models we have used for benchmarking our MWP-MISTAKE dataset.
1. GPT-4o: GPT-4o is a multimodal model by OpenAI, and it has the same high intelligence
as GPT-4 Turbo but is much more efficient—it generates text 2x faster and is 50% cheaper.
Additionally, GPT-4o has the best vision and performance across non-English languages of
any OpenAI model. Last training data: October 2023.
2. GPT-4: GPT-4 is a large multimodal model by OpenAI that can solve difficult problems
with greater accuracy than any of OpenAI previous models, thanks to its broader general
knowledge and advanced reasoning capabilities. Last training data: September 2021.
3. GPT-3.5Turbo: GPT-3.5Turbo is a large language model by OpenAI GPT-3.5 that can
understand and generate natural language or code and has been optimized for chat using
the Chat Completions API but work well for non-chat tasks as well. Last training date:
September 2021.
4. Claude-3-Opus: Claude-3-Opus is Anthropic’s most capable and intelligent model yet,
ideal for navigating complex tasks like in-depth analysis, research, and task automation.
Last training data: August 2023.
5. Llama-2-7b-chat: Llama 2 is a collection of pretrained and fine-tuned generative text
models ranging in scale from 7 billion to 70 billion parameters from meta. This is the 7B
fine-tuned model, optimized for dialogue use cases. Training date: September 2022.
6. Mixtral-8x7B: Mixtral is a Mixture of Experts (MoE) model with 8 experts per MLP,
with a total of 45 billion parameters. Despite the model having 45 billion parameters, the
compute required for a single forward pass is the same as that of a 14 billion parameter
model. This is because even though each of the experts have to be loaded in RAM (70B
like ram requirement) each token from the hidden states are dispatched twice (top 2 routing)
and thus the compute (the operation required at each forward computation) is just 2 X
sequence_length.
-----
7. Phi-3-mini: The Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter by microsoft,
lightweight, state-of-the-art open model trained using the Phi-3 datasets. This dataset
includes both synthetic data and filtered publicly available website data, with an emphasis
on high-quality and reasoning-dense properties. Last training data: October 2023.
**13** **METEOR and BertScore results**
BertScore computes a similarity score for each token in the candidate sentence with each token
in the reference sentence using the BERT embeddings. Metric for Evaluation of Translation with
Explicit Ordering (METEOR) score is a metric that measures the quality of generated text based on
the alignment between the generated text and the reference text. The metric is based on the harmonic
mean of unigram precision and recall, with recall weighted higher than precision.
Table 9 and Table 10 present the BertScore and Meteor Score respectively for all the datasets across
all models. We observed that these two metric evaluations where not fully able to capture the nuance
capabilities of LLMs in rectifying the mistakes within reasoning steps. This can be seen in the
results. GPT-4o has a consistently high performance across all the dataset, but when you compare the
BERTScore between the corrected reasoning step and ground truth reasoning step you see the rest of
the models clearly performing better than GPT-4o. GPT-4 has performed better than GPT-3.5Turbo
in most datasets.
Table 9: BERTscores for correct and incorrect final answers derived after mistake rectification across
all models and datasets.
|Datasets|Models|GPT-4o|GPT-4|GPT-3.5Turbo|Llama-2-7b-chat|Mixtral-8x7B|Phi-3-mini|
|---|---|---|---|---|---|---|---|
|||Correct Incorrect|Correct Incorrect|Correct Incorrect|Correct Incorrect|Correct Incorrect|Correct Incorrect|
|GSM-8K|D SM|0.95 0.91 0.83 0.82|0.98 0.93 0.84 0.82|0.97 0.95 0.84 0.82|0.96 0.98 NA NA|0.97 0.94 NA NA|0.94 0.91 NA NA|
|MATH|D SM|0.88 0.90 0.84 0.80|0.96 0.93 0.83 0.81|0.95 0.93 0.84 0.81|0.96 0.88 NA NA|0.95 0.92 NA NA|0.90 0.87 NA NA|
|MATHBENCH|D SM|0.88 0.83 0.82 0.82|0.97 0.95 0.85 0.82|0.97 0.94 0.84 0.83|0.90 0.89 NA NA|0.96 0.95 NA NA|0.93 0.90 NA NA|
|JEEBENCH|D SM|0.89 0.89 0.86 0.87|0.88 0.87 0.85 0.86|0.94 0.95 0.78 0.86|0.86 0.82 NA NA|0.85 0.87 NA NA|0.70 0.85 NA NA|
Datasets Models GPT-4o GPT-4 GPT-3.5Turbo Llama-2-7b-chat Mixtral-8x7B Phi-3-mini
Correct Incorrect Correct Incorrect Correct Incorrect Correct Incorrect Correct Incorrect Correct Incorrect
D 0.95 0.91 0.98 0.93 0.97 0.95 0.96 0.98 0.97 0.94 0.94 0.91
GSM-8K
SM 0.83 0.82 0.84 0.82 0.84 0.82 NA NA NA NA NA NA
D 0.88 0.90 0.96 0.93 0.95 0.93 0.96 0.88 0.95 0.92 0.90 0.87
MATH
SM 0.84 0.80 0.83 0.81 0.84 0.81 NA NA NA NA NA NA
MATHBENCH D 0.88 0.83 0.97 0.95 0.97 0.94 0.90 0.89 0.96 0.95 0.93 0.90
SM 0.82 0.82 0.85 0.82 0.84 0.83 NA NA NA NA NA NA
JEEBENCH D 0.89 0.89 0.88 0.87 0.94 0.95 0.86 0.82 0.85 0.87 0.70 0.85
SM 0.86 0.87 0.85 0.86 0.78 0.86 NA NA NA NA NA NA
Table 10: Meteor Score for correct and incorrect final answers derived after mistake rectification
across all models and datasets.
|Datasets|Models|GPT-4o|GPT-4|GPT-3.5Turbo|Llama-2-7b-chat|Mixtral-8x7B|Phi-3-mini|
|---|---|---|---|---|---|---|---|
|||Correct Incorrect|Correct Incorrect|Correct Incorrect|Correct Incorrect|Correct Incorrect|Correct Incorrect|
|GSM-8K|D SM|0.81 0.54 0.33 0.27|0.92 0.62 0.37 0.31|0.88 0.77 0.37 0.32|0.87 0.83 NA NA|0.85 0.74 NA NA|0.77 0.66 NA NA|
|MATH|D SM|0.48 0.54 0.32 0.28|0.76 0.70 0.30 0.26|0.76 0.67 0.33 0.28|0.78 0.59 NA NA|0.73 0.66 NA NA|0.55 0.48 NA NA|
|MATHBENCH|D SM|0.55 0.35 0.33 0.30|0.82 0.63 0.32 0.25|0.82 0.68 0.32 0.29|0.49 0.57 NA NA|0.81 0.68 NA NA|0.67 0.53 NA NA|
|JEEBENCH|D SM|0.37 0.31 0.28 0.26|0.30 0.22 0.21 0.21|0.49 0.54 0.08 0.25|0.15 0.13 NA NA|0.53 0.46 NA NA|0.20 0.25 NA NA|
Datasets Models GPT-4o GPT-4 GPT-3.5Turbo Llama-2-7b-chat Mixtral-8x7B Phi-3-mini
Correct Incorrect Correct Incorrect Correct Incorrect Correct Incorrect Correct Incorrect Correct Incorrect
D 0.81 0.54 0.92 0.62 0.88 0.77 0.87 0.83 0.85 0.74 0.77 0.66
GSM-8K
SM 0.33 0.27 0.37 0.31 0.37 0.32 NA NA NA NA NA NA
D 0.48 0.54 0.76 0.70 0.76 0.67 0.78 0.59 0.73 0.66 0.55 0.48
MATH
SM 0.32 0.28 0.30 0.26 0.33 0.28 NA NA NA NA NA NA
MATHBENCH D 0.55 0.35 0.82 0.63 0.82 0.68 0.49 0.57 0.81 0.68 0.67 0.53
SM 0.33 0.30 0.32 0.25 0.32 0.29 NA NA NA NA NA NA
JEEBENCH D 0.37 0.31 0.30 0.22 0.49 0.54 0.15 0.13 0.53 0.46 0.20 0.25
SM 0.28 0.26 0.21 0.21 0.08 0.25 NA NA NA NA NA NA
**14** **Average reasoning Step Length**
We noticed that the average word length of rectified reasoning for correct and incorrect for GPT-4o
was higher than other models. Table 11 presents the average word length of the rectified reasoning
step for all datasets across the models.
Table 11: Average length of rectified reasoning steps on MWP-MISTAKE dataset
|Col1|GSM-8K|MATH|MATHBENCH|JEEBENCH|Average|
|---|---|---|---|---|---|
|Model|D SM|D SM|D SM|D SM|D SM Overall|
|GPT-4o GPT-4 GPT-3.5Turbo Llama-2-7b-chat Mixtral-8x7B Phi-3-mini Claude-3-Opus|100.14 131.47 66.59 122.24 66.58 126.30 44.73 NA 63.04 NA 84.92 NA 62.18 138.91|147.50 182.69 79.32 121.59 94.17 124.56 113.35 NA 88.26 NA 115.10 NA 70.60 134.05|312.11 323.45 146.54 140.43 140.50 177.36 177.67 NA 140.57 NA 172.57 NA 144.85 192.84|647.66 619.09 356.71 322.53 670.34 338.53 137.05 NA 402.79 NA 293.90 NA 561.88 438.44|301.85 314.18 308.01 162.29 176.69 169.49 242.90 191.69 217.29 118.20 NA 118.20 173.67 NA 173.67 166.62 NA 166.62 209.88 226.06 217.97|
GSM-8K MATH MATHBENCH JEEBENCH Average
Model D SM D SM D SM D SM D SM Overall
GPT-4o 100.14 131.47 147.50 182.69 312.11 323.45 647.66 619.09 301.85 314.18 308.01
GPT-4 66.59 122.24 79.32 121.59 146.54 140.43 356.71 322.53 162.29 176.69 169.49
GPT-3.5Turbo 66.58 126.30 94.17 124.56 140.50 177.36 670.34 338.53 242.90 191.69 217.29
Llama-2-7b-chat 44.73 NA 113.35 NA 177.67 NA 137.05 NA 118.20 NA 118.20
Mixtral-8x7B 63.04 NA 88.26 NA 140.57 NA 402.79 NA 173.67 NA 173.67
Phi-3-mini 84.92 NA 115.10 NA 172.57 NA 293.90 NA 166.62 NA 166.62
Claude-3-Opus 62.18 138.91 70.60 134.05 144.85 192.84 561.88 438.44 209.88 226.06 217.97
-----
**15** **Data contamination and Memorization**
Two different types of prompts were used to test out memorization of individual instances of the
dataset. The reasoning step was split into two halves, and models were prompted in two different
ways (general and guided) to complete the second half of the reasoning step. To detect contamination
we evaluated the overlap score between generated and reference instances using ROUGE-L Score.
Listing 4 and Listing 5 shows the prompt used for guided and generation instructions respectively.
```
You are provided with a full question and the first answer
piece from the test split of the GSM8K dataset. Finish the
second answer piece of the instance as exactly appeared in
the dataset. Only rely on the original form of the instance
in the dataset to finish the second answer piece. Output
only the second answer piece.
```
Listing 4: Guided instruction for dataset GSM8K
```
Based on the provided question, finish the second answer piece
based on the first answer piece, such that these two pieces
become a single instance answer. Output only the second
answer piece.
```
Listing 5: General instruction for dataset GSM8K
Here GSM-8K and test are the extra information provided for the model to uniquely identify instances
from the source dataset and complete the reasoning step.
Table 12 presents the complete result for the average ROUGE-L score of guided and general for all
datasets across all models.
Table 12: Rouge L score between guided and general instructions on MWP-MISTAKE dataset
|Datasets|Models|GPT-4o|GPT-4|GPT-3.5Turbo|Llama-2-7b-chat|Mixtral-8x7B|Phi-3-mini|
|---|---|---|---|---|---|---|---|
|||Guided General|Guided General|Guided General|Guided General|Guided General|Guided General|
|GSM-8K|D SM|0.57 0.44 0.55 0.51|0.67 0.56 0.57 0.55|0.53 0.49 0.49 0.47|0.26 0.28 0.30 0.32|0.46 0.44 0.55 0.50|0.32 0.32 0.42 0.41|
|MATH|D SM|0.44 0.25 0.51 0.38|0.52 0.48 0.54 0.54|0.39 0.38 0.45 0.44|0.25 0.26 0.30 0.29|0.39 0.32 0.48 0.46|0.26 0.27 0.38 0.39|
|MATHBENCH|D SM|0.43 0.41 0.40 0.38|0.48 0.46 0.43 0.42|0.38 0.36 0.39 0.38|0.26 0.28 0.30 0.33|0.36 0.36 0.40 0.38|0.30 0.30 0.29 0.30|
|JEEBENCH|D SM|0.43 0.39 0.32 0.29|0.42 0.40 0.34 0.35|0.34 0.33 0.31 0.24|0.27 0.25 0.22 0.25|0.38 0.34 0.26 0.27|0.33 0.31 0.20 0.22|
**16** **Running Experiment Multiple Times**
While running experiments on all models (LLMs and SLMs) we used the default hyperparameters to
generate tokens. We ran a subset of the dataset on different prompt variations and saw comparable
performance for various prompts. Due to the limitation of the API key, we were only able to run
GPT-4o model on the GSM-8K dataset. On rerun we got very similar results, with an error rate of <=
0.01.
-----
| [
"Joykirat, Singh",
"Akshay, Nambi",
"Vibhav, Vineet"
] | 2024-06-16T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2406.10834 | https://arxiv.org/abs/2406.10834 | https://www.semanticscholar.org/paper/a95e16c6bbf912a2452fc974d3f6a50482726877 |
FGeo-HyperGNet: Geometric Problem Solving Integrating Formal Symbolic System and Hypergraph Neural Network | Geometric problem solving has always been a long-standing challenge in the fields of automated reasoning and artificial intelligence. We built a neural-symbolic system to automatically perform human-like geometric deductive reasoning. The symbolic part is a formal system built on FormalGeo, which can automatically perform geomertic relational reasoning and algebraic calculations and organize the solving process into a solution hypertree with conditions as hypernodes and theorems as hyperedges. The neural part, called HyperGNet, is a hypergraph neural network based on the attention mechanism, including a encoder to effectively encode the structural and semantic information of the hypertree, and a solver to provide problem-solving guidance. The neural part predicts theorems according to the hypertree, and the symbolic part applies theorems and updates the hypertree, thus forming a predict-apply cycle to ultimately achieve readable and traceable automatic solving of geometric problems. Experiments demonstrate the correctness and effectiveness of this neural-symbolic architecture. We achieved a step-wised accuracy of 87.65% and an overall accuracy of 85.53% on the formalgeo7k datasets. | null | [
"Xiaokai, Zhang",
"Na, Zhu",
"Tuo, Leng",
"Yang, Li",
"Cheng, Qin",
"Zhenbing, Zeng"
] | 2024-04-22T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2402.11461 | https://arxiv.org/abs/2402.11461 | null |
|
FRACTAL: Fine-Grained Scoring from Aggregate Text Labels | Large language models (LLMs) are being increasingly tuned to power complex generation tasks such as writing, fact-seeking, querying and reasoning. Traditionally, human or model feedback for evaluating and further tuning LLM performance has been provided at the response level, enabling faster and more cost-effective assessments. However, recent works (Amplayo et al. [2022], Wu et al. [2023]) indicate that sentence-level labels may provide more accurate and interpretable feedback for LLM optimization. In this work, we introduce methods to disaggregate response-level labels into sentence-level (pseudo-)labels. Our approach leverages multiple instance learning (MIL) and learning from label proportions (LLP) techniques in conjunction with prior information (e.g., document-sentence cosine similarity) to train a specialized model for sentence-level scoring. We also employ techniques which use model predictions to pseudo-label the train-set at the sentence-level for model training to further improve performance. We conduct extensive evaluations of our methods across six datasets and four tasks: retrieval, question answering, summarization, and math reasoning. Our results demonstrate improved performance compared to multiple baselines across most of these tasks. Our work is the first to develop response-level feedback to sentence-level scoring techniques, leveraging sentence-level prior information, along with comprehensive evaluations on multiple tasks as well as end-to-end finetuning evaluation showing performance comparable to a model trained on fine-grained human annotated labels. | This work is the first to develop response-level feedback to sentence-level scoring techniques, leveraging sentence-level prior information, along with comprehensive evaluations on multiple tasks as well as end-to-end finetuning evaluation showing performance comparable to a model trained on fine-grained human annotated labels. | ## FRACTAL: Fine-Grained Scoring from Aggregate Text Labels
Yukti Makhija
Priyanka Agrawal
Google DeepMind
[email protected]
Rishi Saket
Google Research India
[email protected]
Google Research India
[email protected]
Aravindan Raghuveer
Google Research India
[email protected]
April 9, 2024
**Abstract**
Large language models (LLMs) are being increasingly tuned to power complex generation tasks such
as writing, fact-seeking, querying and reasoning. Traditionally, human or model feedback for evaluating
and further tuning LLM performance has been provided at the response level, enabling faster and more
cost-effective assessments. However, recent works (Amplayo et al. [2022], Wu et al. [2023]) indicate that
sentence-level labels may provide more accurate and interpretable feedback for LLM optimization. In
this work, we introduce methods to disaggregate response-level labels into sentence-level (pseudo-)labels.
Our approach leverages multiple instance learning (MIL) and learning from label proportions (LLP)
techniques in conjunction with prior information (e.g., document-sentence cosine similarity) to train a
specialized model for sentence-level scoring. We also employ techniques which use model predictions
to pseudo-label the train-set at the sentence-level for model training to further improve performance.
We conduct extensive evaluations of our methods across six datasets and four tasks: retrieval, question
answering, summarization, and math reasoning. Our results demonstrate improved performance compared
to multiple baselines across most of these tasks. Our work is the first to develop response-level feedback to
sentence-level scoring techniques, leveraging sentence-level prior information, along with comprehensive
evaluations on multiple tasks as well as end-to-end finetuning evaluation showing performance comparable
to a model trained on fine-grained human annotated labels.
### 1 Introduction
Large language models (LLMs) demonstrate a remarkable ability to generate text, seek facts, answer complex
queries, and perform logical reasoning tasks. Their progress is driven largely by a complex interplay of
model architecture, training data, and tuning procedures. The process of LLM refinement relies heavily on
their evaluation and preference feedback, typically from humans or automated model-based scoring. These
feedback and scores are commonly used during various phases of LLMs development like model evaluation,
reinforcement learning (RLHF/ RLAIF), bulk data distillation, and model hedging. However, such feedback
has typically been taken at the response level, enabling efficient and cost-effective assessments of overall
output quality.
-----
An emerging body of research (Amplayo et al. [2022], Lightman et al. [2023]) suggests that the sentence or
step-level evaluation is more reliable and precise over response-level evaluation. Segment-level feedback
promises improved accuracy by localizing strengths and weaknesses within a generated response. It further
provides greater interpretability, allowing for more targeted LLM fine-tuning by highlighting the specific
portions of a response that contribute to or detract from its overall quality. Moreover, collecting finer-grained
human feedback is shown to result in considerably improved LLM training Wu et al. [2023].
Even in situations where it is feasible to directly collect fine-grained feedback, doing so for Side-by-Side
(SxS) feedback could remain challenging and might also lead to significantly more expensive annotation
process.
To overcome this lack of fine-grained feedback, our work proposes methods to dis-aggregate response-level
labels into sentence-level pseudo-labels that accurately reflect the underlying quality distribution within a
larger response.
As in supervised training, the first component of our solution is a methodology to train a model on the
response-labels to predict the scores (label probabilities) for instances. For this we leverage and build upon
techniques from multiple instance learning (MIL) and learning from label proportions (LLP). These have
been used to train predictive models on datasets partitioned into bags or sets of instances (e.g. feature-vectors
or sentences). Each bag has an aggregated label i.e., bag-label which is thought to be derived from the
(unknown) instance-labels of the bag via an aggregation function. In MIL, the aggregation is the MAX or
MIN of binary or ordinal instance-labels – applicable to question-answering (relevance), summarization and
math reasoning tasks – while LLP, which models the bag-label as the AVG of the instance-labels, is applicable
to retrieval tasks. A standard technique in MIL and LLP is bag-loss which uses some approximation of
the aggregation function to compute the aggregated model prediction for each bag, and minimizes a loss
between the bag-labels and the aggregated predictions, summed over all bags. While bag-loss is usually a
strong baseline, it does not use any instance-level knowledge to guide the optimization. Such applicationspecific information can the modeled as a prior distribution on the instance-labels. Our main technical
contributions include designing priors – based on document-sentence similarity and correlations – for the
various natural language tasks the we study. These priors are used to enhance bag-loss via an additional loss
term, complementing the weak supervision provided by the bag-labels.
While the trained model predictions are probabilities for the instance-labels, we also develop pseudo-labeling
mechanisms to label instances using model predictions, while ensuring consistency with the bag-labels. This
provides a proxy for instance-label supervision and we demonstrate that training a model on these labels can
improve the performance for downstream tasks.
Our contributions include:
1. We study retrieval, question answering, summarization, and math reasoning tasks, and define various
formulations for disaggregating response-level labels to sentence-level labels applicable to these tasks.
2. We define the corresponding formulations of learning from response-level labels as MIL and LLP
problems, as well as the sentence (instance) level priors for these tasks based on document-sentence
similarity scores and correlations between sentences.
3. We propose enhancements of the bag-loss method, by adding prior information as new loss loss terms,
for training instance-level models.
4. We develop pseudo-labeling strategies to transform instance-level model predictions into labels which
are consistent with the response-level labels, allowing us to train the model on the derived pseudo-labels.
5. We perform a comprehensive evaluation of our methods across a diverse set of six datasets spanning
the studied tasks. Our results demonstrate performance improvements over established baselines.
We call our method for generating instance-level scores as FRACTAL, whose elements are illustrated in
-----
FRACTAL
Task - specific
Sentence
Scoring Model
|FRACTAL Bag Loss + Pseudo Prior Loss(es) Labeling PsLAB Sentence Bags Disaggregation Model Disaggregated Bag Calibrated (Aggregate Labels) (Weighted Bag Loss with Priors) (Sentence Scores) Pseudo Labels|Col2|
|---|---|
||Bag Loss + Pseudo Prior Loss(es) Labeling PsLAB Sentence Bags Disaggregation Model Disaggregated Bag Calibrated (Aggregate Labels) (Weighted Bag Loss with Priors) (Sentence Scores) Pseudo Labels|
|||
Training Step Inference/Processing Response Sentences Response Labels Sentence Labels
Figure 1: Overview of our proposed method, FRACTAL. Input is a set of responses each with a response
label. A response is a bag of sentences. The output is a model that can predict the score for each sentence in
a response. The semantic meaning of a score depends on how the response label was defined. FRACTAL
consists of three key components a) Loss Function Design (Section 4.1). b) Differentiable Approximations of
Aggregation Functions (Section 4.2) c) Max-Likelihood Pseudolabeling (Section 4.3)
Figure 1 and described in more detail in Sec. 4.
### 2 Related Work
**Multiple Instance Learning (MIL). Here the bag label is modeled as MAX or MIN of the its (unknown)**
instance-labels (typically labels are {0, 1}-valued). Given a such a dataset of bags the goal is to train a model
either to predict the bag-labels or in many cases the label of instances. This formulation was introduced
to model drug activity detection in the work of Dietterich et al. [1997] and was shown thereafter to have
applicability in several other domains including drug discovery Maron and Lozano-Pérez [1997], time series
prediction Maron [1998], information retrieval Lozano-Pérez and Yang [2000] and the analysis of medical
images Wu et al. [2015a] and videos Babenko et al. [2009], Sikka et al. [2013]. The work of Ramon and
De Raedt [2000] proposed the bag-loss method using the log-sum exponential approximation to MIN, and was
followed by adaptations of boosting and logistic regression using similar differentiable approximations Zhang
et al. [2005], Ray and Craven [2005]. More specialized MIL methods such as diverse-density (DD) Maron
and Lozano-Pérez [1997] and and its EM-based variant, EM-DD Zhang and Goldman [2001] have also been
developed. More recent works have proposed deep learning methods for MIL based on convolutional and
attention mechanisms Wu et al. [2015b], Ilse et al. [2018].
**Learning from Label Proportions (LLP). This is a variant of the above in which the bag-label is the**
average of the instance-labels, with the goal of training an instance-label predictor from bag data, and arises
in the context of label privacy concerns Rueping [2010], costly supervision Chen et al. [2004] or lack of
labeling instrumentation Dery et al. [2017]. As in the MIL case, the early works on LLP applied traditional
supervised learning techniques de Freitas and Kück [2005], Musicant et al. [2007], Rueping [2010]. Later
works Quadrianto et al. [2009], Patrini et al. [2014] directly estimated model parameters from bag-labels
while Yu et al. [2013] proposed a specialized SVM for LLP. Subsequently, methods using bag pre-processing
Scott and Zhang [2020], Saket et al. [2022] and training deep networks Kotzias et al. [2015], Liu et al. [2019]
have been developed. In particular, Ardehaly and Culotta [2017] proposed the bag-loss method for LLP,
which – known as DLLP in the literature – has been used used as a baseline in subsequent works.
**MIL and LLP for NLP. Applications such as sentiment analysis Pappas and Popescu-Belis [2014], Angelidis**
-----
and Lapata [2018] and document modeling Pappas and Popescu-Belis [2017] have previously admitted
MIL techniques, while more recently Liu et al. [2022] modeled offensive language detection as an MIL
problem and proposed a mutual attention based mechanism. On the other hand, the applications of LLP are
relatively sparser: Ardehaly and Culotta [2016] applied it to domain adaptation for text data, while recent
work Chauhan et al. [2023] proposed a novel method improving on the baseline model training technique of
Ardehaly and Culotta [2017] for text classification.
For both MIL and LLP, previous works have proposed pseudo-labeling based model training methods, in
which the weak-supervision of bag-labels is used along with model predictions to derive pseudo-labels which
can be used to train or fine-tune models. For e.g. pseudo-labels are computed via regularization Wang et al.
[2023], Liu et al. [2021] or expectation-maximization Luo et al. [2020], Barucic and Kybic [2022] techniques.
### 3 Preliminaries
Let X be the underlying set of instances and Y be the label-set which is typically {0, 1} or {0, 1, . . ., L} for
binary or integer labels respectively. A dataset is a collection of labeled instances.
A bag B is a subset of X and yB denotes its label which is thought to depend on the labels of the instances in
_B via an aggregation function Agg which maps tuples with elements from_ to a bag label-set which is the
_Y_ _Y_
real-segment spanned by Y i.e., either [0, 1] or [0, L]. Specifically, if B = {x1, . . ., xk} and yi is the label of
**xi (i ∈** [k]), then yB = Agg (x1, . . ., xk). Typically, AGG is either MIN, MAX or AVG.
We consider prior information about the labels on individual instances, for e.g. through some unsupervised
modeling or side information. For each x in the dataset, its prior px is a soft label in [0, 1] or [0, L]. Another
class of priors is given by pxz for pairs (x, z) of instances.
Some applications provide preference bag-labels which encode comparisons between pairs of bags. Specifically, for a pair of bags (B1, B2) the preference bag-label is yB1B2 which is 1 if yB1 > yB2, and −1 if
_yB1_ _yB2._
_≤_
**Modeling Task. Given as input a collection B of pairwise-disjoint (i.e., non-overlapping) bags along with**
their bag-labels, possibly along with the priors {px} or {pxz}, the goal is to output a model predicting a
score for each instance in X . In the preference evaluation, we evaluate the model in terms of the accuracy of
the preference labels assigned by the model on a test set of bags.
### 4 Our Techniques
We present the the components of the FRACTAL method along with the BagLoss baseline approach.
**4.1** **Bag-loss with priors**
Let us denote by probAGG some differentiabale approximation of AGG to soft-labels Y. The BagLoss method
optimizes the following loss, for a model M:
_yB)_
_Ltotbag(_, ) := [∑][B][∈B][ L][bag][ (][y][B][, ˜] (1)
_B_ _M_
_|B|_
-----
where ˜yB = probAGG ( (x))x _B_ is the aggregate prediction of bag B, Lbag is some loss function. In our
_M_ _∈_
bag loss with priors, PriorsBagLoss method, we have an additional loss which incorporates the priors. For
priors {px} we have:
_Ltotprior1(_, ) := [∑][x][∈][X][ L][prior][ (][p][x][,][ M][(][x][))] (2)
**_X_** _M_
_|X |_
while for priors {pxz} defined over pairs of instances:
_Ltotprior2(_, )
**_X_** _M_
:= [∑][(][x][,][z][)][∈][X][ 2][ L][prior][ (][p][xz][,][ M][(][x][)][M][(][z][))] (3)
_|X |[2]_
where Lprior is a loss. The total loss is a combination of bag and prior losses:
_Ltot = λLtotbag + λ1Ltotprior1 + λ2Ltotprior2,_
for some λ, λ1, λ2 ∈ [0, 1], s.t. λ + λ1 + λ2 = 1.
**Minibatch based model training. For a given batch size q, learning rate and optimizer as well as hyperpa-**
rameter λ ∈ [0, 1], we train the predictor model M by doing the following for N epochs and K steps per
epoch:
1. Sample a minibatch S of q bags _S_ .
_B_ _⊆B_
2. Using current model predictions, compute Ltotbag and Ltotprior restricted only to the bags and instances
in S, and compute Ltot.
3. Using the required gradients from (1), (2) and (3) along with the optimizer and learning rate, update
the weights of the model M.
**4.1.1** **Preference based bag-loss with priors**
The PrefBagLoss approach is similar to those in the previous subsection, where instead of Ltotbag we have a
preference based loss for the pairs of bags S for which preference labels are available. For a pair of bags
(B1, B2) with yB1B2 be the preference-label let ˜yB1 and ˜yB2 be positive real-valued aggregate predictions of
_B1 and B2 respectively. Using the Bradley and Terry [1952] model we have the loss:_
_Lpref(B1, B2, yB1B2) := yB1B2 log_ _[y][B][2]_ (4)
_yB1_
which is averaged over all pairs in S to obtain Ltotpref which is minimized.
The minibatch training now samples pairs of bags and computes Ltotpref restricted to the sampled pairs. In the
priors based extension, PriorsPrefBagLoss, the Ltotprior loss remains the same, over all the instances in the
minibatch.
**4.2** **Approximations to MIN**
In the binary-label case, the standard baseline is Mult which is just the product of the probabilities. We
employ the in built TensorFlow (TF) approximation tf_reduce_min in our experiments (See Appendix B for
more details and ablations with other approximations). Note that MAX can be derived from MIN applied to
flipped variables in the binary case. For integer labels we use the respective TF approximations for MIN and
MAX.
-----
|Dataset (Input for training)|Bags Instances Task Prior|
|---|---|
|QA-Feedback (Question, Knowledge Passage, Pair of Responses, Preference Label)|Both responses are Sentences in a response + Learn sentence-level scores for relevance knowledge passage- treated as separate question + Knowledge Pas- using only the preference bag-label (indi- sentence cosine sim., bags. sages cates which response is better). The aggre- corr. b/w sentences. gation function used is AVG|
|FirA (Paragraph, Query, Rel- evance Score)|Paragraph sentence of the paragraph + Learn {0, 1, 2, 3, 4}-valued relevance score query-sentence co- query for each sentence wrt query. The aggrega- sine similarity tion function used is MAX|
|MultiSpanQA (Context, Question, Label (answer present in context)|Context Sentences of the context + Identify sentences of the context which con- query-sentence co- Questions tain the answer to the question. We use sine sim., corr. b/w MAX for this binary classifciation setup. sentences|
|AquaMuSe (Documents, Query, Summary, Entailment Label)|Summary Sentence of the summary + Given a query, document and bag-level bi- doc-sentence cosine documents + query nary entailment label, determine the non- sim., corr. b/w sen- entailed sentences in a summary. MIN is tences the aggregate function used in this setup.|
|WikiCatSum (Documents, Summary, Entailment Label)|Summary Sentence of a summary + Given a document and bag-level binary en- doc-sentence cosine documents tailment label, determine the non-entailed sim., corr. b/w sen- sentences in a summary. MIN is the aggre- tences gate function used in this setup.|
|PRM800K (MATH Prob- lem, step-wise solution, La- bel(correctness)|Solution to the Step of the solution + ques- Using the binary aggregate label indicating question-step cosine MATH problem tion the correctness of the solution, identify all sim., corr. b/w steps incorrect steps in the solution.|
Table 1: Summary of the bags, labels, instances, annotations and priors for each dataset.
**4.3** **Max-Likelihood Pseudo-labeling and Model Training**
Our PsLab pseudo-labeling method uses the prediction of the model M trained as per the techniques
described above, to output the max-likelihood instance-labels for each bag, consistent with the bag-label. We
apply this to the case of 0, 1 -labels and model predictions being probabilities, and the MIN aggregation (the
_{_ _}_
MAX case is analogous). For a given bag, consider the distribution in which each x in the bag is independently
1 with probability M(x) and 0 otherwise. If the bag-label is 1 then PsLab outputs 1 for each instance in the
bag. When the bag-label is 0, then PsLab outputs the maximum likelihood valid configuration of labels i.e.,
with at least one 0-label. This is can be efficiently done via the following algorithm:
1. Compute the labeling Γ : B →{0, 1} where Γ(x) = 1 if M(x) > 1 and 0 otherwise.
2. If Γ(x) = 1 for all x ∈ _B, then modify Γ by flipping the label of z ∈_ _B which has the minimum value_
of M(z).
**Model Training on Pseudo-labels. After computing the pseudo-labels on the training bags, we now have a**
train-set with labeled instances. The model retrained on this dataset is evaluated for comparative performance.
### 5 Tasks and Datasets
Tables 1 and 2 provide concise descriptions of the tasks and datasets that we consider. In the rest of this
section we provide more details on various aspects of these datasets and tasks.
**Long-form Question Answering.** We use QA-Feedback dataset, an SxS preference dataset collected and
released by Wu et al. [2023]. These are human provided preferences on pairs of model generated responses
for input questions and relevant passages from ASQA Stelmakh et al. [2023], an ambiguous factoid QA
-----
|Task Dataset|Objective Splits (Train|Test)|
|---|---|
|Long-form QA QA-Feedback Retrieval FirA Retrieval MultiSpanQA Summarization AquaMuSe Summarization WikiCatSum Math Reasoning PRM800K|Preference 13.5k | 3k Regression 18k | 4k Classification 5k | 650 Classification 3k | 500 Classification 45k | 1.5k Classification 1k | 100|
Table 2: Summary of the setup used for each dataset
dataset. The data further contains segment-level annotations, and we make use of “irrelevance, repetition,
or incoherence” category for sentence-level evaluation. The responses are the bags and the preferences are
the preference bag-labels.The aggregation function used for the bag-level loss term is AVG, and the priors
integrated into the loss function include the cosine similarity between knowledge passages and each sentence
of the response. Specifically:
**x, U**
P1(x) := cosprior(x) = [1] 1 + _⟨_ _⟩_ (5)
2 **x** 2 **U** 2
_∥_ _∥_ _∥_ _∥_
where x and U are (embeddings of) a sentence and the relevant passage, and we scale cosine-similarity so
that its value is in [0, 1]. Additionally, we incorporate the Pearson’s correlation between a pair of sentences as
a prior:
P2(x, z) := corrprior(x, z) = [1] (6)
2 [(][1][ +][ ρ][xz][)]
where ρxz is the Pearson’s correlation between sentence embeddings x and z.
**Retrieval.** We use two datasets for retrieval tasks: MultiSpanQA and FiRA. MultiSpanQA Li et al. [2022]
is aimed at questions that necessitate identifying multiple discontinuous text spans from a given passage to
compose the complete answer. This dataset consists of question-context pairs, with annotated answer spans
for the train and validation splits. We randomly select 25% of the train-split as the test-split. Context is
treated as a bag, with its instances being sentences which labeled 1 if they overlap with annotated spans, and 0
otherwise. The MAX aggregation is used to indicate the presence an answer in the context. All MultiSpanQA
samples contain answers to questions, resulting in all positive bags. To balance the dataset, negative bags are
created for half of the samples by extracting context chunks without answers. Thus, both the instance and
bag labels are {0, 1}-valued.
The FiRA dataset Hofstätter et al. [2020] comprises word-level relevance annotations using {0, . . ., 4}-valued
labels for the relevance of each word in a paragraph to a query. We compute the word-level average across
annotators and then the maximum across all words in a sentence to derive sentence-level scores. Similar
to the previous setup, we treat the paragraph as a bag, its sentences as instances, and employ MAX as the
aggregation function. The instance and bag-level scores range from 0 to 4, with the goal of optimizing a
regression loss.
For both datasets, we integrate a correlation prior between sentence pairs and a cosine-similarity prior (see
(5), (6)) between the query and each sentence of the context.
**Summarization.** We utilize two datasets: WikiCatSum and AquaMuSe. The WikiCatSum dataset PerezBeltrachini et al. [2019] is specifically designed for multi-document summarization tasks, focusing on
-----
generating Wikipedia-style lead sections for entities within three domains: Companies, Films, and Animals
out of which we focus on the Films and Animals domains. On the other hand, the AquaMuSe dataset Kulkarni
et al. [2020] is tailored for multi-document, question-focused summarization.
We adopt the binary entailment metric for this task. The reference summaries already provided in these two
datasets serve as the entailed summaries[1]. Each sentence in these summaries is considered positively entailed.
To generate non-entailed summaries, we synthesize negatives by employing various manipulations, similar to
Yin et al. [2021]. Firstly, we perturb the reference summary through sentence replacement. This involves
randomly selecting k sentences, where k is less than the total sentences in the summary, and iteratively
feeding their left context to an ULM to predict the next sentence. The predicted sentence is then used to
replace the selected one. Additionally, we explore the standard word replacement technique, which randomly
masks k words whose POS tags are among proper nouns, numbers, and verbs, to introduce factual errors in
the summaries. The masked words are then predicted using BERT. The number of replaced sentences and
words is randomly selected for each sample. The perturbed sentences within the summary are considered
non-entailed, while the remaining unchanged sentences are deemed entailed. Thus, the sentence as well
as bag labels are 0, 1 -valued with 1 indicating entailed and 0 non-entailed, with MIN as the aggregation
_{_ _}_
function. Examples of entailed and non-entailed summaries are provided in Appendix E. As in previous
tasks, we incorporate sentence-document cosine similarity and sentence correlation priors into our methods.
Additionally, we experiment with NLI entailment scores Honovich et al. [2022] as priors for this task.
**Math Reasoning** We utilize Phase 1 of the PRM800K dataset Lightman et al. [2023] releasing step-level
annotations for model-generated solutions to MATH problems Hendrycks et al. [2021]. The task at hand is to
identify all the incorrect steps in the solution.
|Method|FirA MSQA QA-FB AqMse WikiCS PRM|
|---|---|
|cos-sim Resp-level BagLoss FRACTAL (ours) Supervised|- 0.455 0.535 0.632 0.408 0.42 0.319 0.583 0.491 0.697 - 0.528 0.304 0.661 0.509 0.751 0.477 0.569 0.294 0.693 0.532 0.814 0.645 0.597 0.283 0.729 0.651 0.876 0.837 0.613|
Table 3: We compare our method against several baselines across all datasets. The columns left to right are
for the FiRA, MultiSpanQA, QA-Feedback, WikiCatSum, AquaMuse and PRM800K datasets. For the FiRA
dataset, we report MAE, while for the others we report AUC-ROC.
### 6 Experiments
We evaluate FRACTAL along with baseline techniques (listed below) on the tasks and datasets describes in
Sec. 5.
**Off the shelf Baselines.** The following methods directly score the sentences:
_Semantic Similarity: This uses similarity of individual sentences from the response with the input context to_
estimate their relevance for the task. For this, we use the cosine similarity (see (5)) between the corresponding
embeddings.
1In this work, we do not filter any noise present in the existing data splits.
-----
_Entailment Scorer: For summarization and relevance tasks, we also compute entailment using the NLI scorer_
from TRUE paper Honovich et al. [2022]. This is a T5x-11B model Raffel et al. [2023] finetuned on several
NLI datasets.
**Trainable Baselines.** These use the bag or instance labels to train models.
_BagLoss: This uses BagLoss on bag-labels (or PrefBagLoss in case of preference bag-labels) described in_
Sec. 4.
_Response-level: This uses embeddings for responses and the corresponding response-labels for model training_
while inference is on sentences.
_Supervised: This trains directly on instance-labels to provide an upper baseline for comparison. FRACTAL:_
As described in Sec. 4, this involves PriorsBagLoss (or PriorsPrefBagLoss) based model training using
bag-labels (or preference bag-labels) as well priors, TensorFlow approximations to the the MIN or MAX
functions. The PsLab pseudo-labeling uses the best performing model (trained using prior augmented bag
loss) predictions to pseudo-label the train-set followed by another model training on these resultant instance
(pseudo)-labeled train-set. Note that (i) when we only have preference bag-labels, or (ii) when the model
prediction is a value in [0, L] with L > 1 so that label-probabilities are not available, PsLab is not applicable,
and FRACTAL provides the model trained on our prior augmented bag loss methods. The Lbag and Lprior
losses are taken to be cross-entropy except for mse for the regression objective on the FiRA dataset for which
the priors are also appropriately scaled.
**6.1** **Model Training Setup**
We use the same model architecture across all tasks: a Sentence-T5 Large encoder to generate embeddings
for text components, followed by a 2-hidden layer MLP with 73728 parameters for predicting sentence-level
scores. To handle lengthy documents exceeding 2000 tokens in MultiSpanQA, WikiCatSum, and AquaMuSe
datasets, we partition documents into 1000-token paragraphs which are encoded separately to improve
embedding quality. Subsequently, attention weights representing importance are learnt for each document
split, and the document embedding is obtained through a weighted sum of individual split embeddings. We
report results averaged over 3 randomly seeded trials. We conduct grid search hyperparameter tuning to
identify optimal parameter configurations, including learning rates, weights of prior terms integrated into the
loss function, and batch sizes. The list of optimal hyperparameters for each dataset is provided in Appendix
D.
**6.2** **Results and Discussion**
Tables 4 and 5 provide the detailed evaluations of the baselines and our methods on the test data. In the
tables, PriorsBagLoss(λ1, λ2) and PriorsPrefBagLoss(λ1, λ2) denote the instantiation of these methods with
weights λ1 and λ2 for the losses corresponding to priors P1 and P2 respectively (see (5), (6) and Sec. 4.1,
4.1.1). For the WikiCatSum dataset, we incorporate NLI entailment scores as a prior, assigning a weight of
_λ3 for the corresponding loss term. PsLab denotes the performance of the model trained after pseudo-labeling_
the train-set using the best performing prior augmented bag loss. Table 3 summarizes the performance of the
different baselines listed above and our proposed approach, FRACTAL on the tasks/datasets in Sec. 5.
**FRACTAL renders more precise sentence-level scores. From Table 3, we observe a consistent improvement**
in the sentence scoring over the BagLoss as well as the Response-level baseline across all these datasets
in terms of AUC-ROC, bridging the performance gap between them and the best approach supervised
-----
**Method** **AUC-ROC** **AUC-PR** **Accuracy**
**_MultiSpanQA_**
Supervised 0.729 ± 0.016 0.354 ± 0.008 0.861 ± 0.046
Cosine Similarity 0.455 0.135 0.851
NLI 0.631 0.366 **0.859**
Response-level Model 0.583 ± 0.187 0.217 ± 0.094 0.852 ± 0.003
BagLoss 0.661 ± 0.092 0.309 ± 0.127 0.852 ± 0.133
FRACTAL Methods
PriorBagLoss(0.2, 0) 0.669 ± 0.063 0.311 ± 0.059 0.838 ± 0.071
PriorBagLoss(0.1, 0.2) 0.625 ± 0.07 0.271 ± 0.039 0.851 ± 0.021
PSLAB **0.693 ± 0.115** **0.326 ± 0.071** 0.842 ± 0.052
**_QA Preference Feedback_**
Supervised 0.651 ± 0.009 0.609 ± 0.007 0.688 ± 0.011
Cosine Similarity 0.535 0.526 0.483
Response-level Model 0.491 ± 0.008 0.4643 ± 0.007 0.453 ± 0.015
PrefBagLoss 0.509 ± 0.005 0.526 ± 0.002 0.515 ± 0.009
FRACTAL Methods
PriorPrefBagLoss(0.2, 0) 0.516 ± 0.003 0.494 ± 0.003 0.509 ± 0.007
PriorPrefBagLoss(0, 0.4) 0.528 ± 0.003 0.519 ± 0.002 **0.530 ± 0.006**
PriorPrefBagLoss(0.2, 0.5) **0.532 ± 0.004** **0.526 ± 0.005** 0.521 ± 0.008
**_WikiCatSum_**
Supervised 0.837 ± 0.062 0.894 ± 0.085 0.718 ± 0.085
Cosine Similarity 0.408 0.829 0.362
NLI 0.639 0.817 0.648
BagLoss 0.477 ± 0.093 0.831 ± 0.052 0.562 ± 0.047
FRACTAL Methods
PriorBagLoss(0.2, 0.1, 0) 0.641 ± 0.028 0.879 ± 0.013 0.651 ± 0.017
PriorBagLoss(0, 0, 0.4) 0.642 **0.885** 0.653
PSLAB **0.645 ± 0.038** 0.879 ± 0.057 **0.663 ± 0.091**
**_AquaMuSe_**
Supervised 0.876 ± 0.007 0.926 ± 0.002 0.866 ± 0.008
Cosine Similarity 0.632 0.763 0.649
NLI 0.793 0.889 0.824
Response-level Model 0.696 ± 0.011 0.775 ± 0.009 0.675 ± 0.015
BagLoss 0.747 ± 0.007 0.824 ± 0.005 0.779 ± 0.01
FRACTAL Methods
PriorBagLoss(0.2, 0.1) 0.789 ± 0.004 0.871 ± 0.006 0.831 ± 0.007
PSLAB **0.814 ± 0.006** **0.898 ± 0.007** **0.834 ± 0.01**
**_PRM800K_**
Supervised 0.613 ± 0.028 0.928 ± 0.021 0.709 ± 0.051
Cosine Similarity 0.420 0.873 0.496
Response-level Model 0.528 ± 0.145 0.895 ± 0.037 0.521 ± 0.082
BagLoss 0.569 ± 0.019 0.924 ± 0.011 **0.688 ± 0.071**
FRACTAL Methods
PriorBagLoss(0.5, 0.5) 0.582 ± 0.034 0.927 ± 0.009 0.534 ± 0.044
PriorBagLoss(0, 0.1) 0.579 ± 0.052 0.925 ± 0.017 0.603 ± 0.038
PriorBagLoss(0.1, 0.1) 0.580 ± 0.068 0.926 ± 0.024 0.563 ± 0.049
PSLAB **0.597 ± 0.093** **0.927 ± 0.004** 0.578 ± 0.063
Table 4: Instance-level Evaluation on MultiSpanQA, QA Preference Feedback, WikiCatSum, AquaMuSe and
PRM800K Datasets
(sentence-level trained) model. The use of priors along with pseudo-labeling based model training allows
for an improved estimation of task specific score for sentences. It is interesting to note that BagLoss
-----
**Method** **MAE** **MSE**
Supervised 0.283 ± 0.072 0.141 ± 0.088
Response-level Model 0.319 ± 0.047 0.186 ± 0.098
BagLoss 0.304 ± 0.007 0.163 ± 0.002
FRACTAL Methods
PriorBagLoss(0.3, 0) 0.298 ± 0.002 0.157 ± 0.004
PriorBagLoss(0.2, 0.2) **0.294 ± 0.003** **0.155 ± 0.001**
Table 5: Instance-level Evaluation on FiRA Dataset
|Method|ROUGE|
|---|---|
|SFT + Preference RLHF SFT + FineGrained RLHF (Human Annotation) SFT + FineGrained RLHF (FRACTAL Prediction)|48.96 49.12 49.04|
|---|---|
Table 6: We perform fine-grained RLHF on the QA-Feedback dataset using the framework provided by Wu
et al. [2023] by replacing the Human Annotations with relevance label predictions from a model trained only
on preference labels.
outperforms the Response-level baseline, suggesting that introduction of aggregate loss based methods to
estimate sentence-level scores is itself useful. Response-level model trained with large response doesn’t
generalize well to smaller sentences.
**Leveraging Prior improves Feedback Disaggregation. Among the key ideas of FRACTAL is to augment**
BagLoss with cosine-similarity and correlation priors at the sentence-level (see (5) and (6)). Using a weighted
combination of combination of BagLoss and the prior loss terms, provides substantially improved performance
over either using just the BagLoss, or the cosine-similarity baseline. In effect, our proposed combination
performs better than either of its constituents. We hypothesize that the priors provide sentence-level insight
which complements the aggregate label based optimization of BagLoss. While performance of FRACTAL is
sensitive to the weights of each loss component, during our hyperparameter sweep, we find that the model
tends to select an upweighted BagLoss term for most datasets and metrics except for QA Preference Feedback.
**Calibrated Pseudo-Labels are more helpful. An interesting question is whether the disaggregated scores**
should eventually be calibrated such that their aggregate matches the response scores for training the final
model, as done by our PsLab method in FRACTAL as described in 4.3. The ablations on PsLab are highlighted
for Retrieval, Summarization, and Math Reasoning in Table 4. For all tasks, PsLab results in a more accurate
model compared to both the baselines and the models trained on prior augmented BagLoss.
**Downstream Tasks benefit from sentence-level scoring. Wu et al. [2023] showed that using fine grained**
human labels per segment level across 3 categories irrelevant, untruthful, completeness can help improve
performance. In the above setup, we replace the fine grained human labels of relevance with the FRACTAL
predictions (Refer Long-Form Question Answering task in Section 5). We observe in Table 6 that FRACTAL
(Row 3) has comparable performance to Fine Grained RLHF with human labels (Row 2) and FRACTAL
predictions help improve performance over using only preference labels (Row 1).
**FRACTAL generalizes well across task objectives and aggregation functions As mentioned in 5, we**
experiment with a different task setups to thoroughly investigate the applicability of our approach to various
-----
LLM capabilities. As highlighted earlier in the overview and tasks specific Tables 3-5, we find FRACTAL to
be the best performing method. The task specific discussion of experimental results is deferred to Appendix
A.
### 7 Conclusions
Our works casts the problem of deriving sentence-level scores from response-labels for complex text
generation tasks as that of learning from aggregated labels in the MIL and LLP frameworks. We propose
a novel method FRACTAL, which augments bag-loss using instance level priors to train predictor models,
along with a pseudo-labeling technique for improved model training. Extensive evaluations of FRACTAL
along with vanilla bag-loss and response-level model training baselines, as well as off-the-shelf scorers
demonstrate substantial performance gains from FRACTAL models on six datasets spanning four tasks:
retrieval, question answering, summarization, and math reasoning.
### 8 Limitations
Our method for label calibration and pseudo-labeling works well in classification tasks, leading to better
performance. However, it’s not as effective in regression tasks due to lack of label-specific model predictions
(probabilities). Also, applying this technique becomes difficult when dealing with preference feedback.
### 9 Ethical Considerations
We propose techniques to transform via model training methods, a response-level score into sentence-level
scores. While such response-level scores are commonly obtained by human annotators, we have not conducted
any human annotations and our evaluations are on publicly available datasets. Our method produces artificial
labels for downstream tasks and does not modify in anyway the original response scores or attempt to
associate them to any individual(s).
### References
Reinald Kim Amplayo, Peter J. Liu, Yao Zhao, and Shashi Narayan. Smart: Sentences as basic units for text
evaluation, 2022.
Stefanos Angelidis and Mirella Lapata. Multiple instance learning networks for fine-grained sentiment
analysis. Transactions of the Association for Computational Linguistics, 6:17–31, 2018. doi: 10.1162/
[tacl_a_00002. URL https://aclanthology.org/Q18-1002.](https://aclanthology.org/Q18-1002)
Ehsan Mohammady Ardehaly and Aron Culotta. Domain adaptation for learning from label proportions
using self-training. In IJCAI, pages 3670–3676, 2016.
Ehsan Mohammady Ardehaly and Aron Culotta. Co-training for demographic classification using deep
learning from label proportions. In 2017 IEEE International Conference on Data Mining Workshops
_(ICDMW), pages 1017–1024. IEEE, 2017._
-----
Boris Babenko. Multiple instance learning: algorithms and applications. View Article PubMed/NCBI Google
_Scholar, 19, 2008._
Boris Babenko, Ming-Hsuan Yang, and Serge Belongie. Visual tracking with online multiple instance
learning. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 983–990, 2009.
doi: 10.1109/CVPR.2009.5206737.
Denis Barucic and Jan Kybic. Fast learning from label proportions with small bags. In Proc. IEEE ICIP,
pages 3156–3160, 2022.
Ralph Allan Bradley and Milton E. Terry. Rank analysis of incomplete block designs: I. the method of
[paired comparisons. Biometrika, 39(3/4):324–345, 1952. ISSN 00063444. URL http://www.jstor.](http://www.jstor.org/stable/2334029)
[org/stable/2334029.](http://www.jstor.org/stable/2334029)
Jatin Chauhan, Xiaoxuan Wang, and Wei Wang. Learning under label proportions for text classification. In
_Findings of the Association for Computational Linguistics: EMNLP 2023, pages 12210–12223, Singapore,_
2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-emnlp.817. URL
[https://aclanthology.org/2023.findings-emnlp.817.](https://aclanthology.org/2023.findings-emnlp.817)
L. Chen, Z. Huang, and R. Ramakrishnan. Cost-based labeling of groups of mass spectra. In Proc. ACM
_SIGMOD International Conference on Management of Data, pages 167–178, 2004._
N. de Freitas and H. Kück. Learning about individuals from group statistics. In Proc. UAI, pages 332–339,
2005.
L. M. Dery, B. Nachman, F. Rubbo, and A. Schwartzman. Weakly supervised classification in high energy
physics. Journal of High Energy Physics, 2017(5):1–11, 2017.
Thomas G. Dietterich, Richard H. Lathrop, and Tomás Lozano-Pérez. Solving the multiple instance problem
with axis-parallel rectangles. Artif. Intell., 89(1-2):31–71, 1997.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and
Jacob Steinhardt. Measuring mathematical problem solving with the math dataset, 2021.
Sebastian Hofstätter, Markus Zlabinger, Mete Sertkan, Michael Schröder, and Allan Hanbury. Fine-grained
relevance annotations for multi-task document ranking and question answering. In Proc. of CIKM, 2020.
Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas
Scialom, Idan Szpektor, Avinatan Hassidim, and Yossi Matias. True: Re-evaluating factual consistency
evaluation. In Workshop on Document-grounded Dialogue and Conversational Question Answering, 2022.
[URL https://api.semanticscholar.org/CorpusID:247694170.](https://api.semanticscholar.org/CorpusID:247694170)
Maximilian Ilse, Jakub Tomczak, and Max Welling. Attention-based deep multiple instance learning. In
_International conference on machine learning, pages 2127–2136. PMLR, 2018._
D. Kotzias, M. Denil, N. de Freitas, and P. Smyth. From group to individual labels using deep features. In
_Proc. SIGKDD, pages 597–606, 2015._
Sayali Kulkarni, Sheide Chammas, Wan Zhu, Fei Sha, and Eugene Ie. Aquamuse: Automatically generating
datasets for query-based multi-document summarization, 2020.
-----
Haonan Li, Martin Tomko, Maria Vasardani, and Timothy Baldwin. MultiSpanQA: A dataset for multi-span
question answering. In Marine Carpuat, Marie-Catherine de Marneffe, and Ivan Vladimir Meza Ruiz,
editors, Proceedings of the 2022 Conference of the North American Chapter of the Association for
_Computational Linguistics: Human Language Technologies, pages 1250–1260, Seattle, United States,_
July 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.naacl-main.90. URL
[https://aclanthology.org/2022.naacl-main.90.](https://aclanthology.org/2022.naacl-main.90)
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John
Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step, 2023.
J. Liu, B. Wang, Z. Qi, Y. Tian, and Y. Shi. Learning from label proportions with generative adversarial
networks. In Proc. NeurIPS, pages 7167–7177, 2019.
Jiabin Liu, Bo Wang, Xin Shen, Zhiquan Qi, and Yingjie Tian. Two-stage training for learning from
label proportions. In Zhi-Hua Zhou, editor, Proceedings of the Thirtieth International Joint Conference
_on Artificial Intelligence, IJCAI-21, pages 2737–2743. International Joint Conferences on Artificial_
[Intelligence Organization, 8 2021. doi: 10.24963/ijcai.2021/377. URL https://doi.org/10.24963/](https://doi.org/10.24963/ijcai.2021/377)
[ijcai.2021/377. Main Track.](https://doi.org/10.24963/ijcai.2021/377)
Jiexi Liu, Dehan Kong, Longtao Huang, Dinghui Mao, and Hui Xue. Multiple instance learning for offensive
language detection. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages
7387–7396. Association for Computational Linguistics, 2022. doi: 10.18653/v1/2022.findings-emnlp.546.
[URL https://aclanthology.org/2022.findings-emnlp.546.](https://aclanthology.org/2022.findings-emnlp.546)
T. Lozano-Pérez and C. Yang. Image database retrieval with multiple-instance learning techniques. In Proc.
_ICDE, page 233, 2000._
Zhekun Luo, Devin Guillory, Baifeng Shi, Wei Ke, Fang Wan, Trevor Darrell, and Huijuan Xu. Weaklysupervised action localization with expectation-maximization multi-instance learning. In Computer Vision
_– ECCV 2020, pages 729–745, Cham, 2020. Springer International Publishing._
O. Maron. Learning from ambiguity. PhD thesis, Massassachusetts Institute of Technology, 1998.
Oded Maron and Tomás Lozano-Pérez. A framework for multiple-instance learning. NIPS’97, page 570–576,
1997.
D. R. Musicant, J. M. Christensen, and J. F. Olson. Supervised learning by training on aggregate outputs. In
_Proc. ICDM, pages 252–261. IEEE Computer Society, 2007._
Nikolaos Pappas and Andrei Popescu-Belis. Explaining the stars: Weighted multiple-instance learning
for aspect-based sentiment analysis. In Proceedings of the 2014 Conference on Empirical Methods in
_Natural Language Processing (EMNLP), pages 455–466, Doha, Qatar, October 2014. Association for_
[Computational Linguistics. doi: 10.3115/v1/D14-1052. URL https://aclanthology.org/D14-1052.](https://aclanthology.org/D14-1052)
Nikolaos Pappas and Andrei Popescu-Belis. Explicit document modeling through weighted multiple-instance
learning. Journal of Artificial Intelligence Research, 58:591–626, 2017.
G. Patrini, R. Nock, T. S. Caetano, and P. Rivera. (almost) no label no cry. In Proc. Advances in Neural
_Information Processing Systems, pages 190–198, 2014._
Laura Perez-Beltrachini, Yang Liu, and Mirella Lapata. Generating summaries with topic templates and
structured convolutional decoders. In Anna Korhonen, David Traum, and Lluís Màrquez, editors, Pro_ceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5107–5116,_
-----
Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1504. URL
[https://aclanthology.org/P19-1504.](https://aclanthology.org/P19-1504)
N. Quadrianto, A. J. Smola, T. S. Caetano, and Q. V. Le. Estimating labels from label proportions. J. Mach.
_Learn. Res., 10:2349–2374, 2009._
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou,
Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer,
2023.
Jan Ramon and Luc De Raedt. Multi instance neural networks. In Proceedings of the ICML-2000 workshop
_on attribute-value and relational learning, pages 53–60, 2000._
Soumya Ray and Mark Craven. Supervised versus multiple instance learning: an empirical comparison. In
_Proc. ICML, page 697–704, 2005._
S. Rueping. SVM classifier estimation from group probabilities. In Proc. ICML, pages 911–918, 2010.
Rishi Saket, Aravindan Raghuveer, and Balaraman Ravindran. On combining bags to better learn from label
proportions. In AISTATS, volume 151 of Proceedings of Machine Learning Research, pages 5913–5927.
[PMLR, 2022. URL https://proceedings.mlr.press/v151/saket22a.html.](https://proceedings.mlr.press/v151/saket22a.html)
C. Scott and J. Zhang. Learning from label proportions: A mutual contamination framework. In Proc.
_NeurIPS, 2020._
Karan Sikka, Abhinav Dhall, and Marian Bartlett. Weakly supervised pain localization using multiple
instance learning. In 2013 10th IEEE International Conference and Workshops on Automatic Face and
_Gesture Recognition (FG), pages 1–8, 2013._
Ivan Stelmakh, Yi Luan, Bhuwan Dhingra, and Ming-Wei Chang. Asqa: Factoid questions meet long-form
answers, 2023.
Xi Wang, Fangyao Tang, Hao Chen, Carol Y. Cheung, and Pheng-Ann Heng. Deep semi-supervised
multiple instance learning with self-correction for dme classification from oct images. Medical Image
_Analysis, 83:102673, 2023. ISSN 1361-8415. doi: https://doi.org/10.1016/j.media.2022.102673. URL_
[https://www.sciencedirect.com/science/article/pii/S1361841522003012.](https://www.sciencedirect.com/science/article/pii/S1361841522003012)
J. Wu, Yinan Yu, Chang Huang, and Kai Yu. Deep multiple instance learning for image classification and
auto-annotation. In Proc. CVPR, pages 3460–3469, 2015a.
Jiajun Wu, Yinan Yu, Chang Huang, and Kai Yu. Deep multiple instance learning for image classification
and auto-annotation. In Proceedings of the IEEE conference on computer vision and pattern recognition,
pages 3460–3469, 2015b.
Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A. Smith, Mari
Ostendorf, and Hannaneh Hajishirzi. Fine-grained human feedback gives better rewards for language
model training, 2023.
Wenpeng Yin, Dragomir Radev, and Caiming Xiong. Docnli: A large-scale dataset for document-level
natural language inference. In Findings of the Association for Computational Linguistics: ACL-IJCNLP
_2021. Association for Computational Linguistics, 2021. doi: 10.18653/v1/2021.findings-acl.435. URL_
[http://dx.doi.org/10.18653/v1/2021.findings-acl.435.](http://dx.doi.org/10.18653/v1/2021.findings-acl.435)
-----
F. X. Yu, D. Liu, S. Kumar, T. Jebara, and S. F. Chang. ∝SVM for learning with label proportions. In Proc.
_ICML, volume 28, pages 504–512, 2013._
Cha Zhang, John Platt, and Paul Viola. Multiple instance boosting for object detection. In Advances in
_Neural Information Processing Systems, volume 18. MIT Press, 2005._
Qi Zhang and Sally Goldman. Em-dd: An improved multiple-instance learning technique. Advances in
_neural information processing systems, 14, 2001._
### A Task Specific Discussion of Experimental Results
Following is the detailed per-task analysis:
**Long-form Question Answering. As presented in Table 4, the model trained using our PriorsBagLoss**
method on preference labels with the cosine-similarity prior has the best AUC-ROC, AUC-PR and accuracy
scores in the experiments on the QA-feedback dataset. It outperforms both BagLoss as well as the cosinesimilarity based baselines, the latter one by a significant margin. However, we observe that the performance
of the correlation prior based variant is worse than that of the BagLoss baseline which itself is worse than the
Response-level trained baseline.
**Retrieval. In the MultiSpan-QA dataset experiments (Table 4) we observe that the model trained by applying**
PsLab on the predictions of the model trained on PriorsBagLoss with a combination of the cosine-similarity
and correlation priors achieves the best AUC-ROC and AUC-PR scores (by a significant gap) among the
bag level baselines, while the variant with only the cosine-similarity prior performs the second bet on these
metrics. However, the Response-level trained model and BagLoss achieve marginally higher accuracy scores.
All these methods also handily outperform the cosine-similarity baseline. From the FiRA dataset results
(Table 5) we observe that the our prior augmented BagLoss method, specifically the using both the priors,
performs the best on mae as well as mse metrics.
**Summarization. The experimental results on the WikiCatSum dataset, presented in Table 4, show that our**
PriorsBagLoss method with different combinations of the cosine-similarity and the correlation priors, or
using NLI as a prior, as well as the PsLab method on these models yield the best performance (by a significant
margin) in terms of AUC-ROC, AUC-PR and accuracy metrics. The comparative baselines are the BagLoss,
NLI and cosine-similarity. A similar trend is observed on the AquaMuse dataset (Table 4) on which the our
methods significantly outperform the bag-level baselines, in particular the PsLab applied to the PriorsBagLoss
method yields the best performing model.
**Math Reasoning. On the PRM800k dataset, we observe from the experimental evaluations (Table 4) that the**
BagLoss and PriorsBagLoss methods are best performing among the bag-level baselines and also outperform
the cosine-similarity baselines. While PriorsBagLoss using a combination of cosine-similarity and correlation
priors achieves better AUC-ROC scores, BagLoss has significantly better accuracy while the AUC-PR scores
are similar.
### B Results for Differentiable Minimum Approximations
As mentioned in Sec. 4, in the binary-label case, the standard baseline is Mult which is just the product
of the probabilities. More sophisticated approximations that we include in our study are LSE Ramon and
-----
**Model** **AND Approx** **AUC-ROC** **AUC-PR**
Instance Baseline - 0.837 0.894
Mult 0.449 0.785
BagLoss
PriorBagLoss(0.2, 0.1)
GM 0.463 0.824
tf.reduce_min 0.478 0.829
Mult 0.599 0.858
GM 0.631 0.862
tf.reduce_min 0.643 0.877
Table 7: Results for differentiable AND approximations on WikiCatSum dataset
De Raedt [2000], ISR, NOR and GM Zhang et al. [2005] (see Sec. 2.4.1) of Babenko [2008] for details). We
employ the in built TensorFlow (TF) approximation tf_reduce_min in our experiments, noting that MAX can
be derived from MIN applied to flipped variables in the binary case. For integer labels we use the respective
TF approximations for MIN and MAX. The gradient of tf_reduce_min over a list of variables is non-zero
only for those variables whose value is at the minimum.
Table 7 has an ablation of Mult, GM and tf_reduce_min for the BagLoss and PriorBagLoss methods on the
WikiCatSum dataset, demonstrating that tf_reduce_min outperforms the others in AUC-ROC and AUC-PR
metrics.
### C Aggregate and instance-level evaluations
We also include evaluations of the various methods on a test set of bags w.r.t. bag-level metrics using the
corresponding AGG approximations. Tables 8, 9, 10, 11, 12 and 9 contain the aggregate as well as instance
evaluations.
### D Hyperparameter Tuning
Table 14 contains the best weights for our prior augmented BagLoss method on different datasets, along with
the best learning rates and batch size in the bag-level training.
### E Examples of perturbed summaries from the WikiCatSum and Aquamuse Datasets
Tables 15 and 16 contain the entailed and non-entailed (perturbed) summaries for the Aquamuse and
WikiCatSum datasets respectively.
-----
**Evaluation** **Model** **AUC-ROC** **AUC-PR** **Accuracy** **Precision** **Recall**
Cosine Similarity 0.488 0.287 0.698 0 0
NLI 0.6855 0.522 0.7453 0.7883 0.319
Response-level Model 0.681 ± 0.012 0.525 ± 0.081 0.726 ± 0.033 0.752 ± 0.011 0.428 ± 0.063
Aggregate
Instance
Sentence-level Model 0.653 ± 0.076 0.462 ± 0.050 0.723 ± 0.015 0.599 ± 0.093 0.435 ± 0.187
BagLoss 0.678 ± 0.082 0.525 ± 0.072 0.718 ± 0.057 0.717 ± 0.014 0.391 ± 0.033
0.8 BagLoss + 0.2 P1 0.683 ± 0.053 0.527 ± 0.02 0.722 ± 0.035 0.748 ± 0.009 0.461 ± 0.01
0.7 BagLoss + 0.2 P2 + 0.1 P1 0.665 ± 0.061 0.491 ± 0.04 0.727 ± 0.029 0.693 ± 0.018 0.316 ± 0.037
Cosine Similarity 0.455 0.135 0.851 0 0
NLI 0.631 0.366 0.859 0.872 0.178
Response-level Model 0.583 ± 0.187 0.217 ± 0.094 0.852 ± 0.003 0.529 ± 0.074 0.086 ± 0.04
Sentence-level Model 0.729 ± 0.016 0.354 ± 0.008 0.861 ± 0.046 0.717 ± 0.011 0.438 ± 0.072
BagLoss 0.661 ± 0.092 0.309 ± 0.127 0.852 ± 0.133 0.711 ± 0.074 0.24 ± 0.188
0.8 BagLoss + 0.2 P1 0.669 ± 0.063 0.311 ± 0.059 0.838 ± 0.071 0.65 ± 0.089 0.189 ± 0.112
0.7 BagLoss + 0.2 P2 + 0.1 P1 0.625 ± 0.07 0.271 ± 0.039 0.851 ± 0.021 0.639 ± 0.189 0.135 ± 0.098
FGLAB 0.693 ± 0.115 0.326 ± 0.071 0.842 ± 0.052 0.676 ± 0.043 0.228 ± 0.091
Table 8: Comparison of aggregate and instance-level performance on MultiSpanQA Dataset
**Evaluation** **Method** **AUC-ROC** **AUC-PR** **Accuracy** **Precision** **Recall**
**Preference** Cosine Similarity 0.4978 0.3952 0.477 0.379 0.4985
Response-level Model 0.546 0.4651 0.553 0.437 0.539
BagLoss 0.543 0.4644 0.5463 0.442 0.5324
PriorBagLoss(0.2,0) 0.568 0.4658 0.574 0.439 0.5467
**Instance** Sentence-level Model 0.647 0.611 0.686 0.722 0.418
Cosine Similarity 0.535 0.526 0.483 0.893 0.134
Response-level Model 0.491 0.4643 0.453 0.882 0.278
BagLoss 0.509 0.5269 0.5167 0.814 0.36
PriorBagLoss(0.2, 0) 0.516 0.4936 0.508 0.647 0.715
Table 9: Comparison of Preference and instance-level evaluation on QA Preference Feedback Dataset
-----
**Evaluation** **Model** **MAE** **MSE**
Instance-level Model 0.375 ± 0.09 0.247 ± 0.113
Aggregate
Instance
Response-level Model 0.320 ± 0.042 0.197 ± 0.017
BagLoss 0.326 ± 0.021 0.209 ± 0.007
Instance-level Model 0.283 ± 0.072 0.141 ± 0.088
Response-level Model 0.319 ± 0.047 0.186 ± 0.098
BagLoss 0.304 ± 0.007 0.163 ± 0.002
PriorBagLoss(0.3, 0) 0.298 ± 0.002 0.157 ± 0.004
PriorBagLoss(0.2, 0.2) 0.294 ± 0.003 0.155 ± 0.001
Table 10: Comparison of Aggregate and Instance-level evaluations on FirA Dataset
**Method** **AUC-ROC** **AUC-PR** **Accuracy** **Precision** **Recall**
Instance Baseline 0.837 ± 0.062 0.894 ± 0.085 0.718 ± 0.085 0.926 ± 0.001 0.733 ± 0.003
NLI 0.639 0.817 0.648 0.834 0.559
Cosine Similarity 0.408 0.829 0.362 0.719 0.276
BagLoss 0.477 ± 0.093 0.831 ± 0.052 0.562 ± 0.047 0.769 ± 0.018 0.319 ± 0.048
PriorBagLoss(0.2, 0.1) 0.641 ± 0.028 0.879 ± 0.013 0.651 ± 0.017 0.897 ± 0.009 0.658 ± 0.082
PSLAB 0.645 ± 0.038 0.879 ± 0.057 0.663 ± 0.091 0.884 ± 0.076 0.661 ± 0.092
0.6*BagLoss + 0.4*NLI 0.642 0.885 0.653 0.914 0.619
Table 11: Instance-level Evaluation on WikiCatSum
**Method** **AUC-ROC** **AUC-PR** **Accuracy** **Precision** **Recall**
Sentence-level Model 0.613 ± 0.028 0.928 ± 0.021 0.709 ± 0.051 0.902 ± 0.018 0.993 ± 0.004
Response-level Model 0.528 ± 0.145 0.895 ± 0.037 0.521 ± 0.082 0.851 ± 0.055 0.577 ± 0.065
Cosine Similarity 0.420 0.873 0.496 0.888 0.683
BagLoss 0.569 ± 0.019 0.924 ± 0.011 0.688 ± 0.071 0.904 ± 0.014 0.955 ± 0.048
PriorBagLoss(0.5, 0) 0.582 ± 0.034 0.927 ± 0.009 0.534 ± 0.044 0.920 ± 0.014 0.606 ± 0.037
PriorBagLoss(0, 0.1) 0.579 ± 0.052 0.925 ± 0.017 0.603 ± 0.038 0.908 ± 0.020 0.794 ± 0.055
PriorBagLoss(0.1, 0.1) 0.580 ± 0.068 0.926 ± 0.024 0.563 ± 0.049 0.914 ± 0.028 0.708 ± 0.077
psl 0.597 ± 0.093 0.927 ± 0.004 0.578 ± 0.063 0.911 ± 0.009 0.713 ± 0.128
Table 12: Instance-level Evaluation on PRM800K
-----
**Method** **AUC-ROC** **AUC-PR** **Accuracy** **Precision** **Recall**
Sentence-level Model 0.648 0.611 0.686 0.722 0.418
Cosine Similarity 0.535 0.526 0.483 0.893 0.134
Response-level Model 0.491 0.4643 0.453 0.882 0.278
BagLoss 0.509 0.5269 0.5167 0.814 0.36
PriorBagLoss(0.2, 0) 0.516 0.4936 0.508 0.647 0.715
PriorBagLoss(0, 0.4) 0.527 0.515 0.529 0.519 0.763
PriorBagLoss(0.2, 0.5) 0.532 0.522 0.521 0.738 0.469
Table 13: QA Preference Feedback Results (Instance Evaluation)
**Dataset** _α1_ _α2_ _α3_ **Learning Rate** **Batch Size**
QA-Feedback 0.3 0.2 0.5 1e-5 256
MultiSpanQA 0.8 0.2 0 1e-3 512
WikiCatSum 0.7 0.2 0.1 1e-4 1024
FiRA 0.6 0.2 0.2 1e-5 1024
AquaMuSe 0.7 0.2 0.1 1e-3 512
PRM800K 0.8 0.1 0.1 1e-4 64
Table 14: α1, α2, and α3 represent the coefficients of the BagLoss, cosine similarity prior and correlation
prior terms in the loss function.
-----
|Type|Summary|
|---|---|
|Reference Summary|She also is the first ever woman in Indian History to be nominated as the Rajya Sabha member. She is considered the most important revivalist in the Indian classical dance form of Bharatanatyam from its original’ sadhir’ style, prevalent amongst the temple dancers, Devadasis, she also worked for the re-establishment of traditional Indian arts and crafts.|
|---|---|
|Word Replace- ment|She also is the first ever woman in Indian History to be nominated as the Rajya Sabha Independent member. She is considered the most important revivalist in the Indian classical dance form of Kathak from its Nautch ’sadhir’ style, prevalent amongst the temple singers. Furthermore, she also advocated for the re-establishment of traditional Indian arts and crafts.|
|---|---|
|Sentence Re- placement|She also is the first ever woman in Indian History to be nominated as the Rajya Sabha member. She is considered the most important revivalist in the Indian classical dance form of Bharatanatyam from its original’ sadhir’ style, prevalent amongst the temple dancers. She was also a strong advocate for animal welfare and environmental protection, actively participating in campaigns and legislative efforts throughout her life.|
|---|---|
Table 15: Example of the entailed and non-entailed versions of the summary from AquaMuSe Dataset. We
either use entailed or non-entailed version.
-----
|Type|Summary|
|---|---|
|Reference Summary|the gold spangle ( autographa bractea ) is a moth of the family noctuidae . it is found in europe, across western siberia and the altai mountains, the northern caucasus, northern turkey and northern iran . its wingspan is 42 – 50 mm . the forewings are brown and gray with large rhomboid golden marks . the hindwings and body are lighter grayish brown . the moth flies from july to august depending on the location, and migrates long distances . the larvae feed on a wide range of plants including hieracium, tussilago farfara, plantago, crepis paludosa, taraxacum, urtica, lamium, stachys and eupatorium cannabinum .|
|---|---|
|Word Replace- ment|the gold spangle ( autographa californica ) is a moth of the family noctuidae . it is found in western north america, across california and the altai mountains, south dakota and new mexico . its wingspan is 16 - 25 mm . the forewings are blue and gray with silver-white long lateral part and a patch of chestnut brown . the hindwings and body are a grayish tan . the moth flies from march to september depending on the location, and migrates long distances . the larvae feed on a wide range of herbaceous plants including legumes such as fabaceae, alfalfas, peas, taraxacum, urtica, lamium, stachys and eupatorium cannabinum .|
|---|---|
|Sentence Re- placement|the gold spangle ( autographa bractea ) is a moth of the family noctuidae . it is found in europe, across western siberia and the altai mountains, the northern caucasus, northern turkey and northern iran . its wingspan is 42 – 50 mm . the forewing has an inner line below middle finely golden in color, and the outer one is golden at the inner margin only . the hindwings and body are lighter grayish brown . the moth files from july to august depending on the location, and migrates long distances . Occupying waste ground, gardens and moorland, this species is widespread and fairly common in the north of Britain .|
|---|---|
Table 16: Example of the entailed and non-entailed versions of the summary from WikiCatSum Dataset. We
either use entailed or non-entailed version.
-----
| [
"Yukti, Makhija",
"Priyanka, Agrawal",
"Rishi, Saket",
"Aravindan, Raghuveer"
] | 2024-04-07T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2404.04817 | https://arxiv.org/abs/2404.04817 | https://www.semanticscholar.org/paper/d202a4fc51aca5a10ed4a4afd5513b9435f9515a |
FRoG: Evaluating Fuzzy Reasoning of Generalized Quantifiers in Large Language Models | Fuzzy reasoning is vital due to the frequent use of imprecise information in daily contexts. However, the ability of current large language models (LLMs) to handle such reasoning remains largely uncharted. In this paper, we introduce a new benchmark, FRoG, for fuzzy reasoning, featuring real-world mathematical word problems that incorporate generalized quantifiers. Our experimental findings reveal that fuzzy reasoning continues to pose significant challenges for LLMs. Moreover, we find that existing methods designed to enhance reasoning do not consistently improve performance in tasks involving fuzzy logic. Additionally, our results show an inverse scaling effect in the performance of LLMs on FRoG. Interestingly, we also demonstrate that strong mathematical reasoning skills are not necessarily indicative of success on our benchmark. | A new benchmark, FRoG, is introduced for fuzzy reasoning, featuring real-world mathematical word problems that incorporate generalized quantifiers, and it is found that existing methods designed to enhance reasoning do not consistently improve performance in tasks involving fuzzy logic. | [
"Yiyuan, Li",
"Shichao, Sun",
"Pengfei, Liu"
] | 2024-07-01T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2407.01046v2 | https://arxiv.org/abs/2407.01046 | https://www.semanticscholar.org/paper/034884ecdb9c55d2fc2e441c4a96b2b73a5b4757 |
|
First Experiments with Data Driven Conjecturing | N/A | null | # First Experiments with Data Driven Conjecturing[∗]
Karel Chvalovsk´y[1], Thibault Gauthier[1], and Josef Urban[1]
Czech Technical University in Prague, Czech republic,
```
[email protected], [email protected], and [email protected]
```
An essential part of mathematics and the work of a mathematician is to produce conjectures.
This is also an important problem in automated theorem proving. (Un)fortunately, already
humans have different opinions on what is a good conjecture and hence an objective function
for ranking conjectures is hard to specify. It is even less clear how conjectures are discovered.
There have been various attempts to produce conjectures automatically. Well known examples like Lenat’s AM (Automated Mathematician) [9], a more specialized Graffitti by Fajtlowicz [4], and Colton’s HR [3] are based on human curated rules for generating conjectures. In
small domains, exhaustive brute force generation can be useful, in particular when controlled
by a type system and further semantic pruning [7].
**Using Distributed Representations for Conjecturing: Our approach is different. We**
do not want to write down rules describing interesting conjectures directly, but we would like to
_learn meaningful conjecturing from a large corpus of mathematical proofs. For that, a better_
semantic understanding of such corpora is needed. It is possible to use distributional semantics
approach, where we try to learn semantic similarities among concepts solely based on their cooccurrences in a corpus. This has proven to be very successful in computational linguistics [10].
A notion (concept) is then represented by a low-dimensional vector. One of the interesting
aspects of such a representation are analogies via linear algebra. Let v, v, and v be the
_∩_ _∪_ _∧_
vector representations of ∩, ∪, and ∧, respectively. Then we can answer a question “What is
_to_ _as_ _is to_ _?” by finding v such that v_ **v is most similar to v** **v** . Such analogies
_∧_ _∪_ _∩_ _∧_ _−_ _∩_ _−_ _∪_
can be used for free-style conjecturing similar to [6].
A straightforward application of this idea is to learn such representations over a large formal
library, in our case we use the Mizar [2] Mathematical Library (MML). Given a statement s,
for example, x ∩ _y = x →_ _x ∪_ _y = y, we can identify an important notion in s that we would_
like to shift, e.g., represented by v . Now we look for a vector that is close to v such that
_∩_ _∩_ _∩_
it is a binary function. If we are lucky v is close and hence we would like to replace in s by
_∧_ _∩_
. We should also replace by a binary function represented by a vector that is to v as v
_∧_ _∪_ _∪_ _∧_
is to v . It could be v and hence we obtain x _y = x_ _x_ _y = y as a new statement._
_∩_ _∨_ _∧_ _→_ _∨_
However, here we have made several decisions and it is rather unclear how to make them
automatically. Before we start to discuss them, it is worth mentioning that our situation is
significantly different from the situation in natural language processing (NLP). We use the
Mizar formal library so for every statement we have a parse tree. Moreover, if we produce a
new statement from an old one, we can try to check by an automated theorem prover (ATP)
whether it is provable or disprovable, because we can work directly with a TPTP [11] translation
of the Mizar statement [12]. Although it is generally very difficult to disprove a statement, in
our case it is possible to do that for trivially invalid statements, which we will often produce.
Similarly, we can filter trivially valid statements.
Now back to our problem. We can use a distributed representation of notions and statements
such as [1]. Given a statement s we can find an important notion N in it and shift it (i.e.,
its vector). Here N should be a predicate, function, or a constant. Once we do that, we can
_∗Supported by the ERC Consolidator grant no. 649043 AI4REASON and by the Czech project AI&Reasoning_
CZ.02.1.01/0.0/0.0/15 003/0000466 and the European Regional Development Fund.
-----
First Experiments with Data Driven Conjecturing Chvalovsk´y, Gauthier, and Urban
look for a semantically similar notion of the same type. This search can be unrestricted, or
we can look for notions that, e.g., appear only in different Mizar articles. When we find a
suitable notion (or more notions), we can start to shift notions in our statement from the most
important to the least important. It is unlikely that a vector for a new notion is exactly at the
position where we expect it, therefore we can use the previous shifts to correct the new ones.
The question when to stop shifting and keep the rest of the statement intact can be left open,
because we can generate all possible variants and remove those that are trivially (in)valid.
This procedure can be improved in various ways, e.g., by using a beam search with the
possibility to keep a notion intact even when less important notions are shifted. However, so
far we have obtained only weak results with this method. It probably suffers from the fact that
it is hard to keep all parts synchronized. A single error can spoil the whole translation, and
even more importantly, it is usually necessary to shift different parts of statements differently.
**Consistency by NMT: The recently developed neural machine translation (NMT) archi-**
tectures provide a different and possibly better approach. It was shown recently that we can
use NMT for simple informal to formal translations [14]. Here, the above mentioned semantic
relations between the notions are learned as part of the training process. Moreover, the inner
consistency of the translated result is controlled directly by NMT. That is even incorrect results
are likely to parse and the notions are combined meaningfully. For example, in the encoderdecoder neural architectures a hidden state (vector) characterizing the translation done so far
is updated after each decoding step, and the choice of the next decoded symbol is statistically
conditioned on the state, making the resulting combinations of symbols statistically plausible.
We can formulate our conjecturing task as a translation problem—translate an already
known statement s into a conjecture t. How can we produce a sufficient amount of training
data { (s, t): s translates into t } for such a task? Assume we have a statement s and we can say
that statements t1, . . ., tn in our library are somehow relevant to s. We can then try to confuse
NMT by adding n training examples (s, t1), . . ., (s, tn) and hence NMT will then attempt to
_{_ _}_
translate s into a statement that is most similar to all t1, . . ., tn.
For an initial experiment, we produce abstracted common patterns (e.g. commutativity,
associativity, etc.) from all Mizar toplevel statements using Gauthier’s patternizer [5] used
previously for concept alignment and conjecturing based on them [6]. The patternizer finds
about 16000 patterns that generalize at least two statements. From them we create a corpus
of about 1.3 million (non-unique) translation pairs by making an input-output pair from all
statements that are instances of the same pattern. This means that NMT will be trained to
analogize on many examples, and due to the large non-determinism in the training data it may
produce a new formula that will likely be syntactically consistent. This is indeed often the case
on a test set of about 30000 unique statements that after the training result in about 16000
formulas that do not appear in MML. A very simple example generated by this conjecturing
approach is (X ∩ _Y ) \ Z = (X \ Z) ∩_ (Y \ Z) produced from (X ∪ _Y ) \ Z = (X \ Z) ∪_ (Y \ Z) .
Although it is a trivial duality statement, it should be noted that it was produced completely
automatically without any intervention from outside and it is not in the Mizar library. Moreover,
there is no need to check for a correct substitution, cf. [5], this part is handled by NMT itself.
This statement can be proved automatically by the MizAR hammer [13, 8]. Examples of false
but syntactically consistent conjectures generated automatically in this way include:
```
for n, m being natural numbers holds n gcd m = n div m;
for R being Relation holds with_suprema(A) <=> with_suprema(inverse_relation(A));
```
In this initial experiment, we say that two statements with a common pattern are relevant to
each other. There are many other untested options, for example, we can say that a statement
_t is relevant to s if t occurs in a proof of s, or vice versa._
-----
First Experiments with Data Driven Conjecturing Chvalovsk´y, Gauthier, and Urban
## References
[1] Sanjeev Arora, Yingyu Liang, and Tengyu Ma. A simple but tough-to-beat baseline for sentence
embeddings. In ICLR 2017, 2017.
[2] Grzegorz Bancerek, Czeslaw Bylinski, Adam Grabowski, Artur Kornilowicz, Roman Matuszewski,
Adam Naumowicz, Karol Pak, and Josef Urban. Mizar: State-of-the-art and beyond. In Manfred
Kerber, Jacques Carette, Cezary Kaliszyk, Florian Rabe, and Volker Sorge, editors, Intelligent
_Computer Mathematics - International Conference, CICM 2015, Washington, DC, USA, July 13-_
_17, 2015, Proceedings, volume 9150 of Lecture Notes in Computer Science, pages 261–279. Springer,_
2015.
[3] Simon Colton. Automated Theory Formation in Pure Mathematics. Distinguished Dissertations.
Springer London, 2012.
[4] Siemion Fajtlowicz. On conjectures of Graffiti. Annals of Discrete Mathematics, 72(1–3):113–118,
1988.
[5] Thibault Gauthier and Cezary Kaliszyk. Aligning concepts across proof assistant libraries. J.
_Symb. Comput., 90:89–123, 2019._
[6] Thibault Gauthier, Cezary Kaliszyk, and Josef Urban. Initial experiments with statistical conjecturing over large formal corpora. In Andrea Kohlhase, Paul Libbrecht, Bruce R. Miller, Adam
Naumowicz, Walther Neuper, Pedro Quaresma, Frank Wm. Tompa, and Martin Suda, editors,
_Joint Proceedings of the FM4M, MathUI, and ThEdu Workshops, Doctoral Program, and Work_
_in Progress at the Conference on Intelligent Computer Mathematics 2016 co-located with the 9th_
_Conference on Intelligent Computer Mathematics (CICM 2016), Bialystok, Poland, July 25-29,_
_2016., volume 1785 of CEUR Workshop Proceedings, pages 219–228. CEUR-WS.org, 2016._
[7] Moa Johansson, Dan Ros´en, Nicholas Smallbone, and Koen Claessen. Hipster: Integrating theory
exploration in a proof assistant. In Stephen M. Watt, James H. Davenport, Alan P. Sexton, Petr
Sojka, and Josef Urban, editors, Intelligent Computer Mathematics - International Conference,
_CICM 2014, Coimbra, Portugal, July 7-11, 2014. Proceedings, volume 8543 of Lecture Notes in_
_Computer Science, pages 108–122. Springer, 2014._
[8] Cezary Kaliszyk and Josef Urban. MizAR 40 for Mizar 40. J. Autom. Reasoning, 55(3):245–256,
2015.
[9] Douglas Bruce Lenat. AM: An Artificial Intelligence Approach to Discovery in Mathematics as
_Heuristic Search. PhD thesis, Stanford, 1976._
[10] Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. Distributed
representations of words and phrases and their compositionality. In Christopher J. C. Burges, L´eon
Bottou, Zoubin Ghahramani, and Kilian Q. Weinberger, editors, Advances in Neural Information
_Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013._
_Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States., pages_
3111–3119, 2013.
[11] Geoff Sutcliffe. The TPTP problem library and associated infrastructure. J. Autom. Reasoning,
43(4):337–362, 2009.
[12] Josef Urban. MPTP 0.2: Design, implementation, and initial experiments. J. Autom. Reasoning,
37(1-2):21–43, 2006.
[13] Josef Urban, Piotr Rudnicki, and Geoff Sutcliffe. ATP and presentation service for Mizar formalizations. J. Autom. Reasoning, 50:229–241, 2013.
[14] Qingxiang Wang, Cezary Kaliszyk, and Josef Urban. First experiments with neural translation
of informal to formal mathematics. In Florian Rabe, William M. Farmer, Grant O. Passmore,
and Abdou Youssef, editors, Intelligent Computer Mathematics - 11th International Conference,
_CICM 2018, Hagenberg, Austria, August 13-17, 2018, Proceedings, volume 11006 of Lecture Notes_
_in Computer Science, pages 255–270. Springer, 2018._
-----
| [
"Thibault, Gauthier",
"Karel, Chvalovsky",
"Josef, Urban"
] | 2019-01-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
Formal Mathematics Statement Curriculum Learning | We explore the use of expert iteration in the context of language modeling applied to formal mathematics. We show that at same compute budget, expert iteration, by which we mean proof search interleaved with learning, dramatically outperforms proof search only. We also observe that when applied to a collection of formal statements of sufficiently varied difficulty, expert iteration is capable of finding and solving a curriculum of increasingly difficult problems, without the need for associated ground-truth proofs. Finally, by applying this expert iteration to a manually curated set of problem statements, we surpass previous state-of-the-art on the miniF2F benchmark, automatically solving multiple challenging problems drawn from high school olympiads. | null | ## Formal Mathematics Statement Curriculum Learning
**Stanislas Polu** [1] **Jesse Michael Han** [1] **Kunhao Zheng** [2] **Mantas Baksys** [3] **Igor Babuschkin** [1] **Ilya Sutskever** [1]
**Abstract**
whether a trajectory (i.e. a proof) is successful (i.e. formally
correct). But the vast scope of formal mathematics means
that any strong reasoning result obtained in it will be more
meaningful than comparable results in games (e.g. finding
proofs to mathematical conjectures), and could even be
applicable to important practical problems (e.g. software
verification).
However, tackling formal mathematics involves two main
challenges that we must address in order to continue making
progress:
**Infinite action space Not only does formal mathematics**
have an extremely large search space (like Go for example),
it also has an infinite action space. At each step of proof
search, the model must choose not from a well-behaved
finite set of actions, but a complex and infinite set of tactics, potentially involving exogenous mathematical terms
that have to be generated (e.g., generating a mathematical
statement to be used as a witness, an object used steps such
as “there exists an x ...”, or a cut, the introduction and the
chaining of a lemma in the middle of a proof).
**No direct self-play setup In formal mathematics, a prover**
is not playing against an opponent but against a set of statements to prove. When faced with a statement that is just too
hard, there is no obvious reframing of the formal mathematics setup that will let the prover generate intermediary easier
statements to tackle first. This asymmetry prevents naive application of the symmetric self-play algorithms commonly
used in 2-player games.
These two differences make a naive application of reinforcement learning to formal mathematics unlikely to succeed.
Past work proposed to address the infinite action space problem by sampling from a language model (Polu & Sutskever,
2020). This paper focuses on this second problem and our
basis for addressing it is the observation that the key role
of self-play is to provide an unsupervised curriculum. We
propose instead to supply auxiliary sets of problem statements (without requiring proofs) of varying difficulty. We
empirically show that, when the difficulty of these auxiliary problems is varied enough, a simple expert iteration
procedure is able to solve a curriculum of increasingly difficult problems, eventually generalizing to our target distribution. We show that this works with both automaticallygenerated and manually-curated auxiliary distributions of
We explore the use of expert iteration in the context of language modeling applied to formal mathematics. We show that at same compute budget, expert iteration, by which we mean proof
search interleaved with learning, dramatically outperforms proof search only. We also observe that
when applied to a collection of formal statements
of sufficiently varied difficulty, expert iteration is
capable of finding and solving a curriculum of increasingly difficult problems, without the need for
associated ground-truth proofs. Finally, by applying this expert iteration to a manually curated set
of problem statements, we achieve state-of-the-art
on the miniF2F benchmark, automatically solving
multiple challenging problems drawn from high
school olympiads.
**1. Introduction**
Deep learning has enjoyed spectacular success in many domains, including language (Brown et al., 2020; Devlin et al.,
2018; Wu et al., 2016), vision (Radford et al., 2021; Tan
& Le, 2019), and image generation (Ramesh et al., 2021;
Karras et al., 2019). One domain where deep learning has
not yet enjoyed a comparable success is in tasks that require extensive planning and symbolic reasoning, with the
exception of two-player games (Silver et al., 2016; 2017;
Berner et al., 2019; Vinyals et al., 2019). In such games,
deep learning systems exhibit a considerable degree of reasoning, especially when trained with self-play combined
with a search procedure such as Monte Carlo Tree Search
(MCTS) (Browne et al., 2012). But the resulting reasoning
abilities achieved are limited due to the relatively narrow
scope of games.
As such, theorem proving in interactive proof assistants, or
formal mathematics, appears as an interesting game-like
domain to tackle due to its increased scope. Like games,
formal mathematics has an automated way of determining
1OpenAI 2École Polytechnique 3University of Cambridge. Correspondence to: Stanislas Polu <[email protected]>.
Preprint. Under review.
-----
**Formal Mathematics Statement Curriculum Learning**
problems and leverage this to achieve state-of-the-art on the
_miniF2F benchmark. Our results suggest that continuous_
self-improvement in formal mathematics can potentially
be reduced to the problem of generating such sets of formal statements, which we have done in part manually in
this work, but could eventually be scaled in the future with
more automation (such as more domain-specific statements
generator or even informal to formal machine translation).
**1.1. miniF2F benchmark**
In this work, we target the miniF2F (Zheng et al., 2021)
benchmark, which consists of 244 validation and 244 test
formalized statements of mathematical problems from various competitions. We believe it to be a better measure
of mathematical reasoning compared to a formal libraryderived split. Also, the extreme scarcity in formal libraries
of this type of problems makes it an ideal test-bed for the
expert iteration methodology studied in this paper.
**1.2. Contribution**
Our contributions are the following: we present lean-gym,
a simple REPL interface for interacting with the Lean theorem prover; we propose an expert iteration methodology
for GPT-f (Polu & Sutskever, 2020) which uses proofs generated by our models as training data to iteratively improve
their performance; we demonstrate that, at fixed compute
budget, expert iteration outperforms proof search only; we
present a synthetic inequality generator and study how expert iteration finds and solves a curriculum of increasingly
difficult problems from a set of generated statements of various difficulty; finally, we present a manually curated set of
formalized problem statements and leverage it to achieve
state-of-the-art performance on the miniF2F benchmark.
**2. Related Work**
Our work strongly relies on, and can be seen as a natural
continuation of the work presented in the original GPT-f
paper (Polu & Sutskever, 2020) which studies the use of
language models to generate tactics, the PACT paper (Han
et al., 2021) which applies GPT-f to Lean and studies the
benefits from co-training on self-supervised objectives, and
the miniF2F benchmark (Zheng et al., 2021).
We present additional related work in Appendix A.
**3. Formal Environment**
We choose Lean (de Moura et al., 2015; lea) as our formal environment. Unlike Metamath (Megill & Wheeler,
2019), which has been studied in the original GPT-f paper (Polu & Sutskever, 2020), Lean benefits from high-level
tactics which were shown to be beneficial in the context of
the miniF2F benchmark (Zheng et al., 2021)–Lean proofs
are typically 10x shorter than Metamath’s. Also, Lean has
recently received a lot of attention from the mathematical community, thanks to projects such as the Perfectoid
Spaces (Buzzard et al., 2019) and the Liquid Tensor experiment (Scholze, 2020), and benefits from a vibrant community of hundreds of contributors to its main mathematical
library called mathlib. We refer to the PACT paper’s Background section (Han et al., 2021) for a detailed introduction
to Lean in the context of neural theorem proving.
**3.1. lean-gym**
In the PACT paper (Han et al., 2021), proof search is performed by the Lean runtime using the LEANSTEP environment, with a generic backend interface to models. While
easy to use–one just needs to plug in their model–this approach makes it difficult to alter and iterate on the search
procedure because it is programmed in Lean (which is not
designed or intended for cluster-wide parallelised I/O intensive tasks), and the coupling of the search procedure with
the Lean runtime introduces challenges when scaling to a
large number of parallel workers.
To solve these issues we implemented lean-gym[1] – a
simple REPL interface over the standard input/output implemented in Lean directly. We present lean-gym’s API
and discuss some of its advantages and limitations in Appendix B.
**3.2. Proof datasets extraction**
We rely on the proof extraction methodology presented
in the PACT paper (Han et al., 2021) to extract human
tactic proof steps from mathlib (the tactic dataset) as
well as the various other proof artifacts (mix1 and mix2
datasets). We also extract mathlib-{train,valid,test}, the
set of statements from mathlib along the split proposed in
Han et al. (2021) (the validation and test splits of tactic,
mix1, mix2 being aligned with mathlib-{valid, test} as
the splits are determined by declaration name hashes (across
all data sources including proof-term mining) as opposed to
individual proof steps or data-points).
**4. Expert Iteration**
_Expert iteration was introduced in Silver et al. (2017) and_
broadly consists in iteratively training models on their previously sampled trajectories, to achieve continuous improvement. In this section we present our expert iteration methodology, including the models and pre-training strategies we
rely on.
[1https://github.com/openai/lean-gym](https://github.com/openai/lean-gym)
-----
**Formal Mathematics Statement Curriculum Learning**
proof sizes.
**4.1. Model**
We use 11 buckets B = 0...10 and compute the proofsize
bucket b(g) for a goal g by assigning infinite proof sizes to
bucket 0, all proof sizes over 20 to bucket 1 and linearly projecting proof sizes lower than 20 on the remaining buckets
2, ..., 10 (10 being the bucket for the shortest proof sizes).
In practice, when training and sampling from the model, we
map B to the tokens ’A’...’K’.
To value goals as we run proof searches, we sample the
_proofsize bucket token and record the logits pb(g) for each_
viable bucket and use them to get a weighted average with
the following formula: v(g) = #1B _b_ _B_ _[p][b][(][g][)][.b][.]_
_∈_
As an example, if the model assignsP p0 = 1 (hence pb=0 =
_̸_
0) then v(g) = 0. Conversely if the model assigns p10 =
1 (10 being the bucket for the shortest proof sizes) then
_v(g) = 1._
The rationale for using this proofsize objective instead of
the outcome objective described in Polu & Sutskever (2020)
is that (i) it achieves better performance compared to the
_outcome objective (see table 1), and (ii) it prioritizes goals_
that potentially lead to shorter proofs during proof search,
creating an intrinsic incentive for the system to converge towards shorter proofs. Similarly to Polu & Sutskever (2020)
we favor this token-based approach to the introduction of
a separate value head to keep the overall architecture simple. This way the proofsize objective can be implemented
by simply augmenting the training dataset and without any
architectural change.
**4.4. Bootstrapping**
Bootstrapping consists in the steps required to train an initial
model on both the proofstep objective and the proofsize
_objective._
Given a pre-trained model on WebMath, we fine-tune it
on the tactic dataset extracted from mathlib as well as
the proof artifacts dataset mix1 as described in Han et al.
(2021). This initial model, which we denote θ0 is solely
trained on the proofstep objective. We use the validation
splits of the tactic and m1 datasets to early-stop training.
Note that this is our only use of mathlib-valid to influence
the training process throughout this paper.
To generate data for the proofsize objective, we use θ0 to
sample proofs for statements from mathlib-train. For each
statement from mathlib-train (25k) we attempt a = 1 proof
searches using the cumulative logprob priority search described in Polu & Sutskever (2020) (which does not require
a trained value function) using d = 512 expansions and
_e = 8 samples per expansion. We denote the set of success-_
ful proof searches created in this process as S0.
Using S0 we generate dataset D0 by concatenating: (i) the
We use decoder-only Transformers similar to GPT-3 (Brown
et al., 2020). Throughout this paper we focus on a model
with 36 layers and 774 million trainable parameters (referred
to as the 700m model in the GPT-f paper (Polu & Sutskever,
2020)).
**4.2. Pre-Training**
We pre-train our models successively on GPT-3’s postprocessed version of CommonCrawl (for 300B tokens) and
an updated version of WebMath (Polu & Sutskever, 2020)
(for 72B tokens) whose mix is presented in Appendix C.
**4.3. Training objectives**
4.3.1. Proofstep objective
The proofstep objective, introduced in Polu & Sutskever
(2020), consists in generating a PROOFSTEP (a Lean
tactic) given a GOAL (a Lean tactic state). We also
condition this objective on the current DECLARATION
(a Lean theorem name), which remains the same
throughout a proof search: DECL <DECLARATION>
GOAL <GOAL> PROOFSTEP <PROOFSTEP>.
The rationale for conditioning on the declaration name is to
hint our models on the position of the current declaration in
the mathlib library. It can be considered as a weak proxy
signal for the large amount of information not shown to
the model (the full environment consisting of the available
imports and currently open declarations such as module
names, notations, declared instances, ...). The declaration
name lets models at least in principle memorize and then
retrieve some of that information, knowing that lean-gym
errors if a theorem or definition that is not available in
the environment associated with the current declaration is
used by tactics generated by our models. Also note that
conversely to Polu & Sutskever (2020) and like Han et al.
(2021) <GOAL> is not necessarily a single goal but a Lean
tactic state, which possibly comprises multiple goals.
4.3.2. Proofsize objective
We depart from Polu & Sutskever (2020) and use a
_proofsize objective to guide our proof searches, which_
consists in generating one token that represents a proof
size estimate bucket for the current goal (Lean tactic state): DECL <DECLARATION> GOAL <GOAL>
PROOFSIZE <PROOFSIZE_BUCKET_TOKEN>
For a given goal g, either the goal was proved as part of the
proof search and we denote its proof size (the number of
tactic applications (compounded Lean tactics counting as
one)) as ps(g), or the goal was not proved in which case we
assign the goal to a bucket that virtually represents "infinite"
-----
**Formal Mathematics Statement Curriculum Learning**
strating that training a value function on proofs sampled
from mathlib-train has limited transfer to miniF2F-valid.
The main differences with Zheng et al. (2021), potentially
explaining the gap on minif2f-valid (27.6% vs 23.9%), consists in the new pre-training described in section 4.2 as well
as the use of a more recent mathlib checkpoint for the mix1,
mix2 and tactic datasets.
**4.5. Iterated sampling and training**
Our expert iteration process takes as input: (i) a set of
formal statements St, (ii) a function a : St −→ N indicating
the number of proof search attempts to run per statement at
each iteration, (iii) a base model θ0 to fine-tune from at each
iteration, and (iv) a mathlib bootstrapped model θ1 trained
on both objectives.
Each iteration k consists in sampling proof searches for
statements in St using θk, filtering successful proof searches
_Sk to extract a new dataset Dk, and fine-tuning θ0 on it to_
obtain θk+1, on which we can iterate. To sample proof
searches from St we use the best-first search described in
Polu & Sutskever (2020) with the value function described
in section 4.3.2. We attempt a(s ∈ _St) proof searches_
for each statement s with d = 512 expansions and e = 8
samples per expansion. We denote the set of successful
proof searches for iteration k as Sk.
Using Sk we generate datasets Dk by concatenating: (i) the
initial tactic dataset (proofstep objective), (ii) a deduplicated set of proofsteps extracted from the proofs in
1 _i_ _k_ _[S][k][ (][proofstep objective][), and (iii) a deduplicated]_
_≤_ _≤_
set of proofsize tuples (goals and proofsize) extracted from
S
the full proof searches in 1 _i_ _k_ _[S][k][ (][proofsize objective][).]_
_≤_ _≤_
Note that we use a global deduplication across iterations
[S]
for both proofsteps and proofsize tuples which we found to
be important to maintain the stability of the expert iteration
procedure. This global deduplication is somewhat equivalent for each statement to growing a unique proof tree by
aggregating all the proof searches that have been run for
it across iterations. This virtual proof tree accumulates a
growing number of positive proof paths as well as a growing number of visited goals that remain unproven. We use
these goals as negative examples for the proofsize objective,
labeling them with an infinite proofsize. Positive goals are
deduplicated keeping the minimum proof sizes across proof
searches.
Finally θk is obtained by fine-tuning θ0 for exactly one
epoch on Dk. Note that the initial tactic dataset is included in each Dk, despite θ0 being already trained on it
(along with mix1). We found this repetition to be beneficial
overall (as it adds the mathlib extracted proofsteps to our
deduplicated per statements virtual proof trees) despite it
leading to a slight overfit on the tactic dataset in terms
_Table 1. Performance of θ0 and θ1 on mathlib-valid and miniF2F-_
_valid compared to PACT Lean GPT-f as reported in Han et al._
(2021); Zheng et al. (2021). All models have the same architecture.
_θ0 is sampled using cumulative logprob priority best-first search._
_θ1 is sampled using best-first search based on the proofsize objec-_
_tive. We report our setup (d = 512 expansions and e = 8 tactic_
samples per expansions) as well as the setups used in Han et al.
(2021); Zheng et al. (2021) to control for compute. We also report
the performance of θ1 on mathlib-valid when trained using the
_outcome objective from Polu & Sutskever (2020) as an ablation of_
our proposed proofsize objective.
Model _d_ _e_ _pass@1_ _pass@8_
_mathlib-valid_
PACT 512 16 48.4%
_θ0 (PACT setup)_ 512 16 48.5% 57.6%
_θ0_ 512 8 46.7% 57.5%
_θ1_ 512 8 **56.3%** **66.3%**
_θ1 (outcome objective)_ 512 8 55.6% 65.9%
_miniF2F-valid_
MiniF2F 128 16 23.9% 29.3%
_θ0 (MiniF2F setup)_ 128 16 27.6% 31.8%
_θ0_ 512 8 28.4% 33.6%
_θ1_ 512 8 **28.5%** **35.5%**
_θ1 (outcome objective)_ 512 8 28.3% 34.7%
initial tactic dataset (proofstep objective), (ii) a deduplicated set of proofsteps extracted from the proofs in S0
(proofstep objective) and (iii) a deduplicated set of proofsize
tuples (goals and proofsize) extracted from the full proof
searches in S0 (proofsize objective).
Note that the full proof searches in S0 include goals that
are visited but eventually remain unproved, which provides
useful negative examples for the trained value function (even
if these negatives may include provable goals that simply
were not prioritized by the search). Also note that S0 doesn’t
include failed proof searches (which would contain only
negative examples and no proofstep objective data).
We fine-tune θ0 on D0 for exactly one epoch (no use of val_idation data for early-stopping) to obtain our initial model_
_θ1 trained on both the proofstep objective and the proofsize_
_objective. θ0 is used in our expert iteration setup as base_
model to fine-tune from at each iteration, and θ1 is our first
iterated model or mathlib bootstrapped model trained on
both objectives.
We report in Table 1 the pass rates of θ0 and θ1 on mathlib_valid and miniF2F-valid and compare with previously re-_
ported pass rates for equivalent amounts of compute. As
reported in Polu & Sutskever (2020), training a value function to guide search greatly improves the pass rates of θ1
on mathlib-valid (see Polu & Sutskever (2020) for an ablation of the value function). Interestingly, the gap between
_θ0 and θ1 on miniF2F-valid is not as significant, demon-_
-----
**Formal Mathematics Statement Curriculum Learning**
of validation loss.
**4.6. Expert iteration on mathlib-train**
In this section we propose to set St to the statements in
_mathlib-train, run our expert iteration process with it and re-_
port performance on both mathlib-valid and miniF2F-valid.
Performance is reported in terms of pass rate (percentage of
successful proof searches) as a function of the number of
attempts per statement, noted pass@k where k is the number of attempts per statement at test time. To reduce noise
in these metrics we generally run more than k attempts at
test time (generally 32 to compute pass@1 and pass@8),
averaging across attempts as needed to obtain a smoother
_pass@k value._
Given the large number of statements in mathlib-train (25k)
we uniformly set a = 1 and use θ0 and θ1 as described in
section 4.4 and report pass@1 and pass@8 across 8 iterations in figure 1. The pass@1 on mathlib-valid goes from
56.3% for θ1 to 62.6% for θ9. The performance steadily
improves and follows a clear logarithmic scaling law on
_mathlib-valid. It is also notable that, initially, transfer to out-_
of-distribution minif2f-valid appears limited but eventually
kicks in as we reach better performance on mathlib-valid.
This demonstrates that the expert iteration process does not
just overfit to mathlib but also leads to improved performance on out-of-distribution statements.
mathlib-valid minif2f-valid
pass@1
70
pass@8
40
65
35
60
30
55
2 4 6 8 2 4 6 8
_Figure 1. pass@1 (plain) and pass@8 (dotted) for mathlib-valid_
and minif2f-valid when running 8 expert iterations with St set to
be the statements in mathlib-train. The x-axis is log-scaled. It
corresponds to the indices of the θk models and serves as a good
proxy to compute (the amount of test-time and train-time compute
per iteration being fixed). The y-axis is scaled linearly and simply
shifted between the two graphs (spans an equal range).
We define the cumulative pass rate at iteration k as the pass
rate consisting of all proof searches up to iteration k (necessarily monotonic in k). Since we set a = 16 for evaluation
on mathlib-valid and minif2f-valid at each iteration, the
|Col1|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
||||||
||||||
||Exper|t iterat|ion||
||Samp|le only|||
mathlib-valid minif2f-valid
78 46
76 44
74 42
72 40
Expert iteration
70 38
Sample only
Adjusted compute
68 36
2 4 6 8 2 4 6 8
_Figure 2. Cumulative pass rate for our expert iteration loop as well_
as a sample only loop where we skip re-training the model between
iterations. The adjusted compute line is computed by fitting the
_sample only curve and shifting it to approximate a setup where we_
would focus all the additional compute used by expert iteration
(sampling training data from mathlib-train as well as re-training
models at each iteration) towards running proof searches against
_mathlib-valid._
cumulative pass rate at iteration k can be seen as a noisy
ensembled pass@16k (multiple models (θk), no averaging).
In figure 2, we report this cumulative pass rate for two iteration loops, our normal one and a sampling-only loop
where we skip re-training the model between iterations and
solely sample from θ1. This directly compares test-time
compute scaling (scaling proof search attempts) to expert
iteration scaling (interleaved training on new data sampled
from mathlib-train) and provides a very clear visualization
of the gains of expert iteration. For a fair comparison, we
also report an adjusted compute line which approximates
the test-time performance we would get at each iteration if
we were to focus all the additional compute used by expert
iteration (sampling proofs from mathlib-train as well as
re-training models at each iteration) towards solely running
proof searches against mathlib-valid.
As shown by figure 2, the scaling exponent of expert iteration is substantially higher than the scaling exponent
associated with solely scaling test-time compute (running
more proof searches), demonstrating the clear benefit of
expert iteration. We’ll denote the fully iterated model from
this section as θ9[mathlib].
Even in the presence of ground-truth proofs for each of
the statements in mathlib-train (tactic dataset), expert
iteration generates data that further improves the performance of the model. The number of statements proved
in mathlib-train goes from 17390 (67.8%) at iteration 1 to
19476 (76.0%) at iteration 9, while the average proof length
of these statements goes from 4.8 to 4.0. We hypothesize
-----
**Formal Mathematics Statement Curriculum Learning**
that this continuously improving performance through expert iteration stems from two effects: (i) the model finding
new original proofs for the same statements and (ii) the
model closing marginally harder statements at each iteration – which in turn provides more useful training data for
the next iteration. By iteration 9, the model is trained on
more than 90% generated data. We present in Appendix E
a few examples of original proofs found by our models on
_mathlib-train compared with their ground-truth versions._
To verify our hypothesis that expert iteration is capable of
closing a curriculum of increasingly difficult problems out
of a set of problem statements, and that this capability is
independent of having access to ground-truth proofs, we
propose in the next section to study expert iteration applied
to a synthetically generated set of problems for which we
have fine-grained control on the difficulty of each statement.
**5. Statement curriculum learning**
In this section we focus on running expert iteration on synthetic statements generated by an inequality generator. The
use of synthetic statements enables us to control the difficulty of each statement to present evidence that expert
iteration can hill-climb the intrinsic difficulty gradient of
the resulting set of statements. In particular, we show that,
at fixed compute budget, expert iteration eventually closes
proofs of hard statements that remain completely out of
reach of simply sampling proof searches without interleaved
training.
**5.1. Synthetic inequality generator**
We designed a synthetic inequality statement generator for
Lean in the spirit of the INT (Wu et al., 2020) generator.
The generator consists in generating inequalities from well
known inequality theorems (AM-GM, Trivial inequality,
Cauchy-Schwarz, Bernoulli, Young, Hölder) and composing them. It is driven by two difficulty parameters: ND
which controls depth of composition of inequalities and
_NS which controls the complexity of the input expressions_
to the composed inequalities. We provide details on its
implementation in Appendix D.
Using this generator we generate a curriculum of 5600 inequality statements (for which we don’t have proofs), 100
for each values of 0 _NS_ 7 and 0 _ND_ 6. We
_≤_ _≤_ _≤_ _≤_
denote this set of statements as synth-ineq.
To bootstrap our models capabilities on this specific task, we
also generate 100 statements of low difficulty (ND = 1 and
_NS = 5) and formalize a proof for each of these statements._
We refer to this dataset as synth-ineq-train. In the rest of
this paper we adjunct this training dataset to the tactic
dataset used to train our models.
**5.2. Expert iteration on synthetic inequality statements**
In this section we propose to set St to the union of the statements in mathlib-train and synth-ineq. Again, we uniformly
set a = 1 and use θ0 and θ1 as described in section 4.4,
except that they are now also trained on synth-ineq-train.
Similarly to the previous section, we report in figure 3 the
cumulative pass rate for two loops, our standard expert
iteration loop, and a proof search only loop where we don’t
interleave training between iterations. The pass rates are
reported split by values of ND (pooling together 0 ≤ _NS ≤_
7) which we found to be the main driver for difficulty.
|0|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
|1 2|||||
|3 4|||||
|5 6|||||
||||||
||||||
Expert iteration Sample only
50 0 50
1
40 2 40
3
30 4 30
5
20 6 20
10 10
0 0
2 4 6 8 2 4 6 8
_Figure 3. Cumulative pass rate for our expert iteration loop as well_
as a sample only loop where we skip re-training the model between
iterations. Pass rates are reported for each value of ND (pooling
together 0 ≤ _NS ≤_ 7).
Despite the challenging nature of these synthetic inequalities, figure 3 demonstrates that expert iteration is capable of
learning the intrinsic curriculum induced by synth-ineq. In
particular, expert iteration is capable of closing 6 problems
of difficulty ND = 6 without having been provided with
any seed ground-truth proof for this difficulty level. Note
that difficulty ND = 6 remains completely out of reach of
simply scaling the number of attempts per statements (the
_sample only loop remaining stuck at 0 for ND = 6)._
This confirms on our synthetic statements dataset synth-ineq
that not only expert iteration is capable of learning the curricula occurring in a set of statements, but this process also
enables the emergence of new capabilities without the need
for ground-truth proofs (ability to close, highly challenging,
deeply composed inequalities).
**6. Targeting miniF2F**
Motivated by the results from Section 5, we curated and
manually formalized a set of math exercises to target
_miniF2F. miniF2F statements being quite out of distribu-_
tion compared to the distribution of statements present in
-----
**Formal Mathematics Statement Curriculum Learning**
_mathlib (which typically includes generic theorems and lem-_
mas but very few exercises), we hypothesized that if the
difficulty of this set of statements was made varied enough,
expert iteration could potentially leverage it to effectively
shift our models’ distribution closer to miniF2F’s, and in
turn, improve their eventual performance on it.
**6.1. Formalization effort**
We manually formalized 327 statements[2] drawn from the following sources: 302 examples and exercises from Lehoczky
& Rusczyk (a;b). The books are classic problem solving
textbooks for students in grades 7-12 preparing for contests
such as AMCs and AIMEs. 25 problems from the MATH
dataset (Hendrycks et al., 2021). All problems were drawn
from the train split of the dataset, focusing on difficulty 5
problems (miniF2F only contains problems from the test
split).
We refer to Zheng et al. (2021) for more details on the
formalization procedure and the typical time needed for it
as these problems were formalized in similar conditions.
We denote this set of statements as miniF2F-curriculum
and verified (based on problem provenance and manual
inspection of statements) that it had an empty intersection
with miniF2F-{test,valid}.
**6.2. Transfer to miniF2F**
In this section we propose to set St to the union of the statements in mathlib-train, synth-ineq and miniF2F-curriculum.
We uniformly set a = 1 on mathlib-train and synth-ineq
and a = 8 on miniF2F-curriculum and use θ0 and θ1 as
described in section 5.
Similarly to previous sections, we report in figure 4 (left)
the cumulative pass rate on miniF2F-valid of our full curriculum expert iteration loop and compare them with the
_mathlib-train only expert iteration from section 4.6. Since_
more compute is deployed in our full-curriculum loop (more
statements) we also report a mathlib-train only loop taking
_a = 2. At the end of the expert iteration, 100 out of the 327_
statements from miniF2F-curriculum end up being closed,
suggesting a lack of density in our manually formalized set
of statement.
We also report in figure 4 (right) the pass@1 and pass@8
for our full curriculum expert iteration loop. The steady improvement on miniF2F-valid shows that the expert iteration
procedure we propose does not overfit on the statements
that compose the curriculum it uses. Despite the potential
inefficiency of our curriculum, the improved performance
associated with its use demonstrates, as hypothesized, an
[2https://github.com/openai/miniF2F/tree/](https://github.com/openai/miniF2F/tree/statement_curriculum_learning/lean/src/statement_curriculum_learning)
[statement_curriculum_learning/lean/src/](https://github.com/openai/miniF2F/tree/statement_curriculum_learning/lean/src/statement_curriculum_learning)
[statement_curriculum_learning](https://github.com/openai/miniF2F/tree/statement_curriculum_learning/lean/src/statement_curriculum_learning)
minif2f-valid
minif2f-valid
40
35
30
45
40
|ass@1 full|Col2|Col3|
|---|---|---|
|ss@8 mat ss@8 full|hlib||
||||
|Col1|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
||||||
||m m|athlib athlib|a= a=|1 2|
pass@1 mathlib
pass@1 full
pass@8 mathlib
pass@8 full
_Figure 4. Left: cumulative pass rate on miniF2F-valid for our_
expert iteration loop using our full curriculum (mathlib-train, synth_ineq and miniF2F-curriculum) compared to the expert iteration_
loop from section 4.6. The total number of attempts per iteration
in our full loop is 25k + 5.6k + 8 ∗ 327 ≈ 33.2k, which means
the total compute deployed is higher than in the mathlib-train only
loop (25k). We therefore also report in dotted a mathlib-train only
loop, taking a = 2, whose total number of attempts per iteration is
_≈_ 50k. Right: pass@1 (plain) and pass@8 (dotted) for our expert
iteration loop using our full curriculum (mathlib-train, synth-ineq
and miniF2F-curriculum) compared to the expert iteration loop
from section 4.6.
effective transfer between miniF2F-curriculum, synth-ineq
and miniF2F-valid through expert iteration. We’ll denote
the fully iterated model from this section as θ9[full][.]
**6.3. Results**
We report in table 2 the pass rates on mathlib-{valid, test}
and miniF2F-{valid, test} for the models trained in previous
sections, namely θ1, θ9[mathlib], and θ9[full][. We achieve a][ 47][.][3%]
pass rate (using a = 64 attempts) on miniF2F-valid and a
36.6% pass rate on miniF2F-test, substantially improving
from the previous state-of-the-art (Zheng et al., 2021).
These results include the resolution of 26 AMC12
problems, 6 AIME problems and 2 problems adapted
from the IMOs. Out of these statements, 4 AMC12
problems (amc12b_2020_p5, amc12a_2009_p9,
amc12a_2003_p24, amc12b_2003_p17), 2 AIME
problems (aime_1984_p1, aime_1990_p4),
and 2 IMO-adapted problems (imo_1961_p1[3],
imo_1964_p2) are uniquely solved by expert iterated models, the two IMO-adapted and the two AIME
problems being uniquely solved by θ9[full][.]
We provide a selection of the proofs found by our models
3Note that this IMO-adapted statement from miniF2F-valid is
a much weaker version than the original problem (see Appendix F
for more context)
-----
**Formal Mathematics Statement Curriculum Learning**
Indicatively, with our 774m parameters model, running a
full expert iteration to train θ9[full] required about 2000 A100
days of compute. Running one full proof search (a = 1
_d = 512 e = 8) when properly parallelised, requires on_
average about 0.1 A100 hour of compute.
**7.2. Limitations**
Despite our models’ capability, as discussed in Appendix F.1, to generate cuts and witnesses, we believe that
their current main limitation lies in their inability (under
our proposed search procedure) to chain more than 2 or
3 non-trivial steps of mathematical reasoning, preventing
them from consistently (instead of exceptionally) solving
challenging olympiad problems. We’ve been repeatedly
impressed by the complexity of some of the proofsteps generated by our models. But, proofs requiring many of such
reasoning steps remain beyond our current compute horizon.
Even if we solved a selection of challenging olympiad problems, our models are still very far from being competitive
with the brightest students in these competitions.
While our models have demonstrated some capabilities to
generate cuts, the cuts they generate are often shallow (they
involve only a few proofsteps and don’t necessarily deeply
change the structure of the proof–we refer the reader to the
Cut-Elimination theorem and Carbone & Semmes (1996)
for a discussion of the influence of cuts on proof size). We
believe that studying language models’ ability to generate
cuts, and designing search procedures that leverage that
capability (related ideas can be found in Czechowski et al.
(2021)), are interesting avenues of research to alleviate this
limitation.
**8. Conclusion**
In this paper we presented an expert iteration procedure
for GPT-f (Polu & Sutskever, 2020), demonstrating that it
is capable of solving a curriculum of increasingly difficult
problems out of a set of formal statements of sufficiently varied difficulty. Our results suggest that the lack of self-play
in the formal mathematics setup can be effectively compensated for by automatically as well as manually curated sets
of formal statements, which are much cheaper to formalize
than full proofs. Finally, we hope that the statement curricu_lum learning methodology we presented in this work will_
help accelerate progress in automated reasoning, especially
if scaled with automated generation and curation of formal
statements in the future.
**References**
Lean theorem prover. [https://leanprover.](https://leanprover.github.io/about/)
[github.io/about/.](https://leanprover.github.io/about/)
_Table 2. Performance of θ1 (value-function based search), θ9[mathlib]_
(expert iterated on mathlib-train) and θ9[full] (expert iterated on our
full curriculum) on mathlib-{valid, test} and miniF2F-{valid, test}.
All proof searches are run with d = 512 and e = 8.
Model _pass@1_ _pass@8_ _pass@64_
_mathlib-valid_
PACT (Han et al., 2021) 48.4% - -
_θ1_ 56.3% 66.3% 72.0%
_θ9[mathlib]_ **62.6%** **70.7%** **75.8%**
_θ9[full]_ 61.7% 69.8% 75.3%
_mathlib-test_
_θ1_ 56.5% 66.9% 73.7%
_θ9[mathlib]_ **63.0%** 71.5% **77.1%**
_θ9[full]_ 62.9% **71.6%** 76.3%
_miniF2F-valid_
PACT (Zheng et al., 2021) 23.9% 29.3% -
_θ1_ 28.5% 35.5% 41.2%
_θ9[mathlib]_ 31.3% 38.3% 44.1%
_θ9[full]_ **33.6%** **41.2%** **47.3%**
_miniF2F-test_
PACT (Zheng et al., 2021) 24.6% 29.2% -
_θ1_ 25.9% 31.1% 33.6%
_θ9[mathlib]_ 27.2% 33.0% 35.2%
_θ9[full]_ **29.6%** **34.5%** **36.6%**
for these statements as well as a qualitative analysis of them
in Appendix F.
Also, we achieve a higher than 75% pass rate (using a = 64
attempts) on mathlib-{valid, test} (a new state-of-the-art as
well) suggesting that our models have the potential to be
effectively leveraged as proof assistants in the formalization
efforts associated with mathlib.
**7. Discussion**
**7.1. Model Size**
Throughout this paper, we used a single model size (774m
trainable parameters). We briefly experimented with different model sizes (not reported in this paper) and found
that model size scaling is not as straightforward as in the
case of unsupervised learning (Kaplan et al., 2020). We
found that bigger models are better, in the sense that they
consistently exhibit higher pass@1. But, they are also much
more expensive to sample from. And despite their pass@1
being higher, it is often the case that for a fixed amount
of compute, sampling more attempts from a smaller model
leads to a better final performance.
For the compute budget we had available, we estimated the
model size we used to be a compelling trade-off. We leave
as future work a more thorough study of these dynamics to
better understand the different compute frontiers involved.
-----
**Formal Mathematics Statement Curriculum Learning**
Bansal, K., Loos, S., Rabe, M., Szegedy, C., and Wilcox, S.
Holist: An environment for machine learning of higher
order logic theorem proving. In International Conference
_on Machine Learning, pp. 454–463, 2019a._
Bansal, K., Loos, S. M., Rabe, M. N., and Szegedy, C.
Learning to reason in large theories without imitation.
_arXiv preprint arXiv:1905.10501, 2019b._
Berner, C., Brockman, G., Chan, B., Cheung, V., D˛ebiak, P.,
Dennison, C., Farhi, D., Fischer, Q., Hashme, S., Hesse,
C., et al. Dota 2 with large scale deep reinforcement
learning. arXiv preprint arXiv:1912.06680, 2019.
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan,
J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G.,
Askell, A., et al. Language models are few-shot learners.
_arXiv preprint arXiv:2005.14165, 2020._
Browne, C. B., Powley, E., Whitehouse, D., Lucas, S. M.,
Cowling, P. I., Rohlfshagen, P., Tavener, S., Perez, D.,
Samothrakis, S., and Colton, S. A survey of monte carlo
tree search methods. IEEE Transactions on Computa_tional Intelligence and AI in games, 4(1):1–43, 2012._
Buzzard, K., Commelin, J., and Massot, P. Lean perfectoid
spaces. [https://leanprover-community.](https://leanprover-community.github.io/lean-perfectoid-spaces/)
[github.io/lean-perfectoid-spaces/,](https://leanprover-community.github.io/lean-perfectoid-spaces/)
2019.
Carbone, A. and Semmes, S. Making proofs without modus
ponens: An introduction to the combinatorics and complexity of cut elimination. Bulletin of the American Math_ematical Society, 34:131–159, 1996._
Czechowski, K., Odrzygó´zd´z, T., Zbysi´nski, M., Zawalski,
M., Olejnik, K., Wu, Y., Kucinski, L., and Miło´s, P. Subgoal search for complex reasoning tasks. Advances in
_Neural Information Processing Systems, 34, 2021._
de Moura, L., Kong, S., Avigad, J., Van Doorn, F., and von
Raumer, J. The lean theorem prover (system description).
In International Conference on Automated Deduction, pp.
378–388. Springer, 2015.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. Bert:
Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805,
2018.
Firoiu, V., Aygun, E., Anand, A., Ahmed, Z., Glorot, X.,
Orseau, L., Zhang, L., Precup, D., and Mourad, S. Training a first-order theorem prover from synthetic data. arXiv
_preprint arXiv:2103.03798, 2021._
Han, J. M., Rute, J., Wu, Y., Ayers, E. W., and Polu, S. Proof
artifact co-training for theorem proving with language
models. arXiv preprint arXiv:2102.06203, 2021.
Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart,
S., Tang, E., Song, D., and Steinhardt, J. Measuring mathematical problem solving with the math dataset. arXiv
_preprint arXiv:2103.03874, 2021._
Huang, D., Dhariwal, P., Song, D., and Sutskever, I.
Gamepad: A learning environment for theorem proving.
_arXiv preprint arXiv:1806.00608, 2018._
Irving, G., Szegedy, C., Alemi, A. A., Eén, N., Chollet,
F., and Urban, J. Deepmath-deep sequence models for
premise selection. In Advances in Neural Information
_Processing Systems, pp. 2235–2243, 2016._
Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B.,
Chess, B., Child, R., Gray, S., Radford, A., Wu, J., and
Amodei, D. Scaling laws for neural language models.
_arXiv preprint arXiv:2001.08361, 2020._
Karras, T., Laine, S., and Aila, T. A style-based generator architecture for generative adversarial networks. In
_Proceedings of the IEEE/CVF Conference on Computer_
_Vision and Pattern Recognition, pp. 4401–4410, 2019._
Lehoczky, S. and Rusczyk, R. The Art of Problem Solving,
_Volume 1: the Basics, a. ISBN:978-0-9773045-6-1._
Lehoczky, S. and Rusczyk, R. The Art of Problem Solving,
_Volume 2: and Beyond, b. ISBN:978-0-9773045-8-5._
Li, W., Yu, L., Wu, Y., and Paulson, L. C. Modelling highlevel mathematical reasoning in mechanised declarative
proofs. arXiv preprint arXiv:2006.09265, 2020.
Loos, S., Irving, G., Szegedy, C., and Kaliszyk, C.
Deep network guided proof search. _arXiv preprint_
_arXiv:1701.06972, 2017._
Megill, N. D. and Wheeler, D. A. _Metamath:_ _A_
_Computer Language for Pure Mathematics,_ 2019.
[URL http://us.metamath.org/downloads/](http://us.metamath.org/downloads/metamath.pdf)
[metamath.pdf.](http://us.metamath.org/downloads/metamath.pdf)
Polu, S. and Sutskever, I. Generative language modeling for automated theorem proving. _arXiv preprint_
_arXiv:2009.03393, 2020._
Rabe, M. N., Lee, D., Bansal, K., and Szegedy, C. Mathematical reasoning via self-supervised skip-tree training.
_arXiv preprint arXiv:2006.04757, 2020._
Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G.,
Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J.,
et al. Learning transferable visual models from natural
language supervision. arXiv preprint arXiv:2103.00020,
2021.
-----
**Formal Mathematics Statement Curriculum Learning**
Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., Chen, M., and Sutskever, I. Zero-shot textto-image generation. arXiv preprint arXiv:2102.12092,
2021.
Scholze, P. Liquid tensor experiment. [https:](https://xenaproject.wordpress.com/2020/12/05/liquid-tensor-experiment/)
[//xenaproject.wordpress.com/2020/](https://xenaproject.wordpress.com/2020/12/05/liquid-tensor-experiment/)
[12/05/liquid-tensor-experiment/, 2020.](https://xenaproject.wordpress.com/2020/12/05/liquid-tensor-experiment/)
Selsam, D., Lamm, M., Bünz, B., Liang, P., de Moura, L.,
and Dill, D. L. Learning a sat solver from single-bit
supervision. arXiv preprint arXiv:1802.03685, 2018.
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L.,
Van Den Driessche, G., Schrittwieser, J., Antonoglou, I.,
Panneershelvam, V., Lanctot, M., et al. Mastering the
game of go with deep neural networks and tree search.
_nature, 529(7587):484–489, 2016._
Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai,
M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., et al. Mastering chess and shogi by self-play
with a general reinforcement learning algorithm. arXiv
_preprint arXiv:1712.01815, 2017._
Tan, M. and Le, Q. Efficientnet: Rethinking model scaling
for convolutional neural networks. In International Con_ference on Machine Learning, pp. 6105–6114. PMLR,_
2019.
Urban, J. and Jakub˚uv, J. First neural conjecturing datasets
and experiments. arXiv preprint arXiv:2005.14664, 2020.
Vinyals, O., Babuschkin, I., Czarnecki, W. M., Mathieu, M.,
Dudzik, A., Chung, J., Choi, D. H., Powell, R., Ewalds,
T., Georgiev, P., et al. Grandmaster level in starcraft ii
using multi-agent reinforcement learning. Nature, 575
(7782):350–354, 2019.
Wang, M., Tang, Y., Wang, J., and Deng, J. Premise selection for theorem proving by deep graph embedding. In
_Advances in Neural Information Processing Systems, pp._
2786–2796, 2017.
Wu, Y., Schuster, M., Chen, Z., Le, Q. V., Norouzi, M.,
Macherey, W., Krikun, M., Cao, Y., Gao, Q., Macherey,
K., et al. Google’s neural machine translation system:
Bridging the gap between human and machine translation.
_arXiv preprint arXiv:1609.08144, 2016._
Wu, Y., Jiang, A. Q., Ba, J., and Grosse, R. Int: An inequality benchmark for evaluating generalization in theorem
proving. arXiv preprint arXiv:2007.02924, 2020.
Yang, K. and Deng, J. Learning to prove theorems
via interacting with proof assistants. _arXiv preprint_
_arXiv:1905.09381, 2019._
Zheng, K., Han, J. M., and Polu, S. Minif2f: a cross-system
benchmark for formal olympiad-level mathematics. arXiv
_preprint arXiv:2109.00110, 2021._
-----
**Formal Mathematics Statement Curriculum Learning**
**A. Related Work**
**Deep learning applied to premise selection and proof guidance** Early applications of deep learning to formal mathematics focused primarily on premise selection and proof guidance. DeepMath (Irving et al., 2016) explored the use of CNNs
and RNNs to predict whether a premise is useful to demonstrate a given conjecture. Their results were later improved with
FormulaNet (Wang et al., 2017) by the use of graph neural networks, reminiscent of NeuroSAT (Selsam et al., 2018). Proof
guidance consists in selecting the next clause to process inside an automated theorem prover. Loos et al. (2017) investigated
the use of models similar to DeepMath’s for proof guidance and demonstrated a significant uplift on the Mizar library.
More recently Firoiu et al. (2021) demonstrated the potential of deep learning techniques to be competitive with E prover’s
heuristics when applied to resolution calculus while training on fully synthetic data.
**Deep learning applied to automated theorem-proving** _HOList (Bansal et al., 2019a) proposes a formal environment_
based on HOL Light. They achieve their best performance (Bansal et al., 2019b) with a GNN model designed for premise
selection and the use of exploration. The same team studied the use of a skip-tree objective with Transformers on formal
statements (Rabe et al., 2020), demonstrating, along with GPT-f (Polu & Sutskever, 2020), the potential of leveraging
Transformers for formal reasoning. GamePad (Huang et al., 2018) and CoqGymn/ASTactic (Yang & Deng, 2019) introduce
environments based on the Coq theorem prover. ASTactic generates tactics as programs by sequentially expanding a
partial abstract syntax tree. Urban & Jakub˚uv (2020) studied the capability of GPT-2 to produce useful conjectures for the
Mizar library and IsarStep (Li et al., 2020) explored the synthesis of intermediate propositions in declarative proofs for
Isabelle/HOL using Transformers.
**B. Lean-gym**
lean-gym presents the following API:
- init-search: declaration → _tactic_state. Takes a declaration name (a theorem name from the loaded library)_
and initializes a search while setting the run-time environment at that particular declaration. It returns the initial tactic
state along with a fresh search_id and tactic_state_id.
- run_tac: (tactic_state, tactic) → _tactic_state. Takes a search_id and a tactic_state_id to identify a_
tactic state, as well as a tactic string to apply to it. It returns a new tactic state and its associated tactic_state_id.
Below is an example in-terminal trace demonstrating the use of lean-gym’s REPL interface:
$ lean --run src/repl.lean
["init_search", ["int.prime.dvd_mul", ""]]
{
"error":null,
"search_id":"0",
"tactic_state":"⊢∀ {m n : Z} {p : N}, nat.prime p →
`↑p | m * n →` p | m.nat_abs ∨ p | n.nat_abs",
"tactic_state_id":"0"
}
...
["run_tac",["1","1","apply (nat.prime.dvd_mul hp).mp"]]
{
"error":null,
"search_id":"1",
"tactic_state":"m n : Z, p : N, hp : nat.prime p, h : ↑p | m * n
_⊢_ p | m.nat_abs * n.nat_abs",
"tactic_state_id":"2"
}
...
Using lean-gym is virtually equivalent to opening a Lean editor at a specific theorem, deleting its proof and interacting
with Lean to reconstruct it.
-----
**Formal Mathematics Statement Curriculum Learning**
_Table 3. Mix and source of data involved in the updated WebMath pre-training._
Dataset Size Mix
Github Python 179 GB 25%
arXiv Math 10 GB 25%
Math StackExchange 2 GB 25%
PACT mix2 28 GB 17%
Math Overflow 200 M 5%
ProofWiki 30 M 2%
PlanetMath 25 M 1%
Providing a REPL interface over the standard input/output makes it very easy to integrate lean-gym from any programming
language. Writing a wrapper in Python, as an example, only takes a few dozen lines of code. Since lean-gym is a Lean
program, managing the loaded libraries is done directly using Lean’s own infrastructure (using leanpkg.toml), making
it quite straightforward to have access to both mathlib and miniF2F statements from the same lean-gym instance.
Note that lean-gym is stateful, meaning that distributing proof searches on multiple lean-gym instances requires to
track which instance is associated with which proof search. In practice, we were able to scale the use of lean-gym to
thousands of cores running thousands of proof searches in parallel. Finally, lean-gym’s REPL interface is blocking,
preventing inner-proof search parallelization, though this limitation can probably be removed in the future.
**C. WebMath**
Our updated WebMath pre-training dataset consists in the mix presented in table 3.
As demonstrated in table 3, we empirically up-weighted (compared to their token size) parts of WebMath with high-quality
mathematical content while making sure they don’t overfit (despite running >1 epochs for some of them). We also included
PACT mix2 directly in the WebMath pre-training to avoid having to sequence more than two pre-training phases to prepare
Lean models.
**D. Synthetic inequalities**
**D.1. Design**
The generator consists of three phases:
**Seed expressions generation** The first phase consists in generating seed expressions for which we track the sign. We start
by initializing an expression set E composed of tuples of expressions and sign constraints, by generating nv variable names
(letters) assumed strictly positive as well as nn integers (for which we know the sign). For NS rounds, we compose elements
of E using unary (log(·), log(1/·), sqrt(·)) or binary operations (+, −, ×, /, ∧, max, min) for which we can deduce the
sign based on the sign condition of the input expression(s) and re-inject the resulting expression and sign constraint in E.
This produces a set E of signed seed expressions of size nv + nn + NS.
**Inequality composition** The second phase consists in generating inequalities from well known inequality theorems (AMGM, Trivial inequality, Cauchy-Schwarz, Bernoulli, Young, Hölder) taking as input to these theorems expressions from E
based on the sign constraints required for each theorem. We finally compose these inequalities ND times using compositions
theorems detailed in D.2. The resulting inequality is a composed inequality of depth ND based on nv + nn + NS seed
expressions.
**Simplification** We finally post-process these inequalities so that they are parsable by Lean and run them through Lean’s
simp tactic for a final simplification.
_ND and NS together control for the difficulty of the resulting inequality. ND controls depth of composition, while NS_
controls for obfuscation as it increases the complexity of the input expressions to the composed inequalities. When sampling
inequalities, we nn = 4 and randomly sample 2 ≤ _nv ≤_ 8 at each generation. We report below examples of generated
inequalities for various values of ND and NS.
-----
**Formal Mathematics Statement Curriculum Learning**
**D.2. List of inequality composition theorems**
Below is the list of theorem names from mathlib that we use to compose inequalities together. One third of the time, we only
transform the current composed inequality with one of the following theorems:
- neg_le_neg
- inv_le_inv
- mul_self_le_mul_self
- div_le_one_of_le
We otherwise compose the current composed inequality with a newly generated inequality using the following theorems:
- mul_le_mul
- add_le_add
- div_le_div
- mul_le_mul_of_nonneg
- le_mul_of_ratio
**D.3. Examples**
|ND = 0 NS = 0|Col2|
|---|---|
|Compositions|AmGm a b (67:R) ((1:R)/(10:R)) ((1:R)/(10:R)) ((8:R)/(10:R))|
|Statement|theorem synthetic_ineq_nb_seed_var_0_depth_0_p_1 (a b : R) (h0 : 0 < a) (h1 : 0 < b) : (67:R) ^ ((8:R) / (10:R)) * b ^ (10:R)−¹ * a ^ (10:R)−¹ ≤ (8:R) / (10:R) * (67:R) + (10:R)−¹ * a + b * (10:R)−¹ := sorry|
|ND = 0 NS = 4|Col2|
|---|---|
|Compositions|Sqnonneg a ((a) + ((-68:R)))|
|Statement|theorem synthetic_ineq_nb_seed_var_4_depth_0_p_4 (a b : R) (h0 : 0 < a) (h1 : 0 < b) : (2:R) * (a * (a + -(68:R))) ≤ (a + -(68:R)) ^ 2 + a ^ 2 := sorry|
Compositions AmGm a b (67:R) ((1:R)/(10:R)) ((1:R)/(10:R)) ((8:R)/(10:R))
theorem synthetic_ineq_nb_seed_var_0_depth_0_p_1
(a b : R)
(h0 : 0 < a)
Statement (h1 : 0 < b) :
(67:R) ^ ((8:R) / (10:R)) * b ^ (10:R)[−]¹ *
a ^ (10:R)[−]¹ ≤ (8:R) / (10:R) * (67:R) +
(10:R)[−]¹ * a + b * (10:R)[−]¹ := sorry
Compositions Sqnonneg a ((a) + ((-68:R)))
theorem synthetic_ineq_nb_seed_var_4_depth_0_p_4
(a b : R)
(h0 : 0 < a)
Statement (h1 : 0 < b) :
(2:R) * (a * (a + -(68:R))) ≤
(a + -(68:R)) ^ 2 + a ^ 2 := sorry
-----
**Formal Mathematics Statement Curriculum Learning**
|ND = 4 NS = 4|Col2|
|---|---|
|Compositions|AddLeAdd Bernoulli 99 c AddLeAdd SelfDivConst ((a) / (f)) 6 LeMulOfRatio SelfDivConst c 70 DivLeDiv Cauchy ((a) / (f)) d c (log (((59:R) + f))) Young ((a) / (f)) a ((3:R)/(2:R)) ((3:R)/(1:R))|
|Statement|theorem synthetic_ineq_nb_seed_var_4_depth_4_p_13 (a b c d e f : R) (h0 : 0 < a) (h1 : 0 < b) (h2 : 0 < c) (h3 : 0 < d) (h4 : 0 < e) (h5 : 0 < f) : (1:R) + (99:R) * c + (a / f / (6:R) + a * (a / f) / ((d ^ 2 + a ^ 2 / f ^ 2) * (real.log ((59:R) + f) ^ 2 + c ^ 2))) ≤ ((a / f) ^ ((3:R) / (2:R)) / ((3:R) / (2:R)) + a ^ 3 / (3:R)) / (real.log ((59:R) + f) * d + a / f * c) ^ 2 * (c / (c / (70:R))) + a / f + (c + (1:R)) ^ 99 := sorry|
AddLeAdd
Bernoulli 99 c
AddLeAdd
SelfDivConst ((a) / (f)) 6
Compositions LeMulOfRatio
SelfDivConst c 70
DivLeDiv
Cauchy ((a) / (f)) d c (log (((59:R) + f)))
Young ((a) / (f)) a ((3:R)/(2:R)) ((3:R)/(1:R))
theorem synthetic_ineq_nb_seed_var_4_depth_4_p_13
(a b c d e f : R)
(h0 : 0 < a)
(h1 : 0 < b)
(h2 : 0 < c)
(h3 : 0 < d)
(h4 : 0 < e)
Statement (h5 : 0 < f) :
(1:R) + (99:R) * c + (a / f / (6:R) + a * (a / f) /
((d ^ 2 + a ^ 2 / f ^ 2) *
(real.log ((59:R) + f) ^ 2 + c ^ 2))) ≤
((a / f) ^ ((3:R) / (2:R)) / ((3:R) / (2:R)) +
a ^ 3 / (3:R)) /
(real.log ((59:R) + f) * d + a / f * c) ^ 2 *
(c / (c / (70:R))) + a / f + (c + (1:R)) ^ 99 := sorry
-----
**Formal Mathematics Statement Curriculum Learning**
**E. Example proofs from mathlib-train**
We present in this section original proofs found by our models from mathlib-train, compared with their ground-truth version.
**comap_eq_of_inverse**
|Statement|lemma comap_eq_of_inverse {f : filter α} {g : filter β} {φ : α → β} (ψ : β → α) (eq : ψ ◦ φ = id) (hφ : tendsto φ f g) (hψ : tendsto ψ g f) : comap φ g = f :=|
|---|---|
|Ground-truth|begin refine ((comap_mono $ map_le_iff_le_comap.1 hψ).trans _).antisymm (map_le_iff_le_comap.1 hφ), rw [comap_comap, eq, comap_id], exact le_rfl end|
|Model proof|begin refine le_antisymm _ (filter.map_le_iff_le_comap.1 hφ), refine λ s hs, _, rw mem_comap, use [ψ −¹’ s, hψ hs], rw [← preimage_comp, eq, preimage_id] end|
lemma comap_eq_of_inverse {f : filter α} {g : filter β}
Statement (h{φφ : : tendsto α → _β} ( φψ : f g) (h β →_ _α) (eq :ψ : tendsto ψ ◦_ _ψφ = id) g f) :_
comap φ g = f :=
begin
refine ((comap_mono $
map_le_iff_le_comap.1 hψ).trans _).antisymm
Ground-truth (map_le_iff_le_comap.1 hφ),
rw [comap_comap, eq, comap_id],
exact le_rfl
end
begin
refine le_antisymm _ (filter.map_le_iff_le_comap.1 hφ),
refine λ s hs, _,
Model proof rw mem_comap,
use [ψ _−¹’ s, hψ hs],_
rw [← preimage_comp, eq, preimage_id]
end
-----
**Formal Mathematics Statement Curriculum Learning**
**sum_range_sub_sum_range**
|Statement|lemma sum_range_sub_sum_range {α : Type*} [add_comm_group α] {f : N → α} {n m : N} (hn m : n ≤ m) : P k in range m, f k - P k in range n, f k = P k in (range m).filter (λ k, n ≤ k), f k :=|
|---|---|
|Ground-truth|begin rw [← sum_sdiff (@filter_subset _ (λ k, n ≤ k) _ (range m)), sub_eq_iff_eq_add, ← eq_sub_iff_add_eq, add_sub_cancel’], refine finset.sum_congr (finset.ext $ λ a, 〈λ h, by simp at *; finish, λ h, have ham : a < m := lt_of_lt_of_le (mem_range.1 h) hnm, by simp * at *〉) (λ _ _, rfl) end|
|Model proof|begin rw [← sum_Ico_eq_sub _ hnm], congr, apply finset.ext, simp [Ico.mem, *], tauto end|
lemma sum_range_sub_sum_range {α : Type*} [add_comm_group α]
Statement {f : k in range m, f k - N → _α} {n m : N} (hnm : n k in range n, f k = ≤_ m) :
P k in (range m).filter ( P _λ k, n ≤_ k), f k :=
beginP
rw [← sum_sdiff (@filter_subset _ (λ k, n ≤ k) _
(range m)), sub_eq_iff_eq_add,
`←` eq_sub_iff_add_eq, add_sub_cancel’],
refine finset.sum_congr
Ground-truth (finset.ext $ λ a, 〈λ h, by simp at *; finish,
_λ h, have ham : a < m :=_
lt_of_lt_of_le (mem_range.1 h) hnm,
by simp * at *〉)
(λ _ _, rfl)
end
begin
rw [← sum_Ico_eq_sub _ hnm],
congr,
Model proof apply finset.ext,
simp [Ico.mem, *],
tauto
end
-----
**Formal Mathematics Statement Curriculum Learning**
**prod_inv_distrib**
|Statement|lemma prod_inv_distrib : (Q x in s, (f x)−¹) = (Q x in s, f x)−¹ :=|
|---|---|
|Ground-truth|begin classical, by_cases h : ∃ x ∈ s, f x = 0, { simpa [prod_eq_zero_iff.mpr h, prod_eq_zero_iff] using h }, { push_neg at h, have h’ := prod_ne_zero_iff.mpr h, have hf : ∀ x ∈ s, (f x)−¹ * f x = 1 := λ x hx, inv_mul_cancel (h x hx), apply mul_right_cancel’ h’, simp [h, h’, ← finset.prod_mul_distrib, prod_congr rfl hf] } end|
|Model proof|begin classical; induction s using finset.induction_on with a s has ih, { simp, }, simp only [has, prod_insert has, mul_inv_rev’], finish end|
lemma prod_inv_distrib : ([Q] x in s, (f x)[−]¹) =
Statement ([Q] x in s, f x)[−]¹ :=
begin
classical,
by_cases h : ∃ x ∈ s, f x = 0,
{ simpa [prod_eq_zero_iff.mpr h, prod_eq_zero_iff]
using h },
{ push_neg at h,
Ground-truth have h’ := prod_ne_zero_iff.mpr h,
have hf : ∀ x ∈ s, (f x)[−]¹ * f x = 1 := λ x hx,
inv_mul_cancel (h x hx),
apply mul_right_cancel’ h’,
simp [h, h’, ← finset.prod_mul_distrib,
prod_congr rfl hf] }
end
begin
classical; induction s using
finset.induction_on with a s has ih,
Model proof { simp, },
simp only [has, prod_insert has, mul_inv_rev’],
finish
end
-----
**Formal Mathematics Statement Curriculum Learning**
**F. Example proofs from minif2f-{test,valid,curriculum}**
We present in this section proofs found by our models from minif2f-{test,valid,curriculum}, demonstrating some of the
capabilities emerging from our training procedure.
**F.1. Qualitative analysis of proofs**
We provide qualitative insights in the nature of the proofs found by our models, which we believe are useful to build a better
intuition of their capabilities beyond pass rate numbers. Throughout this section, we refer to statements and solutions found
by our models that are presented in Appendix F along with comments describing the specificity of each proof.
First, we observe that a large number of olympiad problems that are designed to be computationally challenging for humans
are rendered trivial for our models through the use of Lean tactics. As an example, mathd_numbertheory_447 which
is not necessarily considered straightforward for humans, can be closed in Lean by a simple refl (proof found by our
models).
In recent years, Lean’s mathlib community has developed high-powered tactics such as linarith/nlinarith
(solves (non)linear inequalities), norm_num (normalizes numerical expressions), simp (simplifies goals and hypotheses)
and ring (normalizes expressions in a ring). These tactics can be used with arguments to guide their underlying search
procedure. As mentioned in Zheng et al. (2021), we confirm here that our models acquire advanced capabilities to leverage
these high-level tactics by providing exogenous arguments which are not present in the current tactic state. The generation
of these exogenous arguments through language modeling seems to require a non-trivial amount of mathematical intuition.
imo_1964_p2, imo_1961_p1 and aime_1990_p15 are good examples of such uses.
We have also observed a number of proofs that require multiple non-trivial reasoning steps through the use of lowerlevel tactics such as use, have, or by_cases that generally involve producing a witness or chaining implications,
requiring the generation of context specific exogenous terms. These interesting reasoning steps are structurally different
from simple normalization, simplification and rewriting of hypotheses or goals because they heavily rely on our models
ability to generate meaningful cuts or witnesses. This capability is, in our opinion, the most exciting stepping stone
towards solving more challenging mathematical problems. See, aopsbook_v2_c8_ex1, amc12b_2020_p6 and
mathd_train_algebra_217 for examples of such proofs.
More generally, we also observe that proofs generated by our models have a distinctive style compared to proofs formalized
by humans. This stems in part from the model’s capability to leverage high-level tactics in a way that is challenging for
humans as discussed in this section (e.g. one-liners such as nlinarith [sq_nonneg (x - y), sq_nonneg (y
- z)] where humans would generally decompose the problem in a less machine-like way). Additionally, as a result of our
search procedure and despite the bias towards shorter proofs introduced by our value function, extraneous proofsteps (such
as reversion/introduction of hypotheses, or no-op rewrites) are often interleaved with useful ones, which rarely happens in
human formalizations.
-----
**Formal Mathematics Statement Curriculum Learning**
**imo_1961_p1**
|Natural language|Solve the system of equations: x + y + z = a x2 + y2 + z2 = b2 xy = z2 where a and b are constants. Give the conditions that a and b must satisfy so that x, y, z (the solutions of the system) are distinct positive numbers. Note: the formalized statement in miniF2F is a weaker problem as it focuses on the second part of the question, providing the actual conditions, and asking for a proof that the requirement entails them.|
|---|---|
|Model proof|theorem imo_1961_p1 (x y z a b : R) (h 0 : 0 < x ∧ 0 < y ∧ 0 < z) (h : x ̸= y) 1 (h : y ̸= z) 2 (h : z ̸= x) 3 (h : x + y + z = a) 4 (h : x^2 + y^2 + z^2 = b^2) 5 (h 6 : x * y = z^2) : 0 < a ∧ b^2 < a^2 ∧ a^2 < 3 * b^2 := begin revert_all, intros, rw mul_comm, split, { nlinarith [sq_nonneg (x - y), sq_nonneg (y - z)], }, split, { nlinarith [sq_nonneg (z - 1)], }, revert h h 4, 3 field_simp [mul_comm a b], rw [mul_comm, ← h 5], contrapose!, rw mul_comm at h , 6 rw mul_comm, intro h, nlinarith [sq_nonneg (x - y), sq_nonneg (y - z)] end|
|Comments|The model is able to close this problem by splitting into cases, contraposing for the last case and using nlinarith. It must be noted that the arguments for the first two nlinarith uses are not necessary, however the [sq_nonneg (x - y), sq_nonneg (y - z)] argument provided on the last line is crucial to close the goal and are completely exogenous (present in no form in the tactic state before).|
Solve the system of equations:
_x + y + z = a_
_x[2]_ + y[2] + z[2] = b[2]
Natural language _xy = z[2]_
where a and b are constants. Give the conditions that a and b must satisfy so that x, y, z (the
solutions of the system) are distinct positive numbers. Note: the formalized statement in miniF2F is
a weaker problem as it focuses on the second part of the question, providing the actual conditions,
and asking for a proof that the requirement entails them.
theorem imo_1961_p1
(x y z a b : R)
(h0 : 0 < x ∧ 0 < y ∧ 0 < z)
(h1 : x ̸= y)
(h2 : y ̸= z)
(h3 : z ̸= x)
(h4 : x + y + z = a)
(h5 : x^2 + y^2 + z^2 = b^2)
(h6 : x * y = z^2) :
0 < a ∧ b^2 < a^2 ∧ a^2 < 3 * b^2 :=
begin
revert_all,
intros,
Model proof rw mul_comm,
split,
{ nlinarith [sq_nonneg (x - y), sq_nonneg (y - z)], },
split,
{ nlinarith [sq_nonneg (z - 1)], },
revert h3 h4,
field_simp [mul_comm a b],
rw [mul_comm, ← h5],
contrapose!,
rw mul_comm at h6,
rw mul_comm,
intro h,
nlinarith [sq_nonneg (x - y), sq_nonneg (y - z)]
end
Comments The model is able to close this problem by splitting into cases, contraposing for the last case and
using nlinarith. It must be noted that the arguments for the first two nlinarith uses are not
necessary, however the [sq_nonneg (x - y), sq_nonneg (y - z)] argument provided
on the last line is crucial to close the goal and are completely exogenous (present in no form in the
tactic state before).
-----
**Formal Mathematics Statement Curriculum Learning**
**imo_1964_p2**
|Natural language|Suppose a, b, c are the sides of a triangle. Prove that a2(b + c −a) + b2(c + a −b) + c2(a + b −c) ≤3abc|
|---|---|
|Model proof|theorem imo_1964_p2 (a b c : R) (h 0 : 0 < a ∧ 0 < b ∧ 0 < c) (h : c < a + b) 1 (h : b < a + c) 2 (h : a < b + c) : 3 a^2 * (b + c - a) + b^2 * (c + a - b) + c^2 * (a + b - c) ≤ 3 * a * b * c := begin nlinarith [sq_nonneg (b - a), sq_nonneg (c - b), sq_nonneg (a - c), sq_nonneg (c - a)] end|
|Comments|The model is able to close an IMO problem in one-line. It correctly provides exogenous arguments to nlinarith, which are necessary to close the goal. Note that either one of the last two arguments in the sequence [sq_nonneg (b - a), sq_nonneg (c - b), sq_nonneg (a - c), sq_nonneg (c - a)] can be omitted.|
Suppose a, b, c are the sides of a triangle. Prove that
Natural language
_a[2](b + c −_ _a) + b[2](c + a −_ _b) + c[2](a + b −_ _c) ≤_ 3abc
theorem imo_1964_p2
(a b c : R)
(h0 : 0 < a ∧ 0 < b ∧ 0 < c)
(h1 : c < a + b)
(h2 : b < a + c)
(h3 : a < b + c) :
Model proof
a^2 * (b + c - a) + b^2 * (c + a - b) + c^2 * (a + b - c)
_≤_ 3 * a * b * c :=
begin
nlinarith [sq_nonneg (b - a), sq_nonneg (c - b),
sq_nonneg (a - c), sq_nonneg (c - a)]
end
Comments The model is able to close an IMO problem in one-line. It correctly provides exogenous arguments to
nlinarith, which are necessary to close the goal. Note that either one of the last two arguments in
the sequence [sq_nonneg (b - a), sq_nonneg (c - b), sq_nonneg (a - c),
sq_nonneg (c - a)] can be omitted.
-----
**Formal Mathematics Statement Curriculum Learning**
**aime_1990_p15**
|Natural language|Find ax5 + by5 if the real numbers a, b, x, and y satisfy the equations ax + by = 3, ax2 + by2 = 7, ax3 + by3 = 16, ax4 + by4 = 42. Note: the formalized statement in miniF2F provides the answer and asks for a proof of it.|
|---|---|
|Model proof|theorem aime_1990_p15 (a b x y : R) (h 0 : a * x + b * y = 3) (h 1 : a * x^2 + b * y^2 = 7) (h 2 : a * x^3 + b * y^3 = 16) (h 3 : a * x^4 + b * y^4 = 42) : a * x^5 + b * y^5 = 20 := begin revert_all, intros a b x y h h h h 4, 3 1 2 ring_nf at h h 2, 1 rw ← sub_eq_zero at h 1, nlinarith [sq_nonneg (x - y), sq_nonneg (a + b - 2), sq_nonneg (x + y - (2:R)), sq_nonneg (a - b - 2)] end|
|Comments|The model is able to close a challenging AIME problem by providing crucial exogenous arguments sq_nonneg (x - y) and sq_nonneg (x + y - (2 : R)) to nlinarith, which are required to close the goal (while the other two can be removed).|
Find ax[5] + by[5] if the real numbers a, b, x, and y satisfy the equations
_ax + by = 3,_
_ax[2]_ + by[2] = 7,
Natural language
_ax[3]_ + by[3] = 16,
_ax[4]_ + by[4] = 42.
**Note: the formalized statement in miniF2F provides the answer and asks for a proof of it.**
theorem aime_1990_p15
(a b x y : R)
(h0 : a * x + b * y = 3)
(h1 : a * x^2 + b * y^2 = 7)
(h2 : a * x^3 + b * y^3 = 16)
(h3 : a * x^4 + b * y^4 = 42) :
a * x^5 + b * y^5 = 20 :=
Model proof begin
revert_all,
intros a b x y h3 h1 h2 h4,
ring_nf at h1 h2,
rw ← sub_eq_zero at h1,
nlinarith [sq_nonneg (x - y), sq_nonneg (a + b - 2),
sq_nonneg (x + y - (2:R)), sq_nonneg (a - b - 2)]
end
Comments The model is able to close a challenging AIME problem by providing crucial exogenous arguments
sq_nonneg (x - y) and sq_nonneg (x + y - (2 : R)) to nlinarith, which are
required to close the goal (while the other two can be removed).
-----
**Formal Mathematics Statement Curriculum Learning**
**mathd_train_algebra_217**
|Natural language|Let f(x) = Ax + B and g(x) = Bx + A, where A ̸= B. If f(g(x)) −g(f(x)) = B −A, what is A + B? Note: the formalized statement in our curriculum provides the answer and asks for a proof of it.|
|---|---|
|Model proof|theorem mathd_train_algebra_217 (a b : R) (f g : R → R) (h 0 : ∀ x, f x = a * x + b) (h 1 : ∀ x, f x = b * x + a) (h : a ̸= b) 2 (h 3 : ∀ x, f (g x) - g (f x) = b - a) : a + b = 0 := begin revert_all, intros a b, intros f g, contrapose!, rintro 〈h 0, 〈h 1, h 2〉〉, use (0 : R), simp only [sub_eq_iff_eq_add, h 0, mul_zero] at *, simp only [*, zero_add], norm_num at h 0 end|
|Comments|The model is able to close the goal by contraposing, supplying a witness by the use of use (0 : R) and finally leveraging the simp and norm_num. This example demonstrates the model’s ability to chain multiple non-trivial steps of reasoning including the generation of witnesses.|
Natural language LetIf f f(g((xx) =)) − Axg( +f ( Bx)) = and B g( −x) =A, what is Bx + A A, where + B? Note A ̸=: the formalized statement in our B. _curriculum_
provides the answer and asks for a proof of it.
theorem mathd_train_algebra_217
(a b : R)
(f g : R → R)
(h0 : ∀ x, f x = a * x + b)
(h1 : ∀ x, f x = b * x + a)
(h2 : a ̸= b)
(h3 : ∀ x, f (g x) - g (f x) = b - a) :
a + b = 0 :=
begin
Model proof revert_all,
intros a b,
intros f g,
contrapose!,
rintro 〈h0, 〈h1, h2〉〉,
use (0 : R),
simp only [sub_eq_iff_eq_add, h0, mul_zero] at *,
simp only [*, zero_add],
norm_num at h0
end
Comments The model is able to close the goal by contraposing, supplying a witness by the use of use (0 : R)
and finally leveraging the simp and norm_num. This example demonstrates the model’s ability to
chain multiple non-trivial steps of reasoning including the generation of witnesses.
-----
**Formal Mathematics Statement Curriculum Learning**
**amc12b_2020_p6**
|Natural language|For all integers n ≥9, the value of (n + 2)! −(n + 1)! n! is always which of the following? (A) a multiple of 4 (B) a multiple of 10 (C) a prime number (D) a perfect square (E) a perfect cube.|
|---|---|
|Model proof|theorem amc12b_2020_p6 (n : N) (h 0 : 9 ≤ n) : ∃ x : N, (x:R)^2 = (nat.factorial (n + 2) - nat.factorial (n + 1)) / nat.factorial n := begin revert_all, intros n hn, use n + 1, field_simp [nat.factorial_ne_zero, pow_succ’], ring_exp end|
|Comments|The model exhibits here its ability to implicitly interpret factorial expressions by guessing the correct answer with the use of use n + 1 and straightforwardly proving its claim right after.|
For all integers n ≥ 9, the value of
(n + 2)! − (n + 1)!
Natural language _n!_
is always which of the following?
**(A) a multiple of 4** **(B) a multiple of 10** **(C) a prime number**
**(D) a perfect square** **(E) a perfect cube.**
theorem amc12b_2020_p6
(n : N)
(h0 : 9 ≤ n) :
_∃_ x : N, (x:R)^2 = (nat.factorial (n + 2) -
nat.factorial (n + 1)) / nat.factorial n :=
begin
Model proof revert_all,
intros n hn,
use n + 1,
field_simp [nat.factorial_ne_zero, pow_succ’],
ring_exp
end
Comments The model exhibits here its ability to implicitly interpret factorial expressions by guessing the correct
answer with the use of use n + 1 and straightforwardly proving its claim right after.
-----
**Formal Mathematics Statement Curriculum Learning**
**mathd_algebra_140**
|Natural language|The expression 24x2 −19x −35 can be written as (Ax −5)(2Bx + C), where A, B, and C are positive numbers. Find AB −3C. Note: the formalized statement in miniF2F provides the answer and asks for a proof of it.|
|---|---|
|Model proof|theorem mathd_algebra_140 (a b c : R) (h 0 : 0 < a ∧ 0 < b ∧ 0 < c) (h 1 : ∀ x, 24 * x^2 - 19 * x - 35 = (((a * x) - 5) * ((2 * (b * x)) + c))) : a * b - 3 * c = -9 := begin revert_all, rintro a b c h , 1 rw mul_comm, rw ←sub_eq_zero, field_simp, rw sub_eq_add_neg, rw ← eq_neg_iff_add_eq_zero, rw [mul_comm, ← sub_eq_add_neg, sub_eq_add_neg, ← sub_eq_add_neg], rw ←sub_eq_zero, simp only [mul_add, neg_mul_eq_neg_mul_symm, mul_comm, add_left_comm], norm_num, simp only [pow_two], intro h , 0 rw ← sub_eq_zero, linarith [h 1.2, h 1.1, h 3, h 1, h 2] 0 0 0 end|
|Comments|The model mostly "struggles" to make progress up to the last line. There, it presents us with the idea to specialize the statement given in h (which became h in the process of the proof) at 1 0 three consecutive natural numbers 1, 2, 3 which closes the goal with nlinarith. This proof is interesting as it demonstrates the model’s ability to evaluate symbolic expressions implicitly.|
Natural language The expressionpositive numbers. Find 24x[2] _− AB19x − −335C. can be written as Note: the formalized statement in (Ax −_ 5)(2Bx + miniF2F C), where provides the answer A, B, and C are
and asks for a proof of it.
theorem mathd_algebra_140
(a b c : R)
(h0 : 0 < a ∧ 0 < b ∧ 0 < c)
(h1 : ∀ x, 24 * x^2 - 19 * x - 35 =
(((a * x) - 5) * ((2 * (b * x)) + c))) :
a * b - 3 * c = -9 :=
begin
revert_all,
rintro a b c h1,
rw mul_comm,
rw ←sub_eq_zero,
field_simp,
Model proof rw sub_eq_add_neg,
rw ← eq_neg_iff_add_eq_zero,
rw [mul_comm, ← sub_eq_add_neg, sub_eq_add_neg,
`←` sub_eq_add_neg],
rw ←sub_eq_zero,
simp only [mul_add, neg_mul_eq_neg_mul_symm, mul_comm,
add_left_comm],
norm_num,
simp only [pow_two],
intro h0,
rw ← sub_eq_zero,
linarith [h1.2, h1.1, h0 3, h0 1, h0 2]
end
Comments The model mostly "struggles" to make progress up to the last line. There, it presents us with
the idea to specialize the statement given in h1 (which became h0 in the process of the proof) at
three consecutive natural numbers 1, 2, 3 which closes the goal with nlinarith. This proof is
interesting as it demonstrates the model’s ability to evaluate symbolic expressions implicitly.
-----
**Formal Mathematics Statement Curriculum Learning**
**aime_1984_p1**
|Natural language|Find the value of a + a + a + a + . . . + a if a 1, a 2, a . . . is an arithmetic progression with 2 4 6 8 98 3 common difference 1, and a + a + a + . . . + a = 137. Note: the formalized statement in 1 2 3 98 miniF2F provides the answer and asks for a proof of it.|
|---|---|
|Model proof|theorem aime_1984_p1 (u : N → Q) (h 0 : ∀ n, u (n + 1) = u n + 1) (h : P k in finset.range 98, u k.succ = 137) : 1 P k in finset.range 49, u (2 * k.succ) = 93 := begin revert_all, simp [finset.sum_range_succ], ring, simp [pow_succ, mul_comm, add_left_comm, add_assoc], simp [two_mul], rintro p, revert p, simp [add_comm], intros p hp, simp [hp], ring_nf, intros, nlinarith end|
|Comments|The model finds a rather short proof of the statement compared to typical human formalizations (see https://gist.github.com/MantasBaksys/ 3efd5c0d42f440d16c09076db42822f2).|
Find the value of a2 + a4 + a6 + a8 + . . . + a98 if a1, a2, a3 . . . is an arithmetic progression with
Natural language common difference 1, and a1 + a2 + a3 + . . . + a98 = 137. Note: the formalized statement in
_miniF2F provides the answer and asks for a proof of it._
theorem aime_1984_p1
(u : N → Q)
(h0 : ∀ n, u (n + 1) = u n + 1)
(h1 : k in finset.range 98, u k.succ = 137) :
k in finset.range 49, u (2 * k.succ) = 93 :=
begin
[P]
Prevert_all,
simp [finset.sum_range_succ],
ring,
Model proof simp [pow_succ, mul_comm, add_left_comm, add_assoc],
simp [two_mul],
rintro p,
revert p,
simp [add_comm],
intros p hp,
simp [hp], ring_nf,
intros,
nlinarith
end
Comments The model finds a rather short proof of the statement compared to typical
human formalizations (see https://gist.github.com/MantasBaksys/
3efd5c0d42f440d16c09076db42822f2).
-----
**Formal Mathematics Statement Curriculum Learning**
**aopsbook_v2_c8_ex1**
|Natural language|Consider the sequence { n } = 1, 2, 3, ... n+1 2 3 4 Prove that this sequence tends to 1 as n →∞|
|---|---|
|Model proof|theorem aopsbook_v2_c8_ex1 (u : N → R) (h 0 : ∀ n, u n = n / (n + 1)) : filter.tendsto u filter.at_top (N 1) := begin revert_all, simp [← nnreal.coe_one], norm_cast, intros, revert h, 0 assume h, simp [tendsto_const_nhds, ← nnreal.coe_one, h], revert u, assume f, norm_num, rw tendsto_iff_norm_tendsto_zero, assume H, convert tendsto_norm_zero.comp tendsto_one_div_add_at_top_nhds_0_nat, funext n, have h : (((n:R) + 1) : R) ̸= 0, 0 { norm_cast, exact n.succ_ne_zero, }, rw H, field_simp [h, norm_neg], 0 ring, rw [← sub_eq_zero], simp [← sub_eq_zero], simp [sub_eq_add_neg], end|
|Comments|An interesting example of a generated have statement, denoted by h, which is introduced as a cut 0 in order to simplify the expression containing divisions by using field_simp[h, norm_neg] 0 at a later step.|
Natural language Consider the sequence { _n+1n_ [} =][ 1]2 [,][ 2]3 [,][ 3]4 [, ...]
Prove that this sequence tends to 1 as n →∞
theorem aopsbook_v2_c8_ex1
(u : N → R)
(h0 : ∀ n, u n = n / (n + 1)) :
filter.tendsto u filter.at_top (N 1) :=
begin
revert_all,
simp [← nnreal.coe_one],
norm_cast,
intros,
revert h0,
assume h,
simp [tendsto_const_nhds, ← nnreal.coe_one, h],
revert u,
assume f,
Model proof norm_num,
rw tendsto_iff_norm_tendsto_zero,
assume H,
convert tendsto_norm_zero.comp
tendsto_one_div_add_at_top_nhds_0_nat,
funext n,
have h0 : (((n:R) + 1) : R) ̸= 0,
{ norm_cast, exact n.succ_ne_zero, },
rw H,
field_simp [h0, norm_neg],
ring,
rw [← sub_eq_zero],
simp [← sub_eq_zero],
simp [sub_eq_add_neg],
end
Comments An interesting example of a generated have statement, denoted by h0, which is introduced as a cut
in order to simplify the expression containing divisions by using field_simp[h0, norm_neg]
at a later step.
**mathd_numbertheory_447**
|Natural language|What is the sum of the units digits of all the multiples of 3 between 0 and 50? Note: the formalized statement in miniF2F provides the answer and asks for a proof of it.|
|---|---|
|Model proof|theorem mathd_numbertheory_447 : P k in finset.filter (λ x, 3|x) (finset.erase (finset.range 50) 0), (k % 10) = 78 := begin refl end|
|Comments|Because the predicate λ x, 3|x is registered as decidable over N, we can state the problem by using finset.filter, which is computable. Hence, refl is able to close the goal.|
What is the sum of the units digits of all the multiples of 3 between 0 and 50? Note: the formalized
Natural language
statement in miniF2F provides the answer and asks for a proof of it.
theorem mathd_numbertheory_447 :
k in finset.filter (λ x, 3|x)
(finset.erase (finset.range 50) 0), (k % 10) = 78 :=
Model proof beginP
refl
end
Comments Because the predicate λ x, 3|x is registered as decidable over N, we can state the problem by
using finset.filter, which is computable. Hence, refl is able to close the goal.
-----
| [
"Stanislas, Polu",
"Jesse Michael, Han",
"Kunhao, Zheng",
"Mantas, Baksys",
"Igor, Babuschkin",
"Ilya, Sutskever"
] | 2022-02-02T00:00:00 | ICLR 2023 | true | 0 | 17 | [
"Lean"
] | http://arxiv.org/abs/2202.01344 | https://arxiv.org/abs/2202.01344 | https://www.semanticscholar.org/paper/916a06a6d51aa93de27aac2f3e14faed08dd6706 |
From Calculation to Adjudication: Examining LLM judges on Mathematical Reasoning Tasks | To reduce the need for human annotations, large language models (LLMs) have been proposed as judges of the quality of other candidate models. LLM judges are typically evaluated by measuring the correlation with human judgments on generation tasks such as summarization or machine translation. In contrast, we study LLM judges on mathematical reasoning tasks. These tasks require multi-step reasoning, and the correctness of their solutions is verifiable, enabling a more objective evaluation. We perform a detailed performance analysis and find that the used judges are mostly unable to improve task performance but are able to pick the better model. Our analysis uncovers a strong correlation between judgment performance and the candidate model task performance. We observe that judges tend to choose the model of higher quality even if its answer is incorrect. Further, we show that it is possible to use statistics, such as the task performances of the individual models, to predict judgment performance. In an ablation, we either swap or mask the candidate answers and observe that judges often keep the original judgment, providing evidence that judges incorporate writing style in their judgments. In summary, we find that regularities in the judgments are quantifiable using statistical measures and provide various angles on exploiting them. | The analysis uncovers a strong correlation between judgment performance and the candidate model task performance and shows that regularities in the judgments are quantifiable using statistical measures and provides various angles on exploiting them. | ## From Calculation to Adjudication: Examining LLM judges on Mathematical Reasoning Tasks
**Andreas Stephan[1,2], Dawei Zhu[4], Matthias Aßenmacher[6,7],**
**Xiaoyu Shen[5], Benjamin Roth[1,3]**
1Faculty of Computer Science, 2UniVie Doctoral School Computer Science,
3Faculty of Philological and Cultural Studies, University of Vienna, Vienna, Austria
4Saarland University, Saarland Informatics Campus, 5Eastern Institute of Technology, Ningbo
6Department of Statistics, LMU Munich, 7Munich Center for Machine Learning (MCML)
**[Correspondence: [email protected]](mailto:[email protected])**
**Abstract**
To reduce the need for human annotations,
large language models (LLMs) have been proposed as judges of the quality of other candidate models. LLM judges are typically evaluated by measuring the correlation with human
judgments on generation tasks such as summarization or machine translation. In contrast, we
study LLM judges on mathematical reasoning
tasks. These tasks require multi-step reasoning, and the correctness of their solutions is
verifiable, enabling a more objective evaluation.
We perform a detailed performance analysis
and find that the used judges are mostly unable
to improve task performance but are able to
pick the better model. Our analysis uncovers
a strong correlation between judgment performance and the candidate model task performance. We observe that judges tend to choose
the model of higher quality even if its answer
is incorrect. Further, we show that it is possible
to use statistics, such as the task performances
of the individual models, to predict judgment
performance. In an ablation, we either swap or
mask the candidate answers and observe that
judges often keep the original judgment, providing evidence that judges incorporate writing style in their judgments. In summary, we
find that regularities in the judgments are quantifiable using statistical measures and provide
various angles on exploiting them.[1]
Question
Mr. Ruther sold 3/5 of his land and had 12.8 hectares left. How much
land did he have at first?
LLM MA
Mr. Ruther was left with 1 - 3/5 = <<1-3/5=0.4>>0.4 or 2/5 of his
landhis land which is equal to 12.8 hectares. So...###32.
LLM MB
Let x be the original land he had. 3/5x = 12.8, x = 12.8 / 3/5 =
<<12.8/3/5=20.8>>20.8.### 20.8.
Judge LLM
Answer A is correct. In Answer B, the equation is set up incorrectly.
If 12.8 hectares is the amount of land left after selling 3/5 of the
land, then 12.8 hectares represents 2/5 of... {"answer":"A"}
CoT text Final answer
Figure 1: In our problem setup two LLMs (MA and
_MB), provide candidate answers for a math problem,_
and a judge LLM has to decide which one is correct.
All three use chain-of-thought (CoT) reasoning (Wei
et al., 2022).
Much like judges in the real world, who are expected to be exact, fair, and unbiased, e.g., as defined in Bangalore Principles of judicial conduct
(Bangalore Principles, 2002), LLMs, when employed as judges, should be ethical and logical.
Already the philosopher Aristotle argued that the
virtuous actor exhibits the joint excellence of reason and character (Kraut, 2022). Previous works
investigate properties and biases of LLM judges on
generation tasks such as translation or summarization (Kim et al., 2024b; Liu et al., 2024). These are
typically evaluated using correlation with human
annotators and are thus inherently subjective.
In this work, we investigate LLM judges on
mathematical reasoning datasets (see Figure 1).
These need complex multi-step reasoning, and the
solution is verifiable, which allows us to investigate
the relationship between judge and candidate models in a principled manner. We base our analysis on
four large (more than 30B parameters) LLMs and
four small (less than 10B) LLMs on three mathe
**1** **Introduction**
The automatic evaluation of machine learning models promises to reduce the need for human annotations. Specifically, the LLM-as-a-judge paradigm
(Zheng et al., 2023) has gained traction, aiming to
assess or compare the quality of generated texts
automatically. This approach is beneficial for
automated data labeling (Tan et al., 2024), selfimprovement of LLMs (Wu et al., 2024), and ranking LLMs with respect to specific tasks (Zheng
et al., 2023).
1Code: [email protected]:AndSt/llm_judges.git
-----
**2** **Related Work**
**2.1** **LLM as Judges**
Using LLMs as judges to evaluate text generated
by LLMs, including their own outputs, has recently attracted significant interest because it reduces the need for human annotation (Zheng et al.,
2023). Commonly, large frontier models are used
as judges. Applications include the automatic assessment of language model capabilities and, e.g.,
determining which model performs better on a
given task (Zheng et al., 2023) and reinforcement
learning from AI feedback by automatically generating data for preference optimization (Bai et al.,
2022; Wu et al., 2024).
Various methods exist to make judgments
(Zheng et al., 2023; Liusie et al., 2024). One approach is pairwise selection (Wang et al., 2024a),
where two answers are presented, and the model is
asked to select the better one. Another approach
is pointwise grading (Li et al., 2024), where the
model is asked to assign a grade based on a predefined scale, and the answer with a better grade
is chosen. Judgment prompts may involve reference solutions or not. Another body of research
explicitly trains models to act as judges (Kim et al.,
2024a; Wang et al., 2024a) or closely related, as
reward models (Wang et al., 2024b; Li et al., 2024).
The effectiveness of LLMs as judges is typically
assessed by measuring the correlation or overlap
with human judgments (Zheng et al., 2023; Kim
et al., 2024b). In contrast, we focus on difficult
tasks with a concrete final answer. Finally, we want
to stress that several works caution for the use of
LLM judges as experts (Bavaresco et al., 2024;
Koo et al., 2023; Raina et al., 2024). In a similar
vein, we aim to understand regularities and their
shortcomings.
**2.2** **Biases in LLM-as-a-judge**
Human-annotated data inherently reflects the annotators’ biases and opinions. These biases can be
detrimental or (intentionally) beneficial, depending on the goals of the annotation process (Plank,
2022). Similarly, several studies have explored the
biases present in LLM judges:
One linguistic bias is ordering bias (Zheng et al.,
2023; Koo et al., 2023), where a judge gives a
different answer depending on the order in which
answers are presented. Panickssery et al. (2024)
note that it is possible to interpret position bias as
a sign that the model is unsure. There are multiple
matical reasoning datasets.
Our experiments contain a detailed performance
examination, confirming that larger models are generally better judges (Zheng et al., 2023). We find
that only the best-tested model, Qwen 2 72B, consistently improves task performance if we evaluate
the judged samples, but all tested judges likely pick
the better model for a given task.
We investigate subsets with one correct and one
incorrect candidate answer. We uncover a correlation between judgment performance and task performance of the candidate models, showing that
judges tend to select incorrect answers from better
models. Thus, we hypothesize that judges have access and rely on the superior writing styles of larger
models instead of solely analyzing the reasoning.
When we divide the datasets into buckets of model
agreement, we observe that agreement is a proxy
for sample difficulty.
Motivated by these regularities, we analyze
whether it is possible to predict judgment performance and find that task performances of judge and
candidate LLMs explain most of the variance. We
hypothesize that judges incorporate writing style
into their judgments. Thus, we predict individual
judgments using statistical and transformer-based
models and achieve above-chance performance,
supporting our hypothesis.
Lastly, we test how judgments are affected by
perturbing numeric values in responses by 1) swapping results and 2) masking numeric values. Our
findings reveal that judges largely retain original
judgments, providing further evidence that judges,
in large part, base their decisions on writing style.
In summary, our contributions are as follows:
1. We conduct an in-depth performance analysis
of LLM judges for mathematical reasoning
tasks.
2. Our analysis reveals a correlation between
the judgment and candidate task performance,
providing a novel statistical angle on the analysis of LLM judges.
3. We show that statistics such as task performance or agreement of candidate models are
indicative of judgment performance.
4. After systematically perturbing the candidate
answers, we observe that judges often keep
their original judgments, providing evidence
that judgments are also based on writing style.
-----
works (Xu et al., 2024; Panickssery et al., 2024; Liu
et al., 2024) that find evidence for self-bias or selfpreference. Koo et al. (2023) provide a benchmark
for analyzing cognitive biases. West et al. (2024)
and Oh et al. (2024) explore the “Generative AI
Paradox” where generating solutions is easier for
the LLM than analyzing them, unlike humans who
typically find analysis easier than generation.
In this work, we aim to establish a better understanding of underlying regularities that relate
judgments to statistics such as model performance.
**3** **General Setup**
In the following, we describe the problem setting,
including the used notation, and the general experimental setting including used models and datasets.
**3.1** **Problem Description**
In this work, we consider two models, denoted by
_MA, MB_ **M, providing candidate solutions for a**
_∈_
sample of a dataset D and a judge model MJ **M,**
_∈_
which is tasked to select, to “judge”, whether it
prefers the solutions of the models MA or MB. The
solutions are represented by the random variables
_A and B. We consider the events that solutions are_
true (A = T ), false (A = F ), or that their solution
is the same (A = B). We denote the judgment of
the judge MJ by the random variable ∆J, which
can either be correct (∆J = T ), incorrect (∆J =
_F_ ) or choose a specific model MA (∆J = MA).
Given that the final answer is either correct or
incorrect, we can break the probability of the judge
making a correct judgment P (∆J = T _|A, B, D)_
given a sample of a dataset D and the answers of
two models MA, MB down into the following four
cases:
_P (∆J = T |A, B, D)_ (1)
= _P (∆J = T |A = X, B = Y, D)P (A = X, B = Y |D)_
(X,YX )∈C
= P (A = T, B = T |D)
+ P (∆J = T |A = T, B = F, D)P (A = T, B = F |D)
+ P (∆J = T |A = F, B = T, D)P (A = T, B = F |D)
from multi-step CoT reasoning. For all datasets,
we use accuracy as the performance metric.
**AQUA-RAT (Ling et al., 2017) is a dataset to test**
the quantitative reasoning ability of LLMs. Unlike
the other two datasets, the questions are multiplechoice. GSM8K (Cobbe et al., 2021) consists of
grade school math word problems. The answers
are free-form numbers. MATH (Hendrycks et al.,
2021) contains challenging competition mathematics problems. Find more details in Appendix A.1
**3.3** **Models**
We evaluate the performance of openly available
LLMs, including four large models Qwen 2 72B
(Yang et al., 2024), Llama 3 70B (AI@Meta, 2024),
_Yi 1.5 34B (Young et al., 2024), Mixtral 8x7B_
(Jiang et al., 2024) and four small models, namely
_Llama 3 8B (AI@Meta, 2024), Gemma 1.1 7B_
(Gemma Team et al., 2024), Mistral 7B v0.3 (Jiang
et al., 2023), and Mistral 7B v0.1 (Jiang et al.,
2023). We use the chat- or instruction-tuned model
variants and test each model as a candidate answer
generator and as a judge. More information is in
Appendix A.2.
**3.4** **Inferences**
This section describes the candidate answer generation and the judgment generation. Find more
information on prompts and hardware details in
Appendix A.
**Candidate answer generation.** To judge two
candidate answers (including of the same model),
we sample two initial CoT solutions for each model
using 4-shot prompting. We set the temperature to
0.9 to get two different solutions.
**Judgements.** We choose the first candidate generation for each model and generate judgments for
all 36 unique model combinations.[2] If both models are the same, we take the second initial generation. We accommodate positional bias (Zheng
et al., 2023; Koo et al., 2023) by evaluating the two
candidate answers in both possible orders for each
question and then taking the average correctness of
the judgments as the final assessment. The judge
_has to choose if the first or second answer is cor-_
rect. The prompt is zero-shot and applies CoT, the
temperature is set to 0 for deterministic generation
results.
2We consider all pairs from the eight LLMs, including
self-pairing, yielding 8+22−1 = 36 combinations.
where C = (T, F )[2]. Note that in cases where
both answers are correct or incorrect imply that the
judgment is also either correct or incorrect respectively, i.e., P (∆J = T _|A = T, B = T_ ) = 1 and
_P_ (∆J = T _|A = F, B = F_ ) = 0.
**3.2** **Datasets**
The experiments encompass three mathematical
reasoning datasets where models highly benefit
-----
|Col1|Qwen 2 72B Llama 3 70B Yi 1.5 34B Mixtral 8x7B|Llama 3 8B Gemma 1.1 7B Mistral 7B v0.3 Mistral 7B v0.1|
|---|---|---|
|AQUA_RAT (1) P(∆J = T|A, B, D) GSM8K MATH|66.05 55.98 62.2 57.08 77.06 72.41 72.64 68.81 29.66 24.64 26.69 23.60|51.59 53.52 54.96 51.41 65.88 65.47 68.57 63.28 22.35 21.66 22.68 19.91|
|AQUA_RAT (2) P(∆J = T|A ̸= B, D) GSM8K MATH|53.79 45.04 49.73 44.47 63.16 59.33 56.01 46.68 25.04 22.14 22.22 18.29|36.56 40.48 38.26 36.43 41.29 38.71 42.59 39.72 17.04 16.00 16.88 15.22|
|AQUA_RAT (3) P(∆J = T|{A, B} = {T, F}, D) GSM8K MATH|73.13 64.63 68.66 63.26 85.65 81.32 76.84 64.97 80.48 73.01 71.37 61.03|52.74 58.04 54.54 52.45 57.87 54.16 59.58 55.48 58.04 54.70 56.27 50.91|
Table 1: Performance of judge LLMs in three cases: (1) accuracy on all samples, (2) accuracy where models MA
and MB disagree, and (3) accuracy where only one model is correct. Results are averaged over all pairs (MA, MB),
with the highest accuracy in bold and the second highest underlined.
**4** **General Performance**
The experiments have multiple degrees of freedom:
judges, candidate models, and datasets. Therefore, we first examine judgments per dataset, and
secondly, we investigate judgments per candidate
model pair. Afterwards, we provide evaluations for
two applied questions.
**4.1** **Performance per dataset**
We begin by examining the judgment performance,
i.e., how often the judge picks a correct answer,
across different datasets. Therefore, we average
the performance across all model pairs (MA, MB).
**Setup.** Table 1 considers three cases where each
case focuses on a specific subset of the datasets:
_Case (1) investigates the observed task perfor-_
mance P (∆J = T _|A, B, D) where we evaluate_
the task performance using the answers chosen
by the judges. Note that this includes samples
where both candidate models give the same answer. Case (2) asks how often judges choose a
correct answer given that the answers differ, i.e.,
_P_ (∆J = T _|A ̸= B, D). Note that this may (and_
often does) include cases where both answers are
incorrect. Case (3) gives the probability that the
judge chooses the correct answer given that one
answer is correct, and the other answer is incorrect,
formally P (∆J = T _|A ̸= B, T ∈{A, B}, D)._
**Results.** We observe that large models outperform smaller models. Specifically, we see that
Qwen 2 72B is the best judge, followed by Yi 1.5
34B. The performance of Llama 3 70B is, on average, comparable to that of Yi 1.5 34B. Note that
performance in Case (1) and Case (2) is often quite
low, especially for MATH, as there are many cases
where the judge can only choose wrong answers.
Importantly, we observe that smaller models with
fewer than 10B parameters are unreliable judges.
Especially, in Case (3), where a correct answer is
(a) Qwen 2 70B (b) LLama3 70B
(c) Yi 1.5 34B (d) Mixtral 8x7B
Figure 2: Observed performance P (∆J = T _|A, B, D)_
of four judge LLMs (a-d) in evaluating various model
pairs, averaged across all datasets.
provided, smaller models only achieve an accuracy
of around 55%, barely better than random chance.
Therefore, we focus on the four larger models as
judges in the subsequent analysis.
**4.2** **Performance per model combination**
The comparative performance of model pairs offers
insights into which model is better for the specific
task or which combination of models yields the
best results.
**Setup.** Figure 2 illustrates the final performance
_P_ (∆J = T _|A, B), indicating the probability of a_
judge choosing a correct answer given two models
_A and B. The results are averaged over datasets_
and presented as an upper triangular matrix due to
symmetry. If both models in a pair are the same,
_A = B, we employ the second response generated_
with temperature sampling to introduce variation.
We report the performance of all models used as
judges in the Appendix B in Table D.
-----
Figure 3: Amount of model pairs (MA, MB) where
the answers chosen by the judge achieve a higher task
performance than the models individually (green). The
blue bar only considers models where the judge is at
least as good as the candidate models.
**Results.** We observe that the best performance
is achieved when both the candidate answers and
the judge are the highest-performing model, Qwen
2 72B. An analysis of the first rows (cf. Figure
2) reveals a notable trend: The final performance
declines when comparing the output of a strong
model against a mediocre model (e.g., Llama3-8B)
but then improves again when compared against the
weakest model. This suggests that judging becomes
more challenging when distinguishing between the
correct answers of a strong model and the incorrect
answers of a mediocre one, compared to discerning
the outputs of a bad one.
**4.3** **Do judges elicit task improvement?**
One use case for LLM judges is to improve task
performance. A potential application is to train on
answers chosen by the judge (Yuan et al., 2024).
**Setup.** Therefore, we test how often the performance of the answers chosen by the judge is better than the performance of the individual models. Formally, for all pairs of models MA, MB and
datasets D, how often is the observed performance
_P_ (∆J = T _|A, B, D) larger than max{P_ (A =
_T_ _|D), P_ (B = T _|D)}? In Figure 3 the green bar_
tests all model pairs, and the blue bar only pairs
where the judge is at least as good as the candidate
models, i.e., P (J|D) ≥ max{P (A|D), P (B|D)}.
The task performances of all models are given in
the Appendix B in Table 9.
**Results.** We see that only Qwen 2 72B increases
the performance reliably. However, it is easier for
the judge to improve performance if it compares
answers of less or equally good candidate models.
Figure 4: Percentage of model pairs (MA, MB) where
a judge picks a better model MA (meaning P (A =
_T_ _|D) > P_ (B = T _|D)), by selecting more answers of_
_MA than from MB._
**4.4** **Does the judge prefer the better model?**
Another application of LLM judges is whether they
can accurately identify which model performs better for a given task. This is crucial if we want to
rank LLMs by their capabilities or if a practitioner
wants to decide which model to deploy.
**Setup.** To assess this, we evaluate the frequency
with which a judge selects the superior model.
For a candidate model pair MA, MB **M, al-**
_∈_
ways assume they are ordered, such that P (A =
_T_ _|D) > P_ (B = T _|D)._ Then, specifically,
we determine the proportion for which the judge
chooses MA more often than MB, or formally,
how often is P (∆J = MA|A, B, D) > P (∆J =
_MB_ _A, B, D) for all candidate pairs and datasets._
_|_
**Results.** The judges consistently perform well in
the selection of the better model. Notably, we find
that Qwen 2 72B can only not rank the pair Mistral 7B v0.1 and v0.3 on the MATH dataset. This
issue appears minor, as both models exhibit similarly poor performance on the challenging MATH
dataset (with accuracies of 6.13% and 3.10%, respectively), meaning most judgments compare two
wrong answers. Notably, already the worst judge,
Mixtral 8x7B, performs well. In summary, we see
that judges are more capable of aggregate-level
rankings than instance-level rankings.
**5** **Analysis of Subsets**
We investigate properties that occur when we use
subsets based on the correctness of models or agreement between models.
-----
(a) Qwen2 72B (b) LLama2 70B
(c) Yi 1.5 43B (d) Mixtral 8x7B
Figure 5: Judges’ accuracy vs. performance gap between two candidate models MA and MB. Each point
represents a subset where MA is correct and MB is incorrect. The color reflects the size of these subsets.
**5.1** **Do task performances correlate with**
**judgments?**
We consider the subset of highest practical relevance where one candidate model is correct, and
one candidate model is incorrect. The goal is to investigate the relationship between candidate model
task performance and judgment performance.
**Setup.** For all model pairs _MA, MB_ _∈_
**M, MA ̸= MB we analyze subsets where MA is**
correct, and MB is incorrect. Note that we can
always order MA and MB this way. Each plot in
Figure 5 shows the relationship between judge performance, P (∆J = T _|A = T, B = F_ ) (Y-axis)
and candidate model performance gap of MA and
_MB, i.e., P_ (A = T _D)_ _P_ (B = T _D) (X-axis)._
_|_ _−_ _|_
Examples of these subsets and their corresponding
performances are in Appendix C in Table 10.
**Results.** The analysis reveals a strong correlation
(Pearson’s r[2] _> 0.69) between candidate model_
performance gap and judgment accuracy. If the
performance gap is negative, we consider subsets
where larger models are incorrect. Judges favor
answers from larger models even when they are
incorrect on these subsets. We hypothesize that
this bias arises because larger models exhibit a specific writing style, articulating their responses more
convincingly, thereby misleading the judges. This
finding aligns with previous research identifying
self-bias (Xu et al., 2024; Panickssery et al., 2024;
Liu et al., 2024). However, our results indicate that
this bias extends more broadly to the inherent qual
(a) Performance on all comparisons.
(b) Performance on comparisons with one correct
and one incorrect answer.
Figure 6: Judge performance by agreement bucket. E.g.,
bucket S3 (X-axis) means that all eight models gave
together three different answers. Note that AQUA-RAT
is multiple-choice with maximally six answers.
ity of the underlying models on reasoning datasets.
However, this is not necessarily a critical issue in
practice, as the larger model tends to answer correctly more often (as indicated by the color of the
points in Figure 5.
**5.2** **Does judgment quality depend on models’**
**agreement?**
We are interested in whether the level of agreement
among models, i.e., how many models give a different answer for a sample, impacts the performance
on the respective subset.
**Setup.** We define disagreement buckets Sj,
where each bucket contains instances for which
exactly 1 ≤ _j ≤_ 8 unique answers were given
across all models. Formally, we set
_i_ _D_ _MA(i)_ _MA_ **M** = j
_{_ _∈_ _| |{_ _|_ _∈_ _}|_ _}_
_Sj =_
where MA(i) is the answer of model MA for instance i. We analyze the results in two contexts: all
comparisons, including those where both answers
are correct or incorrect (cf. Figure 6(a)), and only
instances where exactly one answer is correct (cf.
Figure 6(b)). We average the performances of all
-----
_P_ (∆J = T | · · · · · · )
↓ Features \ Condition → _A, B_ _A ̸= B_ _{T, F_ _} = {A, B}_
(1) _P_ (J), P (A), P (B) 97.50 90.20 59.20
_P_ (A = B),
(2) 76.00 54.90 49.90
_P_ (J = A|A ̸= B)
Table 2: Coeffictions of Determination (R[2], higher is
better) for linear regression using the different feature
sets as covariates (rows) and different target variables
defined by the condition (columns). All values are significant (p < 0.001) as per an Overall-F-Test.
judges and all candidate pairs. Find per-judge plots
in Appendix C in Figure 10.
**Results.** Figure 6(a) shows that when all models
agree (bucket S1), the performance is nearly 100%,
indicating unanimous agreement usually means correctness. As disagreement increases, performance
expectedly decreases. Thus, model agreement is a
proxy for sample difficulty. In 6(b), where a correct and an incorrect answer exists, performance
remains relatively stable across disagreement buckets for datasets with free-form answers, such as
GSM8K and MATH. However, for AQUA-RAT,
performance degrades as disagreement rises.
**6** **Prediction of Judgements**
We investigate whether predicting the judgments’
outcomes is feasible. Firstly, we aim to predict
performance statistics. Secondly, we aim to predict
individual judgments.
**6.1** **Can we predict judgment performance?**
On the subset where exactly one answer is correct,
we found a strong correlation between judgment
performance and candidate task performances.
This hints at regularities within the judging process, thus we aim to predict judge performance
using model statistics.
**Setup.** We fit six different linear regression models using the judgment performances as the target variables Y, including all variations of judges,
model pairs MA, MB **M, and datasets D. Re-**
_∈_
garding the covariates X in the model, we distinguish between two setups: In Case (1), we
solely use the task performances P (X|D), X ∈
_{J, A, B} of judge and candidate models, to pre-_
dict judgment performance. In Case (2), we utilize statistics available without knowledge of the
ground truth. The features for this case are the probability of agreement between the candidate models
|↓Model \Judge →|Qwen 2 72B Llama 3 70B Yi 1.5 34B Mixtral 8x7B|
|---|---|
|(1) TF-IDF + RF 60.78 61.37 60.77 58.69 (2) RoBERTa 68.14 66.49 67.03 63.91||
Table 3: Accuracy of predicting LLM judges’ decisions
using Random Forest (RF) and RoBERTa classifiers.
_P_ (A = B _D) and the probability of model MA_
_|_
being chosen. Since we are not specifically interested in the individual features’ effects, but rather
in their ability to explain the variation of judgment
performance, we rely on the coefficient of determination, R[2], for evaluation (Fahrmeir et al., 2013,,
see Appendix E).
**Results.** The results are shown in Table 2 (excluding data sets from the probability formulas
for simplicity). We observe that the performancerelated features of the models can almost perfectly
explain the variation in final judgment performance
(R[2] = 97.50%), also when conditioning only on
the subset of differing answers (R[2] = 90.20%).
Logically, P (A) and P (B), i.e., P (A|D), P (B|D)
respectively, have significant[3] explanatory power
for judgment performance, as they encompass all
correct answers. In Case (2), we still observe a relatively high R[2] value, indicating that the features
can explain 50% of the target’s variance.
**6.2** **Can we predict which individual**
**judgments?**
We hypothesize that judgments are biased towards
larger or better models because they incorporate
linguistic cues or writing style into their judgments
rather than purely relying on reasoning assessment. Therefore, we train a classifier to understand
whether we can predict individual judgments.
**Setup.** We separate all comparisons made per
judge into training, validation, and test splits and
train two classifiers. The test accuracy is reported
in Table 3. The first model utilizes TF-IDF vectorization. We create two independent vectorizers
for both answers. The resulting features are concatenated. A RandomForest classifier (Breiman,
2001) is then trained on these combined features.
The second model is a RoBERTa model (Liu et al.,
2020) trained on the full prompt presented to the
judge. Refer to Appendix D for the training details
of both models.
3We test statistical significance using an Overall-F-Test for
each fitted model. Further details are in Appendix E.
-----
introduced noise and heavily bases its decision on
the writing style. Interestingly, in a substantial
amount of samples (up to 17%) the judge refuses
to make a judgment. On a positive note, manual
inspection revealed that the model often realizes
that the original answers were perturbed.
**8** **Discussion**
**Style and Quality.** Our experiments suggest a
relation between judgment and candidate task performance (cf. Section 5) and a relation between
judgment and writing style (cf. Sec. 6 and 7).
We hypothesize these two are interconnected and
facets of the same underlying bias. When models become better, e.g., by being trained on larger
amounts of data, their ability to write convincingly
increases. Conversely, when an LLM demonstrates
an increased ability to write convincingly, it likely
acquires a more nuanced grasp of what humans perceive as compelling. This enhanced understanding
likely also extends to task performance.
**Generalizability of approach.** Our in-depth
analysis utilizes Formula (1) to segment judgment
data based on correctness criteria, allowing for targeted investigation of specific subsets. This approach is generalizable and transferable to other
NLP tasks, such as summarization. By incorporating discrete signals such as text topics, a similar
derivation of the judgment probability is possible.
**9** **Conclusion**
We conducted a thorough analysis of LLM judges
on mathematical reasoning tasks. We include a
detailed judgment performance evaluation of eight
models on three datasets. We find that larger models are generally better than smaller models and
that judges succeed in detecting the more capable
model. Our analysis reveals a strong correlation between judgment performance and task performance
of the models providing candidate answers which
shows that judges tend to choose larger or better
models. We hypothesize that LLM judges incorporate writing style into their judgments instead of
purely analyzing the reasoning. We provide two experiments to provide evidence for this hypothesis.
Finally, we want to emphasize the importance of
impartiality and fairness in the role of LLM judges,
similar to human judges in the real world. Our
research introduces methods to quantify biases in
favor of larger or better models, thereby offering a
means to measure the reduction of such biases.
|Col1|Swapped = ̸= Refused|Masked = ̸= Refused|
|---|---|---|
|Qwen2-72B Llama-3-70B Yi-1.5-34B Mixtral-8x7B|75.75 12.20 12.05 78.81 13.72 7.47 74.80 14.47 10.73 71.19 20.69 8.12|56.40 26.06 17.53 63.44 29.22 7.33 44.89 37.31 17.80 60.34 29.26 10.40|
Table 4: Analysis of judgments where results in candidate answers were either swapped or numbers masked.
We report how many judgments stay the same (=), different (=)̸, or where judges refused to follow the output
format (Refused).
**Results.** The random forest model achieves an
accuracy of approximately 60%, demonstrating performance above random chance. This suggests that
specific keywords or phrases influence judges. The
RoBERTa model surpassed this, reaching nearly
70% accuracy. Taken together, these results suggest that judge decision-making is a multi-faceted
process. While specific linguistic cues appear to
hold influence, a substantial portion of the decisionmaking process seems to be based on other contextual factors or broader reasoning.
**7** **Perturbation of Results**
We aim to gain a deeper understanding of the extent to which writing style affects the final judgment. Therefore, we create an experiment perturbing the candidate answers and examine whether
this changes the judgment.
**Setup.** We examine two perturbtations: Swap and
_Mask. In the Swap experiment, we swap the final_
answer from model MA with that of model MB,
while keeping their CoT reasoning unchanged. In
the Mask experiment, we anonymize all numbers
in both the CoT reasoning and the final answer
by replacing them with “X”.[4] Table 4 shows the
frequency with which the judge selects the same
answer (=), a different answer (≠ ), or fails/refuses
to follow the output format and make a decision
(Refused). Refer to Appendix F for specific examples.
**Results.** We observe that the new judgments in
more than half the cases agree with the original
judgment. In the Swap experiment, they even agree
on average by 75% of the cases. We deduce that
the judge is largely unaffected by the artificially
4In preliminary runs, we observed that masking caused
significant confusion for the judge models. To address this,
we adapt the judgment prompt in this setting to include the
instruction: “Only analyze the reasoning! All numbers have
been replaced with ’X’ to help you focus on the reasoning.”
-----
**Limitations**
Our analysis is primarily focused on mathematical reasoning datasets, which allows us to explore
judgments through the lens of correctness within
specific subsets. While this approach provides valuable insights, it limits the generalizability of our
findings to other tasks or domains. Based on the
fact that the investigated datasets are complex, in
the sense that they need multi-step reasoning to
be solved, and based on the fact that there is no
thorough investigation of LLM judges on mathematical reasoning datasets yet, we think this work
is a valuable contribution.
In our experiments, we focus on testing a single, specific prompt. It is common knowledge that
LLMs are highly sensitive to variations in prompt
phrasing, which can substantially influence their
performance. Nevertheless, it is impossible for us
to meet the computational demands necessary to
run our experiments with multiple prompts.
In this study, we intentionally concentrate on
open-weight models, motivated by our strong belief in the principles of open science. Openweight models offer transparency and reproducibility, which are critical for advancing scientific understanding. However, we note that it is also interesting to study closed models to understand potential
differences. Still, we are committed to research on
open-weight models because we believe it benefits
the community more.
**10** **Acknowledgements**
This research was funded by the WWTF through
the project ”Knowledge-infused Deep Learning
for Natural Language Processing” (WWTF Vienna Research Group VRG19-008) and through
the project "Transparent and Explainable Models" (WWTF ICT19-041). MA is funded by the
Deutsche Forschungsgemeinschaft (DFG, German
Research Foundation) as part of BERD@NFDI grant number 460037581. Further, we thank Jan
Philip Wahle and Pedro Henrique Luz de Araujo
for fruitful discussions and their constructive feedback.
**References**
[AI@Meta. 2024. Llama 3 model card.](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md)
Yuntao Bai, Saurav Kadavath, Sandipan Kundu,
Amanda Askell, Jackson Kernion, Andy Jones,
Anna Chen, Anna Goldie, Azalia Mirhoseini,
Cameron McKinnon, et al. 2022. Constitutional
ai: Harmlessness from ai feedback. arXiv preprint
_arXiv:2212.08073._
[Bangalore Principles, 2002. 2002. The bangalore princi-](https://www.judicialintegritygroup.org/jig-principles)
[ples of judicial conduct. Available from the Judicial](https://www.judicialintegritygroup.org/jig-principles)
Integrity Group website.
Anna Bavaresco, Raffaella Bernardi, Leonardo Bertolazzi, Desmond Elliott, Raquel Fernández, Albert
Gatt, Esam Ghaleb, Mario Giulianelli, Michael
Hanna, Alexander Koller, André F. T. Martins,
Philipp Mondorf, Vera Neplenbroek, Sandro Pezzelle,
Barbara Plank, David Schlangen, Alessandro Suglia, Aditya K Surikuchi, Ece Takmaz, and Alberto
[Testoni. 2024. Llms instead of human judges? a](http://arxiv.org/abs/2406.18403)
[large scale empirical study across 20 nlp evaluation](http://arxiv.org/abs/2406.18403)
[tasks.](http://arxiv.org/abs/2406.18403)
[Leo Breiman. 2001. Random forests. Mach. Learn.,](https://doi.org/10.1023/A:1010933404324)
45(1):5–32.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
Ludwig Fahrmeir, Thomas Kneib, Stefan Lang, Brian
Marx, Ludwig Fahrmeir, Thomas Kneib, Stefan Lang,
and Brian Marx. 2013. Regression models. Springer.
Google Gemma Team, Thomas Mesnard, Cassidy
Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya
Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay
Kale, Juliette Love, Pouya Tafti, Léonard Hussenot,
Pier Giuseppe Sessa, Aakanksha Chowdhery, Adam
Roberts, Aditya Barua, Alex Botev, Alex CastroRos, Ambrose Slone, Amélie Héliou, Andrea Tacchetti, Anna Bulanova, Antonia Paterson, Beth
Tsai, Bobak Shahriari, Charline Le Lan, Christopher A. Choquette-Choo, Clément Crepy, Daniel Cer,
Daphne Ippolito, David Reid, Elena Buchatskaya,
Eric Ni, Eric Noland, Geng Yan, George Tucker,
George-Christian Muraru, Grigory Rozhdestvenskiy,
Henryk Michalewski, Ian Tenney, Ivan Grishchenko,
Jacob Austin, James Keeling, Jane Labanowski,
Jean-Baptiste Lespiau, Jeff Stanway, Jenny Brennan, Jeremy Chen, Johan Ferret, Justin Chiu, Justin
Mao-Jones, Katherine Lee, Kathy Yu, Katie Millican, Lars Lowe Sjoesund, Lisa Lee, Lucas Dixon,
Machel Reid, Maciej Mikuła, Mateo Wirth, Michael
Sharman, Nikolai Chinaev, Nithum Thain, Olivier
Bachem, Oscar Chang, Oscar Wahltinez, Paige Bailey, Paul Michel, Petko Yotov, Rahma Chaabouni,
Ramona Comanescu, Reena Jana, Rohan Anil, Ross
McIlroy, Ruibo Liu, Ryan Mullins, Samuel L Smith,
Sebastian Borgeaud, Sertan Girgin, Sholto Douglas,
Shree Pandya, Siamak Shakeri, Soham De, Ted Klimenko, Tom Hennigan, Vlad Feinberg, Wojciech
Stokowiec, Yu hui Chen, Zafarali Ahmed, Zhitao
Gong, Tris Warkentin, Ludovic Peran, Minh Giang,
Clément Farabet, Oriol Vinyals, Jeff Dean, Koray
Kavukcuoglu, Demis Hassabis, Zoubin Ghahramani,
-----
Douglas Eck, Joelle Barral, Fernando Pereira, Eli
Collins, Armand Joulin, Noah Fiedel, Evan Senter,
[Alek Andreev, and Kathleen Kenealy. 2024. Gemma:](http://arxiv.org/abs/2403.08295)
[Open models based on gemini research and technol-](http://arxiv.org/abs/2403.08295)
[ogy.](http://arxiv.org/abs/2403.08295)
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
Jacob Steinhardt. 2021. Measuring mathematical
problem solving with the math dataset. NeurIPS.
Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego
de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud,
Marie-Anne Lachaux, Pierre Stock, Teven Le Scao,
Thibaut Lavril, Thomas Wang, Timothée Lacroix,
[and William El Sayed. 2023. Mistral 7b.](http://arxiv.org/abs/2310.06825)
Albert Q. Jiang, Alexandre Sablayrolles, Antoine
Roux, Arthur Mensch, Blanche Savary, Chris
Bamford, Devendra Singh Chaplot, Diego de las
Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, MarieAnne Lachaux, Pierre Stock, Sandeep Subramanian,
Sophia Yang, Szymon Antoniak, Teven Le Scao,
Théophile Gervet, Thibaut Lavril, Thomas Wang,
[Timothée Lacroix, and William El Sayed. 2024. Mix-](http://arxiv.org/abs/2401.04088)
[tral of experts.](http://arxiv.org/abs/2401.04088)
Seungone Kim, Jamin Shin, Yejin Cho, Joel Jang,
Shayne Longpre, Hwaran Lee, Sangdoo Yun,
Seongjin Shin, Sungdong Kim, James Thorne, and
[Minjoon Seo. 2024a. Prometheus: Inducing fine-](https://openreview.net/forum?id=8euJaTveKw)
[grained evaluation capability in language models. In](https://openreview.net/forum?id=8euJaTveKw)
_The Twelfth International Conference on Learning_
_Representations._
Seungone Kim, Juyoung Suk, Ji Yong Cho, Shayne
Longpre, Chaeeun Kim, Dongkeun Yoon, Guijin Son,
Yejin Cho, Sheikh Shafayat, Jinheon Baek, Sue Hyun
Park, Hyeonbin Hwang, Jinkyung Jo, Hyowon Cho,
Haebin Shin, Seongyun Lee, Hanseok Oh, Noah Lee,
Namgyu Ho, Se June Joo, Miyoung Ko, Yoonjoo Lee,
Hyungjoo Chae, Jamin Shin, Joel Jang, Seonghyeon
Ye, Bill Yuchen Lin, Sean Welleck, Graham Neubig, Moontae Lee, Kyungjae Lee, and Minjoon Seo.
[2024b. The biggen bench: A principled benchmark](http://arxiv.org/abs/2406.05761)
[for fine-grained evaluation of language models with](http://arxiv.org/abs/2406.05761)
[language models.](http://arxiv.org/abs/2406.05761)
Ryan Koo, Minhwa Lee, Vipul Raheja, Jong Inn Park,
[Zae Myung Kim, and Dongyeop Kang. 2023. Bench-](http://arxiv.org/abs/2309.17012)
[marking cognitive biases in large language models as](http://arxiv.org/abs/2309.17012)
[evaluators.](http://arxiv.org/abs/2309.17012)
Richard Kraut. 2022. Aristotle’s Ethics. In Edward N.
Zalta and Uri Nodelman, editors, The Stanford En_cyclopedia of Philosophy, Fall 2022 edition. Meta-_
physics Research Lab, Stanford University.
Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying
Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E.
Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model
serving with pagedattention. In Proceedings of the
_ACM SIGOPS 29th Symposium on Operating Systems_
_Principles._
Junlong Li, Shichao Sun, Weizhe Yuan, Run-Ze Fan, hai
[zhao, and Pengfei Liu. 2024. Generative judge for](https://openreview.net/forum?id=gtkFw6sZGS)
[evaluating alignment. In The Twelfth International](https://openreview.net/forum?id=gtkFw6sZGS)
_Conference on Learning Representations._
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blun[som. 2017. Program induction by rationale genera-](https://doi.org/10.18653/v1/P17-1015)
[tion: Learning to solve and explain algebraic word](https://doi.org/10.18653/v1/P17-1015)
[problems. In Proceedings of the 55th Annual Meet-](https://doi.org/10.18653/v1/P17-1015)
_ing of the Association for Computational Linguistics_
_(Volume 1: Long Papers), pages 158–167, Vancouver,_
Canada. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis,
Luke Zettlemoyer, and Veselin Stoyanov. 2020.
[Ro{bert}a: A robustly optimized {bert} pretraining](https://openreview.net/forum?id=SyxS0T4tvS)
[approach.](https://openreview.net/forum?id=SyxS0T4tvS)
Yiqi Liu, Nafise Sadat Moosavi, and Chenghua Lin.
[2024. Llms as narcissistic evaluators: When ego](http://arxiv.org/abs/2311.09766)
[inflates evaluation scores.](http://arxiv.org/abs/2311.09766)
Adian Liusie, Potsawee Manakul, and Mark Gales. 2024.
[LLM comparative assessment: Zero-shot NLG eval-](https://aclanthology.org/2024.eacl-long.8)
[uation through pairwise comparisons using large lan-](https://aclanthology.org/2024.eacl-long.8)
[guage models. In Proceedings of the 18th Confer-](https://aclanthology.org/2024.eacl-long.8)
_ence of the European Chapter of the Association for_
_Computational Linguistics (Volume 1: Long Papers),_
pages 139–151, St. Julian’s, Malta. Association for
Computational Linguistics.
Juhyun Oh, Eunsu Kim, Inha Cha, and Alice Oh. 2024.
[The generative AI paradox in evaluation: “what it](https://aclanthology.org/2024.eacl-srw.19)
[can solve, it may not evaluate”. In Proceedings of the](https://aclanthology.org/2024.eacl-srw.19)
_18th Conference of the European Chapter of the As-_
_sociation for Computational Linguistics: Student Re-_
_search Workshop, pages 248–257, St. Julian’s, Malta._
Association for Computational Linguistics.
Arjun Panickssery, Samuel R. Bowman, and Shi Feng.
[2024. Llm evaluators recognize and favor their own](http://arxiv.org/abs/2404.13076)
[generations.](http://arxiv.org/abs/2404.13076)
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel,
B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer,
R. Weiss, V. Dubourg, J. Vanderplas, A. Passos,
D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in
Python. _Journal of Machine Learning Research,_
12:2825–2830.
[Barbara Plank. 2022. The “problem” of human label](https://doi.org/10.18653/v1/2022.emnlp-main.731)
[variation: On ground truth in data, modeling and](https://doi.org/10.18653/v1/2022.emnlp-main.731)
[evaluation. In Proceedings of the 2022 Conference](https://doi.org/10.18653/v1/2022.emnlp-main.731)
_on Empirical Methods in Natural Language Process-_
_ing, pages 10671–10682, Abu Dhabi, United Arab_
Emirates. Association for Computational Linguistics.
[Vyas Raina, Adian Liusie, and Mark Gales. 2024. Is](http://arxiv.org/abs/2402.14016)
[llm-as-a-judge robust? investigating universal adver-](http://arxiv.org/abs/2402.14016)
[sarial attacks on zero-shot llm assessment.](http://arxiv.org/abs/2402.14016)
-----
Skipper Seabold and Josef Perktold. 2010. statsmodels:
Econometric and statistical modeling with python. In
_9th Python in Science Conference._
Zhen Tan, Dawei Li, Song Wang, Alimohammad
Beigi, Bohan Jiang, Amrita Bhattacharjee, Mansooreh Karami, Jundong Li, Lu Cheng, and Huan
[Liu. 2024. Large language models for data annota-](http://arxiv.org/abs/2402.13446)
[tion: A survey.](http://arxiv.org/abs/2402.13446)
Yidong Wang, Zhuohao Yu, Wenjin Yao, Zhengran
Zeng, Linyi Yang, Cunxiang Wang, Hao Chen,
Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie,
[Wei Ye, Shikun Zhang, and Yue Zhang. 2024a. Pan-](https://openreview.net/forum?id=5Nn2BLV7SB)
[daLM: An automatic evaluation benchmark for LLM](https://openreview.net/forum?id=5Nn2BLV7SB)
[instruction tuning optimization. In The Twelfth Inter-](https://openreview.net/forum?id=5Nn2BLV7SB)
_national Conference on Learning Representations._
Zhilin Wang, Yi Dong, Olivier Delalleau, Jiaqi
Zeng, Gerald Shen, Daniel Egert, Jimmy J. Zhang,
Makesh Narsimhan Sreedhar, and Oleksii Kuchaiev.
[2024b. Helpsteer2: Open-source dataset for training](http://arxiv.org/abs/2406.08673)
[top-performing reward models.](http://arxiv.org/abs/2406.08673)
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le,
[and Denny Zhou. 2022. Chain of thought prompt-](https://openreview.net/forum?id=_VjQlMeSB_J)
[ing elicits reasoning in large language models. In](https://openreview.net/forum?id=_VjQlMeSB_J)
_Advances in Neural Information Processing Systems._
Peter West, Ximing Lu, Nouha Dziri, Faeze Brahman,
Linjie Li, Jena D. Hwang, Liwei Jiang, Jillian Fisher,
Abhilasha Ravichander, Khyathi Chandu, Benjamin
Newman, Pang Wei Koh, Allyson Ettinger, and Yejin
[Choi. 2024. The generative AI paradox: “what it can](https://openreview.net/forum?id=CF8H8MS5P8)
[create, it may not understand”. In The Twelfth Inter-](https://openreview.net/forum?id=CF8H8MS5P8)
_national Conference on Learning Representations._
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien
Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen,
Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu,
Teven Le Scao, Sylvain Gugger, Mariama Drame,
[Quentin Lhoest, and Alexander Rush. 2020. Trans-](https://doi.org/10.18653/v1/2020.emnlp-demos.6)
[formers: State-of-the-art natural language processing.](https://doi.org/10.18653/v1/2020.emnlp-demos.6)
In Proceedings of the 2020 Conference on Empirical
_Methods in Natural Language Processing: System_
_Demonstrations, pages 38–45, Online. Association_
for Computational Linguistics.
Tianhao Wu, Weizhe Yuan, Olga Golovneva, Jing Xu,
Yuandong Tian, Jiantao Jiao, Jason Weston, and Sain[bayar Sukhbaatar. 2024. Meta-rewarding language](http://arxiv.org/abs/2407.19594)
[models: Self-improving alignment with llm-as-a-](http://arxiv.org/abs/2407.19594)
[meta-judge.](http://arxiv.org/abs/2407.19594)
Wenda Xu, Guanglei Zhu, Xuandong Zhao, Liangming Pan, Lei Li, and William Yang Wang. 2024.
[Pride and prejudice: Llm amplifies self-bias in self-](http://arxiv.org/abs/2402.11436)
[refinement.](http://arxiv.org/abs/2402.11436)
An Yang, Baosong Yang, Binyuan Hui, Bo Zheng,
Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan
Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian
Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, Jin
Xu, Jingren Zhou, Jinze Bai, Jinzheng He, Junyang
Lin, Kai Dang, Keming Lu, Keqin Chen, Kexin Yang,
Mei Li, Mingfeng Xue, Na Ni, Pei Zhang, Peng
Wang, Ru Peng, Rui Men, Ruize Gao, Runji Lin,
Shijie Wang, Shuai Bai, Sinan Tan, Tianhang Zhu,
Tianhao Li, Tianyu Liu, Wenbin Ge, Xiaodong Deng,
Xiaohuan Zhou, Xingzhang Ren, Xinyu Zhang, Xipin
Wei, Xuancheng Ren, Yang Fan, Yang Yao, Yichang
Zhang, Yu Wan, Yunfei Chu, Yuqiong Liu, Zeyu
Cui, Zhenru Zhang, and Zhihao Fan. 2024. Qwen2
technical report. arXiv preprint arXiv:2407.10671.
Alex Young, Bei Chen, Chao Li, Chengen Huang,
Ge Zhang, Guanwei Zhang, Heng Li, Jiangcheng
Zhu, Jianqun Chen, Jing Chang, Kaidong Yu, Peng
Liu, Qiang Liu, Shawn Yue, Senbin Yang, Shiming
Yang, Tao Yu, Wen Xie, Wenhao Huang, Xiaohui
Hu, Xiaoyi Ren, Xinyao Niu, Pengcheng Nie, Yuchi
Xu, Yudong Liu, Yue Wang, Yuxuan Cai, Zhenyu Gu,
[Zhiyuan Liu, and Zonghong Dai. 2024. Yi: Open](http://arxiv.org/abs/2403.04652)
[foundation models by 01.ai.](http://arxiv.org/abs/2403.04652)
Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho,
Xian Li, Sainbayar Sukhbaatar, Jing Xu, and Jason
[Weston. 2024. Self-rewarding language models.](http://arxiv.org/abs/2401.10020)
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan
Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin,
Zhuohan Li, Dacheng Li, Eric Xing, Hao Zhang,
[Joseph E. Gonzalez, and Ion Stoica. 2023. Judging](https://openreview.net/forum?id=uccHPGDlao)
[LLM-as-a-judge with MT-bench and chatbot arena.](https://openreview.net/forum?id=uccHPGDlao)
In Thirty-seventh Conference on Neural Information
_Processing Systems Datasets and Benchmarks Track._
-----
Avg. Avg.
# questions
# question characters # answer characters
AQUA-RAT 254 239.1 203.1
MATH 1516 216.5 643.9
GSM8K 1319 239.9 292.9
Table 5: An overview of dataset size and text length.
**A** **Experimental Setup**
We provide further details on the general setup described in Section 3. Specifically, we include statistics and examples of the datasets, additional information on the models used, and the exact prompts
employed in this study.
**A.1** **Datasets**
Additional information about the datasets is given
in Table 5, which presents an overview of the
dataset statistics. Note that for the MATH dataset,
we only include the most challenging questions,
called levels 4 and 5, in the dataset. Notably, it
has ground truth answer sequences that are, on
average, almost three times longer than those in
other datasets.
In Table 6, we provide examples of questions and their corresponding answers from the
ground truth. Note that these examples were used
for few-shot prompting.
**A.2** **Models**
We execute all models using the VLLM software
for LLM serving (Kwon et al., 2023). The weights
for all models are accessible through Huggingface
Transformers (Wolf et al., 2020). Table 7 includes
hyperlinks to each model for easy reference.
**A.3** **Prompts**
We used two different prompts within this project.
The prompt shown in Figure 7 is used for the candidate solutions for all datasets. Examples of the
few-shots are in Table 6. The prompt for the judges
is given in Figure 8. Note that we run experiments
for both orders of the answers of the models MA
and MB.
**A.4** **Infrastructure**
The experiments were run on NVIDIA A100 and
NVIDIA H100. The judgments used in Section 4
and Section 5 took around 5 days equivalents on
4 A100 40GB. Using 2 H100 90GB and 4 A100
40GB it took around 2.5 days. For the perturbation
**Initial Prompt**
{
"role": "user",
"content": "You are a reasoning assistant.
Always answer exactly in the same format.
Use ’####’ to separate the final answer
(without additional comments) from the
reasoning.
{{shot 1 question}}"
},
{
"role": "assistant",
"content": "{{shot 1 question}}"
}
...
,
{
"role": "assistant",
"content": "{{shot 4 answer}}"
}, {
"role": "user",
"content": "{{question}}"
}
}
Figure 7: Prompt used to generate initial solutions for
all datasets. It includes few-shots and the question of
the current sample.
-----
|Col1|Question|Answer|
|---|---|---|
|Col1|Question|Answer|
|---|---|---|
|AQUA-RAT|Two friends plan to walk along a 43-km trail, starting at opposite ends of the trail at the same time. If Friend P’s rate is 15% faster than Friend Q’s, how many kilometers will Friend P have walked when they pass each other? Options: A)21 B)21.5 C)22 D)22.5 E)23|If Q complete x kilometers, then P com- pletes 1.15x kilometers. x + 1.15x = 43 2.15x=43 x = 43/2.15 = 20 Then P will have have walked 1.15*20=23 km. The answer is E. #### E|
|GSM8K|Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Na- talia sell altogether in April and May?|Natalia sold 48/2 = «48/2=24»24 clips in May. Natalia sold 48+24 = «48+24=72»72 clips altogether in April and May. #### 72|
|MATH|Mr. Madoff invests 1000 dollars in a fund that compounds annually at a con- stant interest rate. After three years, his investment has grown to 1225 dollars. What is the annual interest rate, as a percentage? (Round your answer to the nearest integer.)|Let r be the annual interest rate. Then after three years, Mr. Mad- off’s investment is 1000 1 + r 3, · 100 so 1000 1 + r 3 = 1225. Then · 100 1 + r 3 = 1.225,so [1 + r = 100 100 √ 3 1.225 = 1.069987 . . ., which means r = 7, to the nearest integer. #### 7.0|
Table 6: Example of ground truth answers used for few-shot prompting.
of all judges across various model comparisons
[huggingface.co/Qwen/Qwen2-72B](https://huggingface.co/Qwen/Qwen2-72B) in Figure 9. As shown in Table 1, only the large
[huggingface.co/01-ai/Yi-1.5-34B-Chat-16K](https://huggingface.co/01-ai/Yi-1.5-34B-Chat-16K) models consistently produce judgments that devi
[huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) ate consistently from random chance. These results
[huggingface.co/google/gemma-1.1-7b-it](https://huggingface.co/google/gemma-1.1-7b-it) in Figure 9 support the superior performance of
|Model|URL|
|---|---|
|Qwen2 72B Llama 3 70B Yi 1.5 34B Mixtral 8x7B Llama 3 8B Gemma 1.1 7B Mistral 7B v0.3 Mistral 7B v0.1|huggingface.co/Qwen/Qwen2-72B huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct huggingface.co/01-ai/Yi-1.5-34B-Chat-16K huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1 huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct huggingface.co/google/gemma-1.1-7b-it huggingface.co/mistralai/Mistral-7B-Instruct-v0.3 huggingface.co/mistralai/Mistral-7B-Instruct-v0.1|
**C** **Additional subset experiments**
This section provides additional information for
chapter 5.
**C.1** **Example Subset performance**
To better understand the correlation observed in
Figure 5, we provide examples of these subsets,
which can be seen in Table 10. These examples
include the following details: the judge, the compared models, the dataset, the performance of each
model on a dataset (denoted by P (A|D)), the
judgment performance on the subset (denoted by
_P_ (∆J = T _|A = X, B = Y, D)), and the relative_
size of the subset (denoted by P (A = X, B =
_Y |D)). We provide the five subsets with the high-_
est performance, the five subsets with the five lowest performance, and five random subsets.
**C.2** **Performance by agreement**
We provide an extension of the results in the main
paper. We put all samples into bins of how many
Table 7: Used models and corresponding hyperlinks.
experiments in Section 7, it took around the same
amount of time.
**B** **General performance**
This section provides additional information related to Section 4. Specifically, we present the task
performance of all models across all datasets, as
well as the judging performance of all models when
used as judges.
**B.1** **Task Performance**
In various contexts in this work, the task performance of the individual models is essential. Therefore, we provide the accuracy of all models and all
datasets in Table 9.
**B.2** **Judging performance per model pair**
We conduct experiments with all eight models serving as judges. We present the performance metrics
-----
|Judge|model MA|model MB|Label|Prompt|Response|Correct|
|---|---|---|---|---|---|---|
|Qwen 2 72B|Qwen 2 72B|Llama 3 8B|20.00|Question: Charlie wants to sell beeswax candles. For every pound of beeswax, he can make 10 tapered candles. One pound of beeswax and the wicks cost $10.00 in supplies. If he sells each candle for $2.00 each, what is his net profit if he makes and sells 20 candles? Answer A: He needs 2 pounds of beeswax to make 20 candles because 20 / 10 = «20/10=2»2 It will cost $20.00 in supplies for the beeswax and wicks because 2 x 10 = «2*10=20»20 His total revenue from selling the candles will be $40.00 because 20 x 2 = «20*2=40»40 His net profit will be $20.00 because 40 - 20 = «40-20=20»20 #### 20 ————– Answer B: It takes 1 pound of beeswax and the wicks to make 10 candles, so it takes 1 pound of beeswax and the wicks to make 20 candles. It costs 10 dollars for 1 pound of beeswax and the wicks, so it costs 10 dollars for 20 candles. Charlie sells each candle for 2 dollars, so he makes 20 * 2 = 40 dollars by selling the 20 candles. So, his net profit is 40 - 10 = 30 dollars. #### 30 ————– Compare both answers in detail and choose the an- swer which correctly answers the question. Conclude with a JSON in Markdown format indicat- ing your choice between answer A or B: “‘json { "answer": "B" or "A" } “‘|Answer A and Answer B differ in their calculation of the cost of supplies for making 20 candles. Answer A correctly calculates that Charlie needs 2 pounds of beeswax to make 20 candles and that the cost for the beeswax and wicks for 2 pounds is $20.00. An- swer B incorrectly assumes that the cost for making 20 candles is the same as for 10 candles, which is $10.00. Therefore, Answer A correctly calculates the net profit as $20.00, while Answer B incorrectly calculates it as $30.00. The correct answer is A. “‘json { "answer": "A" } “‘|True|
|Llama 3 70B|Qwen 2 72B|Llama 3 8B|D|Question: There is well of depth 30m and frog is at bottom of the well. He jumps 3m up one day and falls back 2m down the same day. How many days will it take for the frog to come out of the well? Options: A)25 days B)26 days C)27 days D)28 days E)29 days Answer A: The frog is initially at the bottom of the well, which is 30m deep. On the first day, the frog jumps 3m up, so it is 33m from the bottom. Then, it falls back 2m, so it is 31m from the bottom. Each day, the frog jumps 3m up and falls 2m down, making a net gain of 1m. To get out of the 30m deep well, it would take 30 days for the frog to jump and fall 30 times, covering a total distance of 30m. #### A ————– Answer B: If he jumps 3m up and slips 2m down in a day, he is 1m up in a day in 27 days he would have reached 27m On the 28th day, he jumps up 3m and come out of the well. Answer: D #### D ————– Compare both answers in detail and choose the an- swer which correctly answers the question. Conclude with a JSON in Markdown format indicat- ing your choice between answer A or B: “‘json { "answer": "B" or "A" } “‘|Let’s break down both answers: **Answer A:** The explanation is correct that the frog makes a net gain of 1m each day. However, the conclusion that it would take 30 days to get out of the 30m deep well is incorrect. The frog would reach the top of the well when it has covered a total distance of 30m, but it would not take 30 days. **Answer B:** This explanation is correct. The frog makes a net gain of 1m each day, so in 27 days, it would have covered 27m. On the 28th day, it would jump up 3m and come out of the well. Therefore, the correct answer is **D) 28 days**. Here is the conclusion in JSON format: “‘json { "answer": "B" } “‘|True|
Table 8: Example of ground truth answers used for few-shot prompting.
-----
different answers were given by the eight models. For the sake of brevity, we average across all
judges. In Figure 10 we show the same graphs
for each judge individually. Again, we give the
performance on the overall dataset and on the subset where there exists exactly one correct and one
incorrect solution. All interpretations and conclusions overlap with the main part of the paper.
**D** **Prediction of Individual Judgements**
**Random Forest.** We utilize TF-IDF representations with English stop word removal for the Random Forest model. We further remove all digits
from the text. We set the maximum number of features to 10,000 and consider n-grams ranging from
unigrams to bigrams (N-gram range: 1-2). The
model uses 500 trees (estimators). For training, we
use the Scikit-learn (Pedregosa et al., 2011) library.
The running time was negligible.
**RoBERTa.** For the RoBERTa model (Liu et al.,
2020), we use a batch size of 64 and a learning rate
of 2e-5. The weight decay is set to 1e-3, and the
model is trained for 8 epochs. The final model is
selected based on the best validation performance.
The model is trained using the HuggingFace Transformers library (Wolf et al., 2020). The total running time was about twelve hours on a single H100
90GB.
**E** **Statistical Methodology**
We describe the statistical background for the tests
applied in Section 6. All predictions and statistical tests in Section 6 were performed using the
statsmodels library (Seabold and Perktold, 2010).
**E.1** **Coefficient of Determination**
The coefficient of determination, R[2], for evaluation
of linear regression models (Fahrmeir et al., 2013)
is defined as follows:
_ni=1[(ˆ]yi_ _y¯)[2]_
_R[2]_ = _n_ _−_
_i=1[(][y][i][ −]_ _y[¯])[2]_
P
_R[2]_ measures the share of the variance inP _Y ex-_
plained by its covariation with the features X included in the model by dividing the variation of
the predicted values ˆyi by the variation of the true
target values yi. If the features X have high explanatory power for Y, the ˆyi will be close to
the yi and R[2] will be close to 1, while in the
extreme case of no correlation between X and
**Judge Prompt**
Question:
{{question}}
Answer A:
{{answer A}}
————–
Answer B:
{{answer B}}
————–
Compare both answers in detail and
choose the answer which correctly answers
the question.
Conclude with a JSON in Markdown
format indicating your choice between
answer A or B:
“‘json
{
"answer": "B" or "A"
}
“‘
Figure 8: Prompt used for judgements. The full text
above is wrapped in the user role, as all models support
this role. No additional system message is used.
AQUA-RAT GSM8K MATH
Qwen 2 72B 76.38 92.04 51.19
Llama 3 70B 73.62 91.05 34.37
Yi 1.5 34B 64.96 78.47 27.04
Mixtral 8x7B 47.24 61.18 13.79
Llama 3 8B 51.18 73.01 15.04
Gemma 1.1 7B 42.91 50.72 12.60
Mistral 7B v0.3 38.19 42.76 6.13
Mistral 7B v0.1 21.65 26.08 3.10
Table 9: Task performance of all models using the
prompt in Figure 7.
-----
(a) Qwen 2 70B (b) LLama3 70B (c) LLama 3 8B (d) Gemma 1.1 7B
(e) Yi 1.5 34B (f) Mixtral 8x7B (g) Mistral 7B v0.3 (h) Mistral 7B v0.1
Figure 9: Evaluation of final task performance P (∆J = T _|A, B) averaged over all datasets for model pairs_
(MA, MB) for the judges (a) - (h).
Judge model A model B dataset X Y _P_ (A|D) _P_ (B|D) _P_ (∆J = T _|A = X, B = Y, D)_ _P_ (A = X, B = Y |D)
Qwen 2 72B Qwen 2 72B Mistral 7B v0.1 MATH True False 51.2 3.1 99.1 50.2
Qwen 2 72B Yi 1.5 34B Mistral 7B v0.1 MATH True False 27.0 3.1 98.4 27.9
Qwen 2 72B Llama 3 8B Mistral 7B v0.1 MATH True False 15.0 3.1 98.3 16.7
Qwen 2 72B Llama 3 70B Mistral 7B v0.1 MATH True False 34.4 3.1 98.3 35.4
Qwen 2 72B Mixtral 8x7B Mistral 7B v0.1 MATH True False 13.8 3.1 98.2 15.4
Mixtral 8x7B Mixtral 8x7B Mixtral 8x7B GSM8K False True 61.2 61.2 65.8 14.7
Yi 1.5 34B Llama 3 8B Gemma 1.1 7B GSM8K False True 73.0 50.7 64.9 8.0
Yi 1.5 34B Llama 3 70B Mistral 7B v0.1 AQUA-RAT True False 73.6 21.7 89.1 60.8
Qwen 2 72B Gemma 1.1 7B Gemma 1.1 7B GSM8K False True 50.7 50.7 90.0 12.9
Yi 1.5 34B Yi 1.5 34B Mistral 7B v0.1 GSM8K False True 78.5 26.1 52.5 2.6
Qwen 2 72B Llama 3 70B Mistral 7B v0.1 MATH False True 34.4 3.1 13.2 2.2
Qwen 2 72B Yi 1.5 34B Mistral 7B v0.1 AQUA-RAT False True 65.0 21.7 10.0 4.7
Yi 1.5 34B Qwen 2 72B Mistral 7B v0.1 MATH False True 51.2 3.1 6.5 1.4
Llama 3 70B Qwen 2 72B Mistral 7B v0.1 MATH False True 51.2 3.1 6.2 1.3
Qwen 2 72B Qwen 2 72B Mistral 7B v0.1 MATH False True 51.2 3.1 6.1 1.4
Table 10: Examples of comparisons; and performance; problem:
-----
(a) Qwen 2 72B: Using all comparisons. (b) Comparison with a correct and incorrect answer
(c) Llama 3 70B: Using all comparisons. (d) Comparison with a correct and incorrect answer
(e) Yi 1.5 34B: Using all comparisons. (f) Comparison with a correct and incorrect answer
(g) Mixtral 8x7B: Using all comparisons. (h) Comparison with a correct and incorrect answer
Figure 10: Judge performance by agreement bucket, e.g. bucket 3 (X-axis) means that all eight models gave together
three different answers.
-----
_Y the arithmetic mean is the best estimate (i.e.,_
_yˆi = ¯y ∀_ _i = 1, . . ., n) resulting in R[2]_ = 0.
**E.2** **Overall-F-Test**
The Overall-F-Test is built upon R[2] and tests
whether the overall model is of any significant
value for explaining the variation of the target variable. The F-distributed test statistic is calculated
as
_R[2]_
_,_
1 − _R[2][ ·][ n][ −]_ _p[p][ −]_ [1]
where R[2] is the coefficient of determination, n is
the number of observations, and p is the number of
covariates included in the model (i.e., the number
of estimated coefficients excluding the intercept).
The hypotheses that can be tested this way are
_H0 : β1 = β2 = · · · = βp = 0_
_H1 : βj ̸= 0 for at least one j ∈{1, . . ., p}._
vs.
So from a rejection of H0, it can be concluded
that at least one of the included features exhibits
explanatory power for the variation of the target
variable.
**E.3** **Multiple Testing**
Since we conduct multiple statistical tests within
the scope of one research project, it is important
to consider multiple testing as a potential problem
resulting in false positive findings. The p-values
from our tests, however, also satisfy a significance
level resulting from a Bonferroni Correction of the
typical significance level of 5%.
**F** **Peturbation of Results**
In Section 7 we performed two experiments, where
we 1) swap the results of the individual answers
and 2) mask all numbers using an ’X’ to understand
whether the model only focuses on the writing style.
You can find examples, including the prompt and
the response of the judge for experiment 1) in Table
11 and examples for experiment 2) in Table 12
-----
|Judge|model MA|model MB|Label|Prompt|Response|Correct|
|---|---|---|---|---|---|---|
|Qwen 2 72B|Qwen 2 72B|Llama 3 8B|163.00|Question: Candice put 80 post-it notes in her purse before she headed out to her job at the coffee shop. On her way, she stopped off at the store and pur- chased a package of Post-it notes; At work, she placed a single Post-it note on each of 220 different cups of coffee. If she had 23 post-it notes remaining overall, how many Post-it notes were in the package that she purchased? Answer A: Candice had 80 post-it notes initially. She used 80 + 220 = «80+220=300»300 of them to note the cups of coffee. She had 23 post-it notes remaining overall. So, she used 300 - 23 = «300- 23=277»277 of the 300 post-it notes initially. The remaining post-it notes are the difference between the total and the initial count, which is 300 - 80 = «300-80=220»220. She must have purchased 220 - 23 = «220-23=163»197 post-it notes. #### 163 ————– Answer B: Candice started with 80 Post-it notes and placed one on each of the 220 cups of coffee, so she used 220 notes. If she had 23 notes remaining, then she must have had 220 + 23 = 243 notes in total. Since she originally had 80 notes, the package she purchased must have contained 243 - 80 = 197 notes. Therefore, the package contained 197 Post-it notes. #### 197 ————– Compare both answers in detail and choose the an- swer which correctly answers the question. Conclude with a JSON in Markdown format indicat- ing your choice between answer A or B: “‘json { "answer": "B" or "A" } “‘|Answer A is incorrect because it incorrectly calcu- lates the number of post-it notes Candice used and purchased. The correct answer is B because it ac- curately calculates the total number of post-it notes Candice had after using them and then subtracts the initial number of post-it notes she had to find out how many were in the package she purchased. “‘json { "answer": "B" } “‘|True|
|Llama 3 70B|Qwen 2 72B|Llama 3 8B|D|Question: In a chocolate store, all chocolates are either vanilla or cocoa flavored only. 10% of the chocolates are cocoa flavored, 90% of the rest are squashed. What percentage of the chocolates are both vanilla flavored and not squashed? Options: A)1% B)2% C)5% D)9% E)10% Answer A: If 10% of the chocolates are cocoa fla- vored, then the remaining 90% are vanilla flavored. 90% of the vanilla chocolates are squashed. This means that 10% of the vanilla chocolates are not squashed. The question asks for the percentage of chocolates that are both vanilla falvored and not squashed. Since 10% of the vanilla chocolates are not squashed, the answer is 10%. #### D ————– Answer B: 90% of the chocolates are not cocoa fla- vored (they are vanilla) and 10% of these (90% *10% = 9%) are not squashed. So 9% of the chocolates are both vanilla falvored and not squashed. The answer is E. #### E ————– Compare both answers in detail and choose the an- swer which correctly answers the question. Conclude with a JSON in Markdown format indicat- ing your choice between answer A or B: “‘json { "answer": "B" or "A" } “‘|Let’s break down both answers: **Answer A:** If 10% of the chocolates are co- coa flavored, then the remaining 90% are vanilla flavored. 90% of the vanilla chocolates are squashed. This means that 10% of the vanilla chocolates are not squashed. The question asks for the percentage of chocolates that are both vanilla flavored and not squashed. Since 10% of the vanilla chocolates are not squashed, the answer is 10%. **Issue with Answer A:** The calculation "10% of the vanilla chocolates are not squashed" is incorrect. 90% of the vanilla chocolates are squashed, which means 10% of 90% (not 10% of the total) are not squashed. **Answer B:** 90% of the chocolates are not cocoa falvored (they are vanilla) and 10% of these (90% * 10% = 9%) are not squashed. So 9% of the chocolates are both vanilla flavored and not squashed. **Correctness of Answer B:** This answer correctly calculates the percentage of chocolates that are both vanilla falvored and not squashed. Therefore, the correct answer is **B**. Here is the conclusion in JSON format: “‘json { "answer": "B" } “‘|True|
Table 11: Examples of comparisons where the final results were swapped.
-----
|Judge|model MA|model MB|Label|Prompt|Response|Correct|
|---|---|---|---|---|---|---|
|Qwen 2 72B|Qwen 2 72B|Llama 3 8B|163.00|Question: Candice put 80 post-it notes in her purse before she headed out to her job at the coffee shop. On her way, she stopped off at the store and pur- chased a package of Post-it notes; At work, she placed a single Post-it note on each of 220 different cups of coffee. If she had 23 post-it notes remaining overall, how many Post-it notes were in the package that she purchased? Answer A: Candice had X post-it notes initially. She used X + X = «X+X=X»X of them to note the cups of coffee. She had X post-it notes remaining overall. So, she used X - X = «X-X=X»X of the X post-it notes initially. The remaining post-it notes are the dif- ference between the total and the initial count, which is X - X = «X-X=X»X. She must have purchased X - X = «X-X=X»X post-it notes. #### X ————– Answer B: Candice started with X Post-it notes and placed one on each of the X cups of coffee, so she used X notes. If she had X notes remaining, then she must have had X + X = X notes in total. Since she originally had X notes, the package she purchased must have contained X - X = X notes. Therefore, the package contained X Post-it notes. #### X ————– Compare both answers in detail and choose the an- swer which correctly answers the question. Only analyze the reasoning! Therefore we exchanged all numbers with ’X’ so you can focus on the reasoning. Conclude with a JSON in Markdown format indicat- ing your choice between answer A or B: “‘json { "answer": "B" or "A" } “‘|Answer A’s reasoning is flawed because it uses the variable X in a confusing and incorrect way, equating X to X-X, which results in 0, and does not make sense in the context of the problem. Answer B, while also using X in a confusing way, at least attempts to set up an equation that could lead to a solution if the Xs were replaced with actual numbers. However, neither answer provides a clear, correct solution to the problem as stated. A correct approach would be to set up an equation based on the information given. Candice started with 80 post-it notes, used some, bought more, and ended up with 23 post-it notes. She used 220 post-it notes at work. Let’s denote the number of post-it notes in the package she bought as Y. The equation based on the information given would be: Initial post-it notes + Post-it notes bought − Post-it notes used = Post-it notes remaining 80 + Y −220 = 23 Solving for Y would give us the number of post-it notes in the package she bought. Since neither answer A nor B provides a correct solu- tion, and given the instruction to choose between A or B, we must conclude that neither is correct. How- ever, since a choice must be made between A and B, and B at least attempts to set up an equation, albeit incorrectly, we will choose B as the less incorrect option. “‘json { "answer": "B" } “‘|True|
|Llama 3 70B|Qwen 2 72B|Llama 3 8B|D|Question: There is well of depth 30m and frog is at bottom of the well. He jumps 3m up one day and falls back 2m down the same day. How many days will it take for the frog to come out of the well? Options: A)25 days B)26 days C)27 days D)28 days E)29 days Answer A: The frog is initially at the bottom of the well, which is Xm deep. On the first day, the frog jumps Xm up, so it is Xm from the bottom. Then, it falls back Xm, so it is Xm from the bottom. Each day, the frog jumps Xm up and falls Xm down, making a net gain of Xm. To get out of the Xm deep well, it would take X days for the frog to jump and fall X times, covering a total distance of Xm. #### A ————– Answer B: If he jumps Xm up and slips Xm down in a day, he is Xm up in a day in X days he would have reached Xm On the Xth day, he jumps up Xm and come out of the well. Answer: D #### D ————– Compare both answers in detail and choose the an- swer which correctly answers the question. Only analyze the reasoning! Therefore we exchanged all numbers with ’X’ so you can focus on the reasoning. Conclude with a JSON in Markdown format indicat- ing your choice between answer A or B: “‘json { "answer": "B" or "A" } “‘|Let’s analyze the reasoning of both answers: **Answer A:** The answer assumes that the frog makes a net gain of Xm (3m - 2m = 1m) each day. However, it doesn’t consider the fact that the frog needs to reach the top of the well to come out. The answer simply states that it would take X days to cover a total distance of Xm, without considering the remaining distance to the top of the well. **Answer B:** The answer correctly assumes that the frog makes a net gain of Xm (3m - 2m = 1m) each day. It then concludes that in X days, the frog would have reached Xm, and on the Xth day, it would jump up Xm and come out of the well. This answer considers the remaining distance to the top of the well and correctly concludes that the frog would come out on the Xth day. Based on the analysis, I choose: “‘json { "answer": "B" } “‘|True|
Table 12: Example of comparisons where numbers were masked using an ’X’.
-----
| [
"Andreas, Stephan",
"Dawei, Zhu",
"Matthias, Aßenmacher",
"Xiaoyu, Shen",
"Benjamin, Roth"
] | 2024-09-06T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.04168 | https://arxiv.org/abs/2409.04168 | https://www.semanticscholar.org/paper/671d010882165b724eca0b79298c102584cec546 |
From Textbooks to Knowledge: A Case Study in Harvesting Axiomatic Knowledge from Textbooks to Solve Geometry Problems | Textbooks are rich sources of information. Harvesting structured knowledge from textbooks is a key challenge in many educational applications. As a case study, we present an approach for harvesting structured axiomatic knowledge from math textbooks. Our approach uses rich contextual and typographical features extracted from raw textbooks. It leverages the redundancy and shared ordering across multiple textbooks to further refine the harvested axioms. These axioms are then parsed into rules that are used to improve the state-of-the-art in solving geometry problems. | null | null | [
"Eric, Xing",
"Kumar, Dubey",
"Mrinmaya, Sachan"
] | 2017-01-01T00:00:00 | EMNLP 2017 | true | 0 | 5 | null | https://www.semanticscholar.org/paper/81f466a535cdec4957989999f9ca381bc4fe14e9 | null | https://www.semanticscholar.org/paper/81f466a535cdec4957989999f9ca381bc4fe14e9 |
From proofs to theorems | N/A | null | null | [
"Chvalovsky, Karel"
] | 2020-01-01T00:00:00 | null | false | 0 | 0 | null | http://aitp-conference.org/2020/abstract/paper_21.pdf | null | null |
GFLean: An Autoformalisation Framework for Lean via GF | We present an autoformalisation framework for the Lean theorem prover, called GFLean. GFLean uses a high-level grammar writing tool called Grammatical Framework (GF) for parsing and linearisation. GFLean is implemented in Haskell. We explain the functionalities of GFLean, its inner working and discuss its limitations. We also discuss how we can use neural network based translation programs and rule based translation programs together complimenting each other to build robust autoformalisation frameworks. | An autoformalisation framework for the Lean theorem prover, called GFLean, which uses a high-level grammar writing tool called Grammatical Framework for parsing and linearisation and is implemented in Haskell. | ## GFLEAN: AN AUTOFORMALISATION FRAMEWORK FOR LEAN
### VIA GF
**Shashank Pathak**
Department of Computer Science
The University of Manchester
Manchester, UK
```
[email protected]
```
**ABSTRACT**
We present an autoformalisation framework for the Lean theorem prover, called GFLean. GFLean
uses a high-level grammar writing tool called Grammatical Framework (GF) for parsing and linearisation. GFLean is implemented in Haskell. We explain the functionalities of GFLean, its inner
working and discuss its limitations. We also discuss how we can use neural network based translation programs and rule based translation programs together complimenting each other to build robust
autoformalisation frameworks.
**_Keywords Autoformalisation · Grammatical Framework · Lean_**
**Introduction**
Formalisation refers to the task of converting a text from a natural language to a formal language. Formalisation
of mathematical text is beneficial for two reasons. Firstly, formalisation is done in an underlying logical system
and thus is accompanied by proof checking. Secondly, formalised mathematical text can be stored and manipulated
by computers. Mechanizing the process of formalisation is called autoformalisation. In this article, we present our
ongoing work on creating an autoformalisation framework, called GFLean. GFLean uses a high-level grammar writing
tool called Grammatical Framework (GF) [16], and converts simple statements from the language of mathematics to
input for the Lean theorem prover [15].
By the language of mathematics, we mean the text found in pure mathematics textbooks and research articles [11]. In
his PhD thesis, M. Ganesalingam [11] distinguishes between two classes of sentences found in the language of mathematics, called the formal mode and the informal mode. The sentences in the formal mode have a strict mathematical
content and can be formalised, whereas those in the informal mode have the purpose of guiding the reader’s intuition
and giving a commentary on the text. For example, the sentence “The Lagrange theorem is important and will be
useful later” is in the informal mode. For GFLean, we only work with sentences in the formal mode.
GFLean is implemented in Haskell and uses GF for parsing the input and linearizing the output. An example of a
GFLean input is
```
Ex. Assume x is a rational number. Assume x is equal to 2 + 2 * 2. Then x is greater
than 3.
```
The corresponding output is
```
example (x : Q) (h39 : x = (2 + (2 * 2))) : x > 3 := sorry
```
Currently, GFLean only formalises statements but not proofs. The code for GFLean and some working examples can
be found on the GitHub repository of the project [1].
We call the input language accepted by GFLean Simplified ForTheL. Simplified ForTheL is based on the controlled
natural language ForTheL [22]. The ForTheL syntax has already been implemented in GF [19] and our implementation
-----
of the Simplified ForTheL syntax in GF is based upon the ForTheL syntax implementation. However, the semantics
of ForTheL expressions [22] and Simplified ForTheL expressions differ. ForTheL expressions are converted to firstorder formulas, whereas Simplified ForTheL expressions are converted to Lean expressions, which are type-theoretic
in nature. Thus, we give a type-theoretic semantics to Simplified ForTheL expressions. Specifically, our contributions
are the following.
1. Implementing algorithms in Haskell to translate Simplified ForTheL expressions to Lean expressions via
manipulating abstract syntax trees (ASTs).
2. Implementing a grammar for Lean expressions in GF, so that the ASTs obtained from the previous step can
be linearized as Lean expressions.
To test how GFLean performs on statements from a standard textbook, we used GFLean to formalise statements from
Chapter 3 of the introduction to proofs textbook Mathematical Proofs by G. Chartrand, A. D. Polimeni, and P. Zhang
[7]. Out of the 62 statements contained in the chapter, GFLean can parse and formalise 42 statements with a minor
rephrasing of the input. The 42 statements, the corresponding GFLean inputs, and the GFLean outputs are given
in Appendix A. To parse the remaining 20 statements, the lexicon needs to be expanded and GFLean needs to have
support for more linguistic phenomena like post-fix quantification of symbols, donkey anaphora, etc.
This article is structured in the following manner. Section 2 gives a brief overview of the work done in autoformalisation of mathematical text. It also contains a brief introduction to the Lean theorem prover and a detailed explanation
of how a GF program works. Section 3 gives a thorough description of the workings of GFLean. Section 4 outlines
the limitations of GFLean. Section 5 discusses how rule-based systems and neural translation systems can be used together to create robust autoformalisation tools. Section 6 concludes the article and mentions the directions in which we
plan to extend GFLean. Appendix A contains the input and output for the 42 statements which GFLean can formalise
from the textbook mentioned above. Appendix B contains a formal grammar for Simplified ForTheL.
**2** **Related Work and Preliminaries**
Significant work has been done in writing and checking proofs written in a language that looks like natural language. C.
Zinn [26] used an extension of discourse representation theory [14] called proof representation structures to represent
mathematical discourse. The corresponding system, called Vip, was able to process two full textbook proofs, but was a
proof of concept. The System for Automated Deduction (SAD) [21] was developed to write and check proofs written
in the controlled natural language (CNL) ForTheL [22]. SAD converts ForTheL expressions to first-order formulas
and passes them to first-order automated theorem provers for checking. An ongoing project with the same objectives
of writing and checking proofs in a CNL is Naproche [8]. The CNL used as an input for Naproche is also based upon
ForTheL, and comes with a LATE[X dialect. Thus, the mathematical documents written for Naproche also get typeset]
as LATE[X documents. A number of results, like group theory up to the Sylow theorems, initial chapters from Walter]
Rudin’s Analysis, and set theory up to Silver’s theorem in cardinal arithmetic have been formalised and verified in
Naproche.
The language of mathematics from a linguistic point of view was studied by Ganesalingam in his PhD thesis [11].
Ganesalingam mentions the linguistic features found in the language in detail and gives a blueprint of formalisation
mechanisms using an extended version of DRT. As far as we know, the work has not been implemented as a practical
tool.
Two projects which use Grammatical Framework (GF) for autoformalisation deserve a mention here. The first, called
the MathNat project [12], consisted of developing a CNL for writing statements and proofs, and developing a systemindependent abstract mathematical language. In MathNat, GF was used to parse the CNL expressions. The second is
the Grammatical Logical Inference Framework (GLIF) [20], which can be used to do inference on statements written
in a CNL. Here too, GF is used for parsing, but other tools are used for semantic construction, logical processing, and
inference.
Apart from rule-based systems, neural networks have also been used for autoformalisation. Neural translation methods
have been used to translate LATE[X strings to Mizar input [24],[23]. In recent work, Large language models (LLMs) were]
used to formalise mathematical problems in to Isabelle/HOL input [25] by providing a few examples and asking the
LLM to formalise the statement. Similarly, the LLM Codex was used, alongwith input-dependent prompt selection,
to formalise 120 natural language statements to Lean input with a 65% accuracy [10]. Generating parallel corpora
for training models is an expensive task, although recently LLMs have been used for that as well. A corpus of 332K
formal-informal statement pairs has been produced [13] by informalising statements using GPT-4 [5] from the Archive
of Formal Proofs [2], which is a collection of proof libraries for Isabelle, and the Lean 4 library mathlib4 [3].
-----
**2.1** **Lean**
The Lean 4 theorem prover (or Lean) is an interactive theorem prover and a full-fledged programming language [15].
Lean and its predecessors have been used for large-scale formalisation projects like the Sphere Eversion Project [9]
and Perfectoid Spaces [6]. Lean also has a rapidly evolving monolithic mathematical library called mathlib4. As of
March 2024, mathlib4 has about 140k theorem statements and 76k definitions [4]. For GFLean, we chose the target
language as Lean because Lean can be used to formalise research-level mathematics, and a large body of mathematics
has already been formalised in it.
**2.2** **Grammatical Framework (GF)**
**Grammatical Framework (GF) is a special-purpose programming language designed to write multilingual grammars.**
Each GF program is called a GF grammar and is made up of a single Abstract Syntax and at least one Concrete
**Syntax. For translation, the abstract syntax acts as a bridge between the various concrete syntaxes. The abstract**
syntax encodes everything that needs to be preserved during translation. The concrete syntaxes encode language
specific peculiarities, for example number or gender agreements.
Building translation systems via GF uses one of the two following methods.
1. Using only GF. In this method, a single abstract syntax is defined, and as many concrete syntaxes are defined
as there are languages. The GF programmer has to define the abstract syntax and concrete syntaxes in a way
such that the syntax for all the languages can be realized as the concrete syntaxes corresponding to the same
abstract syntax.
2. Embedding GF grammars in a host program. One usually adopts this method if in order to carry out a
faithful language translation, defining a common abstract syntax is difficult, and one needs to perform some
abstract syntax tree (AST) transformations. In this method, a GF grammar is used to parse the input or
linearize the output, but an external host program is used to perform the AST transformations. A. Ranta
[17], who has played a leading role in the development of GF, mentions that GF lacks the program constructs
and libraries needed for non-compositional translations, such as list processing, state management, and so
on, but these translations can be done via embedding a GF grammar in a host program. GF grammars can
be embedded in Haskell, Python and JavaScript programs [16]. Simple but useful interfaces for formalising
mathematics employing this method have already been built [17]. Since for GFLean we had to implement
some non-compositional translations, we chose this method. Specifically, for GFLean we used GF to parse
the input and linearize the output, and we used Haskell to do the AST transformations.
GF is a high-level grammar writing tool. The user just needs to write the grammar rules, and gets the lexer, parser,
and type-checker for free [16]. The user can utilize records and tables to define concrete syntaxes such that the various
grammatical agreement rules hold, such as the grammatical number agreement between the verb phrase and the noun
phrase, or gender agreement between a noun and a pronoun. The division of functionalities between the abstract and
concrete syntax makes grammar writing modular. Along with that, the user can import natural language grammars
as software libraries from the GF Resource Grammar Library (RGL). As of 2019, RGL has implementations of 35
natural languages [18]. For GFLean, we chose not to import RGL because the natural language lexicon in GFLean
is small and we are not focusing on any language except English. Next, we explain the abstract syntax, the concrete
syntax in detail via an example.
**2.2.1** **Abstract Syntax**
In a GF grammar, the abstract syntax defines the linguistic categories and the possible parse trees. The following
example defines an abstract syntax, called Demo, for a toy grammar for the language of mathematics.
```
abstract Demo = {
cat
Prop; Var;
fun
Nzero, Greater1 : Var -> Prop;
Imp : Prop -> Prop -> Prop;
ForAll, Exists : Var -> Prop -> Prop;
Var1, Var2 : Var;
```
-----
For Demo, the lingusitic categories are Prop and Var, which stand for propositions and variables respectively. Under
```
fun, we define the type of each of the constructors. For example, Nzero has the type Var -> Prop which means
Nzero can be applied to a term of type Var to produce a term of type Prop. The term Var1 has type Var. Thus, Nzero
Var1 has type Prop.
```
**2.2.2** **Concrete Syntax**
A concrete syntax defines how the trees defined via the abstract syntax are linearized as strings. Consider the following
concrete syntax called DemoEng, which defines how the abstract syntax trees (ASTs) defined by Demo are linearized
as English sentences.
```
concrete DemoEng of Demo = {
lincat
Prop, Var = Str;
lin
Nzero var = var ++ "is nonzero";
Greater1 var = var ++ "is greater than 1";
Imp prop1 prop2 = "If" ++ prop1 ++ ", then"
++ prop2;
ForAll var prop = "For each" ++ var ++", " ++ prop;
Exists var prop = "There exists an" ++ var
++ "such that" ++ prop;
Var1 = "x1";
Var2 = "x2";
}
```
Here, Str is the built-in string GF datatype, and ++ is the string concatenation operator. The following is another
concrete syntax called DemoMath which linearizes the ASTs from Demo into symbolic mathematics.
```
concrete DemoMath of Demo = {
lincat
Prop, Var = Str;
lin
Nzero var = "~ ("++ var ++ " = 0 )";
Greater1 var = var ++ " > 1";
Imp prop1 prop2 = "(" ++ prop1 ++ "→" ++ prop2 ++ ")";
ForAll var prop = "(∀" ++ var ++", " ++ prop ++ ")";
Exists var prop ="(∃" ++ var ++ "," ++ prop ++ ")";
Var1 = "x1";
Var2 = "x2";
}
```
To translate a string s from a language L1 to L2, GF uses the concrete syntax for L1 to parse s, and find a corresponding
AST. Then, GF linearizes the AST as a string of L2. For example, for the Demo, DemoEng, and the DemoMath grammar,
parsing the sentence
```
For each x1, There exists an x2 such that If x1 is nonzero, then x2 is greater
than 1
```
produces the AST as shown in Figure 1. Then, GF linearizes the produced AST as
```
(∀ x1, (∃ x2, ( ~ ( x1 = 0 ) →x2 > 1 ) ) ).
```
**3** **GFLean**
GFLean is implemented Haskell and contains two separate embedded GF grammars. The first grammar defines the
input for GFLean. We call the input language for GFLean Simplified ForTheL. The second grammar is used to
linearize the tranlsated abstract syntax trees (ASTs) to Lean expressions. GFLean performs the following steps in the
given order to convert a Simplified ForTheL expression to a corresponding Lean expression:
-----
Figure 1: The AST produced by GF for a sentence from DemoEng.
1. Parsing: GFLean parses the Simplified ForTheL expression using a GF grammar and produces the expression’s AST.
2. Simplification: A series of tree transformations, which we call simplification, happen on the AST. The
simplification is implemented in Haskell.
3. Translation: The AST for the Simplified ForTheL expression obtained from the previous step is converted
into an AST for the corresponding Lean expression. This step is implemented in Haskell as well.
4. Linearization: The AST for the Lean expression is linearized as the Lean expression using GF.
The translation pipeline comprising the four steps above is shown in Figure 2.
Figure 2: The GFLean processing pipeline
The adopted method employed in GFLean of simplifying the ASTs and then translating them into target expressions is
broadly taken from the System for Automated Deduction (SAD) [21]. The methodological differences between SAD
and GFLean are the following:
1. SAD accepts ForTheL, whereas GFLean only accepts Simplified ForTheL.
2. GFLean uses GF for parsing and linearization, whereas in SAD all the steps are fully implemented in Haskell.
-----
**3.1** **Simplified ForTheL**
Simplified ForTheL is a simplified version of the controlled natural language ForTheL [22]. ForTheL is the input
language used by SAD [21], and the input language for Naproche is based upon ForTheL [8]. Our GF implementation
of the Simplified ForTheL syntax is based upon a GF implementation of the ForTheL syntax used in the Grammatical
Logical Framework (GLF) [19].
**3.1.1** **Differences between ForTheL and Simplified ForTheL**
Simplified ForTheL is a simplified version of ForTheL in the following sense:
1. Left adjectives. In ForTheL, multiple left-adjectives can be added in front of an entity, whereas in Simplified
ForTheL, only one left-adjective is allowed. For example, in ForTheL one can write x is an odd prime
```
integer, but in Simplified ForTheL one has to write x is an integer, x is odd and x is prime.
```
2. Conjunction of predicates. In ForTheL, a conjunction of predicates is allowed but in Simplified ForTheL,
only a single predicate in a sentence is allowed. For example, in ForTheL one can write x is odd and
```
greater than 4, but in Simplified ForTheL one has two write x is odd and x is greater than 4.
```
3. Conjunction of terms. In ForTheL, a clause can have a conjunction of terms as the subject but in Simplified
ForTheL, a clause can just have a single term as the subject. Thus, in ForTheL one can write x and y are
```
odd but in Simplified ForTheL one has to write x is odd and y is odd.
```
4. Macro-grammar. By macro-grammar, we mean how the sentences are organised to form a text on the whole.
The macro-grammar for ForTheL is geared towards SAD but the macro-grammar for Simplified ForTheL is
geared towards Lean.
5. Dynamicity. The lexicon for ForTheL is dynamic in the sense that the user can add to the lexicon during
runtime by using patterns [22]. Currently, GF grammars are static and can not be changed during runtime.
Thus, it is not possible expand the Simplified ForTheL lexicon during runtime.
**3.2** **GFLean Translation Examples**
We will see a few input-output examples produced by GFLean. The input is processed in a series of steps each of
which is shown when GFLean is run. For brevity, here we just show the input, and the final output. The examples
demonstrate in detail the range of natural language expressions which GFLean can process.
The input
```
Ex. Assume x is a rational number equal to 2 * 2. Then x is greater than 3.
```
produces the output
```
example (x : Q) (h33 : x = (2 * 2)) : x > 3 := sorry
```
Thus, GFLean can process expressions containing basic arithmetic operators. The input
```
Ex. Assume x is a real number less than 0. Then no nonnegative integer a such that a
is positive is not greater than x.
```
produces the output
```
example (x : R) (h67 : x < 0) : ∀ (a : Z), ((nneg a ∧ pos a) → (¬ (¬ a > x))) := sorry
```
Thus, GFLean can correctly model how the natural language quantifiers (in this case no) and negation occurring inside
a sentence (not) interact. The input
```
Ex. Assume x is an even integer greater than 32. Then x is greater than every integer
less than 32.
```
produces the output
```
example (x : Z) (h70 : even x) (h56 : x > 32) : ∀ (x34 : Z), (x34 < 32 → x > x34) := sorry
```
Thus, we can modify common nouns like integers with adjectives to the left (in this case even) and adjectival
phrases to the right (in this case greater than 32) in the input. Also, we can have quantifiers in the predicate (in
this case every in every integer less than 32).
-----
**3.3** **The Workings of GFLean**
In this section, we explain the four steps mentioned before in detail.
**3.3.1** **Parsing Simplified ForTheL expressions**
The parsing of a Simplified ForTheL expression is done via GF for which we had to implement the syntax of Simplified
ForTheL as a GF grammar. GF parses the expression and produces an abstract syntax tree (AST) which is passed on
to the next step. For example, after parsing the Simplified ForTheL expression
```
Ex. 4 is not less than 3.
```
the AST shown in Figure 3 is passed on to the next step.
Figure 3: The AST produced by GF after parsing Ex. 4 is not less than 3.
Currently, the lexicon contains 8 items like REAL_NUMBER, INTEGER, EXP, SUM, MINUS, etc. which, after
combining with one or multiple terms, behave like a noun phrase, and 11 items like LESS_TE, GREATER_THAN,
```
POSITIVE, etc. which, after combining with one or multiple terms, behave like an adjectival phrase. For both the
```
Simplified ForTheL grammar and the Lean grammar, we extract the lexicon from the same GF file, which makes
grammar extension more modular.
We endow the arithmetic operators with a precedence hierarchy by using records in the concrete syntax of the Simplified ForTheL grammar. As a result, the Simplified ForTheL expression 2 + 2 * 2 has the same AST as the
expression 2 + (2 * 2). To override the precedence hierarchy, the user needs to use ( and ). For example, in this
case the user needs to write (2 + 2) * 2.
-----
**3.3.2** **AST simplification**
This step works on the level of ASTs, i.e. it makes certain changes the to ASTs produced by the previous step. Because
of space limitations, instead of showing how the simplification changes the ASTs, we will show how the linearizations
of the ASTs changes during simplification. This step is implemented in Haskell, and is made up of many intermediate
sub-steps. We explain the sub-step in detail now.
The first sub-step is giving an entity a name if it is unnamed. This sub-step is present in SAD [22] as well. The name of
an unnamed entity in the Simplified ForTheL grammar is represented by a metavariable. We replace the metavariable
with an actual name. The names introduced are new and differ from the names already used. For example, after this
sub-step, the AST for
```
Ex. Assume x is an integer. Assume x is greater than 2. Then no odd integer less than
1 is greater than x.
```
becomes the AST for
```
Ex . Assume x is a integer (x 6). Assume x is greater than 2. Then no odd integer
(x 35) less than 1 is greater than x.
```
We call the next sub-step variable unification. This sub-step is needed for correct translation because sometimes the
introduced names need to match the variables names already present in the sentence. For example, after this sub-step,
the AST for
```
Assume x is a rational number (x 6).
```
becomes the AST for
```
Assume x is a rational number x.
```
In the next sub-step, we change the ASTs such that the corresponding sentences are in a certain form without changing
the meaning. Specifically, we convert in-situ quantification to ex-situ quantification. Consequently, the AST for
```
Every integer x greater than 1 is greater than 2.
```
becomes the AST for
```
For every integer x greater than 1, x is greater than 2.
```
Then, we change the structure of the entities so that both the left-adjectives and the adjectival phrases present on the
right of a noun are written just as a statements modifying the noun from the right. For example, the AST for
```
odd integer x greater than 1
```
becomes the AST for
```
integer x such that x is odd and x is greater than 1
```
After the executing the AST simplification step fully, the AST for
```
Ex. Assume x is an integer. Assume x is greater than 2. Then no odd integer less than
1 is greater than x.
```
becomes the AST for
```
Ex . Assume x is an integer x. Assume x is greater than 2. Then for no integer (x 35)
such that (x 35) is odd and (x 35) is less than 1, (x 35) is greater than x.
```
**3.3.3** **AST transformation**
Similar to AST simplification, in this step too, we make changes to the AST. But because of space limitations, we show
the changes in the corresponding linearized strings instead of the ASTs themselves. In this step, we take the ASTs
obtained after performing the simplification and construct new ASTs corresponding to the correct Lean translation. We
first translate the lexicon by translating linguistic categories to unary or binary functions and predicates. Specifically,
-----
1. We call the lexical item a rawNoun0 which behaves like a noun phrase. A rawNoun0 is converted into a Lean
type. For example, the lexical item INTEGER, which is a rawNoun0, gets converted to the type Int.
2. We call the lexical item a rawNoun2 which after taking two terms behaves like a noun phrase. A rawNoun2
is converted into a binary function. For example, consider the lexical item EXP, which is a rawNoun2 and
corresponds to the exponent function. After taking two terms, EXP behaves like a noun phrase (e.g. EXP 2
```
3, which can be used in place of a noun as it is equal to 8). EXP gets converted to the Lean exponent function
```
which takes two arguments.
3. We call the lexical item a rawAdjective0 which behaves like an adjectival phrase. A rawAdjective0 is converted into a unary relation. For example, the lexical item POSITIVE, which is a rawAdjective0, gets converted to the unary relation P, where Px stands for x is positive.
4. We call the lexical item a rawAdjective1 which after taking a term behaves like an adjectival phrase. A
_rawAdjective1 gets converted into a binary relation. The lexical item LESS_THAN, which is a rawAdjective1,_
gets converted to the binary relation L, where Lxy signifies x is less than y.
For sentences, the ASTs obtained after simplification get converted to ASTs for Lean expressions in a meaning preserving way. For example, the input
```
Ex. Assume x is an odd integer greater than 3. Then x is greater than 2.
```
after being parsed and going through simplification, produces the AST for the expression
```
Ex. Assume x is an integer x. Assume x is odd. Assume x is greater than 3. Then x is
greater than 2.
```
which in turn gets translated to the AST for the Lean expression
```
example (x : Z) (h40 : odd x) (h27 : x > 3) : x > 2 := sorry
```
after the AST translation step. The natural language quantifiers and the logical connectives contained in the input are
also translated in a meaning-preserving manner. For example, the input
```
Ex. Assume x is an odd integer greater than 3. Then no even integer greater than x is
less than every negative integer.
```
after being parsed and going through simplification, produces the AST for the expression
```
Ex. Assume x is an integer x. Assume x is odd. Assume x is greater than 3. Then for no
integer (x 32) such that (x 32) is even and (x 32) is greater than x, for every
integer (x 53) such that (x 53) is negative, (x 32) is less than (x 53).
```
which in turn get translated to the AST for the Lean expression
```
example (x : Z) (h111 : odd x) (h98 : x > 3) : ∀ (x32 : Z),
((even x32 ∧ x32 > x) → (¬ ∀ (x53 : Z), (neg x53 → x32 < x53))) := sorry
```
after the AST translation step.
**3.3.4** **Linearizing as Lean expressions**
The last step of linearizing the ASTs to Lean expression is done by GF. For this step, we had to write a GF grammar
for Lean expressions.
Along with the four steps mentioned in this section, we do a minor pre-processing to the input and a minor postprocessing to the output. The pre-processing steps include converting all text to lowercase, and the post-processing
steps include deleting extra whitespaces and giving each hypothesis variable a unique name.
**4** **Limitations**
GFLean still is a rudimentary program, has a tiny lexicon and accepts a small fragment of the language of mathematics.
Regarding the Simplified ForTheL concrete syntax, we use variations for singular and plural forms of nouns and verbs.
-----
As a result, in the Simplified ForTheL concrete syntax there is no difference between is and are, a and an, integer
and integers, etc. Thus, GFLean can accept ungrammatical sentences like Assume x are an odd integers.
As mentioned in Section 3.1.1, Simplified ForTheL lacks certain linguistics constructs such as conjunction of predicates, conjunction of terms and multiple left-adjectives. These constructs are abundant in the language of mathematics,
and are present in ForTheL as well. Thus, Simplified ForTheL is not an adequate controlled natural language for the
language of mathematics.
Another limitation concerns how much the user can expand the lexicon themselves. The language of mathematics is
dynamic in the sense that definitions and notations introduce new grammar rules and words to the lexicon [11]. Thus,
any functioning grammar for the language of mathematics should be extensible via definitions. A limitation of GF is
that the grammars cannot be extended during run-time. As a consequence, the user cannot write new definitions and
convert them to Lean expressions using GFLean. All the definitions need to be hard-wired in the grammar.
**5** **Discussion**
Continuing the discussion about dynamicity from Section 4, GFLean nowhere communicates with Lean. One possible
way to attain dynamicity would be to reimplement GFLean in Lean itself. Lean is the target language for GFLean,
and is itself a full-fledged programming language [15]. The metaprogramming features of Lean allow us to access the
environment and use the definitions and theorems in other Lean programs. Thus, dynamicity should not be hard to
achieve once GFLean has been implemented in Lean, although how hard it is to reimplement GFLean in Lean is an
open question.
On another note, rule-based translation systems and neural translation systems can be used together to build more
robust autoformalisation programs. Building rule-based translation systems amounts to manually designing numerous
translation rules. Neural network based translation systems do not have this problem, but are sometimes erroneous
and thus domain experts are needed to filter out the incorrect translations. Rule-based systems designed for linearizing
formal expression to natural language text can be used to help the user filter out the wrong translations without putting
a domain expert in the loop. For example, given a natural language statement s, let s[′] be the formalisation of s
produced by a neural translation system. To check whether s[′] is indeed a correct formalisation of s, we need a domain
expert. But if a rule-based system, which correctly linearizes the formal expressions into natural language statements,
is available, the rule-based system can be used to convert s[′] to a natural language statement s[′′]. The user can then
themselves filter out a wrong translation by checking if s and s[′′] mean the same thing. This eliminates the need of a
domain expert and results in a more robust autoformalisation program.
**6** **Conclusion and Further Work**
In this article, we have presented our ongoing effort to construct a framework for autoformalisation, called GFLean.
GFLean converts simple mathematical statements to expressions for the Lean theorem prover. We use a high-level
grammar writing tool called GF for parsing and linearisation. Using GF allows us to just design the grammar, and
we get the tokenizer and parser for free. For the intermediate steps, we use Haskell to perform abstract syntax tree
manipulations. GFLean is still a program under development and the grammar for the input is very basic. GFLean can
handle simple natural language quantifiers, logical operations, adjectival modifications and quantifiers occurring in the
predicate in the input, but does not have support for sentences with a conjunction of predicates, or a conjunction of
terms, or more than one adjective modifying a noun. In terms of how GFLean performs on examples from a textbook,
it can parse and formalise 42 out of the 62 statements from Chapter 3 of the textbook Mathematical Proofs by G.
Chartrand, A. D. Polimeni, and P. Zhang [7] with minor rephrasing. The preliminary work outlined here supports our
working assumption that GF is a useful tool to build modular and potentially scalable rule-based autoformalisation
programs.
We plan to extend GFLean in the following directions.
1. We want to make both the concrete syntaxes better by using records, tables and parameters. By using these
high-level constructs, we can model the agreements found in English.
2. ForTheL has some of the linguistic constructs that Simplified ForTheL lacks. We want to extend GFLean
from Simplified ForTheL to ForTheL.
3. We want to expand the lexicon.
-----
**Acknowledgements**
The work is supported by the Faculty of Science and Engineering, The University of Manchester. The author is
thankful to Dr. Ian Pratt-Hartmann for his expertise, insights, helping with the plan of the project and proof-reading
the manuscript. The author is also thankful to Dr. Inari Listenmaa for help with GF-related queries, and to Jan Frederik
Schaeffer for stimulating discussions and help with the GF implementation of Simplified ForTheL.
**References**
[[1] https://github.com/pkshashank/GFLeanTransfer](https://github.com/pkshashank/GFLeanTransfer)
[[2] https://www.isa-afp.org/](https://www.isa-afp.org/)
[[3] https://github.com/leanprover-community/mathlib4](https://github.com/leanprover-community/mathlib4)
[[4] https://leanprover-community.github.io/mathlib_stats.html](https://leanprover-community.github.io/mathlib_stats.html)
[5] Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F.L., Almeida, D., Altenschmidt, J., Altman,
S., Anadkat, S., et al.: Gpt-4 technical report. arXiv preprint arXiv:2303.08774 (2023)
[6] Buzzard, K., Commelin, J., Massot, P.: Formalising perfectoid spaces. In: Proceedings of the 9th ACM SIGPLAN International Conference on Certified Programs and Proofs. pp. 299–312 (2020)
[7] Chartrand, G., Polimeni, A.D., Zhang, P.: Mathematical proofs. chap. 3, pp. 81–104. Pearson (2017)
[8] De Lon, A., Koepke, P., Lorenzen, A., Marti, A., Schütz, M., Wenzel, M.: The isabelle/naproche natural language
proof assistant. In: Automated Deduction–CADE 28: 28th International Conference on Automated Deduction,
Virtual Event, July 12–15, 2021, Proceedings 28. pp. 614–624. Springer International Publishing (2021)
[9] van Doorn, F., Massot, P., Nash, O.: Formalising the h-principle and sphere eversion. In: Proceedings of the 12th
ACM SIGPLAN International Conference on Certified Programs and Proofs. pp. 121–134 (2023)
[10] Gadgil, S., Tadipatri, A.R., Agrawal, A., Narayanan, A., Goyal, N.: Towards automating formalisation of theorem statements using large language models. In: 36th Conference on Neural Information Processing Systems
(NeurIPS 2022) Workshop on MATH-AI (2022)
[11] Ganesalingam, M.: The language of mathematics. Springer (2013)
[12] Humayoun, M., Raffalli, C.: Mathnat-mathematical text in a controlled natural language. Special issue: Natural
Language Processing and its Applications 46, 293–307 (2010)
[13] Jiang, A.Q., Li, W., Jamnik, M.: Multilingual mathematical autoformalization. arXiv preprint arXiv:2311.03755
(2023)
[14] Kamp, H., Van Genabith, J., Reyle, U.: Discourse representation theory. In: Handbook of Philosophical Logic:
Volume 15, pp. 125–394. Springer (2010)
[15] Moura, L.d., Ullrich, S.: The lean 4 theorem prover and programming language. In: Automated Deduction–
CADE 28: 28th International Conference on Automated Deduction, Virtual Event, July 12–15, 2021, Proceedings 28. pp. 625–635. Springer (2021)
[16] Ranta, A.: Grammatical framework: Programming with multilingual grammars, vol. 173. CSLI Publications,
Center for the Study of Language and Information Stanford (2011)
[17] Ranta, A.: Translating between language and logic: what is easy and what is difficult. In: Automated Deduction–
CADE-23: 23rd International Conference on Automated Deduction, Wrocław, Poland, July 31-August 5, 2011.
Proceedings 23. pp. 5–25. Springer (2011)
[18] Ranta, A., Angelov, K., Gruzitis, N., Kolachina, P.: Abstract syntax as interlingua: Scaling up the grammatical
framework from controlled languages to robust pipelines. Computational Linguistics 46(2), 425–486 (2020)
[19] Schaefer, J.F., Amann, K., Kohlhase, M.: Prototyping controlled mathematical languages in jupyter notebooks.
In: Mathematical Software–ICMS 2020: 7th International Conference, Braunschweig, Germany, July 13–16,
2020, Proceedings 7. pp. 406–415. Springer (2020)
[20] Schaefer, J.F., Kohlhase, M.: The glif system: A framework for inference-based natural-language understanding
(2020)
[21] Verchinine, K., Lyaletski, A., Paskevich, A.: System for automated deduction (sad): a tool for proof verification.
In: International Conference on Automated Deduction. pp. 398–403. Springer (2007)
-----
[22] Vershinin, K., Paskevich, A.: Forthel—the language of formal theories. International Journal of Information
Theories and Applications 7(3), 120–126 (2000)
[23] Wang, Q., Brown, C., Kaliszyk, C., Urban, J.: Exploration of neural machine translation in autoformalization
of mathematics in mizar. In: Proceedings of the 9th ACM SIGPLAN International Conference on Certified
Programs and Proofs. pp. 85–98 (2020)
[24] Wang, Q., Kaliszyk, C., Urban, J.: First experiments with neural translation of informal to formal mathematics.
In: Intelligent Computer Mathematics: 11th International Conference, CICM 2018, Hagenberg, Austria, August
13-17, 2018, Proceedings 11. pp. 255–270. Springer (2018)
[25] Wu, Y., Jiang, A.Q., Li, W., Rabe, M., Staats, C., Jamnik, M., Szegedy, C.: Autoformalization with large
language models. Advances in Neural Information Processing Systems 35, 32353–32368 (2022)
[26] Zinn, C.: Understanding informal mathematical discourse. PhD thesis, Institut fur Informatik, Universitat
Erlangen-Nurnberg (2004)
-----
**A** **Formalisation of statements from a textbook via GFLean**
GFLean can formalise 42 out of 62 statements from Chapter 3 of the textbook Mathematical Proofs by G. Chartrand,
A. D. Polimeni, and P. Zhang [7]. Next, we show how each of the statement can be formalised using GFLean. We
present them in the following manner:
**Theorem Number (Result number in the book.) Statement from the book.**
```
A corresponding input for GFLean.
The corresponding GFLean output.
```
The following are the formalisations.
**Theorem 1 (Result 3.1). Let x ∈** **_R. If x < 0, then x[2]_** + 1 > 0.
```
Ex. Assume x is a real number. Assume x is less than 0. Then x ^ 2 + 1 is greater
than 0.
example (x : R) (h39 : x < 0) : ((x ^ 2) + 1) > 0 := sorry
```
**Theorem 2 (Result 3.2). Let x ∈** **_R. If x[2]_** _−_ 2x + 2 ≤ 0, then x[3] _≥_ 8.
```
Ex. Assume x is a real number. Assume x ^ 2 - 2 * x + 2 is less than or equal to 0.
Then x ^ 3 is greater than or equal to 8.
example (x : R) (h57 : (((x ^ 2) - (2 * x)) + 2) ≤ 0) : (x ^ 3) ≥ 8 := sorry
```
**Theorem 3 (Exercise 3.1). Let x ∈** **_R. If 0 < x < 1, then x[2]_** _−_ 2x + 2 ̸= 0.
```
Ex. Assume x is a real number. Assume x is greater than 0 and x is less than 1.
Then x ^ 2 - 2 * x + 2 is not equal to 0.
example (x : R) (h64 : x > 0) (h51 : x < 1) : (((x ^ 2) - (2 * x)) + 2) ̸= 0 := sorry
example (x : R) (h68 : x > 0) (h55 : x < 1) : (¬ (((x ^ 2) - (2 * x)) + 2) = 0) := sorry
```
In this case, GFLean produces two outputs which are syntactically same as Lean expressions. This happens because
the input produces two different parse trees. Ideally, we would want a single parse tree.
**Theorem 4 (Exercise 3.3). Let r** **_Q[+]. If_** _[r][2]r[+1]_
_∈_
1, then _[r][2]r[+2]_
_≤_
_≤_ 2.
```
Ex. Assume r is a positive rational number. Assume (r ^ 2 + 1) / r is less than or
equal to 1. Then (r ^ 2 + 2) / r is less than or equal to 2.
```
```
example (r : Q) (h76 : pos r) (h63 : (((r ^ 2) + 1) / r) ≤ 1) : (((r ^ 2) + 2) / r) ≤ 2 :=
sorry
```
**Theorem 5 (Exercise 3.4). Let x ∈** **_R. If x[3]_** _−_ 5x − 1 ≥ 0, then (x − 1)(x − 3) ≥−2.
```
Ex. Assume x is a real number. Assume x ^ 3 - 5 * x - 1 is greater than or equal to 0.
Then (x - 1) * (x - 3) is greater than or equal to -2.
example (x : R) (h70 : (((x ^ 3) - (5 * x)) - 1) ≥ 0) : ((x - 1) * (x - 3)) ≥ -2 := sorry
```
**Theorem 6 (Exercise 3.6). If a, b and c are odd integers such that a + b + c = 0, then abc < 0.**
```
Ex. Assume a is an odd integer, b is an odd integer and c is an odd integer.
Assume a + b + c is equal to 0. Then a * b * c is less than 0.
example (a : Z) (h106 : odd a) (b : Z) (h85 : odd b) (c : Z) (h64 : odd c) (h51 : ((a + b) +
c) = 0) : ((a * b) * c) < 0 := sorry
```
-----
**Theorem 7 (Exercise 3.7). If x, y and z are three real numbers such that x[2]** + y[2] + z[2] _< xy + xz + yz, then_
_x + y + z > 0._
```
Ex. Assume x is a real number, y is a real number and z is a real number.
Assume x ^ 2 + y ^ 2 + z ^ 2 is less than x * y + x * z + y * z. Then
x + y + z is greater than 0.
example (x : R) (y : R) (z : R) (h99 : (((x ^ 2) + (y ^ 2)) + (z ^ 2)) < (((x * y) + (x * z))
+ (y * z))) : ((x + y) + z) > 0 := sorry
```
**Theorem 8 (Result 3.4). If n is an odd integer, then 3n + 7 is an even integer.**
```
Ex. Assume n is an odd integer. Then 3 * n + 7 is even.
example (n : Z) (h40 : odd n) : even ((3 * n) + 7) := sorry
```
**Theorem 9 (Result 3.5). If n is an even integer, then −5n −** 3 is an odd integer.
```
Ex. Assume n is an even integer. Then -5 * n - 3 is odd.
example (n : Z) (h41 : even n) : odd ((-5 * n) - 3) := sorry
```
**Theorem 10 (Result 3.6). If n is an odd integer, then 4n[3]** + 2n − 1 is odd.
```
Ex. Assume n is an odd integer. Then 4 * n ^ 3 + 2 * n - 1 is odd.
example (n : Z) (h57 : odd n) : odd ((4 * (n ^ 3)) + ((2 * n) - 1)) := sorry
```
**Theorem 11 (Result 3.8). If n is an even integer, then 3n[5]** _is an even integer._
```
Ex. Assume n is an even integer. Then 3 * n ^ 5 is even.
example (n : Z) (h41 : even n) : even (3 * (n ^ 5)) := sorry
```
**Theorem 12 (Exercise 3.8). If x is an odd integer, then 9x + 5 is even.**
```
Ex. Assume x is an odd integer. Then 9 * x + 5 is even.
example (x : Z) (h40 : odd x) : even ((9 * x) + 5) := sorry
```
**Theorem 13 (Exercise 3.9). If x is an even integer, then 5x −** 3 is an odd integer.
```
Ex. Assume x is an even integer. Then 5 * x - 3 is odd.
example (x : Z) (h40 : even x) : odd ((5 * x) - 3) := sorry
```
**Theorem 14 (Exercise 3.10). If a and c are odd integers, then ab + ac is even for every integer b.**
```
Ex. Assume a is an odd integer and c is an odd integer. Then for every integer b,
a * b + a * c is even.
example (a : Z) (h78 : odd a) (c : Z) (h57 : odd c) :
```
_∀_ `(b : Z), even ((a * b) + (a * c)) := sorry`
**Theorem 15 (Exercise 3.11). Let n ∈** **_Z. If 1 −_** _n[2]_ _> 0, then 3n −_ 2 is an even integer.
```
Ex. Assume n is an integer. If 1 - n ^ 2 is greater than 0 then 3 * n - 2 is even.
example (n : Z) : ((1 - (n ^ 2)) > 0 → even ((3 * n) - 2)) := sorry
```
**Theorem 16 (Result 3.10). Let x ∈** **_Z. If 5x −_** 7 is even, then x is odd.
```
Ex. Assume x is an integer. If 5 * x - 7 is even then x is odd.
```
-----
```
example (x : Z) : (even ((5 * x) - 7) → odd x) := sorry
```
**Theorem 17 (Result 3.11). Let x ∈** **_Z. Then 11x −_** 7 is even if and only if x is odd.
```
Ex. Assume x is an integer. Then 11 * x - 7 is even iff x is odd.
example (x : Z) : (even ((11 * x) - 7) ↔ odd x) := sorry
```
**Theorem 18 (Result 3.12). Let x ∈** **_Z. Then x[2]_** _is even if and only if x is even._
```
Ex. Assume x is an integer. Then x ^ 2 is even iff x is even.
example (x : Z) : (even (x ^ 2) ↔ even x) := sorry
```
**Theorem 19 (Lemma 3.13). Let x ∈** **_Z. If 5x −_** 7 is odd, then x is even.
```
Ex. Assume x is an integer. If 5 * x - 7 is odd then x is even.
example (x : Z) : (odd ((5 * x) - 7) → even x) := sorry
```
**Theorem 20 (Result 3.14). Let x ∈** **_Z. If 5x −_** 7 is odd, then 9x + 2 is even.
```
Ex. Assume x is an integer. If 5 * x - 7 is odd then 9 * x + 2 is even.
example (x : Z) : (odd ((5 * x) - 7) → even ((9 * x) + 2)) := sorry
```
**Theorem 21 (Exercise 3.16). Let x ∈** **_Z. If 7x + 5 is odd, then x is even._**
```
Ex. Assume x is an integer. If 7 * x + 5 is odd then x is even.
example (x : Z) : (odd ((7 * x) + 5) → even x) := sorry
```
**Theorem 22 (Exercise 3.17). Let n ∈** **_Z. If 15n is even, then 9n is even._**
```
Ex. Assume n is an integer. If 15 * n is even then 9 * n is even.
example (n : Z) : (even (15 * n) → even (9 * n)) := sorry
```
**Theorem 23 (Exercise 3.18). Let x ∈** **_Z. Then 5x −_** 11 is even if and only if x is odd.
```
Ex. Assume x is an integer. Then 5 * x - 11 is even iff x is odd.
example (x : Z) : (even ((5 * x) - 11) ↔ odd x) := sorry
```
**Theorem 24 (Exercise 3.19). Let x ∈** **_Z. If 7x + 4 is even, then 3x −_** 11 is odd.
```
Ex. Assume x is an integer. If 7 * x + 4 is even then 3 * x - 11 is odd.
example (x : Z) : (even ((7 * x) + 4) → odd ((3 * x) - 11)) := sorry
```
**Theorem 25 (Exercise 3.20). Let x ∈** **_Z. Then 3x + 1 is even if and only if 5x −_** 2 is odd.
```
Ex. Assume x is an integer. Then 3 * x + 1 is even iff 5 * x - 2 is odd.
example (x : Z) : (even ((3 * x) + 1) ↔ odd ((5 * x) - 2)) := sorry
```
**Theorem 26 (Exercise 3.21). Let n ∈** **_Z. Then (n + 1)[2]_** _−_ 1 is even if and only if n is odd.
```
Ex. Assume n is an integer. Then (n + 1) ^ 2 - 1 is even iff n is odd.
example (n : Z) : (even (((n + 1) ^ 2) - 1) ↔ odd n) := sorry
```
**Theorem 27 (Result 3.15). If n ∈** **_Z, then n[2]_** + 3n + 5 is an odd integer.
-----
```
Ex. Assume n is an integer. Then n ^ 2 + 3 * n + 5 is odd.
example (n : Z) : odd (((n ^ 2) + (3 * n)) + 5) := sorry
```
**Theorem 28 (Theorem 3.17). Let a and b be integers. Then ab is even if and only if a is even or b is even.**
```
Ex. Assume a is an integer and b is an integer. Then a * b is even iff a is even or
b is even.
example (a : Z) (b : Z) : (even (a * b) ↔ (even a ∨ even b)) := sorry
```
**Theorem 29 (Exercise 3.26). If n ∈** **_Z, then n[2]_** _−_ 3n + 9 is odd.
```
Ex. Assume n is an integer. Then n ^ 2 - 3 * n + 9 is odd.
example (n : Z) : odd (((n ^ 2) - (3 * n)) + 9) := sorry
```
**Theorem 30 (Exercise 3.27). If n ∈** **_Z, then n[3]_** _−_ _n is even._
```
Ex. Assume n is an integer. Then n ^ 3 - n is even.
example (n : Z) : even ((n ^ 3) - n) := sorry
```
**Theorem 31 (Exercise 3.28). Let x, y ∈** **_Z. If xy is odd, then x and y are odd._**
```
Ex. Assume x is an integer and y is an integer. If x * y is odd then x is odd and
y is odd.
example (x : Z) (y : Z) : (odd (x * y) → (odd x ∧ odd y)) := sorry
```
**Theorem 32 (Exercise 3.29). Let a, b ∈** **_Z. If ab is odd, then a[2]_** + b[2] _is even._
```
Ex. Assume a is an integer and b is an integer. If a * b is odd then a ^ 2 + b ^ 2
is even.
example (a : Z) (b : Z) : (odd (a * b) → even ((a ^ 2) + (b ^ 2))) := sorry
```
**Theorem 33 (Exercise 3.36). Let x, y ∈** **_Z. If 3x + 4y and 4x + 5y are both even, then x and y are both even._**
```
Ex. Assume x is an integer and y is an integer. If 3 * x + 4 * y is even and
4 * x + 5 * y is even then x is even and y is even.
```
```
example (x : Z) (y : Z) : ((even ((3 * x) + (4 * y)) ∧ even ((4 * x) + (5 * y))) → (even x ∧
even y)) := sorry
```
**Theorem 34 (Exercise 3.37). Let x, y, z ∈** **_Z. If exactly two of the three integers x, y, z are even, then 3x + 5y + 7z_**
_is odd._
```
Ex. Assume x is an integer, y is an integer and z is an integer. Assume x is even,
y is even and z is not even or x is even, y is not even and z is even or x is not
even, y is even and z is even. Then 3 * x + 5 * y + 7 * z is odd.
example (x : Z) (y : Z) (z : Z) (h158 : ((even x ∧ (even y ∧ (¬ even z))) ∨ ((even x ∧ ((¬
even y) ∧ even z)) ∨ ((¬ even x) ∧ (even y ∧ even z))))) : odd (((3 * x) + (5 * y)) + (7 *
z)) := sorry
```
**Theorem 35 (Exercise 3.40). Let a, b ∈** **_Z. If a is even or b is even, then ab is even._**
```
Ex. Assume a is an integer and b is an integer. If a is even or b is even then
a * b is even.
example (a : Z) (b : Z) : ((even a ∨ even b) → even (a * b)) := sorry
```
-----
**Theorem 36 (Example 3.19 (2)). If n is an odd integer, then 3n −** 5 is an even integer.
```
Ex. Assume n is an odd integer. Then 3 * n - 5 is even.
example (n : Z) (h40 : odd n) : even ((3 * n) - 5) := sorry
```
**Theorem 37 (Example 3.19 (4)). Let n be an integer. If 3n −** 5 is an odd integer, then n is an even integer.
```
Ex. Assume n is an integer. If 3 * n - 5 is odd then n is even.
example (n : Z) : (odd ((3 * n) - 5) → even n) := sorry
```
**Theorem 38 (Problem 3.21). If m is an even integer and n is an odd integer, then 3m + 5n is odd.**
```
Ex. Assume m is an even integer and n is an odd integer. Then 3 * m + 5 * n is odd.
example (m : Z) (h67 : even m) (n : Z) (h45 : odd n) : odd ((3 * m) + (5 * n)) := sorry
```
**Theorem 39 (Exercise 3.43). Let n ∈** **_Z. If 3n −_** 8 is odd, then n is odd.
```
Ex. Assume n is an integer. If 3 * n - 8 is odd then n is odd.
```
```
example (n : Z) : (odd ((3 * n) - 8) → odd n) := sorry
```
**Theorem 40 (Exercise 3.45). Let x, y ∈** **_Z. If x or y is even, then xy[2]_** _is even._
```
Ex. Assume x is an integer and y is an integer. If x is even or y is even then
x * y ^ 2 is even.
example (x : Z) (y : Z) : ((even x ∨ even y) → even (x * (y ^ 2))) := sorry
```
**Theorem 41 (Exercise 3.47). Let x ∈** **_Z. If 7x −_** 3 is even, then 3x + 8 is odd.
```
Ex. Assume x is an integer. If 7 * x - 3 is even then 3 * x + 8 is odd.
```
```
example (x : Z) : (even ((7 * x) - 3) → odd ((3 * x) + 8)) := sorry
```
**Theorem 42 (Exercise 3.48). Let n ∈** **_Z. Then (n −_** 5)(n + 7)(n + 13) is odd if and only if n is even.
```
Ex. Assume n is an integer. Then (n - 5) * (n + 7) * (n + 13) is odd iff n is even.
example (n : Z) : (odd (((n - 5) * (n + 7)) * (n + 13)) ↔ even n) := sorry
```
**B** **A Formal Grammar of Simplified ForTheL**
In this section, we give a formal grammar for Simplified ForTheL. Although, we wrote a Grammatical Framework
(GF) grammar for Simplified ForTheL, for comprehensibility, here we present it as a context-free grammar (CFG). To
present the Simplified ForTheL grammar as a CFG, we had to combine the abstract and concrete syntax in a way such
that the readability is maintained. As a result, the language defined by the following CFG is not exactly Simplified
ForTheL, but a close approximation of it. An exact CFG for Simplified ForTheL can be obtained by importing the
```
TextsEng.gf file in the GF shell and typing the command pg -printer=bnf in the shell.
```
We use the BNF notation to present syntax. Nonterminals are written in italic (e.g. variable) and terminals in typewriter font (e.g. integer). Grammar productions have the form:
_nonterm_ _alt1_ _alt2_ _. . ._ _altn_
_→_ _|_ _|_ _|_
Let t1, t2, t3 and t4 be strings made up of terminals and non-terminals. Then, the following conventions are adopted:
- The symbol ε denotes the empty string.
- The pattern t1 | t2 denotes a choice between t1 and t2.
- The pattern t1 [ t2 ] t3 denotes that t2 is optional.
- The pattern t1 ( t2 | t3 ) t4 denotes a choice between t1 t2 t4 and t1 t3 t4.
We present the grammar in a bottom-up fashion. The following subsections, called Lexicon (B.1), Notions (B.2),
Terms (B.3), Predicates (B.4), Statements (B.5), and Texts (B.6), correspond to the Abstract Syntax file names found
in the GitHub repository of the project [1]. With respect to the grammar given, GFLean works on a text and produces
its formalisation.
-----
**B.1** **Lexicon**
**B.2** **Notions**
**B.3** **Terms**
_variable →_ `a | b | c | k | m | n | r | x | y | z`
_rawNoun0 →_ `real ( number | numbers )`
_| ( integer | integers )_
_| rational ( number | numbers )_
_rawAdjective1 →_ `less than`
_| less than or equal to_
_| greater than_
_| greater than or equal to_
_| not equal to_
_| equal to_
_rawAdjective0 →_ `positive | odd | even | nonnegative | negative`
_rawNoun2 →_ `+ | - | * | / | ^`
_primSimpleAdjective →_ _rawAdjective0_
_primClassNoun →_ _rawNoun0 names_
_names →_ _variable_
_variable →_ `(x Int)`
_| ε_
_leftAttribute →_ _primSimpleAdjective_
_rightAttribute →_ _isPredicate_
_| that doesPredicate_
_| such that statement_
_notion →_ _primClassNoun_
_| primClassNoun rightAttribute_
_| leftAttribute primClassNoun_
_| leftAttribute primClassNoun rightAttribute_
_Int →_ _. . . | -1 | 0 | 1 | . . ._
_primDefiniteNoun →_ _term rawNoun2 term_
_term →_ _quantifiedNotion_
_| definiteTerm_
_quantifiedNotion →_ `every term`
_| some term_
_| no term_
_definiteTerm →_ _primDefiniteNoun_
_| variable_
_| Int_
-----
**B.4** **Predicates**
**B.5** **Statements**
_polarity →_ _ϵ | not_
_primAdjective →_ _rawAdjective0_
_| rawAdjective1 term_
_doesPredicate →_ ( is | are ) isPredicate
_| ( is | are ) is_aPredicate_
_isPredicate →_ _polarity primAdjective_
_is_aPredicate →_ _polarity ( a | an | ϵ ) notion_
_| polarity definiteTerm_
_statement →_ _statement ( and |, ) statement_
_| statement or statement_
_| if statement then statement_
_| it’s not that statement_
_| for quantifiedNotion, statement_
_| term doesPredicate_
_| ( there exist | there exists a | there exists an ) notion_
_| ( there exists no | there exist no ) notion_
_text →_ _example_
_example →_ `ex. Lassumption [then] statement .`
_Lassumption →_ `assume assumption Lassumption | ε`
_assumption →_ _statement ._
**B.6** **Texts**
-----
| [
"Shashank, Pathak"
] | 2024-04-01T00:00:00 | null | false | 0 | 0 | [
"Lean"
] | http://arxiv.org/abs/2404.01234 | https://arxiv.org/abs/2404.01234 | https://www.semanticscholar.org/paper/7852aa1613a5eeaf791376c71ad306f5e17fc853 |
GOLD: Geometry Problem Solver with Natural Language Description | Addressing the challenge of automated geometry math problem-solving in artificial intelligence (AI) involves understanding multi-modal information and mathematics. blackCurrent methods struggle with accurately interpreting geometry diagrams, which hinders effective problem-solving. To tackle this issue, we present the Geometry problem sOlver with natural Language Description (GOLD) model. GOLD enhances the extraction of geometric relations by separately processing symbols and geometric primitives within the diagram. Subsequently, it converts the extracted relations into natural language descriptions, efficiently utilizing large language models to solve geometry math problems. Experiments show that the GOLD model outperforms the Geoformer model, the previous best method on the UniGeo dataset, by achieving accuracy improvements of 12.7% and 42.1% in calculation and proving subsets. Additionally, it surpasses the former best model on the PGPS9K and Geometry3K datasets, PGPSNet, by obtaining accuracy enhancements of 1.8% and 3.2%, respectively. | GOLD enhances the extraction of geometric relations by separately processing symbols and geometric primitives within the diagram, and converts the extracted relations into natural language descriptions, efficiently utilizing large language models to solve geometry math problems. | # GOLD: Geometry Problem Solver with Natural Language Description
**Jiaxin Zhang** **Yashar Moshfeghi**
University of Strathclyde University of Strathclyde
[email protected] [email protected]
**Abstract** gram and problem text separately or jointly, result
ing in highly generalized models (Chen et al., 2021,
Addressing the challenge of automated ge
2022). However, these methods struggle with accu
ometry math problem-solving in artificial in
rately capturing the complex relationships within
telligence (AI) involves understanding multi
geometry diagrams (Lu et al., 2023b). Additionally,
modal information and mathematics. Current
methods struggle with accurately interpreting their vector-based representation of geometric relageometry diagrams, which hinders effective tions is not easily interpretable by humans, posing
problem-solving. To tackle this issue, we challenges in identifying whether performance ispresent the Geometry problem sOlver with sues are from the relation extraction or the problemnatural Language Description (GOLD) model.
solving component. In a different approach, some
GOLD enhances the extraction of geometric
studies have successfully translated geometry dia
relations by separately processing symbols and
grams into formal languages, enhancing precision
geometric primitives within the diagram. Subsequently, it converts the extracted relations and interpretability (Sachan et al., 2017; Seo et al.,
into natural language descriptions, efficiently 2015; Lu et al., 2021; Zhang et al., 2023). Howutilizing large language models to solve geom- ever, these methods do not separately process reetry math problems. Experiments show that lations among geometric primitives and relations
the GOLD model outperforms the Geoformer between symbols and geometric primitives, which
model, the previous best method on the UniGeo
adds difficulty in solving the geometry math prob
dataset, by achieving accuracy improvements
lem correctly. Moreover, these approaches necessi
of 12.7% and 42.1% in calculation and proving
tate specifically designed solvers that take formal
subsets. Additionally, it surpasses the former
best model on the PGPS9K and Geometry3K languages as input, making them incompatible with
datasets, PGPSNet, by obtaining accuracy en- prevalent large language models (LLMs).
hancements of 1.8% and 3.2%, respectively.[1]
To address the limitations of existing methods
**1** **Introduction** in solving geometry math problems, we introduce
the GOLD model. The GOLD model converts
Automated solving of geometry math problems has geometry diagrams into natural language descripgained considerable attention in the AI community tions, aiding in the generation of solution programs
recently (Chen et al., 2021; Lu et al., 2021; Cao and for the problems. Particularly, the GOLD model’s
Xiao, 2022; Chen et al., 2022; Zhang et al., 2023; relation-construction head extracts two types of
Peng et al., 2023; Ning et al., 2023). Unlike math geometric relations: sym2geo (relations between
word problems, geometry math problems involve symbols and geometric primitives) and geo2geo
additional geometry diagrams, necessitating com- (relations among geometric primitives). This proprehensive reasoning capabilities for understand- cess involves two specialized heads that separately
ing multi-modal information (refer to Figure 1 for model symbols and geometric primitives within
an example of a geometry math problem). As a diagrams as distinct vectors. These extracted geresult, research on automated geometry math prob- ometric relations are then converted into natural
lem solving is still in its infancy (Chen et al., 2022). language descriptions. This not only improves the
Existing approaches for solving geometry math model’s interpretability but also connects geometry
problems utilize neural networks to embed the dia- diagrams with problem texts. Furthermore, since
[1GOLD code can be found at https://github.com/](https://github.com/NeuraSearch/Geometry-Diagram-Description) these natural language descriptions meet the input
[NeuraSearch/Geometry-Diagram-Description](https://github.com/NeuraSearch/Geometry-Diagram-Description) requirements of LLMs, the GOLD model is able to
263
-----
utilize the advanced LLMs as the problem-solving struggle to effectively integrate problem texts and
module, efficiently generating solution programs geometry diagrams. In response, Geoformer (Chen
used to solve geometry math problems. et al., 2022) emerged, embedding both diagram and
To evaluate the effectiveness of the GOLD problem text jointly using the VL-T5 (Cho et al.,
model, we conduct experiments on the three lat- 2021) model, treating visuals as additional tokens.
est released datasets: UniGeo (comprising calcu- Despite these advancements, they still struggle to
lation and proving subsets) (Chen et al., 2022), provide precise descriptions of slender, overlapped
PGPS9K (Zhang et al., 2023), and Geometry3K geometric primitives with complex spatial relation(Lu et al., 2021). The experimental results show ships (Zhang et al., 2022), resulting in sub-optimal
the significant performance gains of our GOLD performance when solving geometry math probmodel compared to state-of-the-art (SOTA) mod- lems.
els. It surpasses the Geoformer model, which is the Other approaches typically involve parsing the
SOTA model on the UniGeo dataset, by 12.7% and diagram into formal language and utilizing spe42.1% in accuracy on the UniGeo calculation and cific solvers to generate solution programs. Recent
proving subsets, respectively. Additionally, our works like Inter-GPS (Lu et al., 2021) and PGPGOLD model outperforms the PGPSNet model, SNet (Zhang et al., 2023) employed their parsers to
the SOTA model on the PGPS9K and Geometry3K describe the diagram using carefully crafted rules.
datasets by 1.8% and 3.2% in accuracy, respec- However, these methods based on predefined rules
tively. These results highlight the superior perfor- often lack extensibility, resulting in limited generalmance and effectiveness of our proposed GOLD ization capabilities. To address this issue, our promodel compared to existing approaches. posed GOLD model generates natural language deThe contributions of this work are: (1) We pro- scriptions of the diagrams, ensuring compatibility
pose the GOLD model to extract geometric rela- of adopting LLMs to generate solution programs.
tions from geometry diagrams and subsequently
convert these relations into natural languages, **3** **Model**
which are then utilized for solving geometry math
Our GOLD model is illustrated in Figure 1.
problems. Its compatibility with LLMs is a significant advantage, enabling the GOLD model to
**3.1** **Task Description and Pre-parsing**
utilize the capabilities of LLMs to generate solution
The objective is to generate the correct solution
programs. (2) The GOLD model separately pro
program to solve the problem by analyzing a ge
cesses symbols and geometric primitives from the _P_
ometry math problem text and its corresponding
diagrams. This separation design simplifies the ex- _T_
diagram . Specifically, the solution program rep
traction of the geometric relations. (3) Our GOLD _D_
resents intermediate steps in the domain-specific
model demonstrates significant improvements over
language generating the output for the question (see
previous methods across all evaluated datasets, val
an example of solution program in Figure 1).
idating the effectiveness of our approach.
In our approach, we initially preprocess geom
**2** **Related Work** etry diagrams to extract geometric primitives G
(including Point P, Line L, and Circle C) and sym
Early works have explored solving geometry math bols S from the diagram D for subsequent task.
problems through rule-based approaches (Gelern- Specifically, we utilize a standard Feature Pyrater et al., 1960; Wen-Tsün, 1986; Chou and Gao, mid Network (FPN) (Lin et al., 2017) integrated
1996a,b). Recently, with the success of deep learn- with a MobileNetV2 (Sandler et al., 2018) backing methods, several works have explored using bone for this task. For the detection of symbols,
neural network architectures for automated geom- we apply the anchor-free detection model FCOS
etry math problem-solving. Approaches such as (Tian et al., 2022), and for the extraction of geoNGS (Chen et al., 2021) utilizing LSTM (Hochre- metric primitives, we use the GSM model (Zhang
iter and Schmidhuber, 1997) and ResNet-101 (He et al., 2022). The FCOS model employs feature
et al., 2016) encoded problem texts and geometry maps P3 to P7, generated by the FPN layer, to dediagrams separately. Later, methods like DPE-NGS tect symbols within the diagram. This detection
(Cao and Xiao, 2022) replaced the text encoder step produces bounding box coordinates (boxs) and
with transformer models. However, these methods class type (clss) for each symbol (s ). For the
_∈S_
264
-----
Figure 1: The illustration of the GOLD Model. The diagram D, problem text T, and solution program P used in
this illustration are sourced from the PGPS9K dataset (Zhang et al., 2023). The symbols and geometric primitives in
the diagram are annotated using the notations from the Notation Table, which are consistent with the colours of
extracted relations of sym2geo and geo2geo.
extraction of geometric primitives, we prefer using ric primitive g, we conduct the below calculation:
the feature map P2 instead of P1, as P2 is more
memory-efficient due to its lower resolution. This emb[s,g]feat [=][ ReLU][(][W]feat[s,g] **[V][s,g][)]** (1)
process results in the identification of segmenta- where Wfeat[s,g]
tion masks (maskg) and class type (clsg) for each either symbols or geometric primitives. Next, we[∈] [R][h][×][h][ are trainable parameters for]
geometric primitive (g ∈G). elaborate the calculation process of V[s,g] for sym
bols and geometric primitives separately.
**3.2** **Mapping Symbols and Geometric**
To obtain the V[s] for symbol s, we utilize
**Primitives Separately**
RoIAlign (He et al., 2017) on its feature map, based
Before constructing the geometric relations, we on the bounding box of symbol s:
map the symbols and geometric primitives into
vectors. To achieve this, we introduce two heads:
symbol vector head and geometric primitive vec- **V[s]** = F(ReLU(BN(Conv(RoIAlign(boxs, feat_mapi)))))(2)
tor head. Specifically, each head functions as ex- where i refers to the i-th layer of feature maps
tracting the feature_embedding (embfeat ) and spa- where the bounding box (boxs) is calculated from.
_tial_embedding (embspat_ ). The feature_embedding The Conv is the convolution layer with 64 channels,
is computed from the cropped feature map, which BN is the BatchNorm layer, and ReLU is the ReLU
is determined by either the bounding box or the seg- activation layer. The F means flatten operation,
mentation mask. Moreover, where symbols and ge- indicating that the V[s] is further flatten into a vecometric primitives are placed significantly shapes tor and used for obtaining the feature_embedding
how they relate. For instance, only points lying emb[s]feat [for symbol][ s][ through Eq][ 1][.]
on a line can hold the geometric relation with that To obtain the V[g] for geometric primitive g, we
particular line. Thus, we hypothesize that incorpo- perform an element-wise multiplication between
rating spatial information of and can enhance the segmentation mask (maskg) of g and the P2
_S_ _G_
the accuracy of predictions about geometric rela- layer of feature map (feat_map2). Next, we flatten
tions. Consequently, we embed the bounding boxes the resulting vector along the height and width
of symbols and the coordinates of the geometric dimensions and apply global average pooling to
primitives into the spatial_embedding. obtain the V[g]:
**3.2.1** **Constructing the feature_embedding** **V[g]** = AvgPool(F(maskg × feat_map2)) (3)
To obtain the feature_embedding (emb[s,g]feat [) and][ spa-] The **V[g]** is used for calculating the _fea-_
_tial_embedding (emb[s,g]spat_ [) for symbol][ s][ or geomet-] _ture_embedding emb[g]feat_ [for geometric primitive][ g]
265
-----
through Eq 1.
**3.2.2** **Constructing the spatial_embedding**
The spatial_embedding is obtained by mapping
the spatial information of symbols and geometric primitives into embeddings. Specifically, for
symbol s, we map the coordinates of its bounding box into an embedding using the trainable parameters Wspat[s] _[∈]_ [R][h][×][4][. Specifically,][ emb]spat[s] [=]
**Wspat[s]** [[][x][t][, y][t][, x][b][, y][b][]][⊤][, where][ (][x][t][, y][t][)][ represent the]
coordinates of the top-left corner of the bounding
box, and (xb, yb) is the coordinates of the bottomright corner of the bounding box.
Next, to obtain the spatial_embedding of a geometric primitive g, we start by representing coordinates of g using locg. The format of locg depends
on the class type (clsg) of the geometric primitive:
for a point, it contains two numbers (ng = 2) representing its coordinates; for a line, it contains four
numbers (ng = 4) representing the coordinates of
its start and end points; and for a circle, it contains
three numbers (ng = 3) representing the coordinates of its centre point and the radius length. We
then map locg into spatial_embedding by calculating emb[g]spat [=][ ReLU][(][W]spat[g] [(][W]loc[g] _[loc][g][)), where]_
**Wloc[g]**
for different[∈] [R][h][×] cls[n][g][ are different trainable parameters]g, and Wspat[g]
_[∈]_ [R][h][×][h][ are trainable]
parameters.
To help the model differentiate between different types of geometric primitives, we introduce the geo_type_embedding (emb[g]type[) to cap-]
ture the semantic information of the geometric
primitive. The emb[g]type [is obtained by perform-]
ing a lookup operation on the embeddings using the class type (clsg) of g from the list of geometric primitive types [P, L, C]. Specifically,
emb[g]type [=] [embedding][(][cls][g][)][, where][ cls][g][ is the class]
type ID of g.
beddings relevant to the geometric primitive g,
emb[g]feat [, emb]spat[g] [, and emb][g]type [:]
vec[g][∈G] = ReLU(Wvec[g] [(][emb][g]feat [+] [emb]spat[g] [+] [emb]type[g] [))][ (5)]
where Wvec[g]
_[∈]_ [R][h][×][h][ are the trainable parameters.]
**3.3** **Relation Construction Head**
The relation-construction head aims to establish
_sym2geo relations among symbols and geometric_
primitives and geo2geo relations among geometric
primitives themselves.
**3.3.1** **sym2geo relation**
The sym2geo relation can be further divided into
_text2geo and other2geo relations. The text2geo rela-_
tion explains the association between text symbols
and geometric primitives, where the text symbols
are used to be the reference to a geometric primitive
or to display degree, length, etc. To distinguish the
role of a text symbol, we introduce the text_class
for the text symbol. Specifically, when text_class
is category 0, the text2geo signifies point (or line,
or circle) names; when text_class is category 1,
the text2geo corresponds to angle degrees; when
_text_class is category 2_, the text2geo signifies line
lengths; when text_class is category 3, the text2geo
denotes the degree of an angle within a circle. The
probabilities of the category (P (text_class|s)) of
text symbol (s ∈{S|clss = "text"}) is defined as:
_P = softmax(Wtext[sym2geo]_class_ [ReLU][(][W]1[sym2geo]vec[s])) (6)
where W1[sym2geo] R[h][×][h] and Wtext[sym2geo]_class
_∈_ _∈_
R[4][×][h], both are the trainable parameters.
The other2geo relation captures relations between non-text symbols (s clss = "text" )
_∈{S|_ _̸_ _}_
and geometric primitives. The non-text symbols
are used to find out the relations among geometric
primitives, such as angles of same degree, lines of
_same length, parallel lines, and perpendicular lines._
For instance, in Figure 1, the symbol enclosed in a
red rectangle signifies the parallel relation.
**3.2.3** **Symbol Vector and Geometric Primitive**
are used to find out the relations among geometric
**Vector**
primitives, such as angles of same degree, lines of
The vector representation vec[s][∈S] of symbol s is ob
_same length, parallel lines, and perpendicular lines._
tained by passing concatenated emb[s]feat [and][ emb]spat[s] For instance, in Figure 1, the symbol enclosed in a
through a specific feed-forward neural network:
red rectangle signifies the parallel relation.
To establish the sym2geo relation between sym
vec[s][∈S] = ReLU(Wvec[s] [[][emb]feat[s] [:][ emb]spat[s] []][⊤][)] (4) bol s and geometric primitive g, we begin by uti
lizing the corresponding symbol head to trans
where Wvec[s] form the vector of the geometric primitive: vec[g] =
_[∈]_ [R][h][×][2][h][ are the trainable parameters]
depending on the class type (clss) of symbol s, and ReLU(Ws[sym2geo]1 vec[g]), where Ws[sym2geo]1 R[h][×][h]
_∈_
[:] refers to concatenation operation. are trainable parameters that vary depending on c
The vector representation of the geometric prim- different class types (clss) of symbols. Finally, we
itive vec[g][∈G] is obtained by summing up three em- calculate the probabilities of the existence of the
266
-----
relation between symbol s and geometric primitive
_g as follows:_
_O1 = ReLU(W2[sym2geo][vec[s]_ : vec[g][∈{][sub][}]]) (7)
_P_ (rel[sym2geo]s,g _s, g) = sigmoid(Wrel[sym2geo]O1)_
_|_ c
the relations between geometric primitives gi and
_gj can be calculated as follows:_
_P = softmax(Wrel[geo2geo]ReLU(W1[geo2geo](vec[g][i]_ + vec[g][j] )))
(8)
where W1[geo2geo] R[h][×][h] and Wrel[geo2geo] R[3][×][h]
_∈_ _∈_
are the trainable parameters (the number 3 refers
to "no relation" and two relations from either Point
and Line or Point and Circle). Please refer to Appendix A.2 for details on how to predict geo2geo
relations during the inference stage.
**3.4** **Problem-Solving Module**
Both the sym2geo and geo2geo relations are expressed in natural languages by the GOLD model,
following the same format as the problem text T
(please refer to Appendix B for the paradigm of
converting sym2geo and geo2geo relations to natural language descriptions). Therefore, it is convenient to utilize the LLMs as the problem-solving
module. Specifically, the problem text T and the
natural language descriptions L are concatenated
for the LLMs to generate the solution program
_P. To illustrate the compatibility of our methods_
with LLMs, we employ three well-known models for problem-solving: T5-base (Raffel et al.,
2020), Llama2-13b-chat (Touvron et al., 2023),
and CodeLlama-13b (Rozière et al., 2023). The
T5-base model is fine-tuned for the target solution
programs. Conversely, for Llama2-13b-chat and
CodeLlama-13b, we employ directive instructions
to guide their solution generation process (please
refer to Appendix C for the choice of instructions).
**3.5** **Training Objective**
Given a dataset of geometry math problems. The
training process begins with training the preparsing module to extract necessary features from
the geometry diagrams. Following this, we focus
on training three components: the symbol vector
head, the geometric primitive vector head, and the
relation-construction head. This training is guided
by minimizing a joint loss function, which is defined as Lcons = Lg2g + Lt_cls + Ls2g . The Lg2g
loss represents the negative log-likelihood loss for
accurately identifying the ground truth geo2geo relations. Meanwhile, the Lt_cls constitutes the negative log-likelihood loss for correctly categorizing
the text symbols. Lastly, the Ls2g loss is the binary cross-entropy loss associated with the ground
truth sym2geo relations. Once they are trained, and
their parameters are fixed, we advance to the final
where W2[sym2geo] R[h][×][2][h] and Wrel[sym2geo] R[1][×][h]
_∈_ _∈_
are the trainable parameters. Worth mentioning,
that each type of symbol, including the additional
four categories of the text symbol, has its own
**W2[sym2geo]. Additionally,** _sub_ refers to the sub_{_ _}_
set of geometric primitives, as certain symbols can
only have relations with specific geometric primitives. Please refer to Appendix A.1 for details on
how to predict text2geo and other2geo relations
during the inference stage.
**3.3.2** **geo2geo relation**
Previous work tend to provide only sym2geo relations. However, despite the sym2geo relation
can provide geometric relations among geometric
primitives like parallel, perpendicular, etc. We hypothesize that providing additional information that
describes all the geometric primitives from the diagrams is beneficial for the task. Moreover, we
tackle the issue concerning the absence of references to geometric primitives in the diagram. For
example, in Figure 1, the original diagram lacks a
reference to the line, where sym2geo relation cannot address. To overcome this limitation, we have
devised an automated approach that assigns appropriate references to the geometric primitives using
the format "clsg + num" (e.g., "L1, L2, L3, L4"
in purple in Figure 1). This enables the relationconstruction module to (1) present a detailed depiction of the diagram by describing the geo2geo
relations, even in the absence of a single reference,
and (2) generate all sym2geo relations, even when
some geometric primitives lack references. The
_geo2geo relations are categorized according to the_
involved geometric primitives: (1) Point and Line:
"on-a-line" and "end-point". The "on-a-line" relation occurs when a point lies between the tail and
the head of the line. Specifically, a point lying at either the head or the tail of the line is the "end-point",
which is the special case of "on-a-line". (2) Point
and Circle: "centre-point" and "on-a-circle." The
"centre-point" relation refers to a point being the
centre point of the circle. The "on-a-circle" relation
occurs when a point lies on the arc of the circle.
Finally, the probabilities (P (rel[geo2geo]gi,gj _gi, gj)) of_
_|_
267
-----
stage of fine-tuning the problem-solving module.[2]
During this stage, our objective is to minimize the
_Lprog loss, which is the negative log-likelihood loss_
for correct solution programs (please refer to Appendix D for more details of loss functions).
**4** **Experiments and Results**
**4.1** **Experimental Setup**
Our method was implemented using the PyTorch
(Paszke et al., 2019) and HuggingFace (Wolf et al.,
2020) libraries. For the pre-parsing module, we
followed the training and parameter settings of the
previous work (Zhang et al., 2022). We evaluated
the dimensions of the embeddings over a range
of {32, 64, 128}, and based on the model’s performance in the validation set, we experimentally
determined 64 as the optimal dimension size for
the embeddings. We utilized the Adam optimizer
with a learning rate of 1e[−][4] and weight decay of
1e[−][4] for training all modules. The symbol vector
head, geometric primitive vector head, and relationconstruction head were trained end-to-end for 50
epochs with a batch size of 20, while the problemsolving module (using T5-base) was fine-tuned for
30 epochs with a batch size of 10. All experiments
were conducted on one NVIDIA A100 80GB GPU.
**4.2** **Datasets**
Our experiments are conducted on three datasets:
UniGeo (Chen et al., 2022), PGPS9K (Zhang et al.,
2023), and Geometry3K (Lu et al., 2021). The
UniGeo dataset comprises 14,541 problems, categorized into 4,998 calculation problems (CAL) and
9,543 proving problems (PRV), which are split into
train, validate, and test subsets in a ratio of 7.0: 1.5:
1.5. The Geometry3K includes 3,002 problems, divided into train, validate, and test subsets following
a 7.0: 1.0: 2.0 ratio. Since PGPS9K contains a
partial Geometry3K dataset, we keep an exclusive
set of 6,131 problems, of which 1000 problems are
a test subset. Due to the absence of a validation
subset in PGPS9K, we divide its training set to
create a train-validation split in a 9.0: 1.0 ratio.
**4.3** **Evaluation Metrics**
To compare against existing works, we adhere to
the evaluation criteria from the original datasets
for both our model and the baselines. For the UniGeo dataset, we utilize the top-10 accuracy met
2Note that the fine-tuning step is only implemented when
T5-base is used as the problem-solving module.
ric, which measures the ratio of correct solution
programs among the top ten predictions, aligning
with the metric used by the authors of the UniGeo dataset. For the PGPS9K and Geometry3K
datasets, we adopt a stricter metric, the top-3 accuracy, as recommended by the authors of the
PGPS9K dataset. Note that our comparison involves matching the predicted solution program
with the ground truth, which is more rigorous than
merely comparing the numerical output derived
from the solution program.[3]
**4.4** **Comparison with State-of-the-art Models**
We evaluate the performance of our GOLD model
(using T5-base as its problem-solving module)
against state-of-the-art (SOTA) methods in solving
geometry math problems. The selected baselines
for this comparison include: 1. PGSPNet (Zhang
et al., 2023): it integrates a combination of CNN
and GRU encoders, which generate an encoded vector of the diagram that serves as the input aligning
with the logic form to the solver module. 2. Inter**GPS (Lu et al., 2021): it parses both the problem**
text and the diagram into a formal language, subsequently feeding this into the solver. 3. Geoformer
(Chen et al., 2022): it utilizes the VL-T5 model
for the purpose of diagram encoding, then servers
encoded embeddings to the transformer. 4. NGS
(Chen et al., 2021): it uses the ResNet-101 for its
encoding process, showcasing a different approach
in handling the diagram encoding. 5. Bert2Prog
(Chen et al., 2021): it leverages BERT and ResNet
as encoders and an LSTM network for generating.
The results presented in Table 1 demonstrate that
our GOLD model outperforms baselines across test
subsets of all datasets. Specifically, when compared to Geoformer, the SOTA on the UniGeo
dataset, our model exhibits a remarkable increase
in accuracy: 12.7% on the UniGeo CAL and 42.1%
on the UniGeo PRV. Compared to the SOTA model
on PGPS9K and Geometry3K datasets, PGPSNet,
the GOLD model surpasses it by 1.8% and 3.2%
in accuracy, respectively. When using ground truth
diagram annotations, the GOLD (GT) shows a significant improvement in accuracy on the PGPS9K
3This is grounded in the principle that a correct output can
sometimes be produced by an incorrect solution program, indicating a failure in the model’s understanding of the problem.
For example, consider a problem where the correct answer
is "5" and the correct program is "2 × 3 - 1". An incorrect
program like "2 + 3" could still yield the correct output. Thus,
generating the correct program is a more reliable indicator of
the model’s accurate problem comprehension.
268
-----
Models UniGeo CAL Test (%) UniGeo Prv Test (%) PGPS9K Test (%) Geometry3K Test (%)
BERT2Prog 54.7† 48.0† - -
NGS 56.9† 53.2† 34.1‡ 35.3‡
Geoformer 62.5† 56.4† 35.6‡ 36.8‡
InterGPS 56.8 47.2 38.3 48.6
InterGPS (GT) n/a n/a 59.8‡ 64.2‡
PGPSNet 53.2 42.3 58.8 59.5
PGPSNet (GT) n/a n/a 62.7‡ 65.0‡
GOLD **75.2** **98.5** **60.6** **62.7**
GOLD (GT) n/a n/a 65.8 69.1
Table 1: Comparison results on the test subsets of chosen datasets. PGPSNet reported models’ performances using
the ground truth diagram annotations, where these models have "(GT)" behind them. We re-implemented these
methods to get performances without GT annotations. Note that UniGeo lacks GT diagram annotations, so relevant
cells are "n/a". "†" and "‡" indicates the results are from Chen et al., 2022 and Zhang et al., 2023, respectively.
and Geometry3K, with gains of 3.1% and 4.1%
over PGPSNet (GT). Against InterGPS (GT), the
improvements are at 6.0% and 4.9%, respectively.
These results underline the effectiveness of the
GOLD model in solving geometry math problems.
Moreover, our GOLD model distinguishes itself from approaches like InterGPS and PGPSNet,
which rely on logic-form representations to describe diagrams. In contrast, GOLD inputs natural
language descriptions to LLMs to generate solution
programs. Using natural language leads to significant improvements across all datasets compared to
InterGPS and PGPSNet, as evidenced in Table 1.
Furthermore, models like Geoformer and NGS primarily encode diagrams into vectors. These approaches fall short in providing precise descriptions
of the diagrams and limit the adoption of LLMs,
thus leading to worse performances compared to
our GOLD model. This highlights the importance
of detailed and accurate diagram representations
for tackling geometry math problems, where our
GOLD model excels.
Worth mentioning is that the training for the symbol vector head, geometric primitive vector head,
and relation-construction head of the GOLD model
was exclusively conducted on the PGPS9K and Geometry datasets due to the lack of annotations in
the UniGeo dataset. Despite this, the outstanding
performance of the GOLD model on the test subset
of UniGeo, as shown in Table 1, demonstrates its
exceptional generalization capability.
**4.5** **Ablation Study on Natural Language**
**Description**
We assess our model’s efficacy using three distinct
diagram description formats: absence of diagram
description, logic forms, and natural language de
scriptions. The comparative results are detailed in
Table 2. When fine-tuning T5-base as the problemsolving module, Table 2 indicates that descriptions
in natural language outperform those in logic-form,
with 3.1% and 3.4% improvements on the test subsets of PGPS9K and Geometry3K, respectively.
PGPS9K Geometry3K
n/a LF NLD n/a LF NLD
22.3 57.5 **60.6** 12.3 59.3 **62.7**
T5-base
_± 0.0_ _± 0.3_ _± 0.3_ _± 0.0_ _± 0.5_ _± 0.2_
5.2 33.5 39.6 2.3 31.8 40.1
Llama2-13b-chat
_± 0.0_ _± 0.4_ _± 0.2_ _± 0.0_ _± 0.3_ _± 0.4_
3.2 15.8 16.2 2.0 14.6 15.1
CodeLlama-13b
_± 0.0_ _± 0.0_ _± 0.0_ _± 0.0_ _± 0.0_ _± 0.0_
Table 2: Evaluation of the GOLD model on two datasets
with no description (n/a), logic-forms (LF), and natural
language descriptions (NLD). Both the mean and standard errors of the accuracy metrics are presented.
Conversely, when using Llama2-13b-chat
(Llama2) and CodeLlama-13b (CodeLlama) as the
problem-solving module, we implement instructions to guide the generation of answers. Since
their generations differ from the ground truth, we
opt to calculate the accuracy of choosing the correct option from given candidates. According to
Table 2, using natural language descriptions significantly enhances the accuracy of the Llama2 model
compared to using logic forms, demonstrating the
greater compatibility of our natural language descriptions with models like Llama2. However, neither natural language descriptions nor logic forms
yield satisfactory outcomes with CodeLlama, possibly due to a mismatch between the training corpus
of CodeLlama and the description formats.
Lastly, we conduct experiments by excluding
relevant modules used to generate the natural lan
269
-----
guage descriptions and solely inputting the problem text T into the problem-solving module. The
results in Table 2 show a substantial decline in
the performance of the GOLD model across all
selected LLMs, highlighting the importance of diagram descriptions provided by relevant modules of
the GOLD model in solving geometry math problems.
**4.6** **Accuracy of the Extraction of geo2geo and**
**sym2geo Relations**
Our analysis in Table 3 and measured by F1 metric, evaluates the accuracy of extracting geometric
relations with and without embfeat and embspat on
PGPS9K test subset. We note that the pre-parsing
stage achieves a high F1-score of 98.9%, ensuring
accurate identification of symbols and geometric
primitives for sym2geo and geo2geo relations extraction. However, when directly using V[s,g] as
vectors of symbols and geometric primitives (only
using feature outputs from the pre-parsing step), the
absence of embfeat and embspat leads to a notable
decrease in performance for both relations extraction. Conversely, the inclusion of either embfeat
and embspat results in improved performance. Table 3 further reveals that the extraction of both relation types reaches its highest F1-score when both
embeddings are utilized. These results highlight the
advantages of our approach in separately modelling
symbols and geometric primitives, which proves
to be more efficient in addressing the relation extraction of geometry math problems (please see
Appendix G for the impact of embfeat and embspat
on problem-solving accuracy, and Appendix H for
the ablation analysis for the emb[g]type [).]
embfeat embspat pre-parsing geo2geo sym2geo
ËË ËË 98.998.998.998.9 65.279.880.693.7 ± ± ± ± 0.1 0.3 0.4 0.2 58.675.671.177.3 ± ± ± ± 0.1 0.5 0.2 0.1
Table 3: The check mark (Ë) indicates that the corresponding embedding is enabled. Note that "pre-parsing"
is not influenced by embfeat and embspat . Both the
mean and standard errors of the accuracy metrics are
presented. See Appendix E and F for the accuracy of
fine-grained relations.
Table 3 shows that the GOLD model accurately
captures geo2geo relation, prompting us to investigate its impact on solving geometry math problems.
The bar chart in Figure 2 indicates a notable de
Figure 2: Top-left: the performance of the GOLD (using
T5-base) with (w) and without (w/o) the geo2geo. Topright: Geometry math problem. Bottom: Predicted
diagram description with and without the geo2geo. The
same text between (w) and (w/o) is omitted for space
consideration, where the red text is geo2geo relations.
cline in model performance on the PGPS9K and
Geometry3K datasets when geo2geo relations are
omitted. However, this trend is less pronounced
on the UniGeo datasets. This is likely because the
PGPS9K and Geometry3K datasets often lack descriptions of geometric primitives in their problem
texts. An example from the Geometry3K dataset,
illustrated in Figure 2, demonstrates this issue: the
problem text typically poses a question (e.g., "Find
X") without extra information. Consequently, relying only on sym2geo relations leads to insufficient
representation of essential diagram details.
**5** **Conclusion**
In this work, we have introduced the GOLD model
for automated geometry math problem-solving.
GOLD uniquely converts geometry diagrams into
natural language descriptions, facilitating direct integration with LLMs for problem-solving. A key
feature of the GOLD model is that it separately
handles symbols and geometric primitives, simplifying the process of establishing relations between symbols and geometric primitives and relations among geometric primitives themselves. Our
experiments show that the GOLD model outperforms the Geoformer, the previous SOTA on the
UniGeo dataset, with accuracy improvements of
12.7% and 42.1% on the UniGeo calculation and
proving datasets, respectively. Additionally, compared to PGPSNet, the SOTA for the PGPS9K and
Geometry3K datasets, the GOLD model shows notable accuracy improvements of 1.8% and 3.2%,
respectively, showing our method’s effectiveness.
270
-----
**6** **Limitations** [Shang-Ching Chou and Xiao-Shan Gao. 1996a. Auto-](https://doi.org/10.1007/BF00283133)
[mated generation of readable proofs with geometric](https://doi.org/10.1007/BF00283133)
While our GOLD model marks a significant ad- [invariants i. multiple and shortest proof generation. J.](https://doi.org/10.1007/BF00283133)
vancement in solving geometry math problems, ar- _Autom. Reason., 17(3):325–347._
eas remain for future improvement. For example,
[Shang-Ching Chou and Xiao-Shan Gao. 1996b. Auto-](https://doi.org/10.1007/BF00283133)
the GOLD model has not yet reached the level [mated generation of readable proofs with geometric](https://doi.org/10.1007/BF00283133)
of human performance in solving geometry math [invariants i. multiple and shortest proof generation. J.](https://doi.org/10.1007/BF00283133)
problems. This gap is possibly due to the limita- _Autom. Reason., 17(3):325–347._
tions in fully extracting geometric relations from
Herbert L. Gelernter, J. R. Hansen, and Donald W. Love
diagrams. While GOLD accurately identifies sym- [land. 1960. Empirical explorations of the geometry](https://doi.org/10.1145/1460361.1460381)
bols, geometric primitives, and geo2geo relations, [theorem machine. In Papers presented at the 1960](https://doi.org/10.1145/1460361.1460381)
the extraction of sym2geo relations still requires _western joint IRE-AIEE-ACM computer conference,_
_IRE-AIEE-ACM 1960 (Western), San Francisco, Cal-_
enhancement. Moreover, this study evaluated three
_ifornia, USA, May 3-5, 1960, pages 143–149. ACM._
popular large language models (LLMs): T5-bases,
Llama2-13b-chat, and CodeLlama-13b. To deepen Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross B.
[Girshick. 2017. Mask R-CNN. In IEEE Interna-](https://doi.org/10.1109/ICCV.2017.322)
our understanding and leverage the full potential
_tional Conference on Computer Vision, ICCV 2017,_
of LLMs in solving geometry math problems, it
_Venice, Italy, October 22-29, 2017, pages 2980–2988._
would be beneficial to assess more LLMs. This IEEE Computer Society.
broader evaluation could provide more comprehen
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian
sive insights into optimizing LLMs for this specific
[Sun. 2016. Deep residual learning for image recogni-](https://doi.org/10.1109/CVPR.2016.90)
task. [tion. In 2016 IEEE Conference on Computer Vision](https://doi.org/10.1109/CVPR.2016.90)
_and Pattern Recognition, CVPR 2016, Las Vegas,_
_NV, USA, June 27-30, 2016, pages 770–778. IEEE_
**References** Computer Society.
[Jie Cao and Jing Xiao. 2022. An augmented benchmark](https://aclanthology.org/2022.coling-1.130) [Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long](https://doi.org/10.1162/neco.1997.9.8.1735)
[dataset for geometric question answering through](https://aclanthology.org/2022.coling-1.130) [short-term memory. Neural Comput., 9(8):1735–](https://doi.org/10.1162/neco.1997.9.8.1735)
[dual parallel text encoding. In Proceedings of the](https://aclanthology.org/2022.coling-1.130) 1780.
_29th International Conference on Computational Lin-_
_guistics, COLING 2022, Gyeongju, Republic of Ko-_ Tsung-Yi Lin, Piotr Dollár, Ross B. Girshick, Kaiming
_rea, October 12-17, 2022, pages 1511–1520. Interna-_ He, Bharath Hariharan, and Serge J. Belongie. 2017.
tional Committee on Computational Linguistics. [Feature pyramid networks for object detection. In](https://doi.org/10.1109/CVPR.2017.106)
_2017 IEEE Conference on Computer Vision and Pat-_
Jiaqi Chen, Tong Li, Jinghui Qin, Pan Lu, Liang Lin, _tern Recognition, CVPR 2017, Honolulu, HI, USA,_
[Chongyu Chen, and Xiaodan Liang. 2022. Unigeo:](https://aclanthology.org/2022.emnlp-main.218) _July 21-26, 2017, pages 936–944. IEEE Computer_
[Unifying geometry logical reasoning via reformu-](https://aclanthology.org/2022.emnlp-main.218) Society.
[lating mathematical expression. In Proceedings of](https://aclanthology.org/2022.emnlp-main.218)
_the 2022 Conference on Empirical Methods in Natu-_ Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chun_ral Language Processing, EMNLP 2022, Abu Dhabi,_ yuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei
_United Arab Emirates, December 7-11, 2022, pages_ Chang, Michel Galley, and Jianfeng Gao. 2023a.
3313–3323. Association for Computational Linguis- [Mathvista: Evaluating math reasoning in visual con-](https://doi.org/10.48550/ARXIV.2310.02255)
tics. [texts with gpt-4v, bard, and other large multimodal](https://doi.org/10.48550/ARXIV.2310.02255)
[models. CoRR, abs/2310.02255.](https://doi.org/10.48550/ARXIV.2310.02255)
Jiaqi Chen, Jianheng Tang, Jinghui Qin, Xiaodan Liang,
Lingbo Liu, Eric P. Xing, and Liang Lin. 2021. Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan
[Geoqa: A geometric question answering benchmark](https://doi.org/10.18653/v1/2021.findings-acl.46) Huang, Xiaodan Liang, and Song-Chun Zhu. 2021.
[towards multimodal numerical reasoning. In Find-](https://doi.org/10.18653/v1/2021.findings-acl.46) [Inter-gps: Interpretable geometry problem solving](https://doi.org/10.18653/v1/2021.acl-long.528)
_ings of the Association for Computational Linguis-_ [with formal language and symbolic reasoning. In](https://doi.org/10.18653/v1/2021.acl-long.528)
_tics: ACL/IJCNLP 2021, Online Event, August 1-6,_ _Proceedings of the 59th Annual Meeting of the Asso-_
_2021, volume ACL/IJCNLP 2021 of Findings of ACL,_ _ciation for Computational Linguistics and the 11th_
pages 513–523. Association for Computational Lin- _International Joint Conference on Natural Language_
guistics. _Processing, ACL/IJCNLP 2021, (Volume 1: Long_
_Papers), Virtual Event, August 1-6, 2021, pages 6774–_
Jaemin Cho, Jie Lei, Hao Tan, and Mohit Bansal. 2021. 6786. Association for Computational Linguistics.
[Unifying vision-and-language tasks via text genera-](http://proceedings.mlr.press/v139/cho21a.html)
[tion. In Proceedings of the 38th International Con-](http://proceedings.mlr.press/v139/cho21a.html) Pan Lu, Liang Qiu, Wenhao Yu, Sean Welleck, and
_ference on Machine Learning, ICML 2021, 18-24_ Kai-Wei Chang. 2023b. [A survey of deep learn-](https://doi.org/10.18653/V1/2023.ACL-LONG.817)
_July 2021, Virtual Event, volume 139 of Proceedings_ [ing for mathematical reasoning.](https://doi.org/10.18653/V1/2023.ACL-LONG.817) In Proceedings
_of Machine Learning Research, pages 1931–1942._ _of the 61st Annual Meeting of the Association for_
PMLR. _Computational Linguistics (Volume 1: Long Papers),_
271
-----
_ACL 2023, Toronto, Canada, July 9-14, 2023, pages_ _USA, June 18-22, 2018, pages 4510–4520. Computer_
14605–14631. Association for Computational Lin- Vision Foundation / IEEE Computer Society.
guistics.
Min Joon Seo, Hannaneh Hajishirzi, Ali Farhadi, Oren
Maizhen Ning, Qiu-Feng Wang, Kaizhu Huang, and [Etzioni, and Clint Malcolm. 2015. Solving geome-](https://doi.org/10.18653/v1/d15-1171)
[Xiaowei Huang. 2023. A symbolic characters aware](https://doi.org/10.1145/3581783.3612570) [try problems: Combining text and diagram interpre-](https://doi.org/10.18653/v1/d15-1171)
[model for solving geometry problems. In Proceed-](https://doi.org/10.1145/3581783.3612570) [tation. In Proceedings of the 2015 Conference on](https://doi.org/10.18653/v1/d15-1171)
_ings of the 31st ACM International Conference on_ _Empirical Methods in Natural Language Processing,_
_Multimedia, MM 2023, Ottawa, ON, Canada, 29 Oc-_ _EMNLP 2015, Lisbon, Portugal, September 17-21,_
_tober 2023- 3 November 2023, pages 7767–7775._ _2015, pages 1466–1476. The Association for Com-_
ACM. putational Linguistics.
Adam Paszke, Sam Gross, Francisco Massa, Adam Zhi Tian, Chunhua Shen, Hao Chen, and Tong He.
Lerer, James Bradbury, Gregory Chanan, Trevor [2022. FCOS: A simple and strong anchor-free object](https://doi.org/10.1109/TPAMI.2020.3032166)
Killeen, Zeming Lin, Natalia Gimelshein, Luca [detector. IEEE Trans. Pattern Anal. Mach. Intell.,](https://doi.org/10.1109/TPAMI.2020.3032166)
Antiga, Alban Desmaison, Andreas Köpf, Edward Z. 44(4):1922–1933.
Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Hugo Touvron, Louis Martin, Kevin Stone, Peter Al[Junjie Bai, and Soumith Chintala. 2019. Pytorch: An](https://proceedings.neurips.cc/paper/2019/hash/bdbca288fee7f92f2bfa9f7012727740-Abstract.html) bert, Amjad Almahairi, Yasmine Babaei, Nikolay
[imperative style, high-performance deep learning li-](https://proceedings.neurips.cc/paper/2019/hash/bdbca288fee7f92f2bfa9f7012727740-Abstract.html) Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
[brary. In Advances in Neural Information Processing](https://proceedings.neurips.cc/paper/2019/hash/bdbca288fee7f92f2bfa9f7012727740-Abstract.html) Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton_Systems 32: Annual Conference on Neural Informa-_ Ferrer, Moya Chen, Guillem Cucurull, David Esiobu,
_tion Processing Systems 2019, NeurIPS 2019, De-_ Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
_cember 8-14, 2019, Vancouver, BC, Canada, pages_ Cynthia Gao, Vedanuj Goswami, Naman Goyal, An8024–8035. thony Hartshorn, Saghar Hosseini, Rui Hou, Hakan
Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa,
Shuai Peng, Di Fu, Yijun Liang, Liangcai Gao, and Zhi Isabel Kloumann, Artem Korenev, Punit Singh Koura,
[Tang. 2023. Geodrl: A self-learning framework for](https://doi.org/10.18653/V1/2023.FINDINGS-ACL.850) Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Di[geometry problem solving using reinforcement learn-](https://doi.org/10.18653/V1/2023.FINDINGS-ACL.850) ana Liskovich, Yinghai Lu, Yuning Mao, Xavier Mar[ing in deductive reasoning. In Findings of the Asso-](https://doi.org/10.18653/V1/2023.FINDINGS-ACL.850) tinet, Todor Mihaylov, Pushkar Mishra, Igor Moly_ciation for Computational Linguistics: ACL 2023,_ bog, Yixin Nie, Andrew Poulton, Jeremy Reizen_Toronto, Canada, July 9-14, 2023, pages 13468–_ stein, Rashi Rungta, Kalyan Saladi, Alan Schelten,
13480. Association for Computational Linguistics. Ruan Silva, Eric Michael Smith, Ranjan Subrama
nian, Xiaoqing Ellen Tan, Binh Tang, Ross Tay
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine lor, Adina Williams, Jian Xiang Kuan, Puxin Xu,
Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
[Wei Li, and Peter J. Liu. 2020. Exploring the limits](http://jmlr.org/papers/v21/20-074.html) Melanie Kambadur, Sharan Narang, Aurélien Ro[of transfer learning with a unified text-to-text trans-](http://jmlr.org/papers/v21/20-074.html) driguez, Robert Stojnic, Sergey Edunov, and Thomas
[former. J. Mach. Learn. Res., 21:140:1–140:67.](http://jmlr.org/papers/v21/20-074.html) [Scialom. 2023. Llama 2: Open foundation and fine-](https://doi.org/10.48550/ARXIV.2307.09288)
[tuned chat models. CoRR, abs/2307.09288.](https://doi.org/10.48550/ARXIV.2307.09288)
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten
Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, [Wu Wen-Tsün. 1986. Basic principles of mechanical](https://doi.org/10.1007/BF02328447)
Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom [theorem proving in elementary geometries. J. Autom.](https://doi.org/10.1007/BF02328447)
Kozhevnikov, Ivan Evtimov, Joanna Bitton, Man- _Reason., 2(3):221–252._
ish Bhatt, Cristian Canton-Ferrer, Aaron Grattafiori,
Wenhan Xiong, Alexandre Défossez, Jade Copet, Thomas Wolf, Lysandre Debut, Victor Sanh, Julien
Faisal Azhar, Hugo Touvron, Louis Martin, Nico- Chaumond, Clement Delangue, Anthony Moi, Pierlas Usunier, Thomas Scialom, and Gabriel Synnaeve. ric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz,
[2023. Code llama: Open foundation models for code.](https://doi.org/10.48550/ARXIV.2308.12950) Joe Davison, Sam Shleifer, Patrick von Platen, Clara
_CoRR, abs/2308.12950._ Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le
Scao, Sylvain Gugger, Mariama Drame, Quentin
Mrinmaya Sachan, Avinava Dubey, and Eric P. Xing. [Lhoest, and Alexander M. Rush. 2020. Transformers:](https://doi.org/10.18653/v1/2020.emnlp-demos.6)
[2017. From textbooks to knowledge: A case study](https://doi.org/10.18653/v1/d17-1081) [State-of-the-art natural language processing. In Pro-](https://doi.org/10.18653/v1/2020.emnlp-demos.6)
[in harvesting axiomatic knowledge from textbooks](https://doi.org/10.18653/v1/d17-1081) _ceedings of the 2020 Conference on Empirical Meth-_
[to solve geometry problems. In Proceedings of the](https://doi.org/10.18653/v1/d17-1081) _ods in Natural Language Processing: System Demon-_
_2017 Conference on Empirical Methods in Natural_ _strations, EMNLP 2020 - Demos, Online, November_
_Language Processing, EMNLP 2017, Copenhagen,_ _16-20, 2020, pages 38–45. Association for Computa-_
_Denmark, September 9-11, 2017, pages 773–784._ tional Linguistics.
Association for Computational Linguistics.
Ming-Liang Zhang, Fei Yin, Yi-Han Hao, and Cheng
Mark Sandler, Andrew G. Howard, Menglong Zhu, An- [Lin Liu. 2022. Plane geometry diagram parsing. In](https://doi.org/10.24963/ijcai.2022/228)
[drey Zhmoginov, and Liang-Chieh Chen. 2018. Mo-](https://doi.org/10.1109/CVPR.2018.00474) _Proceedings of the Thirty-First International Joint_
[bilenetv2: Inverted residuals and linear bottlenecks.](https://doi.org/10.1109/CVPR.2018.00474) _Conference on Artificial Intelligence, IJCAI 2022,_
In 2018 IEEE Conference on Computer Vision and _Vienna, Austria, 23-29 July 2022, pages 1636–1643._
_Pattern Recognition, CVPR 2018, Salt Lake City, UT,_ ijcai.org.
272
-----
- if text_classs is 2 (i.e., category 2 ), it indicates
that the symbol s represents the length of a
line. Since a line consists of two points, we
select the points with the top two highest probabilities P (rel[sym2geo]s∈{S|clss="text"},g∈{P}[|][s, g][)][:]
_p1, p2 = argmaxtwoP_ (rel[sym2geo]s∈{S|clss="text"},g∈{P}[|][s, g][)]
(12)
- if text_classs is 3 (i.e., category 3 ), it
indicates that the symbol s represents the
degree of an angle on the circle. In this
case, the angle is formed by the centre
point of a circle and two points lying on
the arc of a circle. Therefore, we first
select the circle with the highest probability of _P_ (rel[sym2geo]s∈{S|clss="text"},g∈{C}[|][s, g][)][.]
Subsequently, we select two points
with the top two highest probabilities
_P_ (rel[sym2geo]s∈{S|clss="text"},g∈{P}[|][s, g][)][.] Worth
mentioning, these two points must be on the
arc of the selected circle:
Ming-Liang Zhang, Fei Yin, and Cheng-Lin Liu. 2023.
[A multi-modal neural geometric solver with textual](https://doi.org/10.24963/IJCAI.2023/376)
[clauses parsed from diagram.](https://doi.org/10.24963/IJCAI.2023/376) In Proceedings of
_the Thirty-Second International Joint Conference_
_on Artificial Intelligence, IJCAI 2023, 19th-25th Au-_
_gust 2023, Macao, SAR, China, pages 3374–3382._
ijcai.org.
**A** **Inference**
During the inference stage, we employ Eq 4 and
Eq 5 to map symbols s ∈S and geometric primitives g ∈G to corresponding vectors vec[s][∈S] and
vec[g][∈G], respectively. Following this, we proceed
with the inference of sym2geo and geo2geo relations.
**A.1** **Predict sym2geo Relation**
For a text symbol s ∈{S|clss = "text"}, it is
necessary to determine its meaning based on its
_text_class. To accomplish this, we assign the cat-_
egory of text symbol s ∈{S|clss = "text"} as
the one with the highest probability among the
_P_ (text_class|s) values, as specified in Eq 6:
_text_class_ _s = argmaxP_ (text_class|s) (9)
- if text_classs is 0 (i.e., category 0 ), it indicates that the symbol s corresponds to the
reference name of a point, or a line, or a circle.
In this case, we assign the symbol s to the geometric primitive g that has the highest probability of P (rel[sym2geo]s∈{S|clss="text"},g∈{P,L,C}[|][s, g][)][,]
where g ∈{P, L, C} specifies that the geometric primitive g belongs to the set of points,
lines, and circles:
_g = argmaxP_ (rel[sym2geo]s∈{S|clss="text"},g∈{P,L,C}[|][s, g][)]
(10)
- if text_classs is 1 (i.e., category 1 ), it indicates that the symbol s represents the degree of an angle. Since an angle consists of two lines and one point, we select the point with the highest probability
_P_ (rel[sym2geo]s∈{S|clss="text"},g∈{P}[|][s, g][)][, and we se-]
lect the two lines with the top two highest
probabilities P (rel[sym2geo]s∈{S|clss="text"},g∈{L}[|][s, g][)][.]
It is worth mentioning these two lines must
have geo2geo relations of "end-point" or "ona-line" with the selected point.
_p = argmaxP_ (rel[sym2geo]s∈{S|clss="text"},g∈{P}[|][s, g][)]
_l1, l2 = argmaxtwoP_ (rel[sym2geo]s∈{S|clss="text"},g∈{L}[|][s, g][)][,]
where rel l1,p "end-point"‘, "on-a-line" and
_∈{_ _}_
_rel l2,p_ "end-point"‘, "on-a-line"
_∈{_ _}_ (11)
_c = argmaxP_ (rel[sym2geo]s∈{S|clss="text"},g∈{C}[|][s, g][)]
_p1, p2 = argmaxtwoP_ (rel[sym2geo]s∈{S|clss="text"},g∈{P}[|][s, g][)][,]
where rel p1,c = rel p2,c = "on-a-circle"
(13)
For the geometric relations among geometric
primitives, such as parallel. It is determined by
the other2geo relation. For the other2geo relation
involving other symbols, it is required that the relation holds with at least two geometric primitives.
This means that there should be at least two geometric primitives with probabilities P (rel[sym2geo]s,g _s, g)_
_|_
larger than a threshold θ. In this case, the geometric
primitives are selected based on this criterion.
_{gindices} = sorted(P_ (rel[sym2geo]s∈{S|clss≠ "text"},g∈{P,L,C[}|][s, g][))][ > θ]
_gselected = G[{gindices}]_
(14)
where "sorted" indicates that values are sorted
in descending order, and [] refers to the selection
from the geometric primitives group G according
to the indices {gindices}. The threshold θ is set as
0.5 experimentally.
**A.2** **Predict geo2geo Relation**
The geo2geo relation between geometric primitives
_gi and gj is determined based on Eq 8, where it is_
assigned as the relation with the highest probability:
273
-----
predicted text_class. Here are the guidelines for
each case:
- If the text_class indicates that the symbol
refers to the reference name of a point (or
a line, or a circle), we modify the name of the
corresponding point (or line, or circle) accordingly.
- If the text_class indicates that the symbol
refers to the degree of an angle, we describe it
following the guidelines specified in the "Degree" entry of Table 4.
- If the text_class indicates that the symbol
refers to the length of a line, we describe it
according to the instructions provided in the
"Length" entry of Table 4.
- If the text_class indicates that the symbol
refers to the degree of an angle on the circle,
we describe it based on the guidelines outlined
in the "Circle Degree" entry of Table 4.
Furthermore, when dealing with the other2geo
relations, we describe them based on the specific
type of geometric relation as indicated in Table 4.
**C** **Instruction Choice**
Instructions serve as direct and explicit commands
that clearly communicate to the model the specific
task it is required to perform. For our experiments,
we initially selected two distinct instruction templates for Llama2-13b-chat (Touvron et al., 2023)
and CodeLlama-13b (Rozière et al., 2023), as detailed in Table 5. Upon experimental evaluation, it
was observed that the instruction template modified
from the one used to train the Llama2 model (displayed at the upper row in Table 5) demonstrated
superior performance. Consequently, we opted for
this template in our work.
**D** **Loss Function Details**
The Lg2g is defined as the negative log-likelihood
loss, where we aim to minimize the negative loglikelihood of the ground truth relations among geometric primitives:
_rel gi,gj_ _∈G = argmaxP_ (rel[geo2geo]gi,gj _|gi, gj)_ (15)
In an ideal scenario, the OCR results would accurately provide references to the points, lines, and
circles, allowing us to extract precise information
about the geometric primitives. However, the opensource OCR tool[4] we have adopted is not accurate.
As a result, some primitives may lack reference
names. To address this issue, we automatically label the primitives in sequential order (e.g., "P1, P2,
L1, L2") if their reference names are missing.
**A.3** **Generate Solution Program**
Once the geo2geo and sym2geo relations are constructed, we proceed to convert them into natural
language descriptions L (See Appendix B for details). We then concatenate the natural language
descriptions L with the problem text T . This combined text is passed to the problem-solving module,
which employs BeamSearch with a beam size of
10 to generate the solution program P. Moreover,
when using larger LLMs, such as Llama2, we add
instructions in front of the concatenation of L and
_T, which is further sent to LLMs to generate rea-_
soning process.
**B** **Convert Relations to Natural Language**
**Descriptions**
Once the geo2geo relations and sym2geo relations
have been established, we proceed to convert these
relations into natural language descriptions denoted
as L following the guidelines specified in Table 4.
To begin, we initiate the process by representing
the existing geometric primitives in the diagram
by enumerating points, lines, and circles within the
description of the geo2geo relation. In detail, we sequentially enumerate all existing points, providing
their reference names as described in the "Point"
entry of Table 4. We describe the associated points
for each line by mentioning their reference names.
Additionally, we include a list of points that have
"end-point" and "on-a-line" relations with the line,
as specified in the "Line" entry of Table 4. Similarly, for each circle, we mention its reference name
and proceed to list the points that exhibit "centerpoint" and "on-a-circle" relations with the circle,
following the guidelines provided in the "Circle"
entry of Table 4.
Next, we proceed to describe the text2geo relation within the sym2geo relation based on the
[4https://github.com/JaidedAI/EasyOCR](https://github.com/JaidedAI/EasyOCR)
log(P (rel[geo2geo]gi,gj _gi, gj))_ (16)
_|_
_gjX∈L,C_
_Lg2g = −_
_gi∈P_
where gi is a geometric primitive belonging to
points, and gj is a geometric primitive belonging to
274
-----
Relations Paradigm Example
Point The diagram contains ${}. The diagram contains Point A, B, C.
The diagram contains ${},
geo2geo Line which has endpoints: ${} and ${},
In addition, there is/are ${} on the line.
The diagram contains ${},
Circle whose center point is ${},
which has ${} on its arc.
1. Angle ${} has degree of ${}.
Degree 2. Line ${} and Line ${} cross at Point ${}
has degree of ${}.
The diagram contains Line L1,
which has endpoints: Point P0 and Point P1,
In addition, there is/are Point P2 on the line.
The diagram contains Circle M,
whose center point is Point E,
which has Point F, Point G on its arc.
1. Angle 1 has degree of 100.
2. Line L1 and Line L2 cross at Point C
has degree of 50.
The length of Line ${} between Point ${} The length of Line L3 between Point A
text2geo Length
and Point $ is ${}. and Point B is 10.
Line ${} and Line ${} cross at the
Circle Degree center point ${} of Circle ${} has
degree of ${}.
Line L1 and Line L2 cross at the
center point C of Circle C0 has
degree of 20.
Angle ${} has the same degree with Angle 1 has the same degree with
same degree
Angle ${} ... Angle 2, Angle 3.
Line ${} has the same length with Line L1 has the same length with
other2geo same length
Line ${} ... Line L2, Line L3.
parallel Line ${} is parallel with Line ${}... Line a is parallel with Line b.
Line ${} is perpendicular with Line ${} Line L1 is perpendicular with Line L2
perpendicular
at Point ${}. at Point C.
Table 4: The defined paradigm used to convert geo2geo and sym2geo relations to natural language descriptions
_L. "${}" is the placeholder. The placeholder is filled in as demonstrated in the "Example" column, and the filled_
content is highlighted in bold type.
lines and circles. The rel[geo2geo]gi,gj refers to the ground where i is the i-th token in the ground truth solution
truth relation between gi and gj. program.
The Lt_cls is defined as the negative loglikelihood loss, where we aim to minimize the neg
**E** **Image Parsing Accuracy**
ative log-likelihood of the ground truth text_class
of the text symbol:
Table 6 presents the performance of the image
_Lt_cls = −_ log(P (text_classs|s)) (17) parsing module, measured using the F1 metric. For
XS geometric primitives, we employ the parsing posi
where text_classs is the ground truth text_class of tion evaluation method, utilizing the Hough transthe symbol s. form with a distance threshold of 15. For symbols,
The Ls2g is the binary cross-entropy loss: we use an Intersection over Union (IoU) thresh
old of 0.5. The results in Table 6 demonstrate that
_Ls2g_ = − _{I(s, g) × log(P_ (rel[sym2geo]s,g _|s, g))_ the image-parsing module delivers accurate pars
_sX∈S_ _gX∈G_ ing results for diagrams, providing the model with
+ (1 − I(s, g)) × (1 − log(P (rel[sym2geo]s,g _|s, g)}_ precise information.
(18)
where I(s, g) is 1 if there is relation between symbol s and geometric primitive g, otherwise it is **F** **Relation Prediction Accuracy**
0.
The Lprog is defined as the negative log- Table 7 displays the F1 metric for the performance
likelihood loss, where we aim to minimize the neg- of relation parsing. The results show that our
ative log-likelihood of the tokens of the ground GOLD model accurately predicts geo2geo relatruth solution programs: tions. However, for sym2geo relations, except for
the "parallel" relation, there is considerable room
_Lprog = −_ _i_ log(P (ti|t<i)) (19) for improvement in the prediction performance.
X
275
-----
Instruction Template Example
[INST]
You are a problem-solving bot,
and now I ask you to solve a geometry problem,
please answer the question and provide the correct option letter.
The problem is as follows:
[INST]
You are a problem-solving bot,
and now I ask you to solve a geometry problem,
please answer the question and provide the correct option letter.
The problem is as follows:
{Problem Text}
Here are the basic descriptions of the diagram:
{Natural Language Descriptions}
The Answer and the Reason Process are:
[/INST]
Hint: Please answer the question and provide the correct option letter,
e.g., A, B, C, D, at the end
{Problem Text}
Here are the basic descriptions of the diagram:
{Natural Language Descriptions}
Find the perimeter of the polygon.
The Choices are: A: 20.0, B: 24.0, C: 28.0, D: 34.409,
Here are the basic description of the diagram:
The diagram contains Point P0, Point P1, Point P2, Point P3, Point P4,
The diagram contains Line L0, which has endpoints: Point P1, Point P3,
Line L1, which has endpoints: Point P1, Point P4,
Line L2, which has endpoints: Point P3, Point P4,
Line L3, which has endpoints: Point P0, Point P3,
Line L4, which has endpoints: Point P0, Point P1,
Line L5, which has endpoints: Point P0, Point P4,
The length of Line L0 between Point P2 and Point P3 is 7.
The length of Line L4 between Point P2 and Point P1 is 7.
The length of Line L5 between Point P4 and Point P2 is 5.
Line L3 between Point P0 and Point P3 has the same length
with Line L4 between Point P1 and Point P0
and Line L2 between Point P3 and Point P4
and Line L1 between Point P1 and Point P4.
The Answer and the Reason Process are:
[/INST]
Hint: Please answer the question and provide the correct option letter,
e.g., A, B, C, D, at the end
Find the perimeter of the polygon.
The Choices are: A: 20.0, B: 24.0, C: 28.0, D: 34.409,
Here are the basic descriptions of the diagram:
The diagram contains Point P0, Point P1, Point P2, Point P3, Point P4,
The diagram contains Line L0, which has endpoints: Point P1, Point P3,
Line L1, which has endpoints: Point P1, Point P4,
Line L2, which has endpoints: Point P3, Point P4,
Line L3, which has endpoints: Point P0, Point P3,
Line L4, which has endpoints: Point P0, Point P1,
Line L5, which has endpoints: Point P0, Point P4,
The length of Line L0 between Point P2 and Point P3 is 7.
The length of Line L4 between Point P2 and Point P1 is 7.
The length of Line L5 between Point P4 and Point P2 is 5.
Line L3 between Point P0 and Point P3 has the same length
with Line L4 between Point P1 and Point P0
and Line L2 between Point P3 and Point P4
and Line L1 between Point P1 and Point P4.
Table 5: Two instruction templates. The template in the upper row is modified from the instruction used to train
the Llama2 model, and another one is from the Lu et al., 2023a. In the column of "Instruction Template", the
"{problem Text}" is the geometry math problem text T, and "{Natural Language Descriptions}" is the description
of the diagram L.
276
-----
Geometric Primitives or Symbols F1 (%)
point 99.8
line 99.5
circle 99.1
symbol 97.2
Table 6: Pre-parsing performances by F1 metric.
Relation Type PGPS9K Test (%)
end-point 97.9 ± 0.3
on-a-line 91.3 _0.4_
geo2geo _±_
center-point 93.6 ± 0.2
on-a-circle 92.0 ± 0.0
text symbol 65.2 ± 0.1
angle 73.1 ± 0.0
sym2geo bar 75.7 ± 0.2
parallel 89.0 ± 0.4
perpendicular 82.9 ± 0.0
Table 7: Relation Parsing performances by F1 metric.
Both the mean and standard errors of the accuracy metrics are presented.
**G** **Influence of feature_embedding and**
**spatial_embedding on Geometry**
**Problem Solving**
We conduct ablation study on feature_embedding
and spatial_embedding in Table 8. To discard the
use of (embfeat and embspat ), we directly use feature outputs from the pre-parsing step as vectors
of symbols and geometric primitives, i.e., V[s,g],
to construct the sym2geo and geo2geo relations.
We can observe that the GOLD model without any
embedding performs the worst on all test subsets.
However, when either one of embeddings (embfeat
or embspat ) is added, the model’s performance improves. Notably, the model equipped with both
embeddings achieves the best performance.
embfeat embspat CAL PRV PGPS9K Geometry3K
66.2 90.2 48.2 50.2
_± 0.3_ _± 0.2_ _± 0.5_ _± 0.3_
71.5 93.2 55.0 58.1
Ë
_± 0.3_ _± 0.4_ _± 0.1_ _± 0.1_
72.8 93.0 56.3 58.0
Ë
_± 0.2_ _± 0.3_ _± 0.1_ _± 0.2_
Ë Ë **75.2** **98.5** **60.6** **62.7**
_± 0.3_ _± 0.5_ _± 0.3_ _± 0.2_
Table 8: Program accuracy with or without fea_ture_embedding and spatial_embedding. The check_
mark (Ë) indicates that the corresponding embedding
is enabled. T5-base is used as the problem-solving module for the GOLD model. Both the mean and standard
errors of the accuracy metrics are presented.
Figure 3: An example from the 111-th problem in the
PGPS9K dataset. This case shows that models’ natural
language descriptions and solution programs outputs
with and without spatial_embedding. The purple notations in the diagram are added by us. Note that the
different parts of diagram descriptions between w/o and
**w are coloured red.**
Figure 4: The top-left bar chart compares GOLD (T5base as the problem-solving module) accuracy in solving geometry math problems, with (w) and without (w/o)
the use of geo_type_embedding. The top-right diagram
is from the 375th problem in the PGPS9K dataset, while
the bottom part shows the predicted diagram descriptions for two different cases. Purple notations in the
diagram are added for better visual comprehension. The
differences between the two diagram description texts
are highlighted in red. It should be noted that the same
texts in the w to the w/o section are omitted, which are
represented by "...".
In Figure 3, we conduct a case study on the
GOLD model with and without the use of spa_tial_embedding. It is evident that the model with-_
out spatial_embedding incorrectly generates the
"parallel" relation between lines, resulting in an
erroneous solution program. This highlights the
importance of spatial_embedding in capturing accurate spatial relations and improving the model’s
performance.
277
-----
**H** **Importance of the**
**geo_type_embedding**
We conducted experiments to assess the impact of
_geo_type_embedding (emb[g]type[). The top-left bar]_
chart in Figure 4 demonstrates that the model’s
performance declines when emb[g]type [is not utilized.]
Notably, the performance gaps between the model
with emb[g]type [and without it are more pronounced]
on the PGPS9K and Geometry3K datasets compared to the UniGeo datasets. We believe this is
because the problem text in the UniGeo dataset
explicitly mentions the geometric primitives, providing valuable information that helps the GOLD
model understand the geometric primitives more
effectively. Furthermore, as shown in Figure 4, the
GOLD model without emb[g]type [fails to generate ac-]
curate circle information, impeding its ability to
further generate correct solution programs.
278
-----
| [
"Jiaxin, Zhang",
"Yashar, Moshfeghi",
"Kevin, Duh",
"Helena, Gomez",
"Steven, Bethard"
] | 2024-06-01T00:00:00 | NAACL 2024 Findings | false | 0 | 0 | null | https://aclanthology.org/2024.findings-naacl.19 | https://arxiv.org/abs/2405.00494 | https://www.semanticscholar.org/paper/9d107ecb6442ae757b79f10cc7f5f799f2372846 |
Generate & Rank: A Multi-task Framework for Math Word Problems | Math word problem (MWP) is a challenging and critical task in natural language processing. Many recent studies formalize MWP as a generation task and have adopted sequence-to-sequence models to transform problem descriptions to mathematical expressions. However, mathematical expressions are prone to minor mistakes while the generation objective does not explicitly handle such mistakes. To address this limitation, we devise a new ranking task for MWP and propose Generate & Rank, a multi-task framework based on a generative pre-trained language model. By joint training with generation and ranking, the model learns from its own mistakes and is able to distinguish between correct and incorrect expressions. Meanwhile, we perform tree-based disturbance specially designed for MWP and an online update to boost the ranker. We demonstrate the effectiveness of our proposed method on the benchmark and the results show that our method consistently outperforms baselines in all datasets. Particularly, in the classical Math23k, our method is 7% (78.4% to 85.4%) higher than the state-of-the-art. Code could be found at https://github.com/huawei-noah/noah-research. | null | ## Generate & Rank: A Multi-task Framework for Math Word Problems
**Jianhao Shen[1][†], Yichun Yin[2], Lin Li[3], Lifeng Shang[2],**
**Xin Jiang[2], Ming Zhang[1*], Qun Liu[2]**
1Department of Computer Science, School of EECS, Peking University
2Huawei Noah’s Ark Lab
3Huawei HiSilicon
{jhshen, mzhang_cs}@pku.edu.cn
{yinyichun, lilin29, shang.lifeng, jiang.xin, qun.liu}@huawei.com
**Abstract**
Math word problem (MWP) is a challenging
and critical task in natural language processing. Many recent studies formalize MWP as
a generation task and have adopted sequenceto-sequence models to transform problem descriptions to mathematical expressions. However, mathematical expressions are prone to
minor mistakes while the generation objective does not explicitly handle such mistakes.
To address this limitation, we devise a new
ranking task for MWP and propose Generate & Rank, a multi-task framework based
on a generative pre-trained language model.
By joint training with generation and ranking,
the model learns from its own mistakes and
is able to distinguish between correct and incorrect expressions. Meanwhile, we perform
tree-based disturbance specially designed for
MWP and an online update to boost the ranker.
We demonstrate the effectiveness of our proposed method on the benchmark and the results show that our method consistently outperforms baselines in all datasets. Particularly,
in the classical Math23k, our method is 7%
(78.4% → **85.4%) higher than the state-of-the-**
art[1].
|Original MWP|Col2|
|---|---|
|Problem|A project is completed in 25 days by 12 workers. If it takes 20 days to complete, how many workers will it take?|
|Solution|25 * 12 / 20|
|Number-mapped MWP||
|Problem|A project is completed in NUM0 days by NUM1 workers. If it takes NUM2 days to complete, how many workers will it take?|
|Solution|NUM0 * NUM1 / NUM2|
Table 1: An example of MWP, where numbers are usually mapped to special tokens, such as Num0/1/2.
from source texts to target expressions. These studies have proposed numerous advanced techniques
to improve the MWP solver, but their performance
is still unsatisfactory yet.
We argue that it is not sufficient to model MWP
as only a generation task, because there is a significant difference between mathematical expressions and natural language sequences: one minor
mistake in a mathematical expression will change
the whole semantic thus lead to a wrong answer,
whereas natural language is more robust to such
minor mistakes. The objective function of the generation task is to maximize generation likelihood
on ground-truth expressions, which does not have
an explicit strategy to make the model learn to
distinguish between ground-truth and expressions
that have minor mistakes. In addition, previous
works (Liu et al., 2019a; Xie and Sun, 2019; Zhang
et al., 2020) find that the performance of generation
models degrades fast as the expression gets longer.
To handle the above problems, we propose Generate & Rank, a multi-task framework for MWP,
which introduces a new ranker to explicitly distinguish between correct and incorrect expressions.
Specifically, our framework includes two modules:
a generator and a ranker. The former is designed
to generate candidate expressions given a problem text and the latter aims to rank the candidate
**1** **Introduction**
Solving math word problems (MWP) (Bobrow,
1964) is an important and fundamental task in natural language processing (NLP), which requires to
provide a solution expression given a mathematical
problem description, as illustrated in Table 1. Many
recent studies formalize MWP as a generation task
and commonly adopt LSTM-based sequence-tosequence (Seq2Seq) models (Wang et al., 2017,
2018b; Xie and Sun, 2019), where problem texts
are source sequences, mathematical expressions are
target sequences and the model learns the mapping
_† This work is done when Jianhao Shen is an intern at_
Huawei Noah’s Ark Lab
*Corresponding author
1Code will be available soon.
-----
expressions. They are built based on an encoderdecoder model and are jointly trained with generation loss and ranking loss. In this work, we build
our model based on BART (Lewis et al., 2020),
a widely used pre-trained language model that
achieves SOTA performance on various sequenceto-sequence tasks (Ahmad et al., 2021; Liu et al.,
2020). During multi-task training, expressions produced by the generator are used to construct an
expression bank and train the ranker, in which
way the model can learn from its own mistakes.
To construct more informative candidates for the
ranker, we specially design tree-based disturbance
for MWP. We also introduce an online update mechanism to generate a new set of candidate expressions at each training epoch. The overall training procedure is in an iterative manner, in which
the ranker and generator continue to enhance each
other.
To evaluate the effectiveness of the proposed
model, we conduct extensive experiments on the
datasets of Math23K (Wang et al., 2017) and
MAWPS (Koncel-Kedziorski et al., 2016). The
results show that our model outperforms typical
baselines. Particularly, we obtain an improvement
of 7% in the Math23K dataset that is extensively
studied. Moreover, we do ablation study and model
analysis, which shows that (1) joint training improves the performance of the generator and ranker
over separate training; (2) both strategies of constructing candidate expressions and online updating
are important to the success of the ranker. We also
find that with the ranker, our model achieves a large
improvement in generation of long expressions.
The contributions of our work are two-fold: (1)
We propose Generate & Rank, a new multi-task
framework to train a pre-trained language model
for math word problem solving. To construct informative candidate expressions for the ranker, we propose two effective generation methods and also introduce an online update strategy. (2) Experiments
show that our proposed model consistently outperforms the state-of-the-art models and achieves a
significant improvement on the Math23K dataset.
**2** **Preliminaries**
**2.1** **Math Word Problem**
A math word problem P is a sequence of word tokens and numeric values, which typically describes
a partial quantitative state of a world and some updates or relationships among quantities, then asks a
question about an unknown quantity. The solution
_S to the question is a mathematical expression that_
consists of math operators and numbers. In solving
a math word problem, we usually do not care about
the specific number of a quantity, so the numbers
in problems and solution expressions are mapped
to special tokens NUM#i according to their orders
in the problem text. Table 1 gives an example of an
original math word problem and the corresponding
number-mapped problem.
**2.2** **BART**
BART is a widely-used pre-trained language model.
It follows a standard encoder-decoder structure using Transformer layers (Vaswani et al., 2017) and
is pre-trained with text denoising tasks. The pretrained BART can be fine-tuned for tasks of sequence classification and generation.
**Transformer-based Encoder-Decoder.** BART
uses an encoder-decoder structure that is the
mainstream architecture for sequence-to-sequence
tasks. The encoder adopts the bidirectional selfattention to map an input sequence of tokens P =
(x1, x2, . . ., xn) to a sequence of continuous representations R = (r1, r2, . . ., rn). The BART encoder is composed of multiple Transformer layers,
each consists of a multi-head self-attention (MHA)
module and a fully connected feed-forward (FFN)
module. We denote the mapping function of the
BART encoder as follows:
(r1, r2, . . ., rn) = BARTEnc(x1, x2, . . ., xn)
(1)
The BART decoder also consists of multiple
Transformer layers. Besides MHA and FFN modules, the decoder layer adds another multi-head
attention over the output of the encoder. The decoder takes in one token si at a time, and gives an
output state based on the output of the encoder and
previous tokens in the decoder input. This output
state is then fed into a linear transformation followed by a softmax function to get the predicted
next-token probabilities. This one-step decoding
process is denoted as follows:
_P_ ( ) = softmax(diW + b) (2)
_∗_
**di = BARTDec(R; s0, s1, . . ., si** 1), (3)
_−_
where s0 is a special [bos] token indicating the
start of decoding, and R is the output of encoder.
**BART Pre-training. BART is pre-trained by the**
tasks of recovering a corrupted document to orig
-----
En/Decoder Shared BART _Expression_ Ground-truth _Expression_ Candidates
_Task #1: Generating_ _Task #2: Ranking_
**Ś Multi-task Training** _Expression_ _Score_
**_Generating Loss_** **+** **_Ranking Loss_**
Ranker
Encoder Decoder
Decoder Encoder
_Problem_
_Expression_ _Problem_
**_Generate_**
**ś Expression Online Updating**
Expression
Bank
**_Disturb_**
_Expression_
Figure 1: Our proposed Generate & Rank framework for BART-based MWP solver. The model consists of a
generator and a ranker. They share BART encoder and decoder, and are jointly trained with generating loss and
ranking loss. We construct an expression bank for training the ranker with expressions produced by the generator
and ones obtained by tree-based disturbance. The expression bank is updated every epoch so that the model can
constantly learn from new informative examples.
inal one. The input to BART is corrupted in two
ways: (1) a number of text spans are replaced with a
single [MASK] token; (2) sentences in a document
are shuffled in a random order. The objective of
BART pre-training is to minimize the cross-entropy
loss between the decoder’s generation probabilities
and the ground-truth of original document.
**3** **Methodology**
We propose Generate & Rank, a BART-based multitask framework for math word problems. Our
model consists of a generator and a ranker, which
share a BART model and are jointly trained with a
generating task and ranking task. The objective of
generating is to generate expressions given a math
word problem. We also add a ranking task so that
the model can select a correct expression from a
set of candidates. We construct an expression bank
to provide training examples for the ranker. Figure
1 shows our proposed framework and we introduce
details for each task and the whole framework in
the following sections.
**3.1** **Multi-task Training**
**Task #1: Generating. We first formulate the math**
word problem as a sequence-to-sequence task, in
which BART is trained to generate solution expressions given a math word problem. Following the fine-tuning strategy of BART (Lewis et al.,
2020), we take problem text, a sequence of tokens
_P = (x1, x2, . . ., xn), as input to BART encoder,_
and minimize negative log-likelihood of the solution expression S = (s1, s2, . . ., sm),
_−_ log Pr(S|P ), (4)
(P,SX)∈D
_JGEN =_
_|D|_
where the conditional probability is decomposed in
an auto-regressive way as:
Pr(si _P, Sj<i)_ (5)
_|_
_i=1_
Y
Pr(S|P ) =
Pr( _P, Sj<i) = softmax(diW + b)_ (6)
_∗|_
**di = BARTDec(R; Sj<i)** (7)
**R = BARTEnc(P** ). (8)
Additionally, we add two special tokens s1 =[bos]
and sm =[eos] to indicate the start and end symbols of decoding sequences.
**Task #2: Ranking. Through generating, we obtain**
many candidate solution expressions. To decide
which expression is a correct solution to the problem, we propose a ranking task which is essentially
a task of sequence pair classification. Given pairs
of problems and candidate expressions, the ranker
chooses the expression with highest ranking score
as the final solution to the problem. Specifically,
we add an MLP classifier on top of the final layer
hidden state of the last decoder token. The last
-----
decoder token is always a special [eos] token and
its corresponding hidden state can attend to all token representations of problem text and expression.
Same as the generation task, we feed the problem
text into the encoder and expression into the decoder, obtaining sequence representations. The last
decoder representation is then taken as input to the
classifier for ranking score prediction:
Pr(·|P, S) = softmax(d[′]m+1[)] (9)
**d[′]m+1** [= tanh(][d][m][+1][W][1] [+][ b][1][)][W][2] [+][ b][2]
(10)
**dm+1 = BARTDec(R; S),** (11)
where R is the output of the encoder, S is the
expression token sequence, dm+1 is the decoder
representation of the last token, and W1 2 and b1 2
_|_ _|_
are trainable parameters. The training objective
of the ranker is cross-entropy between classifier
output and correct labels,
**Tree-based Disturbance. Our second way to con-**
struct new expressions is adding disturbance to
ground-truth expressions. We design four kinds of
disturbances which are illustrated in Figure 2. The
ground-truth expression is first transformed to an
abstract syntax tree (AST) (Liu et al., 2019a). Then
we disturb tree nodes or sub-structures to produce
new expressions in four ways: a) Expand. A leaf
node is expanded into a sub-tree with a new operation and a number. b) Edit. A node is randomly
changed to another while keeping the expression
valid (i.e., a number node will be changed to another number, and an operator node to another operator). c) Delete. Delete a leaf node and replace
its father with its sibling node. d) Swap. Swap the
left and right children of an operation node.
We use the above methods to construct the expression bank. Since new expressions may also
be correct (for example, swapping two operands
of addition or multiplication), we compare the numerical results of newly obtained expressions with
that of the ground-truth, and add them to positive
or negative samples depending on the comparison.
Then both positive and negative pairs are sampled
from this expression bank for the multi-task training. In order to make the model learn with more
informative examples, we do an online update for
expression bank, which means that we use new expressions obtained by model-based generation and
tree-based disturbance at each training epoch.
**Ground-truth** **NUM1** **+**
**/**
**NUM1** **+**
**NUM2** **NUM3**
NUM1 / (NUM2 + NUM3)
_JRANK = −_
log Pr(1|P, S)
(P,SX)∈D[+]
_|D[+]_ _∪D[−]|_
+ log Pr(0|P, S)
(P,S)
X∈D[−]
(12)
where D[+] and D[−] are sets of positive and negative examples, respectively. We introduce how to
generate negative examples in the next section.
**Optimization Objective. We train the model on**
the joint loss of two tasks together:
_J = JGEN + JRANK._ (13)
and the two modules share BART parameters.
**3.2** **Expression Bank**
By definition, any expression that does not equal
the ground-truth can serve as a negative example,
but we cannot use all of them due to limited computational resources. To train the ranker efficiently,
we use two different strategies, namely modelbased generation and tree-based disturbance, to
construct an expression bank for ranker training.
**Model-based Generation. The first strategy is**
to produce new expressions with the generator.
Specifically, given a problem, we use beam search
with the generator to produce top-K expressions.
Each expression is labeled as positive or negative
depending on whether its calculation result equals
the result of ground-truth.
|/ + + NUM1 NUM3 NUM2 NUM3|Col2|
|---|---|
|NUM1|NUM3|
**/**
**NUM1** **-**
**NUM2** **NUM3**
( NUM1 + NUM3 ) / (NUM2 + NUM3)
**(a) Expand**
|/ NUM1 NUM3|Col2|
|---|---|
|NUM1|NUM3|
NUM1 / NUM3
**(c) Delete**
NUM1 / (NUM2 - NUM3)
**(b) Edit**
**/**
**+** **NUM1**
**NUM2** **NUM3**
(NUM2 + NUM3) / NUM1
**(d) Swap**
Figure 2: Overview of tree-based disturbance.
-----
**Algorithm 1 Training Algorithm**
**Input: MWP Dataset D = {(P, S)}**
**Parameter: Pre-trained BART encoder and de-**
coder parameters θe and θd, random initialized
ranker θv, beam size K, epoch number M
1: // Fine-tune the generator
2: for epoch = 1 to M do
3: Fine-tuning BART encoder θe and decoder
_θd on D with generation loss Eq. (4)._
4: end for
5: // Construct expression bank
6: D[+] _←D, D[−]_ _←{}_
7: for (P, S) ∈D do
8: Generate top-K expressions _S[¯]i_ for prob_{_ _}_
lem P with beam search
9: Get new expressions {S[¯]i[′][}][ by adding tree-]
based disturbance to S
10: _{S[¯]i} ←{S[¯]i} ∪{S[¯]i[′][}]_
11: **for** _S[¯] ∈{S[¯]i} do_
12: **if result of** _S[¯] equals result of S then_
13: _D[+]_ _←D[+]_ _∪{(P,_ _S[¯])}_
14: **else**
15: _D[−]_ _←D[−]_ _∪{(P,_ _S[¯])}_
16: **end if**
17: **end for**
18: end for
19: // Joint training
20: for epoch = 1 to M do
21: Train θe, θd, θv w.r.t. the joint loss Eq.(13)
on D[+] and D[−]
22: Repeat lines 6-18 to reconstruct expression
bank
23: end for
**3.3** **Training Procedure**
The training procedure includes multi-task training and expression online updating. We first finetune the pre-trained BART for the generation task
(JGEN in Eq. 4). After that, we use the fine-tuned
BART and tree-based disturbance to generate expressions as the training samples for the ranker.
Then we do the joint training of generation and
ranking. This process is performed in an iterative manner and the two modules (i.e., generator
and ranker) continue to enhance each other. Meanwhile, training examples for ranking are updated
after each epoch. We summarize the overall training procedure in Algorithm 1.
**3.4** **Model Inference**
We perform a two-stage model inference, namely
generation and ranking. Specifically, given a new
problem text sequence P, we first pass it to the
encoder to get the problem representation R. Then
we perform the beam search to generate top-K expressions. These generated expressions are used as
candidate solutions for the ranker. All expressions
are passed to the ranker and that with the highest
score is selected as the final result.
**4** **Experiment**
**4.1** **Experimental Setup**
**Datasets. We conduct the experiments on two**
commonly-used datasets: Math23K (Wang et al.,
2017) and MAWPS (Koncel-Kedziorski et al.,
2016). Math23K is a large-scale Chinese dataset
that contains 23,162 math word problems and their
corresponding expression solutions. MAWPS is a
English dataset containing 2,373 problems. All the
problems are one-unknown-variable linear problems and can be solved with a single expression.
**Baselines. We compare our model with the follow-**
ing baselines including the state-of-the-art models:
DNS (Wang et al., 2017) uses a vanilla Seq2Seq
model to generate expressions. Math-EN (Wang
et al., 2018b) uses the equation normalization to
avoid equation duplication problem. T-RNN (Wang
et al., 2019b) applies recursive neural networks
to model the tree structures of expressions. SAligned (Chiang and Chen, 2019) tracks the semantic meanings of operands with a stack during
decoding. Group-ATT (Li et al., 2019) leverages
the attention mechanism to enrich problem representation. Both AST-Dec (Liu et al., 2019a) and
GTS (Xie and Sun, 2019) develop a tree-based decoder to generate expressions. Graph2Tree (Zhang
et al., 2020) proposes to build a quantity cell graph
and a comparison graph to better capture the quantity relationships of the problem. Multi-E/D (Shen
and Jin, 2020) is an ensemble model which combines multiple encoders and decoders.
**Implementation Details. We use the PyTorch[2]**
implementations and pre-trained language models
provided by the Transformers library[3]. Since the
Math23K dataset is a Chinese dataset and officially
released BART is only for English, we switch to
[2https://pytorch.org/](https://pytorch.org/)
[3https://github.com/huggingface/](https://github.com/huggingface/transformers)
[transformers](https://github.com/huggingface/transformers)
-----
mBART25 (Liu et al., 2020), which is a multilingual BART for 25 languages including Chinese.
For the MAWPS dataset, we also use mBART25.
We optimize our model with AdamW (Loshchilov
and Hutter, 2019). The training hyperparameters
are set as follows. We set the batch size to 128, the
learning rate to 5e-5 and the warm-up ratio to 0.1.
The weight decay is set to 0.01. The number of
epochs M for fine-tuning and multi-task training
are set to 50. We set beam size K to 10 in beam
search and expression bank size to 20 unless otherwise stated. All experiments are carried out on
NVIDIA Tesla V100. We use 8 GPUs for training
and 1 for testing. For our proposed framework, the
training time is 1.5 hours for one epoch and testing
time is 15 minutes for the whole test set.
**Evaluation Metric. Both MAWPS and Math23K**
are evaluated with a metric of “solution accuracy”,
that is, the expression is considered as correct if it
induces the same number as the ground-truth. For
the Math23K dataset, some baselines are evaluated
using the public available test set while others use
the results of 5-fold cross-validation. We report our
results on both settings. For the MAWPS dataset,
models are evaluated with 5-fold cross-validation.
**4.2** **Results and Analysis**
Evaluation results of our model and baselines are
summarized in Table 2. We observe that: (1) direct fine-tuning of mBART already outperforms the
state-of-the-art models on Math23K, which shows
the powerful generation ability of mBART. (2)
on MAWPS, mBART outperforms most Seq2Seq
baselines but is worse than GTS and Graph2Tree.
These two models leverage tree structure of expressions during decoding which is critical for math
word problem solving. We believe that pre-trained
language models would achieve a better performance if combined with structure information, and
we leave it as a future work[4]. (3) Generate &
Rank framework further improves mBART and
achieves new state-of-the-art results. In particular, Generate & Rank outperforms mBART baselines by more than 4% in all the evaluation settings and also outperforms the previous best models by 7% on Math23K[†], 7.4% on 5-fold crossvalidation Math23K[‡]. The improvement over pretrained mBART demonstrates the effectiveness of
4One may think that the sequence decoder might not always generate valid expressions. However, we check all expressions generated by mBART and find that 99.9% are valid.
our multi-task training framework.
|Model|Math23K† Math23K‡ MAWPS‡|
|---|---|
|DNS Math-EN T-RNN S-Aligned Group-ATT AST-Dec GTS Graph2Tree Multi-E/D mBART Generate & Rank|- 58.1 59.5 66.7 - 69.2 66.9 - 66.8 - 65.8 - 69.5 66.9 76.1 69.0 - - 75.6 74.3 82.6 77.4 75.5 83.7 78.4 76.9 - 80.8 80.0 80.1 85.4 84.3 84.0|
Table 2: Solution accuracy on MAWPS and Math23K.
_† refers to the result of test set and ‡ denotes the result_
of 5-fold cross-validation. “-” means that the results
are not reported in the original papers.
**4.3** **Ablation Study and Model Analysis**
To better understand our model, we further conduct ablation study on Math23K to show how the
proposed components affect performance.
**4.3.1** **Effect of Joint Training**
To investigate the effect of joint training, we introduce the baseline of two-stage training (i.e., w/o
Joint), which means we first train the generator,
then train the ranker, and the modules are trained
independently. We also study the effect of joint
training on generation and perform comparison between mBART and our generator (i.e., w/o Ranker).
The results are listed in Table 3. We can see that the
joint training brings 2.2% improvement compared
with the two-stage training and 2.6% for the generator compared with the mBART trained alone,
suggesting that the joint training of generator and
ranker benefits each other. Besides, the joint training is more space efficient since we only need to
save one unified model rather than two.
Model Acc
Generate & Rank **85.4**
w/o Joint 83.2
w/o Ranker 83.4
w/o both (mBART) 80.8
Table 3: Effect of joint training.
**4.3.2** **Effect of Expression Bank Strategy**
We investigate the effect of different strategies to
construct the expression bank. Here we choose a
random sampling strategy as our baseline, where
-----
#Op Pro AST-Dec G2T mBART Generate & Rank
1 17.3 82.7 85.5 90.2 90.8 (+0.6)
2 52.2 74.5 83.7 88.1 90.2 (+2.1)
3 19.1 59.9 71.7 71.2 79.1 (+7.9)
4 6.6 42.4 51.5 53.0 63.6 (+10.6)
5 3.4 44.1 38.2 41.2 58.8 (+17.6)
6 0.9 55.6 55.6 55.6 88.8 (+33.2)
Table 5: Accuracy for increasing length of expressions.
#Op is the number of operations in expressions. Pro denotes proportion of expressions with different lengths.
possible mistakes when the expression bank is too
small, and when the expression bank is too large,
low-quality expressions may be generated and hinder ranker training. Tree-based disturbance has a
similar trend and the best bank size is 10.
Figure 3: Accuracy with different expression bank
sizes from 5 to 30.
**4.3.4** **Model Analysis**
In Table 5, we list how the model accuracy changes
with respect to the number of operations in expressions. We do not discuss the case of 6 operators
since it has too few examples and high variance.
For expressions less than 6 operators, all models
perform worse when the expression gets longer.
This is as expected since longer expressions require more steps of reasoning and have less data to
train. In addition, we also observe that Generate
& Rank training has larger improvement over finetuned mBART on longer expressions. This implies
that our model is more suitable to handle complex
problems and expressions.
Following Liu et al. (2019a), we also examine
the performance of our model in different domains.
The domain of each problem is defined by whether
it contains any keywords of this domain and we
the set of expressions that appeared in the training
data is sampled as the expression bank. We evaluate different strategies with and without online
updating and summarize the results in Table 4.
|Col1|Col2|
|---|---|
|||
|||
Strategy Online w/o Online
Random Sample 75.2 69.7
Model 84.2 83.2
Model+Tree **85.4** 83.1
Table 4: Accuracy for different expression bank strategies. The expression bank size is 20 for all settings.
We can see that our strategies outperform the
random sampling strategy. Since the ground-truth
can not be accessed during model inference, we
cannot use the tree-based disturbance to generate
candidate expressions as in the training phase. This
discrepancy between training and inference leads to
poor performance if we only use tree-based disturbance to construct the expression bank. However,
combining the tree-based disturbance and modelbased generation strategies, we can obtain better results than the only model-based generation, which
gives evidence that the tree-based disturbance contains some informative examples that the generator
does not cover and it is possible to improve the performance based on the human knowledge of math
expression.
We can also see that strategies have a performance drop without online updating. We conjecture that without online updating the ranker may
tend to memorize existing negative expressions
thus generalize poorly on new problems. As for
strategies with model-based generation, there is another possible reason: the generator keeps updating
during multi-task training, so the previously generated expressions are no longer good samples of the
current model, and newly generated expressions are
more informative. To summarize, both strategies
of constructing the expressions bank and online
updating play an important role in the success of
the ranker.
**4.3.3** **Impact of Expression Bank Size**
We further analyze the impact of expression bank
size on the ranker and results are shown in Figure 3.
If the model-based generation is used, performance
reaches the best at expression bank size 20. This
suggests that the expression bank size should not
be too small nor too large. One possible reason
may be that the generated expressions cannot cover
-----
use the same keyword list as Liu et al. (2019a).
Table 6 shows the results. We observe the similar
pattern that the fine-tuned mBART has limitations
in geometry which requires external knowledge
such as formulas for the circumference and area of
a circle. Interestingly, our proposed model mainly
improves on these domains. This suggests that the
ranking task may be a better choice to learn and
use mathematical knowledge than generating.
Domain Pro mBART Generate & Rank
Distance & Speed 11.8 83.9 83.9
Tracing 2.7 85.2 85.2
Engineering 5.8 86.2 **87.9**
Interval 0.6 66.7 66.7
Circle Geometry 1.9 73.7 **78.9**
Plane Geometry 1.2 75.0 **83.3**
Profit 1.1 72.7 72.7
Solid Geometry 1.6 81.3 **87.5**
Interest Rate 0.9 100.0 100.0
Production 0.4 100.0 100.0
Table 6: Accuracy for different problem domains. Pro
denotes the proportion of each domain in the test data.
Note that the sum of proportion is not 100% since there
are problems not belonging to any specified domain.
**5** **Related Work**
**5.1** **Math Word Problem**
**Rule-based methods. Early approaches on math**
word problems mainly craft rules and templates
for pattern matching (Bobrow, 1964; Slagle, 1965;
Fletcher, 1985; Bakman, 2007). These methods
rely heavily on manual design and can only solve a
limited scope of problems.
**Parsing-based methods. Later on, researchers use**
statistical methods to solve MWP and achieve a
great performance improvement. One line of research focuses on semantic parsing, which leverages traditional machine learning techniques to
identify entities, quantities, and operators from the
problem text. Roy et al. (2015) proposes three
types of classifiers to identify different elements of
problems. ARIS (Hosseini et al., 2014) splits the
problem into fragments and updates a logic template named state by verb categorization. Other
works (Sundaram and Khemani, 2015; Mitra and
Baral, 2016; Liang et al., 2016) follow a similar
process with different templates and annotations.
**Two-stage methods. Another research line first**
obtains an expression template then maps numbers
to the template slots. Kushman et al. (2014) train
a classifier to select from a set of pre-defined templates. Roy and Roth (2015) propose to construct
candidate expressions in a bottom-up manner and
train a global scoring function to guide the beam
search process. ALGES (Koncel-Kedziorski et al.,
2015) converts the process of searching valid expressions to an integer linear programming problem and adopts a different scoring function. UnitDep (Roy and Roth, 2017) proposes Unit Dependency Graph to enhance the scoring function.
**Deep learning methods. Recently, deep learning**
models have become prevailing methods for math
word problems. DNS (Wang et al., 2017) is the
first to apply vanilla RNN-based models to MWP.
Math-EN (Wang et al., 2018b) introduces equation
normalization and compares three Seq2Seq models on MWP solving. Group-ATT (Li et al., 2019)
uses multi-head attention to capture different aspects of features. Some works also leverage tree
structures and graph information to improve performance (Wang et al., 2019b; Chiang and Chen,
2019; Liu et al., 2019a; Xie and Sun, 2019; Zhang
et al., 2020). Shen and Jin (2020) propose a model
of multi-encoders and multi-decoders.
**5.2** **Pre-trained Language Model**
Pre-trained language models have obtained stateof-the-art results in many NLP benchmarks (Wang
et al., 2018a, 2019a). These models are usually
based on Transformer layers (Vaswani et al., 2017)
and trained on large corpus with self-supervised
tasks. According to their architectures, pre-trained
language models can be categorized into three
types: encoder-only, decoder-only and encoderdecoder models. BERT (Devlin et al., 2019) is an
encoder-only model which firstly proposes masked
token prediction and next sentence prediction to
train a language representation model. Following this, many other models are proposed like
RoBERTa (Liu et al., 2019b) and SpanBERT (Joshi
et al., 2020). Decoder-only models are typically
auto-regressive models trained to estimate the probability distribution of a text corpus, including
GPT2 (Radford et al., 2019), GPT3 (Brown et al.,
2020) and XLNet (Yang et al., 2019). Encoderdecoder models like BART (Lewis et al., 2020) and
T5 (Raffel et al., 2020) use the encoder-decoder architecture and are trained on sequence-to-sequence
tasks such as text denoising and translation.
-----
**6** **Conclusion and Future Work**
We propose Generate & Rank, a new multi-task
framework for math word problems. Specifically,
our model has a generator and a ranker which enhance each other with joint training. We also use
tree-based disturbance and online update to further
improve the performance. The experimental results
on the benchmark show that our work consistently
outperforms baselines in all datasets. In future
work, we will explore the generation and ranking
framework to other tasks like summarization and
translation.
**Acknowledgements**
This paper is partially supported by National Key
Research and Development Program of China with
Grant No. 2018AAA0101900/2018AAA0101902
as well as the National Natural Science Foundation
of China (NSFC Grant No. 62106008 and No.
61772039).
**References**
Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi
Ray, and Kai-Wei Chang. 2021. Unified pre-training
for program understanding and generation. In Pro_ceedings of the 2021 Conference of the North Amer-_
_ican Chapter of the Association for Computational_
_Linguistics._
Yefim Bakman. 2007. Robust [Understanding](http://arxiv.org/abs/math/0701393)
[of Word Problems with Extraneous Information.](http://arxiv.org/abs/math/0701393)
_arXiv:math/0701393._
[Daniel G. Bobrow. 1964. Natural Language Input for a](https://dspace.mit.edu/handle/1721.1/6903)
[Computer Problem Solving System.](https://dspace.mit.edu/handle/1721.1/6903)
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss,
Gretchen Krueger, Tom Henighan, Rewon Child,
Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu,
Clemens Winter, Christopher Hesse, Mark Chen,
Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin
Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario
Amodei. 2020. [Language Models are Few-Shot](http://arxiv.org/abs/2005.14165)
[Learners. arXiv:2005.14165 [cs].](http://arxiv.org/abs/2005.14165)
Ting-Rui Chiang and Yun-Nung Chen. 2019.
Semantically-Aligned [Equation](https://doi.org/10.18653/v1/N19-1272) Generation for
[Solving and Reasoning Math Word Problems.](https://doi.org/10.18653/v1/N19-1272)
In Proceedings of the 2019 Conference of the
_North American Chapter of the Association for_
_Computational_ _Linguistics:_ _Human_ _Language_
_Technologies, Volume 1 (Long and Short Papers),_
pages 2656–2668.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2019. [BERT: Pre-training of](https://doi.org/10.18653/v1/N19-1423)
[Deep Bidirectional Transformers for Language Un-](https://doi.org/10.18653/v1/N19-1423)
[derstanding. In Proceedings of the 2019 Conference](https://doi.org/10.18653/v1/N19-1423)
_of the North American Chapter of the Association_
_for Computational Linguistics: Human Language_
_Technologies, Volume 1 (Long and Short Papers),_
pages 4171–4186.
[Charles R. Fletcher. 1985. Understanding and solving](https://doi.org/10.3758/BF03207654)
[arithmetic word problems: A computer simulation.](https://doi.org/10.3758/BF03207654)
_Behavior Research Methods, Instruments, & Com-_
_puters, 17(5):565–571._
Mohammad Javad Hosseini, Hannaneh Hajishirzi,
[Oren Etzioni, and Nate Kushman. 2014. Learning to](https://doi.org/10.3115/v1/D14-1058)
[Solve Arithmetic Word Problems with Verb Catego-](https://doi.org/10.3115/v1/D14-1058)
[rization. In Proceedings of the 2014 Conference on](https://doi.org/10.3115/v1/D14-1058)
_Empirical Methods in Natural Language Processing_
_(EMNLP), pages 523–533._
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S.
Weld, Luke Zettlemoyer, and Omer Levy. 2020.
[SpanBERT: Improving Pre-training by Representing](https://transacl.org/ojs/index.php/tacl/article/view/1853)
[and Predicting Spans. Transactions of the Associa-](https://transacl.org/ojs/index.php/tacl/article/view/1853)
_tion for Computational Linguistics, 8(0):64–77._
Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish
Sabharwal, Oren Etzioni, and Siena Dumas Ang.
[2015. Parsing Algebraic Word Problems into Equa-](https://doi.org/10.1162/tacl_a_00160)
[tions. Transactions of the Association for Computa-](https://doi.org/10.1162/tacl_a_00160)
_tional Linguistics, 3:585–597._
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini,
Nate Kushman, and Hannaneh Hajishirzi. 2016.
[MAWPS: A Math Word Problem Repository.](https://doi.org/10.18653/v1/N16-1136) In
_Proceedings of the 2016 Conference of the North_
_American Chapter of the Association for Computa-_
_tional Linguistics: Human Language Technologies,_
pages 1152–1157.
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and
[Regina Barzilay. 2014. Learning to Automatically](https://doi.org/10.3115/v1/P14-1026)
[Solve Algebra Word Problems. In Proceedings of](https://doi.org/10.3115/v1/P14-1026)
_the 52nd Annual Meeting of the Association for_
_Computational Linguistics (Volume 1: Long Papers),_
pages 271–281.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer
Levy, Veselin Stoyanov, and Luke Zettlemoyer.
[2020. BART: Denoising Sequence-to-Sequence Pre-](https://doi.org/10.18653/v1/2020.acl-main.703)
[training for Natural Language Generation, Transla-](https://doi.org/10.18653/v1/2020.acl-main.703)
[tion, and Comprehension.](https://doi.org/10.18653/v1/2020.acl-main.703) In Proceedings of the
_58th Annual Meeting of the Association for Compu-_
_tational Linguistics, pages 7871–7880._
Jierui Li, Lei Wang, Jipeng Zhang, Yan Wang,
[Bing Tian Dai, and Dongxiang Zhang. 2019. Mod-](https://doi.org/10.18653/v1/P19-1619)
[eling Intra-Relation in Math Word Problems with](https://doi.org/10.18653/v1/P19-1619)
[Different Functional Multi-Head Attentions. In Pro-](https://doi.org/10.18653/v1/P19-1619)
_ceedings of the 57th Annual Meeting of the Asso-_
_ciation for Computational Linguistics, pages 6162–_
6167.
-----
Chao-Chun Liang, Kuang-Yi Hsu, Chien-Tsung
Huang, Chung-Min Li, Shen-Yu Miao, and KehYih Su. 2016. A tag-based statistical English math
word problem solver with understanding, reasoning
and explanation. In Proceedings of the Twenty-Fifth
_International Joint Conference on Artificial Intelli-_
_gence, IJCAI’16, pages 4254–4255._
Qianying Liu, Wenyv Guan, Sujian Li, and Daisuke
Kawahara. 2019a. [Tree-structured Decoding for](https://doi.org/10.18653/v1/D19-1241)
[Solving Math Word Problems.](https://doi.org/10.18653/v1/D19-1241) In Proceedings of
_the 2019 Conference on Empirical Methods in Nat-_
_ural Language Processing and the 9th International_
_Joint Conference on Natural Language Processing_
_(EMNLP-IJCNLP), pages 2370–2379._
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey
Edunov, Marjan Ghazvininejad, Mike Lewis, and
Luke Zettlemoyer. 2020. [Multilingual Denoising](https://transacl.org/ojs/index.php/tacl/article/view/2107)
[Pre-training for Neural Machine Translation. Trans-](https://transacl.org/ojs/index.php/tacl/article/view/2107)
_actions of the Association for Computational Lin-_
_guistics, 8(0):726–742._
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis,
Luke Zettlemoyer, and Veselin Stoyanov. 2019b.
[RoBERTa: A Robustly Optimized BERT Pretrain-](http://arxiv.org/abs/1907.11692)
[ing Approach. arXiv:1907.11692 [cs].](http://arxiv.org/abs/1907.11692)
Ilya Loshchilov and Frank Hutter. 2019. [Decou-](https://openreview.net/forum?id=Bkg6RiCqY7)
[pled weight decay regularization.](https://openreview.net/forum?id=Bkg6RiCqY7) In 7th Inter_national Conference on Learning Representations,_
_ICLR 2019._
[Arindam Mitra and Chitta Baral. 2016. Learning To](https://doi.org/10.18653/v1/P16-1202)
[Use Formulas To Solve Simple Arithmetic Problems.](https://doi.org/10.18653/v1/P16-1202)
In Proceedings of the 54th Annual Meeting of the
_Association for Computational Linguistics (Volume_
_1: Long Papers), pages 2144–2153._
Alec Radford, Jeffrey Wu, Rewon Child, David Luan,
Dario Amodei, and Ilya Sutskever. 2019. Language
models are unsupervised multitask learners. OpenAI
_blog, 1(8):9._
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi
Zhou, Wei Li, and Peter J. Liu. 2020. [Exploring](http://jmlr.org/papers/v21/20-074.html)
[the Limits of Transfer Learning with a Unified Text-](http://jmlr.org/papers/v21/20-074.html)
[to-Text Transformer. Journal of Machine Learning](http://jmlr.org/papers/v21/20-074.html)
_Research, 21(140):1–67._
Subhro Roy and Dan Roth. 2015. [Solving General](https://doi.org/10.18653/v1/D15-1202)
[Arithmetic Word Problems. In Proceedings of the](https://doi.org/10.18653/v1/D15-1202)
_2015 Conference on Empirical Methods in Natural_
_Language Processing, pages 1743–1752._
[Subhro Roy and Dan Roth. 2017. Unit Dependency](https://ojs.aaai.org/index.php/AAAI/article/view/10959)
[Graph and Its Application to Arithmetic Word Prob-](https://ojs.aaai.org/index.php/AAAI/article/view/10959)
[lem Solving. Proceedings of the AAAI Conference](https://ojs.aaai.org/index.php/AAAI/article/view/10959)
_on Artificial Intelligence, 31(1)._
[Subhro Roy, Tim Vieira, and Dan Roth. 2015. Reason-](https://doi.org/10.1162/tacl_a_00118)
[ing about Quantities in Natural Language. Transac-](https://doi.org/10.1162/tacl_a_00118)
_tions of the Association for Computational Linguis-_
_tics, 3:1–13._
[Yibin Shen and Cheqing Jin. 2020. Solving Math Word](https://doi.org/10.18653/v1/2020.coling-main.262)
[Problems with Multi-Encoders and Multi-Decoders.](https://doi.org/10.18653/v1/2020.coling-main.262)
In Proceedings of the 28th International Conference
_on Computational Linguistics, pages 2924–2934._
[James R. Slagle. 1965. Experiments with a deductive](https://doi.org/10.1145/365691.365960)
[question-answering program.](https://doi.org/10.1145/365691.365960) _Communications of_
_the ACM, 8(12):792–798._
[Sowmya S Sundaram and Deepak Khemani. 2015. Nat-](https://www.aclweb.org/anthology/W15-5955)
[ural Language Processing for Solving Simple Word](https://www.aclweb.org/anthology/W15-5955)
[Problems. In Proceedings of the 12th International](https://www.aclweb.org/anthology/W15-5955)
_Conference on Natural Language Processing, pages_
394–402.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob
Uszkoreit, Llion Jones, Aidan N. Gomez, undefine[dukasz Kaiser, and Illia Polosukhin. 2017. Attention](https://papers.nips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html)
[is all you need. In Proceedings of the 31st Interna-](https://papers.nips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html)
_tional Conference on Neural Information Processing_
_Systems, NIPS’17, page 6000–6010._
Alex Wang, Yada Pruksachatkun, Nikita Nangia,
Amanpreet Singh, Julian Michael, Felix Hill, Omer
[Levy, and Samuel R. Bowman. 2019a. Superglue:](https://proceedings.neurips.cc/paper/2019/hash/4496bf24afe7fab6f046bf4923da8de6-Abstract.html)
[A stickier benchmark for general-purpose language](https://proceedings.neurips.cc/paper/2019/hash/4496bf24afe7fab6f046bf4923da8de6-Abstract.html)
[understanding systems. In Advances in Neural Infor-](https://proceedings.neurips.cc/paper/2019/hash/4496bf24afe7fab6f046bf4923da8de6-Abstract.html)
_mation Processing Systems 32: Annual Conference_
_on Neural Information Processing Systems 2019,_
pages 3261–3275.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018a.
[GLUE: A Multi-Task Benchmark and Analysis Plat-](https://doi.org/10.18653/v1/W18-5446)
[form for Natural Language Understanding.](https://doi.org/10.18653/v1/W18-5446) In
_Proceedings of the 2018 EMNLP Workshop Black-_
_boxNLP: Analyzing and Interpreting Neural Net-_
_works for NLP, pages 353–355._
Lei Wang, Yan Wang, Deng Cai, Dongxiang Zhang,
[and Xiaojiang Liu. 2018b. Translating a Math Word](https://doi.org/10.18653/v1/D18-1132)
[Problem to a Expression Tree. In Proceedings of the](https://doi.org/10.18653/v1/D18-1132)
_2018 Conference on Empirical Methods in Natural_
_Language Processing, pages 1064–1069._
Lei Wang, Dongxiang Zhang, Jipeng Zhang, Xing
Xu, Lianli Gao, Bing Tian Dai, and Heng Tao
Shen. 2019b. [Template-Based Math Word Prob-](https://doi.org/10.1609/aaai.v33i01.33017144)
[lem Solvers with Recursive Neural Networks. Pro-](https://doi.org/10.1609/aaai.v33i01.33017144)
_ceedings of the AAAI Conference on Artificial Intel-_
_ligence, 33(01):7144–7151._
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017.
[Deep neural solver for math word problems. In Pro-](https://doi.org/10.18653/v1/D17-1088)
_ceedings of the 2017 Conference on Empirical Meth-_
_ods in Natural Language Processing, pages 845–_
854.
[Zhipeng Xie and Shichao Sun. 2019. A Goal-Driven](https://doi.org/10.24963/ijcai.2019/736)
[Tree-Structured Neural Model for Math Word Prob-](https://doi.org/10.24963/ijcai.2019/736)
[lems.](https://doi.org/10.24963/ijcai.2019/736) In Proceedings of the Twenty-Eighth Inter_national Joint Conference on Artificial Intelligence,_
pages 5299–5305.
-----
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019.
[Xlnet: Generalized autoregressive pretraining for](https://proceedings.neurips.cc/paper/2019/file/dc6a7e655d7e5840e66733e9ee67cc69-Paper.pdf)
[language understanding.](https://proceedings.neurips.cc/paper/2019/file/dc6a7e655d7e5840e66733e9ee67cc69-Paper.pdf) In Advances in Neural
_Information Processing Systems, volume 32, pages_
5754–5764.
Jipeng Zhang, Lei Wang, Roy Ka-Wei Lee, Yi Bin, Yan
[Wang, Jie Shao, and Ee-Peng Lim. 2020. Graph-to-](https://doi.org/10.18653/v1/2020.acl-main.362)
[Tree Learning for Solving Math Word Problems. In](https://doi.org/10.18653/v1/2020.acl-main.362)
_Proceedings of the 58th Annual Meeting of the Asso-_
_ciation for Computational Linguistics, pages 3928–_
3937.
-----
| [
"Xin, Jiang",
"Lin, Li",
"Lifeng, Shang",
"Yichun, Yin",
"Jianhao, Shen",
"Ming, Zhang",
"Qun, Liu"
] | 2021-09-07T00:00:00 | EMNLP 2021 Findings | false | 0 | 18 | null | http://arxiv.org/abs/2109.03034 | https://arxiv.org/abs/2109.03034 | https://www.semanticscholar.org/paper/4698fc4712f0212c8a3810fd67b41ee8b8896aba |
Generating Chain-of-Thoughts with a Pairwise-Comparison Approach to Searching for the Most Promising Intermediate Thought | To improve the ability of the large language model (LLMs) to tackle complex reasoning problems, chain-of-thoughts (CoT) methods were proposed to guide LLMs to reason step-by-step, enabling problem solving from simple to complex. State-of-the-art methods for generating such a chain involve interactive collaboration, where the learner generates candidate intermediate thoughts, evaluated by the LLM, guiding the generation of subsequent thoughts. However, a widespread yet understudied problem is that the evaluation from the LLM is typically noisy and unreliable, potentially misleading the generation process in selecting promising intermediate thoughts. In this paper, motivated by Vapnik's principle, we use pairwise-comparison evaluation instead of point-wise scoring to search for promising intermediate thoughts with the noisy feedback from the LLM. In each round, we randomly pair intermediate thoughts and directly prompt the LLM to select the more promising one from each pair, allowing us to identify the most promising thoughts through an iterative process. To further alleviate the noise in the comparison, we incorporate techniques from ensemble learning and dueling bandits, proposing two variants of the algorithm. Experiments on three real-world tasks demonstrate the effectiveness of our proposed algorithm and verify the rationale of the pairwise comparison mechanism. | This paper uses pairwise-comparison evaluation instead of point-wise scoring to search for promising intermediate thoughts with the noisy feedback from the LLM, and incorporates techniques from ensemble learning and dueling bandits to incorporate the rationale of the pairwise comparison mechanism. | # Generating Chain-of-Thoughts with a Pairwise-Comparison Approach to Searching for the Most Promising Intermediate Thought
**Zhen-Yu Zhang** [1] **Siwei Han** [2] **Huaxiu Yao** [2] **Gang Niu** [1] **Masashi Sugiyama** [1 3]
**Abstract**
To improve the ability of the large language
_model (LLMs) to tackle complex reasoning prob-_
lems, chain-of-thoughts (CoT) methods were proposed to guide LLMs to reason step-by-step, enabling problem solving from simple to complex.
State-of-the-art methods for generating such a
chain involve interactive collaboration, where the
learner generates candidate intermediate thoughts,
evaluated by the LLM, guiding the generation
of subsequent thoughts. However, a widespread
yet understudied problem is that the evaluation
_from the LLM is typically noisy and unreliable,_
potentially misleading the generation process in
selecting promising intermediate thoughts. In this
paper, motivated by Vapnik’s principle, we use
_pairwise-comparison evaluation instead of point-_
wise scoring to search for promising intermediate
thoughts with the noisy feedback from the LLM.
In each round, we randomly pair intermediate
_thoughts and directly prompt the LLM to select the_
more promising one from each pair, allowing us to
identify the most promising thoughts through an
iterative process. To further alleviate the noise in
the comparison, we incorporate techniques from
ensemble learning and dueling bandits, proposing two variants of the algorithm. Experiments
on three real-world tasks demonstrate the effectiveness of our proposed algorithm and verify the
rationale of the pairwise comparison mechanism.
**1. Introduction**
_Large language models (LLMs), such as the GPT (Brown_
et al., 2020) and PaLM (Chowdhery et al., 2023), have re
1Center for Advanced Intelligence Project, RIKEN 2University
of North Carolina at Chapel Hill [3]Graduate School of Frontier
Sciences, The University of Tokyo. Correspondence to: Masashi
Sugiyama <[email protected]>.
_Proceedings of the 41_ _[st]_ _International Conference on Machine_
_Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by_
the author(s).
cently demonstrated remarkable capabilities in a variety of
real-world tasks. However, current LLMs still face limitations when dealing with complex tasks, especially those
involving multi-step reasoning, such as mathematical or reasoning problems (Rae et al., 2021; Wei et al., 2022). To
deal with such implicit complexity, chain-of-thoughts (CoT)
approaches were proposed (Wei et al., 2022; Wang et al.,
2022; Yao et al., 2023). These approaches were proposed
to use an incorporation of intermediate steps of reasoning
(intermediate “thought”), enabling the LLM to reason progressively, first generating intermediate solutions for simpler
problems to incrementally improve its capacity to handle
complicated tasks. Therefore, the key challenge is to design
an effective CoT generation algorithm that guides the LLM
towards desired solutions through step-by-step reasoning.
There is a fruitful line of work that considers the CoT generation problem. The pioneering work uses manual design
prompts to let the LLM generate a CoT by itself (Wei et al.,
2022; Wang et al., 2022). This line of research was recently
extended by the score-based tree-of-thoughts (S-ToT) approaches (Yao et al., 2023; Long, 2023), where the CoT
generation is framed as an interactive process with the algorithm and the LLM. These approaches generate a set of candidate intermediate thoughts each round and ask the LLM
to score them and select the most promising ones. The next
thoughts are then generated based on these selected ones,
creating a tree-like data structure. A search algorithm, such
as deep-first search, is used to identify the most promising
CoT in the tree (see the detailed illustration in Figure 2).
While these methods have shown remarkable empirical success, they rely on an accurate score evaluation of each intermediate thought by the LLM. However, it is important
to notice that: LLM scores are often noisy. For example,
the LLM may give different responses to different prompts,
even though these prompts convey the same meaning (Lu
et al., 2022). The noisy nature of LLM feedback introduces
new problems in the selection of the most promising intermediate thoughts and the subsequent generation of the tree
structure. Therefore, it is crucial to make the CoT generation
algorithms robust to the noisy feedback from LLMs.
Several preliminary approaches have been proposed to mitigate such noise in the LLM feedback, including estimating
-----
**Generating CoTs with a pairwise-comparison approach to searching for the most promising intermediate thought**
uncertainty from the semantic aspect (Kuhn et al., 2022) or
ensembling multiple thoughts (Wang et al., 2022). However,
getting an accurate point-wise estimate for each intermediate
thought could be resource-intensive, requiring the construction of an additional model (Paul et al., 2024) or multiple
queries (Wang et al., 2022): see also Figure 4 and Table 6
to 8 in our experiments. Fortunately, in the context of CoT
generation, our focus is on identifying the most promising
chain. Motivated by Vapnik’s principle (Vapnik, 1991), we
do not need to solve a more general and difficult problem
as an intermediate step, i.e. to estimate an accurate pointwise score for each intermediate thought. Instead, we can
focus directly on identifying the most promising one in each
round. However, it is still impractical to directly vote on all
intermediate thoughts to identify the most promising one
by LLMs due to the input length limit and the “lost in the
middle” phenomenon (Liu et al., 2024).
_We argue that for LLMs, comparing two thoughts simulta-_
_neously provides a more robust evaluation than assigning_
_individual scores. We aim to leverage the comparison of two_
thoughts instead of evaluating a single thought in isolation,
thereby providing a feasible alternative for identifying the
most promising intermediate thought. This argument is well
established in human cognition, as seen in mathematical
problems, where it is often more feasible to approximate
which thought is better by comparison than by considering
and evaluating them separately. We also observe similar
phenomena that LLMs to generate a more reliable evaluation in the experiments on the Sudoku task, as shown in
Figure 1, where the LLM successfully identifies the better
option given two intermediate solutions, but struggles to assign the correct value to intermediate thoughts individually.
Based on the above insights, we propose a pairwise
comparison-based algorithm for CoT generation to alleviate
the noise in the LLM feedback and to find the most promising intermediate thoughts each round. In each round, we
randomly pair all the intermediate thoughts and directly ask
the LLM to compare and select the more promising one from
each pair, keeping the selected one and discarding the other.
Then we repeat this procedure so that we get a small set
of most promising intermediate thoughts, and subsequently,
we generate the next thoughts based on these selected ones.
This mechanism allows us to use a direct pairwise comparison to identify the promising thoughts with a more robust
evaluation. We also propose to include previous thoughts in
the tree structure for comparison to mitigate the noisy nature
of LLM’s feedback. Taking these two points into account,
we frame the problem as an iterative process and propose a
general CoT generation algorithm called comparison-based
_tree-of-thoughts (C-ToT). To further model the noise in the_
comparison, we resort to the techniques of ensemble and
best-arm identification with dueling feedback (Falahatgar
et al., 2017) and propose two variants of the proposed C
3 1 2 2 1
|Col1|3 1 1 3 1 S-ToT Score: 2 p Pair-wise Comparison 3 1 1 2 1 S-ToT Score: 0 p Our Method’s Choice (C-ToT)|Col3|
|---|---|---|
||3 1 1 3 1 S-ToT Score: 2 p 3 1 1 2 1 S-ToT Score: 0 p||
|||Our Method’s Choice (C-ToT)|
|Col1|2 1 3 S-ToT Score: 2 p 2 1 2 S-ToT Score: 0 p|
|---|---|
||2 1 3 S-ToT Score: 2 p 2 1 2 S-ToT Score: 0 p|
|Col1|2 1 3 4 2 1 S-ToT Score: 2 p 2 1 3 4 1 1 S-ToT Score: 1 p|
|---|---|
||2 1 3 4 2 1 S-ToT Score: 2 p 2 1 3 4 1 1 S-ToT Score: 1 p|
**Figure 1: A demonstration of point-wise evaluation vs. pair-wise**
comparison based on real experimental results in Sudoku puzzles.
The point-wise evaluation algorithm (S-ToT) assigns hard scores
to each intermediate thought (the higher, the better), while our
proposed algorithm (C-ToT) uses pair-wise comparison to obtain
the more promising thoughts (green tick). In these cases, the LLM
assigns incorrect scores, but it makes a correct comparison.
ToT algorithm. Through experiments on three real-world
reasoning problems, we demonstrate the effectiveness of our
proposed approaches and verify the rationale of the pairwise
comparison mechanism. Our main contributions are:
(1) We investigate the problem of noisy feedback in the
CoT generation, which is widespread but understudied.
(2) Motivated by Vapnik’s principle, we propose a
pairwise-comparison based approach for CoT generation that exploits noisy feedback from the LLMs.
(3) We proposed two variants of C-ToT that further account
for different types of noise in the comparison feedback.
**2. Related Work**
**CoT Generation. Generating appropriate CoT for LLMs**
to enhance their inference power is a critical problem in
real-world applications. Previous work has explored taskspecific training algorithms for identifying the CoT, including creating semantic graphs (Xu et al., 2021), refining the
model through human-annotated CoT (Cobbe et al., 2021),
or learning an additional extractor using heuristic-driven
pseudo CoT (Chen et al., 2019). Different from these approaches, the LLM-based CoT generation is used directly
during inference, coupling the generation process with an
LLM. In these approaches, the LLM guides the CoT generation, eliminating the need for additional training.
The pioneering work in LLM-based CoT generation introduces intermediate thoughts sequentially between the input
query and LLM’s response. By simply prompting the LLM
to “think step by step”, this strategy has been shown to significantly improve several tasks over directly asking the LLM
the original question, such as mathematical puzzles (Wei
et al., 2022) or other general mathematical reasoning problems (Drori et al., 2022). Due to the noisy nature of the
LLM feedback, robustness can be improved by using an
ensemble of different CoTs (Wang et al., 2022).
To further improve the effectiveness of CoT generation, the
score-based tree-of-thoughts generation algorithm was intro
-----
**Generating CoTs with a pairwise-comparison approach to searching for the most promising intermediate thought**
duced independently by Yao et al. (2023) and Long (2023).
They model the CoT generation process as a tree generation
and search process. A single node in the tree represents
an intermediate thought. Starting from a given node, the
thought generator constructs a set of new nodes and the
LLM generates scores for each node as an evaluation. Finally, the timing of the tree expansion is determined by
the search algorithm used (e.g., breadth-first or depth-first
search). In addition, this search algorithm can also provide capabilities including backtracking from unpromising
thoughts. Further research extended the tree structure to a
graph, such as the graph-of-thoughts (Besta et al., 2023),
allowing the distillation of knowledge about entire network
of thoughts. However, these methods cannot handle the
noisy evaluation feedback caused by the LLM itself.
**Self-Reflection.** Rather than interacting with LLMs to
generate a step-by-step reasoning chain, self-reflection approaches involve LLMs directly offering an initial thought
chain to the query, followed by iterative refinement of the
whole chain. Madaan et al. (2023) and Paul et al. (2024)
introduced the “self-reflection” mechanism, using the LLMs
to provide feedback to their generation candidates and then
fine-tuning. Paul et al. (2024) updated the model to explicitly generate intermediate thoughts while interacting
with a critic model that provides automated feedback on
the reasoning. These methods introduced new models to
provide evaluation for the intermediate thoughts, but these
critical models still do not always provide perfect evaluation.
Furthermore, for complex problems that require sequential
reasoning, such as the Game of 24, where the next thought
should be generated and evaluated based on previous ones,
the C-ToT generation could be more appropriate.
**Uncertainty Quantification in LLMs. This is a recent in-**
terest that aims to evaluate the confidence of a given answer
by the LLM itself. Some work considered letting the LLM
provide the confidence (Northcutt et al., 2021; Kadavath
et al., 2022) by retraining the model. Another line of work
considered designing entropy-based measures (Kuhn et al.,
2022), or generating multiple outputs to obtain an uncertainty measure (Wang et al., 2022). Although they can be
included in the CoT generation, they introduce a high computational cost during testing, particularly when obtaining
an accurate score for each intermediate thought.
**3. Our Approach**
In this section, we first introduce the proposed comparisonbased ToT generation algorithm, which is a general framework that generates CoT with a pairwise comparison mechanism to find the most promising intermediate thought. To
further alleviate the noise in LLM’s comparison feedback,
we propose two different instantiations of our framework
with theoretical analysis.
**3.1. CoT Generation via Pair-wise Comparison**
We first introduce the comparison-based ToT framework,
where the key mechanism is the selection of the most promising thoughts among all candidates in each round.
We illustrate our proposed algorithm and compare it with
previous approaches in Figure 2. The CoT approaches
directly ask the LLM to generate a CoT. The S-ToT approaches ask the LLM to score each intermediate thought
and select the highest-scoring ones to generate the next
layer. Different from these methods, we propose a pairwisecomparison approach to searching for the most promising
intermediate thoughts. Note that with LLMs, due to feedback noise and input limitations, we cannot do a listwise
voting that directly asks the LLM to sort all the intermediate thoughts. Let Z be the set of all candidate intermediate thoughts, and we want to select the most promising
_K thoughts from it. The comparison iterates as follows:_
we randomly pair thoughts from the set and select only the
winner in each pair, thereby halving the size of Zi to |Zi|/2,
where Zi denotes the set in the i-th iteration. After at most
_K ×_ log2 |Z| rounds, we can identify the K most promising
intermediate thoughts by such direct comparison. In practice, we can do one iteration of comparison and keep the
remaining K thoughts in the last few rounds. For each pair,
we compare thoughts a and b with the LLM by asking which
one is better, using different prompts with n times, where
_n_ 1. We defer the implementation details to Section 4.1.
_≥_
We take previous unselected intermediate thoughts into
comparison to explore possibly valuable but mis-evaluated
thoughts caused by the noise in feedback. This is because
the evaluation of intermediate thoughts may not always be
accurate, and the generation of the tree structure may miss
valuable intermediate thoughts in previous iterations. In the
seminal research of S-ToT (Yao et al., 2023; Long, 2023;
Besta et al., 2023), the thought generator uses a backtracking mechanism to revisit previous thoughts when the current
ones fall below a certain threshold. While this strategy aims
to rescue promising thoughts, its efficiency is questionable
because it may delay the exploration of previously valuable
but incorrectly scored thoughts. In addition, backtracking
only occurs after a thought has fallen below a manually
chosen threshold, which is hard to know in advance.
Motivated by these shortcomings and the efficiency of our
pairwise-comparison mechanism, we maintain a repository
of previous intermediate thoughts. At each round, we include previously unselected thoughts in the comparisons,
rather than relying on a fixed threshold. As illustrated in Figure 2, during the pairwise comparison in the second layer,
we include the intermediate thought that was not selected in
the first layer. This mechanism ensures that the algorithm
has the flexibility to revisit previous thoughts based on the
comparison results from the LLM in each round.
-----
**Generating CoTs with a pairwise-comparison approach to searching for the most promising intermediate thought**
Chain-of-Thoughts Score-based Tree-of-Thoughts (S-ToT) Comparison-based Tree-of-Thoughts (C-ToT)
Input Input Input
Pairwise
# 1 # 2 # 3 Comparison 1
s=8 s=1 s=8
# 4 # 5 # 6 # 7 # 8
s=2 s=7 s=2 s=8 s=3 Pairwise
Comparison 2
Intermediate thoughts # 9 # 10
s=2 s=4
are compared by LLM round #1
Intermediate thoughts round #2
Output are scored by LLM Output Output round #3
**Figure 2: Schematic illustration of previous CoT and S-ToT approaches with our proposed C-ToT approach for CoT generation with**
LLMs. Each circle box represents an intermediate thought, which is a coherent sequence of language or equations that serves as an
intermediate step in problem solving. In the S-ToT method, each intermediate thought is scored by the LLM (denoted by s in the
figure), and the searching algorithm considers the highest-scoring ones as the most promising and then generates next intermediate
thoughts based on them. In the C-ToT approach, we use pairwise comparison with the LLM in each round to find the most promising
intermediate thoughts and then generate the next thoughts. Meanwhile, we include all previous intermediate thoughts in the comparison.
We formulate the comparison-based ToT generation as an
iterative interaction between the thought generation and the
LLM. Take the C-ToT illustration in Figure 2 as an example.
The algorithm starts by generating intermediate thoughts #1
to #3 based on the input. Following a pairwise comparison
mechanism, thoughts #1 and #3 are selected, leading to
the generation of new intermediate thoughts, namely #4 to
#8. In the second layer, thought #2, thoughts #4 and #5
(linked to #1), along with thoughts #6 to #8 (linked to #3)
are compared, subsequently resulting in the selection of
thought #2 and thought #7 (linked to #3), and the generation
of the next intermediate thoughts.
Formally, we denote an intermediate thought by z. In round
_t, Z_ _[t]_ represents the set of candidate intermediate thoughts
for comparison, and _Z_ _[t]_ denotes the selected intermediate
[b]
thoughts. In a sequence of T rounds, in the first round
the algorithm generates a set of thoughts Z [1] = {z[1]i _[}]i[m]=1_
based on the query, where m denotes the set size. Then, the
comparison-based ToT selects K most promising intermediate thoughts based on the comparison result from the LLM.
We denote the selected set of thoughts by _Z[b][1]_ = {z[1]j _[}][j][∈][[][K][]][.]_
In the second round, the algorithm generates the new intermediate thoughts based on each selected thought[1]. After
_T rounds, we can get K most promising thoughts, and all_
of them contain information about their parent nodes, thus
formulating as K CoTs. Therefore, this iterative process
facilitates the refinement and selection of thoughts over multiple rounds. For each pair of thoughts, we use a direct
comparison to identify the more promising one. We call
such a direct comparison method “Standard Mode”. We
1Each newly generated intermediate thought will contain the
information about all its parents
**Algorithm 1 C-ToT Algorithm**
1: Input: Query x, comparison times n, number of intermediate thoughts generation m, number of selected
thoughts K, depth of the tree T .
2: Generate initial thoughts Z [1] of size m with query x
3: for t = 1 to T do
4: **if Standard Mode then**
5: Pair thoughts in Z _[t]_ randomly
6: **while |Z** _[t]| > K do_
7: **for every pair (a, b) in Z** _[t]_ **do**
8: Compare thoughts a and b by LLM n times,
then take a majority vote. If a wins, keep
thought a in Z _[t]_ and drop b, and vice versa.
9: **end for**
10: **end while**
11: Denote _Z_ _[t]_ by the remaining thoughts
[b]
12: **else**
13: Call Algorithm 2 to obtain _Z_ _[t]_
[b]
14: **end if**
15: Generate the next m thoughts for each thought in _Z_ _[t]._
[b]
16: end for
summarize the proposed approach in Algorithm 1.
**Remark 1 (Comparison Complexity). In our approach, we**
keep all previous intermediate thoughts to compare in each
round. This may affect the operational efficiency and exceed the storage limit. To improve the efficiency, we can
introduce a counter for each intermediate thought to track its
comparison frequency. If the comparison count of an intermediate thought exceeds a threshold, we can remove it from
the tree. Since the comparison in each round is independent
of each other, we could use parallel computing to improve
-----
**Generating CoTs with a pairwise-comparison approach to searching for the most promising intermediate thought**
the efficiency of the algorithm, or exploit more efficient
machine learning techniques to schedule computational resources more adaptively and efficiently (Zhou, 2024) in the
future. If the tree depth is T, the total number of comparisons required is less than the order of (nTK log(m)).
_O_
**Remark 2 (Token Cost). The token costs of our proposed**
C-ToT approach and the S-ToT approaches are task-specific
and generally incomparable. The C-ToT approach could
discover valuable but misevaluated previous intermediate
thoughts earlier than the S-ToT method. However, it may introduce more token overhead as we compare these thoughts
multiple times. Therefore, we are better suited to the problem where the initial intermediate thoughts are more uncertain. We provide the token cost analysis in Appendix C.2.
**3.2. Instantiations and Analysis**
In the proposed C-ToT framework, we use a direct comparison for each pair of thoughts. Although our C-ToT method
explores two sample information compared to the S-ToT
methods that use only single sample information, the comparison feedback could still be inaccurate. We offer two
methods to select the winning thought in a pair.
**Standard. Suppose the comparison difficulty of each pair**
is the same. Inspired by the ensemble algorithms in CoT
generation (Wang et al., 2022), we can improve the robustness of the comparison feedback by setting n > 1 in the
“Standard Mode”, so that we compare the two thoughts in
each pair for n times and take majority voting output.
**Dueling. We consider a more general assumption of noisy**
comparisons, where we only assume an unknown ranking of
the Mt thoughts at round t. This implies that the comparison
difficulty of each pair varies, requiring a different number of
comparisons for each pair. If two thoughts a and b are compared, thought a is chosen with some unknown probability
_p(a, b) and b is chosen with p(b, a) = 1_ _p(a, b), where_
_−_
the higher-ranked one has probability 1/2. Repeated
_≥_
comparisons are independent of each other.
We formulate it as a best-arm identification problem with dueling feedback (Yue et al., 2012; Falahatgar et al., 2017), propose a dueling bandits instantiation of the C-ToT framework,
and analyze its properties. For each pair, we keep the empirical probability bpa, a proxy for p(a, b). We also maintain a
confidence value bc s.t., w.h.p., bpa ∈ (p(a, b)−bc, p(a, b)+bc).
We stop the comparisons when it is sure of the winner or
when it reaches its comparison budget n. If it reaches n comparisons, it outputs the element with more wins, randomly
breaking ties. During comparison, we also compare two elements a, b with LLM by query them with different prompts.
We summarize the proposed instantiation in Algorithm 2.
Here stochasticity γ models the problem hardness.
**Analysis. First, we introduce some definitions. Given a set**
**Algorithm 2 Knockout**
1: Input: Set Z, bias ε, confidence δ, stochasticity γ,
_i = 1_
2: while |Z| > K do
3: Pair thoughts in Z randomly
4: **for every pair (a, b) do**
5: Set bias ε = [(2][1]γ[/]2[3][i/][−][3][1)][ε], confidence δ = 2δ[i][,][ b]pa =
1/2, bc = 1/2, n = 21ε[2][ log][ 2]ε [,][ r][ = 0][,][ w][a][ = 0]
6: **while |pba −** 1/2| ≤ bc − _ε and r ≤_ _n do_
7: Compare thoughts a and b by LLM. if thought
_a wins, wa = wa + 1, and vice versa._
8: _r = r + 1, bpa =_ _[w]r[a]_ [,][ b]c = q 21r [log][ 4]δ[r][2]
9: **if bpa ≤** 1/2 then
10: Keep thought b in Z and drop a, break.
11: **else**
12: Keep thought a in Z and drop b, break.
13: **end if**
14: **end while**
15: **end for**
16: _i = i + 1_
17: end while
18: Return _Z[b] by the remaining thoughts_
of thoughts Z = {z1, ..., zM _} of size M_ . Suppose there is
an unknown underlying ranking function r : Z 7→ N that
ranks all the thoughts. Let r(z1), ..., r(zM ) be the ranking
of the thoughts, such that when two elements za and zb
are compared, the higher ranked one is selected first, e.g.
_r(za) < r(zb). We define the ε-maximum via the (ε, δ)-_
PAC paradigm, which requires that the output is likely to be
close to the intended value. Specifically, given ε > 0, δ > 0,
with probability 1 _δ, the maximum selection must_
_≥_ _−_
produce an element a such that for b, with r(b) = M,
_p(a, b) ≥_ 2[1]
_[−]_ _[ε][. We call such an output][ ε][-maximum.]_
**Lemma 1 (Theorem 3 in (Falahatgar et al., 2017)).**
_Knockout(Z, ε, δ) uses O(_ _[γ][2]ε[|][2][Z][|]_ log [1]δ [)][ comparisons and]
_with probability at least 1_ _δ, outputs an ε-maximum._
_−_
**Proposition 1. Suppose that the depth of the tree is T** _, and_
_thoughts in the shallower layers are more promising than_
_those in the deeper ones. Then, the probability of missing the_
_ε-maximum promising thoughts in the τ_ _-th layer is 1_ _δ[τ]_
_−_
_with at most O(_ _[γ][2][ P]i[T]ε=1[2]_ _[|][Z][i][|]_ log [1]δ [)][ comparisons required]
_for generating the whole tree of thoughts._
**Remark 3. Proposition 1 is directly derived from Lemma 1**
by the union bound. Proposition 1 shows that, under the
general assumption of noisy comparisons and utilizing our
proposed pairwise-comparison approach, valuable intermediate thoughts will still not be overlooked, especially for the
thoughts in the shallow layer, which may be more uncertain
as they appear at the beginning of the ToT generation. We
leave the detailed proofs to Appendix B.
-----
**Generating CoTs with a pairwise-comparison approach to searching for the most promising intermediate thought**
**4. Experiments**
We test our proposed algorithm in three real-world tasks:
_question answering (QA), as well as mathematical reasoning_
tasks, namely, the Game of 24 and Sudoku Puzzles. The
LLM employed in experiments is GPT-3.5-turbo-1106.
**4.1. Experiment Setup**
**Contenders Setup. Our evaluation firstly includes a com-**
parison with a baseline method that directly queries the LLM
for the final result (we denote it as Direct); three state-of-theart contenders: CoT (Wei et al., 2022), SC-CoT (Wang et al.,
2022), and SToT (Yao et al., 2023). For the CoT method, we
query the LLM directly to get the final answer, following the
settings as in (Wei et al., 2022). For the SC-CoT method, for
a fair comparison, if without further notice, we set the CoT
number approximately the same number of tokens with our
proposed algorithm. Specifically, 15 samples were generated for each question, using the same settings as in (Wang
et al., 2022), with the final answer determined by majority
voting. The SToT approach is also implemented identically
to the setting in Yao et al. (2023). For the depth of the tree
in the SToT algorithm, we set it equal to the depth with our
proposed algorithms C-ToT (Stand.) and C-ToT (Duel.).
We further propose three contenders for comparison to test
the effectiveness of our proposed algorithm. A robust implementation of SToT is proposed that follows the main
idea of SC-CoT to account for noise in the LLM feedback,
called SC-SToT. Two variants of SToT that equipped with
our proposed mechanisms are also included, denoted as
Comp-SToT, Back-SToT. Specifically,
(1) SC-SToT: We denote the self-consistent variant of the
ToT algorithm as SC-SToT, short for Self-Consistent
ToT algorithm. To alleviate feedback noise in the ToT
algorithm, we draw inspiration from the self-consistent
CoT generation algorithm (Wang et al., 2022). Our
proposal involves querying the LLM multiple times
during intermediate thought evaluation in the ToT algorithm and using the majority voting results as the
final evaluation in the SC-SToT. This contender is a
direct extension of the ToT algorithm to account for the
feedback noise of the LLM, but the cost is very high
as it scores each intermediate thought multiple times;
(2) Comp-SToT: We replace the score-based evaluation in
the original SToT algorithm with our proposed pairwise comparison approach and refer to this variant
algorithm as Comp-SToT.
(3) Back-SToT: We replace the search algorithm in the
original SToT algorithm with our proposed mechanism, which retains all previous intermediate thoughts
with their corresponding socres and takes the highest
scoring thoughts as the most promising ones. We call
this variant algorithm as Back-SToT.
In addition, we include three state-of-the-art algorithms into
comparison: PoT (Chen et al., 2023), Self-Refine (Madaan
et al., 2023) and GoT (Besta et al., 2023). Detailed experimental results can be found in Appendix C.1.
For our proposed C-ToT approaches, we denote the C-ToT
algorithm in “Standard Mode” by C-ToT (Stand.) and set
the number of comparisons n to 1. We denote the C-ToT
algorithm that considers the general comparison noise by CToT (Duel.), and set the maximum number of comparisons
to 3 and set γ = 0.1. All experiments are repeated 3 times.
**Intermediate Thoughts Generation. In general, different**
tasks should have different thought generators. Exploiting
problem properties is essential to effectively design the intermediate thoughts. We follow the setting of (Yao et al.,
2023) to generate the intermediate thoughts. For example,
we generate the thoughts as a few words, as in QA; as a line
of equations, as in the Game of 24; or as an intermediate
solution in the Sudoku puzzle. We defer the implementation
for prompts and the cost comparison of our approaches and
other contenders to Appendix A and Appendix C.2.
**4.2. Question Answering**
**Task Setup. We first test the performance of our proposed**
algorithm on the question answering tasks using the AQuA
dataset (Ling et al., 2017), which comprises 254 arithmetic
reasoning tasks aimed at assessing logical abilities through
various mathematical computation problems. Each question in this dataset is accompanied by five multiple-choice
options, labeled from A to E. We follow the experimental
protocol as it is in the work of Wang et al. (2022). The accuracy of the responses is gauged by comparing the generated
answers with the standard solutions. Results on other QA
datasets can be found in Appendix C.1.
**C-ToT Setup. For the AQuA dataset, we set the maximum**
depth of tree of thoughts to 3. For each intermediate thought
selected, we set m = 12, thus generating 12 new intermediate thoughts as the next step. The maximum number of
selected thoughts K per layer is set to 3. Therefore, starting
from the “question” as the root, all newly generated thoughts
are compared, and we select 3 most promising intermediate
thoughts to generate the next step.
In the question answering task, it is difficult to set a fixed
length for the C-ToT, i.e., an intermediate thought may
already summarize the answer before reaching the maximum length of the C-ToT. For those selected intermediate
thoughts that have already reached an answer, we add them
to the “answer list” and do not include them in the comparison in the next round. For those intermediate thoughts
that have already reached an answer but were not selected,
we will include them in the comparison in the next round.
This mechanism thus gives excluded answers a chance to
-----
**Generating CoTs with a pairwise-comparison approach to searching for the most promising intermediate thought**
50
45
40
35
30
25
20
15
10
Method Accuracy
Method Accuracy
Direct 8.0%
CoT 4.3%
SC-CoT 8.0%
SToT 34.3%
SC-SToT 40.0%
Comp-SToT 36.3%
Back-SToT 39.0%
C-ToT (Stand.) **40.0%**
C-ToT (Duel.) **41.0%**
42
40
38
36
34
Direct 24.8%
CoT 42.3%
SC-CoT 58.4%
SToT 57.1%
SC-SToT 57.6%
Comp-SToT 59.0%
Back-SToT 58.0%
C-ToT (Stand.) **61.4%**
C-ToT (Duel.) **63.0%**
**Table 1: Average ac-**
curacy on AQuA.
320.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|
|---|---|---|---|---|---|---|---|
||||||C-T|oT (Du|el.)|
||C-T|oT (Sta Back-S|nd.) ToT|SC-SToT||||
|||||||||
|C|omp-ST|oT||||||
||SToT|||||||
Cost per question($)
**Figure 4: Accuracy and token**
costs of differenta methods.
|18|127|
|---|---|
|80|29|
Ours incorrect Ours correct
**Figure 3: Predictions of SToT**
and Comp-SToT on AQuA.
**Table 2: Average ac-**
curacy on Game of 24.
**4.3. Game of 24**
be included in the “answer list” by subsequent comparisons.
After T rounds of thought generation and comparison for selection, the selected chains are appended to the “answer list”,
and a majority voting mechanism is used on the “answer list”
to determine the final answer. We leave the implementation
details of the thought generator and the comparison prompt
for the question answering task to Appendix A.1.
**Comparison Results. We report the comparison results of**
our proposed approaches with other contenders in Table 1.
All CoT approaches outperform the Direct query method,
showing the importance of designing effective CoTs to
guide LLMs from simplicity to complexity. We can also
observe that the SC-SToT method outperforms the original
SToT method, where the algorithm scores the intermediate
thoughts multiple times to alleviate the noise. However, this
mechanism will significantly increase the token cost. We
leave the detailed discussion of token cost to Appnedix C.2.
Both proposed variants of SToT methods achieve higher
average accuracy than the original SToT method. In CompSToT, the point-wise scoring mechanism is replaced by a
pairwise comparison, while in Back-SToT, our backtracking
search algorithm replaces the original search algorithm, taking into account all previous intermediate thoughts. These
results demonstrate the effectiveness of these two mechanisms, such that the proposed C-ToT (Stand.) and C-ToT
(Duel.) approaches outperform all contenders. Moreover,
C-ToT (Duel.) achieves superior performance by further
modeling noise in the comparison.
We delve deeper to explore the benefits of the pairwise
comparison mechanism to test whether it can better find
the most promising intermediate thoughts. Note that we
do not have access to the underlying value or order of the
intermediate thoughts in each round. Therefore, we use the
final prediction error as a proxy, since the depth of the tree
structure in the QA datasets is shallow, limited to 1 to 3
levels. We quantify the number of QA problems correctly/incorrectly predicted by ToT and Comp-SToT and report
it in Figure 3. Our observation shows that Comp-SToT
predicts more correctly when S-ToT predicts incorrectly,
showing the superiority of the pairwise comparison mechanism over the pointwise scoring mechanism. This validates
the rationale of pairwise comparison mechanism.
**Task setup. This is a math problem where the goal is to use**
four numbers and basic arithmetic operations +, _,_ _, /_ to
_{_ _−_ _∗_ _}_
get a sum of 24. For example, given the input 4, 9, 10, 13,
_{_ _}_
a viable solution would be (10 4) (13 9) = 24. In our
_−_ _∗_ _−_
experiments, we use the same dataset as in the work of Yao
et al. (2023) and follow their experimental setup, which
consists of 1,362 problems taken from the 24-point game
on 4nums.com. We have selected questions numbered 401
to 500 as our question set. Each problem consisted of four
numbers selected from 1 to 13, and the goal is to formulate
a calculation using these numbers to reach a total of 24. The
accuracy of the solutions is scored based on whether all 4
input numbers were used and whether the result is 24.
**C-ToT Setup. In this task, we restore unselected interme-**
diate thoughts in previous layers and apply pruning when
the current node is inferior to previously unselected ones.
We set the maximum depth of tree to 6. The computation
terminates either when an answer containing the number 24
is derived, or when the maximum layer limit is reached.
We set the number of selected intermediate thoughts per
layer K = 5 and let the LLM generate a variable number
of new thoughts. If the total number of new thoughts is less
than or equal to twice the maximum number of selected
thoughts, thoughts are moved from the “remain list” (a list
that stores reserved thoughts) to the new node list until
the number of new thoughts reaches twice the maximum
number of selected thoughts or the “remain list” is emptied.
We also optimize the pruning process that takes place before
the comparison stage and apply it to all contenders to save
tokens. Newly generated intermediate thoughts with a single
number unequal to 24 is eliminated. The rest are then filtered
by comparison, selecting a number of new thoughts equal
to or less than the maximum number of selected thoughts.
Thoughts that are not selected are added to the “remain list”.
If one of the selected thoughts contains the final answer, it
is added to the “answer list” and the interactive process is
stopped. We leave the implementation details of the thought
generator and the comparison prompt to Appendix A.2.
**Comparison Results. We report the overall comparison**
results in Table 2. We observed trends similar to those reported in the QA task. We find that both the SToT and
-----
**Generating CoTs with a pairwise-comparison approach to searching for the most promising intermediate thought**
Method Acc. 3 × 3 Acc. 4 × 4 Acc. 5 × 5
Direct 56.7% 37.7% 16.7%
CoT 73.3% 36.7% 23.3%
SC-CoT 76.7% 50.0% 16.7%
SToT 86.7% 46.7% 46.7%
SC-SToT 96.7% 53.3% 50.0%
Comp-SToT 100.0% 46.7% 50.0%
Back-SToT 100.0% 60.0% 56.7%
C-ToT (Stand.) **100.0%** **63.3%** **60.0%**
C-ToT (Duel.) **100.0%** **63.3%** **63.3%**
**Table 3: Average accuracy on Sudoku Puzzles.**
and the program was terminated. We leave the implementation details of the thought generator and the comparison
prompt to Appendix A.3.
**Comparison Results. We report the comparison results in**
Table 3. In all three tasks, our proposed approaches, C-ToT
(Stand.) and C-ToT (Duel.), consistently achieve the highest
average accuracy, demonstrating their superior ability to handle complex reasoning tasks. While the SToT method generally outperforms the Direct method and the CoT method, it
does not always outperform the SC-ToT method, as seen in
the 4 4 Sudoku task. This phenomenon may be due to the
_×_
potential noise in pointwise scoring methods, which could
mislead the subsequent generation of intermediate thoughts.
The SC-ToT method introduces thought ensembles, which
naturally mitigate the noise in the LLM feedback. Our proposed methods consistently outperform other contenders,
suggesting that pairwise comparison mechanism effectively
mitigates noise in LLM feedback, identifies the most promising intermediate thoughts, and improves the generated chain
compared to SToT-based methods. We can observe that the
C-ToT (Stand.) algorithm achieves the same performance
as the C-ToT (Duel.) algorithm, which indicates that a
single pairwise comparison could already provide reliable
feedback in the Sudoku puzzle tasks.
**5. Conclusion**
This paper investigates a widespread but understudied problem of noisy feedback from LLMs in CoT generation tasks.
Motivated by Vapnik’s principle, we argue that for LLMs,
the simultaneous comparison of two thoughts provides a
more robust evaluation compared to individual value evaluations, and thus we propose a pairwise-comparison ToT
approach C-ToT, approaching to searching for the most
promising intermediate thought. The proposed method directly selects the most promising intermediate thought by
pairwise comparison, and incorporates previous thoughts
into the comparison to allow for rethinking. To further alleviate the noise in the comparison, we propose two variants of
the C-ToT algorithm, and analyze the theoretical properties.
Experiments on three real-world mathematical and reasoning tasks show the effectiveness of our proposed algorithm
and verify the rationale of the pairwise comparison.
CToT approaches significantly outperform the CoT-based
methods, indicating the need to interact with the LLM to
generate a more powerful chain of thoughts to handle complex reasoning tasks. The SC-SToT method improves the
performance of SToT, while the Comp-SToT can achieve a
similar improvement with pairwise comparison, indicating
the superiority of the comparison mechanism. Therefore,
the combination of the comparison mechanism and the specific design for comparison noise allows the C-ToT (Duel.)
to achieve the highest average accuracy.
We also report the average accuracy of different methods
against their token cost in Figure 4. We can observe that the
token cost per question of the Comp-SToT algorithm is the
same as that of the SToT algorithm, but it achieves a better
performance. In practice, we can reduce the token cost by
using a counter for each thought to track its comparison
frequency. If the comparison count of a thought exceeds a
threshold, we can remove it from the tree. We also leave the
detailed discussion of token cost to Appnedix C.2.
**4.4. Sudoku Puzzle**
**Task Setup. We use the Sudoku Dataset (Long, 2023),**
containing 10 Sudoku puzzles each of 3x3, 4x4, and 5x5
dimensions. Each puzzle is partially filled with numbers,
and the task is to complete the entire Sudoku grid without
changing the given numbers. The correctness of the solutions was determined by whether a complete and correct
Sudoku grid is generated.
**C-ToT Setup. In this task, we test our proposed algorithm**
against other competitors in Sudoku puzzles of three different sizes. In each case, we set the maximum depth of tree
of thoughts to 15. The computation stops either when the
correct Sudoku solution is derived or when the maximum
number of steps is reached. For each intermediate thought
selected, we set m = 5, thus generating 5 new intermediate
thoughts as the next step. The maximum number of selected
thoughts K per layer is also set to 3.
From the newly generated thoughts, a number equal to or
less than the maximum allowed was selected by comparison.
A pruning strategy was used to check for and eliminate
thoughts containing results that did not meet the Sudoku
requirements, such as duplicate numbers in the same row or
column. These non-compliant results were removed from
both the selected and unselected thoughts. The remaining
unselected thoughts were then added to the “remain list”.
If the number of selected thoughts was less than the maximum, additional thoughts were moved from the “remain list”
to the “select list” until either the maximum number was
reached or the “remain list” was emptied. We then checked
whether the “select list” contained a correct solution. If a
correct solution was found, it was added to the “answer list”
-----
**Generating CoTs with a pairwise-comparison approach to searching for the most promising intermediate thought**
**Acknowledgments**
HY was supported by Cisco Faculty Research Award.
MS was supported by JST CREST Grant Number JPMJCR18A2.
**Impact Statement**
This research investigates a general problem of CoT generation with any LLM, where we take into account the noise in
the feedback of the LLM. Therefore, when using LLMs for
complex mathematical or logical reasoning problems, the
user could benefit from our study from the aspect of generating a more effective CoT. The consequences of system
failure and bias in the data are not applicable.
**References**
Besta, M., Blach, N., Kubicek, A., Gerstenberger, R., Gianinazzi, L., Gajda, J., Lehmann, T., Podstawski, M.,
Niewiadomski, H., Nyczyk, P., et al. Graph of thoughts:
Solving elaborate problems with large language models.
_arXiv preprint arXiv:2308.09687, 2023._
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D.,
Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G.,
Askell, A., et al. Language models are few-shot learners. Advances in Neural Information Processing Systems
_(NeurIPS), 33:1877–1901, 2020._
Chen, J., Lin, S.-t., and Durrett, G. Multi-hop question answering via reasoning chains. _arXiv preprint_
_arXiv:1910.02610, 2019._
Chen, W., Ma, X., Wang, X., and Cohen, W. W. Program
of thoughts prompting: Disentangling computation from
reasoning for numerical reasoning tasks. Transactions on
_Machine Learning Research, 2023._
Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra,
G., Roberts, A., Barham, P., Chung, H. W., Sutton, C.,
Gehrmann, S., et al. Palm: Scaling language modeling
with pathways. Journal of Machine Learning Research,
24(240):1–113, 2023.
Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H.,
Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano,
R., et al. Training verifiers to solve math word problems.
_arXiv preprint arXiv:2110.14168, 2021._
Drori, I., Zhang, S., Shuttleworth, R., Tang, L., Lu, A., Ke,
E., Liu, K., Chen, L., Tran, S., Cheng, N., et al. A neural
network solves, explains, and generates university math
problems by program synthesis and few-shot learning at
human level. Proceedings of the National Academy of
_Sciences, 119(32):e2123433119, 2022._
Falahatgar, M., Orlitsky, A., Pichapati, V., and Suresh, A. T.
Maximum selection and ranking under noisy comparisons. In Proceedings of the 34th International Con_ference on Machine Learning (ICML), pp. 1088–1096,_
2017.
Kadavath, S., Conerly, T., Askell, A., Henighan, T., Drain,
D., Perez, E., Schiefer, N., Dodds, Z. H., DasSarma, N.,
Tran-Johnson, E., et al. Language models (mostly) know
what they know. arXiv preprint arXiv:2207.05221, 2022.
Kuhn, L., Gal, Y., and Farquhar, S. Semantic uncertainty:
Linguistic invariances for uncertainty estimation in natural language generation. In Proceedings of the 11th
_International Conference on Learning Representations_
_(ICLR), 2022._
Ling, W., Yogatama, D., Dyer, C., and Blunsom, P. Program
induction by rationale generation: Learning to solve and
explain algebraic word problems. In Proceedings of the
_55th Annual Meeting of the Association for Computa-_
_tional Linguistics (ACL), pp. 158–167, 2017._
Liu, N. F., Lin, K., Hewitt, J., Paranjape, A., Bevilacqua,
M., Petroni, F., and Liang, P. Lost in the middle: How
language models use long contexts. Transactions of the
_Association for Computational Linguistics, 12:157–173,_
2024.
Long, J. Large language model guided tree-of-thought.
_arXiv preprint arXiv:2305.08291, 2023._
Lu, Y., Bartolo, M., Moore, A., Riedel, S., and Stenetorp,
P. Fantastically ordered prompts and where to find them:
Overcoming few-shot prompt order sensitivity. In Pro_ceedings of the 60th Annual Meeting of the Association_
_for Computational Linguistics (ACL), pp. 8086–8098,_
2022.
Madaan, A., Tandon, N., Gupta, P., Hallinan, S., Gao,
L., Wiegreffe, S., Alon, U., Dziri, N., Prabhumoye, S.,
Yang, Y., et al. Self-refine: Iterative refinement with selffeedback. Advances in Neural Information Processing
_Systems (NeurIPS), 36:pages to appear, 2023._
Northcutt, C. G., Jiang, L., and Chuang, I. L. Confident
learning: Estimating uncertainty in dataset labels. Journal
_of Artificial Intelligence Research, 70:1373–1411, 2021._
Paul, D., Ismayilzada, M., Peyrard, M., Borges, B., Bosselut, A., West, R., and Faltings, B. Refiner: Reasoning
feedback on intermediate representations. In Proceedings
_of the 18th Conference of the European Chapter of the_
_Association for Computational Linguistics (EACL), pp._
1100–1126, 2024.
-----
**Generating CoTs with a pairwise-comparison approach to searching for the most promising intermediate thought**
Rae, J. W., Borgeaud, S., Cai, T., Millican, K., Hoffmann,
J., Song, F., Aslanides, J., Henderson, S., Ring, R.,
Young, S., et al. Scaling language models: Methods,
analysis & insights from training gopher. arXiv preprint
_arXiv:2112.11446, 2021._
Srivastava, A., Rastogi, A., Rao, A., Shoeb, A. A. M., Abid,
A., Fisch, A., Brown, A. R., Santoro, A., Gupta, A.,
Garriga-Alonso, A., et al. Beyond the imitation game:
Quantifying and extrapolating the capabilities of language
models. Transactions on Machine Learning Research,
2023.
Vapnik, V. Principles of risk minimization for learning theory. Advances in Neural Information Processing Systems,
4, 1991.
Wang, X., Wei, J., Schuurmans, D., Le, Q. V., Chi,
E. H., Narang, S., Chowdhery, A., and Zhou, D. Selfconsistency improves chain of thought reasoning in language models. In Proceedings of the 11th International
_Conference on Learning Representations (ICLR), 2022._
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi,
E., Le, Q. V., Zhou, D., et al. Chain-of-thought prompting
elicits reasoning in large language models. Advances in
_Neural Information Processing Systems (NeurIPS), 35:_
24824–24837, 2022.
Xu, W., Deng, Y., Zhang, H., Cai, D., and Lam, W. Exploiting reasoning chains for multi-hop science question
answering. In Findings of the Association for Computa_tional Linguistics: EMNLP 2021, pp. 1143–1156, 2021._
Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao,
Y., and Narasimhan, K. Tree of thoughts: Deliberate
problem solving with large language models. Advances
_in Neural Information Processing Systems (NeurIPS), 36:_
11809–11822, 2023.
Yue, Y., Broder, J., Kleinberg, R., and Joachims, T. The
k-armed dueling bandits problem. Journal of Computer
_and System Sciences, 78(5):1538–1556, 2012._
Zhou, Z.-H. Learnability with time-sharing computational
resource concerns. _arXiv preprint arXiv:2305.02217,_
2024.
10
-----
**Generating CoTs with a pairwise-comparison approach to searching for the most promising intermediate thought**
**A. Implementation Details**
In this section, we provide implementation details for all experiments, focusing primarily on the design of the thought
generation and comparison prompts for each task.
**A.1. Question Answering**
**Thought Generator. We use a zero-shot prompt. For each question, we use the same prompt multiple times to generate a**
specified number of different new thoughts.
Prompt = ’Here is a question. You should work on it step by step. Your answer
must be only the alphabet of your choice and begin with ###. For example: ###
A, which should be at the last line. Q: {question}’
**Comparison Prompt. We use multiple different prompts to generate the comparison result at each round. For the QA**
problem, we use three different prompts. We evaluate the same thought three times by using each prompt once and take the
majority as the answer.
Prompt 1 = ’You should judge which of the two analysis is better. You must only
reply 1 or 2.
1: {input_1}
2: {input_2}’
Prompt 2 = ’Find out which of the two analysis is better. You must only reply 1
or 2.
1: {input_1}
2: {input_2}’
Prompt 3 = ’Compare the two analysis and find which is better. You must only
reply 1 or 2.
1: {input_1}
2: {input_2}’
**A.2. Game of 24**
**Thought Generator. In this experiment, we use prompts similar to the ToT experiment to generate thoughts. There are two**
prompts: one is to select two numbers from the remaining list for the next step in the 24-point calculation, and then to add
the newly obtained number back into the remaining list of numbers. The other is to generate the total operation formula that
results in 24, based on all previous steps, when only one number remains. Both prompts are few-shot.
Prompt 1 = ’You should choose two of the input numbers and use basic arithmetic
operations (+ - * /) to obtain a new number. The new number should replace
those two input numbers. Give me at least 6 possible next steps.
Input: 2 8 8 14
Possible next steps:
2 + 8 = 10 (left: 8 10 14)
8 / 2 = 4 (left: 4 8 14)
14 + 2 = 16 (left: 8 8 16)
2 * 8 = 16 (left: 8 14 16)
8 - 2 = 6 (left: 6 8 14)
14 - 8 = 6 (left: 2 6 8)
14 / 2 = 7 (left: 7 8 8)
14 - 2 = 12 (left: 8 8 12)
Input: {input}
Possible next steps:’
Prompt 2 = ’Use numbers and basic arithmetic operations (+ - * /) to obtain 24.
Each step, you are only allowed to choose two of the remaining numbers to
obtain a new number.
11
-----
**Generating CoTs with a pairwise-comparison approach to searching for the most promising intermediate thought**
Input: 4 4 6 8
Steps:
4 + 8 = 12 (left: 4 6 12)
6 - 4 = 2 (left: 2 12)
2 * 12 = 24 (left: 24)
Answer: (6 - 4) * (4 + 8) = 24
Input: 2 9 10 12
Steps:
12 * 2 = 24 (left: 9 10 24)
10 - 9 = 1 (left: 1 24)
24 * 1 = 24 (left: 24)
Answer: (12 * 2) * (10 - 9) = 24
Input: {input}’
**Comparison Prompt. We use multiple different prompts to generate the comparison result at each round. For the 24**
problem, we use three different prompts. All prompts are few-shot. We evaluate the same thought three times by using each
prompt once and take the majority as the answer.
Prompt 1 = ’I will give you two groups of numbers. The evaluation criteria is if
using all of the given numbers with basic arithmetic operations (+ - * /) can
reach 24. You should compare the two inputs and decide which input is better
. You should only reply 1 or 2.
input_1: 2 12
2 * 12 = 24
input_2: 11 12
all arithmetic operations can’t get 24
Answer: 1
input_1: 1 2 4
too small
input_2: 3 8
3 * 8 =24
Answer: 2
input_1: 1 12 11
1 + 12 + 11 = 24
input_2: 12 12
12 + 12 = 24
Both can reach 24, randomly select one
Answer: 1
input_1: {input_1}
input_2: {input_2}
Answer: ’
Prompt 2 = ’I will give you two groups of numbers. Tell me which input is better.
The better one is more possible to reach 24 by using all of the given
numbers with basic arithmetic operations (+ - * /). You should only reply 1
or 2.
//same examples
input_1: {input_1}
input_2: {input_2}
Answer: ’
Prompt 3 = ’Here are two groups of numbers. Tell me which input is more possible
to use all of the given numbers with basic arithmetic operations (+ - * /) to
get 24. You should only reply 1 or 2. Don’t add any explanation.
//same examples
12
-----
**Generating CoTs with a pairwise-comparison approach to searching for the most promising intermediate thought**
input_1: {input_1}
input_2: {input_2}
Answer: ’
**A.3. Sudoku Puzzle**
**Thought Generator. We use the following prompt to generate thoughts.**
Prompt = ’This is a {puzzle_size}x{puzzle_size} two-dimensional array represents
a matrix, where some numbers are already given, and ’*’ represents the
numbers that need to be filled in. You should pick 1 or 2 ’*’ to fill in a
number between 1 to {puzzle_size}. Don’t change the given number. Don’t
complete the whole puzzle immediately until there is only 1 or 2 ’*’ left to
be filled in. Your answer should just be the same format as the question
below. When you answer, begin with ###. For example: ###[[1, *, *], [*, 1,
*[], []*[, 2, ]*[]]]
Question: {question}’
**Comparison Prompt. We use multiple different prompts to generate the comparison result at each round. For the Sudoku**
problem, we use three different prompts. All prompts are zero-shot.
Prompt 1 = ’You should judge which of the two two-dimensional array better
represents a {puzzle_size}x{puzzle_size} Sudoku puzzle. ’*’ means the value
is yet to be decided. You should judge by considering if in each row or
column 1 to {puzzle_size} could appear and only appear once. You must only
reply 1 or 2.
1:{input_1}
2:{input_2}’
Prompt 2 = ’Find which of the two two-dimensional array better represents a {
puzzle_size}x{puzzle_size} Sudoku puzzle. ’*’ means the value hasn’t been
decided. The better one should satisfy that in each row or column 1 to {
puzzle_size} could appear and only appear once. You must only reply 1 or 2.
1:{input_1}
2:{input_2}’
Prompt 3 = ’Which of the two two-dimensional array better represents a {
puzzle_size}x{puzzle_size} Sudoku puzzle? ’*’ means the value is yet to be
decided. A better one means in each row or column 1 to {puzzle_size} could
appear and only appear once. You must only reply 1 or 2.
1:{input_1}
2:{input_2}’
**B. Proofs**
We first introduce the following lemma before our main proof.
**Lemma 2 (Lemma 2 in (Falahatgar et al., 2017)). Let ep(i, j) = p(i, j) −** 1/2 be the additional probability by which i is
_preferable to j. Let z[∗]_ _be the maximum in Z and k[∗]_ _be the comparison winner. The comparison algorithm on set Z uses_
_|Z|_
4ε[2][ log][ 2]δ _[comparisons and with probability][ ≥]_ [1][ −] _[δ][,]_
_p(z[∗], k[∗])_ _γε._
e _≤_
_Proof of Lemma 2. To make this paper self-contained, we provide the proofs in (Falahatgar et al., 2017) here. First, we_
prove that the probability of the direct pairwise comparison process providing a wrong winner is less than δ. Let bp[r]i [and][ b]c[r]
13
-----
**Generating CoTs with a pairwise-comparison approach to searching for the most promising intermediate thought**
denote _pi and_ _c respectively after r number of comparisons. The output of the pairwise comparison will not be i only if_
_pb[r]i_ _[<][ 1]2 b[+][ ε][ −] b[b]c[r]_ for any r < m = 21ε[2][ log][ 2]δ [or if][ b]pi < [1]2 [for][ r][ =][ m][.]
Considering the first case, after r comparisons, by Chernoff bound,
Pr(pb[r]i _[<][ 1]2 [+][ ε][ −]_ [b]c[r]) ≤ _e[−][2][r][(][b]c[r])[2]_ = e− log [4][r]δ[2]
4r[2][ .]
Using union bound,
Considering the second case, after m =
Pr(∃r s.t. bp[r]i _[≤]_ [1]2 [+][ ε][ −] [b]c[r]) ≤ 2[δ] _[.]_
1
2ε[2][ log][ 2]δ [rounds, by Chernoff bound,]
_Pr(pb[m]i_ _[<][ 1]2_ [)][ ≤] _[e][−][2][mε][2][ =][ δ]2_ _[.]_ (1)
Thus, the probability of each of these events happening is bounded by 2[δ] [, and thus the probability of the pairwise comparison]
process providing a wrong winner is less than δ.
As each of the |Z|/2 pairs is compared at most 21ε[2][ log][ 2]δ [times, the total comparisons is less than][ |]4[Z]ε[2][|][ log][ 2]δ [. Let][ k][∗] [be the]
comparison winner and z[∗] be the maximum in Z. Let a be the element paired with z[∗]. There are two cases: _p(z[∗], a)_ _ε_
e _≥_
and _p(z[∗], a) < ε._
e
If _p(z[∗], a)_ _ε, by Eqn. (1) with probability_ 1 _δ, z[∗]_ will win and hence by definitions of z[∗] and k[∗], _p(z[∗], k[∗]) = 0_ _γε._
Alternatively, if e _≥_ ep(z[∗], a) < ε, let winner(i, j ≥) denote the winner between− _i and j when compared for_ 21ε e[2][ log][ 1]δ [times. Then,] ≤
(a) (b) (c)
_r(a)_ _r(winner(z[∗], a))_ _r(k[∗])_ _r(z[∗])_
_≤_ _≤_ _≤_
where (a) follows from r(a) ≤ _r(z[∗]), (b) and (c) follow from the definitions of z[∗]_ and k[∗] respectively. From strong
stochastic tranisitivity on a, k[∗] and z[∗], _p(z[∗], k[∗])_ _γp(z[∗], a)_ _γε._
e _≤_ e _≤_
Now we begin to prove Lemma 1.
_Proof of Lemma 1. To make this paper self-contained, we also provide the proofs in (Falahatgar et al., 2017) here. We first_
show that with probability ≥ 1 − _δ, the output of Knockout is an ε-maximum. Let εi = cε/2[i/][3]_ and δi = δ/2[i]. Note that εi
and δi are bias and confidence values used in round i. Let bi be a maximum element in the set Z before round i. Then by
Lemma 2, with probability ≥ 1 − _δi,_
_pe(bi, bi+1) ≤_ 2[cε][i/][3][ .] (2)
By union bound, denote by p[′] the probability that Eqn. (2) does not hold for some round 1 _i_ log _Z_, then we have
_≤_ _≤_ _|_ _|_
log |Z|
X
_δi =_
_i=1_
log |Z|
X
_i=1_
_p[′]_ _≤_
2[i][ ≤] _[δ.]_
With probability 1 _δ, Eqn. (2) holds for all i and by stochastic triangle inequality,_
_≥_ _−_
log |Z|
X
_pe(bi, bi+1) ≤_
_i=1_
_cε_
2[i/][3][ =][ ε.]
_pe(b1, blog |Z|+1) ≤_
_i=1_
We now bound the number of comparisons. Let ni = 2|[i]Z[−]|[1][ be the number of elements in the set at the beginning of round][ i][.]
Denote by NCi the number of comparisons at round i, then we have
NCi ≤ _[n]2[i]_ 2c[2]ε[2][ ·][ log 2][i]δ[+1] _._
_[·][ γ][2][2][2][i/][3]_
14
-----
**Generating CoTs with a pairwise-comparison approach to searching for the most promising intermediate thought**
Hence the number of comparisons in all rounds is
log |Z|
X
_i=1_
_Z_
_|_ _|_
2[i][ ·][ γ]2[2]c[2][2][2]ε[i/][2][3][ ·][ log 2][i]δ[+1]
_≤_ _[|][Z][|][γ][2]_
2c[2]ε[2]
=
_[|][Z][|][γ][2]_
2c[2]ε[2]
_i + log [2]_
2[i/][3]
_i=1_
21/3
+ [1]
_c[2]_ _c_ [log 2]δ
_Z_ _γ2_
_|_ _|_
= log [1]
_O_
_ε[2]_ _δ_
Now we begin to prove Proposition 1.
_Proof of Proposition 1. In each round of comparison, according to Lemma 1, we have the probability of 1_ _δ to output_
_−_
the ε-maximum thoughts. Suppose that the thoughts in the shallower layers are more promising than those in the deeper
ones, which is often the case in step-by-step reasoning tasks. For example, r([z[1]j [])][ ≥] _[r][([][z][1]i_ _[,][ z][2][])][ for][ j][ ∈]_ [[][K][]][,][ i][ ̸∈] [[][K][]][, and]
**z[2]** _Z_ [2]. When generating the intermediate thoughts in the τ -th layer, the probability that the ε-maximum thoughts are not
_∈_
selected is 1 _δ[τ]_ .
_−_
Therefore, the probability of missing the ε-maximum promising thoughts in the τ -th layer is 1 _δ[τ]_ with at most
_−_
_O_ _γ[2][ P][T]iε=1[2]_ _[|][Z][i][|]_ log [1]δ comparisons required for generating the tree.
**C. Additional Experimental Results**
**C.1. Experimental Results Summary**
First, we present a summary of the experimental results.
We introduce three additional QA datasets—Gsm8k (Cobbe et al., 2021), Coin Flip (OOD) (Wei et al., 2022), and
BBH (Srivastava et al., 2023)—along with two more state-of-the-art algorithms for comparison, with implementation details
provided below.
**Contenders. PoT (Chen et al., 2023): PoT requires in-context samples to guide LLMs generating Python code step-by-step,**
and the number of samples is a hyperparameter. Since SToT and C-ToT do not always require such samples, we choose
appropriate number of samples in PoT to maintain experimental consistency with other methods. We use one in-context
sample for all datasets expecting Game of 24, which uses 3 samples.
Self-Refine (Madaan et al., 2023): The parameter of in Self-Refine is the number of iterations of Self-Refine. For each task,
we set the number of iterations to 3 so that the number of tokens used is approximately the same. We use the same template
as in the original paper and set the number of in-context examples to the same as in the PoT.
We can observe in Table 4 that the proposed C-ToT (Stand.) and C-ToT (Duel.) algorithms achieve the best performance in
almost all tasks. The Self-Refine also achieves promising results in QA tasks, while it performs relatively poorly compared
to the SToT and C-ToT algorithms in complex tasks that require step-by-step interaction and reasoning, such as Game of 24.
Method / Data Gsm8k Coin flip (OOD) BBH AQuA Game of 24 Sudoku
CoT 68.8 2.5 56.3 2.1 67.7 2.6 42.3 2.5 4.3 3.2 44.4 2.3
_±_ _±_ _±_ _±_ _±_ _±_
SToT 59.3 2.4 62.1 2.9 69.6 2.0 57.1 2.1 34.3 3.9 60.0 3.8
_±_ _±_ _±_ _±_ _±_ _±_
PoT 62.2 ± 3.3 **71.1 ± 2.8** 66.7 ± 2.3 47.5 ± 4.5 27.2 ± 3.7 43.1 ± 3.9
Self-Refine 67.6 3.2 64.8 1.6 69.2 3.7 56.2 4.1 16.3 3.9 52.3 3.0
_±_ _±_ _±_ _±_ _±_ _±_
C-ToT (Stand.) 71.3 3.3 66.2 2.8 75.7 1.9 61.4 2.9 40.0 4.6 74.4 2.1
_±_ _±_ _±_ _±_ _±_ _±_
C-ToT (Duel.) **73.0 ± 2.7** 70.4 ± 2.1 **80.0 ± 2.2** **63.0 ± 2.6** **41.0 ± 3.0** **75.4 ± 3.3**
**Table 4: Performance comparisons on benchmark datasets. On each dataset, 5 test runs were conducted and the average**
accuracy as well as standard deviation are presented, and the best one is emphasized in bold.
Our method is generally not comparable to GoT (Besta et al., 2023), but the idea of comparison can improve GoT. The
GoT divides a complex task into several subtasks, generate thoughts for each subtask and merge them, while the ToT-style
15
-----
**Generating CoTs with a pairwise-comparison approach to searching for the most promising intermediate thought**
methods aim at step-by-step thinking, where the new generated thought is based on previous ones, so it is hard to split or
merge. Thus, they are better suited for different tasks. For example, for sorting, we can divide it into subtasks; while for the
Game of 24, the answer should be generated based on previous thoughts, so C-ToT is more proper.
We can use the idea of comparison to improve the solving of subtasks, so we can subsequently improve the GoT. The
following is a preliminary study of the sorting task in the GoT paper, remaining all its setups, and comparing the performance
with scoring mechanism by LLM and the pairwise comparison mechanism by LLM in subtasks. Accuracy and token usage
are reported as in Table 5. 5 test runs were conducted and the average accuracy and the number of tokens (completion tokens
/ prompt tokens) are reported.
Data / Method S-GoT C-GoT
32 elements 90.4%; 1232/12726 90.5%; 1217/14969
64 elements 86.1%; 2592/31118 86.6%; 2356/35313
**Table 5: Performance comparisons with GoT.**
**C.2. Cost and Efficiency**
In Table 6, 7 and 8, we report the token cost and average accuracy comparison of our proposed approaches with other
contenders. 5 test runs were conducted and the average accuracy and the number of tokens (completion tokens / prompt
tokens) are reported.
We can observe that the token costs of our proposed C-ToT approaches and the contenders are task-specific and generally
incomparable. Since we preserve all previous intermediate thoughts in the QA task, the token cost of the C-ToT approaches
is higher than that of the S-ToT algorithm, but the performance is simultaneously improved. In the QA and Game of 24,
the token costs of the Comp-SToT and S-ToT approaches are comparable, but the Comp-SToT approaches achieve better
average accuracy. In the Sudoku puzzle, the token cost of Comp-SToT is lower than that of S-ToT. These results indicate the
effectiveness of using the direct pairwise comparison approach to find the most promising intermediate thoughts. In practice,
we can further reduce the token cost by using a counter for each intermediate thought to track its comparison frequency. If
the comparison count of an intermediate thought exceeds a threshold, we can remove it from the tree.
Method Generate/Prompt tokens Cost per case Accuracy
CoT 106/136 0.0003 42.3%
SC-CoT 1647/2023 0.0054 58.4%
SToT 1551/5415 0.0085 57.1%
SC-SToT 2081/13459 0.018 57.6%
Comp-SToT 1515/6299 0.0093 59.0%
Back-SToT 1551/8135 0.011 58.0%
C-ToT (Stand.) 1498/14627 0.017 61.4%
C-ToT (Duel.) 1649/52044 0.055 63.0%
**Table 6: Average accuracy of different methods with token costs on QA.**
Method Generate/Prompt tokens Cost per case Accuracy
CoT 99/437 0.0006 4.3%
SC-CoT 1717/6555 0.010 8.0%
SToT 1368/12205 0.015 34.3%
SC-SToT 2284/40825 0.045 40.0%
Comp-SToT 1309/11963 0.015 36.3%
Back-SToT 1679/23178 0.027 39.0%
C-ToT (Stand.) 2452/21003 0.026 40.0%
C-ToT (Duel.) 2174/60578 0.065 41.0%
**Table 7: Average accuracy of different methods with token costs on Game of 24.**
16
-----
**Generating CoTs with a pairwise-comparison approach to searching for the most promising intermediate thought**
Method Generate/Prompt tokens Cost per case Accuracy
CoT 431/178 0.001 44.4%
SC-CoT 6292/2666 0.015 47.8%
SToT 6309/23933 0.037 60.0%
SC-SToT 6568/70129 0.083 66.7%
Comp-SToT 2666/13164 0.019 65.6%
Back-SToT 4536/21383 0.030 72.2%
C-ToT (Stand.) 5340/29565 0.040 74.4%
C-ToT (Duel.) 7148/86425 0.101 75.5%
**Table 8: Average accuracy of different methods with token costs on Sudoku.**
**C.3. Ablation Studies**
We report the ablation studies on Aqua dataset, while we also observe the same trend in other datasets. 5 test runs were
conducted and the average accuracy and the number of tokens (completion tokens / prompt tokens) are reported.
We study the number of intermediate thoughts generated and selected in each round and report the results in Table 9. With
a fixed number of thoughts generated each round, selecting more thoughts leads to higher costs, while accuracy may not
benefit much. With a fixed number of thoughts selected each round, generating more thoughts leads to a significant increase
in accuracy because we can explore more thoughts. These results could benefit the further use of CoT methods.
Generate m / Select K _K = 1_ _K = 2_ _K = 3_ _K = 5_ _K = 6_
_m = 1_ 41.6% ; 50/515 – – – –
_m = 3_ 46.7% ; 139/1525 50.2% ; 244/2565 49.8% ; 375/3658 – –
_m = 5_ 50.1% ; 288/2653 53.5% ; 495/4452 55.0% ; 549/5945 54.8% ; 882/9379 –
_m = 10_ 53.0% ; 518/5189 56.6% ; 896/8716 57.3% ; 1091/11875 57.5% ; 2120/19473 57.6% ; 2155/20982
_m = 12_ 53.8% ; 674/6334 57.4% ; 1212/10732 61.4% ; 1498/14627 61.4% ; 1531/16002 61.5% ; 2125/26329
**Table 9: Ablation studies on parameters.**
We study the threshold for removing intermediate thoughts vs. accuracy and cost and report the results in Table 10. Under
the same tree depth, different thresholds show relatively stable performance.
Depth d / Threshold Th Th= 1 Th= 2 Th= 3 Th= 4 Th= 5
_d = 3_ 58.0% ; 1062/11337 60.7% ; 1315/13611 61.0% ; 1478/15111 61.4% ; 1513/15713 61.4% ; 1558/16107
_d = 4_ 58.8% ; 1426/13792 60.9% ; 1732/18822 61.4% ; 1771/19940 61.6% ; 1804/20003 61.3% ; 2001/20312
_d = 5_ 60.0% ; 1760/17556 61.0% ; 2143/21006 61.6% ; 2271/23412 61.7% ; 2265/24031 61.3% ; 2425/24755
**Table 10: Ablation studies on the number of thresholds for removing intermediate thoughts.**
We study the number of comparison and report the results in Table 11. In general, increasing the number of comparison
results in better accuracy and higher cost.
Dataset / Comparison n _n = 1_ _n = 3_ _n = 5_ _n = 8_
Aqua 61.4% ; 1498/14627 63.0% ; 1649/52044 64.3% ; 1824/78194 64.7% ; 1997/128802
**Table 11: Ablation studies on the number of comparisons.**
17
-----
| [
"Zhen-Yu, Zhang",
"Siwei, Han",
"Gang, Niu",
"Masashi, Sugiyama",
"Huaxiu, Yao"
] | 2024-07-08T00:00:00 | ICML 2024 Poster | true | 0 | 0 | null | https://proceedings.mlr.press/v235/zhang24t.html | https://arxiv.org/abs/2402.06918 | https://www.semanticscholar.org/paper/0cadb5ff535dbe5a97befb7a35baca987c620654 |
Generative AI for Math: Part I -- MathPile: A Billion-Token-Scale Pretraining Corpus for Math | High-quality, large-scale corpora are the cornerstone of building foundation models. In this work, we introduce \textsc{MathPile}, a diverse and high-quality math-centric corpus comprising about 9.5 billion tokens. Throughout its creation, we adhered to the principle of ``\emph{less is more}'', firmly believing in the supremacy of data quality over quantity, even in the pre-training phase. Our meticulous data collection and processing efforts included a complex suite of preprocessing, prefiltering, language identification, cleaning, filtering, and deduplication, ensuring the high quality of our corpus. Furthermore, we performed data contamination detection on downstream benchmark test sets to eliminate duplicates. We hope our \textsc{MathPile} can help to enhance the mathematical reasoning abilities of language models. We plan to open-source different versions of \mathpile with the scripts used for processing, to facilitate future developments in this field. | null | ## Generative AI for Math: Part I MATHPILE: A Billion-Token-Scale Pretraining Corpus for Math
**Zengzhi Wang[3,4]** **Rui Xia[3]** **Pengfei Liu[1,2,4][∗]**
1Shanghai Jiao Tong University 2Shanghai Artificial Intelligence Laboratory
3Nanjing University of Science and Technology 4Generative AI Research Lab (GAIR)
Nanjing University of Science and Technology
**Data Preprocessing [email protected]**
**& Prefiltering**
**10.43B Tokens**
**Language IDAbstract**
High-quality, large-scale corpora are the cor-10.20B Tokens
nerstone of building foundation models. InCleaning & Filtering
this work, we introduce MATHPILE, a diverse
and high-quality math-centric corpus compris-10.18B Tokens
ing about 9.5 billion tokens. Throughout its
creation, we adhered to the principle of “Deduplication _less_
_is more”, firmly believing in the supremacy_
of data quality over quantity, even in the pretraining phase. Our meticulous data collection
and processing efforts included a complex suite
of preprocessing, prefiltering, language identification, cleaning, filtering, and deduplication,
ensuring the high quality of our corpus. Fur
thermore, we performed data contamination
detection on downstream benchmark test sets
to eliminate duplicates. We hope our MATHPILEcan help to enhance the mathematical reasoning abilities of language models. We plan
to open-source different versions of MATHPILEwith the scripts used for processing, to
facilitate future developments in this field.
**Introduction**
Math-Centric Diversity
“Less is More’’ Principle Textbooks Lecture Notes Web
Open-Source 9.5B Tokens
arXiv StackExchange Definitions
K-12 College Postgraduate
Wikipedia Theorem Proofs
Math Competition
High-Quality Data
Documentation
Preprocessing Prefiltering
Dataset Sheet Quality Annotations
Language ID Cleaning & Filtering
Dedup. Data Contamination Detect. Length Distribution Data Statistics
Figure 1: Key features of MATHPILE.
In this work, our concern centers on mathematical reasoning capabilities within foundational language models (Chern et al., 2023; Azerbayev et al.,
2023b, inter alia), for which can potentially boost
the application in education tools, automated problem solving, data analysis, code programming and
so on, thereby improving user experience. To facilitate this, we are not directly building a model, but
rather focusing on a more fundamental aspect: cre_ating a high-quality and diverse pre-training cor-_
_pus tailored for the math domain, namely MATH-_
PILE. Specifically, our work is significantly different from the previous work in the following
characteristics (See Table 1 for comparison):
**Math-centric. Previous open-sourced pretrain-**
ing corpora have typically focused on general
domains, such as Pile (Gao et al., 2021), RedPajama (Together, 2023a) and Dolma (AllenAI,
2023). Others have concentrated on multilingual aspects or programming languages, such
as ROOTS (Laurençon et al., 2022) and The
Stack (Kocetkov et al., 2022), respectively. However, a notable absence in these offerings is a corpus specificlly tailoring for mathematics. While
Powerful conversational models such as ChatGPT (OpenAI, 2022) and Claude (Anthropic,
2023) are significantly transforming numerous
products and aspects of daily life. A crucial factor
in their success is the strength of the foundational
language model. State-of-the-art foundation models are typically pretrained using massive, diverse
and high-quality corpora, encompassing sources
like Wikipedia, scientific papers, community forums, Github code, web pages and more (Gao
et al., 2021; Together, 2023a). We expect a powerful foundational language model to possess comprehensive and balanced capabilities, including
language understanding, commonsense reasoning,
mathematical reasoning, language generation, and
more (Bubeck et al., 2023).
_∗Corresponding author_
-----
there exist some corpora designed for training
or continually improving math-specific language
models, such as Minerva’s mathematical training
dataset (Lewkowycz et al., 2022) and OpenAI’s
MathMix (Lightman et al., 2023), these are not
open-sourced. Note that a recent work concurrent
with ours, OpenWebMath (Paster et al., 2023), although math-centric, is solely sourced from web
pages. We will discuss the comparison with it later.
Recognizing this gap, our work aims to bridge the
divide by developing an open-sourced mathematical corpus, democratizing access to high-quality
mathematical data and enabling researchers and
developers to advance the capabilities of language
models in mathematical reasoning more effectively
and inclusively.
**Diversity. While Hendrycks et al. (2021b) in-**
troduced AMPS, a problem set ranging from elementary mathematics to multivariable calculus
(K-12 level) for pre-training purposes, it lacks
content at the college-level and more challenging
competition-level mathematics, focusing instead
on a supervised dataset rather than an extensive
corpus. The ProofPile corpus, introduced by Azerbayev et al. (2023a), aims to improve autoformalization and formal proving capabilities in models,
yet its scope is confined to formal proving, not covering the broader mathematical domain from K-12
to postgraduate level. Concurrently with our work,
Paster et al. (2023) propose the OpenWebMath
corpus, featuring a corpus composed of mathematical web pages. However, our corpus goes beyond
web pages, integrating high-quality mathematics
textbooks, lecture notes, scientific papers from
arXiv in the field of mathematics, and carefully
selected content from StackExchange, ProofWiki,
and Wikipedia among others, which positions our
corpus as a richer and more diverse mathematical
resource for language models.
**High-Quality. Recent studies have increasingly**
highlighted the detrimental effects of low-quality
and repeated content in pretraining corpora on
model training, as evidenced in various works (Allamanis, 2019; Luccioni and Viviano, 2021; Lee
et al., 2022; Hernandez et al., 2022; Longpre et al.,
2023). The importance of high-quality datasets
has thus come to the fore. It has been shown that
properly filtered and deduplicated web data can
yield models as equally powerful as those trained
on curated, high-quality corpora (Penedo et al.,
2023). This similar practice has been recently
adopted in several notable studies (Cerebras, 2023;
AllenAI, 2023; Together, 2023b). A notable example is the 1.3 billion-parameter code-focused
model pretrained on synthetically generated textbooks and filtered web pages, a project that broke
existing scaling laws although did not open source
its data (Gunasekar et al., 2023). It’s important to
emphasize that quality of the corpus is far more significant than its quantity. For instance, OpenAI’s
MathMix comprises only 1.5 billion tokens. In this
work, we diligently adhere to the principle of less is
_more, as outlined in Zhou et al. (2023). To achieve_
a high-quality corpus, we have undertaken extensive preprocessing, prefiltering, cleaning, filtering,
and deduplication efforts. We are committed to
continually refining and optimizing this corpus,
striving for excellence in every aspect to make a
distinct contribution to the math domain.
**Data Documentation. Auditing large-scale pre-**
training corpus, such as carefully documenting
the characteristics of the data, intended uses, its
information content, and any potential biases is
of paramount importance (Bender and Friedman,
2018; Gebru et al., 2021; McMillan-Major et al.,
2023). Despite growing advocacy for such practices, many pre-training corpus are released without detailed data documentation due to their large
size (Mitchell et al., 2022). Recently, some works
have audited certain publicly available pre-training
datasets that previously lacked thorough documentation. These audits found that these corpus potentially contain useless content (e.g., hate speech,
sexually explicit content) (Luccioni and Viviano,
2021; Kreutzer et al., 2022; Elazar et al., 2023),
copyright-violating content (Bandy and Vincent,
2021), and the test sets for downstream tasks (Allamanis, 2019; Dodge et al., 2021). Adhering steadfastly to the principle of enhancing transparency in
pretraining corpora for practitioners following previous efforts, we have provided a dataset sheet for
our MATHPILE(see Table 5). Throughout our extensive data processing workflow, numerous documents were annotated for quality, such as language
identification scores and the ratio of symbols to
words (as exemplified in Figure 4). These quality annotations enable future users to apply their
specific filters based on these scores. Additionally,
-----
A document from MATHPILE-Textbooks
**Text:**
# LINEAR TORIC FIBRATIONS
SANDRA DI ROCCO
## INTRODUCTION TO TORIC FIBRATIONS
(a)Definition 1.1. A toric fibration is a surjective flat map X is a toric variety _f : X →_ _Y with connected fibres where_
(b) Y is a normal algebraic variety
(c) dim(Y ) < dim(X).
Remark 1.2. Observe that ifF . _f : X →_ _Y is a toric fibration then Y and a general fiber F are also toric varieties. Moreover if X is smooth, respectively Q-factorial then so is Y and_
Combinatorial characterization. A toric fibration has the following combinatorial characterization (see [EW, Chapter VI] for further details). Letbe a toric variety of dimension n and let i : ∆ _,→_ _N a sublattice._ _X = XΣ, where Σ ⊂_ _N_ _[∼]= Z[n],_
Proposition 1.3. [EW] The inclusion i induces a toric fibration if and only if:
(b) For every(a) ∆ is a primitive lattice, i.e. σ ∈ Σ(n), σ = (∆ τ +⊗ ηR, where) ∩ _N = ∆ τ ∈_ ∆. and η ∩ ∆= {0} (i.e. Σ is a splitfan).
We briefly outline the construction. The projectiontoric variety defined by the fan ΣF = {σ ∈ Σ ∩ π∆ : N}. _→_ _N/∆_ induces a map of fans Σ → _π(Σ) and thus a map of toric varieties f : X →_ _Y . The general fiber F is a_
When the toric varietyvarieties (X, L) and XF, L in a toric fibration is polarized by an ample line bundle|F, for a general fiber F, define lattice polytopes L we will call the pair P(X,L), P(F, L (|Ff ) :. The polytope X → _Y, L) P a polarized toric fibration. Observe that the polarized toric(X,L) is in fact a "twisted sum" of a finite number of_
lattice polytopes fibering over _P(F, L|F ). Definition 1.4. Let R0, . . ., Rk ⊂_ ∆ be polytopes. Let π : M → Λ be a surjective map of lattices such that π (Ri) = vi and
theisomorphic to v0, · · ·, v Conv (k are distinct vertices ofR0, . . ., Rk). We will denote it by: Conv (v0, . . ., vk). We will call a Cayley π-twisted sum (or simply a Cayley sum) of R0, . . ., Rk a polytope which is affinely
[R0 ⋆. . . ⋆Rk]π
If the polytopes Ri are additionally normally equivalent, i.e. they define the same normal fan ΣY, we will denote the Cayley sum by:
Cayley (R0, . . ., Rk)(π,Y ) .
These are the polytopes that are associated to a polarized toric fibration. Consider a sublattice i : ∆ _,→_ _N and the dual lattice surjection π : M →_ Λ.
Proposition 1.5. [CDR08] The sublattice i : ∆ _,→_ _N induces a polarized toric fibration (f : X →_ _Y, L) if and only if P(X,L) = Cayley (R0, . . ., Rk)(π,Y ) for some_
normally equivalent polytopes R0, . . ., Rk.
The polarized general fiber _F, L|F_ corresponds to the polarized toric variety associated to the polytope P(F, L|F ) = Conv (v0, . . ., vk) and the polytopes R0, · · ·, Rk
define the embeddings of the invariant sections polarized by the restrictions of _L._
Example 1.6. Consider the Hirzebruch surface F1 = Blp P[2][] = P _OP1 ⊕_ _OP1 (1)_ polarized by the tautological line bundle ξ = 2ϕ[∗] []OP2 (1) _−_ _E where ϕ is the_
blow-up map and E the exceptional divisor. The associated polytope is _P_ = Cayley (∆1, 2∆1).
FIGURE 1. The Hirzebruch surface P _OP1 ⊕_ _OP1 (1)_
Example 1.7. More generally:
- when π(P ) = ∆t the polytope Cayley (R0, . . ., Rk)(π,Y ) defines the variety P (L0 ⊕ _. . . ⊕_ _Lk ), where the Li are ample line bundles on the toric variety Y, polarized_
by the tautological bundle ξ. In particular L|F = OPt (1).
- When π(P ) is a simplex (not necessarily smooth) Cayley (R0, . . ., Rk)(π,Y ) defines a Mori-type fibration. A fibration whose general fiber has Picard rank one. - When
_π(P ) = s∆t then again the variety has the structure of a P[t]-fibration whose general fiber F is embedded via an s-Veronese embedding:_ _F, L|F_ = P[t], OPt (s) .
For general Cayley sums, [R0 ⋆. . . ⋆Rk]π, one has the following geometrical interpretation. Let (X, L) be the associated polarized toric variety and let Y be the toric variety
defined by the Minkowski sum R0 + . . . + Rk. The fan defining Y is a refinement of the normal fan of Ri for i = 0, . . ., k. Consider the associated birational maps
_ϕsymbol the maps of fansi : Y →_ _Yi, where ( ϕYi : Σi, LiY) → is the polarized toric variety defined by the polytopeΣYi . Define then the fan:_ _Ri. The line bundles Hi = ϕ[∗]i_ [(][L][i][)][ are nef line bundles on][ Y][ . Denote by the same]
ΣZ : _ϕ[−]i_ [1] _σj_ _× ηl, for all σj ∈_ ΣYi, ηl ∈ Σ∆
n o
where Λ = Conv (v0, . . ., vk). It is a refinement of ΣX and thus the defining variety Z is birational to X. Moreover it is a split fan and thus it defines a toric fibration
_fH :i Z on the invariant sections. →_ _Y . The Cayley sum [R0 ⋆. . . ⋆Rk]π is the polytope defined by the nef line bundle ϕ[∗](L), and the polytopes Ri are the polytopes defined by the nef line bundles_
Historical Remark. The definition of a Cayley polytope originated by what is "classically" referred to as the Cayley trick. We first recall the definition of Resultant and Discriminant.
Let f1(x), . . ., fn(x) be a system of n polynomials in n variables x = (x1, . . ., xn) supported on A ⊂ Z[n]. This means that fi = Πaj ∈Acj _x[aj]_ . The resultant (of
_A ), RA_ _cj ), is a polynomial in the coefficients cj_, which vanishes whenever the corresponding polynomials have a common zero.
The discriminant of a finite subset _A, ∆A, is also a polynomial ∆A_ _cj_ in the variables cj ∈ _A which vanishes whenever the corresponding polynomial has a multiple root._
Theorem 1.8. [GKZ][Cayley Trick] The A-resultant of the system f1, . . ., f _n equals the Adiscriminant of the polynomial:_
_n_
_p(x, y) = fi(x) +_ 2 _yi−1fi(x)._
X
Let Ri = N (fi) ⊂ R[n] be the Newton polytopes of the polynomials fi. The Newton polytope of the polynomial p(x, y) is the Cayley sum [R1 ⋆. . . ⋆Rn]π, where
_π : R[2][n][−][1]_ _→_ R[n][−][1] is the natural projection such that π [R1 ⋆. . . ⋆Rn]π = ∆n−1.
...
**Subset: Textbooks**
**meta:**
book_name: Linear Toric Fibrations_Sandra Di Rocco,
type: Notes,
...
Figure 2: An example textbook document in MATHPILE
-----
we have conducted extensive deduplication for this
corpus and performed data contamination detection
with downstream benchmark test sets, removing
any duplicated samples identified (cf. § 3.4). Interestingly, we have also discovered a significant
number of questions from downstream test sets in
OpenWebMath (cf. § 3.4). This underscores the
importance of meticulous data documentation. We
plan to release different versions of MATHPILEto
facilitate future use, further emphasizing the utility
and adaptability of our work. See Appendix B for
examples in MATHPILE.
In conclusion, we hope to facilitate the growth of
the field of AI for mathematics by contributing this
specialized, high-quality, diverse corpus focused
on the mathematical domain while maintaining utmost transparency about the data for practitioners.
Our work lays the groundwork for training more
powerful mathematical problem-solving models in
the future.
**2** **The Collection of Corpus**
In order to construct MATHPILE, we gather data
from a variety of sources, which also includes a
component of manual collection.
**2.1** **Mathematical Textbooks**
Textbooks are typically self-contained, encompassing mathematical concepts, exercises, and detailed
solution steps. We believe that such resources are
valuable for educational purposes, not only for
human but also machine learning models. Some
recent works have also corroborated this point,
even though they didn’t focus on the math domain,
and their textbooks are not genuine but were synthesized from more advanced models (Gunasekar
et al., 2023; Li et al., 2023).
To collect these genuine and high-quality textbooks, we began by conducting extensive manual
searches across the internet, seeking open-source
and freely accessible mathematics-related textbook
websites. Afterwards, we proceeded to download
these PDF files, resulting in a collection of 38 K-12
level textbooks, along with 369 college-level mathematics textbooks that cover a wide range of subjects including linear algebra, probability theory,
calculus, and optimization. In addition to these
textbooks, we also included 467 college course
handouts and lecture notes, which tend to be more
concise compared to full-length textbooks. Subsequently, we employed the Mathpix API[1] to parse
the PDFs into markdown format. Then, we meticulously cleaned up extraneous elements such as
parsed image URLs, preface sections, table of contents, acknowledge sections, index sections, and
consecutive empty lines within the parsed content.
After that, we arrived at a total of 874 documents.
We also refined high-quality mathematicsrelated synthetic textbooks from OpenPhi Project.[2]
It is an open-source counterpart to the Phi
work (Gunasekar et al., 2023). While the underlying model and generation process differ, the output
encompasses a broad spectrum of subjects, extending beyond programming. To isolate mathematicsrelated documents, we employed a straightforward criterion: the presence of the symbol “$$”,
commonly associated with mathematical expressions. This approach yielded 3,889 documents
from an initial pool of 124,493. As the volume of
pre-training data escalates, the synthesis of highquality data becomes increasingly crucial. More
advanced filtering methods and mathematical corpora synthesis are left for future exploration.
**2.2** **Mathematical Papers from ArXiv**
ArXiv offers a free distribution service and serves
as an open-source archive housing millions of scientific papers. It also provides invaluable training data for numerous powerful language models (Touvron et al., 2023a; Together, 2023a, inter
_alia). In our endeavor to collect mathematical pa-_
pers from ArXiv, we identify 50 sub-subjects spanning Mathematics, Computer Science, Statistics,
Physics, Quantitative Finance and Economics. Our
process involved filtering ArXiv’s metadata[3] to focus on the chosen subjects (cf. Table 6), followed
by accessing the source LaTex files (if available).
We exclusively retained the LaTex files and consolidated multiple files based on their respective
order as indicated by commands such as “include”
and “input” within the main LaTex file of each
paper. Subsequently, we undertook extensive transformations to enhance data clarity and consistency.
Specifically, we
[1https://mathpix.com/ocr](https://mathpix.com/ocr)
[2https://huggingface.co/open-phi](https://huggingface.co/open-phi)
[3https://www.kaggle.com/datasets/](https://www.kaggle.com/datasets/Cornell-University/arxiv)
[Cornell-University/arxiv](https://www.kaggle.com/datasets/Cornell-University/arxiv)
-----
**Open** **Has Synth.** **Data Contam.**
**Datasets** **Type** **Target Domain** **# Textbooks** **# Tokens** **Source**
**Source** **Data** **Detection**
Minerva ✗ Corpus General Math ✗ ✗ ✓ 38.5B arXiv, Web
MathMix ✗ Corpus + PS General Math **?** ✓ ✓ 01.5B **?**
arXiv, Textbooks, Lib.,
ProofPile ✓ Corpus Theorem Proving 7 ✗ ✗ 08.3B
StackExchange, ProofWiki, MATH
OpenWebMath ✓ Corpus General Math ✗ ✗ ✗ 14.7B Web
DM-Mathematics ✓ PS Math Competition ✗ ✓ **-** 04.4B Synthesis
AMPS ✓ PS Math Competition ✗ ✓ ✗ 00.7B Khan Academy, Synthesis
arXiv, Textbooks, StackExchange,
MATHPILE(Ours) ✓ Corpus General Math 3,979 ✓ ✓ 09.5B
Wikipedia, ProofWiki, Web
Table 1: The comparison of MATHPILEwith other mathematical Corpora. PS denotes the problem set type. For
some corpora that are not open-sourced, details are unknown and comparisons are based only on information from
corresponding papers, with unknowns indicated by “?”. Note that token counts vary with different tokenizers; we
primarily copy statistics from each dataset’s technical report. For our corpus, we default to using the GPTNeoX-20B
tokenizer (Black et al., 2022). DM-Mathematics was introduced in Saxton et al. (2019). We use “Minerva” to refer
to the dataset adopted by Minerva. Note that ProofPile-2 (Azerbayev et al., 2023b), which includes OpenWebMath,
RedPajama’s arXiv subset (non-math-centric) and algebra code, is not included in this comparison.
1) removed comments in each paper;
2) reverted many macro commands (e.g.,
“newcommand”) to their original forms;
3) omitted figure environments while retaining
captions and figure labels;
4) excluded acknowledgements sections;
5) eliminated references in each paper;
6) condensed more than three consecutive empty
lines to two;
7) replaced certain formatting commands like
“hfill” and “vspace” with an empty line;
8) replaced the “maketitle” command in the
main document body with the actual title (if
available);
9) preserved only the content within the main
body of the LaTex document.
Finally, we had a grand total of 347,945 meticulously cleaned LaTex documents (around 8.5 billion tokens), with each document corresponding to
a single paper.
**2.3** **Mathematical Entries in Wikipedia**
Wikipedia[4] is one the largest and most popular
free online encyclopedias, offering information
on a wide range of topics, including history, science, technology, culture, and more. This extensive knowledge has proven to be highly beneficial for numerous natural language processing
tasks (Lewis et al., 2020, inter alia) and pretrained
language models (Devlin et al., 2019; Touvron
4Wikipedia is licensed under CC BY-SA 4.0.
et al., 2023a, inter alia). To collect mathematical entries from Wikipedia, we downloaded the
mathematics-focused (without pictures) dump of
Wikipedia in English for the month of August
2023. We extracted the HTML documents from
the dump using the library libzim,[5] resulting in
approximately 106,900 documents. Subsequently,
we converted these HTML documents into markdown format using the html2text library[6] while
removing the hyperlinks following the practice of
LLaMA (Touvron et al., 2023a). We retained the
alternative text content but excluded image (often
in SVG format) paths. Additionally, we eliminated
extra newlines within paragraphs and condensed
more than three consecutive empty lines to two
using regular expressions. Further refinement involved the removal of boilerplate content at the bottom of the pages, typically denoted with phrases
like “This article is issued from Wikipedia.
The text is ...”. In the end, our efforts yielded
a collection of 106,881 mathematical Wikipedia
entries, about 0.8 billion tokens.
**2.4** **Entries from ProofWiki**
ProofWiki,[7] an online compendium of mathematical proofs, has been instrumental in advancing the
fields of autoformalization and formal proof proving, as evidenced by NaturalProofs (Welleck et al.,
2021) and ProofPile (Azerbayev et al., 2023a). We
[5https://pypi.org/project/libzim/](https://pypi.org/project/libzim/)
[6https://pypi.org/project/html2text/](https://pypi.org/project/html2text/)
7ProofWiki is licensed under CC BY-SA 3.0.
-----
sourced data from the ProofWiki dump dated April
9, 2022 (provided by the Internet Archive), mirroring the preprocessing approach employed by
NaturalProofs, which was based on the version
from November 12, 2020. Specifically, this involved leveraging the BeautifulSoup[8] to parse
all wiki pages followed by the extraction of raw
text content using the wikitextparser library.[9]
This process yielded a substantial collection of
mathematical content, totaling about 7.6 million
tokens, comprising 10,328 definitions and 13,511
theorem-proof pairs. To facilitate better data organization, we formatted the definitions using the
“definition” environment, and the theorem-proof
pairs within the “section” environment with their
respective titles serving as the section headings, in
line with the format of ProofPile.
**2.5** **Mathematical Discussions on**
**StackExchange**
StackExchange,[10] renowned for its network of
community-powered question-and-answering websites, spans a wide array of topics, each concentrated on a particular topic. Its high-quality data
trove has significantly contributed to the development of various language models (Touvron et al.,
2023a; Zhou et al., 2023, inter alia). In our study,
we identify eleven sites within this network, including five dedicated to mathematics (such as Mathematics and MathOverflow) and six others in closely
related fields like Physics (cf. Table 7).
Our data collection process began with downloading the site dumps from August 2023 (provided by the Internet Archive). In our data curation,
we only retained the essential components in the
posts, namely questions and answers (also associated meta information). To convert HTML documents to raw text, we utilized the BeautifulSoup
library, coupled with a meticulous removal of invalid XML characters. We then systematically
paired questions and their respective answers. Each
question typically garners multiple responses, each
with its own score and in some cases, an endorsement as the accepted answer by the questioner.
To ensure the high quality of our dataset, we
[8https://pypi.org/project/beautifulsoup4/](https://pypi.org/project/beautifulsoup4/)
[9https://pypi.org/project/wikitextparser/](https://pypi.org/project/wikitextparser/)
10ProofWiki is licensed under CC BY-SA 2.5 3.0 or 4.0,
depending on the date of the content.
leveraged two filtering score thresholds: a basic quality threshold set at 5, and a more stringent one at 10. Questions were filtered based
on these thresholds, while answers were judged
by the lesser of the threshold or the score of
the accepted answers if one exists. Additionally,
we also retained unanswered questions with at
least a score of 10 as a reserve to facilitate future use.[11] Finally, our process has yielded a rich
collection of data: 176,962 mathematics-intensive
questions with 290,570 answers (filtered by the
5-score threshold), and 3,418 unanswered questions (10-score threshold). For the sites potentially
related to mathematics, we have gathered 90,957
questions with 144,559 answers (5-score threshold). In total, the assembled questions and answers
(5-score threshold), including filtered unanswered
questions, amount to about 254 million tokens.
**2.6** **Mathematical Web Pages from Common**
**Crawl**
Common Crawl,[12] an invaluable resource that has
been archiving a comprehensive and open repository of web crawl data since 2007, stands as a
cornerstone for training many advanced language
models, including GPT-3 (Brown et al., 2020) and
LLaMA. In our endeavor to extract mathematical
web pages, we focus on refining the web corpus
from SlimPajama (Cerebras, 2023)-a cleaned and
deduplicated counterpart of RedPajama, specifically targeting the SlimPajama-CommonCrawl and
SlimPajama-C4 subsets.
Eschewing the common approach of using neutral network-based filtering, we opt for heuristic
rule-based methods. Our procedure began with
the creation of TF-IDF features, derived from our
curated high-quality textbooks (cf. § 2.1). During this process, we removed the stop words, limited the features to a maximum of 10,000, and
employed white space tokenization. Upon the observation of the resulting vocabulary, we identified
11 commonly used LaTex commands, integral to
mathematical expressions. We utilize these commands as a basis for a hard match within each document. A document is classified as mathematical if
it contains any of these commands along with the
symbol “$$”, typically indicative of a mathemati
11We also provide an unfiltered version for future use.
[12https://commoncrawl.org/terms-of-use/](https://commoncrawl.org/terms-of-use/)
-----
520B **Textbooks (0.44%)**
**Data Preprocessing** **Wikipedia (2.51%)**
**& Prefiltering**
**StackExchange** **ProofWiki**
**(48.02%)** **(2.64%)**
**Language ID** **Common Crawl**
**(8.32%)**
**arXiv (38.07%)**
**Cleaning & Filtering**
**Deduplication**
**Data Sources**
**29GB**
**903k Docs, ~9.5B Tokens**
**2.2TB, ~520B Tokens**
Figure 3: The creation process of MATHPILE. Beginning with data collection from diverse sources (about 520 B
tokens), followed by our rigorous processing process, we obtain a math-centric corpus, encompassing 9.5 billion
tokens. Note that we additionally perform data contamination detection on benchmark test sets (cf. § 3.4). We
visualize the proportions of different components in MATHPILEbased on document counts per component (Right).
cal document. This rule-based approach, though
simplistic, proved to be highly effective, especially
given the vast size of the Common Crawl corpus
. We also experimented with more intricate dense
embedding-based methods to identify mathematical documents, but these resulted in poor recall.
Our efforts resulted in the compilation of a
substantial collection of mathematical web pages:
4,307 documents from the SlimPajama-C4 training
set and 72,137 documents from the SlimPajamaCommonCrawl training set, totaling approximately
633 million tokens. Note that there are possibilities
for more efficient and effective methods to filter
mathematical documents from the broader expanse
of Common Crawl snapshots, a venture we aim to
pursue in our future work.
**3** **Global Data Processing**
While we have already conducted specific data preprocessing for each data source during the data collection process, we subsequently engage in three
critical steps: language identification, filtering,
and deduplication, to ensure the quality of the
entire corpus, as shown in Figure 3.
**3.1** **Language Identification**
To filter non-English documents, we utilized the
fastText language identifier, which was trained on
Wikipedia, Tatoeba, and SETimes (Joulin et al.,
2017; Grave et al., 2018). A common practice is
to classify a document as its respective language if
the score exceeds 0.5, a threshold also employed
by CCNet (Wenzek et al., 2020). However, during
the application of this practice, we encountered
a considerable number of false positives—cases
where documents were erroneously filtered as nonEnglish when, in fact, they were written in English
but contained a substantial amount of mathematical symbols. We attribute this issue to the domain
gap between the datasets used for fastText training
(primarily wiki and news domain) and our mathematical content.
To more accurately filter out non-English documents, we set a customized score threshold for
each data source to classify documents as English.
Specifically, we set thresholds at 0.1 for Wikipedia
and StackExchange, 0.3 for arXiv, 0.5 for Common
Crawl. For ProofWiki and Textbooks, we opted
not to employ this, as we had ensured during our
manual collection process that all documents in
these sources were written in English. As a result
of this refinement process, approximately 8,400
documents were removed, amounting to a total 231
million tokens.
**3.2** **Data Cleaning and Filtering**
Despite our meticulous and thorough data preprocessing efforts for each source during the corpus collection phase, we’ve noted that some documents, particularly from websites like Wikipedia
and Common Crawl, are of insufficient quality
used for language modeling. These documents
-----
might be too brief or include content that is either
automatically generated or commonplace. While
previous studies have introduced detailed methods for filtering pre-training corpora (Raffel et al.,
2020; Rae et al., 2021; Longpre et al., 2023;
Penedo et al., 2023; Cerebras, 2023), we found that
these techniques are not entirely suitable for our
math-focused corpus. Applying them as-is would
lead to the exclusion of many valuable documents.
To address this issue, we developed a unique set
of cleaning and filtering heuristic rules, specifically
crafted for the mathematical domain and drawing
from past studies. Specifically, we
1) detect lines containing “lorem ipsum” and
filter them out if the resulting line is less than
5 characters;
2) detect lines containing “javascript” that also
include “enable”, “disable” or “browser” and
are under 200 characters, and filter them;
3) filter lines containing fewer than 10 words
that include keywords like “Login”, “sign-in”,
“read more...”, or “items in cart.”;
4) filter documents if the ratio of uppercase
words exceeds 40%;
5) filter lines that end with “...” if they constitute
more than 30% of the entire document;
6) filter documents if the ratio of non-alphabetic
words surpasses 80%;
7) exclude documents with an average English
word length outside the range of (3, 10);
8) discard documents that lack at least two common stop words such as “the”, “be” “to” “of”
“and” “that” or “have”;
9) filter out documents if the ratio of ellipses (...)
to words exceeds 0.5 (e.g., progress bars);
10) remove documents where 90% of lines start
with bullet points;
11) filter documents including less than 200 characters after removing spaces and punctuation
marks.
It’s these carefully formulated rules that have
enabled us to curate a high-quality mathematical
corpus. Furthermore, these rules have allowed us
to assign quality annotations to each document
(from Wikipedia and Common Crawl). These annotations offer future researchers and developers
the flexibility to filter the data according to their
criteria, tailoring it to their specific needs. We
provide a cleaned example document with quality
annotations, as shown in Figure 4. The process
resulted in the filtration of approximately 1,100
documents, leading to the removal of 17 million
tokens.
A document from MATHPILE-Common Crawl
**Text: This number is called the Copeland–Erd˝os constant, and is known to be**
irrational and normal. I believe its transcendence or otherwise is an open problem.
This source claims that it has been proved to be transcendental, but the paper they
refer to is the one in which it was proved to be normal and so I think the source is
mistaken.
For now, the knowledge that it is almost surely transcendental will have to suffice!
Not the answer you’re looking for? Browse other questions tagged number-theory
transcendental-numbers or ask your own question.
Does the number 2.3, 5, 7, 11, 13 . . . exist and, if so, is it rational or irrational
&or transcendental?
Is 0.248163264128. . . a transcendental number?
What is the name of this number? Is it transcendental?
Is 0.112123123412345123456 . . . algebraic or transcendental?
Is 0.121121111112111. . . a transcendental number?
Do we know a transcendental number with a proven bounded continued fraction
expansion?
If we delete the non-primes from e, is the resulting number transcendental?
Is there any known transcendental b such that b[b] is also transcendental?
...
**Subset: Common Crawl**
**meta:**
language_detection_score: 0.9118,
char_num_after_normalized: 887,
contain_at_least_two_stop_words: True,
ellipsis_line_ratio: 0.0, idx: 95994,
lines_start_with_bullet_point_ratio: 0.0,
mean_length_of_alpha_words: 4.2941,
non_alphabetical_char_ratio: 0.0234,
symbols_to_words_ratio: 0.0117,
uppercase_word_ratio: 0.0117
...
Figure 4: An example document after cleaning and
filtering with quality annotations
**3.3** **Data Deduplication**
Given that our corpus originates from diverse
sources, including the web and textbooks, it is
inevitable that there will be repetitions both within
and across these sources. Deduplication plays a
crucial role in enhancing the training efficiency
of the language model and reducing the memorization from the training data (Lee et al., 2022;
Penedo et al., 2023). The challenge in deduplication lies in efficiently processing large-scale corpora to identify and eliminate not just exact duplicates but also near-duplicates. To this end, we
employ the MinHash LSH algorithm built on the
implementation of text-dup (Mou et al., 2023)
and Lee et al. (2022). MinHash excels at efficiently
estimating the similarity between sets at scale by
transforming data into compact signatures using
multiple hash functions (Broder, 1997). In the context of the text, the set corresponding to a document
-----
|In algebraic topology we often encounter chain complexes with extra multiplicative structure. For example, the cochain complex of a topological space has what is called the E -algebra structure which comes from the cup prod- ∞ uct. In this talk I present an idea for studying such chain com- plexes, E differential graded algebras (E DGAs), using ∞ ∞ stable homotopy theory. Namely, I discuss new equiva- lences between E DGAS that are defined using commuta- ∞ tive ring spectra. ring spectra are equivalent. Quasi-isomorphic E DGAs ∞ are E topologically equivalent. However, the examples I ∞ am going to present show that the opposite is not true; there are E DGAs that are E topologically equivalent but not ∞ ∞ quasi-isomorphic. This says that between E DGAs, we ∞ have more equivalences than just the quasi-isomorphisms. I also discuss interaction of E topological equiva- ∞ lences with the Dyer-Lashof operations and cases where E topological equivalences and quasi-isomorphisms ∞ agree.|Özet : In algebraic topology we often encounter chain complexes with extra multiplicative structure. For example, the cochain complex of a topological space has what is called the E -algebra structure which comes from the cup ∞ product. In this talk I present an idea for studying such chain complexes, E differential graded algebras (E ∞ ∞ DGAs), using stable homotopy theory. Namely, I discuss new equivalences between E DGAS that are defined using ∞ commutative ring spectra.We say E DGAs are E topo- ∞ ∞ logically equivalent when the corresponding commutative ring spectra are equivalent. Quasi-isomorphic E DGAs ∞ are E topologically equivalent. However, the examples I ∞ am going to present show that the opposite is not true; there are E DGAs that are E topologically equivalent but not ∞ ∞ quasi-isomorphic. This says that between E DGAs, we ∞ have more equivalences than just the quasi-isomorphisms. I also discuss interaction of E topological equivalences ∞ with the Dyer-Lashof operations and cases where E topo- ∞ logical equivalences and quasi-isomorphisms agree.|
|---|---|
Table 2: A near-duplication match found in CommonCrawl by MinHash LSH deduplication (in italics). See
Appendix D for more examples.
consists of the collection of its n-grams. However,
computing similarity for all possible pairs of such
sets is time-consuming, hence the common practice is to employ a variant of MinHash known as
Locality-Sensitive Hashing (LSH) (Gionis et al.,
1999) which is efficient in identifying similar items
by grouping documents with similar MinHash signatures into the same buckets, thereby reducing the
number of pairwise comparisons needed. Specifically, our process involved splitting each document
using whitespace and constructing 5-grams. We
then applied the “sha1” hash function and configured the system with 450 buckets and 20 minhashes
per bucket, resulting in a total of 9,000 minhashes
for each document. This aligns with the setting
outlined in RefinedWeb (Penedo et al., 2023).
During the deduplication process within each
data source, we encountered a significant number of exact-match and near-duplicate documents.
Specifically, we identified 304 duplicate documents in arXiv, 623 in Common Crawl, a notable
83,716 in Wikipedia, 783 in textbooks (mainly
from synthetic textbooks), and 144 duplicate questions in Stack Exchange. Due to our standardization of the format from ProofWiki, despite detecting many near-duplicates through our deduplication process, we found that these were indeed different lemmas, proofs, or definitions (as we showcase in Table 10), thus, we ultimately did not dedu
plicate this source. Upon manual review, we made
some interesting yet reasonable findings. For example, the significant duplication in Wikipedia was
due to the collection of multiple historical versions
of a document during the data collection. In StackExchange, duplication occurred because community members often posted similar questions on
different sites (like Math and MathOverflow) to
garner more responses (examples of which are provided in Table 13). We provide a near-duplicate
example found in Common Crawl, as shown in 2
(See Table 8 to Table 13 for more examples from
each data source). When deduplicating across different data sources, we hardly found any duplicate
documents, except for one question from StackExchange that appeared in a document from Common
Crawl. Therefore, we removed it. As a result of the
deduplication process, about 714 million tokens
were removed.
Note that we also experimented with using suffix
arrays (Manber and Myers, 1993) to eliminate exact match sequences within documents. However,
it tended to remove common phrases like “Questions: ”. While it can effectively remove some
templated content, it also disrupts the contextual
integrity of our corpus. Consequently, we decided
against employing this in order to preserve the context of our data.
-----
**3.4** **Data Contamination Detection**
As the scale of pre-training corpora expands, it is
inevitable to encounter instances where examples
from the evaluation set are found in the training
set, a phenomenon known as data contamination.
Generally, there are typically two types of data
contamination, input contamination, where only
the input of test examples appears in the training
corpus, and input-and-label contamination, where
both the inputs and their corresponding labels are
present in the training corpus (Dodge et al., 2021).
A common practice in past studies is to conduct
the post-hoc data contamination analysis, utilizing
n-gram overlap to assess the extent of contamination. For instance, GPT-2 performs an 8-gram approach (Radford et al., 2019), while GPT-3 (Brown
et al., 2020) and FLAN (Wei et al., 2022) use 13grams, and LLaMA-2 adopts more intricate skipgram strategy (Touvron et al., 2023b). In this work,
we advocate for the necessity of data contamination detection at the dataset creation stage itself,
if feasible. Delaying this process until after training completion often results in irreversible damage (Kocetkov et al., 2022). Here, we utilize popular mathematical reasoning benchmarks, namely
GSM8K (Cobbe et al., 2021), MATH (Hendrycks
et al., 2021b), and MMLU-STEM (Hendrycks
et al., 2021a), to detect data contamination.
Considering the variety of forms in which questions and answers might appear in pre-training
corpora, we compiled the questions and answers
from these benchmark tests into a set to serve as
a reference for data contamination detection. It’s
important to note that for MMLU, which uses the
multiple-choice format with typically short options,
we considered only the questions. Intuitively, a
math problem can have varied reasoning steps,
making it relatively easier to detect the contamination of test questions in pre-training corpora.
We employed line-level exact match detection for
both our corpus and test sets, as the questions in
these benchmarks are generally brief and often contained within a single line. Specifically, we split
documents into lines, hashed each line using MD5,
and took the first 64 bits along with the corresponding line to form a set. This procedure was also
applied to the constructed reference test set collection. If a line from the test set, along with its
corresponding hash code, is found in the training
set’s corresponding set, and the length of the line
is over 50 characters,[13] we classify it as a leaked
sample with an exact match.
|Corpus|GSM8K MATH MMLU-STEM|
|---|---|
|Ours OpenWebMath|- 023 02 - 195 65|
Table 3: The occurrences of benchmark test sets in pretraining corpora detected by exact match. Note that
some samples appear multiple times and the number
only represents the lower bound of occurrences, as there
may be some duplicates that have not been detected.
After conducting our detection process, we
identified 23 questions from MATH and 2 from
MMLU-STEM in our corpus (as shown in Table 3).
These duplicates primarily appeared in StackExchange, Textbooks, and Common Crawl. Upon
locating the questions within our corpus, we noted
that no answers were provided following the questions. Table 14 and Table 15 showcase examples
of these leaks from Textbooks and Common Crawl,
along with their context. An interesting observation is that the leaks in Textbooks originated from
AMC mathematics competition books, which coincidentally are also a source of questions in the
MATH benchmark. We also applied this process to
OpenWebMath, where we discovered many more
duplicate questions from the MATH and MMLU
test sets (also shown in Table 3), although many
were duplicates. To illustrate, we provide some
examples in Table 16. Interestingly, Azerbayev
et al. (2023b) also report similar findings, albeit
through a different detection method. This underscores the need for extra caution when creating
pre-training corpora, as neglecting this can easily
invalidate downstream benchmarks. Ultimately,
we removed all detected exact matches from our
corpus to mitigate data contamination issues. The
resulting corpus is referred to as MATHPILE.
**4** **Data Analysis**
**Overview. As shown in Table 4, we present de-**
tailed statistical information for each component
of MATHPILE, such as the number of documents
and the count of tokens. Following our meticulous
and comprehensive data collection and processing
13To avoid filtering out many common short phrases.
10
-----
|Components Size (|MB) # Documents # Tokens max(# Tokens) min (# Tokens) ave (# Tokens)|
|---|---|
|Textbooks 006 Wikipedia 002 ProofWiki 000 CommonCrawl 02,5 StackExchange 013 arXiv 24,5|44 003,979 0187,194,060 1,634,015 256 47,046 74 022,639 0078,222,986 0109,282 056 03,455 23 023,839 0007,608,526 0006,762 025 00319 60 075,142 0615,371,126 0367,558 057 08,189 31 433,751 0253,021,062 0125,475 028 00583 76 343,830 8,324,324,917 4,156,454 020 24,211|
|Total 29,4|08 903,180 9,465,742,677 - - 10,480|
Table 4: The components and data statistics of MATHPILE.
arXiv CommonCrawl ProofWiki
10 3
10 4
otal Documents
% of T10 5
StackExchange Textbooks Wikipedia
10 3
10 4
otal Documents
% of T10 5
10[2] 10[3] 10[4] 10[5] 10[6] 10[2] 10[3] 10[4] 10[5] 10[6] 10[2] 10[3] 10[4] 10[5] 10[6]
Document Length (Tokens) Document Length (Tokens) Document Length (Tokens)
Figure 5: Document length distribution for different sources in MATHPILE(log-scale).
process, we obtain 29GB of high-quality and diverse math-centric corpus, encompassing around
9.5 billion tokens, from an initial volume of 2.2TB
of raw data (cf. Figure 3). Compositionally, arXiv
constitutes the largest portion of MATHPILE, while
the Textbooks represent the smallest share, yet they
are of exceptionally high quality.
**The Length Distribution of Documents.** We
analyze the document length (in terms of token
numbers) and their respective proportions from
each source within MATHPILE, which is visualized in Figure 5. Intuitively, if the data from each
source contains a higher amount of near-duplicates
or machine-generated content, the distribution of
documents of similar lengths becomes more prevalent, leading to a less smooth distribution curve.
Figure 5 shows that, thanks to our thorough and
rigorous processing, the document length distribution in MATHPILEis relatively smooth across
different sources. Note that ProofWiki, due to its
fixed format of definitions, lemmas, and proofs,
naturally contains shorter content, resulting in a
distribution with many similar lengths. We can
also observe that, on average, the documents from
arXiv and Textbooks tend to be lengthier, while
those from ProofWiki and StackExchange are generally shorter.
**5** **Related Work**
**Pre-training Corpora for Language Models. In**
the field of language modeling, early models such
as GPT (Radford et al., 2018) and BERT (Devlin
11
-----
et al., 2019) are primarily pre-trained on resources
like Books (Zhu et al., 2015) and Wikipedia. Subsequent developments, including GPT-2 (Radford
et al., 2019) and T5 (Raffel et al., 2020), expand
the scope of training corpus to encompass web
pages from sources like Reddit (resulting in WebText) and Common Crawl (resulting in C4). GPT3 (Brown et al., 2020) marks a significant leap,
enlarging its pre-training corpus to 300 billion tokens, utilizing Common Crawl, WebText, Books,
and Wikipedia. Gao et al. (2021) introduce the
Pile, a comprehensive collection of 22 diverse and
high-quality datasets, specifically designed for the
pre-training of large-scale language models. The
Gopher project (Rae et al., 2021), although not
open-sourced, compiles an extensive corpus of approximately 10.5TB, including web pages, books,
new articles, and code. Similarly, the PaLM work
(Chowdhery et al., 2023) develop a high-quality
corpus of 780 billion tokens, spanning filtered
web pages, books, Wikipedia, news, code, and
social media conversations, yet it remained closedsource. On the other hand, BLOOM (Scao et al.,
2022), an open-sourced multilingual models, is pretrained on the ROOTS dataset (Laurençon et al.,
2022), which aggregates content from hundreds
of sources across 59 languages. Kocetkov et al.
(2022) build The Stack, a 3.1 TB code dataset in
30 programming languages. LLaMA’s training involved a diverse mixture of data, including sources
like arXiv and StackExchange, in addition to the
aforementioned sources (Touvron et al., 2023a).
However, LLaMA did not release its corpus, in
contrast to RedPajama, which did offer an opensource version (Together, 2023a). SlimPajama further enhances RedPajama by conducting an extensive deduplication process (Cerebras, 2023).
RefinedWeb demonstrates the potential for webonly corpora to achieve performance comparable
to well-curated corpus, such as Pile (Penedo et al.,
2023). Recently, as the competition intensifies in
the field of large language models, many more
powerful models, such as GPT-4 (OpenAI, 2023),
Mistral-7B (Jiang et al., 2023) and the lastest Gemini (Team et al., 2023), in addition to not opensourcing their data, are also refraining from disclosing detailed information about their corpus in
their technical reports. For the open-source community, constructing high-quality and diverse pre
training corpora is a crucial factor in bridging the
performance gap with closed-source models. This
is precisely the contribution we aim to make with
our work.
**Pre-training Benchmarks and Corpora for**
**Mathematical Reasoning. Teaching models to**
possess mathematical reasoning abilities akin to
humans is considered a vital aspect of achieving advanced artificial intelligence. This challenge has garnered widespread attention from
both the machine learning and natural language
communities. To better gauge the mathematical reasoning capabilities of models, numerous benchmark datasets have been introduced.
such as AQuA (Ling et al., 2017), SVAMP (Patel et al., 2021), DM-Mathematics (Saxton
et al., 2019), GSM8K (Cobbe et al., 2021) and
MATH (Hendrycks et al., 2021b), to name a few.
These datasets feature problems ranging from basic
arithmetic operations to competition-level mathematics questions, encompassing a wide spectrum
of difficulty. In addition, some benchmarks focus
on the theorem-proving abilities, such as NaturalProofs (Welleck et al., 2021). The STEM subset of MMLU (Hendrycks et al., 2021a) concentrates on evaluating multi-task understanding in
science, technology, engineering, and mathematics.
To enhance the mathematical reasoning capabilities of language models, some pre-training corpora
have been proposed. AMPS (Hendrycks et al.,
2021b), although a large-scale synthetic exercise
set, mainly targets problem-solving at the difficulty
level of the MATH dataset. ProofPile focuses on
mathematical theorem proving (Azerbayev et al.,
2023a). Concurrently with our work, OpenWebMath (Paster et al., 2023) constructs a large-scale
mathematical corpus, but is solely sourced from
web pages. On the other hand, Google’s corpus used for training Minerva (Lewkowycz et al.,
2022) and the OpenAI’s MathMix corpus (Lightman et al., 2023) are not open-sourced. Our work
is dedicated to bridging this gap by constructing
a high-quality mathematical corpus from diverse
sources.
**6** **Conclusion**
In this work, we present MATHPILE, a specialized
corpus centered around mathematics, characterized by its diversity and high quality. Through
12
-----
out its development, we meticulously source and
gather data, applying a rigorous and math-specific
pipeline. This pipeline encompasses various stages
such as preprocessing, prefiltering, language identification, cleaning and filtering, and deduplication,
all aimed at maintaining the high quality of the corpus. Note that we also conduct data contamination
detection to remove duplicates from popular mathematical reasoning benchmark test sets, which is
crucial for ensuring the integrity and effectiveness
of these benchmarks in evaluating language models. However, this is an aspect often overlooked
in other similar works. We hope that our MATHPILEcan be utilized, either independently or in
collaboration with other corpora, to enhance the
mathematical reasoning capabilities of language
models, thereby fostering widespread applications.
**Acknowledgements**
We sincerely appreciate the laboratory members
who reviewed this paper and provided their suggestions and feedback, contributing to the improvement of this work.
**Limitations**
The decisions made during the data collection and
processing phases might not always be optimal, as
verifying their effectiveness through low-cost computational methods is not feasible without training
at each step. Therefore, our approach has been to
draw upon the work of predecessors and, building
on that foundation, cautiously navigate the specific
challenges within the mathematical domain. This
is because the practices of previous work may not
always be entirely suitable for our math-focused
scenario. Despite our significant efforts, the resulting corpus may not always be of the highest quality,
especially documents sourced from the web, where
a few low-quality documents might still persist.
**References**
[Miltiadis Allamanis. 2019. The adverse effects of code](https://doi.org/10.1145/3359591.3359735)
[duplication in machine learning models of code. In](https://doi.org/10.1145/3359591.3359735)
_Proceedings of the 2019 ACM SIGPLAN Interna-_
_tional Symposium on New Ideas, New Paradigms,_
_and Reflections on Programming and Software, On-_
ward! 2019, page 143–153, New York, NY, USA.
Association for Computing Machinery.
AllenAI. 2023. allenai/dolma · datasets at hug[ging face. https://huggingface.co/datasets/](https://huggingface.co/datasets/allenai/dolma)
[allenai/dolma.](https://huggingface.co/datasets/allenai/dolma)
Anthropic. 2023. Anthropic \ introducing
claude. [https://www.anthropic.com/index/](https://www.anthropic.com/index/introducing-claude)
[introducing-claude.](https://www.anthropic.com/index/introducing-claude)
Zhangir Azerbayev, Bartosz Piotrowski, Hailey
Schoelkopf, Edward W. Ayers, Dragomir Radev, and
[Jeremy Avigad. 2023a. Proofnet: Autoformalizing](https://doi.org/10.48550/ARXIV.2302.12433)
[and formally proving undergraduate-level mathemat-](https://doi.org/10.48550/ARXIV.2302.12433)
[ics. CoRR, abs/2302.12433.](https://doi.org/10.48550/ARXIV.2302.12433)
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster,
Marco Dos Santos, Stephen McAleer, Albert Q.
Jiang, Jia Deng, Stella Biderman, and Sean Welleck.
[2023b. Llemma: An open language model for math-](https://doi.org/10.48550/ARXIV.2310.10631)
[ematics. CoRR, abs/2310.10631.](https://doi.org/10.48550/ARXIV.2310.10631)
[Jack Bandy and Nicholas Vincent. 2021. Addressing](http://arxiv.org/abs/2105.05241)
["documentation debt" in machine learning research:](http://arxiv.org/abs/2105.05241)
[A retrospective datasheet for bookcorpus. CoRR,](http://arxiv.org/abs/2105.05241)
abs/2105.05241.
[Emily M. Bender and Batya Friedman. 2018. Data](https://doi.org/10.1162/tacl_a_00041)
[statements for natural language processing: Toward](https://doi.org/10.1162/tacl_a_00041)
[mitigating system bias and enabling better science.](https://doi.org/10.1162/tacl_a_00041)
_Transactions of the Association for Computational_
_Linguistics, 6:587–604._
Sidney Black, Stella Biderman, Eric Hallahan, Quentin
Anthony, Leo Gao, Laurence Golding, Horace
He, Connor Leahy, Kyle McDonell, Jason Phang,
Michael Pieler, Usvsn Sai Prashanth, Shivanshu
Purohit, Laria Reynolds, Jonathan Tow, Ben Wang,
[and Samuel Weinbach. 2022. GPT-NeoX-20B: An](https://doi.org/10.18653/v1/2022.bigscience-1.9)
[open-source autoregressive language model. In Pro-](https://doi.org/10.18653/v1/2022.bigscience-1.9)
_ceedings of BigScience Episode #5 – Workshop on_
_Challenges & Perspectives in Creating Large Lan-_
_guage Models, pages 95–136, virtual+Dublin. Asso-_
ciation for Computational Linguistics.
[Andrei Z. Broder. 1997. On the resemblance and con-](https://doi.org/10.1109/SEQUEN.1997.666900)
[tainment of documents. In Compression and Com-](https://doi.org/10.1109/SEQUEN.1997.666900)
_plexity of SEQUENCES 1997, Positano, Amalfitan_
_Coast, Salerno, Italy, June 11-13, 1997, Proceedings,_
pages 21–29. IEEE.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss,
Gretchen Krueger, Tom Henighan, Rewon Child,
Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu,
Clemens Winter, Christopher Hesse, Mark Chen,
Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin
Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario
[Amodei. 2020. Language models are few-shot learn-](https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html)
[ers. In Advances in Neural Information Processing](https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html)
13
-----
_Systems 33: Annual Conference on Neural Informa-_
_tion Processing Systems 2020, NeurIPS 2020, De-_
_cember 6-12, 2020, virtual._
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott M. Lundberg,
Harsha Nori, Hamid Palangi, Marco Túlio Ribeiro,
[and Yi Zhang. 2023. Sparks of artificial general in-](https://doi.org/10.48550/ARXIV.2303.12712)
[telligence: Early experiments with GPT-4. CoRR,](https://doi.org/10.48550/ARXIV.2303.12712)
abs/2303.12712.
Cerebras. 2023. Slimpajama: A 627b token, cleaned
and deduplicated version of redpajama - cerebras.
[http://tinyurl.com/slimpajama.](http://tinyurl.com/slimpajama)
Ethan Chern, Haoyang Zou, Xuefeng Li, Jiewen Hu, Kehua Feng, Junlong Li, and Pengfei Liu. 2023. Gen[erative ai for math: Abel. https://github.com/](https://github.com/GAIR-NLP/abel)
[GAIR-NLP/abel.](https://github.com/GAIR-NLP/abel)
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts,
Paul Barham, Hyung Won Chung, Charles Sutton,
Sebastian Gehrmann, Parker Schuh, Kensen Shi,
Sasha Tsvyashchenko, Joshua Maynez, Abhishek
Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben
Hutchinson, Reiner Pope, James Bradbury, Jacob
Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin,
Toju Duke, Anselm Levskaya, Sanjay Ghemawat,
Sunipa Dev, Henryk Michalewski, Xavier Garcia,
Vedant Misra, Kevin Robinson, Liam Fedus, Denny
Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim,
Barret Zoph, Alexander Spiridonov, Ryan Sepassi,
David Dohan, Shivani Agrawal, Mark Omernick,
Andrew M. Dai, Thanumalayan Sankaranarayana
Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine
Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta,
Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei,
Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav
[Petrov, and Noah Fiedel. 2023. Palm: Scaling lan-](http://jmlr.org/papers/v24/22-1144.html)
[guage modeling with pathways. J. Mach. Learn.](http://jmlr.org/papers/v24/22-1144.html)
_Res., 24:240:1–240:113._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
[2021. Training verifiers to solve math word prob-](http://arxiv.org/abs/2110.14168)
[lems. CoRR, abs/2110.14168.](http://arxiv.org/abs/2110.14168)
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
[Kristina Toutanova. 2019. BERT: Pre-training of](https://doi.org/10.18653/v1/N19-1423)
[deep bidirectional transformers for language under-](https://doi.org/10.18653/v1/N19-1423)
[standing. In Proceedings of the 2019 Conference of](https://doi.org/10.18653/v1/N19-1423)
_the North American Chapter of the Association for_
_Computational Linguistics: Human Language Tech-_
_nologies, Volume 1 (Long and Short Papers), pages_
4171–4186, Minneapolis, Minnesota. Association
for Computational Linguistics.
Jesse Dodge, Maarten Sap, Ana Marasovi´c, William
Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret
[Mitchell, and Matt Gardner. 2021. Documenting](https://doi.org/10.18653/v1/2021.emnlp-main.98)
[large webtext corpora: A case study on the colossal](https://doi.org/10.18653/v1/2021.emnlp-main.98)
[clean crawled corpus. In Proceedings of the 2021](https://doi.org/10.18653/v1/2021.emnlp-main.98)
_Conference on Empirical Methods in Natural Lan-_
_guage Processing, pages 1286–1305, Online and_
Punta Cana, Dominican Republic. Association for
Computational Linguistics.
Yanai Elazar, Akshita Bhagia, Ian Magnusson, Abhilasha Ravichander, Dustin Schwenk, Alane Suhr,
Pete Walsh, Dirk Groeneveld, Luca Soldaini, Sameer
Singh, Hanna Hajishirzi, Noah A. Smith, and Jesse
Dodge. 2023. [What’s in my big data?](https://doi.org/10.48550/ARXIV.2310.20707) _CoRR,_
abs/2310.20707.
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang,
Horace He, Anish Thite, Noa Nabeshima, Shawn
[Presser, and Connor Leahy. 2021. The pile: An](http://arxiv.org/abs/2101.00027)
[800gb dataset of diverse text for language modeling.](http://arxiv.org/abs/2101.00027)
_CoRR, abs/2101.00027._
Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach,
[Hal Daumé III, and Kate Crawford. 2021. Datasheets](https://doi.org/10.1145/3458723)
[for datasets. Commun. ACM, 64(12):86–92.](https://doi.org/10.1145/3458723)
Aristides Gionis, Piotr Indyk, and Rajeev Motwani.
[1999. Similarity search in high dimensions via hash-](http://www.vldb.org/conf/1999/P49.pdf)
[ing. In VLDB’99, Proceedings of 25th International](http://www.vldb.org/conf/1999/P49.pdf)
_Conference on Very Large Data Bases, September 7-_
_10, 1999, Edinburgh, Scotland, UK, pages 518–529._
Morgan Kaufmann.
Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Ar[mand Joulin, and Tomas Mikolov. 2018. Learning](https://aclanthology.org/L18-1550)
[word vectors for 157 languages. In Proceedings of](https://aclanthology.org/L18-1550)
_the Eleventh International Conference on Language_
_Resources and Evaluation (LREC 2018), Miyazaki,_
Japan. European Language Resources Association
(ELRA).
Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio
César Teodoro Mendes, Allie Del Giorno, Sivakanth
Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo
de Rosa, Olli Saarikivi, Adil Salim, Shital Shah,
Harkirat Singh Behl, Xin Wang, Sébastien Bubeck,
Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, and
[Yuanzhi Li. 2023. Textbooks are all you need. CoRR,](https://doi.org/10.48550/ARXIV.2306.11644)
abs/2306.11644.
Dan Hendrycks, Collin Burns, Steven Basart, Andy
Zou, Mantas Mazeika, Dawn Song, and Jacob Stein[hardt. 2021a. Measuring massive multitask language](https://openreview.net/forum?id=d7KBjmI3GmQ)
[understanding. In 9th International Conference on](https://openreview.net/forum?id=d7KBjmI3GmQ)
_Learning Representations, ICLR 2021, Virtual Event,_
_Austria, May 3-7, 2021. OpenReview.net._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
14
-----
[Jacob Steinhardt. 2021b. Measuring mathematical](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/be83ab3ecd0db773eb2dc1b0a17836a1-Abstract-round2.html)
[problem solving with the MATH dataset. In Pro-](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/be83ab3ecd0db773eb2dc1b0a17836a1-Abstract-round2.html)
_ceedings of the Neural Information Processing Sys-_
_tems Track on Datasets and Benchmarks 1, NeurIPS_
_Datasets and Benchmarks 2021, December 2021, vir-_
_tual._
Danny Hernandez, Tom B. Brown, Tom Conerly, Nova
DasSarma, Dawn Drain, Sheer El Showk, Nelson
Elhage, Zac Hatfield-Dodds, Tom Henighan, Tristan
Hume, Scott Johnston, Benjamin Mann, Chris Olah,
Catherine Olsson, Dario Amodei, Nicholas Joseph,
[Jared Kaplan, and Sam McCandlish. 2022. Scaling](https://doi.org/10.48550/ARXIV.2205.10487)
[laws and interpretability of learning from repeated](https://doi.org/10.48550/ARXIV.2205.10487)
[data. CoRR, abs/2205.10487.](https://doi.org/10.48550/ARXIV.2205.10487)
Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego
de Las Casas, Florian Bressand, Gianna Lengyel,
Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock,
Teven Le Scao, Thibaut Lavril, Thomas Wang, Tim[othée Lacroix, and William El Sayed. 2023. Mistral](https://doi.org/10.48550/ARXIV.2310.06825)
[7b. CoRR, abs/2310.06825.](https://doi.org/10.48550/ARXIV.2310.06825)
Armand Joulin, Edouard Grave, Piotr Bojanowski, and
[Tomas Mikolov. 2017. Bag of tricks for efficient](https://aclanthology.org/E17-2068)
[text classification. In Proceedings of the 15th Con-](https://aclanthology.org/E17-2068)
_ference of the European Chapter of the Association_
_for Computational Linguistics: Volume 2, Short Pa-_
_pers, pages 427–431, Valencia, Spain. Association_
for Computational Linguistics.
Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li,
Chenghao Mou, Carlos Muñoz Ferrandis, Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf,
Dzmitry Bahdanau, Leandro von Werra, and Harm
[de Vries. 2022. The stack: 3 TB of permissively](https://doi.org/10.48550/ARXIV.2211.15533)
[licensed source code. CoRR, abs/2211.15533.](https://doi.org/10.48550/ARXIV.2211.15533)
Julia Kreutzer, Isaac Caswell, Lisa Wang, Ahsan Wahab,
Daan van Esch, Nasanbayar Ulzii-Orshikh, Allahsera Tapo, Nishant Subramani, Artem Sokolov, Claytone Sikasote, Monang Setyawan, Supheakmungkol
Sarin, Sokhar Samb, Benoît Sagot, Clara Rivera,
Annette Rios, Isabel Papadimitriou, Salomey Osei,
Pedro Ortiz Suarez, Iroro Orife, Kelechi Ogueji, Andre Niyongabo Rubungo, Toan Q. Nguyen, Mathias Müller, André Müller, Shamsuddeen Hassan
Muhammad, Nanda Muhammad, Ayanda Mnyakeni, Jamshidbek Mirzakhalov, Tapiwanashe Matangira, Colin Leong, Nze Lawson, Sneha Kudugunta,
Yacine Jernite, Mathias Jenny, Orhan Firat, Bonaventure F. P. Dossou, Sakhile Dlamini, Nisansa de Silva,
Sakine Çabuk Ballı, Stella Biderman, Alessia Battisti, Ahmed Baruwa, Ankur Bapna, Pallavi Baljekar,
Israel Abebe Azime, Ayodele Awokoya, Duygu Ataman, Orevaoghene Ahia, Oghenefego Ahia, Sweta
[Agrawal, and Mofetoluwa Adeyemi. 2022. Quality](https://doi.org/10.1162/tacl_a_00447)
[at a glance: An audit of web-crawled multilingual](https://doi.org/10.1162/tacl_a_00447)
[datasets. Transactions of the Association for Compu-](https://doi.org/10.1162/tacl_a_00447)
_tational Linguistics, 10:50–72._
Hugo Laurençon, Lucile Saulnier, Thomas Wang,
Christopher Akiki, Albert Villanova del Moral,
Teven Le Scao, Leandro von Werra, Chenghao Mou,
Eduardo González Ponferrada, Huu Nguyen, Jörg
Frohberg, Mario Sasko, Quentin Lhoest, Angelina
McMillan-Major, Gérard Dupont, Stella Biderman,
Anna Rogers, Loubna Ben Allal, Francesco De Toni,
Giada Pistilli, Olivier Nguyen, Somaieh Nikpoor,
Maraim Masoud, Pierre Colombo, Javier de la Rosa,
Paulo Villegas, Tristan Thrush, Shayne Longpre, Sebastian Nagel, Leon Weber, Manuel Muñoz, Jian
Zhu, Daniel van Strien, Zaid Alyafeai, Khalid Almubarak, Minh Chien Vu, Itziar Gonzalez-Dios,
Aitor Soroa, Kyle Lo, Manan Dey, Pedro Ortiz
Suarez, Aaron Gokaslan, Shamik Bose, David Ifeoluwa Adelani, Long Phan, Hieu Tran, Ian Yu, Suhas
Pai, Jenny Chim, Violette Lepercq, Suzana Ilic,
Margaret Mitchell, Alexandra Sasha Luccioni, and
[Yacine Jernite. 2022. The bigscience ROOTS corpus:](http://papers.nips.cc/paper_files/paper/2022/hash/ce9e92e3de2372a4b93353eb7f3dc0bd-Abstract-Datasets_and_Benchmarks.html)
[A 1.6tb composite multilingual dataset. In NeurIPS.](http://papers.nips.cc/paper_files/paper/2022/hash/ce9e92e3de2372a4b93353eb7f3dc0bd-Abstract-Datasets_and_Benchmarks.html)
Katherine Lee, Daphne Ippolito, Andrew Nystrom,
Chiyuan Zhang, Douglas Eck, Chris Callison-Burch,
[and Nicholas Carlini. 2022. Deduplicating training](https://doi.org/10.18653/v1/2022.acl-long.577)
[data makes language models better. In Proceedings](https://doi.org/10.18653/v1/2022.acl-long.577)
_of the 60th Annual Meeting of the Association for_
_Computational Linguistics (Volume 1: Long Papers),_
pages 8424–8445, Dublin, Ireland. Association for
Computational Linguistics.
Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman
Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih,
Tim Rocktäschel, Sebastian Riedel, and Douwe
[Kiela. 2020. Retrieval-augmented generation for](https://proceedings.neurips.cc/paper/2020/hash/6b493230205f780e1bc26945df7481e5-Abstract.html)
[knowledge-intensive NLP tasks. In Advances in Neu-](https://proceedings.neurips.cc/paper/2020/hash/6b493230205f780e1bc26945df7481e5-Abstract.html)
_ral Information Processing Systems 33: Annual Con-_
_ference on Neural Information Processing Systems_
_2020, NeurIPS 2020, December 6-12, 2020, virtual._
Aitor Lewkowycz, Anders Andreassen, David Dohan,
Ethan Dyer, Henryk Michalewski, Vinay V. Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag,
Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur,
[Guy Gur-Ari, and Vedant Misra. 2022. Solving quan-](http://papers.nips.cc/paper_files/paper/2022/hash/18abbeef8cfe9203fdf9053c9c4fe191-Abstract-Conference.html)
[titative reasoning problems with language models.](http://papers.nips.cc/paper_files/paper/2022/hash/18abbeef8cfe9203fdf9053c9c4fe191-Abstract-Conference.html)
In NeurIPS.
Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del
Giorno, Suriya Gunasekar, and Yin Tat Lee. 2023.
[Textbooks are all you need II: phi-1.5 technical re-](https://doi.org/10.48550/ARXIV.2309.05463)
[port. CoRR, abs/2309.05463.](https://doi.org/10.48550/ARXIV.2309.05463)
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan
Leike, John Schulman, Ilya Sutskever, and Karl
Cobbe. 2023. [Let’s verify step by step.](https://doi.org/10.48550/ARXIV.2305.20050) _CoRR,_
abs/2305.20050.
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blun[som. 2017. Program induction by rationale genera-](https://doi.org/10.18653/v1/P17-1015)
[tion: Learning to solve and explain algebraic word](https://doi.org/10.18653/v1/P17-1015)
15
-----
[problems. In Proceedings of the 55th Annual Meet-](https://doi.org/10.18653/v1/P17-1015)
_ing of the Association for Computational Linguistics_
_(Volume 1: Long Papers), pages 158–167, Vancouver,_
Canada. Association for Computational Linguistics.
Shayne Longpre, Gregory Yauney, Emily Reif, Katherine Lee, Adam Roberts, Barret Zoph, Denny Zhou,
Jason Wei, Kevin Robinson, David Mimno, and
Daphne Ippolito. 2023. [A pretrainer’s guide to](https://doi.org/10.48550/ARXIV.2305.13169)
[training data: Measuring the effects of data age,](https://doi.org/10.48550/ARXIV.2305.13169)
[domain coverage, quality, & toxicity.](https://doi.org/10.48550/ARXIV.2305.13169) _CoRR,_
abs/2305.13169.
[Alexandra Luccioni and Joseph Viviano. 2021. What’s](https://doi.org/10.18653/v1/2021.acl-short.24)
[in the box? an analysis of undesirable content in the](https://doi.org/10.18653/v1/2021.acl-short.24)
[Common Crawl corpus. In Proceedings of the 59th](https://doi.org/10.18653/v1/2021.acl-short.24)
_Annual Meeting of the Association for Computational_
_Linguistics and the 11th International Joint Confer-_
_ence on Natural Language Processing (Volume 2:_
_Short Papers), pages 182–189, Online. Association_
for Computational Linguistics.
[Udi Manber and Eugene W. Myers. 1993. Suffix arrays:](https://doi.org/10.1137/0222058)
[A new method for on-line string searches. SIAM J.](https://doi.org/10.1137/0222058)
_Comput., 22(5):935–948._
Angelina McMillan-Major, Emily M. Bender, and
[Batya Friedman. 2023. Data statements: From tech-](https://doi.org/10.1145/3594737)
[nical concept to community practice. ACM J. Re-](https://doi.org/10.1145/3594737)
_sponsib. Comput. Just Accepted._
Margaret Mitchell, Alexandra Sasha Luccioni, Nathan
Lambert, Marissa Gerchick, Angelina McMillanMajor, Ezinwanne Ozoani, Nazneen Rajani, Tristan Thrush, Yacine Jernite, and Douwe Kiela. 2022.
[Measuring data. CoRR, abs/2212.05129.](https://doi.org/10.48550/ARXIV.2212.05129)
Chenghao Mou, Chris Ha, Kenneth Enevoldsen, and
[Peiyuan Liu. 2023. Chenghaomou/text-dedup: Ref-](https://doi.org/10.5281/zenodo.8364980)
[erence snapshot.](https://doi.org/10.5281/zenodo.8364980)
[OpenAI. 2022. Introducing chatgpt. https://openai.](https://openai.com/blog/chatgpt)
[com/blog/chatgpt.](https://openai.com/blog/chatgpt)
OpenAI. 2023. [GPT-4 technical report.](https://doi.org/10.48550/ARXIV.2303.08774) _CoRR,_
abs/2303.08774.
Keiran Paster, Marco Dos Santos, Zhangir Azer[bayev, and Jimmy Ba. 2023. Openwebmath: An](https://doi.org/10.48550/ARXIV.2310.06786)
[open dataset of high-quality mathematical web text.](https://doi.org/10.48550/ARXIV.2310.06786)
_CoRR, abs/2310.06786._
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
[2021. Are NLP models really able to solve simple](https://doi.org/10.18653/v1/2021.naacl-main.168)
[math word problems? In Proceedings of the 2021](https://doi.org/10.18653/v1/2021.naacl-main.168)
_Conference of the North American Chapter of the_
_Association for Computational Linguistics: Human_
_Language Technologies, pages 2080–2094, Online._
Association for Computational Linguistics.
Guilherme Penedo, Quentin Malartic, Daniel Hesslow,
Ruxandra Cojocaru, Alessandro Cappelli, Hamza
Alobeidli, Baptiste Pannier, Ebtesam Almazrouei,
16
[and Julien Launay. 2023. The refinedweb dataset](https://doi.org/10.48550/ARXIV.2306.01116)
[for falcon LLM: outperforming curated corpora](https://doi.org/10.48550/ARXIV.2306.01116)
[with web data, and web data only.](https://doi.org/10.48550/ARXIV.2306.01116) _CoRR,_
abs/2306.01116.
Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya
Sutskever, et al. 2018. Improving language understanding by generative pre-training.
Alec Radford, Jeff Wu, Rewon Child, David Luan,
Dario Amodei, and Ilya Sutskever. 2019. Language
models are unsupervised multitask learners.
Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie
Millican, Jordan Hoffmann, H. Francis Song, John
Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob
Menick, Albin Cassirer, Richard Powell, George
van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang,
Jonathan Uesato, John Mellor, Irina Higgins, Antonia
Creswell, Nat McAleese, Amy Wu, Erich Elsen, Siddhant M. Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, Laurent Sifre, Lena Martens, Xiang Lorraine
Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena
Gribovskaya, Domenic Donato, Angeliki Lazaridou,
Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, Nikolai Grigorev, Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Toby Pohlen, Zhitao Gong,
Daniel Toyama, Cyprien de Masson d’Autume, Yujia
Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin,
Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris
Jones, James Bradbury, Matthew J. Johnson, Blake A.
Hechtman, Laura Weidinger, Iason Gabriel, William
Isaac, Edward Lockhart, Simon Osindero, Laura
Rimell, Chris Dyer, Oriol Vinyals, Kareem Ayoub,
Jeff Stanway, Lorrayne Bennett, Demis Hassabis, Ko[ray Kavukcuoglu, and Geoffrey Irving. 2021. Scal-](http://arxiv.org/abs/2112.11446)
[ing language models: Methods, analysis & insights](http://arxiv.org/abs/2112.11446)
[from training gopher. CoRR, abs/2112.11446.](http://arxiv.org/abs/2112.11446)
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine
Lee, Sharan Narang, Michael Matena, Yanqi Zhou,
[Wei Li, and Peter J. Liu. 2020. Exploring the limits](http://jmlr.org/papers/v21/20-074.html)
[of transfer learning with a unified text-to-text trans-](http://jmlr.org/papers/v21/20-074.html)
[former. J. Mach. Learn. Res., 21:140:1–140:67.](http://jmlr.org/papers/v21/20-074.html)
David Saxton, Edward Grefenstette, Felix Hill, and
Pushmeet Kohli. 2019. [Analysing mathematical](https://openreview.net/forum?id=H1gR5iR5FX)
[reasoning abilities of neural models. In 7th Inter-](https://openreview.net/forum?id=H1gR5iR5FX)
_national Conference on Learning Representations,_
_ICLR 2019, New Orleans, LA, USA, May 6-9, 2019._
OpenReview.net.
Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman
Castagné, Alexandra Sasha Luccioni, François Yvon,
Matthias Gallé, Jonathan Tow, Alexander M. Rush,
Stella Biderman, Albert Webson, Pawan Sasanka
Ammanamanchi, Thomas Wang, Benoît Sagot,
-----
Niklas Muennighoff, Albert Villanova del Moral,
Olatunji Ruwase, Rachel Bawden, Stas Bekman, Angelina McMillan-Major, Iz Beltagy, Huu Nguyen,
Lucile Saulnier, Samson Tan, Pedro Ortiz Suarez,
Victor Sanh, Hugo Laurençon, Yacine Jernite, Julien
Launay, Margaret Mitchell, Colin Raffel, Aaron
Gokaslan, Adi Simhi, Aitor Soroa, Alham Fikri
Aji, Amit Alfassy, Anna Rogers, Ariel Kreisberg
Nitzav, Canwen Xu, Chenghao Mou, Chris Emezue,
Christopher Klamm, Colin Leong, Daniel van Strien,
[David Ifeoluwa Adelani, and et al. 2022. BLOOM: A](https://doi.org/10.48550/ARXIV.2211.05100)
[176b-parameter open-access multilingual language](https://doi.org/10.48550/ARXIV.2211.05100)
[model. CoRR, abs/2211.05100.](https://doi.org/10.48550/ARXIV.2211.05100)
Gemini Team, Rohan Anil, Sebastian Borgeaud,
Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu,
Radu Soricut, Johan Schalkwyk, Andrew M Dai,
Anja Hauth, et al. 2023. Gemini: A family of
highly capable multimodal models. arXiv preprint
_arXiv:2312.11805._
Together. 2023a. Redpajama, a project to create leading open-source models, starts by reproducing llama
[training dataset of over 1.2 trillion tokens. https:](https://www.together.ai/blog/redpajama)
[//www.together.ai/blog/redpajama.](https://www.together.ai/blog/redpajama)
Together. 2023b. Redpajama-data-v2: An open
dataset with 30 trillion tokens for training large lan[guage models. https://www.together.ai/blog/](https://www.together.ai/blog/redpajama-data-v2)
[redpajama-data-v2.](https://www.together.ai/blog/redpajama-data-v2)
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier
Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal
Azhar, Aurélien Rodriguez, Armand Joulin, Edouard
[Grave, and Guillaume Lample. 2023a. Llama: Open](https://doi.org/10.48550/ARXIV.2302.13971)
[and efficient foundation language models. CoRR,](https://doi.org/10.48550/ARXIV.2302.13971)
abs/2302.13971.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, Dan Bikel, Lukas Blecher, Cristian CantonFerrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian
Fuller, Cynthia Gao, Vedanuj Goswami, Naman
Goyal, Anthony Hartshorn, Saghar Hosseini, Rui
Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez,
Madian Khabsa, Isabel Kloumann, Artem Korenev,
Punit Singh Koura, Marie-Anne Lachaux, Thibaut
Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar
Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan
Saladi, Alan Schelten, Ruan Silva, Eric Michael
Smith, Ranjan Subramanian, Xiaoqing Ellen Tan,
Binh Tang, Ross Taylor, Adina Williams, Jian Xiang
Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen
Zhang, Angela Fan, Melanie Kambadur, Sharan
Narang, Aurélien Rodriguez, Robert Stojnic, Sergey
Edunov, and Thomas Scialom. 2023b. [Llama 2:](https://doi.org/10.48550/ARXIV.2307.09288)
[Open foundation and fine-tuned chat models. CoRR,](https://doi.org/10.48550/ARXIV.2307.09288)
abs/2307.09288.
Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin
Guu, Adams Wei Yu, Brian Lester, Nan Du, An[drew M. Dai, and Quoc V. Le. 2022. Finetuned](https://openreview.net/forum?id=gEZrGCozdqR)
[language models are zero-shot learners. In The Tenth](https://openreview.net/forum?id=gEZrGCozdqR)
_International Conference on Learning Representa-_
_tions, ICLR 2022, Virtual Event, April 25-29, 2022._
OpenReview.net.
Sean Welleck, Jiacheng Liu, Ronan Le Bras, Hanna
Hajishirzi, Yejin Choi, and Kyunghyun Cho. 2021.
[Naturalproofs: Mathematical theorem proving in nat-](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/d9d4f495e875a2e075a1a4a6e1b9770f-Abstract-round1.html)
[ural language. In Proceedings of the Neural Infor-](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/d9d4f495e875a2e075a1a4a6e1b9770f-Abstract-round1.html)
_mation Processing Systems Track on Datasets and_
_Benchmarks 1, NeurIPS Datasets and Benchmarks_
_2021, December 2021, virtual._
Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Ar[mand Joulin, and Edouard Grave. 2020. CCNet:](https://aclanthology.org/2020.lrec-1.494)
[Extracting high quality monolingual datasets from](https://aclanthology.org/2020.lrec-1.494)
[web crawl data. In Proceedings of the 12th Lan-](https://aclanthology.org/2020.lrec-1.494)
_guage Resources and Evaluation Conference, pages_
4003–4012, Marseille, France. European Language
Resources Association.
Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao
Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu,
Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis,
[Luke Zettlemoyer, and Omer Levy. 2023. LIMA:](https://doi.org/10.48550/ARXIV.2305.11206)
[less is more for alignment. CoRR, abs/2305.11206.](https://doi.org/10.48550/ARXIV.2305.11206)
Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan
Salakhutdinov, Raquel Urtasun, Antonio Torralba,
[and Sanja Fidler. 2015. Aligning books and movies:](https://doi.org/10.1109/ICCV.2015.11)
[Towards story-like visual explanations by watching](https://doi.org/10.1109/ICCV.2015.11)
[movies and reading books. In 2015 IEEE Interna-](https://doi.org/10.1109/ICCV.2015.11)
_tional Conference on Computer Vision, ICCV 2015,_
_Santiago, Chile, December 7-13, 2015, pages 19–27._
IEEE Computer Society.
17
-----
**MATHPILEDatasheet**
**MOTIVATION**
|For what purpose was the dataset cre- ated?|Developed in a context where datasets like Google’s Minerva and OpenAI’s MathMix are not open-sourced, MATHPILEaims to counter this trend by enriching the open-source community and enhancing mathematical language modeling with its (relatively) large-scale, math-centric, diverse, high-quality dataset. It can be used on its own or cooperated with general domain corpora like books, and Github code, to improve the reasoning abilities of language models.|
|---|---|
|Who created the dataset and on be- half of which entity?|MATHPILEwas created by the authors of this work.|
|---|---|
|Who funded the creation of the dataset?|The creation of MATHPILEwas funded by GAIR Lab, SJTU.|
|---|---|
|Any other comment?|None.|
|---|---|
**COMPOSITION**
|What do the instances that comprise the dataset represent?|MATHPILEis comprised of text-only documents, encompassing a broad range of sources. These include academic papers from arXiv, educational materials such as textbooks and lecture notes, definitions, theorems and their proofs, informative articles from Wikipedia, interactive Q&A content from StackExchange commu- nity users, and webpages sourced from Common Crawl. All these instances are math-focused.|
|---|---|
|How many instances are there in to- tal?|MATHPILEcontains about 903 thousand of documents, or around 9.5 billion tokens.|
|---|---|
|Does the dataset contain all possible instances or is it a sample (not nec- essarily random) of instances from a larger set?|MATHPILEis curated from a diverse array of sources, including arXiv, Textbooks, Wikipedia, StackExchange, ProofWiki, and Common Crawl. However, it doesn’t encompass all instances from these sources. We have implemented a rigorous data processing pipeline, which involves steps like preprocessing, prefiltering, language identification, cleaning, filtering, and deduplication. This meticulous approach is taken to guarantee the high quality of the content within MATHPILE.|
|---|---|
|What data does each instance consist of?|Each instance in MATHPILEis a text-only document, uniquely identified by its source, labeled under Subset. These instances are enriched with metadata, such as the score from language identifica- tion, the ratio of symbols to words, and their respective file paths. Note that instances from the StackExchange are composed of a question and its accompanying answers, each with their own set of meta data, including community users. To illustrate them, we provide specific examples for each source, ranging from Figure 6 to Figure 12.|
|---|---|
18
-----
|Is there a label or target associated with each instance?|No.|
|---|---|
|Is any information missing from indi- vidual instances?|No.|
|---|---|
|Are relationships between individual instances made explicit?|No.|
|---|---|
|Are there recommended data splits?|No.|
|---|---|
|Are there any errors, sources of noise, or redundancies in the dataset?|Despite our rigorous efforts in cleaning, filtering out low-quality content, and deduplicating documents, it’s important to acknowl- edge that a small fraction of documents in MATHPILEmight still fall short of our quality standards, particularly those sourced from web pages.|
|---|---|
|Is the dataset self-contained, or does it link to or otherwise rely on external resources?|Yes, MATHPILEis self-contained.|
|---|---|
|Does the dataset contain data that might be considered confidential?|No.|
|---|---|
|Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might oth- erwise cause anxiety?|We do not expect offensive content despite our significant efforts in cleaning and filtering. But, we can not fully guarantee this.|
|---|---|
**COLLECTION**
|How was the data associated with each instance acquired?|Our data is primarily sourced from the arXiv website and the Internet Archive. The CommonCrawl data originates from SlimPa- jama. The textbooks included are manually collected, with quality checks performed on publicly available textbooks from various internet sources.|
|---|---|
|What mechanisms or procedures were used to collect the data?|Refer to § 2 for details on how they collect data.|
|---|---|
|If the dataset is a sample from a larger set, what was the sampling strategy?|We strive to use the most recent data dumps available and then selectively choose high-quality documents that are closely related to mathematics.|
|---|---|
|Who was involved in the data collec- tion process and how were they com- pensated?|Authors from this paper were involved in collecting it and process- ing it.|
|---|---|
|Over what timeframe was the data collected?|MATHPILEencompasses documents created between 2007 and August 2023. Note that some documents and textbooks included may be created in the previous century.|
|---|---|
|Were any ethical review processes conducted?|No.|
|---|---|
**PREPROCESSING**
19
-----
|Was any preprocessing/cleaning/la- beling of the data done?|Yes, during our data collection phase, we conducted extensive filtering and cleansing procedures, detailed in § 2. After the com- pletion of data collection, we conducted further steps including language identification, additional cleaning and filtering, dedupli- cation, and leakage detection in benchmark datasets. Subsequently, we removed any contaminated examples identified through this process. See § 3 for details.|
|---|---|
|Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data?|Yes.|
|---|---|
|Is the software that was used to preprocess/clean/label the data avail- able?|Yes, we will open-source the corresponding scripts.|
|---|---|
**USES**
|Has the dataset been used for any tasks already?|Yes, this data has been used to develop mathematical language models.|
|---|---|
|Is there a repository that links to any or all papers or systems that use the dataset?|No.|
|---|---|
|What (other) tasks could the dataset be used for?|MATHPILEwas developed to enhance language modeling, offering significant benefits for a variety of mathematical reasoning tasks.|
|---|---|
|Is there anything about the com- position of the dataset or the way it was collected and preprocessed/- cleaned/labeled that might impact fu- ture uses?|Our cleaning and filtering processes, while thorough, may not be entirely optimal, potentially leading to the exclusion of some valu- able documents. Additionally, MATHPILEis specifically tailored for English, which limits its applicability in multilingual contexts.|
|---|---|
|Are there tasks for which the dataset should not be used?|Any tasks which may considered irresponsible or harmful.|
|---|---|
**DISTRIBUTION**
|Will the dataset be distributed to third parties outside of the entity on behalf of which the dataset was cre- ated?|Yes, MATHPILEwill be available on the Huggingface Hub.|
|---|---|
|How will the dataset will be dis- tributed?|MATHPILEwill be made available through the HuggingFace Hub.|
|---|---|
|When will the dataset be distributed?|The MATHPILEwill be available after this paper is made public.|
|---|---|
|Will the dataset be distributed under a copyright or other intellectual prop- erty (IP) license, and/or under appli- cable terms of use (ToU)?|If the source data of MATHPILEis governed by a license more restrictive than CC BY-NC-SA 4.0, MATHPILEadheres to that stricter licensing. In all other cases, it operates under the CC BY-NC-SA 4.0 license.|
|---|---|
|Have any third parties imposed IP- based or other restrictions on the data associated with the instances?|Not to our knowledge.|
|---|---|
20
-----
|Do any export controls or other regulatory restrictions apply to the dataset or to individual instances?|Not to our knowledge.|
|---|---|
**MAINTENANCE**
|Who will be supporting/hosting/main- taining the dataset?|MATHPILEwill be hosted on the HuggingFace Hub.|
|---|---|
|How can the owner/curator/manager of the dataset be contacted?|[email protected] [email protected]|
|---|---|
|Is there an erratum?|No.|
|---|---|
|Will the dataset be updated?|Yes, it is currently a work in progress and updates are ongoing.|
|---|---|
|If others want to extend/augmen- t/build on/contribute to the dataset, is there a mechanism for them to do so?|No.|
|---|---|
Table 5: Datasheet for MATHPILE, following the framework introduced by Gebru et al. (2021).
21
-----
**B** **Examples of MATHPILE**
We provide some illustrative examples from each source in MATHPILE, as shown in Figure 6 to Figure 12.
A document from MATHPILE-CommonCrawl
**Text:**
Are there optimizers where it is possible to specify ordinal ranking of parameters?
Assume that f is smooth (n-th order differentiable in each of the parameters).
An approach I often use when applying unconstrained optimisation algorithms to constrained problems is to transform the parameter space such that the constraints cannot be violated.
Of course this results inprecision. _θ1[∗]_ _[≥]_ _[θ]2[∗]_ _[≥]_ _[θ]3[∗]_ [which isn’t quite what you asked for. To get a strict ranking you’ll need to bump][ x][1][ −] _[x]2[2]_ [and][ x][1][ −] _[x]2[2]_ _[−]_ _[x]3[2]_ [down at the last digit of]
thus spake a.k.thus spake a.k.
These variants of your constraints are linear, so provided that your function f is well-behaved (smooth, easy to calculate, easy to compute derivatives, derivatives are well-conditioned,
etc.), any constrained optimization solver should be able to solve your problem without issue.
Not the answer you’re looking for? Browse other questions tagged optimization constrained-optimization or ask your own question.
Does the amount of correlation of model parameters matter for nonlinear optimizers?
Optimization of a blackbox function with an equality constraint?
...
**Subset: CommonCrawl**
**meta:**
language_detection_score: 0.8670,
char_num_after_normalized: 926,
contain_at_least_two_stop_words: True,
ellipsis_line_ratio: 0.0,
idx: 383668,
lines_start_with_bullet_point_ratio: 0.0,
mean_length_of_alpha_words: 5.0870,
non_alphabetical_char_ratio: 0.0,
symbols_to_words_ratio: 0.0,
uppercase_word_ratio: 0.0060,
...
Figure 6: An example Common Crawl document in MATHPILE
**C** **Details for Corpus Collection**
The subjects from which we collected papers on arXiv are listed in Table 6. The specific StackExchange
sites from which we gathered data are listed in Table 7.
Subjects
math.AG, math.AT, math.AP, math.CT, math.CA, math.CO, math.AC, math.CV, math.DG,
math.DS, math.FA, math.GM, math.GN, math.GT, math.GR, math.HO, math.IT, math.KT,
math.LO, math.MP, math.MG, math.NT, math.NA, math.OA, math.OC, math.PR, math.QA,
math.RT, math.RA, math.SP, math.ST, math.SG, math-ph, quant-ph, cs.CC, cs.CG, cs.DM, cs.DS,
cs.FL, cs.GT, cs.LG, cs.NA, cs.LO, q-fin.MF, stat.CO, stat.ML, stat.ME, stat.OT, stat.TH, econ.TH
Table 6: The subject list during collecting corpus from arXiv.
Sites sourced from StackExchange
math.stackexchange.com, mathoverflow.net, mathematica.stackexchange.com,
matheducators.stackexchange.com, hsm.stackexchange.com, physics.stackexchange.com,
proofassistants.stackexchange.com, tex.stackexchange.com, datascience.stackexchange,
cstheory.stackexchange.com, cs.stackexchange.com
Table 7: The site list during collecting corpus from StackExchange.
22
-----
**D** **Examples of Duplicates Encountered in the Deduplication Process**
We provide some illustrative examples of duplicates from each source in the deduplication process, as
shown in Table 8 to Table 13.
We also provide examples of downstream task benchmarks (i.e., MATH and MMLU-STEM) leaks
identified during our data contamination detection process for our corpus (as shown in Table 14 and
Table 15) and OpenWebMath 16 (as shown in Table 16).
23
-----
A document from MATHPILE-Wikipedia
**Text:**
# Inner Automorphism
In abstract algebra, an inner automorphism is an automorphism of a group, ring, or algebra given by the conjugation action of a fixed element, called the conjugating element. They
can be realized via simple operations from within the group itself, hence the adjective "inner". These inner automorphisms form a subgroup of the automorphism group, and the
quotient of the automorphism group by this subgroup is defined as the outer automorphism group.
## Definition
If G is a group and g is an element of G (alternatively, if G is a ring, and g is a unit), then the function
_φg : G →_ _G_
_φg_ (x) := g[−][1]xg
is called (right) conjugation by g (see also conjugacy class). This function is an endomorphism of G: for all x1, x2 ∈ _G,_
_φg_ (x1x2) = g[−][1]x1x2g = (g[−][1]x1g)(g[−][1]x2g) = φg (x1)φg (x2),
where the second equality is given by the insertion of the identity between x1 and x2. Furthermore, it has a left and right inverse, namely φg−1 . Thus, φg is bijective, and so an
isomorphism of G with itself, i.e., an automorphism. An inner automorphism is any automorphism that arises from conjugation.[1]
When discussing right conjugation, the expression g[−][1]xg is often denoted exponentially by x[g] . This notation is used because composition of conjugations satisfies the identity:
(x[g][1] )[g][2] = x[g][1] _[g][2]_ for all g1, g2 ∈ _G. This shows that right conjugation gives a right action of G on itself._
### Inner and Outer Automorphism Groups
The composition of two inner automorphisms is again an inner automorphism, and with this operation, the collection of all inner automorphisms of G is a group, the inner automorphism
group of G denoted Inn(G).
Inn(G) is a normal subgroup of the full automorphism group Aut(G) of G. The outer automorphism group, Out(G), is the quotient group
Aut(G)
Out(G) = _._
Inn(G)
The outer automorphism group measures, in a sense, how many automorphisms of G are not inner. Every non-inner automorphism yields a non-trivial element of Out(G), but
different non-inner automorphisms may yield the same element of Out(G).
Saying that conjugation of x by a leaves x unchanged is equivalent to saying that a and x commute:
_a[−][1]xa = x ⇐⇒_ _xa = ax._
Therefore, the existence and number of inner automorphisms that are not the identity mapping is a kind of measure of the failure of the commutative law in the group (or ring).
An automorphism of a group G is inner if and only if it extends to every group containing G.[2]
...
**Subset: Wikipedia**
**meta:**
language_detection_score: 0.7236,
char_num_after_normalized: 5794,
contain_at_least_two_stop_words: True,
ellipsis_line_ratio: 0.0,
lines_start_with_bullet_point_ratio: 0.0,
mean_length_of_alpha_words: 4.2245,
mimetype: text/html,
page_index: 48171,
page_path: A/Inner_automorphism,
page_title: Inner automorphism,
non_alphabetical_char_ratio: 0.1422,
symbols_to_words_ratio: 0.0,
uppercase_word_ratio: 0.0871,
...
Figure 7: An example Wikipedia document in MATHPILE
24
-----
A document from MATHPILE-Textbooks
**Text:**
# LINEAR TORIC FIBRATIONS
SANDRA DI ROCCO
## INTRODUCTION TO TORIC FIBRATIONS
(a)Definition 1.1. A toric fibration is a surjective flat map X is a toric variety _f : X →_ _Y with connected fibres where_
(b) Y is a normal algebraic variety
(c) dim(Y ) < dim(X).
Remark 1.2. Observe that ifF . _f : X →_ _Y is a toric fibration then Y and a general fiber F are also toric varieties. Moreover if X is smooth, respectively Q-factorial then so is Y and_
Combinatorial characterization. A toric fibration has the following combinatorial characterization (see [EW, Chapter VI] for further details). Letbe a toric variety of dimension n and let i : ∆ _,→_ _N a sublattice._ _X = XΣ, where Σ ⊂_ _N_ _[∼]= Z[n],_
Proposition 1.3. [EW] The inclusion i induces a toric fibration if and only if:
(b) For every(a) ∆ is a primitive lattice, i.e. σ ∈ Σ(n), σ = (∆ τ +⊗ ηR, where) ∩ _N = ∆ τ ∈_ ∆. and η ∩ ∆= {0} (i.e. Σ is a splitfan).
We briefly outline the construction. The projectiontoric variety defined by the fan ΣF = {σ ∈ Σ ∩ π∆ : N}. _→_ _N/∆_ induces a map of fans Σ → _π(Σ) and thus a map of toric varieties f : X →_ _Y . The general fiber F is a_
When the toric varietyvarieties (X, L) and XF, L in a toric fibration is polarized by an ample line bundle|F, for a general fiber F, define lattice polytopes L we will call the pair P(X,L), P(F, L (|Ff ) :. The polytope X → _Y, L) P a polarized toric fibration. Observe that the polarized toric(X,L) is in fact a "twisted sum" of a finite number of_
lattice polytopes fibering over _P(F, L|F ). Definition 1.4. Let R0, . . ., Rk ⊂_ ∆ be polytopes. Let π : M → Λ be a surjective map of lattices such that π (Ri) = vi and
theisomorphic to v0, · · ·, v Conv (k are distinct vertices ofR0, . . ., Rk). We will denote it by: Conv (v0, . . ., vk). We will call a Cayley π-twisted sum (or simply a Cayley sum) of R0, . . ., Rk a polytope which is affinely
[R0 ⋆. . . ⋆Rk]π
If the polytopes Ri are additionally normally equivalent, i.e. they define the same normal fan ΣY, we will denote the Cayley sum by:
Cayley (R0, . . ., Rk)(π,Y ) .
These are the polytopes that are associated to a polarized toric fibration. Consider a sublattice i : ∆ _,→_ _N and the dual lattice surjection π : M →_ Λ.
Proposition 1.5. [CDR08] The sublattice i : ∆ _,→_ _N induces a polarized toric fibration (f : X →_ _Y, L) if and only if P(X,L) = Cayley (R0, . . ., Rk)(π,Y ) for some_
normally equivalent polytopes R0, . . ., Rk.
The polarized general fiber _F, L|F_ corresponds to the polarized toric variety associated to the polytope P(F, L|F ) = Conv (v0, . . ., vk) and the polytopes R0, · · ·, Rk
define the embeddings of the invariant sections polarized by the restrictions of _L._
Example 1.6. Consider the Hirzebruch surface F1 = Blp P[2][] = P _OP1 ⊕_ _OP1 (1)_ polarized by the tautological line bundle ξ = 2ϕ[∗] []OP2 (1) _−_ _E where ϕ is the_
blow-up map and E the exceptional divisor. The associated polytope is _P_ = Cayley (∆1, 2∆1).
FIGURE 1. The Hirzebruch surface P _OP1 ⊕_ _OP1 (1)_
Example 1.7. More generally:
- when π(P ) = ∆t the polytope Cayley (R0, . . ., Rk)(π,Y ) defines the variety P (L0 ⊕ _. . . ⊕_ _Lk ), where the Li are ample line bundles on the toric variety Y, polarized_
by the tautological bundle ξ. In particular L|F = OPt (1).
- When π(P ) is a simplex (not necessarily smooth) Cayley (R0, . . ., Rk)(π,Y ) defines a Mori-type fibration. A fibration whose general fiber has Picard rank one. - When
_π(P ) = s∆t then again the variety has the structure of a P[t]-fibration whose general fiber F is embedded via an s-Veronese embedding:_ _F, L|F_ = P[t], OPt (s) .
For general Cayley sums, [R0 ⋆. . . ⋆Rk]π, one has the following geometrical interpretation. Let (X, L) be the associated polarized toric variety and let Y be the toric variety
defined by the Minkowski sum R0 + . . . + Rk. The fan defining Y is a refinement of the normal fan of Ri for i = 0, . . ., k. Consider the associated birational maps
_ϕsymbol the maps of fansi : Y →_ _Yi, where ( ϕYi : Σi, LiY) → is the polarized toric variety defined by the polytopeΣYi . Define then the fan:_ _Ri. The line bundles Hi = ϕ[∗]i_ [(][L][i][)][ are nef line bundles on][ Y][ . Denote by the same]
ΣZ : _ϕ[−]i_ [1] _σj_ _× ηl, for all σj ∈_ ΣYi, ηl ∈ Σ∆
n o
where Λ = Conv (v0, . . ., vk). It is a refinement of ΣX and thus the defining variety Z is birational to X. Moreover it is a split fan and thus it defines a toric fibration
_fH :i Z on the invariant sections. →_ _Y . The Cayley sum [R0 ⋆. . . ⋆Rk]π is the polytope defined by the nef line bundle ϕ[∗](L), and the polytopes Ri are the polytopes defined by the nef line bundles_
Historical Remark. The definition of a Cayley polytope originated by what is "classically" referred to as the Cayley trick. We first recall the definition of Resultant and Discriminant.
Let f1(x), . . ., fn(x) be a system of n polynomials in n variables x = (x1, . . ., xn) supported on A ⊂ Z[n]. This means that fi = Πaj ∈Acj _x[aj]_ . The resultant (of
_A ), RA_ _cj ), is a polynomial in the coefficients cj_, which vanishes whenever the corresponding polynomials have a common zero.
The discriminant of a finite subset _A, ∆A, is also a polynomial ∆A_ _cj_ in the variables cj ∈ _A which vanishes whenever the corresponding polynomial has a multiple root._
Theorem 1.8. [GKZ][Cayley Trick] The A-resultant of the system f1, . . ., f _n equals the Adiscriminant of the polynomial:_
_n_
_p(x, y) = fi(x) +_ 2 _yi−1fi(x)._
X
Let Ri = N (fi) ⊂ R[n] be the Newton polytopes of the polynomials fi. The Newton polytope of the polynomial p(x, y) is the Cayley sum [R1 ⋆. . . ⋆Rn]π, where
_π : R[2][n][−][1]_ _→_ R[n][−][1] is the natural projection such that π [R1 ⋆. . . ⋆Rn]π = ∆n−1.
...
**Subset: Textbooks**
**meta:**
book_name: Linear Toric Fibrations_Sandra Di Rocco,
type: Notes,
...
Figure 8: An example textbook document in MATHPILE
25
-----
A document from MATHPILE-ProofWiki
**Text:**
\section{Test for Submonoid}
Tags: Abstract Algebra, Monoids
\begin{theorem}
To show that \struct {T, circ} is a submonoid of a monoid \struct {S, circ}, we need to show that:
:(2)::(1): _T\struct {T, circ} is a magma (that is, that it is closed) ⊆_ _S_
:(3): \struct {T, circ} has an identity.
\end{theorem}
\begin{proof}
From Subsemigroup Closure Test, (1) and (2) are sufficient to show that \struct {T, circ} is a subsemigroup of \struct {S, circ}.
Demonstrating the presence of an identity is then sufficient to show that it is a monoid. {{qed}}
Category:Monoids
\end{proof}
...
**Subset: ProofWiki**
**meta:**
type: Theorem_Proof,
...
Figure 9: An example ProofWiki (a theorem and its proof) document in MATHPILE
A document from MATHPILE-ProofWiki
**Text:**
\begin{definition}[Definition:That which produces Medial Whole with Medial Area/Whole]
Let a, b ∈R>0 be (strictly) positive real numbers such that a > b.
Let a − _b be a straight line which produces with a medial area a medial whole._
The real number a is called the ”’whole”’ of the straight line which produces with a medial area a medial whole.
Category:Definitions/Euclidean Number Theory
\end{definition}
**Subset: ProofWiki**
**meta:**
type: Definition,
...
Figure 10: An example ProofWiki (definition) document in MATHPILE
26
-----
A document from MATHPILE-arXiv
**Text:**
\begin{document}
\title{Coherence freeze in an optical lattice investigated via pump-probe spectroscopy}
\author{Samansa Maneshi}
\email[]{[email protected]}
\author{Chao Zhuang}
\author{Christopher R. Paul}
\author{Luciano S. Cruz}
\altaffiliation[Current address: ]{UFABC, São Paulo, Brazil.}
\author{Aephraim M. Steinberg}
\affiliation{Centre for Quantum Information & Quantum Control and Institute for Optical Sciences,
Department of Physics, University of Toronto, Canada }
\date{\today}
\pacs{37.10.Jk, 03.65.Yz, 03.67.-a, 42.50.Md}
\begin{abstract}
Motivated by our observation of fast echo decay and a surprising coherence freeze, we have developed a pump-probe spectroscopy technique for vibrational states of ultracold [85]Rb
atoms in an optical lattice to gain information about the memory dynamics of the system. We use pump-probe spectroscopy to monitor the time-dependent changes of frequencies
experienced by atoms and to characterize the probability distribution of these frequency trajectories. We show that the inferred distribution, unlike a naive microscopic model of the
lattice, correctly predicts the main features of the observed echo decay.
\end{abstract}
\maketitle
Characterizing decoherence mechanisms is a crucial task for experiments aiming to control quantum systems, e.g., for quantum information processing (QIP). In this work, we
demonstrate how two-dimensional (2D) pump-probe spectroscopy may be extended to provide important information on these mechanisms. As a model system, we study quantum
vibrational states of ultracold atoms in an optical lattice. In addition to being a leading candidate system for QIP \citeBrennenJaksch, optical lattices are proving a versatile testing
ground for the development of quantum measurement and control techniques \citeOMandel, Anderlini and a powerful tool for quantum simulations, e.g. the study of Anderson
localization and the Hubbard model \citeMottAnderson.
In our experiment, we study the vibrational coherence of [85]Rb atoms trapped in a shallow one-dimensional standing wave. Through our 2D pump-probe technique, we obtain detailed
microscopic information on the frequency drift experienced by atoms in the lattice, enabling us to predict the evolution of coherence. Since the pioneering development of the technique
in NMR\citeJeener-Ernst, 2D spectroscopy has been widely used to obtain high-resolution spectra and gain information about relaxations, couplings, and many-body interactions, in
realms ranging from NMR \citeErnst to molecular spectroscopy \citeMukamel-Jonas, Hybl, Brixner, MillerNature to semiconductor quantum wells \citeCundiff, KWStone. Here, we
show that similar powerful techniques can be applied to the quantized center-of-mass motion of trapped atoms, and more generally, offer a new tool for the characterization of systems
in QIP and quantum control.
\begin{figure}
\caption{(Color online) Two typical measurements of echo amplitude vs. time. The echo pulse and the observed echo envelope are centered at times tp and 2tp, respectively. After an
initial decay, echo amplitude stays constant for about 1ms forming a plateau, before decaying to zero. The average lattice depths are 20ER (circles) and 18ER (squares).}
\label{fig1}
\end{figure}
We have previously measured the evolution of coherence between the lowest two vibrational states of potential wells \cite{Ours}.
The dephasing time is about 0.3ms (T2[⋆][).]
This dephasing is partly due to an inhomogeneous distribution of lattice depths as a result of the transverse Gaussian profile of the laser beams. To measure the homogeneous
decoherence time (T2), we perform pulse echoes, measuring the echo amplitude as a function of time \cite{Ours}.
Figure \ref{fig1} shows two typical measurements of echo amplitude carried out on different dates under slightly different conditions such as different average lattice depths and
different dephasing times. The echo amplitude initially decays with a time constant of aboutIt then exhibits a 1ms-long coherence freeze followed by a final decay. Absent real decoherence on the short time scale of 0.7ms, which is much faster than the photon scattering time ( 1ms, only loss of frequency memory would inhibit∼ 60ms) in the lattice.
the appearance of echoes. This loss comes about when atoms experience time-varying frequencies. We use 2D pump-probe spectroscopy to monitor this frequency drift. Our 2D
pump-probe spectroscopy is essentially a version of spectral hole-burning for vibrational states. By monitoring the changes in the hole spectrum as a function of time we gain
information on the atoms’ frequency drift.
Information obtained from our 2D spectra enables us to characterize the temporal decay of frequency memory and through our simulations we find that “coherence freeze" is related to
the shape of this memory loss function.
Similar plateaus in echo decay and a two-stage decay of echo amplitude have been observed in a Cooper-pair box \cite{Nakamura}, for a single electron spin in a quantum dot
\cite{Vandersypen} and for electron spins in a semiconductor \cite{SClark}. Those plateaus or two-stage decays have been either explained through {\it{a priori}} models or simply
described phenomenologically. Here, we are introducing an experimental technique to directly probe the origin of plateaus.
The periodic potential in our experiment is formed by interfering two laser beams blue-detuned bytrapping atoms in the regions of low intensity, which minimizes the photon scattering rate and the transverse forces. The two laser beams intersect with parallel linear polarizations at 25GHz from the D2 transition line, F = 3 → _F_ _[′]_ = 4 (λ = 780nm), thus
an angle of θ = (49.0 ± 0.2)[◦], resulting in a spacing of L = (0.930 ± 0.004)µm between the wells. Due to gravity, the full effective potential also possesses a “tilt” of
2.86ER per lattice site, where ER = 8mLh[2] [2][ is the effective lattice recoil energy. The photon scattering time in our experiment is][ ≈] [60][ms][ and the Landau-Zenner tunneling]
times for transitions from the lowest two levels are greater than 160ms.
Atoms are loaded to the lattice during a molasses cooling stage and prepared in the ground vibrational state by adiabatic filtering \cite{StefanQPT}. Due to the short coherence length
of atoms in optical molasses (60nm at 10µK), there is no coherence between the wells. We measure populations of atoms in the ground vibrational, the first excited, and the (lossy)
higher excited states P1, P2, and PL, respectively, by fluorescence imaging of the atomic cloud after adiabatic filtering \cite{StefanQPT}.
...
**Subset: arXiv**
**meta:**
id: 1005.2635,
language_detection_score: 0.8389,
...
Figure 11: An example arXiv document in MATHPILE
27
-----
A document from MATHPILE-StackExchange
**Question:**
Title: Are fractions hard because they are like algebra?
Body:
It occurs to me that to really understand the ways that people work with fractions on paper requires a good grasp of the ideas that numbers have multiple representations and that
expressions can be manipulated in various ways without changing the number they represent. These are essentially algebraic ideas.
For example, adding fractions requires us to rewrite the fractions in a different form, and then essentially factorise the expression. This is the same as rearranging expressions in algebra.
And cancelling down before multiplying is very sophisticated rewriting relying on various associative and commutative laws.Dividing fractions requires us to rerepresent an operation like ÷ [2]3 [as][ ×][ 3]2 [. This is the same as realising the connection between operations that you use to solve equations in algebra.]
So it seems that we are really asking children to think in algebraic ways in order to understand fraction calculations well. This would seem to me to be a good reason why children and
adults find it hard - they need more scaffolding in some abstract ideas.
Is this a reasonable theory and has anyone written about this algebra-fractions connection before? To be clear, I am not asking if this is the only reason fractions are hard, but if there is
any discussion out there to draw parallels between learning algebra and learning to manipulate fractions.
Id: 7826
Score: 17
Tags: <algebra><fractions>
LastEditorDisplayName: None
OwnerDisplayName: None
ClosedDate: None
FavoriteCount: None
language_detection_score: 0.9558
...
**Answers:**
Body: Not sure about paper references. One reason why people don[´]t understand fractions is because they are seemingly illogical.
You score one basket out of three 1/3.
A little while later you try again and score 1/2. Clearly you have scored 2/5 shots? In many ways this is the correct answer. So why shouldn[´]t 3[1] [+][ 1]2 [=][ 2]5
People generally don[´]t understand equivalent fractions. It is strange for one farmer to say there are 4 sheep and another to say there are 8/2 sheep in the same field. People assume that
the number 4 does what it says on the tin and is how we always describe 4 ness of something. They don[´]t understand equivalence.
Partly to blame is treating fractions like conjuring tricks. If this is the question...do this, if this is the question ...do another uncorrelated thing. I asked my class (who seemingly could
compute [2]3 _[×][ 3]5_ [correctly) to draw me a picture]
instead of just multiplying. No one could do it yet they all said "but it´s
_frac615 you times the top and the bottom!"_
I think drawing fractions is extremely useful. Draw [2]3 _[÷][ 2][ or][ 2][ ÷][ 2]3_ [It´]s not easy but I find students develop robustness eventually and begin to abstract themselves.
Id: 7827,
Score: 9,
is_accepted_answer: False,
language_detection_score: 0.9599,
Body: The obvious (to me) source of difficulty is that fractions are just plain complicated, more so than almost anything else in elementary education. You have to operate with a pair of
numbers, instead of a single one, and you have to keep the order straight. Adding is quite complicated in its own right. Things are further complicated by rules about least common
denominators and least terms.
I ´m a little unclear about the question´s emphasis on algebra. Any sort of general rule or operation in arithmetic must have a connection to algebra, but I do not see what is intrinsically
difficult about algebra that relates to numeric fractions. Certainly some parts of algebra are hard, and some parts harder than others, algebraic fractions among them. It seems to me that
fractions are difficult because it´s easy to confuse the various bits. Even when youve got them straight, they´´ re noticeably slower to use, take concentration, and when things have such
cognitive demands, they´re harder to think with.
Conceptually, they´re a little bit odd, which is probably distracting until you get used to them. What they represent do not seem to apply to the same things that (whole) numbers do.
Evidently fractions are not considered in this passage:
In that city, which was the oldest in the world, the cat was an object of veneration. Its worship was the religion of the country. The multiplication and addition of cats were a perpetual
instruction in arithmetic. Naturally, any inattention to the wants of a cat was punished with great severity in this world and the next... – A. Bierce, "A Revolt of the Gods"
Now to have one-and-a-half cats seems a very different thing than to have three halves. In the former case, there´s a good chance that the one cat you have will be alive and purring,
while the same could not possibly be said about any of the halves. No doubt such lessons are considered blasphemous in that city. While many things may be divided into parts – cars
are a better example than cats – not many can be divided into equivalent parts that can be used as a basis for fractions. As we get used to fractions, as well as real numbers, we are
taught to ignore this and accept statements such as "the average family has 2.4 children." Here is another example:
By then, she will have shed 80 of the 240 pounds she weighed in with when she entered Peter Bent Brigham hospital obesity program. A third of her left behind! – The Boston Herald
American, 7/7/77
The question seems to welcome references. There are certainly several that connect fractions with algebra. This paper,
Seigler et al. (2013), Fractions: the new frontier for theories of numerical development, Trends in Cognitive Sciences,
is a short survey of what is known and unknown about neural bases for one´s knowledge of fractions. Whole number arithmetic knowledge has been studied, and the authors suggest that
the representation of the knowledge fractions is an area ripe for investigation. It reviews (with references) why fractions are difficult and the relation of skill at fractions to skill at
algebra. Generally – or, rather, I only know of papers that discuss the connection in that direction, with algebra skill being dependent on fractions skill. (OTOH, Im not widely read in ´
this area.)
Id: 7831,
Score: 11,
is_accepted_answer: False,
language_detection_score: 0.9780
**Subset: StackExchange**
Figure 12: An example StackExchange document in MATHPILE. Here is a question from “matheducators”
“.stackexchang.com” with two high-quality responses.
28
-----
|In algebraic topology we often encounter chain complexes with extra multiplicative structure. For example, the cochain complex of a topological space has what is called the E -algebra structure which comes from the cup prod- ∞ uct. In this talk I present an idea for studying such chain com- plexes, E differential graded algebras (E DGAs), using ∞ ∞ stable homotopy theory. Namely, I discuss new equiva- lences between E DGAS that are defined using commuta- ∞ tive ring spectra. ring spectra are equivalent. Quasi-isomorphic E DGAs ∞ are E topologically equivalent. However, the examples I ∞ am going to present show that the opposite is not true; there are E DGAs that are E topologically equivalent but not ∞ ∞ quasi-isomorphic. This says that between E DGAs, we ∞ have more equivalences than just the quasi-isomorphisms. I also discuss interaction of E topological equiva- ∞ lences with the Dyer-Lashof operations and cases where E topological equivalences and quasi-isomorphisms ∞ agree.|Özet : In algebraic topology we often encounter chain complexes with extra multiplicative structure. For example, the cochain complex of a topological space has what is called the E -algebra structure which comes from the cup ∞ product. In this talk I present an idea for studying such chain complexes, E differential graded algebras (E ∞ ∞ DGAs), using stable homotopy theory. Namely, I discuss new equivalences between E DGAS that are defined using ∞ commutative ring spectra.We say E DGAs are E topo- ∞ ∞ logically equivalent when the corresponding commutative ring spectra are equivalent. Quasi-isomorphic E DGAs ∞ are E topologically equivalent. However, the examples I ∞ am going to present show that the opposite is not true; there are E DGAs that are E topologically equivalent but not ∞ ∞ quasi-isomorphic. This says that between E DGAs, we ∞ have more equivalences than just the quasi-isomorphisms. I also discuss interaction of E topological equivalences ∞ with the Dyer-Lashof operations and cases where E topo- ∞ logical equivalences and quasi-isomorphisms agree.|
|---|---|
|Université de la Saskatchewan, 1 - 4 juin 2015 www.smc.math.ca//2015f Comité d’organisation Financement étudiants Minisymposia invités Minisymposia libres Conférences libres Horaire - Minisymposa invités Open Problems Graphs and matrices Responsable et président: Shaun Fallat et Karen Meagher (University of Regina) WAYNE BARRETT, Brigham Young University The Fielder Vector and Tree Decompositions of Graphs [PDF] In the 1970’s Fiedler initiated a study of the second smallest eigenvalue of the Laplacian matrix L of a graph and the corresponding eigenvector(s). These "Fiedler" vectors have become spectacularly successful in revealing properties of the associated graph. A tree decomposition T of a graph G = (V, E) is an associated tree whose nodes are subsets of V and whose edge set respects the structure of G. Tree decompositions have been used in the analysis of complex networks. This talk reports on an algorithm developed by students at BYU for obtaining a tree decomposition by means of Fiedler vector(s) of G. ... Graphs that have a weighted adjacency matrix with spec- trum {λ 1n−2, λ2 2} [PDF] In this talk I will characterize the graphs which have an edge weighted adjacency matrix belonging to the class of n × n involutions with spectrum equal to {λn 1−2, λ2 2} for some λ and some λ 2. The connected graphs turn out to be 1 the cographs constructed as the join of at least two unions of pairs of complete graphs, and possibly joined with one other complete graph.|University of Saskatchewan, June 1 - 4, 2015 www.cms.math.ca//2015 Invited Minisymposia Contributed Minisymposia Contributed Talks Graphs and matrices Organizer and Chair: Shaun Fallat and Karen Meagher (University of Regina) WAYNE BARRETT, Brigham Young University The Fielder Vector and Tree Decompositions of Graphs [PDF] In the 1970’s Fiedler initiated a study of the second smallest eigenvalue of the Laplacian matrix L of a graph and the corresponding eigenvector(s). These "Fiedler" vectors have become spectacularly successful in revealing properties of the associated graph. A tree decomposition T of a graph G = (V, E) is an associated tree whose nodes are subsets of V and whose edge set respects the structure of G. Tree decompositions have been used in the analysis of complex networks. This talk reports on an algorithm developed by students at BYU for obtaining a tree decomposition by means of Fiedler vector(s) of G. ... Graphs that have a weighted adjacency matrix with spec- trum {λn 1−2, λ2 2} [PDF] In this talk I will characterize the graphs which have an edge weighted adjacency matrix belonging to the class of n × n involutions with spectrum equal to {λn 1−2, λ2 2} for some λ and some λ 2. The connected graphs turn out to be 1 the cographs constructed as the join of at least two unions of pairs of complete graphs, and possibly joined with one other complete graph.|
|---|---|
Table 8: Near-duplication matches found in CommonCrawl by MinHash LSH deduplication (in italics).
29
-----
|\begin{document} \title{Querying Guarded Fragments via Resolution} \section{A detailed example} Here we include some equations and theorem-like environments to show how these are labeled in a supplement and can be referenced from the main text. Consider the following equation: \begin{equation} \label{eq:suppa} a^2 + b^2 = c^2. \end{equation} You can also reference equations such as \cref{eq:matrices,eq:bb} from the main article in this supplement. \lipsum[100-101] \begin{theorem} An example theorem. \end{theorem} \lipsum[102] \begin{lemma} An example lemma. \end{lemma} \lipsum[103-105] Here is an example citation: \cite{KoMa14}. \section[Proof of Thm]{Proof of \cref{thm:bigthm}} \label{sec:proof} \lipsum[106-112] \section{Additional experimental results} \Cref{tab:foo} shows additional supporting evidence. \begin{table}[htbp] {\footnotesize \caption{Example table} \label{tab:foo} \begin{center} \begin{tabular}{|c|c|c|} \hline Species & \bf Mean & \bf Std.∼Dev. \\\hline 1 & 3.4 & 1.2 \\ 2 & 5.4 & 0.6 \\\hline \end{tabular} \end{center} } \end{table} \end{document}|\begin{document} \title{Limited memory Kelley’s Method Converges for Composite Convex and Submodular Objectives} \section{A detailed example} Here we include some equations and theorem-like environments to show how these are labeled in a supplement and can be referenced from the main text. Consider the following equation: \begin{equation} \label{eq:suppa} a^2 + b^2 = c^2. \end{equation} You can also reference equations such as \cref{eq:matrices,eq:bb} from the main article in this supplement. \lipsum[100-101] \begin{theorem} An example theorem. \end{theorem} \lipsum[102] \begin{lemma} An example lemma. \end{lemma} \lipsum[103-105] Here is an example citation: \cite{KoMa14}. \section[Proof of Thm]{Proof of \cref{thm:bigthm}} \label{sec:proof} \lipsum[106-112] \section{Additional experimental results} \Cref{tab:foo} shows additional supporting evidence. \begin{table}[htbp] {\footnotesize \caption{Example table} \label{tab:foo} \begin{center} \begin{tabular}{|c|c|c|} \hline Species & \bf Mean & \bf Std.∼Dev. \\\hline 1 & 3.4 & 1.2 \\ 2 & 5.4 & 0.6 \\\hline \end{tabular} \end{center} } \end{table} \end{document}|
|---|---|
Table 9: A near-duplication match found in arXiv by MinHashLSH deduplication (in italics).
30
-----
|\section{Definition:Constructed Semantics /Instance 4/Rule of Idempotence} Tags: Formal Semantics \begin{theorem} The Rule of Idempotence: :$(p \lor p) \implies p$ is a tautology in Instance 4 of constructed semantics. \end{theorem} \begin{proof} By the definitional abbreviation for the conditional: :$\mathbf A \implies \mathbf B =_{\text{ def}} \neg \mathbf A \lor \mathbf B$ the Rule of Idempotence can be written as : : $\neg \left({p \lor p}\right) \lor p$ This evaluates as follows: :$\begin{array }{| cccc|c|c|} \hline \neg & (p & \lor & p) & \lor & p \\ \hline 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 1 \\ 0 & 2 & 2 & 2 & 0 & 2 \\ 2 & 3 & 3 & 3 & 0 & 3 \\ \hline \end{array}$ {{qed}} Category:Formal Semantics \end{proof}|\section{Definition:Constructed Semantics /Instance 5/Rule of Idempotence} Tags: Formal Semantics \begin{theorem} The Rule of Idempotence: :$(p \lor p) \implies p$ is a tautology in Instance 5 of constructed semantics. \end{theorem} \begin{proof} By the definitional abbreviation for the conditional: :$\mathbf A \implies \mathbf B =_{\text{ def}} \neg \mathbf A \lor \mathbf B$ the Rule of Idempotence can be written as : : $\neg \left({p \lor p}\ right) \lor p$ This evaluates as follows: :$\begin{array }{| cccc|c|c|} \hline \neg & (p & \lor & p) & \lor & p \\ \hline 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 1 \\ 3 & 2 & 2 & 2 & 0 & 2 \\ 0 & 3 & 3 & 3 & 0 & 3 \\ \hline \end{array}$ {{qed}} Category:Formal Semantics \end{proof}|
|---|---|
|\section{Imaginary Part of Complex Product} Tags: Complex Multiplication \begin{theorem} Let $z_1$ and $z_2$ be complex numbers. Then: :$\map \Im {z_1 z_2} = \map \Re {z_1} \, \map \Im {z_2} + \map \Im {z_1} \, \ map \Re {z_2}$ \end{theorem} \begin{proof} Let $z_1 = x_1 + i y_1$ and $z_2 = x_2 + i y_2$. By definition of complex multiplication: :$z_1 z_2 = x_1 x_2 - y_1 y_2 + i \paren {x_1 y_2 + x_2 y_1}$ Then {{begin -eqn}} {{eqn | l = \map \Im {z_1 z_2} | r = x_1 y_2 + x_2 y_1 | c = {{Defof|Imaginary Part}} }} {{eqn | r = \map \Re {z_1} \, \map \Im { z_2} + \map \Im {z_1} \, \map \Re {z_2 } | c = {{Defof|Imaginary Part}} }} {{end -eqn}} {{qed}} \end{proof}|\section{Real Part of Complex Product} Tags: Complex Multiplication \begin{theorem} Let $z_1$ and $z_2$ be complex numbers. Then: :$\map \Re {z_1 z_2} = \map \Re {z_1} \ map \Re {z_2} - \map \Im {z_1} \map \ Im {z_2}$ \end{theorem} \begin{proof} Let $z_1 = x_1 + i y_1$ and $z_2 = x_2 + i y_2$. By definition of complex multiplication: :$z_1 z_2 = x_1 x_2 - y_1 y_2 + i \paren {x_1 y_2 + x_2 y_1}$ Then: {{begin -eqn}} {{eqn | l = \map \Re {z_1 z_2} | r = x_1 x_2 - y_1 y_2 | c = {{ Defof|Real Part}} }} {{eqn | r = \map \Re {z_1} \map \Re {z_2} - \map \Im {z_1} \map \Im {z_2} | c = {{ Defof|Real Part}} }} {{end -eqn}} {{qed}} \end{proof}|
|---|---|
Table 10: Near-duplication matches found in ProofWiki by MinHash LSH deduplication.
31
-----
|# HP-42S The **HP-42S RPN Scientific** is a programmable RPN Scientific hand held calculator introduced by Hewlett- Packard in 1988. It has advanced functions suitable for ap- plications in mathematics, linear algebra, statistical analy- sis, computer science and others. HP-42S The HP-42S — Type| Programmable scientific Manufacturer| Hewlett-Packard Introduced| 1988 Discontinued| 1995 Calculator Entry mode| RPN Precision| 12 display digits (15 digits internally),[1] expo- nent ±499 Display type| LCD dot-matrix Display size| 2 lines, 22 characters, 131×16 pixels CPU Processor| Saturn (Lewis) Programming Programming language(s)| RPN key stroke (fully merged) Firmware memory| 64 KB of ROM Program steps| 7200 Interfaces Ports| IR (Infrared) printing Other Power supply| 3×1.5V button cell batteries (Panasonic LR44, Duracell PX76A/675A or Energizer 357/303) Weight| 6 oz (170 g) Dimensions| 148×80×15mm ## Overview Perhaps the HP-42S was to be released as a replacement for the aging HP-41 series as it is designed to be compatible with all programs written for the HP-41. Since it lacked expandability, and lacked any real I/O ability, both key features of the HP-41 series, it was marketed as an HP-15C replacement. The 42S, however, has a much smaller form factor than the 41, and features many more built-in functions, such as a matrix editor, complex number support, an equation solver, user-defined menus, and basic graphing capabilities (the 42S can draw graphs only by programs). Additionally, it features a two-line dot matrix display, which made stack manipulation easier to understand. Production of the 42S ended in 1995.[2] As this calculator is regarded amongst the best ever made in terms of quality, key stroke feel, ease of programming, and daily usability for engineers,[3] in the HP calculator community the 42S has become famous for its high prices in online auctions, up to several times its introduction price, which has created a scarcity for utility end users.|# HP-42S The **HP-42S RPN Scientific** is a programmable RPN Scientific hand held calculator introduced by Hewlett- Packard in 1988. It has advanced functions suitable for ap- plications in mathematics, linear algebra, statistical analy- sis, computer science and others. HP-42S The HP-42S — Type| Programmable scientific Manufacturer| Hewlett-Packard Introduced| 1988 Discontinued| 1995 Calculator Entry mode| RPN Precision| 12 display digits (15 digits internally),[1] expo- nent ±499 Display type| LCD dot-matrix Display size| 2 lines, 22 characters, 131×16 pixels CPU Processor| Saturn (Lewis) Programming Programming language(s)| RPN key stroke (fully merged) Firmware memory| 64 KB of ROM Program steps| 7200 Interfaces Ports| IR (Infrared) printing Other Power supply| 3×1.5V button cell batteries (Panasonic LR44, Duracell PX76A/675A or Energizer 357/303) Weight| 6 oz (170 g) Dimensions| 148×80×15mm ## Overview Perhaps the HP-42S was to be released as a replacement for the aging HP-41 series as it is designed to be compatible with all programs written for the HP-41. Since it lacked expandability, and lacked any real I/O ability, both key features of the HP-41 series, it was marketed as an HP-15C replacement. The 42S, however, has a much smaller form factor than the 41, and features many more built-in functions, such as a matrix editor, complex number support, an equation solver, user-defined menus, and basic graphing capabilities (the 42S can draw graphs only by programs). Additionally, it features a two-line dot matrix display, which made stack manipulation easier to understand. Production of the 42S ended in 1995.[2] As this calculator is regarded amongst the best ever made in terms of quality, key stroke feel, ease of programming, and daily usability for engineers,[3] in the HP calculator community the 42S has become famous for its high prices in online auctions, up to several times its introduction price, which has created a scarcity for utility end users.|
|---|---|
Table 11: Duplication matches found in Wikipedia by MinHash LSH deduplication (in italics).
32
-----
|# Basic Concepts in Graph Theory ## Section 1: What is a Graph? There are various types of graphs, each with its own defini- tion. Unfortunately, some people apply the term "graph" rather loosely, so you can’t be sure what type of graph they’re talking about unless you ask them. After you have finished this chapter, we expect you to use the terminology carefully, not loosely. To motivate the various definitions, we’ll begin with some examples. Example 1 (A computer network) Computers are often linked with one another so that they can interchange in- formation. Given a collection of computers, we would like to describe this linkage in fairly clean terms so that we can answer questions such as "How can we send a message from computer A to computer B using the fewest possible intermediate computers?" We could do this by making a list that consists of pairs of computers that are connected. Note that these pairs are unordered since, if computer C can communicate with com- puter D, then the reverse is also true. (There are sometimes exceptions to this, but they are rare and we will assume that our collection of computers does not have such an excep- tion.) Also, note that we have implicitly assumed that the computers are distinguished from each other: It is insuffi- cient to say that "A PC is connected to a Mac." We must specify which PC and which Mac. Thus, each computer has a unique identifying label of some sort. For people who like pictures rather than lists, we can put dots on a piece of paper, one for each computer. We label each dot with a computer’s identifying label and draw a curve connecting two dots if and only if the correspond- ing computers are connected. Note that the shape of the curve does not matter (it could be a straight line or some- thing more complicated) because we are only interested in whether two computers are connected or not. Below are two such pictures of the same graph. Each computer has been labeled by the initials of its owner. ... ## Basic Concepts in Graph Theory The notation Pk(V ) stands for the set of all k-element subsets of the set V . Based on the previous example we have Definition 1 (Simple graph) A simple graph G is a pair G = (V, E) where - V is a finite set, called the vertices of G, and - E is a subset of P2(V ) (i.e., a set E of two-element subsets of V ), called the edges of G. ...|# Basic Concepts in Graph Theory ## Section 1: What is a Graph? There are various types of graphs, each with its own defini- tion. Unfortunately, some people apply the term "graph" rather loosely, so you can’t be sure what type of graph they’re talking about unless you ask them. After you have finished this chapter, we expect you to use the terminology carefully, not loosely. To motivate the various definitions, we’ll begin with some examples. Example 1 (A computer network) Computers are often linked with one another so that they can interchange in- formation. Given a collection of computers, we would like to describe this linkage in fairly clean terms so that we can answer questions such as "How can we send a message from computer A to computer B using the fewest possible intermediate computers?" We could do this by making a list that consists of pairs of computers that are connected. Note that these pairs are unordered since, if computer C can communicate with com- puter D, then the reverse is also true. (There are sometimes exceptions to this, but they are rare and we will assume that our collection of computers does not have such an excep- tion.) Also, note that we have implicitly assumed that the computers are distinguished from each other: It is insuffi- cient to say that "A PC is connected to a Mac." We must specify which PC and which Mac. Thus, each computer has a unique identifying label of some sort. For people who like pictures rather than lists, we can put dots on a piece of paper, one for each computer. We label each dot with a computer’s identifying label and draw a curve connecting two dots if and only if the correspond- ing computers are connected. Note that the shape of the curve does not matter (it could be a straight line or some- thing more complicated) because we are only interested in whether two computers are connected or not. Below are two such pictures of the same graph. Each computer has been labeled by the initials of its owner. ... ## Basic Concepts in Graph Theory The notation Pk(V ) stands for the set of all k-element subsets of the set V . Based on the previous example we have Definition 1 (Simple graph) A simple graph G is a pair G = (V, E) where - V is a finite set, called the vertices of G, and - E is a subset of P2(V ) (i.e., a set E of two-element subsets of V ), called the edges of G. ...|
|---|---|
Table 12: Duplication matches found in Textbooks by MinHash LSH deduplication (in italics).
33
-----
|This was originally posted on mathoverflow, but it seems it’s more appropriate to post here. Let B be a paracompact space with the property that any (topological) vector bundle E → B is trivial. What are some non-trivial examples of such spaces, and are there any interesting properties that characterize them? For simple known examples we of course have contractible spaces, as well as the 3-sphere S3. This one follows from the fact that its rank n vector bundles are classified by π (BO(n)) = π (O(n)) = 0. I’m primarily interested in 3 2 the case where B is a closed manifold. Do we know any other such examples? There is this nice answer to a MSE question which talks about using the Whitehead tower of the appropriate clas- sifying space to determine whether a bundle is trivial or not. This seems like a nice tool (of which I am not familiar with) to approaching this problem. As a secondary question, could I ask for some insight/references to this approach? EDIT Now that we know from the answer all the examples for closed 3-manifolds (integral homology spheres), I guess I can now update the question to the case of higher odd di- mensions. Does there exist a higher dimensional example?|Let B be a paracompact space with the property that any (topological) vector bundle E → B is trivial. What are some non-trivial examples of such spaces, and are there any interesting properties that characterize them? For simple known examples we of course have contractible spaces, as well as the 3-sphere S3. This one follows from the fact that its rank n vector bundles are classified by π (BO(n)) = π (O(n)) = 0. I’m primarily interested in 3 2 the case where B is a closed manifold. Do we know any other such examples? There is this nice answer to a MSE question which talks about using the Whitehead tower of the appropriate clas- sifying space to determine whether a bundle is trivial or not. This seems like a nice tool (of which I am not familiar with) to approaching this problem. As a secondary question, could I ask for some insight/references to this approach? EDIT Now that we know from the answers all the examples for closed 3-manifolds, I guess I can now update the ques- tion to the case of higher odd dimensions. Does there exist a higher dimensional example?|
|---|---|
|This is a copy of my question on MSE (https://math.stackexchange.com/questions/3372432) because this forum seems better suited for historical questions: In 1985, Gosper used the not-yet-proven formula by Ra- manujan √ 1 2 2 X∞ (4n)! 26390n + 1103 = · · π 992 (n!)4 3964n n=0 to compute 17 · 106 digits of π, at that time a new world record. Here (https://www.cs.princeton.edu/courses/archive/fall98/ cs126/refs/pi-ref.txt) it reads: There were a few interesting things about Gosper’s com- putation. First, when he decided to use that particular formula, there was no proof that it actually converged to pi! Ramanujan never gave the math behind his work, and the Borweins had not yet been able to prove it, because there was some very heavy math that needed to be worked through. It appears that Ramanujan simply observed the equations were converging to the 1103 in the formula, and then assumed it must actually be 1103. (Ramanujan was not known for rigor in his math, or for providing any proofs or intermediate math in his formulas.) The math of the Borwein’s proof was such that after he had computed 10 million digits, and verified them against a known calcula- tion, his computation became part of the proof. Basically it was like, if you have two integers differing by less than one, then they have to be the same integer. Now my historical question: Who was the first to prove this formula? Was it Gosper because he added the last piece of the proof, or was it the Borweins, afterwards? And was Gosper aware of this proof when he did his computation?|In 1985, Gosper used the not-yet-proven formula by Ra- manujan √ 1 2 2 X∞ (4n)! 26390n + 1103 = · · π 992 (n!)4 994n n=0 to compute 17 · 106 digits of π, at that time a new world record. Here (https://www.cs.princeton.edu/courses/archive/fall98/ cs126/refs/pi-ref.txt) it reads: There were a few interesting things about Gosper’s com- putation. First, when he decided to use that particular formula, there was no proof that it actually converged to pi! Ramanujan never gave the math behind his work, and the Borweins had not yet been able to prove it, because there was some very heavy math that needed to be worked through. It appears that Ramanujan simply observed the equations were converging to the 1103 in the formula, and then assumed it must actually be 1103. (Ramanujan was not known for rigor in his math, or for providing any proofs or intermediate math in his formulas.) The math of the Borwein’s proof was such that after he had computed 10 million digits, and verified them against a known calcula- tion, his computation became part of the proof. Basically it was like, if you have two integers differing by less than one, then they have to be the same integer. Now my historical question: Who was the first to prove this formula? Was it Gosper because he added the last piece of the proof, or was it the Borweins, afterwards? And was Gosper aware of this proof when he did his computation?|
|---|---|
Table 13: Near-duplication matches found in StackExchange by MinHash LSH deduplication (in italics).
34
-----
_Coin A is flipped three times and coin B is flipped four times. What is the probability that the number of heads_
_obtained from flipping the two fair coins is the same?_
Video Solution
Answer:
## Problem 3.2.2 (AMC 10)
Two tour guides are leading six tourists. The guides decide to split up. Each tourist must choose one of the guides, but
with the stipulation that each guide must take at least one tourist. How many different groupings of guides and tourists
are possible?
......
_One morning each member of Angela’s family drank an 8-ounce mixture of coffee with milk. The amounts of coffee_
_and milk varied from cup to cup, but were never zero. Angela drank a quarter of the total amount of milk and a sixth of_
_the total amount of coffee. How many people are in the family?_
Answer:
## Problem 20.2.15 (AMC 12)
The state income tax where Kristin lives is levied at the rate of p% of the first $28000 of annual income plus (p + 2)%
of any amount above $28000. Kristin noticed that the state income tax she paid amounted to (p + 0.25)% of her
annual income. What was her annual income?
Answer:
......
2002
_Find the least positive integer k for which the equation_ _n_ = k has no integer solutions for n. (The notation _x_
_⌊_ _⌋_
_means the greatest integer less than or equal to x.)_
Answer:
## Problem 40.1.9 (AIME)
Find the number of positive integers n less than 1000 for which there exists a positive real number x such that
_n = x⌊x⌋.’, ”, ’Note: ⌊x⌋_ is the greatest integer less than or equal to x.’
......
_What is the sum of the roots of z[12]_ = 64 that have a positive real part?
Answer:
## Problem 45.8.13 (AMC 12)
The complex numbers z and w satisfy z[13] = w, w[11] = z, and the imaginary part of z is sin _[mπ]n_ [, for relatively prime]
positive integers m and n with m < n. Find n.’
Answer:
......
Table 14: Exact match examples from the test set of MATH benchmark found in Textbooks by line-level exact
match deduplication (in italics).
35
-----
_Let x and y be real numbers satisfying x[4]y[5]_ + y[4]x[5] = 810 and x[3]y[6] + y[3]x[6] = 945. Evaluate 2x[3] + (xy)[3] + 2y[3].
Let x1 < x2 < x3 be the three real roots of the equation _√2014x[3]_ _−_ 4029x[2] + 2 = 0. Find x2(x1 + x3).
Let m be the largest real solution to the equation
3 5 17 19
_x_ 3 [+] _x_ 5 [+] _x_ 17 [+] _x_ 19 [=][ x][2][ −] [11][x][ −] [4]
_−_ _−_ _−_ _−_
There are positive integers a, b, and c such that m = a + _b +_ _c. Find a + b + c._
_[√]_
Let f (x) = x[4] + ax[3] + bx[2] + cx + d. If f ( 1) = 1, f (2) = 4, f ( 3) = 9, and f (4) = 16. Find f (1).
_−_ _−_ p _−_ _−_ _−_ _−_
Solve in positive integers x[2] _−_ 4xy + 5y[2] = 169.
Solve in integers the question x + y = x[2] _xy + y[2]._
_x+y_ _−_
Solve in integers _x[2]_ _xy+y[2][ =][ 3]7_
_−_
Prove the product of 4 consecutive positive integers is a perfect square minus 1.
For any arithmetic sequence whose terms are all positive integers, show that if one term is a perfect square, this
sequence must have infinite number of terms which are perfect squares.
Prove there exist infinite number of positive integer a such that for any positive integer n, n[4] + a is not a prime
number.
......
_The real root of the equation 8x[3]_ 3x[2] 3x 1 = 0 can be written in the form _√3_ _a+c 3√b+1_ _, where a, b, and c are_
_−_ _−_ _−_
_posit ive integers. Find a + b + c._
Find the number of positive integers m for which there exist nonnegative integers x0, x1, . . ., x2011 such that
2011
_m[x][k]_ _._
_k=1_
X
_m[x][0]_ =
Suppose x is in the interval [0, _[π]2_ []][ and][ log][24 sin][ x][(24 cos][ x][) =][ 3]2 [. Find][ 24 cot][2][ x][.]
Let P (x) be a quadratic polynomial with real coefficients satisfying x[2] _−_ 2x + 2 ≤ _P_ (x) ≤ 2x[2] _−_ 4x + 3 for all
real numbers x, and suppose P (11) = 181. Find P (16).
Letgreatest possible value of (a, b, c) be the real solution of the system of equations a[3] + b[3] + c[3] can be written in the form x[3] _−_ _xyz[m]n_ [, where] = 2, y[ m][3] [ and]− _xyz[ n][ are relatively prime positive] = 6, z[3]_ _−_ _xyz = 20. The_
integers. Find m + n.
Find the smallest positive integer n with the property that the polynomial x[4] _−_ _nx + 63 can be written as a product of_
two nonconstant polynomials with integer coefficients.
The zeros of the function f (x) = x[2] _−_ _ax + 2a are integers. What is the sum of the possible values of a?_
Let a, b, and c be three distinct one-digit numbers. What is the maximum value of the sum of the roots of the equation
(x − _a)(x −_ _b) + (x −_ _b)(x −_ _c) = 0?_
At the theater children get in for half price. The price for 5 adult tickets and 4 child tickets is 24.50. How much would
8 adult tickets and 6 child tickets cost?
The quadratic equation x[2] + px + 2p = 0 has solutions x = a and x = b. If the quadratic equation x[2] + cx + d = 0
has solutions x = a + 2 and x = b + 2, what is the value of d?
......
_Find the smallest positive integer n with the property that the polynomial x[4]_ _−_ _nx + 63 can be written as a product of_
_two nonconstant polynomials with integer coefficients._
The zeros of the function f (x) = x[2] _−_ _ax + 2a are integers. What is the sum of the possible values of a?_
Let a, b, and c be three distinct one-digit numbers. What is the maximum value of the sum of the roots of the equation
(x − _a)(x −_ _b) + (x −_ _b)(x −_ _c) = 0 ?_
At the theater children get in for half price. The price for 5 adult tickets and 4 child tickets is 24.50. How much would
8 adult tickets and 6 child tickets cost?
The quadratic equation x[2] + px + 2p = 0 has solutions x = a and x = b. If the quadratic equation x[2] + cx + d = 0
has solutions x = a + 2 and x = b + 2, what is the value of d?
PolynomialAndEquation Root Delta SpecialEquation Function NumberTheoryBasic IndeterminateEquation
SqueezeMethod Pythagore anTripletFormula TrigIdentity Inequality LogicalAndReasoning
AMC10/12 AIME IMO
US International
With Solutions
© 2009 - 2023 Math All Star
......
Table 15: Exact match examples from the test set of MATH benchmark found in CommonCrawl by line-level exact
match deduplication (in italics). In these examples, we only observe repeated questions from MATH, but do not
identify duplicate answers. 36
-----
_The sum of an infinite geometric series is a positive number S, and the second term in the series is 1. What is the_
_smallest possible value of S?_
**(A)** [1+]2√5
## Problem 17
**(B) 2** **(C)**
**(D) 3** **(E) 4**
All the numbers 2, 3, 4, 5, 6, 7 are assigned to the six faces of a cube, one number to each face. For each of the eight
vertices of the cube, a product of three numbers is computed, where the three numbers are the numbers assigned to the
three faces that include that vertex. What is the greatest possible value of the sum of these eight products?
**(A) 312** **(B) 343** **(C) 625** **(D) 729** **(E) 1680**
...
_What is the value of b + c if x[2]_ + bx + c > 0 only when x ∈ (−∞, −2) ∪ (3, ∞)?
May 11, 2020
...
_An ambulance travels at 40 mph and can follow a 20-mile route making no stops to get to the hospital. A helicopter_
_travels at one mile per minute, and the air route is 15 miles to get to the same hospital. However, the helicopter takes_
_three minutes for takeoff and three minutes for landing. How many fewer minutes does it take for the helicopter to_
_complete its trip (takeoff, flight and landing) than for the ambulance to complete its trip?_
Apr 6, 2020
#1
+34
0
Keep in mind that Time=Distance/Speed
_What is the greatest possible area of a triangular region with one vertex at the center of a circle of radius 1 and the_
_other two vertices on the circle?_
A bad first step is to put the center at the origin, one point at (1,0), and one point at (sin x, cos x).
A start is the area of a triangle with included angle expression
_a × b × sin θ_
2
Assuming θ in radians. If theta is π/2 then we have a right triangle. Let a=b=1. Area expression is
_A = (sin θ)/2_
This is maximum for θ = π/2.
Answer is maximum area for a right triangle.
...
Table 16: Exact match examples from the test set of MATH benchmark (upper) and MMLU-STEM (bottom) found
in OpenWebMath by line-level exact match deduplication (in italics). In these examples, we only observe repeated
questions, but do not identify duplicate answers.
37
-----
| [
"Zengzhi, Wang",
"Rui, Xia",
"Pengfei, Liu"
] | 2023-12-28T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2312.17120 | https://arxiv.org/abs/2312.17120 | null |
GeoRE: A Relation Extraction Dataset for Chinese Geometry Problems | Relation extraction is an important foundation for many natural language understanding applications, as well as geometry problem solving. In this paper, we present GeoRE, a relation extraction dataset for Chinese geometry problems. To the best of our knowledge, GeoRE is the first Chinese relation extraction dataset about geometry problems. It consists of 12,901 geometry problems on 43 shapes, covering 19 positional relations and 4 quantitative relations. We experiment with various state-of-the-art (SOTA) models and the best model achieves only 70.3% F1 value on GeoRE. This shows that GeoRE presents a challenge for future research. | null | [
"Wei, Yu",
"Mengzhu, Wang",
"Xiaodong, Wang",
"Yongfu, Zha",
"Xun, Zhou",
"Yongjian, Zhang",
"Shuyu, Miao",
"Jingdong, Liu"
] | 2021-01-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
|
GraphIC: A Graph-Based In-Context Example Retrieval Model for Multi-Step Reasoning | In-context learning (ICL) enables large language models (LLMs) to generalize to new tasks by incorporating a few in-context examples (ICEs) directly in the input, without updating parameters. However, the effectiveness of ICL heavily relies on the selection of ICEs, and conventional text-based embedding methods are often inadequate for tasks that require multi-step reasoning, such as mathematical and logical problem solving. This is due to the bias introduced by shallow semantic similarities that fail to capture the deeper reasoning structures required for these tasks. We present GraphIC, a novel approach that leverages graph-based representations of reasoning processes, coupled with Bayesian Networks (BNs) to select ICEs. Graph structures inherently filter out shallow semantics while preserving the core reasoning structure. Importantly, BNs capture the dependency of a node's attributes on its parent nodes, closely mirroring the hierarchical nature of human cognition-where each thought is shaped by preceding ones. This makes BNs particularly well-suited for multi-step reasoning tasks, aligning the process more closely with human-like reasoning. Extensive experiments across three types of reasoning tasks (mathematical reasoning, code generation, and logical reasoning) demonstrate that GraphIC outperforms both training-free and training-based models in selecting ICEs, excelling in terms of both effectiveness and efficiency. We show that GraphIC enhances ICL's performance and interoperability, significantly advancing ICE selection for multi-step reasoning tasks. | GraphIC is presented, a novel approach that leverages graph-based representations of reasoning processes, coupled with Bayesian Networks (BNs) to select ICEs, and outperforms both training-free and training-based models in selecting ICEs, excelling in terms of both effectiveness and efficiency. | [
"Yaqing, Wang",
"Jiale, Fu",
"Simeng, Han",
"Jiaming, Fan",
"Chen, Si",
"Xu, Yang"
] | 2024-10-03T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2410.02203 | https://arxiv.org/abs/2410.02203 | https://www.semanticscholar.org/paper/41c1b6e832ab67ea64b34ef0cabcba3a90222765 |
|
GroupDebate: Enhancing the Efficiency of Multi-Agent Debate Using Group Discussion | In recent years, Large Language Models (LLMs) have demonstrated remarkable capabilities across diverse NLP tasks. Extensive research has explored how to enhance the logical reasoning abilities such as Chain-of-Thought, Chain-of-Thought with Self-Consistency, Tree-Of-Thoughts, and multi-agent debates. In the context of multi-agent debates, significant performance improvements can be achieved with an increasing number of agents and debate rounds. However, the escalation in the number of agents and debate rounds can drastically raise the tokens cost of debates, thereby limiting the scalability of the multi-agent debate technique. To better harness the advantages of multi-agent debates in logical reasoning tasks, this paper proposes a method to significantly reduce token cost in multi-agent debates. This approach involves dividing all agents into multiple debate groups, with agents engaging in debates within their respective groups and sharing interim debate results between groups. Comparative experiments across multiple datasets have demonstrated that this method can reduce the total tokens by up to 51.7% during debates and while potentially enhancing accuracy by as much as 25%. Our method significantly enhances the performance and efficiency of interactions in the multi-agent debate. | A method to significantly reduce token cost in multi-agent debates by dividing all agents into multiple debate groups, with agents engaging in debates within their respective groups and sharing interim debate results between groups. | ## GroupDebate: Enhancing the Efficiency of Multi-Agent Debate Using Group Discussion
**Tongxuan Liu[1][∗],** **Xingyu Wang[2],** **Weizhe Huang[1],** **Wenjiang Xu[2],** **Yuting Zeng[1],**
**Lei Jiang[1],** **Hailong Yang[3],** **Jing Li[1][∗]**
1
University of Science and Technology of China
2 3
Institute of Automation, Chinese Academy of Sciences Beihang University
```
{tongxuan.ltx, hwz871982879, yuting_zeng, jianglei0510}@mail.ustc.edu.cn
{wangxingyu2024, xuwenjiang2024}@ia.ac.cn
[email protected], [email protected]
```
**Abstract**
In recent years, Large Language Models (LLMs) have demonstrated remarkable
capabilities across diverse NLP tasks. Extensive research has explored how to enhance the logical reasoning abilities such as Chain-of-Thought, Chain-of-Thought
with Self-Consistency, Tree-Of-Thoughts, and multi-agent debates. In the context
of multi-agent debates, significant performance improvements can be achieved
with an increasing number of agents and debate rounds. However, the escalation
in the number of agents and debate rounds can drastically raise the tokens cost of
debates, thereby limiting the scalability of the multi-agent debate technique. To
better harness the advantages of multi-agent debates in logical reasoning tasks,
this paper proposes a method to significantly reduce token cost in multi-agent
debates. This approach involves dividing all agents into multiple debate groups,
with agents engaging in debates within their respective groups and sharing interim
debate results between groups. Comparative experiments across multiple datasets
have demonstrated that this method can reduce the total tokens by up to 51.7%
during debates and while potentially enhancing accuracy by as much as 25%. Our
method significantly enhances the performance and efficiency of interactions in the
multi-agent debate.
**1** **Introduction**
Large language Models (LLMs) such as GPT [1, 4, 5, 25, 26], LLaMa [31, 32], and PaLM [2, 7]
have demonstrated remarkable capabilities in various downstream tasks. These models can reach
or even exceed human performance in a range of NLP tasks but their performance is still limited in
complex mathematical and logical reasoning tasks [21]. To address these limitations, researchers
have proposed Chain-of-Thought [17, 35, 23] that generates the reasoning process step by step.
Subsequent research has introduced such as the Tree-of-Thoughts [38], Graph-of-Thoughts [3], and
the use of Verification [20] to further enhance the ability to perform complex multi-step reasoning.
Unfortunately, these single-agent methods are more likely to fall into random fabrication of facts or
the generation of delusions, thus leading to erroneous outcomes in multi-step reasoning processes
[5, 14, 15]. The multi-agent debate methods mitigate these issues by allowing different agents to
express their arguments to each other and these approaches have demonstrated considerable potential
and effectiveness across various types of tasks and datasets [6, 9, 19, 29, 33, 36, 37].
However, as the number of agents and rounds increases, the token cost in multi-agent debate can
escalate significantly. This issue results in monetary expenditure on tokens through LLM-based
_∗Corresponding authors._
Preprint. Under review.
-----
|Col1|Col2|GSM8K|Col4|Col5|
|---|---|---|---|---|
|API Val|ues|||(5, 4)|
||||||
|||(5,|3) (4, 4)||
|||(4, 3)|||
|(3,|(4, 2)(3, 3) 2)||||
|(2,|(2, 4) 3)||||
|(2, 2)|||||
|(1, 1)|||||
||||||
5000 10000 15000 20000 25000
Token Cost
Arithmetic
90
88
86
84
82
80
78
76
35
30
25
20
15
10
100
90
80
70
60
50
25
20
|API|Values|(5,|2) (3, 3)|(4, 3)|(3, 4)|(5, 3)|
|---|---|---|---|---|---|---|
||||||||
||(3, 2)|(4, 2) (2, 4|)||||
|(2, (1, 4|(2, 3) 2) )||||||
||||||||
|(1, 1)|||||||
||||||||
API Values (5, 3)
(5, 2) (4, 3) (3, 4)
(3, 3)
(4, 2)
(3, 2)
(2, 4)
(2, 3)
(2, 2)
(1, 4)
(1, 1)
2000 4000 6000 8000 10000 12000
Token Cost
15
10
Figure 1: Comparison of Token Cost and Accuracy Under Different Combinations of Agents
**and Rounds. The numbers in parentheses corresponding to each circle represent the pair of agent**
number and round number. The size/color of the circle represents the number of API calls, indicating
that the larger the circle, the more times the OpenAI API is called.
API or substantial computational overhead and power consumption, thereby severely hindering
the scalability and broader application of multi-agent debate, especially in scenarios with limited
computational resources [11]. As illustrated in the Figure 1, compared with a single LLM-based
agent, employing a multi-agent debate with three agents in five rounds can potentially raise the
accuracy from the initial 50% to 98%, but introduces 101× token cost in the Arithmetic [4] task.
Similarly, in the GSM8K [8] task, five rounds of multi-agent debate involving four agents can raise
the accuracy from 76% to 88%, but it results in 90× token cost. To address the issue of the rapidly
increasing number of tokens in multi-agent debates, researchers have proposed various improved
techniques. For instance, the multi-agent debate in [9] summarizes the output of other agents to serve
as the input for the next round. [29] proposes a "forgetfulness" mode that only the output from the
previous round is stored as input for the next round. However, only employing a "forgetfulness"
mode or summary mechanism to reduce token cost is still limited due to their theoretical complexity
and the issue of exacerbated token growth. Moreover, owing to their simplistic debating modes, they
struggle to fully exploit the collaborative capabilities of multi-agent debates.
In human societies, when multiple individuals engage in a debate, the group discussion method
is usually employed to enhance the efficiency of interaction while also preserving the diversity of
viewpoints [18]. Inspired by this, in this paper, we propose a novel method GroupDebate (GD),
which is based on group discussion to further reduce token cost in multi-agent debates. Specifically,
Our method divides all participating agents into several debate groups, with each group conducting
internal debates. Following the debates, the results are summarized and placed into a shared pool.
After that, each group of agents retrieves the debate summaries of all groups from the pool, which
serve as the input for the agents in the next round. Upon the conclusion of the debate, all agents
reach a consensus or the final outcome is determined by majority vote. Furthermore, we conduct a
theoretical analysis of the total token cost of the GroupDebate, thereby affirming the effectiveness
of the method. In our experiments, we evaluate the effectiveness of GroupDebate in comparison to
existing multi-agent debate methods and observe up to 45%/42.6%/50.6%/51.7% reduction in token
cost in the Arithmetic/GSM8K/MMLU/MATH dataset, as well as up to 25%/11% improvement in
accuracy in the MMLU/MATH dataset. Moreover, compared with methods such as CoT, Reflection,
and CoT-SC, GroupDebate also significantly outperforms them in terms of accuracy.
The main contributions of this paper are as follows:
1. We propose an innovative multi-agent debate strategy based on group discussion which can
improve the efficiency and performance of multi-agent debates.
2. We conduct a theoretical analysis of token cost based on our method, demonstrating its
efficiency and effectiveness.
3. Extensive experiments across four logical reasoning and mathematical datasets show that our
method can not only significantly reduce token cost but also potentially enhance accuracy,
validating the efficiency and superiority of our method.
-----
**Question: “ Comet Halley orbits the sun every 75 years.** Bill’ s dad…… .
Debate How old was Bill when he saw the Comet for the first time?"
|Col1|Since Comet Halley orbits……Therefore, the answer would be 52.5.|
|---|---|
|Col1|If Comet Halley orbits……Therefore, the answer would be 15.|
|---|---|
Let's start by finding……
Therefore, the answer would be
45.
Round1
These are the solutions to the problem from other agents:……
|Col1|Based on the given information……the answer would be 15.|
|---|---|
|Col1|Using the information from the other agents……the answer would be 15.|
|---|---|
Based on the solutions……the
correct answer is 15.
Round2
Figure 2: An Example of Multi-agent Debate Among Three Agents with Two Rounds.
**2** **Preliminaries**
**2.1** **Multi-agent Debate**
In the context of multi-agent debates (MAD), by integrating multiple LLMs (each treated as an
individual agent) and using various collaboration strategies, agents can propose viewpoints, review,
and respond to the results of other agents in multiple rounds of debates [6, 29, 30]. The process of
MAD can be summarized as follows: (i) At the beginning, each agent is provided with a question
and generates an individual response; (ii) These responses then form the new input context for each
agent, and the agents generate new responses; (iii) This debate procedure is repeated over multiple
rounds and the final answer is obtained through majority voting. Throughout multi-agent debate
procedure, all agents can consistently improve their own responses based on the responses of other
agents. In order to reduce input context length, [9] proposes that after collecting the responses from
other agents, the responses should first be summarized and then used as the new input context for
each agent. Figure 2 shows an example of two-round debates among three agents. In the first round,
each agent independently responds to the input and their outputs are collected and summarized. In the
second round, each agent’s input includes summaries from the previous round, which are combined
with a prompt to guide the output. Ultimately, all agents reach a consensus conclusion.
**2.2** **Token Cost Problem in Multi-agent Debate**
In the Figure 1, we can observe that although an increase in the number of agents and rounds
can significantly enhance accuracy, the sharply increasing token cost is still a serious challenge in
multi-agent debate. We analyze this based on the Simultaneous-Talk interaction strategy [6]. In this
strategy, each agent synchronizes their results with other agents in each round of the debate. We
separately scrutinized the changes in token cost brought about by increases in the number of agents
and the number of rounds. From Figure 3, it can be observed that under 4 rounds, as the number of
agents increases from 1 to 8, the token cost in GSM8K/Arithmetic/MMLU has respectively grown by
36×/44×/49×. Similarly, under 4 agents, as the number of rounds increases from 1 to 4, the token cost
in GSM8K/Arithmetic//MMLU has respectively increased by 17×/29×/19×. These findings reveal
that as the number of agents and rounds increases, the token cost also significantly rises.
**3** **Methodology**
In this section, we first introduce the overall framework of our GroupDebate. Subsequently, we
provide mathematical analysis of the token cost for both MAD and our GroupDebate. Formally,
assume there are M LLM-based agents, denoted as A = _Ai_ _i = 1, 2,_ _, M_, participating
_{_ _|_ _· · ·_ _}_
in a multi-round debate, with the total number of debate rounds denoted as T . In each round t
(t = 1, 2, . . ., T ), the output of each agent Ai is represented as Output[t]i[. The tokens of the initial]
question prompt are denoted as Q. These notations will be used throughout this paper.
-----
45000 Arithmetic
40000
35000
30000
25000
Tokens20000
15000
10000
5000
0
1 2 3 4 5 6 7 8
Number of Agents
Arithmetic
15000
12000
Tokens 90006000
3000
0
1 2 3 4
Rounds
GSM8K
42000
36000
30000
24000
Tokens18000
12000
6000
0
1 2 3 4 5 6 7 8
Number of Agents
18000 GSM8K
15000
12000
9000
Tokens
6000
3000
0
1 2 3 4
Rounds
MMLU
60000
50000
40000
30000
Tokens
20000
10000
0
1 2 3 4 5 6 7 8
Number of Agents
MMLU
20000
15000
10000
Tokens
5000
0
1 2 3 4
Rounds
Figure 3: Token Cost Under Different Numbers of Agents and Rounds. Figures in the first row
illustrate the token cost with variations in agents under the premise of 4 rounds. Figures in the second
row depict the token cost with changes in rounds under the condition of 4 agents.
**3.1** **GroupDebate**
We have M agents A = _Ai_ _i = 1, 2,_ _, M_, which can be randomly divided into N groups
_{_ _|_ _· · ·_ _}_
_G =_ _Gj_ _j = 1, 2,_ _, N_, with average K agents in each group. The GroupDebate splits the total
_{_ _|_ _· · ·_ _}_
debate rounds into S stages, with each stage encompassing R rounds. Thus, the total number of
rounds T can be calculated as T = S × R. For the s-th stage’s r-th round, GroupDebate selects one
of the following processes:
(1) Initial Thinking. If s = 1 and r = 1 (i.e., the first stage’s first round), we input the initial
question prompt Q to each agent.
(2) Inta-group Debate. If r > 1, we utilize the outputs from other agents within the same
group as the input for each agent.
(3) Inter-group Debate. If s > 1 and r = 1, we merge the outputs from the last round in each
group into a summary and input the summaries from other groups to each agent.
Meanwhile, inspired by [29], we summary the responses from other groups and restrict each agent to
receive the latest summary from the previous stage in the inter-group debate. After the S-th stage’s
_R-th round, all agents vote, and the ultimate result is determined by the majority selection. The_
detailed GroupDebate process can be found in Appendix A. The Figure 4 illustrates an example of
GroupDebate consisting of two stages and two groups. In the first stage, two agents in each group
receive the initial question and exchange ideas within the group. In the second stage, agents share the
summaries of their respective groups between groups and then discuss within their own groups again.
**3.2** **Token Cost Analysis**
**Token Cost in Multi-agent Debate.** We implement the summary mechanism in MAD following
[9], where we summarize the output of other agents as the input for each agent in the next round. The
summary for agent Ai in round t is denoted as Summaryi[t][. We define the token cost in the summary]
generation after each round t as Token[summary]t . And token cost in each round t can be computed as
follows:
(Q + Output[t]i[)][,] _t = 1_
_i=1_
X
_Token[t]_ =
(1)
_Token[t][−][1]_ +
(Summaryi[t][−][1] + Output[t]i[)][,] _t > 1_
_i=1_
X
Finally, the total token cost in MAD is Token[MAD] = _MTQ + (M_ [2]T + MT [2])C, where C
_O_
represents the upper bound on the token number for each agent’s response and the generated summary.
More mathematical details are illustrated in Appendix B.1.
-----
GroupDebate **Questioncollects urine in the body?: Which of the following best describes the structure that(A)Bladder (B) Kidney (C)Ureter (D)Urethra** "
|Group1|Col2|Group2|
|---|---|---|
|The answer is The answer is A B Using the solutions from other agents:…… The answer is The answer is A A|Stage1 Round1|The answer is C The answer is B Using the solutions from other agents:…… The answer is B The answer is B|
||Round2||
|Summary the solutions from all the agents:…… All the agents believe the answer is A. They think ……|Summary Pool|Summary the solutions from all the agents:…… All the agents think the answer is B. They use the solution ……|
|Using the solutions from all groups :…… The answer is The answer is A B Using the solutions from other agents:…… The answer is The answer is A A|Stage2 Round3|Using the solutions from all groups :…… The answer is A The answer is B Using the solutions from other agents:…… The answer is A The answer is A|
||Round4||
||||
Figure 4: An Example of GroupDebate. 4 agents are divided into 2 groups and the GroupDebate
process comprises two stages, with each stage involving two rounds of intra-group debate.
**Token Cost in GroupDebate.** In GroupDebate, we summary the outputs from other groups at the
end of each stage. Here, we define the summary of group Gj at the end of stage s as Summaryj[s][.]
We define the token cost in the summary generation after each stage s as Token[summary]s . And token
cost in round t at stage s is
_Output[1]i_ _[,]_ _t = 1_
_i=1_
X
_M × Q +_
(Q + Output[t]i[−][1]
_i=1_
X
_Summaryj[s][−][1]_ + Output[t]i[)][,] _t = (s_ 1)R + 1
_−_
_j=1_
X
_Token[t]s_ [=]
(Q + Output[t]i [+]
_iX∈Gj_
_Output[t]i[′][−][1]),_ (s 1)R + 1 < t <= min(sR, T )
_−_
_i[′]X∈Gj_
(2)
_j=1_
Finally, the total token cost of GroupDebate is Token[GD] = _MTQ + (_ _[M]N[ 2][T]_ + MSN )C,
_O_
where C represents the upper bound on the token number for each agent’s response and the generated
summary. More calculation details are shown in Appendix B.2.
**Discussion.** From the overall token cost complexity perspective, GD and MAD exhibit the same
level of complexity regarding the input token cost of the question prompt Q, suggesting an equal
impact on both methods. In our GroupDebate, given fixed values for T and M, the number of
groups N and the total number of stages S can be dynamically adjusted. When we set N →
_MT_
_S_, theoretically, we can obtain Token[GD] _MTQ +_ _√M_ [3]TSC . This complexity
_O_ _→O_
is significantly lower than that of MAD. If we consider settingq _S to a small positive integer, treating_
it as a constant, then Token[GD] can further approach O _MTQ +_ _√M_ [3]TC . Moreover, in fact, N
and S also influence the diversity in multi-agent debate, affecting the accuracy of the debate results,
which will be further studied in Section 4.3.
-----
MAD GD
Arithmetic GSM8K MMLU MATH
40000
25000 25000 35000
30000
20000 20000 25000 30000
15000 15000 20000 20000
Tokens 15000
10000 10000
10000 10000
5000 5000
5000
0 0 0 0
(4,3) (4,4) (5,3) (5,4) (6,3) (6,4) (4,3) (4,4) (5,3) (5,4) (6,3) (6,4) (4,3) (4,4) (5,3) (5,4) (6,3) (6,4) (4,3) (4,4) (5,3) (5,4) (6,3) (6,4)
100 80
40
80
80
60 30
60
60
ACC(%) 40 40 40 20
20 20 20 10
0 0 0 0
(4,3) (4,4) (5,3) (5,4) (6,3) (6,4) (4,3) (4,4) (5,3) (5,4) (6,3) (6,4) (4,3) (4,4) (5,3) (5,4) (6,3) (6,4) (4,3) (4,4) (5,3) (5,4) (6,3) (6,4)
Figure 5: Comparison of Token Cost and Accuracy Between GD and MAD under Different
**Agents and Rounds. The notation (5,4) signifies 5 agents with 4 rounds.**
**4** **Experiments**
**4.1** **Experimental Setup**
**Tasks and Metrics.** To demonstrate the accuracy and effectiveness of different methods, we adopt
total token cost, accuracy (ACC) as evaluation metrics. Additionally, we select four representative
tasks related to logical reasoning and mathematical tasks to evaluate our methods, namely Arithmetic
[4], GSM8K [8], MMLU [12], and MATH [13].
**Baselines.** We conduct a comparison of the efficiency and accuracy between GroupDebate (GD) and
the following methods: (1) Chain-of-Thought (CoT) [35]. (2) Reflection [27], with the trail number
set to 3. (3) Self-Consistency with Chain-of-Thought (CoT-SC) [34], where CoT-SC(40) represents
CoT-SC with 40 reasoning paths. (4) multi-agent debate (MAD) [19], to ensure fair comparisons,
we also conduct the experiment of the MAD under various agent and round configurations. Both
GD(5,3) and MAD(5,3) indicate the use of 5 agents and 3 rounds.
**Implementation Details.** We set the number of rounds of intra-group debate to 2 in GD. Additionally, we only retain output from the last round or summary generated from the last stage. Our
experiments are conducted using the GPT-3.5-turbo-0301 language model [24]. In order to prevent
the input prompt token exceeding the GPT-3.5 limit, the MAD defaults to using the summary [9].
For all baselines and GD, we conduct ten sets of tests separately, calculate the average, and mark the
range of variation. We evaluate these methods in a zero-shot setting, and the details about prompts
are illustrated in Appendix D.
**4.2** **Main Results**
In this section, we conduct a detailed comparison of GD with MAD as well as other single-agent
methods including CoT, Reflection and CoT-SC(40). In the MATH dataset, MAD can not produce
results in both (6,3) and (6,4) scenarios due to the prompt tokens exceeding the GPT-3.5 limit. The
main observations are as follows:
**Comparison Between GD and MAD.** First, as illustrated in Figure 5, GD consistently reduces
token cost under different agent and round settings, achieving up to 45%/42.6%/50.6%/51.7%
reduction in token cost in the Arithmetic/GSM8K/MMLU/MATH datasets. This demonstrates that
our method can effectively reduce token cost in multi-agent debate while being theoretically grounded.
-----
CoT Reflection CoT-SC(40) MAD(5,3) GD(5,3)
Arithmetic GSM8K MMLU MATH
100 80 40
80
80 60 30
60
60
40 20
ACC(%) 40 40
20 20 20 10
0 0 0 0
CoTReflectionCoT-SC(40)MAD(5,3)GD(5,3) CoTReflectionCoT-SC(40)MAD(5,3)GD(5,3) CoTReflectionCoT-SC(40)MAD(5,3)GD(5,3) CoTReflectionCoT-SC(40)MAD(5,3)GD(5,3)
12500
12500 15000 20000
10000
10000 15000
7500 7500 10000
10000
Tokens 5000 5000
5000
2500 2500 5000
0 0 0 0
CoTReflectionCoT-SC(40)MAD(5,3)GD(5,3) CoTReflectionCoT-SC(40)MAD(5,3)GD(5,3) CoTReflectionCoT-SC(40)MAD(5,3)GD(5,3) CoTReflectionCoT-SC(40)MAD(5,3)GD(5,3)
Figure 6: Comparison of Token Cost and Accuracy Between GD and MAD.
8Agent,4Round 6Agent,4Round
76 30500 73.00
74 30000 72.75 20500
72.50
72 29500 72.25 20000
ACC (%)70 29000Tokens ACC (%)72.0071.75 19500Tokens
68 28500 71.50 19000
66 28000 71.25 18500
71.00
27500
2222 332 422 44 222 33 42
Strategy Strategy
Accuracy Tokens
Figure 7: Comparison of Group Strategy. The notation (4,2,2) signifies three distinct groups, each
containing 4, 2, and 2 agents respectively.
Second, GD also improves accuracy in all different settings, achieving up to 25%/11% improvement
in accuracy in the MMLU/MATH dataset, which suggests GD can enhance accuracy in multi-agent
debate while reducing token cost.
**Comparison Between GD and Other Single-Agent Methods.** As shown in Figure 6, GD(5,3) and
MAD(5,3) can significantly enhance the accuracy across all four datasets. This is because using multiagent debate allows multiple agents to exchange ideas with each other, ensuring diversity. Secondly,
multi-agent debate methods generally incur higher token cost compared to single-agent methods,
indicating a significant challenge in reducing token cost while maintaining superior accuracy in multiagent debates. Our method takes a further step and achieves significant advantages in both token cost
and accuracy compared to MAD(5,3) under the same settings. This highlights the superiority and
effectiveness of our method in multi-agent debates.
**4.3** **In-Depth Analysis of Different GroupDebate Strategies**
**Group Strategy.** In order to investigate the impact of different group strategy on accuracy and
token cost, a comparison was made under the conditions of 6 and 8 agents with 4 rounds in the
MMLU dataset. As illustrated in the Figure 7, as the groups becomes more refined, the accuracys
increase and token cost decreases. And the group strategy of (2,2,2,2) compared to the group strategy
of (4,4) results in a total token decrease of 10% and an accuracy increase of 17%.
-----
Arithmetic
GSM8K
MML
Math
42.5
27.5
100
95
90
85
80
92
|Col1|Col2|Col3|Col4|90 88 ACC(%) 86 84 82|Col6|Col7|Col8|Col9|75 ACC(%) 70 65 60|Col11|MAD GD|Col13|Col14|Col15|40.0 37.5 ACC(%) 35.0 32.5 30.0|MAD GD|Col18|Col19|Col20|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|||||||||||||||||||||
|||||||||||||||||||||
|||MA|D|||||MAD||||||||||||
80
80
75
70
65
60
MAD
GD
5000 10000 15000 20000
Tokens
GD
5000 10000 15000
Tokens
GD
5000 10000 15000 20000
Tokens
10000 20000
Tokens
Figure 10: Scaling Study of Token Cost.
**Intra-group Debate Rounds.** To explore the
impact of the number of intra-group debate
rounds, we conduct analysis under the condition of 4 agents and 4 rounds with varying numbers of intra-group debate rounds. As shown in
Figure 8, best accuracy can be achieved when
the number of intra-group debate rounds R is
2. This suggests that brief intra-group discussion can achieve better accuracy. Moreover, as
_R increases, the number of stages S decreases,_
resulting in lower token cost, which aligns with
our derived complexity formula.
**4.4** **Scaling Study**
Accuracy VS Intra-group Debate Rounds
22000
38
20000
36
18000
34 16000
ACC(%) 14000Tokens
32
12000
30 10000
1 2 3 4
Figure 8: Different Intra-group Debate Rounds.
The variations in accuracy are brought about by
different intra-group rounds R.
**Agent and Round Scaling.** In order to explore the influence of rounds and agents on ac- MMLU Accuracy
curacy under MAD and GD, we evaluate the 80 Round=2, MADRound=3, MAD
changing trends of accuracy for MAD and GD 75 Round=4, MAD
under various rounds and agents. As shown 70 Round=3, GD
in Figure 9, with the increase in rounds, there 6565.6 Round=5, GD
is a significant growth in accuracy, but when ACC (%)6061.2 COTReflection
rounds exceeds 4, a decrease in accuracy is ob- COT-SC(10)
served across different numbers of agents. This 5553.553.0 COT-SC(20)COT-SC(40)
reflects the phenomenon that limited increase 2 3 4 5 6 7 8
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Round=2, MAD Round=3, MAD|
|---|---|---|---|---|---|---|---|
||||||||Round=4, MAD Round=5, MAD Round=3, GD Round=4, GD Round=5, GD COT Reflection COT-SC(10) COT-SC(20) COT-SC(40)|
|||||||||
|||||||||
|||||||||
|||||||||
|||||||||
in rounds can enhance accuracy, but excessive Agent Numbers
debate rounds can lead to accuracy degradation. Figure 9: Scaling Study of Agents and Rounds.
As the number of agents increases, there is a
significant growth in accuracy, indicating that
an increase in agents can notably enhance the accuracy for both MAD and GD. Concurrently, it
should be noted that the rate of improvement in accuracy tends to gradually decelerate as the number
of agents continues to rise. The experimental results indicate the importance of controlling the
appropriate number of agents and rounds.
**Token Scaling.** We assess the scaling trends of token cost and accuracy under both MAD and GD
through increasing rounds or agents. First, as illustrated in Figure 10, with the increase in token
cost, both MAD and GD exhibit an overall upward trend in accuracy. And initially the accuracy
increases rapidly, but as the token cost becomes very large, the rate of accuracy growth slows down.
Moreover, in comparison between MAD and GD, GD consistently outperforms MAD with scaling of
tokens across all four datasets. While MAD’s accuracy tends to converge as the token cost becomes
exceedingly large, GD still potentially exhibits a growing trend. And we notice that GD has more
sharply increasing points, which may be indicative of emergent intelligence in the token scaling in
GD. It’s an intriguing research point to explore scaling laws about accuracy and efficiency within
multi-agent debate.
-----
**5** **Related Work**
**5.1** **LLM Reasoning**
Numerous research have explored to enhance the logical reasoning capabilities of LLMs. Chain-ofThought [35] is conducted in a manner that mirrors human thought processes when tackling complex
issues, utilizing a step-by-step approach. Tree-of-Thoughts [38] allows LLMs to determine their
next course of action by considering various reasoning paths and self-evaluation choices. Graphof-Thoughts [3] represents the nonlinear task resolution process of LLMs as an arbitrary graph,
where ideas are represented as vertices, and the dependencies between these ideas form the edges.
Additionally, the use of verification [20] and feedback recording are used to enhancement reasoning
capabilities. STaR [39] generates multiple chains of thought, from which effective ones are selected.
[28] involves creating a pool of CoT candidates and selecting the optimal candidate based on certain
conditions. [40] proposes a method for selecting the optimal prompt from the candidate set. Skeletonof-Thought [22] firstly generates skeleton of answer, followed by the parallel complete of content for
each point in the skeleton, thus accelerating answer generation. Table-of-Thoughts [16] enhances the
accuracy of reasoning through the structured modeling of the reasoning process. Self-Consistency
with CoT [34] samples a set of reasoning path and selects the most consistent answer.
**5.2** **Multi-agent Debate**
In multi-agent collaboration, the multi-agent debate approach has been demonstrated as an effective
orthogonal enhancement in logical reasoning. [19] proposes a Multi-Agent Debate (MAD) framework
that encourages divergent thinking in LLMs, where a judge manages the debate and obtain a final
solution. [36] focuses on common sense reasoning and conduct the debate align with real-world
scenarios. [9] utilizes debates among multiple agents to enhance accuracy, and investigates the impact
of the number of agents and rounds of debate on accuracy. [37] proposes a multi-agent collaboration
strategy that simulates the academic peer review process, allowing different models to correct each
other. It demonstrates that feedback exchange is superior to simple solution sharing. [33] integrates
a prior knowledge retrieval into the debate process, thereby enhancing reasoning capabilities. [10]
employs autonomous enhancement of negotiation strategies using a multi-round negotiation game
exploration model with two agents. [6] presents various communication strategies and evaluates
the effects of these differing approaches. Corex [29] employs collaborative methods such as debate,
review, and retrieve among multiple agents.
**6** **Limitations**
Although GroupDebate can bring about notable accuracy improvements on the MMLU and MATH
datasets, the first key limitation is that we have not delved into the underlying reasons and the
optimal settings of N and S. We only theoretically analyze the constraints of N and S required to
achieve optimal token cost complexity. However, determining the optimal values of N and S also
requires considering accuracy to maximize it under the same token cost, which is very complex. It
necessitates the integration of further evaluations and experiments to deduce the theoretical basis
for the enhancement of accuracy and optimal settings in GroupDebate. Furthermore, although
GroupDebate can significantly reduce token cost in muti-agent debates, its token cost is still higher
than single-agent methods like CoT. It is necessary to explore more ways to further reduce token cost
while ensuring high accuracy, which is crucial for their widespread application.
**7** **Conclusion**
In this paper, we investigate the token cost issue in multi-agent debates, a critical challenge that limits
the scalability of multi-agent debate. We propose a novel GroupDebate method, which leverages the
group discussion to mitigate this issue while fostering a diverse range of viewpoints. Specifically,
we divide all participating agents into several debate groups, where each agent can engage in both
intra-group debates and inter-group exchanges of ideas. Experimental results across four logical
reasoning datasets demonstrate GroupDebate can significantly reduce token cost as well as enhance
accuracy in multi-agent debates. In the future, we will further explore the theorem of how group
discussion can improve accuracy and theoretically determine the optimal settings in GroupDebate.
-----
**References**
[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni
Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4
technical report. arXiv preprint arXiv:2303.08774, 2023.
[2] Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos,
Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report.
_arXiv preprint arXiv:2305.10403, 2023._
[3] Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas
Gianinazzi, Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, et al. Graph
of thoughts: Solving elaborate problems with large language models. In Proceedings of the
_AAAI Conference on Artificial Intelligence, volume 38, pages 17682–17690, 2024._
[4] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are
few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
[5] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece
Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general
intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023.
[6] Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu,
and Zhiyuan Liu. Chateval: Towards better llm-based evaluators through multi-agent debate.
_arXiv preprint arXiv:2308.07201, 2023._
[7] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam
Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm:
Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1–
113, 2023.
[8] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to
solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
[9] Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate. arXiv preprint
_arXiv:2305.14325, 2023._
[10] Yao Fu, Hao Peng, Tushar Khot, and Mirella Lapata. Improving language model negotiation
with self-play and in-context learning from ai feedback. arXiv preprint arXiv:2305.10142, 2023.
[11] Taicheng Guo, Xiuying Chen, Yaqi Wang, Ruidi Chang, Shichao Pei, Nitesh V. Chawla, Olaf
Wiest, and Xiangliang Zhang. Large language model based multi-agents: A survey of progress
and challenges, 2024.
[12] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and
Jacob Steinhardt. Measuring massive multitask language understanding. _arXiv preprint_
_arXiv:2009.03300, 2020._
[13] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn
Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset.
_arXiv preprint arXiv:2103.03874, 2021._
[14] Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, et al. A survey on hallucination in
large language models: Principles, taxonomy, challenges, and open questions. arXiv preprint
_arXiv:2311.05232, 2023._
[15] Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang,
Andrea Madotto, and Pascale Fung. Survey of hallucination in natural language generation.
_ACM Computing Surveys, 55(12):1–38, 2023._
[16] Ziqi Jin and Wei Lu. Tab-cot: Zero-shot tabular chain of thought. _arXiv preprint_
_arXiv:2305.17812, 2023._
[17] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large
language models are zero-shot reasoners. Advances in neural information processing systems,
35:22199–22213, 2022.
-----
[18] Richard A Krueger. Focus groups: A practical guide for applied research. Sage publications,
2014.
[19] Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang,
Zhaopeng Tu, and Shuming Shi. Encouraging divergent thinking in large language models
through multi-agent debate. arXiv preprint arXiv:2305.19118, 2023.
[20] Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan
Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step. arXiv preprint
_arXiv:2305.20050, 2023._
[21] Hanmeng Liu, Ruoxi Ning, Zhiyang Teng, Jian Liu, Qiji Zhou, and Yue Zhang. Evaluating the
logical reasoning ability of chatgpt and gpt-4. arXiv preprint arXiv:2304.03439, 2023.
[22] Xuefei Ning, Zinan Lin, Zixuan Zhou, Huazhong Yang, and Yu Wang. Skeleton-of-thought:
Large language models can do parallel decoding. arXiv preprint arXiv:2307.15337, 2023.
[23] Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin,
David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, et al. Show
your work: Scratchpads for intermediate computation with language models. arXiv preprint
_arXiv:2112.00114, 2021._
[24] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin,
Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to
follow instructions with human feedback. Advances in neural information processing systems,
35:27730–27744, 2022.
[25] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language
understanding by generative pre-training. 2018.
[26] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al.
Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
[27] Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao.
Reflexion: Language agents with verbal reinforcement learning. Advances in Neural Information
_Processing Systems, 36, 2024._
[28] KaShun Shum, Shizhe Diao, and Tong Zhang. Automatic prompt augmentation and selection
with chain-of-thought from labeled data. arXiv preprint arXiv:2302.12822, 2023.
[29] Qiushi Sun, Zhangyue Yin, Xiang Li, Zhiyong Wu, Xipeng Qiu, and Lingpeng Kong. Corex:
Pushing the boundaries of complex reasoning through multi-model collaboration. arXiv preprint
_arXiv:2310.00280, 2023._
[30] Mikhail Terekhov, Romain Graux, Eduardo Neville, Denis Rosset, and Gabin Kolly. Secondorder jailbreaks: Generative agents successfully manipulate through an intermediary. In
_Multi-Agent Security Workshop@ NeurIPS’23, 2023._
[31] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open
and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
[32] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei,
Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open
foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
[33] Haotian Wang, Xiyuan Du, Weijiang Yu, Qianglong Chen, Kun Zhu, Zheng Chu, Lian Yan,
and Yi Guan. Apollo’s oracle: Retrieval-augmented reasoning in multi-agent debates. arXiv
_preprint arXiv:2312.04854, 2023._
[34] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha
Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language
models. arXiv preprint arXiv:2203.11171, 2022.
[35] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le,
Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models.
_Advances in neural information processing systems, 35:24824–24837, 2022._
[36] Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, and Bing Qin. Examining inter-consistency of
large language models collaboration: An in-depth analysis via debate. In The 2023 Conference
_on Empirical Methods in Natural Language Processing, 2023._
-----
[37] Zhenran Xu, Senbao Shi, Baotian Hu, Jindi Yu, Dongfang Li, Min Zhang, and Yuxiang Wu.
Towards reasoning in large language models via multi-agent peer review collaboration. arXiv
_preprint arXiv:2311.08152, 2023._
[38] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik
Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. Ad_vances in Neural Information Processing Systems, 36, 2024._
[39] Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. Star: Bootstrapping reasoning with
reasoning. Advances in Neural Information Processing Systems, 35:15476–15488, 2022.
[40] Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan,
and Jimmy Ba. Large language models are human-level prompt engineers. arXiv preprint
_arXiv:2211.01910, 2022._
-----
**A** **GroupDebate Algorithm**
The detailed GroupDebate Algorithm is as follows:
**Algorithm 1 GroupDebate Methods**
**Require: Number of groups N**, number of agents M, question Q, total rounds T, intra-group debate
round R, total stages S, answer extracter V OTE
**Ensure: Answer**
1: A [A1, A2, . . ., AM ] _△_ Initialize and shuffle the agents randomly
_←_
2: G [G1, G2, . . ., GN ] _△Initialize each group_
_←_
3: H [H1, H2, . . ., HM ] _△Initialize each agent with empty memory_
_←_
4: Summary [Summary1, Summary2, . . ., SummaryN ] _△Initialize summary_
_←_
pool of each group with empty list
5: for i = 1 to M do
6: _Hi_ [Q] _△_ Initialize memory of each agent
7: end for ←
8: for s = 1 to S do
9: **for j = 1 to N do**
10: **for t = (s −** 1)R + 1 to min(sR, T ) do
12:11: **forif A si = 1 ∈** _G andj do t = 1 then_
13: _hi_ _Ai(Hi)_ _△_ Utilize agents to generate responses in the first round
15:14: _HformatHii ← ← ←_ _HHii + + BUF hi_ _△△Append response to memoryAppend empty buffer to memory in order to uniform_
16: **else if s ̸= 1 and t = (s −** 1)R + 1 then
17: _hi_ _Ai(Hi)_ _△_ Utilize agents to generate responses in the first round of
each stage ←
18: _Hi[_ 2] _hi_ _△_ update the previous output
_−_ _←_
19: **else**
20: **for Ai′ ∈** _Gj and Ai′ ̸= Ai do_
21: _buf ←_ [ ]
22: _buf_ _buf + Replayi′_ _△_ aggregate outputs of other agents in the same
_←_
group
23: **end for**
24: _Hi[_ 1] _buf_ _△_ Append outputs of other agents in the same group
_−_ _←_
25: _hi_ _Ai(Hi)_ _△_ Utilize agents to generate responses using other agents’
outputs ←
26: _Hi[_ 2] _hi_ _△_ update the previous output
_−_ _←_
27: **end if**
28: **end for**
29: **end for**
30: **if s ̸= S then**
31: _summary ←_ [ ]
33:32: **forsummary Ai ∈** _Gj do_ _summary + Hi[_ 2]
_←_ _−_
34: **end for**
35: _Summaryj_ _LLM_ (summary) _△_ Utilize LLM to generate summary at
the end of each stage ←
36: **end if**
37: **end for**
38: **for i = 1 to M do**
39: _Hi[_ 1] _Summary_
_−_ _←_
40: **end for**
41: end for
42: Answer ← _V OTE(H)_
43: return Answer
-----
**B** **Token Cost Analysis**
In this appendix section, we aim to provide a theoretical analysis of the token cost for both MAD and
GD. As LLMs’ outputs typically are not too long and we can actually control the token length of
LLMs’ outputs in prompts to some extent, we assume that the upper bound on the number of tokens
output by each agent participating in debate is Outputmax and the upper bound on the number of
tokens in the generated summary is Summarymax. We define C as the maximum of Outputmax
and Summarymax.
**B.1** **Token Cost in MAD**
Here, we implement the MAD method, which summarizes the responses from other agents and inputs
all previous summaries for each agent in each round. The token cost includes both input and output
cost, and in each round t, it can be divided into two parts: summary generation Token[summary]t and
agents’ responses Token[t]. Thus, the total token cost Token[MAD] can be represented as:
_Token[MAD]_ = Token[1] +
(Token[summary]t−1 + Token[t]) (3)
_t=2_
X
Specifically, we provide a detailed description of the token cost for each part. (1) summary genera**tion: The token cost for each agent includes the output from other agents and output summary. (2)**
**agents’ responses: If t = 1, the token cost for each agent includes the initial question prompt and its**
own output. If t > 1, the token cost for each agent includes the current summary, its own output, and
the total token cost of all its previous inputs and outputs. The detailed computation process of the
token cost in MAD can be found in Algorithm 2.
**Algorithm 2 Token Cost in MAD Methods**
**Require: Number of groups N**, number of agents M, question length Q, total rounds T, output
length of each agent Ai(i = 1, 2, . . ., M ) in each round t(t = 1, 2, . . ., T ) Output[t]i[, the]
summary of the output without Ai in each round t(t = 1, 2, . . ., T 1) Summaryi[t]
_−_
**Ensure: Total token cost Token[MAD]**
1: Token[1] _←_ _M × Q +_ _i=1_ _[Output]i[1]_ _△_ First round token cost
2: for t = 2 to T do
3: _Token[summary]t−1_ _←_ [P][P]i[M][M]=1[(][P]i[′]≠ _i_ _[Output]i[t][′][−][1]_ + Summaryi[t][−][1]) _△_ Token cost in
summary stage
4: _Token[t]_ _←_ _Token[t][−][1]_ + _i=1[(][Summary]i[t][−][1]_ + Output[t]i[)] _△_ Token cost in
subsequent rounds in an iterative way
5: _Token[t]_ _i=1[(][P][t]t[−][′]=1[1]_ [(][Output][P][M] _i[t][′]_ [+][ Summary]i[t][′] [) +][ Q][ +][ Output]i[t][)] _△_ Token
_←_ [P][M]
cost in subsequent rounds
6: end for
7: Token[MAD] _Token[1]_ + _t=2[(][Token]t[summary]1_ + Token[t]) = _t=1_ _Mi=1[(][Q][ +][ Output]i[t][) +]_
_←_ _−_
_Mi=1_ _Tt=2_ _i[′]=i_ _[Output]i[t][′][−][1]_ + Summaryi[t][−][1] + _t[′]=1[(][Output]i[t][′]_ [+]P[ Summary]i[t][′] [)]
_̸_
_△_ Total token cost in debate[P][T] [P][T]
P P P
8: return Token[MAD] [P][t][−][1]
-----
Following the line 7 in Algorithm 2, with Output[t]i _Outputmax and Summaryi[t]_
_≤_
_Summarymax for every t and i, we can infer the following:_
_Token[MAD]_ = MTQ +
_Output[t]i_ [+] ( _Output[t]i[′][−][1]_ + Summaryi[t][−][1]) (4)
Xt=1 Xi=1 Xi=1 Xt=2 _iX[′]≠_ _i_
_t−1_
(Output[t]i[′] [+][ Summary]i[t][′] [)]
_t[′]=1_
X
_Output[t]i_ [+]
_i=1_
X
_t=1_
_i=1_
_t=2_
_i=1_ _t=2_
_MTQ + (_ [3]
_≤_ 2 _[M][ 2][T][ −]_ [3]2 _[M][ 2][ +][ M]_ [)][ ×][ Output][max]
+ (M [2]T + [1]
2 _[MT][ 2][ −]_ _[M][ 2][ −]_ 2[3] _[MT][ +][ M]_ [)][ ×][ Summary][max]
_< MTQ + 2M_ [2]T × Outputmax + (M [2]T + MT [2]) × Summarymax
Therefore, we can obtain Token[MAD] = O _MTQ + (M_ [2]T + MT [2])C .
**B.2** **Token Cost in GroupDebate**
As mentioned in Section 3.1, our GroupDebate includes three types of processes and thus the total
token cost Token[GD] can be further dividied into:
_S_ _S_ _min(sR,T )_
_Token[GD]_ = Token[1]1 + (Token[summary]s−1 + Token[(]s[s][−][1)][R][+1]) + _Token[t]s_
initial thinking Xs=2 Xs=1 _t=(sX−1)R+2_
inter-group debate intra-group debate
| {z }
(5)
| {z } | {z }
Specifically, for initial thinking, the token cost of each agent includes the initial question prompt and
its own output. For intra-group debate, the token cost of each agent includes all responses from other
agents within the same group in the previous round and its output. For inter-group debate, the token
cost of each agent includes the summary generation cost, which comprises the responses from other
groups and the output summary, as well as its own output. The detailed computation process of the
token cost in GroupDebate can be found in Algorithm 3.
Following Appendix B.1 and Eq. 5, we have:
_Token[GD]_ = MQ +
_Output[(]i[s][−][1)][R]_ + Summaryj[s][−][1]
_iX∈Gj_
_Output[1]i_ [+]
_i=1_ _s=2_
X X
_j=1_
(Q + Output[(]i[s][−][1)][R]
_i=1_
X
_S_ _min(sR,T )_ _N_
_Summaryj[s][−][1]_ + Output[(]i[s][−][1)][R][+1])]
_j=1_
X
+ (Q + Output[t]i [+] _Output[t]i[′][−][1]_
Xs=1 _t=(sX−1)R+2_ Xj=1 _iX∈Gj_ _i[′]X∈Gj_
_MTQ + [3MS_ 2M + (T _S)(K + 1)M_ ] _Outputmax_
_≤_ _−_ _−_ _×_
+ (S 1)(M + 1)N _Summarymax_
_−_ _×_
_MTQ + [2][M][ 2][T]_ _Outputmax + 2MSN_ _Summarymax_
_≤_ _N_ _×_ _×_
= _MTQ + (_ _[M][ 2][T]_ + MSN )C
_O_ _N_
(6)
It is worth noting that, when we set N _MTS_, theoretically, we can obtain Token[GD]
_→O_ _→_
_O_ _MTQ +_ _√M_ [3]TSC . Furthermore, if we consider settingq _S to a very small positive integer,_
-----
**Algorithm 3 Tokens Cost in GroupDebate Methods**
**Require: Number of groups N**, number of agents M, question length Q, total rounds T,
group debate round R, total stages S, summary of each group at the end of each stage
_Summary = {Summaryj[s][|][j][ = 1][,][ 2][, . . ., N, s][ = 1][,][ 2][, . . ., S][}][, output length of each agent]_
_Ai(i = 1, 2, . . ., M_ ) in each round t(t = 1, 2, . . ., T ) Output[t]i[, each group agents set]
_G =_ _Gj_ _j = 1, 2, . . ., N_
_{_ _|_ _}_
**Ensure: Total token cost Token[GD]**
1:2: Token for t = 2[1]1 _[←] to R[M] do[ ×][ Q][ +][ P]i[M]=1_ _[Output]i[1]_ _△_ First round token cost
3: _Token[t]1_ _j=1_ _i_ _Gj_ [(][Q][ +][ Output]i[t] [+][ P]i[′] _Gj_ _[Output][t]i[′][−][1])_ _△_ Token cost
_[←]_ [P][N] _∈_ _∈_
in subsequent rounds of the first stage
P
4: end for
5: for s = 2 to S do
6: _Token[summary]s−1_ _←_ [P]j[N]=1[(][P]i∈Gj _[Output][(]i[s][−][1)][R]_ + Summaryj[s][−][1]) _△_ Token cost
for summary at the end of stage s − 1
7: _Token[(]s[s][−][1)][R][+1]_ _←_ [P]i[M]=1[(][Q][ +][ Output][(]i[s][−][1)][R] + _j=1_ _[Summary]j[s][−][1]_ + Output[(]i[s][−][1)][R][+1])
_△_ Token cost in the first round of the stage s
8: **for t = (s** 1)R + 2 to min(sR, T ) do [P][N]
_−_
9: _Token[t]s_ _j=1_ _i_ _Gj_ [(][Q] [+] _[Output]i[t]_ [+] [P]i[′] _Gj_ _[Output]i[t][′][−][1])_ _△_ Token cost
_[←]_ [P][N] _∈_ _∈_
in subsequent rounds of the stage s
P
10: **end for**
11: end for
12: Token[GD] _t=1_ _[Token]1[t]_ [+][P][S]s=2[(][Token]s[summary]1 +[P][min(]t=(s[sR,T]1)R[ )]+1 _[Token]s[t]_ [)] _△_
_←_ [P][R] _−_ _−_
Total token cost in debate
13: return Token[GD]
then Token[GD] can approach O _MTQ +_ _√M_ [3]TC . This complexity is significantly lower than
that of MAD.
**C** **More Experimental Results**
**C.1** **Details about Main Reults**
In Section 4.2, we have shown the comparison of token cost and accuracy between GD and other
baseline methods. We further present the detailed experimental data in this section. Table 1 clearly
shows the percentage reduction in tokens and the increase in ACC compared to MAD. Table 2
presents the detailed data results compared to single-agent methods. The results suggest that GD can
significantly reduce token cost as well as futher enhance accuracy in muti-agent debates.
MAD MAD+Forget MAD+Group GD
80
30000
60
20000
40
Tokens ACC(%)
10000 20
0 0
(4,3) (4,4) (5,3) (5,4) (6,3) (6,4) (4,3) (4,4) (5,3) (5,4) (6,3) (6,4)
Figure 11: Ablation Study.
-----
Dataset Metric Method (4,3) (4,4) (5,3) (5,4) (6,3) (6,4)
MAD 8919 15132 12127 21165 15871 28205
Ours 7109 9603 9864 11640 12122 16450
∆(%) _↓20.3_ _↓36.5_ _↓18.7_ _↓45.0_ _↓23.6_ _↓41.7_
MAD 90 94 96 97 97 **98**
Ours 94 96 98 98 99 **100**
∆ _↑4_ _↑2_ _↑2_ _↑1_ _↑2_ _↑2_
MAD 10177 16282 13706 20991 17108 27590
Ours 7362 10241 9194 13612 11908 15823
∆(%) _↓27.7_ _↓37.1_ _↓32.9_ _↓35.1_ _↓30.4_ _↓42.6_
MAD 84 86 86 88 88 **90**
Ours 86 88 90 91 90 **92**
∆ _↑2_ _↑2_ _↑4_ _↑3_ _↑2_ _↑2_
MAD 12231 20110 16764 28650 22434 37020
Ours 8643 12379 11102 15685 14475 18282
∆(%) _↓29.3_ _↓38.4_ _↓33.8_ _↓45.3_ _↓35.5_ _↓50.6_
MAD 61 63 63 63 **64** **64**
Ours 74 76 78 78 78 **80**
∆ _↑13_ _↑13_ _↑15_ _↑15_ _↑14_ _↑16_
MAD 19949 30461 21609 40223 Exceed Exceed
Ours 9249 12760 14553 19410 15842 19736
∆(%) _↓53.6_ _↓58.1_ _↓32.7_ _↓51.7_ N/A N/A
MAD 33 35 35 **36** N/A N/A
Ours 34 38 38 40 40 **42**
Tokens
ACC (%)
Tokens
ACC (%)
Tokens
ACC (%)
Tokens
ACC (%)
**Arithmetic**
**GSM8K**
**MMLU**
**MATH**
∆ _↑1_ _↑3_ _↑3_ _↑4_ N/A N/A
Table 1: Detailed Results of GD and MAD under Different Agents and Rounds across Different
**Datasets. The best results are bold.**
**C.2** **Ablation Study**
In order to further investigate the impact of certain components in GD, we conduct a comparative
analysis of MAD, MAD+Forget (MAD with only preserving summaries from the previous round),
MAD+Group (MAD with group discussion) and GD. First, as illustrated in the Figure 11, GD
outperforms all MAD and its variants in token cost and accuracy, which shows the effectiveness of
involving both forget mechanism and group discussion in our method. Second, through comparing
MAD+Forget with MAD and GD with MAD+Group, the forget mechanism can effectively reduce
token cost while maintaining accuracy almost unchanged, which suggests that there is no need for
agents to remember all summary results. Third, MAD+Group, compared to MAD+Forget, reduces
a substantial number of tokens and significantly improves accuracy. This further highlights the
effectiveness of our proposed group discussion method. Based on the grouping strategy analyzed
previously, we hypothesize that the primary reason for the enhancement in accuracy is due to the
diversity preserved among the groups.
**D** **Prompts**
In this section, we present some examples of prompts. Table 3 displays the input prompts used in our
GroupDebate across different datasets, which encompass five different types. Table 4 outlines the
prompts regarding output format requirements in our GroupDebate.
-----
Dataset Method ACC(%) Prompt Tokens Total Tokens API Numbers
COT 50.2 ± 7.1 39.0 ± 0.0 119.1 ± 7.6 1
Reflection 76.0 ± 6.1 864.3 ± 26.5 1170.9 ± 43.3 4
**Arithmetic** COT-SC(40) 96.1 ± 3.0 1560.0 ± 0.0 4910.9 ± 113.4 40
MAD 96.2 ± 3.1 9153.7 ± 189.3 12127.4 ± 245.4 25
GroupDebate(Ours) **98.1 ± 2.1** 7290.7 ± 58.8 9864.7 ± 85.1 17
COT 76.1 ± 6.0 102.1 ± 2.1 233.8 ± 9.8 1
Reflection 76.6 ± 5.2 1164.7 ± 47.1 1379.1 ± 65.4 4
**GSM8K** COT-SC(40) 90.1 ± 4.2 4083.2 ± 84.4 9380.0 ± 381.8 40
MAD 86.7 ± 4.9 11281.6 ± 421.5 13706.5 ± 552.9 25
GroupDebate(Ours) **90.4 ± 4.0** 7169.9 ± 132.3 9194.9 ± 212.9 17
COT 53.0 ± 7.1 136.4 ± 12.4 239.4 ± 15.3 1
Reflection 53.5 ± 7.0 1217.2 ± 61.3 1471.2 ± 71.8 4
**MMLU** COT-SC(40) 67.1 ± 6.7 5456.3 ± 495.2 10058.7 ± 670.4 40
MAD 63.8 ± 7.1 13067.5 ± 726.7 16764.9 ± 958.6 25
GroupDebate(Ours) **78.3 ± 6.0** 8922.6 ± 291.4 11602.7 ± 389.3 17
COT 20.5 ± 7.1 93.9 ± 6.1 518.4 ± 77.3 1
Reflection 22.4 ± 6.0 1865.9 ± 162.7 2457.3 ± 222.4 4
**MATH** COT-SC(40) 33.4 ± 8.6 3758.7 ± 242.8 17958.3 ± 1588.2 40
MAD 35.3 ± 8.1 17340.4 ± 1276.6 21609.6 ± 1554.2 25
GroupDebate(Ours) **38.4 ± 8.0** 10701.3 ± 557.3 14553.6 ± 811.9 17
Table 2: Detailed Results about Comparison between GD and Single-agent Methods. GroupDebate and MAD here utilize 5 agents and 3 rounds. The best accuracy results are bold and the standard
deviation is also presented.
**Type** Task **Prompt**
_Welcome to the debate! You are a seasoned debater with expertise in succinctly and persuasively expressing your viewpoints._
_You will be assigned to debate groups, where you will engage in discussions with fellow participants. The outcomes of_
_each group’s deliberations will be shared among all members. It is crucial for you to leverage this information effectively_
_in order to critically analyze the question at hand and ultimately arrive at the correct answer. Best of luck!_
System All
Arithmetic _What is the result of {}+{}*{}+{}-{}*{}? <Output format>._
GSM8K _Can you solve the following math problem? <Problem> Explain your reasoning. <Output format>._
MMLU _Can you answer the following question as accurately as possible? : A), B), C), D) Explain your answer, <Output format>._
MATH _Can you solve the following math problem? <Problem> Explain your reasoning as concise as possible.<Output format>._
Starting
_These are the recent opinions from other agents: <other agent responses> Using the opinions_
Intra-group Debate All
_carefully as additional advice, can you provide an updated answer?_
_Examine your solution and that other agents step by step. <Output format>._
_These are the recent/updated opinions from all agents: <all agent responses>_
Summary All _Summarize these opinions carefully and completly in no more than 80 words._
_Aggregate and put your final answers in parentheses at the end of your response._
_These are the recent opinions from all groups: Your group response: < group summary>, Other group responses:_
Inter-group Debate All _<other group summary>. Using the reasoning from all groups as additional advice, can you give an updated answer?_
_Examine your solution and that all groups step by step. <Output format>._
Table 3: Prompts in Each Stage. List of prompts used in each task.
Dataset Output format requirements
Arithmetic _Make sure to state your answer at the end of the response._
_Your final answer should be a single numerical number, in the form \boxed{{answer}},_
GSM8K
_at the end of your response._
MMLU _Put your final choice in parentheses at the end of your response._
MATH _Put your final answer in the form \boxed{{answer}}, at the end of your response._
Table 4: Output Format Requirements in Each Dataset.
-----
| [
"Tongxuan, Liu",
"Xingyu, Wang",
"Jing, Li",
"Weizhe, Huang",
"Wenjiang, Xu",
"Yuting, Zeng",
"Lei, Jiang",
"Hailong, Yang"
] | 2024-09-21T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.14051 | https://arxiv.org/abs/2409.14051 | https://www.semanticscholar.org/paper/dbca56618b2359085e8ce37b47c4903ff80c71ef |
HDFlow: Enhancing LLM Complex Problem-Solving with Hybrid Thinking and Dynamic Workflows | Despite recent advancements in large language models (LLMs), their performance on complex reasoning problems requiring multi-step thinking and combining various skills is still limited. To address this, we propose a novel framework HDFlow for complex reasoning with LLMs that combines fast and slow thinking modes in an adaptive manner. Our approach consists of two key components: 1) a new approach for slow, deliberate reasoning called Dynamic Workflow, which automatically decomposes complex problems into more manageable sub-tasks and dynamically designs a workflow to assemble specialized LLM or symbolic reasoning tools to solve sub-tasks; 2) Hybrid Thinking, a general framework that dynamically combines fast and slow thinking based on problem complexity. Finally, we propose an easy-to-scale method for automatically synthesizing a large-scale dataset of 27K challenging reasoning problems for complex reasoning and a hybrid thinking tuning method that trains smaller LLMs on this dataset to internalize the fast/slow hybrid reasoning strategies. Experiments on four reasoning benchmark datasets demonstrate that our slow thinking with dynamic workflows significantly outperforms Chain-of-Thought, and hybrid thinking achieves the highest accuracy while providing an effective balance between computational efficiency and performance. Fine-tuning using our hybrid thinking approach also significantly boosts the complex reasoning capabilities of open-source language models. The results showcase the promise of slow thinking, dynamic workflows, and hybrid thinking in expanding the frontier of complex problem-solving with LLMs\footnote{Code and data will be released at \url{https://github.com/wenlinyao/HDFlow}.}. | A novel framework HDFlow for complex reasoning with LLMs that combines fast and slow thinking modes in an adaptive manner and hybrid thinking achieves the highest accuracy while providing an effective balance between computational efficiency and performance is proposed. | ## HDFLOW: ENHANCING LLM COMPLEX PROBLEM- SOLVING WITH HYBRID THINKING AND DYNAMIC WORKFLOWS
**Wenlin Yao, Haitao Mi, Dong Yu**
Tencent AI Lab
Bellevue, WA 98004, USA
_{wenlinyao,haitaomi,dyu}@global.tencent.com_
ABSTRACT
Despite recent advancements in large language models (LLMs), their performance
on complex reasoning problems requiring multi-step thinking and combining various skills is still limited. To address this, we propose a novel framework HDFlow
for complex reasoning with LLMs that combines fast and slow thinking modes
in an adaptive manner. Our approach consists of two key components: 1) a new
approach for slow, deliberate reasoning called Dynamic Workflow, which automatically decomposes complex problems into more manageable sub-tasks and
dynamically designs a workflow to assemble specialized LLM or symbolic reasoning tools to solve sub-tasks; 2) Hybrid Thinking, a general framework that dynamically combines fast and slow thinking based on problem complexity. Finally,
we propose an easy-to-scale method for automatically synthesizing a large-scale
dataset of 27K challenging reasoning problems for complex reasoning and a hybrid thinking tuning method that trains smaller LLMs on this dataset to internalize
the fast/slow hybrid reasoning strategies. Experiments on four reasoning benchmark datasets demonstrate that our slow thinking with dynamic workflows significantly outperforms Chain-of-Thought, and hybrid thinking achieves the highest
accuracy while providing an effective balance between computational efficiency
and performance. Fine-tuning using our hybrid thinking approach also significantly boosts the complex reasoning capabilities of open-source language models.
The results showcase the promise of slow thinking, dynamic workflows, and hybrid thinking in expanding the frontier of complex problem-solving with LLMs[1].
1 INTRODUCTION
Large language models (LLMs) have demonstrated remarkable capabilities across a wide range of
tasks, from code generation and mathematical reasoning to natural language understanding and generation. However, their performance on complex reasoning problems that require multi-step thinking and various skills is still limited. Recent advancements in symbolic reasoning and tool usage,
such as AlphaCode (Li et al., 2022; AlphaCode Team), AlphaGeometry (Trinh et al., 2024), and
AlphaProof (AlphaProof/AlphaGeometry teams), have shown significant improvements in specific
domains by integrating LLMs with specialized procedures and symbolic reasoning engines. Various prompting strategies, such as Chain-of-Thought (CoT) (Wei et al., 2022), Tree of Thoughts
(ToT) (Yao et al., 2024), and Graph of Thoughts (GoT) (Besta et al., 2024a), have been developed to
enable different reasoning topologies to enhance LLM problem-solving capabilities. Despite these
advancements, enhancing the reasoning abilities of LLMs to solve challenging problems across diverse domains in a unified framework remains crucial for expanding their real-world applicability.
Existing methods for complex reasoning with LLMs have several limitations. First, complex
problem-solving often requires combining various knowledge domains, skills, and tool usage. While
previous approaches such as AlphaCodium (Ridnik et al., 2024) and Alphageometry (Trinh et al.,
2024) have demonstrated the potential of combining language models and symbolic reasoning to
[1Code and data will be released at https://github.com/wenlinyao/HDFlow.](https://github.com/wenlinyao/HDFlow)
-----
solve complex problems, they rely on manually designed workflows tailored to specific domains
(i.e., competitive programming or geometry theorem proving). The language model and symbolic
engine take predefined turns in a rigid problem-solving process. This limits the applicability and
adaptability of these systems to broader domains. Thus, we aim to enhance the generic problemsolving capabilities of LLMs by dynamically alternating between natural language reasoning in the
“text space” and symbolic reasoning in the “symbolic space” based on the problem at hand. This
dynamic integration of the two reasoning modes enables the system to address a much broader range
of problems and adapt the problem-solving process to the unique requirements of each task. Second,
traditional approaches to complex problem-solving with LLMs often rely on a single mode of thinking, which may struggle with more intricate tasks that demand a deliberate, analytical approach. For
example, many approaches employ a fixed reasoning strategy, such as CoT prompting, regardless
of the problem’s complexity. For instance, OpenAI’s most recent o1 model[2] only engages in a singular deep thinking mode despite the complexity of the user’s query. This can lead to suboptimal
performance on tasks that require a more deliberate, multi-step approach. While multi-agent frameworks such as AutoGPT (Significant Gravitas), ReAct Yao et al. (2022), and AutoGen (Wu et al.,
2023) have addressed some aspects of this challenge by enabling recursive goal decomposition, interleaving reasoning and acting, and state-driven workflows, they do not fully exploit the potential of
thinking approaches that can switch between intuitive thinking and more analytical thinking modes
based on problem complexity. Finally, as problem complexity increases, the performance of existing approaches tends to degrade significantly, highlighting the need for frameworks that can scale
to handle even the most challenging reasoning problems. Recently, OpenAI o1 model (OpenAI)
demonstrates the potential to consistently improve LLM performance of complex reasoning with
compute scaling in inference-time through deep thinking.
To address these limitations, we propose a novel framework for complex reasoning with LLMs
that combines fast (system I) and more analytical slow thinking (system II) adaptively, inspired
by the dual process theory of human cognition (Daniel, 2017). Our approach consists of two key
components. First, we introduce a new approach for slow, deliberate reasoning called Dynamic
**Workflow, which automatically decomposes complex problems into more manageable sub-tasks. It**
then dynamically designs a workflow to assemble specialized LLM or symbolic tools to solve each
sub-task. To achieve this, the dynamic workflow orchestrates a team of specialized LLM experts,
each contributing unique domain knowledge or tool usage, to solve the sub-tasks in a structured
manner. Second, we propose Hybrid Thinking, a general framework that dynamically combines
fast and slow thinking based on problem complexity. For simpler tasks, the model defaults to a
fast-thinking mode using CoT strategy. When the model’s confidence in the fast thinking output is
low, it automatically switches to slow thinking with dynamic workflow, allowing for more efficient
and more accurate problem-solving. Finally, to train local LLMs for complex reasoning, we present
an easy-to-scale method for automatically synthesizing a large-scale dataset of 27K challenging
reasoning problems and propose a hybrid thinking tuning approach that finetunes open-source LLMs
on this dataset, enabling them to internalize the fast/slow hybrid reasoning strategies.
We conduct experiments on four reasoning benchmark datasets (i.e., BBH (Suzgun et al., 2022),
MATH (Hendrycks et al., 2021), Game of 24 Yao et al. (2024), DeepMind Math (Saxton et al.,
2019). Experiments using GPT-4-Turbo reveal that slow thinking with dynamic workflows significantly outperformed CoT, with an average accuracy improvement of 22.4%. Hybrid thinking,
which combines fast and slow thinking, achieved the highest accuracy on three of the four datasets.
While slow thinking required the most inference tokens, hybrid thinking struck an effective balance
between computational efficiency and performance. Furthermore, fine-tuning Llama-3-8B-Instruct
using hybrid thinking significantly boosted performance across all datasets compared to the original
model. Hybrid thinking after fine-tuning yielded accuracy gains of 10-23% over CoT prompting,
with broad improvements across different subject areas in MATH. Overall, the results demonstrate
the promise of slow thinking with dynamic workflows and hybrid thinking in enhancing the complex
problem-solving abilities of LLMs.
2o1-preview model tested on Sept.24, 2024. o1-preview model thinks for a few seconds to users’ casual
conversational queries such as How are you?
-----
2 RELATED WORK
**Symbolic Reasoning and Tool Usage. Bridging LLMs with symbolic reasoning and tool usage**
has demonstrated significant improvements across various domains. AlphaCode (Li et al., 2022;
AlphaCode Team) combines LLMs with a specialized search and reranking mechanism, achieving
top-tier performance in competitive programming. Similarly, AlphaCodium (Ridnik et al., 2024)
improves AlphaCode’s performance by applying a predefined multi-stage process of problem analysis, solution generation, and iterative testing and bug fixing. By using an evolutionary search
procedure guided by an LLM, FunSearch (Romera-Paredes et al., 2024) can discover new mathematical constructions and algorithmic heuristics. AlphaGeometry (Trinh et al., 2024) leverages a
neuro-symbolic system trained on synthetic data to guide a symbolic deduction engine, achieving
near-expert performance in geometry theorem proving. Chain of Code (Li et al., 2024) encourages
LLMs to write pseudocode for challenging sub-problems, which is then executed by the LM itself
when it cannot be handled by a standard interpreter. These approaches rely on carefully designing
when and how to integrate symbolic reasoning for each task domain.
**Prompting Strategies. Various prompting strategies have been developed to enable different rea-**
soning topologies (Besta et al., 2024b) for enhancing LLM problem-solving capabilities. Chainof-Thought (CoT) prompting (Wei et al., 2022) first introduced the concept of generating intermediate reasoning steps to improve performance on complex tasks. Building upon this, the Tree
of Thoughts (ToT) (Yao et al., 2024) enables the exploration of multiple potential reasoning paths
and incorporates deliberate decision-making through self-evaluation and backtracking. Graph of
Thoughts (GoT) (Besta et al., 2024a), models LLM-generated information as an arbitrary graph
where thoughts are vertices and dependencies are edges, allowing for more complex reasoning
structures and outcomes. In a different direction, Program of Thoughts (PoT) approach (Chen et al.,
2022) disentangles computation from reasoning by expressing the reasoning process as a program,
with external computation handling numerical operations. SELF-DISCOVER (Zhou et al., 2024)
introduces a self-discovery process where LLMs autonomously select and compose multiple atomic
reasoning modules into explicit reasoning structures. Our hybrid thinking approach allows for the
efficient resolution of tasks within the LLM’s core capabilities through direct reasoning, while adaptively engaging in deeper, multi-step workflows for more complex problems.
**Multi-Agent Frameworks for Task-Solving. Recent advancements also led to the development of**
various frameworks for complex task-solving and multi-agent collaboration. AutoGPT (Significant
Gravitas) pioneered the idea of using LLMs for recursive goal decomposition and task completion,
where sub-tasks are then performed sequentially to yield a larger result. ReAct (Yao et al., 2022) introduced a method for interleaving reasoning and acting, allowing LLMs to generate both reasoning
traces and task-specific actions. Reflexion (Shinn et al., 2024) further enhanced language agents’
capabilities by incorporating verbal reinforcement learning, enabling them to reflect on feedback
and improve decision-making. MetaGPT (Hong et al., 2024) addressed the challenge of LLM hallucination in multi-agent systems by incorporating human workflows and standardized operating
procedures into the framework. AutoGen (Wu et al., 2023) presented a flexible multi-agent conversation framework that allows for customizable, conversable agents with human participation.
CAMEL (Li et al., 2023) introduced a role-playing approach to facilitate autonomous cooperation
among communicative agents. Finally, StateFlow (Wu et al., 2024) proposed a state-driven workflow that conceptualizes complex task-solving processes as state machines, enhancing control and
interpretability. In contrast to these existing works, our approach uniquely integrates hybrid thinking, combining fast and slow thinking modes with automatic workflows, to enhance LLMs’ ability
to tackle complex reasoning problems more effectively and with greater adaptability.
3 OVERVIEW OF THE HYBRID THINKING APPROACH
Our hybrid thinking approach (Figure 1) combines the strengths of fast and slow thinking modes
to enable LLMs to more effectively solve complex reasoning problems. It consists of the following
three key components. 1) Fast Thinking with Direct CoT. In the fast thinking mode, the LLM uses
a direct chain of thought (CoT) approach to quickly solve the task query if possible. This leverages
the LLM’s core abilities to perform certain types of reasoning efficiently by directly generating the
rationale and the final answer. 2) Adaptive Combination of Fast and Slow Thinking. Next, we
employ a self-verification mechanism where the LLM examines each step of the fast-thinking CoT
-----
**Final Answer**
Yes Yes
Verify Each No Dynamic
CoT Solver Verify Answer
Reasoning Step Workflow Solver
No (retry) **Slow**
**Fast Thinking** **Thinking**
Figure 1: Overview of our HDFlow approach for complex problem-solving. Overall, it is a dualpath hybrid thinking approach, beginning with a CoT solver for initial fast reasoning followed by
verification of each reasoning step. If verification fails, the process transitions to a slower, more
deliberate ”Dynamic Workflow Solver.” This solver iterates until a verified answer is obtained, incorporating a final verification step before concluding with a solution.
this self-verification process, it triggers a switch to the slow-thinking mode. 3) Slow Thinking with
**Slow Thinking with Dynamic Workflow**
**Stage 1: Problem** **Stage 2:** Experts with specialties **Stage 3: Graph**
- Linguist
**Reflection** **Workflow Design** - Mathematician **Construction and**
- Data Scientist **Execution**
Analyze Key Elements Design Experts - …
Experts with tool usage Workflow
Identify Sub-tasks Workflow Arrangement - Python (pseudocode)
- Symbolic Engine
- …
Figure 2: Three-Stage Framework of Dynamic Workflow. The dynamic workflow design begins
with Problem Reflection, where key elements are analyzed and sub-tasks identified. Stage 2 focuses
on Expert Design, utilizing a variety of specialists and tools to architect an optimal workflow. Stage
3 involves constructing and executing the workflow graph to get the final result.
reasoning to assess its confidence in the generated answer. This is achieved by applying the LLM
to analyze the coherence, logical consistency, and correctness of each reasoning step in the context
of the given query. If the LLM detects any inconsistencies, errors, or low-confidence steps during
**Dynamic Workflow. To tackle highly complex tasks, we propose a novel slow-thinking mechanism**
called Dynamic Workflow (Figure 2), which automatically decomposes the original task into subtasks and dynamically switches between verbal reasoning and symbolic reasoning to solve each subtask. Our approach starts with multi-level problem reflection and decomposition. It then designs a
workflow to assemble specialized LLM skills or symbolic tools for sub-tasks. Next, we dynamically
chain together the sub-task reasoning steps into a multi-step workflow and execute the workflow.
Finally, all sub-task results are aggregated into the final answer to the original query. We will present
details in Section 4.
By first attempting fast thinking, our hybrid thinking approach can efficiently handle queries that are
within the LLM’s core capabilities. When the query exceeds what fast thinking alone can confidently
handle, the hybrid thinking will smoothly transition to a slow thinking workflow to enable the LLM
to tackle a broader range of challenges accurately.
4 SLOW THINKING WITH DYNAMIC WORKFLOW
In contrast to the rapid responses of fast thinking (e.g., CoT), our new slow-thinking mechanism
applies dynamic workflow to enable a more deliberate, analytical approach to complex problemsolving (see Figure 2). It allows an LLM to dynamically transition between reasoning in the text
space (natural language reasoning) and the symbolic space (symbolic reasoning). The high-level
idea is we first let the LLM decompose the original reasoning problem into several more manageable
sub-tasks and solve each sub-task to form the final solution. When necessary, the LLM Engine will
translate the sub-problem from the text space into the symbolic space, enabling the symbolic engine[3]
3In this paper, we mainly use program to achieve symbolic reasoning.
-----
to perform precise symbolic reasoning. The results are then mapped back into natural language using
the LLM Engine. By decomposing the problem, combining the strengths of both natural language
and symbolic reasoning in a tailored workflow, and executing it from start to finish, LLMs can
tackle very hard problems that require multiple steps of accurate reasoning. Appendix B presents a
complete example solution using our dynamic workflow approach and compares with the solution
using OpenAI o1-preview. Prompts used are listed in Appendix C.
4.1 BREAKING DOWN COMPLEXITY: PROBLEM ANALYSIS AND DECOMPOSITION (STAGE 1)
The first step in our slow thinking is problem analysis and planning. We aim to break down the
original problem statement into more manageable sub-tasks. Specifically, the LLM is asked to
analyze the key elements of the query, such as available information, constraints, and the desired
output. It then identifies logical sub-goals needed to progress from the initial state to the solution.
This decomposition allows the LLM to approach the problem in a structured manner, focusing on
one part at a time. Therefore, the LLM can catch gaps in reasoning and handle complex problems
that the fast thinking of CoT alone would struggle with.
**Problem Reflection. The first step in tackling complex problems is conducting a thorough problem**
reflection. This involves the LLM analyzing the original problem and restating it in its own words
to demonstrate understanding. Our problem reflection includes two parts: 1) Identifying the core
objective or question posed by the problem. 2) Recognizing any constraints, assumptions, or special
conditions mentioned. By internalizing the problem through reflection, the LLM can gain a solid
understanding of what needs to be accomplished before proceeding to decomposition.
**Subtask Decomposition. Once the problem is well understood, the LLM is instructed to perform**
a multi-level decomposition to break it down into some tractable sub-problems. The LLM is asked
to follow four principles to achieve an optimal decomposition. Sequential dependency. The subproblems are organized in a logical sequence, such that the outputs of earlier steps feed into subsequent ones, creating a structured workflow from start to finish. Non-overlapping. Each sub-problem
represents a distinct portion of the original problem, with no duplication of work between subproblems. This keeps the overall solution efficient. Proper Decomposition. The sub-problems are
decomposed to the optimal level of granularity - not so small that there are too many to track and
coordinate, but not so large that they are still struggling to solve. Modular. Where appropriate,
sub-problems are defined in a generalizable, modular way, such that the logic and code used to solve
them can potentially be reused to solve similar problems in other contexts.
**Integrating Symbolic Reasoning. Another key aspect of our approach is leveraging the symbolic**
engines to modularize the solution and handle well-defined sub-tasks more accurately. For example,
some sub-tasks in the decomposition can often be addressed by writing code functions. Therefore,
we explicitly instruct the LLM to consider sub-tasks that can be well handled by writing and executing modular code in subtask decomposition.
4.2 ORCHESTRATING EXPERTISE: WORKFLOW DESIGN (STAGE 2)
With the problem decomposed into sub-tasks, our approach next proposes a team of specialized
experts, each contributing unique skills and tools, arranged in a dynamic workflow. The central
component is a Meta-Expert, initialized from the foundation LLM, designs the expert team, and
coordinates their efforts. The orchestration process consists of four steps.
1. Design of Experts. Based on the identified sub-tasks, the Meta-Expert designs a team of
specialized experts with one expert solving one sub-task. Each expert is assigned a unique
name and a clear description of their specific skills, knowledge, and responsibilities[4]. The
dynamic workflow leverages two types of experts to handle each sub-task, enabling a seamless integration of verbal and symbolic reasoning. The first type are specialized experts
initiated from LLMs, such as linguists, mathematicians, and data scientists. These experts
bring domain-specific knowledge and skills to the workflow, allowing for sophisticated
verbal reasoning and analysis within their fields. The second type of expert focuses on
4Our implementation leverages JSON for efficient data management and extraction across the system.
-----
symbolic reasoning, particularly using programming or other symbolic engines[5]. For example, some sub-tasks can often be addressed by writing compact, targeted code functions.
This allows the LLM to handle common operations such as mathematical calculations, data
parsing and manipulation, and so on without bringing errors.
2. Workflow Arrangement. The Meta-Expert arranges the experts into an efficient workflow
sequence. Each expert’s output serves as the input for the next, progressively moving
towards the final solution. The Meta-Expert ensures there is no redundancy of functions
across experts.
3. Collaboration and Iteration. As the experts work through the problem, the Meta-Expert
facilitates collaboration and puts together their inputs and outputs. For sub-tasks involving
logical reasoning, mathematical operations, data structures, or programming, the MetaExpert provides strategic guidance and sends the implementation details to the corresponding symbolic reasoning experts. These experts utilize LLMs to generate code, which is
then executed to perform symbolic reasoning in Stage 3.
4. Final Review and Conclusion. The last expert in the workflow, often an LLM specialist,
is tasked with holistically reviewing the findings of the previous experts and generating the
final answer to the original problem.
By combining the power of specialized LLMs and the usage of tools into a thoughtfully designed,
adaptable workflow, our approach can tackle complex problems that are beyond the capabilities of
the original model. The Meta-Expert serves as the intelligent connector, analyzing the unique needs
of each problem and dynamically assembling the optimal workflow. Our approach creates a bridge
between natural language reasoning and rule-governed symbolic reasoning.
4.3 FLOW EXECUTION: CONSTRUCTING AND RUNNING WORKFLOWS (STAGE 3)
With the workflow graph generated, our approach finally proceeds to execute the graph to get the
final result. The execution follows the dependency order, ensuring the correct flow of data between
experts. To ensure robust execution, if any of the generated code encounters errors, the corresponding symbolic reasoning experts will trace the issue, use the error message to repair the code, and
rerun it. As the workflow progresses, the downstream experts continually update their memory with
the intermediate results and insights generated by previous experts. Upon completion of the workflow execution, the last LLM expert analyzes the results, identifies key findings, and summarizes
them into a final answer to the original problem. The workflow execution is not a one-time process.
The LLM continually assesses the quality and correctness of the final generated solutions and identifies potential errors. It engages in iterative rerun by applying a different problem decomposition,
expert assignments, or adjusting the workflow structure.
5 MODEL TUNING OF HYBRID THINKING
In our experiments, we observed that open-source language models (typically those with around
7B parameters) often struggle with advanced meta-planning and problem-solving skills required for
solving difficult reasoning tasks. To address this limitation and develop local smaller models with
hybrid thinking abilities comparable to the large models, we construct a comprehensive training
dataset and propose hybrid thinking tuning to improve the complex reasoning abilities of local models. We define “local” models as models that can be trained and deployed on local hardware with
limited computational resources, such as the Llama-3 model (Meta, 2024). Our goal is to improve
the complex reasoning abilities of these local models through our proposed approach.
The primary challenge lies in constructing a large-scale dataset of reasoning problems that are sufficiently diverse, high-quality, and difficult. Such a dataset is crucial for teaching smaller local models
to perform complex reasoning tasks. However, manually curating such a dataset presents significant
difficulties in ensuring a wide range of problem domains and maintaining high standards in problem
formulation. As a result, it is extremely time-consuming and expensive to ask human experts to
5We mainly use Python code interpreter as the symbolic engine in our experiments, but our approach can
be extended to other symbolic engines, such as the symbolic deduction engines used in AlphaGeometry (Trinh
et al., 2024) to solve Euclidean geometry problems.
-----
**Step 1** **Step 2**
**Step 3**
Write 3 reasoning Apply CoT to
problems based on validate problems
the task description
Output: Reasoning Problems
Generate 10 new tasks
inspired by 10 tasks sampled
from 214 human-written tasks
Brainstorm 10
puzzle tasks
Deduplication
Output: Reasoning Task Descriptions
Figure 3: Data Synthesis of Complex Reasoning Problems. The creation and refinement of reasoning
problems contain three steps. Step 1 involves brainstorming and generating high-level descriptions
of new reasoning tasks, either inspired by human-written tasks or directly writing puzzle tasks. Step
1 produces 45K descriptions of reasoning tasks. Step 2 performs semantic matching and deduplication and results in 18K reasoning task descriptions. The final Step 3 writes concrete questions based
on task descriptions and applies a CoT validation process to filter or refine the tasks down to 27k
valid reasoning problems.
**Interpret a Morse Code Message**: Given a string of Morse code, translate it into English text,
adhering to standard Morse code conventions. The task involves recognizing each sequence of dots
(.) and dashes (-) as letters and spaces as separators for words.
A Morse code sequence has been found etched into an old artifact. It is believed to be a significant
mathematical formula. The Morse code is: ‘-. .. -. . - -.– / - .... .-. . . / - .. – . ... / ... . ...- . -. - -.–
/ ..-. .. ...- . / . –.- ..- .- .-.. ... / — -. . / .... ..- -. -.. .-. . -.. / .- -. -.. / - .– . -. - -.– / - .... .-. . .‘.
Decode this Morse code into English text, adhering to the standard Morse code conventions where
sequences of dots (.) and dashes (-) represent letters, and spaces are used to separate words.
**Cryptarithm Task: Solve the Equation**: In this cryptarithm, each letter represents a unique digit
from 0-9: **CROSS + ROADS = DANGER** No number may begin with zero. Determine the
digit each letter represents to satisfy the equation.
In a game of spies, two teams use different substitution ciphers to communicate. Team A uses a
cipher where each letter is replaced by the letter three positions to the right in the alphabet (with
wrapping), while Team B uses a cipher where each letter is replaced by the letter four positions
to the left (with wrapping). During the game, a message encrypted using Team B’s cipher was
intercepted: “XLMW MW XLI GIRXVI.” Decode this message assuming it was meant for Team A
but encrypted by Team B.
Figure 4: Three example reasoning problems generated by our data synthesis approach.
consistently generate problems meeting all criteria. Therefore, we propose a novel approach for automatically generate a variety of reasoning problems and collect solutions of hybrid thinking, which
can then be used to train our local LLMs.
5.1 REASONING PROBLEMS SYNTHESIS
To enhance reasoning task diversity and coverage, our data synthesis pipeline consists of three steps
(Figure 3). In the first step, we strategically leverage human-authored seed tasks to inspire the
creation of new reasoning problems (similar to Self-Instruct (Wang et al., 2023)) or let the LLM
brainstorm reasoning puzzles that cover a variety of task formats, difficulty levels, and problem
domains. This step only focuses on generating high-level task descriptions to encourage diversity.
In the second step, we apply deduplication to remove near-identical tasks. Finally, we apply LLMs
again to write three specific problems based on the task descriptions and validate those problems.
**Task Generation Inspired by Seed Tasks. The first step of our reasoning data synthesis pipeline is**
generating an expanded set of reasoning tasks. We augment the few-shot prompts with 10 high-level
task descriptions randomly sampled from the 214 BigBench tasks (Srivastava et al., 2022). Next,
-----
we employ the 10 seed tasks as in-context examples to prompt LLMs[6] to generate 10 new tasks
inspired by seed tasks. To encourage additional diversity in the generated tasks, we also let the LLM
to brainstorm different genres of puzzles, such as crossword puzzles, math puzzles, number puzzles,
relational puzzles, logic puzzles, etc. By repeating two strategies, we produce an expanded pool of
45K candidate reasoning tasks that creatively cover diverse reasoning types and scenarios.
**Data Filtering and Deduplication. The previous task generation step produces a sizable pool**
of candidate reasoning tasks. However, the generated data is likely to contain duplicate or highly
similar entries. To address this, we employ a comprehensive data filtering and deduplication process.
First, we apply n-gram to identify nearly identical tasks. Next, we filter out any tasks or problems
that fail to meet our quality criteria, such as insufficient complexity (e.g., trivial one-step questions),
or ambiguity in the description by prompting GPT-4-Turbo. This helps ensure that only high-quality,
unambiguous reasoning tasks are retained in the final dataset. Through this rigorous deduplication
and filtering process, we condense the pool of 45K generated tasks down to 18K deduplicated tasks.
**Reasoning Problem Synthesis. In the last step, we aim to synthesize multiple concrete reasoning**
problems for each of the 18K tasks produced by the previous task generation and deduplication
steps. Taking each task’s description as input, we prompt an LLM to generate 3 distinct questions or
problems that test the specified reasoning skill. This enables us to turn each high-level task into a set
of actual solvable questions, resulting in a pool of 54k reasoning problems. To ensure the generated
problems are well-posed and solvable, we employ a chain-of-thought (CoT) based validation step.
We prompt GPT-4-Turbo to apply CoT to each synthesized problem and analyze if the resulting
reasoning steps coherently lead to a definite answer. Problems for which the model fails to converge
to a clear solution or exhibits inconsistent reasoning are filtered out. This results in the final 27K
reasoning problems. Figure 4 provides three examples of reasoning problems generated.
5.2 FINETUNING OPEN-SOURCE MODELS ON SYNTHESIZED DATA
To prepare the training data for enhancing the open-source models’ complex problem-solving abilities, we utilize the GPT-4-turbo model to collect reasoning trajectories on the dataset of synthesized
and mathematical problems. For each problem, GPT-4-turbo generates one or several fast/slow
reasoning trajectories using the hybrid thinking approach. Each reasoning trajectory consists of
a sequence of (query, answer) pairs representing the model’s step-wise hybrid thinking process.
Therefore, we use all (query, answer) pairs from the reasoning trajectories to construct the training data, capturing the complete problem-solving process. When multiple reasoning trajectories are
produced (iterative retry), only the solution trajectory that passes the verification process is retained
in the training set to optimize the model’s problem-solving capabilities, while the verification results
for all trajectories are kept to enhance the model’s self-verification abilities.
The Llama-3 models have demonstrated superior performance compared to other models of similar
size due to significant enhancements in both pretraining and post-training (Meta, 2024). Therefore,
we choose the Llama-3-8B-Instruct model as the foundation model for our hybrid thinking tuning
experiments. Specifically, The Llama-3-8B-Instruct model was fine-tuned using 8 A100 GPUs with
bf16 precision[7]. The training utilized a global batch size of 128, spanning 4 epochs. The model
employed the AdamW optimizer of a learning rate of 2.0e-5, with a maximum sequence length of
4096 tokens and a maximum of 2048 new tokens generated.
6 EXPERIMENT
6.1 REASONING BENCHMARK DATASETS
**BIG-Bench Hard (BBH) (Suzgun et al., 2022): A subset of 27 challenging tasks from the BIG-**
Bench benchmark (Srivastava et al., 2022), which aims to measure the capabilities and limitations
of language models across diverse text-based tasks. MATH (Hendrycks et al., 2021): A dataset
consisting of 5,000 test problems from mathematics competitions. These problems assess the mathematical problem-solving ability and often require the application of problem-solving techniques
6We use both GPT-4-0125 and Claude-3-Opus to encourage diversity. We find Claude-3-Opus does generate
very different reasoning tasks compared with GPT-4-0125.
7We adopt LitGPT (AI, 2023) in our model training.
-----
|Methods|BBH MATH DeepMind Math GameOf24|Avg.|
|---|---|---|
|CoT (Fast Think.) Slow Think. Hybrid Think.|77.8 62.6 53.4 9.3 87.1 (+9.3) 67.6 (+4.6) 67.7 (+14.3) 70.3 (+61.0) 87.8 (+10.0) 70.0 (+7.9) 59.6 (+6.2) 72.0 (+62.7)|50.8 73.2 (+22.4) 72.4 (+21.6)|
|---|---|---|
Table 1: Accuracy (%) of GPT-4-Turbo-0125 across different reasoning modes on various datasets.
We show the accuracy of the model using Chain of Thought (CoT) v.s. slow thinking (with dynamic
workflow) and Hybrid Thinking approaches proposed by us. The Fast/Slow indicates the ratio of
Fast and Slow Thinking contributions in the Hybrid approach. Results are derived from the top 100
instances for each sub-category in BBH (27 sub-tasks), MATH (7 sub-domains), and GameOf24 (3
difficulty levels) to reduce API cost and ensure replicability. For the DeepMind Math dataset, the
top 10 instances from each of the 56 sub-domains were used.
|Methods|BBH MATH DeepMind Math GameOf24|Avg. Tokens|
|---|---|---|
|CoT (Fast Think.) Slow Think. Hybrid Think.|351 992 581 387 3227 5694 3562 5246 1299 4398 1742 4983|577.8 4432.0 3105.5|
|---|---|---|
Table 2: Average number of inference tokens of GPT-4-Turbo-0125 using different reasoning modes
on various datasets. Performance is reported in Table 1.
and heuristics beyond standard K-12 mathematics tools. Game of 24 (Yao et al., 2024): A mathematical reasoning challenge dataset containing 1,362 games sorted by human solving time. The
goal is to use four given numbers and basic arithmetic operations (+ - * /) to obtain 24. DeepMind
**Math (Saxton et al., 2019): A dataset consisting of various types of mathematics questions, released**
with both generation code and pre-generated questions. This dataset provides an additional measure
of algebraic generalization abilities.
6.2 RESULTS BASED ON PROMPTING
We first conduct experiments by prompting GPT-4-Turbo-0125[8] to achieve three reasoning modes:
Chain of Thought (CoT), Slow Thinking with Dynamic Workflow, and Hybrid Thinking across four
benchmark datasets. Table 1 shows that slow thinking with dynamic workflow significantly outperforms CoT by 22.4% on average across four benchmarks. It also reveals that Hybrid Thinking
achieves the best accuracy on three datasets BBH, MATH and GameOf24. Notably, both Slow
Thinking and Hybrid Thinking consistently outperform CoT across all datasets, with the most dramatic improvements seen in GameOf24, where gains are 61.0% and 62.7% respectively.
Table 2 illustrates the average number of inference tokens used by each method. CoT consistently
used the fewest tokens (average 577.8), while Slow Thinking required the most (4432.0 on average).
Hybrid Thinking struck a balance with an average of 3105.5 tokens. A clear trade-off emerged between computational efficiency and performance, with CoT using the fewest tokens but achieving
the lowest accuracy. Hybrid Thinking demonstrated a good balance, achieving high accuracy with
moderate token usage. These findings suggest that incorporating dynamic workflows and combining fast and slow thinking processes can enhance the reasoning capabilities of LLMs, with Hybrid
Thinking emerging as a particularly promising approach.
6.3 RESULTS OF HYBRID THINKING TUNING
We next compare the performance of the original Llama-3-8B-Instruct model and the model after our
hybrid thinking tuning. As shown in Table 3, the Llama-3-8B-Instruct model after hybrid thinking
tuning significantly outperforms the baseline model on all datasets. Examining the different thinking
modes, hybrid thinking consistently provided the best tradeoff between performance and efficiency.
Compared to the CoT baseline, hybrid thinking improved accuracy by 10.6%, 10.2%, 23.1% and
[8https://platform.openai.com/docs/models. A full list of prompts can be found in Ap-](https://platform.openai.com/docs/models)
pendix C.
-----
|Methods|BBH MATH DeepMind Math GameOf24|Avg.|
|---|---|---|
**Llama-3-8B-Instruct (Original)**
|CoT|51.7 30.0 18.6 2.7|25.8|
|---|---|---|
**Llama-3-8B-Instruct (After Hybrid Thinking Tuning)**
|CoT (Fast Think.) Slow Think. Hybrid Think.|58.5 (+6.8) 37.0 (+7.0) 34.2 (+15.6) 5.1 (+2.4) 61.2 (+9.5) 37.8 (+7.8) 48.8 (+30.2) 15.4 (+12.7) 62.3 (+10.6) 40.2 (+10.2) 41.7 (+23.1) 16.0 (+13.3)|33.7 (+7.9) 40.8 (+15.0) 40.5 (+14.7)|
|---|---|---|
Table 3: Performance comparison of the original Llama-3-8B-Instruct model and the Llama-3-8BInstruct after our hybrid thinking tuning. We show the accuracy (%) of the model using Chain
of Thought (CoT) v.s. slow thinking (with dynamic workflow) and Hybrid Thinking approaches
proposed by us. The Fast/Slow indicates the ratio of Fast and Slow Thinking contributions in the
Hybrid approach. Results are derived from all test instances in BBH, MATH, DeepMind Math and
GameOf24.
|Methods|BBH MATH DeepMind Math GameOf24|Avg. Tokens|
|---|---|---|
**Llama-3-8B-Instruct (Original)**
|CoT|356 496 359 510|430.2|
|---|---|---|
**Llama-3-8B-Instruct (After Hybrid Thinking Tuning)**
|CoT (Fast Think.) Slow Think. Hybrid Think.|720 985 770 1384 3901 5743 4395 6714 2521 4414 2577 6371|964.7 5188.2 3970.7|
|---|---|---|
Table 4: Average number of inference tokens of the original Llama-3-8B-Instruct model and the
Llama-3-8B-Instruct after our hybrid thinking tuning on various datasets. Performance is reported
in Table 3.
**Fast Think.** **Slow Think.** **Fast Think.** **Slow Think.**
0.6 **0.86** 0.6 **0.85**
**Fast Think.** **Slow Think.**
1
**0.19** **0.26**
0.8 **0.42**
0.6 **0.86**
0.4 **0.81** **0.74**
**0.58**
0.2
**0.14**
0
**BBH** **MATH** **DEEPMIND** **GAMEOF24**
**MATH**
**Fast Think.** **Slow Think.**
1.0
0.8 **0.36** **0.34**
**0.49**
0.6 **0.85**
0.4
**0.64** **0.66**
**0.51**
0.2
**0.15**
0.0
**BBH** **MATH** **DEEPMIND** **GAMEOF24**
**MATH**
Figure 5: Proportion of fast thinking (CoT) and slow thinking (dynamic workflow) applied in hybrid
thinking across four datasets. The left is GPT-4-Turbo (performance is shown in Table 1), while the
right is Llama-3-8B-Instruct after our hybrid thinking tuning (Table 3).
13.3% on the BBH, MATH, DeepMind Math and GameOf24 datasets respectively. Interestingly,
we also observe that hybrid thinking tuning enhances Llama-3’s fast thinking (CoT) performance
across all reasoning tasks at the cost of increased model inference tokens.
Table 5 breaks down performance on the MATH dataset into specific subject areas. Again, the
Llama-3-8B-Instruct model after hybrid thinking tuning outperforms the original model on all subsets, with gains ranging from 8% on intermediate Algebra to 23% on Number Theory. Hybrid
thinking yielded the highest accuracy in each domain, demonstrating its broad applicability.
-----
|MATH Subsets|Llama-3-8B-Ins.|Llama-3-8B-Ins. (After Hybrid Thinking Tuning)|Col4|Col5|
|---|---|---|---|---|
||CoT|CoT (Fast Think.)|Slow Think.|Hybrid Think. Fast/Slow|
|Prealgebra Algebra Number Theory Count. and Prob. Geometry Precalculus Inter. Algebra|43.2% 30.2% 15.0% 21.1% 13.4% 12.5% 9.1%|58.9% 53.6% 31.1% 32.5% 24.8% 22.0% 15.6%|59.7% 52.7% 37.6% 34.2% 23.6% 21.8% 16.3%|63.3% 0.69/0.31 56.1% 0.68/0.32 38.0% 0.52/0.48 35.9% 0.48/0.52 26.3% 0.33/0.67 24.5% 0.35/0.65 17.3% 0.30/0.70|
|---|---|---|---|---|
Table 5: Accuracy comparison of the original Llama-3-8B-Instruct model and the Llama-3-8BInstruct after our hybrid thinking tuning on different domains of the MATH dataset. “Count. and
Prob.” and “Inter. Algebra” represents “Counting and Probability” and “Intermediate Algebra”.
6.4 FAST/SLOW ROUTING ANALYSIS
Figure 5 illustrates the proportion of fast thinking and slow thinking (orange) approaches applied by
both models when solving complex problems across the datasets. The GPT-4-Turbo model demonstrates a higher reliance on fast thinking for BBH, DeepMind MATH, and Game of 24 tasks compared with Llama-3-8B-Instruct model. This observation can be attributed to the fact that GPT-4Turbo’s fast thinking (in the form of CoT) is more reliable and effective compared to Llama-3-8BInstruct. As a result, hybrid thinking in GPT-4-Turbo tends to apply more fast thinking since it is
sufficient to achieve a correct solution in many cases. In contrast, Llama-3-8B-Instruct after tuning exhibits a greater reliance on slow thinking strategies, particularly in complex tasks, where fast
thinking alone may not yield the desired results. This highlights the importance of hybrid thinking to
improve problem-solving efficiency, suggesting that our method can dynamically adjust the optimal
balance between fast and slow thinking based on the model’s downstream reasoning capabilities.
In summary, the dynamic combination of fast and slow thinking modes greatly enhanced the model’s
problem-solving capabilities. Our results showcase the potential of hybrid thinking approaches to
expand the frontier of what LLMs can achieve on challenging tasks.
7 DISCUSSION AND FUTURE WORK
**Limitations and Potential Improvements. One promising direction is to incorporate a value net-**
work that scores the successfulness or quality of completing each sub-task within the dynamic workflow. By integrating such a value network, we can formulate the problem-solving process as a reinforcement learning task, enabling the optimization and search for the best solution trajectory. This
enhancement could lead to more efficient and effective problem-solving strategies, as the model
learns to prioritize and select the most promising decompositions and workflows based on predicted
values.
**Generalization to Other Reasoning Tasks. Constructing high-quality and sufficiently challeng-**
ing reasoning problems for training still remains a significant challenge. While our data synthesis
approach offers a scalable solution, ensuring the validity and difficulty of each generated reasoning
problem is crucial for effective model development. One potential improvement is to involve human experts in the data synthesis process, allowing them to verify, modify, and curate the generated
problems.
**Integration with Symbolic Reasoning Systems. Our dynamic workflow approach seamlessly inte-**
grates specialized language models and symbolic reasoning tools, enabling LLMs to tackle complex
problems more effectively. However, there is significant potential to extend this integration to more
advanced symbolic reasoning systems, such as Lean[9] for mathematical theorem proving or other
domain-specific tools. Moreover, integrating our approach with tools such as search engines and
web browsers could enable LLMs to access and utilize external resources, further amplifying their
problem-solving abilities to broader applications. By incorporating more powerful tools into the
dynamic workflow, we can expand the range of problems that LLMs can solve.
9https://lean-lang.org/
-----
8 CONCLUSION
This paper introduces a novel framework HDFlow for enhancing the complex problem-solving
capabilities of LLMs through hybrid thinking and dynamic workflows. The dynamic workflow
mechanism enables LLMs to decompose complex problems into manageable sub-tasks and integrate specialized language models and symbolic reasoning tools, while hybrid thinking strategically
engages deeper, multi-step reasoning for challenging problems that exceed the capabilities of fast
thinking alone. Extensive experiments demonstrate the significant advantages of our approach, with
slow thinking with dynamic workflow greatly outperforming CoT and hybrid thinking achieving the
highest overall accuracy by balancing efficiency and performance.
REFERENCES
[Lightning AI. Litgpt. https://github.com/Lightning-AI/litgpt, 2023.](https://github.com/Lightning-AI/litgpt)
Google DeepMind AlphaCode Team. Alphacode 2 technical report. [URL https:](https://storage.googleapis.com/deepmind-media/AlphaCode2/AlphaCode2_Tech_Report.pdf)
[//storage.googleapis.com/deepmind-media/AlphaCode2/AlphaCode2_](https://storage.googleapis.com/deepmind-media/AlphaCode2/AlphaCode2_Tech_Report.pdf)
[Tech_Report.pdf.](https://storage.googleapis.com/deepmind-media/AlphaCode2/AlphaCode2_Tech_Report.pdf)
Google DeepMind AlphaProof/AlphaGeometry teams. Ai achieves silver-medal standard solving international mathematical olympiad problems. [URL https://deepmind.google/](https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level/)
[discover/blog/ai-solves-imo-problems-at-silver-medal-level/.](https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level/)
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, et al. Graph of
thoughts: Solving elaborate problems with large language models. In Proceedings of the AAAI
_Conference on Artificial Intelligence, volume 38, pp. 17682–17690, 2024a._
Maciej Besta, Florim Memedi, Zhenyu Zhang, Robert Gerstenberger, Nils Blach, Piotr Nyczyk,
Marcin Copik, Grzegorz Kwa´sniewski, J¨urgen M¨uller, Lukas Gianinazzi, et al. Topologies of
reasoning: Demystifying chains, trees, and graphs of thoughts. arXiv preprint arXiv:2401.14295,
2024b.
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint
_arXiv:2211.12588, 2022._
Kahneman Daniel. Thinking, fast and slow. 2017.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,
and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. NeurIPS,
2021.
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Jinlin Wang, Ceyao
Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao,
Chenglin Wu, and J¨urgen Schmidhuber. MetaGPT: Meta programming for a multi-agent collaborative framework. In The Twelfth International Conference on Learning Representations, 2024.
[URL https://openreview.net/forum?id=VtmBAGCN7o.](https://openreview.net/forum?id=VtmBAGCN7o)
Chengshu Li, Jacky Liang, Andy Zeng, Xinyun Chen, Karol Hausman, Dorsa Sadigh, Sergey
Levine, Li Fei-Fei, Fei Xia, and Brian Ichter. Chain of code: Reasoning with a language modelaugmented code emulator. In Ruslan Salakhutdinov, Zico Kolter, Katherine Heller, Adrian Weller,
Nuria Oliver, Jonathan Scarlett, and Felix Berkenkamp (eds.), Proceedings of the 41st Interna_tional Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Re-_
_search, pp. 28259–28277. PMLR, 21–27 Jul 2024._ [URL https://proceedings.mlr.](https://proceedings.mlr.press/v235/li24ar.html)
[press/v235/li24ar.html.](https://proceedings.mlr.press/v235/li24ar.html)
Guohao Li, Hasan Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. Camel: Communicative agents for ”mind” exploration of large language model society. Advances in Neural
_Information Processing Systems, 36:51991–52008, 2023._
-----
Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, R´emi Leblond, Tom
Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. Competition-level code generation
with alphacode. Science, 378(6624):1092–1097, 2022.
AI Meta. Introducing meta llama 3: The most capable openly available llm to date. Meta AI, 2024.
OpenAI. Learning to reason with llms. URL [https://openai.com/index/](https://openai.com/index/learning-to-reason-with-llms/)
[learning-to-reason-with-llms/.](https://openai.com/index/learning-to-reason-with-llms/)
Tal Ridnik, Dedy Kredo, and Itamar Friedman. Code generation with alphacodium: From prompt
engineering to flow engineering. arXiv preprint arXiv:2401.08500, 2024.
Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Matej Balog,
M Pawan Kumar, Emilien Dupont, Francisco JR Ruiz, Jordan S Ellenberg, Pengming Wang,
Omar Fawzi, et al. Mathematical discoveries from program search with large language models.
_Nature, 625(7995):468–475, 2024._
David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematical reasoning abilities of neural models. arXiv preprint arXiv:1904.01557, 2019.
Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion:
Language agents with verbal reinforcement learning. Advances in Neural Information Processing
_Systems, 36, 2024._
[Significant Gravitas. AutoGPT. URL https://github.com/Significant-Gravitas/](https://github.com/Significant-Gravitas/AutoGPT)
[AutoGPT.](https://github.com/Significant-Gravitas/AutoGPT)
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam
Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adri`a Garriga-Alonso, et al. Beyond the
imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint
_arXiv:2206.04615, 2022._
Mirac Suzgun, Nathan Scales, Nathanael Sch¨arli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung,
Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou,, and Jason Wei. Challenging bigbench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261,
2022.
Trieu H Trinh, Yuhuai Wu, Quoc V Le, He He, and Thang Luong. Solving olympiad geometry
without human demonstrations. Nature, 625(7995):476–482, 2024.
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and
Hannaneh Hajishirzi. Self-instruct: Aligning language models with self-generated instructions. In
_Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume_
_1: Long Papers), pp. 13484–13508, 2023._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny
Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in
_neural information processing systems, 35:24824–24837, 2022._
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun
Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, and
Chi Wang. Autogen: Enabling next-gen llm applications via multi-agent conversation framework.
2023.
Yiran Wu, Tianwei Yue, Shaokun Zhang, Chi Wang, and Qingyun Wu. Stateflow: Enhancing llm
task-solving through state-driven workflows. arXiv preprint arXiv:2403.11322, 2024.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao.
React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629,
2022.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik
Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. Ad_vances in Neural Information Processing Systems, 36, 2024._
-----
Pei Zhou, Jay Pujara, Xiang Ren, Xinyun Chen, Heng-Tze Cheng, Quoc V Le, Ed H Chi, Denny
Zhou, Swaroop Mishra, and Huaixiu Steven Zheng. Self-discover: Large language models selfcompose reasoning structures. arXiv preprint arXiv:2402.03620, 2024.
-----
A APPENDIX
B DYNAMIC WORKFLOW EXAMPLE SOLUTION
In this section, we demonstrate how our dynamic workflow approach works by presenting an example solution to a reasoning problem. We also present the solution provided by the OpenAI o1preview model to compare with our approach. We can see the solution produced by our approach
contains many sub-goals (sub-tasks), which is very similar to the o1-preview solution. Our approach
also leverages the symbolic tool - Python to enhance the accuracy of the final answer. The reasoning
problem is generated using our data synthesis method.
B.1 REASONING PROBLEM:
An encrypted Morse code message contains a list of three items that were
crucial to a historical event. The Morse code is: ‘- . .- / -.-. --..-. ..-. . . / ... ..- --. .- .-.‘. Decode this Morse code message
into English text, keeping in mind the standard Morse code
conventions where each sequence of dots (.) and dashes (-)
corresponds to a letter, and spaces separate words.
B.2 SOLUTION USING DYNAMIC WORKFLOW:
**Stage 1. Problem Reflection**
===== Meta-Expert@REFLECTION =====
### Problem Reflection start ###
**[Task Description]**[: Decode a given Morse code message into English text]
.
**[Elements of the Task]**[:]
1. The message is encoded in Morse code, which uses a series of dots (.)
and dashes (-) to represent letters and spaces to separate words.
2. Decoding involves translating each Morse code sequence into its
corresponding English letter or symbol.
3. The Morse code provided is a list of three items related to a
historical event, which implies the output should be meaningful and
correctly formatted.
**[Sub-tasks Identification]**[:]
1. **Morse Code Dictionary Creation**: Establish a dictionary in Python
where keys are Morse code sequences (dots and dashes) and values are
corresponding English letters. This will facilitate the translation
process.
2. **Parsing the Morse Code**: Split the input Morse code string into
individual Morse code sequences for letters and words. This involves
handling spaces correctly to differentiate between letters and word
boundaries.
3. **Translation Process**: For each Morse code sequence obtained from
the parsing step, use the Morse code dictionary to translate it into
the corresponding English letter.
4. **Reconstruction of Message**: Combine the translated letters into
words and then into the complete message. This step ensures that the
output is a readable English sentence.
5. **Output Formatting**: Format the final output to ensure it is clear
and presentable, adhering to proper English syntax and spacing.
### Problem Reflection end ###
**Stage 2. Workflow Design**
-----
===== Meta-Expert@EXPERTS_DESIGN =====
### Specialized Experts Design start ###
**[Morse Code Dictionary Creation Expert]**[: Specializes in creating a]
comprehensive dictionary that maps Morse code symbols to English
letters. This expert will ensure that all Morse code sequences are
accurately represented with their corresponding English characters,
which is crucial for the decoding process. Expert card (in JSON
format): {"Name": "Morse Code Dictionary Creation Expert", "
Expert_Type": "LLM", "Input_Type": "None", "Output_Type": "Dict[str,
str]"}
**[Morse Code Parsing Expert]**[: Skilled in breaking down the Morse code]
string into individual Morse sequences for letters and words. This
expert will handle the input Morse code string, correctly identifying
the separation between letters and words, which is essential for
accurate translation. Expert card (in JSON format): {"Name": "Morse
Code Parsing Expert", "Expert_Type": "LLM", "Input_Type": "str", "
Output_Type": "List[List[str]]"}
**[Python Expert of Translation]**[: Utilizes Python to map Morse code]
sequences to English letters using the dictionary provided by the
Morse Code Dictionary Creation Expert. This expert will translate
each Morse code sequence into the corresponding English letter,
ensuring that the translation is accurate and adheres to the Morse
code standards. Expert card (in JSON format): {"Name": "Python Expert
of Translation", "Expert_Type": "Tool", "Input_Type": "List[List[str
]], Dict[str, str]", "Output_Type": "List[str]"}
**[Message Reconstruction Expert]**[: Specializes in reconstructing the]
translated English letters into coherent words and sentences. This
expert ensures that the translated letters are correctly combined to
form meaningful words and that these words are arranged into a
readable and grammatically correct sentence. Expert card (in JSON
format): {"Name": "Message Reconstruction Expert", "Expert_Type": "
LLM", "Input_Type": "List[str]", "Output_Type": "str"}
**[Final Review and Presentation Expert]**[: Responsible for reviewing the]
reconstructed message to ensure it meets the clarity and accuracy
standards. This expert will also format the final message for
presentation, ensuring that it is clear, concise, and ready for
delivery as the solution to the problem. Expert card (in JSON format)
: {"Name": "Final Review and Presentation Expert", "Expert_Type": "
LLM", "Input_Type": "str", "Output_Type": "str"}
### Specialized Experts Design end ###
**Stage 3. Graph Construction and Execution**
===== Morse Code Dictionary Creation Expert =====
### Approach Explanation Start ###
To solve the assigned sub-task of creating a comprehensive Morse code
dictionary, I will map each Morse code symbol (comprising dots ‘.‘
and dashes ‘-‘) to its corresponding English letter. This dictionary
is essential for the subsequent decoding process, as it allows for
the translation of Morse code sequences into readable English text.
**[Steps to Create the Morse Code Dictionary]**[:]
1. **Identify Morse Code Symbols**: List all Morse code symbols for
English alphabet letters (A-Z) and numerals (0-9) if necessary.
2. **Map to English Letters**: Each Morse code symbol will be associated
with its corresponding English letter. This mapping is based on the
standard Morse code conventions.
-----
3. **Dictionary Format**: The dictionary will be structured in Python
dictionary format, where keys are Morse code symbols and values are
the corresponding English letters.
This dictionary will be used by the next expert in the workflow, the
Morse Code Parsing Expert, who will parse the Morse code message into
individual symbols using this dictionary for translation.
### Approach Explanation End ###
### My Final Output Start ###
{
"A": ".-",
"B": "-...",
"C": "-.-.",
"D": "-..",
"E": ".",
"F": "..-.",
"G": "--.",
"H": "....",
"I": "..",
"J": ".---",
"K": "-.-",
"L": ".-..",
"M": "--",
"N": "-.",
"O": "---",
"P": ".--.",
"Q": "--.-",
"R": ".-.",
"S": "...",
"T": "-",
"U": "..-",
"V": "...-",
"W": ".--",
"X": "-..-",
"Y": "-.--",
"Z": "--..",
"1": ".----",
"2": "..---",
"3": "...--",
"4": "....-",
"5": ".....",
"6": "-....",
"7": "--...",
"8": "---..",
"9": "----.",
"0": "-----"
}
### My Final Output End ###
===== Morse Code Parsing Expert =====
### Approach Explanation Start ###
To solve the task of parsing the Morse code string into individual Morse
sequences for letters and words, I will follow these steps:
1. **Understanding Morse Code Structure**: Morse code uses dots (.) and
dashes (-) to represent letters, and spaces are used to separate
different Morse sequences. A single space indicates a separation
between letters within the same word, and a sequence of three spaces
(represented as " / ") indicates a separation between words.
2. **Splitting the Input String**: The input Morse code string will be
split based on the " / " sequence to differentiate between words.
-----
This results in a list where each element is a Morse code sequence
representing a word.
3. **Further Splitting for Letters**: Each word obtained from the
previous step will be further split using a single space as the
delimiter. This will break down the word into individual Morse code
sequences, each representing a letter.
4. **Constructing the Output**: The final output will be a list of lists,
where each sublist represents a word and contains Morse code
sequences for each letter in that word.
This structured approach ensures that the Morse code is accurately parsed
into its constituent letters and words, preparing it for translation
in the subsequent steps of the workflow.
### Approach Explanation End ###
### My Final Output Start ###
[[’- . .-’, ’-.-. --- ..-. ..-. . .’, ’... ..- --. .- .-.’]]
### My Final Output End ###
===== Python Expert of Translation =====
### Python Code Start ###
def translate_morse_code(morse_words, morse_dict):
translated_words = []
for word in morse_words:
translated_word = []
for letter in word.split():
# Using next() with a default value to avoid StopIteration
translated_letter = next((key for key, value in morse_dict.
items() if value == letter), ’?’)
translated_word.append(translated_letter)
translated_words.append(’’.join(translated_word))
return translated_words
morse_words = [[’- . .-’, ’-.-. --- ..-. ..-. . .’, ’... ..- --. .
.-.’]]
morse_dict = {
"A": ".-",
"B": "-...",
"C": "-.-.",
"D": "-..",
"E": ".",
"F": "..-.",
"G": "--.",
"H": "....",
"I": "..",
"J": ".---",
"K": "-.-",
"L": ".-..",
"M": "--",
"N": "-.",
"O": "---",
"P": ".--.",
"Q": "--.-",
"R": ".-.",
"S": "...",
"T": "-",
"U": "..-",
"V": "...-",
"W": ".--",
"X": "-..-",
"Y": "-.--",
-----
"Z": "--..",
"1": ".----",
"2": "..---",
"3": "...--",
"4": "....-",
"5": ".....",
"6": "-....",
"7": "--...",
"8": "---..",
"9": "----.",
"0": "-----"
}
for words in morse_words:
translated = translate_morse_code(words, morse_dict)
print(translated)
### Python Code End ###
### Python Execution Result Start ###
[’TEA’, ’COFFEE’, ’SUGAR’]
### Python Execution Result End ###
===== Message Reconstruction Expert =====
### My Approach Explanation Start ###
As the Message Reconstruction Expert, my primary role is to take the
translated English words provided by the Python Expert of Translation
and construct them into a coherent and grammatically correct
sentence. The translated words are ’TEA’, ’COFFEE’, and ’SUGAR’.
These words represent items, and the context suggests they are a list
of crucial items related to a historical event.
Given this context, my task is to format these words into a readable
English sentence that clearly communicates the items as a list. The
appropriate format for presenting such a list in English would
typically involve separating the items with commas and using the
conjunction "and" before the last item. This ensures the sentence is
both grammatically correct and easy to understand.
### My Approach Explanation End ###
### My Final Output Start ###
Tea, coffee, and sugar
### My Final Output End ###
===== Final Review and Presentation Expert =====
### Explanation of Approach Start ###
As the Final Review and Presentation Expert, my primary role is to ensure
that the reconstructed message from the Message Reconstruction
Expert is clear, accurate, and well-presented. The message provided
is "Tea, coffee, and sugar," which lists three items crucial to a
historical event.
**[Steps in my approach:]**
1. **Review Accuracy**: Check if the message accurately reflects the
translated Morse code. The previous experts have translated and
reconstructed the message correctly as "Tea, coffee, and sugar" from
the Morse code ‘- . .- / -.-. --- ..-. ..-. . . / ... ..- --. ..-.‘.
2. **Review Clarity**: Ensure the message is clear and understandable.
The use of commas and the conjunction "and" appropriately lists the
items, making the sentence coherent and easy to read.
-----
3. **Presentation**: Format the message to ensure it is ready for
presentation. This involves checking for any grammatical errors,
proper capitalization, and punctuation.
Given the outputs from the previous experts, the message "Tea, coffee,
and sugar" is already well-constructed. My task focuses on confirming
these aspects and presenting the final output.
### Explanation of Approach End ###
### My Final Output Start ###
Tea, coffee, and sugar
### My Final Output End ###
===== Meta-Expert@FINAL_JUDGEMENT =====
### Analysis Start ###
**[Problem Recap]**[: The task was to decode a Morse code message that lists]
three items crucial to a historical event. The Morse code provided
was ‘- . .- / -.-. --- ..-. ..-. . . / ... ..- --. .- .-.‘.
**[Workflow Review]**[:]
1. **Morse Code Dictionary Creation**: The dictionary was correctly
established, mapping each Morse code sequence to its corresponding
English letter. This is foundational for accurate translation.
2. **Morse Code Parsing**: The parsing expert successfully identified and
separated the Morse code into three distinct word sequences, which
is crucial for correct translation.
3. **Translation Process**: Using the dictionary, the translation expert
accurately converted the Morse code sequences into the English words
’TEA’, ’COFFEE’, and ’SUGAR’. This step was executed correctly,
adhering to Morse code standards.
4. **Message Reconstruction**: The reconstruction expert transformed the
list of words into a coherent and grammatically correct English
phrase: "Tea, coffee, and sugar". This step was crucial in making the
output readable and presentable.
5. **Final Review and Presentation**: The final output "Tea, coffee, and
sugar" was reviewed and presented clearly. The expert ensured that
the output was concise and correctly formatted, suitable for
answering the problem statement.
**[Evaluation of Accuracy and Completeness]**[:]
- The translation of Morse code to English was accurate based on the
provided dictionary.
- The final message is meaningful and directly corresponds to the Morse
code given.
- The output is formatted in a way that is easy to understand and
relevant to the context of a list of items.
**[Conclusion]**[:]
The workflow from dictionary creation to final presentation was executed
seamlessly, with each expert fulfilling their role effectively. The
final output accurately reflects the decoded message from the Morse
code, presented in a clear and concise manner.
### Analysis End ###
FINAL EVALAUTION: YES
-----
B.3 SOLUTION USING OPENAI O1-PREVIEW
**Hidden Thought Process (thought for 12 seconds)**
===== Parsing Morse code =====
I’m separating the Morse code message into three words. Each word is then
further divided into its constituent letters for decoding.
===== Parsing Morse code =====
In this chunk, the task involves parsing Morse code by handling letter
concatenation within words and word separation through slashes. Morse
code typically separates letters with spaces and words with slashes.
===== Breaking down Morse code =====
I’m looking at Morse code to decode - . .-, mapping it to letters. This
makes me think about interpreting symbols as letters.
===== Considering Morse code decoding =====
I’m thinking through the method of reversing Morse code, focusing on
identifying letter partitions and parsing two-letter words to
determine valid Morse code sequences, examining all possible
partitions of the code.
===== Identifying letter patterns =====
I’m working through different Morse code partitions to isolate letter
patterns like T E A and T U. This involves analyzing various
combinations of Morse code symbols.
===== Decoding Morse code =====
It’s interesting to see that the Morse code was parsed as -.-. --- ..-.
..-. . . for ’COFFIE’ and ’T E A’ as the most plausible word. This
suggests a detailed analysis of each symbol is crucial for accurate
translation.
===== Decoding Morse code =====
I’m working through the Morse code sequence, breaking down each symbol to
reveal letters like C, O, F, F, E, E, S, U, G, and ?.
**Final solution**
To decode the Morse code message ‘- . .- / -.-. --- ..-. ..-. . . / ...
..- --. .- .-.‘, we’ll break it down word by word and letter by
letter.
### Morse Code Reference Table:
| Letter | Morse Code |
| --- | --- |
| A | .- |
| B | -... |
| C | -.-. |
| D | -.. |
| E | . |
| F | ..-. |
| G | --. |
| H | .... |
| I | .. |
| J | .--- |
| K | -.- |
| L | .-.. |
| M | -- |
| N | -. |
| O | --- |
| P | .--. |
| Q | --.- |
-----
| R | .-. |
| S | ... |
| T | - |
| U | ..- |
| V | ...- |
| W | .-- |
| X | -..- |
| Y | -.-- |
| Z | --.. |
### Decoding Process:
**[First Word: ‘- . .-‘]**
- [‘-‘ : ]**[T]**
- [‘.‘ : ]**[E]**
- [‘.-‘ : ]**[A]**
Combined: **TEA**
**[Second Word: ‘-.-. --- ..-. ..-. . .‘]**
- [‘-.-.‘ : ]**[C]**
- [‘---‘ : ]**[O]**
- [‘..-.‘ : ]**[F]**
- [‘..-.‘ : ]**[F]**
- [‘.‘ : ]**[E]**
- [‘.‘ : ]**[E]**
Combined: **COFFEE**
**[Third Word: ‘... ..- --. .- .-.‘]**
- [‘...‘ : ]**[S]**
- [‘..-‘ : ]**[U]**
- [‘--.‘ : ]**[G]**
- [‘.-‘ : ]**[A]**
- [‘.-.‘ : ]**[R]**
Combined: **SUGAR**
### Final Decoded Message:
**[TEA COFFEE SUGAR]**
- * *
These three items-**tea**, **coffee**, and **sugar** - were crucial
commodities in historical trade events, such as the Boston Tea Party
and the global trade networks during the colonial era.
**[Answer:]**
TEA COFFEE SUGAR
C PROMPTS USED IN OUR DYNAMIC WORKFLOW APPROACH
Note that placeholders enclosed in {} are used within the prompts to represent variables that will be
substituted with actual values at LLM query time.
-----
Problem Reflection Prompt
### Problem Statement Start ###
_{task problem}_
### Problem Statement End ###
You are an exceptionally capable Meta-Expert, possessing a unique capability for conducting problem
reflection. Your primary function involves receiving the above problem query, which you must me-
thodically decompose into smaller, more manageable sub-tasks (including sub-tasks that can solved
by implementing Python functions). When designing the solution, you should think about its general-
izability. A robust solution can tackle a similar range of problems effectively with minor adaptations.
This decomposition will later facilitate the creation of a team of specialized experts, enabling efficient
collaboration of experts to address and solve the above problem. When breaking down into sub-tasks,
it is crucial to:
1. Ensure Sequential Logic: Arrange the sub-tasks in a logical, sequential order that facilitates a
smooth workflow from start to finish.
2. Avoid Overlap: Each sub-task must be distinct, with no duplication of efforts across the tasks, en-
suring efficient allocation of expertise.
3. Pursue Optimal Decomposition: Ensure sub-tasks are sufficiently defined to be tackled effectively.
Maintain a manageable number of specific sub-tasks, facilitating easier coordination and management.
In particular, please conduct the ”Problem Reflection” for the given problem: Reflect on the problem,
and describe it in your own words, in bullet points. Analyze how you can decompose the problem into
smaller, more manageable sub-tasks. Note that you can integrate Python-driven sub-tasks by imple-
menting and running modular Python code if necessary. Pay attention to small details, nuances, notes
and examples in the problem description.
Experts Design Prompt
### Problem Statement Start ###
_{task problem}_
### Problem Statement End ###
### Problem Reflection Start ###
_{problem reflection}_
### Problem Reflection End ###
You are an extremely powerful Meta-Expert with the unique ability to design a team of specialized
experts and arrange those experts through a workflow to tackle and solve the above problem. Based on
the above problem statement and its reflection analysis, please design a team of experts and orchestrate
those experts to effectively address and solve the above problem.
In particular, you are to do ”Specialized Experts Design”:
- Design a list of subject-matter experts (SMEs) including, but not limited to, Essayist Expert, Python
Expert, Linguistic Analyst, Mathematician, Data Scientist, and various other Analysts. Each expert is
only to perform one specific sub-task, such as processing data, making decisions, or utilizing Python
tools.
- Arrange the experts to operate in a sequential workflow, meaning each expert’s output becomes the
input for the next, progressively moving towards the final answer. Avoid redundancy of functions
across experts.
- Assign unique names to each expert and provide an clear description of their specific skills, knowl-
edge, and the sub-tasks they are going to perform. Ensure the expert description is comprehensive
and self-contained that encapsulates all important information and details from **Sub-tasks Identifi-
cation**.
- For sub-tasks involving logical reasoning, mathematical operations, data structure manipulation, or
programming-related challenges, you can outline strategic approaches and delegate the specifics of im-
plementation to the Python expert (Tool). The Python expert will translate the instructions into code,
execute it, and return the results. You can include multiple Python experts if needed. Please provide
explicit implementation instructions to the Python expert(s).
- Conclude each expert’s description with a name card in JSON format, summarizing key attributes.
Specify the type of each expert as either ’LLM’ for those based on Large Language Model or ’Tool’
for those utilizing Python tools.
- The final expert should be responsible for reviewing the findings of previous experts and then gener-
ating the final answer to the problem.
-----
Execution Prompt of Experts Initiated from LLM
### Problem Statement Start ###
_{original problem}_
### Problem Statement End ###
### Problem Reflection Start ###
_{problem reflection}_
### Problem Reflection End ###
Please act as {name}. Your role: {role} You are part of a specialized expert team. You are designed to
accomplish a sub-task and collaborate with other experts through a workflow graph to solve the above
problem.
The expert team operates based on the following design:
### Experts Design Start ###
_{experts design}_
### Experts Design End ###
Each expert, including you, is responsible for a specific sub-task. The workflow is structured so that
each expert’s output becomes the input for the next, progressively moving towards the final answer.
The process should be thought of as sequential steps, where you contribute towards the solution based
on the outputs from the previous experts.{data type instruction} You can think step by step if neces-
sary.
The results from the preceding experts are as follows:
### Experts’ Results Start ###
_input data_
### Experts’ Results End ###
Please provide a brief explanation of your approach to solving the assigned sub-task. After your
explanation, clearly indicate your final output as follows:
### My Final Output Start ###
[Your final answer here]
### My Final Output End ###
-----
Execution Prompt of Experts initiated from Symbolic Engine
### Problem Statement Start ###
_{original problem}_
### Problem Statement End ###
### Problem Reflection Start ###
_{problem reflection}_
### Problem Reflection End ###
Please act as {name}. Your role: {role} You are a specialized Python expert among a team of experts.
You are designed to write Python code to accomplish a sub-task and collaborate with other experts
through a workflow graph to solve the above problem.
The expert team operates based on the following design:
### Experts Design Start ###
_{experts design}_
### Experts Design End ###
Each expert, including you, is responsible for a specific sub-task. The workflow is structured so that
each expert’s output becomes the input for the next, progressively moving towards the final answer.
You should take the previous expert’s output as input, write the Python code, execute the code, and
send the output to the next expert.
The results from the preceding experts are as follows:
### Experts’ Results Start ###
_input data_
### Experts’ Results End ###
Please write the Python code that takes input in {input type} and return output in {output type}.
Guidelines: - Make sure the code includes all the necessary module imports, properly initialize the
variables, and address the problem requirements. - The code needs to be self-contained, and executable
as-is. Output only code, without any explanations or comments.
The code output must follow this structure:
‘‘‘python
def f1(...):
...
return ...
def f2(...):
...
return ...
...
if __name__ == "__main__":
...
‘‘‘
_how to read input_
The output should be printed without additional words using the ’print()’ method.
Answer:
‘‘‘python
-----
Verification Prompt
### Problem Statement Start ###
_{task problem}_
### Problem Statement End ###
### Problem Reflection Start ###
_{problem reflection}_
### Problem Reflection End ###
**Experts Design:** - Based on the problem reflection, a team of experts has been designed and
organized through a workflow to tackle and solve the problem described above. - Experts are designed
to operate in a sequential workflow, meaning each expert’s output becomes the input for the next,
progressively moving towards the final answer. - The final expert is responsible for reviewing the
findings of previous experts and then generating the final answer to the problem.
Here is a description of the experts’ roles and the workflow structure:
### Experts Design Start ###
_{experts design}_
### Experts Design End ###
Based on the workflow design, the experts have provided the following results:
### Experts’ Results Start ###
_{experts results}_
### Experts’ Results End ###
Given the described workflow design and the results produced by the experts, your task is to evaluate whether the final output of the ”{final expert}” successfully and correctly solves the problem
presented.
Please provide your analysis and then conclude your evaluation by stating ’FINAL EVALUATION:
YES’ or ’FINAL EVALUATION: NO’.
DATA SYNTHESIS OF REASONING PROBLEMS
Data Synthesis Prompt 1
Please develop 10 new and diverse reasoning tasks, one per line, inspired by but distinct from the
following 10 example reasoning tasks:
_{example tasks}_
Guidelines for task creation:
- Ensure each new task is distinctly different from the example tasks provided; avoid mere variations.
- Clearly and accurately define each task, making its objective and scope explicit.
- Design tasks that yield deterministic answers, facilitating the creation of single, definitive standard
answers for subsequent problems derived from these tasks. This helps straightforward evaluation of
correctness.
- Target a moderate to hard difficulty level for each task, requiring thorough analysis and in-depth
reasoning to solve.
Data Synthesis Prompt 2
Please develop 10 new and diverse puzzle tasks, one per line, to test various reasoning abilities.
Guidance:
- Each new puzzle task should clearly and accurately describe what the task is.
- Design puzzle tasks that yield deterministic answers, facilitating the creation of single, definitive
standard answers for subsequent problems derived from these tasks. This helps straightforward evalu-
ation of correctness.
- Puzzle tasks should have a moderate to hard difficulty level - they should require thorough analysis
and in-depth reasoning to work through.
-----
Problem Validation Prompt
### Problem Start ###
_{problem}_
### Problem End ###
Your task is to verify whether the above problem is a valid reasoning problem or not.
Valid Criteria:
- It is clear and unambiguous (NO multiple interpretations).
- It provides all necessary information required to solve the problem.
- The problem is logically structured so that it can be approached through reasoning skills. It does not
depend on subjective judgments or opinions.
- The problem is solvable and has one single, definitive correct answer that can be derived through
reasoning.
- There are no internal contradictions or conflicts in the problem.
Please provide a concise analysis and then output ’## VALID ##’ or ’## INVALID ##’. Next, if it is
invalid, please rewrite it into a new valid reasoning problem following the format below. Make sure
the new problem is challenging enough.
### New Valid Problem Start ###
[new problem]
### New Valid Problem End ###
-----
| [
"Wenlin, Yao",
"Haitao, Mi",
"Dong, Yu"
] | 2024-09-25T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.17433 | https://arxiv.org/abs/2409.17433 | https://www.semanticscholar.org/paper/24d409dbab35e16940a1b959506c6ee955a0918c |
How to Leverage Digit Embeddings to Represent Numbers? | Apart from performing arithmetic operations, understanding numbers themselves is still a challenge for existing language models. Simple generalisations, such as solving 100+200 instead of 1+2, can substantially affect model performance (Sivakumar and Moosavi, 2023). Among various techniques, character-level embeddings of numbers have emerged as a promising approach to improve number representation. However, this method has limitations as it leaves the task of aggregating digit representations to the model, which lacks direct supervision for this process. In this paper, we explore the use of mathematical priors to compute aggregated digit embeddings and explicitly incorporate these aggregates into transformer models. This can be achieved either by adding a special token to the input embeddings or by introducing an additional loss function to enhance correct predictions. We evaluate the effectiveness of incorporating this explicit aggregation, analysing its strengths and shortcomings, and discuss future directions to better benefit from this approach. Our methods, while simple, are compatible with any pretrained model and require only a few lines of code, which we have made publicly available. | This paper explores the use of mathematical priors to compute aggregated digit embeddings and explicitly incorporate these aggregates into transformer models and evaluates the effectiveness of incorporating this explicit aggregation. | ### How to Leverage Digit Embeddings to Represent Numbers?
**Jasivan Alex Sivakumar and Nafise Sadat Moosavi**
Department of Computer Science
University of Sheffield
United Kingdom
{jasivakumar1|n.s.moosavi}@sheffield.ac.uk
**Abstract**
Apart from performing arithmetic operations,
understanding numbers themselves is still a
challenge for existing language models. Simple generalisations, such as solving 100+200
instead of 1+2, can substantially affect model
performance (Sivakumar and Moosavi, 2023).
Among various techniques, character-level embeddings of numbers have emerged as a promising approach to improve number representation. However, this method has limitations as it
leaves the task of aggregating digit representations to the model, which lacks direct supervision for this process. In this paper, we explore
the use of mathematical priors to compute aggregated digit embeddings and explicitly incorporate these aggregates into transformer models. This can be achieved either by adding a special token to the input embeddings or by introducing an additional loss function to enhance
correct predictions. We evaluate the effectiveness of incorporating this explicit aggregation,
analysing its strengths and shortcomings, and
discuss future directions to better benefit from
this approach. Our methods, while simple, are
compatible with any pretrained model and require only a few lines of code, which we have
made publicly available.[1]
et al., 2022a; Yu et al., 2024), scaling up models
(Lewkowycz et al., 2022; Kojima et al., 2022), or integrating methods like chain-of-thought reasoning
(Wei et al., 2022b; Yue et al., 2024). The effectiveness of such methods is significantly amplified
when applied in conjunction with larger model architectures. With smaller models, the improvement
shown is often minimal, for example, Wei et al.
(2022b) use of chain-of-thought on a 20B parameter model only showed a 2.5% improvement on the
MAWPS (Koncel-Kedziorski et al., 2016) dataset
whereas it jumps to 14.7% with a 137B parameter
model. In addition, many of these solutions are
computationally expensive or inaccessible, alternatively we seek a low cost approach that may have
minimal impact on small scale models but greater
effects on larger models.
One main problem for number understanding
is that the widely used tokenisation methods, like
Byte-Pair Encoding (BPE) (Sennrich et al., 2016),
work well for common words but not for numbers. Specifically, rarer numbers might be broken down into random and meaningless pieces. In
light of this, digit tokenisation (Spithourakis and
Riedel, 2018) stands out for its simplicity and efficacy at representing numbers. This technique
involves breaking down numbers into their individual digits, reducing vocabulary size and ensuring
all decimal numbers can be accurately represented
enhancing numerical reasoning abilities across various model architectures, tasks, and datasets (Geva
et al., 2020; Petrak et al., 2023; Sivakumar and
Moosavi, 2023). However, the aggregation of digit
embeddings into a complete number representation
is implicitly handled by the model, which raises
the question: can explicit aggregation using mathematical priors improve numerical understanding?
In this paper, we investigate this hypothesis by integrating a mathematically grounded aggregation
of digit embeddings explicitly, rather than relying
solely on the model’s inherent capabilities. We
**1** **Introduction**
Numbers play an integral role in language
(Thawani et al., 2021), and they are crucial across
various domains such as finance (Chen et al., 2018),
medicine (Jullien et al., 2023) or even sarcasm
(Dubey et al., 2019). Despite, large language models improving their capacity in many tasks, numerical reasoning still poses a challenge (Hong
et al., 2024). Recent advancements in enhancing numerical reasoning within language models
have predominantly stemmed from using more
extensive or higher-quality training datasets (Li
[1https://github.com/jasivan/](https://github.com/jasivan/Number-Embeddings)
[Number-Embeddings](https://github.com/jasivan/Number-Embeddings)
-----
propose a novel approach to number embedding
that requires no changes to the model’s architecture
or additional pretraining. Our hypothesis is that
an effective aggregation should meet two criteria:
(1) it should distinguish between distinct numbers,
ensuring unique representations for each value, and
(2) the aggregated embedding should reflect natural numerical proximity. We also explore two
approaches for this integration: adding a special
token before the representation of individual digits
to enhance input number representations, and incorporating an additional loss function to improve
the representation of output digits.
Our findings show that the integration of explicitly aggregated digit embeddings enhances performance on small-scale models, potentially leading
to even greater improvements in larger models. The
effectiveness of our integration strategy depends
on the size and pretraining of the model used. Our
proposed method has promising prospects thus we
also enumerate some future directions to further
improve number understanding, consequently numerical reasoning.
**2** **Related Work**
Numerical reasoning is the ability to interact with
numbers using fundamental mathematical properties and thus model an area of human cognitive
thinking (Saxton et al., 2019). Given a maths
worded problem, the model needs to interpret the
relation between both numbers and the text to then
solve the problem by means of arithmetic operations (Ahn et al., 2024). Therefore, an accurate
number representation is primordial to both distinguish between different numbers but also predict
an accurate answer. The literature focuses on five
different areas to better represent numbers.
**2.1** **Scaling**
Increasing the number of parameters of pretrained
models has improved their numerical reasoning but
it is still nowhere near perfect. For example, Minerva (540B) (Lewkowycz et al., 2022) continued to
struggle with higher than seven digit multiplication.
Moreover, Frieder et al. (2023) evaluate ChatGPT
and GPT4 to conclude that these very large models
are inconsistent in their response when answering
mathematical questions ranging from arithmetic
problems to symbolic maths. This suggest that the
models lack fundamental understanding of maths
and thus also numbers. One approach to improve
number representation is to scale up the vocabulary
by having more individual number tokens. For example, GPT3 has unique tokens from the numbers
0-520, whereas GPT4 has them up to 999. Despite
general better performance of GPT4, it is not feasible to represent infinitely many numbers in finite
memory capacity, making the vocabulary larger
would increase the computational costs as well.
**2.2** **Tokenisation**
A more practical approach for representing all numbers is digit tokenisation (Spithourakis and Riedel,
2018; Geva et al., 2020); this separates numbers
into a sequence of individual digits. This method
improves upon conventional wordpiece tokenisation as shown with GenBERT (Geva et al., 2020)
and Mistral-7B (Jiang et al., 2023) by reducing vocabulary size and ensuring precise representation
of all numbers. Despite its advantages over conventional tokenisation algorithms, digit tokenisation
has limitations. It relies on the model to aggregate
digit embeddings into complete number representations, a process for which the model lacks direct
supervision. During pretraining, models typically
learn to aggregate subword tokens effectively for
common words. However, not all numbers are encountered frequently enough during pretraining for
the model to learn accurate aggregation. As an
example, when the same question is posed with
numbers represented differently (once as an integer and once scaled to the thousands), FLAN large
with digit tokenisation shows a performance drop
of 10% (Sivakumar and Moosavi, 2023). This indicates that the model struggles with numerical
consistency and accurate aggregation of digit embeddings.
**2.3** **Architectural level**
Change in model architecture also aids numerical
reasoning as shown by NumNET (Ran et al., 2019)
and xVAL (Golkar et al., 2024). NumNET extracts
the numbers from the input question and passage to
create a directed graph with magnitude information
about each number present, e.g. which is greater
than the others. This information is passed to the
model after encoding the input question to supplement it with comparative information about each
number so that the model can use this to answer
the query. Alternatively, xVAL generates two input
encodings, one with the text where numbers are
replaced by [NUM], and one with empty space for
the text but the actual value of the number in their
-----
corresponding positions. From the number preserving encoding, each number is converted to vector
embeddings that are composed of themselves at
each entry. The product of this vector with the
embedding of [NUM] is then injected into the first
layer of the transformer for each number in the input sequence. For decoding, a bespoke process is
created to extract the predicted number instead of
outputting the [NUM] token. Despite the positive
contributions of these papers, their methods lack
versatility as they are not adaptable off-the-shelf to
any pretrained model.
**2.4** **Loss Functions**
Another approach to improve numerical reasoning is for models to intrinsically learn better representation by introducing an inductive bias in the
loss function. A simple approach is Wallace et al.
(2019)’s use of the mean squared error (MSE) loss
across the batch to directly predict floats on a subset of DROP (Dua et al., 2019) which consists of
numerical answers. However, this method is limited to datasets that only predict numbers. Contrastive loss is also used to manipulate the representation of numbers, for instance, Petrak et al. (2023)
draws nearer the representation generated by BPE
and digit tokenisation of numbers through an auxiliary loss when doing extended pretraining to improve arithmetic reasoning in worded problems like
DROP but also tables like SciGen (Moosavi et al.,
2021). Similarly, Li et al. (2022b) use contrastive
learning but on computation trees. They first generate computation trees for the mathematical operations and use contrastive loss to pull nearer the
graph representing the same operation, e.g. addition, and push other ones further. This is then integrated in the main loss and improves performance
on two maths worded problem datasets, MathQA
(Amini et al., 2019) and Math23K (Wang et al.,
2017). While these loss functions are adaptable
with different models, contrastive training is computationally expensive.
**2.5** **Input Representation**
The most model agnostic method is changing the
representation of the numbers in the input text. Wallace et al. (2019) explore worded forms of numbers,
but this approach would overly rely on the tokeniser
which would split them into subwords. Muffo et al.
(2022) decomposes the numbers into place values
in reverse order, e.g. 123 = 3 units, 2 tens, 1 hundreds which helps when working with remainders,
e.g. when adding. However, this introduces many
more tokens which is undesirable as well as either
creating new vocabulary for each place value term
or the danger of them being split into subword tokens. Zhang et al. (2020) preserves the numerical
aspect and converts all numbers into scientific notation, e.g. 314.1 is represented as 3141[EXP]2,
improving models’ ability to identify the magnitude of a number. Despite providing magnitudinal
information, the number before [EXP] still needs
to be represented. In fact, all the above strategies
require the model to implicitly compute an overall
aggregation for the numbers based on their individual components generated by the tokeniser of
the model, whether these are digits or subwords. A
simple, yet effective method is to introduce pause
tokens before predicting the answer (Goyal et al.,
2024). This is evaluated by training a 1B parameter
transformer model on C4 using [PAUSE] tokens
and a 1% improvement is shown on the numerical
reasoning dataset, GSM8K (Cobbe et al., 2021).
While this method can be used for inference only,
they conclude that pretraining is recommended,
therefore less applicable to existing models.
Our work is versatile within this line of research.
Unlike previous methods that rely on the model to
implicitly learn aggregation, we focus on the explicit aggregation of digit embeddings using mathematical priors. This provides direct supervision for
the aggregation process, improving the accuracy of
number representation. Furthermore, our method
ensures that the embedding for a given number
aligns with its numerical neighbours, enhancing
the model’s numerical reasoning capabilities without altering the model architecture or requiring
extensive retraining.
**3** **Aggregation of Digit Embeddings**
We explore an approach which is a natural continuation of digit tokenisation as this has demonstrated its efficacy in enhancing numerical reasoning compared to BPE tokenisation. This improvement can be attributed to digit tokenisation’s utilisation of pretrained embeddings for individual digits,
allowing the model to learn the overall representation through contextualised embeddings. In contrast, BPE may fragment longer and less frequent
numbers into random subsequences, resulting in
less meaningful aggregations than those achieved
through digit tokenisation. However, the implicit
aggregation process employed by digit tokenisa
-----
Figure 1: A 2D projection of the neighbourhood of the
number token “55” in FLAN large is represented on the
left. Ideally, number embeddings should reflect natural
numerical proximity. In other words, the embedding for
any given number should closely align with those of its
immediate numerical neighbours, depicted on the right.
tion remains unclear; specifically, how the model
forms the overall aggregation of a number given
the embeddings of its individual digits.
In this paper, we investigate a mathematically
motivated aggregation that takes into account the
relative position of each digit within a number. Our
approach generates an overall embedding for the
number by considering the positional weight of
each individual digit in that number. For example,
given “123”, the common understanding of numbers as base-10 is “1×100+2×10+3×1”, so left
most digits are weighted higher as they represent a
greater portion of the number.
We design our weighted scheme such that (1) the
embeddings of single-digit numbers remain intact,
as these embeddings are effectively learned during pretraining, evidenced by the high performance
of models on single-digit operations (Sivakumar
and Moosavi, 2023), (2) the weights of consecutive place values increase exponentially to reflect
base-10, and (3) the weights do not sum to 1, meaning that it is not normalising the sum, allowing for
number composed of the same digits, e.g. “111”
and “11”, to be represented differently. These properties would introduce a bias towards an accurate
length of numbers and the correct digits from left
to right as the left most digits are amplified, hence
preserving natural numerical order.
We propose to calculate the weighted aggregated
embedding a with ai = _wi · di for 1 ≤_ _i ≤_ _N_
where N is the number of digits, and the weights
_wi are defined as:_ [P]
_wi = 2[N]_ _[−][i]_ [3(][N][ + 1][ −] _[i][)(][N][ + 2][ −]_ _[i][)]_ _._ (1)
_×_ _N_ (N + 1)(N + 2)
These weights are designed to satisfy three key
properties. (1) Alignment with single-digit rep
FLAN large
100
75
1 digit
2 digit
50 3 digit
Average F1 4 digit
5 digit
25
6 digit
all
0
sum Weighted
Aggregation
Figure 2: Average F1-score of FLAN large layer 1 numbers using sum and our weighted aggregation function
with neighbourhood of 10.
**resentations: when N = 1, w1 = 1, ensuring**
compatibility with the model’s pretraining on single digits. (2) Exponential growth: the exponential component 2[N] _[−][i]_ mimics the base-10 system,
providing an appropriate scale without causing the
weights to grow too rapidly. This also ensures that
the weights are not normalised. (3) Regularisation
**Term: the fractional component acts as a regu-**
larisation term, forming a normalised triangular
number sequence. For instance, for a 3-digit number, the sequence is 1,3,6, normalised to 0.1,0.3,0.6.
This ensures that the difference between consecutive digit weights increases proportionally, i.e.,
_wi_ _wi_ 1 = w0 _i, replicating the exponential_
_−_ _−_ _×_
ratio between digit positions in a logarithmic space.
To validate the ability of an aggregated embedding to accurately represent numerical relationships, we use the F1-score to compare natural
k-Nearest Neighbours (nkNN) with embedding
k-Nearest Neighbours (ekNN). This comparison
serves two purposes: firstly, to assess the embeddings’ capacity to distinguish between distinct numbers, and secondly, to evaluate how well these embeddings mirror the natural numerical order. By
defining nkNN as the set of mathematically adjacent numbers to a given integer n, and ekNN as the
set of its closest neighbours in the embedding space,
we create a direct measure of the embedding’s effectiveness in preserving numerical proximity. The
F1-score evaluates the alignment between nkNN
and ekNN, penalising both the inclusion of incorrect neighbours and the omission of correct ones.
A strong correlation between nkNN and ekNN,
as reflected in a high F1-score, indicates that the
embeddings faithfully capture the essence of numerical data as illustrated in Figure 1.
-----
We compare our bespoke weighted aggregation
function to a more standard aggregation function,
sum. For a set of digit embeddings, we apply
these functions along each dimension to generate
a unique embedding for the number represented
by these digits. Figure 2 graphs the F1-score for
both functions and different digit length, i.e. 2digit would be the numbers 10 to 99. Appendix A
has results for other aggregation functions: max,
min, mean and median; these have the lowest alignment with natural order with an F1-score below
5%. These functions all have a normalising property meaning that the length of the number has no
bearing on the aggregated embedding, as the functions only retrieve one entry for each dimension
therefore cases like “1111” would be equivalent to
both “11” and “1”. Contrastingly, sum has better
F1-scores for up to 3 digits as it possesses magnitudinal information since all the entries are summed
up for each dimension distinguishing, for instance,
a 2-digit set from a 3-digit set as it simply adds
more numbers. However, it is position agnostic - it
assigns equal weight to all the digit irrespective of
their relative positions. Therefore, the embeddings
generated from permutations of the same digits will
always be equivalent, e.g. “85” and “58”. Since
larger digit numbers have more such permutations,
the F1-score reduces as the number of digits increases. Using this metric, the best aggregation
is our weighted sum, the average F1-score rounds
to 69% for 2 digits onwards suggesting that our
weighted sum is closer to the ideal depiction in
Figure 1. Undoubtedly, 1-digit F1-score is better as
these embeddings are generated from pretraining,
but also because the weighted scheme ensures that
they are separated from the other number embeddings.
Despite this weighted scheme aligning the number embeddings with their natural order, the
weights generated by Equation 1 can become excessively large after a certain point. This behaviour
is, however, attenuated by the regularisation term
which maintains the high F1-score of 69% for, at
least, up to 6-digit long numbers.
**4** **Integrating Aggregated Embeddings**
Given the construction of our mathematically
grounded aggregation, we explore two distinct
methodologies for enhancing numerical understanding in models, each targeting different aspects
of number representation. The first method focuses
on enriching the input data by integrating a mathematical aggregation directly into the input embedding as a special token. This approach requires no
changes to the model’s architecture, making it a
flexible solution compatible with various models
and suitable for a broad spectrum of tasks.
In contrast, the second approach aims to refine
the model’s output by improving how numbers
are represented in the learned outcomes. This is
achieved by incorporating the aggregation in the
loss function, encouraging the model to generate
number embeddings that align more closely to the
correct numerical values. Specifically, this method
includes an additional term in the loss calculation,
which accounts for the distance between the aggregated embedding of the predicted numbers and
that of the true numbers. This targeted intervention
is particularly effective in tasks requiring precise
numerical predictions, helping the model develop a
more nuanced and accurate representation of numbers.
The baseline implementation for both methods
is the same as Petrak et al. (2023) with digit tokenisation surrounded by [F] and [/F] tokens to mark
the start and end of the number identified using the
regular expression “(\d*\.)?\d+”.
**4.1** **Aggregation in Input Embeddings**
In our first approach, we enhance the input embedding by incorporating the computed aggregation
directly. This is achieved by first digitising numbers and delineating them with special tokens as
done by Petrak et al. (2023). Additionally, we introduce a special token, [AGG], positioned as follows
where di represent the digit tokens: [F] [AGG] [d1]
... [dn] [/F]. The embedding for this [AGG] token is initialised with the aggregation of the digit
embeddings based on Equation 1.
**4.2** **Aggregation in Loss Function**
Language generation models typically use a crossentropy loss function ( _CE) (Lewis et al., 2020;_
_L_
Raffel et al., 2020). To improve the model’s ability
to predict numbers accurately, we introduce an auxiliary loss ( _AUX_ ) to calculate the mean squared
_L_
error between the aggregate embedding of the gold
and predicted numbers. Understanding and predicting numbers is inherently more complex than
predicting a single word or sub-word because they
consist of multiple digits, each carrying different
significance. For example, in answering the question “Mary’s salary is £900 a month, but she pays
-----
£579 in rent. How much salary does she have left
at the end of each month?”, the answers 320, 230,
32, or 456 are all incorrect. However, 320 is more
accurate compared to others because its magnitude
is closer to the correct answer, 321. Incorporating this new auxiliary loss would help the model
predict digits that are closer to the gold answer,
enhancing its precision in numerical predictions by
recognising the relative significance of each digit
within a number.
Given a prediction p and the gold label l, we
compute the weighted sum of the digits[2] for both p
and l. This process generates two single embedding
representations: W (p) for the prediction, and W (l)
for the gold label. The distance between these two
embeddings is then calculated using the log[3] mean
squared error (equivalent to the euclidean distance):
_LAUX = log2 ( ∥W_ (p) − _W_ (l)∥2 ) (2)
The two losses are linearly interpolated by a hyperparameter, λ:
_L = λ × LCE + (1 −_ _λ) × LAUX_ (3)
**5** **Experimental Setup**
Both methods are evaluated on two different pretrained models, BART base (140M) (Lewis et al.,
2020) and FLAN base (250M) (Wei et al., 2022a).
Additionally, we evaluate on FLAN large (780M)
to explore the effect of model size. All of these
models are encoder-decoders. BART is pre-trained
on five corrupted document tasks from books and
Wikipedia data. FLAN is an instruction-finetuned
version of T5 (Raffel et al., 2020) which is trained
on C4 using transfer learning.
We evaluate our proposed methods on two different test sets: FERMAT (Sivakumar and Moosavi,
2023), and MAWPS (Koncel-Kedziorski et al.,
2016). Both FERMAT and MAWPS consist of
English maths worded problem that can be tackled
by BART and FLAN as shown by Sivakumar and
Moosavi (2023) and where the answer is a single
number. This enables us to evaluate our method
strictly on numerical outputs reducing the interference of other difficulties such as predicting words
and units, or extracting spans. FERMAT is a multiview evaluation set which has different test sets
2Should the answers not be numerical, the model is penalise by arbitrarily setting3Log base 2 is used to regularise the auxiliary loss. LAUX to 20.
with different number representations while keeping the maths problem fixed. The different test sets
distinguish different number types of which we
select the ones that separate integers into number
lengths, mix integers less than 1000, mix integers
greater than 1000, one and two decimal place numbers, and a test set scaled up to more than 4-digit
numbers; these allow us to evaluate which number representation the models support better. FERMAT’s training set is augmented from templates
making it independent to its test sets. MAWPS,
on the other hand, has the same domain for both
training and testing. It is a widely used dataset to
evaluate numerical reasoning, chiefly because it
is small and easy to train with small models. We
finetune the models on each dataset’s respective
training data (see Appendix B) using the hyperparameters described in Appendix C.
Accuracy is the general metric used to evaluate these datasets, however, since it is sometimes
too stringent and neglects to reflect some improvements of the model, we also use a variation of edit
distance (Levenshtein, 1966) as a supplementary
metric. Edit distance helps see improvement in the
predictions despite being incorrect; it calculates
how many insertions, deletions or substitutions is
required for the prediction to be transformed into
the gold label number on a string level. In this paper, we will use Character Error Rate (CER) which
is a character level (digit level) edit distance as a
percentage over the string length of the target. The
lower the CER, the closer the prediction is to the
gold label.
**6** **Impact of Integrating Aggregations**
Table 1 presents the results of our exploration into
the effects of integrating mathematical aggregation
into the three models across two distinct settings.
The bold values indicate the stronger improvement
between the two incorporation strategies. For the
majority of the test splits, the strongest performance of the examined models is observed when
the aggregation is incorporated into the auxiliary
loss. This suggests that incorporating aggregation
at the output level is more effective than incorporating it in the input embedding. However, this
may be due to the fact that adding a new token in
the input might require more than just fine-tuning,
such as an extended pretraining phase. This aligns
with the observations made by Goyal et al. (2024),
who found that the addition of the pause token only
-----
|Incorporating Weights (Accuracy %)|Col2|MAWPS|FERMAT|Col5|
|---|---|---|---|---|
||||Original Commuted Integers 0 to 1000 2-digit integers 3-digit integers 4-digit integers 1000+ 1000+ same 1dp random 2dp random|a+b a-b a*b a/b|
|BART base (140M)|Digits [AGG] + Digits Digits + Aux Loss|19.20 +2.00 +1.40|16.65 8.73 10.26 13.41 10.89 7.74 5.58 10.89 17.82 8.37 +0.63 +1.53 -1.17 -0.90 -2.16 -0.27 +0.09 +0.09 +1.08 -0.27 +1.89 +1.80 +0.54 +0.81 0.00 +0.81 +1.17 -1.26 +0.18 +0.63|40.91 10.62 9.56 11.76 -3.90 -0.74 +1.77 0.00 +2.01 +0.19 +4.25 -1.27|
|FLAN base (250M)|Digits [AGG] + Digits Digits + Aux Loss|23.00 +0.80 +1.80|28.35 17.82 17.10 22.86 17.37 13.77 10.35 18.72 25.83 18.45 +2.79 +0.27 +2.52 +0.81 +1.80 +2.79 +1.80 +0.90 +0.45 -0.09 +2.25 +0.36 +3.15 +2.16 +1.71 +2.79 +0.81 +3.87 +1.89 -0.18|63.38 19.57 12.92 11.27 +4.48 +3.21 -0.27 +1.08 +3.90 +5.80 +0.27 +1.57|
|FLAN large (780M)|Digits [AGG] + Digits Digits + Aux Loss|28.80 +1.20 +1.00|42.39 21.06 25.65 31.32 24.30 21.87 16.47 23.31 36.36 25.83 +0.45 +0.45 +0.81 +2.07 +2.79 +0.99 +1.35 +2.88 +0.27 +0.54 +0.99 -0.18 +1.62 +2.88 +2.79 +0.72 +1.53 +1.26 +1.26 +0.63|63.12 39.88 18.23 18.14 +6.17 +3.83 +0.53 +1.47 -0.39 +1.79 +0.18 -1.08|
Table 1: Results change from baseline after including aggregate embeddings in input embedding ([AGG] + Digits)
and auxiliary loss (Digits + Aux Loss) for BART base, FLAN base and FLAN large. Darker shades of green and red
indicate an absolute change greater than 1%.
**7** **Analysis of Aggregation Embedding in**
**the Input**
The first integration method relies on prepending
the aggregated embedding token, [AGG], before
the digits. The position of the token is before what
it represents, similar in nature to BERT’s (Devlin
et al., 2019) [CLS] token, which is an aggregation
token of the entire input. However, Goyal et al.
(2024) use a [PAUSE] token posteriori to the digit
tokens to act as processing time after concluding
that prepending it had less impact. Consequently,
we also evaluate our proposed method by appending the aggregation token, i.e. Digits + [AGG]. Table 2 clearly shows that this configuration for both
base models underperforms compared to [AGG]
+ Digit as rows have more red entries. In fact, it
performs worse than the baseline with only digit
tokenisation. For FLAN large, the results between
[AGG] prepended and appended are closer to one
another, but prepended, the impact is positive for
each test set and on average better by 1% than
[AGG] used posteriori. Seeing the token before
the digits might provide magnitude information of
the overall number which would indicate the importance of each digit to come, whereas having it
after might interfere with the representation that
the model has already started to create implicitly
from seeing the digits first.
Additionally, we test the impact of providing
the aggregated token by replacing it with a randomly initialised [PAUSE] token akin to Goyal
et al. (2024). From Table 2, we observe that for
BART, nor [AGG], nor [PAUSE] have a great positive impact on the performance. This confirms
that BART struggles to learn new tokens from finetuning alone. The FLAN models are more adapt
became effective from pretraining.
FLAN large, on the other hand, has a more balanced performance but an overall higher improvement when the aggregation is incorporate in the
input as shown particularly from all the green cells
in the row [AGG] + Digits. Therefore, a certain
model size may be required to learn a new token
and leverage the information it provides. This reinforces that an aggregated embedding provides
useful signal to improve number understanding but
how it is integrated is also crucial.
When focusing on smaller integers (columns
“Integers 0 to 1000” to “4-digit integers”), incorporating the weighted embedding in the auxiliary
loss consistently yields better performance, with all
cells being green and showing the highest scores.
For smaller integers, models likely already possess
a strong implicit representation, making the explicit
[AGG] token less impactful. However, at the decoding stage, the auxiliary loss enhances precision
by penalising incorrect predictions.
For the 1000+ columns, using accuracy, the pattern is not evident, however, from Appendix D,
using the auxiliary loss clearly reduces the CER
more than explicitly using the aggregation in the
input. The auxiliary loss encourages the model
to predict the correct answer as the CER is lower.
However, since the weights assigned to each digit
position is lower as it gets closer to the units, the
auxiliary accounts less for it, reducing precision.
As a consequence, despite the CER reducing, since
the entire number is not predicted correctly, improvement fails to be reflect in the accuracy.
-----
|Aggregated Embedding (Accuracy %)|Col2|MAWPS|FERMAT|Col5|
|---|---|---|---|---|
||||Original Commuted Integers 0 to 1000 2-digit integers 3-digit integers 4-digit integers 1000+ 1000+ same 1dp random 2dp random|a+b a-b a*b a/b|
|BART base (140M)|Digits Digits + [AGG] [AGG] + Digits [PAUSE] + Digits|19.20 -1.40 +2.00 -1.40|16.65 8.73 10.26 13.41 10.89 7.74 5.58 10.89 17.82 8.37 -14.76 -7.74 -8.82 -10.98 -8.73 -6.75 -5.58 -10.35 -14.76 -7.83 +0.63 +1.53 -1.17 -0.90 -2.16 0.27 +0.09 +0.09 +1.08 0.27 +0.18 -0.45 -0.18 -0.63 -0.90 -0.36 -0.27 -3.87 -0.90 0.00|40.91 10.62 9.56 11.76 -36.82 -9.38 -8.94 -9.51 3.90 0.74 +1.77 0.00 -8.51 -0.31 +1.68 -2.06|
|FLAN base (250M)|Digits Digits + [AGG] [AGG] + Digits [PAUSE] + Digits|23.00 +1.80 +0.80 +1.00|28.35 17.82 17.10 22.86 17.37 13.77 10.35 18.72 25.83 18.45 -1.53 -2.07 +0.99 -1.89 -0.36 +0.63 +1.35 -0.63 -1.98 -0.99 +2.79 +0.27 +2.52 +0.81 +1.80 +2.79 +1.80 +0.90 +0.45 -0.09 +2.07 -0.54 +1.98 +1.44 +1.80 +2.61 +2.52 +2.16 +2.61 +1.71|63.38 19.57 12.92 11.27 +0.45 +3.89 -2.39 -0.10 +4.48 +3.21 -0.27 +1.08 +3.18 +5.99 1.95 +3.43|
|FLAN large (780M)|Digits Digits + [AGG] [AGG] + Digits [PAUSE] + Digits|28.80 -2.80 +1.20 -1.40|42.39 21.06 25.65 31.32 24.30 21.87 16.47 23.31 36.36 25.83 -2.16 +1.35 +1.89 +1.08 +1.44 +1.62 +2.16 +5.40 -1.17 +0.54 +0.45 +0.45 +0.81 +2.07 +2.79 +0.99 +1.35 +2.88 +0.27 +0.54 -0.45 -0.45 +1.89 +3.69 +2.88 +3.06 +2.25 +5.04 +1.17 +2.61|63.12 39.88 18.23 18.14 +8.57 -8.15 -0.97 +1.18 +6.17 +3.83 +0.53 +1.47 +6.17 +1.17 -1.77 +3.53|
Table 2: Comparing the aggregated embedding at the input level with a pause token and positioning the token after
the digits. Darker shades of green and red indicate an absolute change greater than 1%.
able to the new tokens as seen by the greener rows.
However, the overwhelming bold entries with the
[PAUSE] token indicate that both FLAN base and
large perform better with a [PAUSE] token acting
as a blank space for the model to process the information. It may also be that the model uses this
token to create an implicit representation of the
number. Nevertheless, the average improvement
between the [PAUSE] and [AGG] differs by less
than 0.5% implying that a different aggregation
function or a full hyperparameter search could reverse the trend.
**8** **Future Work**
Our proposed aggregation strategy has shown encouraging steps towards better number representation. However, as with observation made in previous work, the effect of new strategies report minimal improvement on smaller models but greater
impact on larger models (Cobbe et al., 2021; Wei
et al., 2022b). Therefore, an evaluation of our proposed method on larger scale models would verify
the scalability of this approach.
The weighting scheme, presented in Equation 1,
offers a straightforward method for aggregating
digit embeddings. However, as numbers increase
in length, their aggregated embeddings tend to drift
away from the original numerical embedding space.
This divergence could be addressed by enabling
the model to adapt to this new embedding space
by exploring extended pretraining, or alternative
weighting schemes that remain closer to the numerical subspace while satisfying the criteria outlined
in Section 3.
Our auxiliary loss, grounded in Mean Squared
Error, shows promising results for penalising the
model’s erroneous predictions and nudging it towards more accurate outcomes. Given that the
values resulting from standard cross-entropy and
the MSE of the aggregated embeddings may span
vastly different value ranges, crafting a loss function that aligns more closely in magnitude with the
output of cross-entropy could mitigate the risk of
exerting excessive regularisation pressure.
**9** **Conclusion**
Improving numerical reasoning is a challenging
task, increasing model sizes or focusing on data
augmentation helps but at the cost of a substantial additional training time or computations. Digit
tokenisation has been a pioneering work in improving how models encode and decode numbers, however the aggregation of the digit is done implicitly.
We advance this idea by explicitly providing an
aggregated number embedding that is more mathematically sound. These embeddings are generated as weighted sums of the digit embeddings by
accounting for the digits relative position in the
number. We then incorporate them in two model
agnostic forms: in the input level as an additional
token, and in an auxiliary MSE loss. Our promising results demonstrate that, as a proof-of-concept,
even a straightforward aggregation with simple incorporation techniques can positively impact number understanding. Therefore, testing it at larger
scale, developing sophisticated aggregation functions, and refining the integration of the auxiliary
loss presents valuable avenues for future research.
-----
**10** **Limitations**
Some of the limitations of this work is discussed
in the Future Work section. However, we give
detail of more limitations relating to the size of the
models used, and the compatibility and growth of
our proposed weighted aggregation function.
Due to financial and resource constraints the hypothesis that the methods for incorporating the aggregated embedding in larger architectures would
lead to greater performance based on the improvement observed on smaller model is not verified.
In addition, while the weighted scheme is designed using mathematical priors, it is specifically
created for integers, therefore it may not be compatible with decimals or alternative representation
of numbers such as 01 for 1. Nonetheless, from
Table 5, we note that CER reduces for both 1dp and
2dp therefore our aggregated embedding method
has promising scope for all numbers. Lastly, the
weights function described in Equation 1 does not
converge, therefore for a sufficiently large number of digit it would grow beyond the accuracy
provided by the model. However, we explain in
Section 3 with the aid of Figure 2 that, for up to
6-digits, the weighted scheme functions well with
no signs of deterioration. Moreover, in natural text,
very large numbers tend to be shorten using a more
appropriate unit, for example, the world population
of 8114693010 is more often expressed as 8 billion
reducing the numbers of digits needed considerably.
**Acknowledgements**
This work was supported by the Centre for Doctoral Training in Speech and Language Technologies (SLT) and their Applications funded
by UK Research and Innovation [grant number EP/S023062/1]. Additional thanks to Danae
Sanchez Villegas for her continued feedback in this
research.
**References**
Janice Ahn, Rishu Verma, Renze Lou, Di Liu, Rui
[Zhang, and Wenpeng Yin. 2024. Large language](https://arxiv.org/abs/2402.00157)
[models for mathematical reasoning: Progresses and](https://arxiv.org/abs/2402.00157)
[challenges. arXiv preprint arXiv:2402.00157.](https://arxiv.org/abs/2402.00157)
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik
Koncel-Kedziorski, Yejin Choi, and Hannaneh Ha[jishirzi. 2019. MathQA: Towards interpretable math](https://doi.org/10.18653/v1/N19-1245)
[word problem solving with operation-based for-](https://doi.org/10.18653/v1/N19-1245)
[malisms. In Proceedings of the 2019 Conference](https://doi.org/10.18653/v1/N19-1245)
_of the North American Chapter of the Association for_
_Computational Linguistics: Human Language Tech-_
_nologies, Volume 1 (Long and Short Papers), pages_
2357–2367, Minneapolis, Minnesota. Association for
Computational Linguistics.
Chung-Chi Chen, Hen-Hsen Huang, Yow-Ting Shiue,
[and Hsin-Hsi Chen. 2018. Numeral understanding](https://doi.org/10.1109/WI.2018.00-97)
[in financial tweets for fine-grained crowd-based fore-](https://doi.org/10.1109/WI.2018.00-97)
[casting. In 2018 IEEE/WIC/ACM International Con-](https://doi.org/10.1109/WI.2018.00-97)
_ference on Web Intelligence (WI), pages 136–143._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
[Nakano, et al. 2021. Training verifiers to solve math](https://arxiv.org/pdf/2110.14168.pdf?curius=520)
[word problems. arXiv preprint arXiv:2110.14168.](https://arxiv.org/pdf/2110.14168.pdf?curius=520)
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
[Kristina Toutanova. 2019. BERT: Pre-training of](https://doi.org/10.18653/v1/N19-1423)
[deep bidirectional transformers for language under-](https://doi.org/10.18653/v1/N19-1423)
[standing. In Proceedings of the 2019 Conference of](https://doi.org/10.18653/v1/N19-1423)
_the North American Chapter of the Association for_
_Computational Linguistics: Human Language Tech-_
_nologies, Volume 1 (Long and Short Papers), pages_
4171–4186, Minneapolis, Minnesota. Association for
Computational Linguistics.
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel
Stanovsky, Sameer Singh, and Matt Gardner. 2019.
[DROP: A reading comprehension benchmark requir-](https://doi.org/10.18653/v1/N19-1246)
[ing discrete reasoning over paragraphs. In Proceed-](https://doi.org/10.18653/v1/N19-1246)
_ings of the 2019 Conference of the North American_
_Chapter of the Association for Computational Lin-_
_guistics: Human Language Technologies, Volume 1_
_(Long and Short Papers), pages 2368–2378, Min-_
neapolis, Minnesota. Association for Computational
Linguistics.
Abhijeet Dubey, Lakshya Kumar, Arpan Somani,
Aditya Joshi, and Pushpak Bhattacharyya. 2019.
[“when numbers matter!”: Detecting sarcasm in nu-](https://doi.org/10.18653/v1/W19-1309)
[merical portions of text. In Proceedings of the Tenth](https://doi.org/10.18653/v1/W19-1309)
_Workshop on Computational Approaches to Subjec-_
_tivity, Sentiment and Social Media Analysis, pages_
72–80, Minneapolis, USA. Association for Computational Linguistics.
Simon Frieder, Luca Pinchetti, Alexis Chevalier,
Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas
Lukasiewicz, Philipp Christian Petersen, and Julius
[Berner. 2023. Mathematical capabilities of chatGPT.](https://openreview.net/forum?id=xJ7YWXQOrg)
In Thirty-seventh Conference on Neural Information
_Processing Systems Datasets and Benchmarks Track._
Mor Geva, Ankit Gupta, and Jonathan Berant. 2020.
[Injecting numerical reasoning skills into language](https://doi.org/10.18653/v1/2020.acl-main.89)
[models. In Proceedings of the 58th Annual Meet-](https://doi.org/10.18653/v1/2020.acl-main.89)
_ing of the Association for Computational Linguistics,_
pages 946–958, Online. Association for Computational Linguistics.
Siavash Golkar, Mariel Pettee, Alberto Bietti, Michael
Eickenberg, Miles Cranmer, Geraud Krawezik, Francois Lanusse, Michael McCabe, Ruben Ohana,
-----
Liam Holden Parker, Bruno Régaldo-Saint Blancard,
Tiberiu Tesileanu, Kyunghyun Cho, and Shirley Ho.
[2024. xval: A continuous number encoding for large](https://openreview.net/forum?id=OinvjdvPjp)
[language models.](https://openreview.net/forum?id=OinvjdvPjp)
Sachin Goyal, Ziwei Ji, Ankit Singh Rawat, Aditya Krishna Menon, Sanjiv Kumar, and Vaishnavh Nagara[jan. 2024. Think before you speak: Training lan-](https://openreview.net/forum?id=ph04CRkPdC)
[guage models with pause tokens. In The Twelfth](https://openreview.net/forum?id=ph04CRkPdC)
_International Conference on Learning Representa-_
_tions._
Pengfei Hong, Deepanway Ghosal, Navonil Majumder,
Somak Aditya, Rada Mihalcea, and Soujanya Poria.
[2024. Stuck in the quicksand of numeracy, far from](https://arxiv.org/abs/2401.09395)
[agi summit: Evaluating llms’ mathematical compe-](https://arxiv.org/abs/2401.09395)
[tency through ontology-guided perturbations. arXiv](https://arxiv.org/abs/2401.09395)
_preprint arXiv:2401.09395._
Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego
de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud,
Marie-Anne Lachaux, Pierre Stock, Teven Le Scao,
Thibaut Lavril, Thomas Wang, Timothée Lacroix,
[and William El Sayed. 2023. Mistral 7b.](http://arxiv.org/abs/2310.06825)
Maël Jullien, Marco Valentino, Hannah Frost, Paul
O’regan, Donal Landers, and André Freitas. 2023.
[SemEval-2023 task 7: Multi-evidence natural lan-](https://aclanthology.org/2023.semeval-1.307)
[guage inference for clinical trial data. In Proceedings](https://aclanthology.org/2023.semeval-1.307)
_of the The 17th International Workshop on Seman-_
_tic Evaluation (SemEval-2023), pages 2216–2226,_
Toronto, Canada. Association for Computational Linguistics.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yu[taka Matsuo, and Yusuke Iwasawa. 2022. Large lan-](https://openreview.net/forum?id=6p3AuaHAFiN)
[guage models are zero-shot reasoners. In ICML 2022](https://openreview.net/forum?id=6p3AuaHAFiN)
_Workshop on Knowledge Retrieval and Language_
_Models._
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate
[Kushman, and Hannaneh Hajishirzi. 2016. MAWPS:](https://doi.org/10.18653/v1/N16-1136)
[A math word problem repository. In Proceedings of](https://doi.org/10.18653/v1/N16-1136)
_the 2016 Conference of the North American Chapter_
_of the Association for Computational Linguistics: Hu-_
_man Language Technologies, pages 1152–1157, San_
Diego, California. Association for Computational
Linguistics.
[Vladimir I. Levenshtein. 1966. Binary codes capable of](https://nymity.ch/sybilhunting/pdf/Levenshtein1966a.pdf)
[correcting deletions, insertions, and reversals. Soviet](https://nymity.ch/sybilhunting/pdf/Levenshtein1966a.pdf)
_physics. Doklady, 10:707–710._
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan
Ghazvininejad, Abdelrahman Mohamed, Omer Levy,
Veselin Stoyanov, and Luke Zettlemoyer. 2020.
[BART: Denoising sequence-to-sequence pre-training](https://doi.org/10.18653/v1/2020.acl-main.703)
[for natural language generation, translation, and com-](https://doi.org/10.18653/v1/2020.acl-main.703)
[prehension. In Proceedings of the 58th Annual Meet-](https://doi.org/10.18653/v1/2020.acl-main.703)
_ing of the Association for Computational Linguistics,_
pages 7871–7880, Online. Association for Computational Linguistics.
Aitor Lewkowycz, Anders Johan Andreassen,
David Dohan, Ethan Dyer, Henryk Michalewski,
Vinay Venkatesh Ramasesh, Ambrose Slone, Cem
Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu,
Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra.
[2022. Solving quantitative reasoning problems with](https://openreview.net/forum?id=IFXTZERXdM7)
[language models. In Advances in Neural Information](https://openreview.net/forum?id=IFXTZERXdM7)
_Processing Systems._
Ailisi Li, Yanghua Xiao, Jiaqing Liang, and Yunwen
[Chen. 2022a. Semantic-based data augmentation for](https://doi.org/10.1007/978-3-031-00129-1_3)
[math word problems. In Database Systems for Ad-](https://doi.org/10.1007/978-3-031-00129-1_3)
_vanced Applications: 27th International Conference,_
_DASFAA 2022, Virtual Event, April 11–14, 2022, Pro-_
_ceedings, Part III, page 36–51, Berlin, Heidelberg._
Springer-Verlag.
Zhongli Li, Wenxuan Zhang, Chao Yan, Qingyu Zhou,
[Chao Li, Hongzhi Liu, and Yunbo Cao. 2022b. Seek-](https://doi.org/10.18653/v1/2022.findings-acl.195)
[ing patterns, not just memorizing procedures: Con-](https://doi.org/10.18653/v1/2022.findings-acl.195)
[trastive learning for solving math word problems.](https://doi.org/10.18653/v1/2022.findings-acl.195)
In Findings of the Association for Computational
_Linguistics: ACL 2022, pages 2486–2496, Dublin,_
Ireland. Association for Computational Linguistics.
Nafise Sadat Moosavi, Andreas Rücklé, Dan Roth,
[and Iryna Gurevych. 2021. Scigen: a dataset for](https://openreview.net/forum?id=Jul-uX7EV_I)
[reasoning-aware text generation from scientific ta-](https://openreview.net/forum?id=Jul-uX7EV_I)
[bles. In Thirty-fifth Conference on Neural Informa-](https://openreview.net/forum?id=Jul-uX7EV_I)
_tion Processing Systems Datasets and Benchmarks_
_Track (Round 2)._
Matteo Muffo, Aldo Cocco, and Enrico Bertino. 2022.
[Evaluating transformer language models on arith-](https://aclanthology.org/2022.lrec-1.30)
[metic operations using number decomposition. In](https://aclanthology.org/2022.lrec-1.30)
_Proceedings of the Thirteenth Language Resources_
_and Evaluation Conference, pages 291–297, Mar-_
seille, France. European Language Resources Association.
Dominic Petrak, Nafise Sadat Moosavi, and Iryna
[Gurevych. 2023. Arithmetic-based pretraining im-](https://doi.org/10.18653/v1/2023.starsem-1.42)
[proving numeracy of pretrained language models. In](https://doi.org/10.18653/v1/2023.starsem-1.42)
_Proceedings of the 12th Joint Conference on Lexical_
_and Computational Semantics (*SEM 2023), pages_
477–493, Toronto, Canada. Association for Computational Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine
Lee, Sharan Narang, Michael Matena, Yanqi Zhou,
Wei Li, and Peter J. Liu. 2020. Exploring the limits
of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(1).
Qiu Ran, Yankai Lin, Peng Li, Jie Zhou, and Zhiyuan
[Liu. 2019. NumNet: Machine reading comprehen-](https://doi.org/10.18653/v1/D19-1251)
[sion with numerical reasoning. In Proceedings of](https://doi.org/10.18653/v1/D19-1251)
_the 2019 Conference on Empirical Methods in Natu-_
_ral Language Processing and the 9th International_
_Joint Conference on Natural Language Processing_
_(EMNLP-IJCNLP), pages 2474–2484, Hong Kong,_
China. Association for Computational Linguistics.
David Saxton, Edward Grefenstette, Felix Hill, and
[Pushmeet Kohli. 2019. Analysing mathematical rea-](https://openreview.net/forum?id=H1gR5iR5FX)
-----
[soning abilities of neural models. In International](https://openreview.net/forum?id=H1gR5iR5FX)
_Conference on Learning Representations._
Rico Sennrich, Barry Haddow, and Alexandra Birch.
[2016. Neural machine translation of rare words with](https://doi.org/10.18653/v1/P16-1162)
[subword units. In Proceedings of the 54th Annual](https://doi.org/10.18653/v1/P16-1162)
_Meeting of the Association for Computational Lin-_
_guistics (Volume 1: Long Papers), pages 1715–1725,_
Berlin, Germany. Association for Computational Linguistics.
Jasivan Sivakumar and Nafise Sadat Moosavi. 2023.
[FERMAT: An alternative to accuracy for numerical](https://aclanthology.org/2023.acl-long.838)
[reasoning. In Proceedings of the 61st Annual Meet-](https://aclanthology.org/2023.acl-long.838)
_ing of the Association for Computational Linguis-_
_tics (Volume 1: Long Papers), pages 15026–15043,_
Toronto, Canada. Association for Computational Linguistics.
[Georgios Spithourakis and Sebastian Riedel. 2018. Nu-](https://doi.org/10.18653/v1/P18-1196)
[meracy for language models: Evaluating and improv-](https://doi.org/10.18653/v1/P18-1196)
[ing their ability to predict numbers. In Proceedings](https://doi.org/10.18653/v1/P18-1196)
_of the 56th Annual Meeting of the Association for_
_Computational Linguistics (Volume 1: Long Papers),_
pages 2104–2115, Melbourne, Australia. Association
for Computational Linguistics.
Avijit Thawani, Jay Pujara, Filip Ilievski, and Pedro
[Szekely. 2021. Representing numbers in NLP: a](https://doi.org/10.18653/v1/2021.naacl-main.53)
[survey and a vision. In Proceedings of the 2021](https://doi.org/10.18653/v1/2021.naacl-main.53)
_Conference of the North American Chapter of the_
_Association for Computational Linguistics: Human_
_Language Technologies, pages 644–656, Online. As-_
sociation for Computational Linguistics.
Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh,
[and Matt Gardner. 2019. Do NLP models know num-](https://doi.org/10.18653/v1/D19-1534)
[bers? probing numeracy in embeddings. In Proceed-](https://doi.org/10.18653/v1/D19-1534)
_ings of the 2019 Conference on Empirical Methods_
_in Natural Language Processing and the 9th Inter-_
_national Joint Conference on Natural Language Pro-_
_cessing (EMNLP-IJCNLP), pages 5307–5315, Hong_
Kong, China. Association for Computational Linguistics.
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017.
[Deep neural solver for math word problems. In Pro-](https://doi.org/10.18653/v1/D17-1088)
_ceedings of the 2017 Conference on Empirical Meth-_
_ods in Natural Language Processing, pages 845–854,_
Copenhagen, Denmark. Association for Computational Linguistics.
Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu,
Adams Wei Yu, Brian Lester, Nan Du, Andrew M.
[Dai, and Quoc V Le. 2022a. Finetuned language](https://openreview.net/forum?id=gEZrGCozdqR)
[models are zero-shot learners. In International Con-](https://openreview.net/forum?id=gEZrGCozdqR)
_ference on Learning Representations._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le,
[and Denny Zhou. 2022b. Chain of thought prompt-](https://openreview.net/forum?id=_VjQlMeSB_J)
[ing elicits reasoning in large language models. In](https://openreview.net/forum?id=_VjQlMeSB_J)
_Advances in Neural Information Processing Systems._
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu,
Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo
[Li, Adrian Weller, and Weiyang Liu. 2024. Meta-](https://openreview.net/forum?id=N8N0hgNDRt)
[math: Bootstrap your own mathematical questions](https://openreview.net/forum?id=N8N0hgNDRt)
[for large language models. In The Twelfth Interna-](https://openreview.net/forum?id=N8N0hgNDRt)
_tional Conference on Learning Representations._
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao
Huang, Huan Sun, Yu Su, and Wenhu Chen. 2024.
[Mammoth: Building math generalist models through](https://openreview.net/forum?id=yLClGs770I)
[hybrid instruction tuning. In The Twelfth Interna-](https://openreview.net/forum?id=yLClGs770I)
_tional Conference on Learning Representations._
Xikun Zhang, Deepak Ramachandran, Ian Tenney,
[Yanai Elazar, and Dan Roth. 2020. Do language](https://doi.org/10.18653/v1/2020.findings-emnlp.439)
[embeddings capture scales? In Findings of the Asso-](https://doi.org/10.18653/v1/2020.findings-emnlp.439)
_ciation for Computational Linguistics: EMNLP 2020,_
pages 4889–4896, Online. Association for Computational Linguistics.
-----
**Appendix**
**A** **Aggregation functions**
Figure 3 shows that F1-score for numbers with up
to 6-digits across six different aggregation functions. The F1-score for max, min, mean and median are all below 5%.
**B** **Datasets**
The datasets’ split is given in Table 3. MAWPS
is a dataset generated by combining different ones
ranging from addition and subtraction to simultaneous equations. The collation of questions is
split to create the train, development and test set.
FERMAT is a large dataset which has a training
and development set automatically generated from
100 templates using different numbers from the following four categories: small integers (less than
1000), large integers (between 1000 and 100000),
1 decimal place and 2 decimal place numbers. The
test set is independently generated from two maths
worded problem datasets, and then augmented to
create 21 test sets of which we use 11.
|Datasets|Train Dev Test|
|---|---|
|MAWPS FERMAT|1500 373 500 200000 1000 1111x11|
Table 3: Train, development, and test splits of MAWPS
and FERMAT.
**C** **Hyperparameters**
All experiments were conducted using an Nvidia
Tesla A100 with 80G and with a weight decay
of 0.005, warm-up of 100, float32 and 3 generation beams, max input length = 128, max target
length=16, and seed=42. Due to limited computational resources, a full grid search of hyperparameter was impossible, however, we do a lambda
search in the range 0.4 to 0.8 in 0.05 increments.
Specific hyperparameters as well as computation
time for dataset and model combinations can be
found in Table 4.
**D** **Character Error Rate (CER) Results**
Table 5 presents the character error rate (CER) for
incorporating the weighted aggregation as an input
token and in the auxiliary loss, for all three models.
-----
## FLAN large
100
75
1 digit
2 digit
50 3 digit
4 digit
Average F1
5 digit
25
6 digit
all
0
max min mean median sum Weighted
Aggregation
Figure 3: Average F1-score of FLAN large layer 1 numbers using max, min, median, mean sum and our weighted
aggregation function with neighbourhood of 10.
|Datasets|Models|Learning Rate|Epochs|Batch Size|Lambda|Training Time|
|---|---|---|---|---|---|---|
|MAWPS|BART base|1.00E-04|150|128|0.6|1h|
||FLAN base||150|64|0.6|1h|
||FLAN large||100|16|0.65|1.5h|
|FERMAT|BART base|1.00E-05|50|128|0.6|37h|
||FLAN base||50|64|0.65|48h|
||FLAN large||50|16|0.4|87h|
Table 4: Specific hyperparameters for MAWPS and FERMAT based on the models trained. Training time is also
provided as a rounded figure.
|Incorporating Weights (CER %)|Col2|MAWPS|FERMAT|Col5|
|---|---|---|---|---|
||||Original Commuted Integers 0 to 1000 2-digit integers 3-digit integers 4-digit integers 1000+ 1000+ same 1dp random 2dp random|a+b a-b a*b a/b|
|BART base (140M)|Digits [AGG] + Digits Digits + Aux Loss|77.73 -1.79 +0.76|89.59 90.32 72.87 71.93 72.25 74.04 77.01 50.29 54.42 62.23 -12.40 -0.83 +0.46 +0.51 +1.19 -0.16 -0.44 +0.94 -1.38 -1.28 -1.88 -0.53 +0.17 +0.20 +0.34 -1.06 -0.53 -1.89 -1.59 -1.78|50.31 74.12 60.73 75.51 +3.08 -1.58 +1.21 -2.22 -2.45 -0.23 -2.75 +0.26|
|FLAN base (250M)|Digits [AGG] + Digits Digits + Aux Loss|67.71 -0.98 -1.54|75.32 169.52 67.37 67.68 67.94 67.86 68.86 50.95 43.77 47.80 -1.40 -0.29 -1.11 -1.41 -1.19 -1.67 -0.96 +1.26 -1.33 -0.39 -0.83 -1.09 -1.09 -1.15 -0.80 -1.39 -1.23 -2.09 -1.82 -0.30|39.84 87.81 60.96 91.52 -1.64 -1.94 -0.17 -0.50 -1.25 -3.15 -0.72 -0.93|
|FLAN large (780M)|Digits [AGG] + Digits Digits + Aux Loss|63.13 -2.57 -3.45|69.71 76.46 63.02 62.69 63.53 63.96 66.67 49.90 37.63 42.31 -44.77 -10.81 -1.02 -0.10 -1.63 -0.65 -0.89 +1.78 -0.93 -1.23 -45.42 -2.72 -1.20 -0.24 -1.09 -1.23 -1.31 -2.57 -1.11 -1.27|39.00 58.84 52.84 70.49 -6.16 -7.80 -5.49 -7.19 -3.47 -6.14 -2.93 -4.74|
Table 5: Results in Character Error Rate (CER) as a percentage over the target string with change from baseline
after including aggregate embeddings in input embedding ([AGG] + Digits) and auxiliary loss (Digits + Aux Loss)
for BART base, FLAN base and FLAN large. With CER, lower CER indicates a better performance, green highlight
reduced CER i.e. negative change, and red the opposite. Darker shades of green and red indicate an absolute change
greater than 1%.
-----
| [
"Jasivan Alex, Sivakumar",
"Nafise Sadat, Moosavi"
] | 2024-06-30T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2407.00894 | https://arxiv.org/abs/2407.00894 | https://www.semanticscholar.org/paper/d0d3536eb68ce4b1f3652c381bb6df830fdc270f |
Improving LLM Reasoning with Multi-Agent Tree-of-Thought Validator Agent | Multi-agent strategies have emerged as a promising approach to enhance the reasoning abilities of Large Language Models (LLMs) by assigning specialized roles in the problem-solving process. Concurrently, Tree of Thoughts (ToT) methods have shown potential in improving reasoning for complex question-answering tasks by exploring diverse reasoning paths. A critical limitation in multi-agent reasoning is the 'Reasoner' agent's shallow exploration of reasoning paths. While ToT strategies could help mitigate this problem, they may generate flawed reasoning branches, which could harm the trustworthiness of the final answer. To leverage the strengths of both multi-agent reasoning and ToT strategies, we introduce a novel approach combining ToT-based Reasoner agents with a Thought Validator agent. Multiple Reasoner agents operate in parallel, employing ToT to explore diverse reasoning paths. The Thought Validator then scrutinizes these paths, considering a Reasoner's conclusion only if its reasoning is valid. This method enables a more robust voting strategy by discarding faulty reasoning paths, enhancing the system's ability to tackle tasks requiring systematic and trustworthy reasoning. Our method demonstrates superior performance compared to existing techniques when evaluated on the GSM8K dataset, outperforming the standard ToT strategy by an average 5.6\% across four LLMs. | This work introduces a novel approach combining ToT-based Reasoner agents with a Thought Validator agent, enabling a more robust voting strategy by discarding faulty reasoning paths, enhancing the system's ability to tackle tasks requiring systematic and trustworthy reasoning. | ## Improving LLM Reasoning with Multi-Agent Tree-of-Thought Validator Agent
**Fatemeh Haji**
Secure AI and Autonomy Lab
University of Texas at San Antonio
```
[email protected]
```
**Maryam Tabar**
University of Texas at San Antonio
```
[email protected]
```
**Mazal Bethany**
Secure AI and Autonomy Lab
University of Texas at San Antonio
```
[email protected]
```
**Jason Chiang**
Peraton Labs
```
[email protected]
```
**Anthony Rios**
University of Texas at San Antonio
```
[email protected]
```
**Peyman Najafirad**
Secure AI and Autonomy Lab
University of Texas at San Antonio
```
[email protected]
```
**Abstract**
Multi-agent strategies have emerged as a promising approach to enhance the reasoning abilities of Large Language Models (LLMs) by assigning specialized roles
in the problem-solving process. Concurrently, Tree of Thoughts (ToT) methods
have shown potential in improving reasoning for complex question-answering tasks
by exploring diverse reasoning paths. A critical limitation in multi-agent reasoning is the ’Reasoner’ agent’s shallow exploration of reasoning paths. While ToT
strategies could help mitigate this problem, they may generate flawed reasoning
branches, which could harm the trustworthiness of the final answer. To leverage the
strengths of both multi-agent reasoning and ToT strategies, we introduce a novel
approach combining ToT-based Reasoner agents with a Thought Validator agent.
Multiple Reasoner agents operate in parallel, employing ToT to explore diverse
reasoning paths. The Thought Validator then scrutinizes these paths, considering
a Reasoner’s conclusion only if its reasoning is valid. This method enables a
more robust voting strategy by discarding faulty reasoning paths, enhancing the
system’s ability to tackle tasks requiring systematic and trustworthy reasoning. Our
method demonstrates superior performance compared to existing techniques when
evaluated on the GSM8K dataset, outperforming the standard ToT strategy by an
average 5.6% across four LLMs.
**1** **Introduction**
LLMs have demonstrated remarkable capabilities across various tasks, yet they often struggle with
complex reasoning, particularly in situations where human-like reasoning capabilities are crucial Zelikman et al. [2023]. To address this, multi-agent strategies have emerged as a promising method
to enhance LLM reasoning. Using multi-agent strategies, multiple specialized agents collaborate,
with each agent assigned distinct roles in the problem-solving process. By allowing different agents
to tackle various aspects of a task, we they are able to utilize specialized expertise to each phase of
the task to improve performance Guo et al. [2024]. This has been shown to improve the quality of
answers in reasoning-intensive domains. However, despite the promise of multi-agent reasoning, one
Preprint. Under review.
-----
critical limitation remains: Reasoner agents often explore reasoning paths shallowly, failing to fully
consider the complexity of the problem space. Tree of Thoughts (ToT) methods offer a potential
solution to this limitation by encouraging a more systematic exploration of multiple reasoning paths.
ToT allows LLMs to simulate human-like thought processes by branching out and examining various
possibilities before converging on a solution Yao et al. [2023]. By enabling LLMs to consider diverse
reasoning pathways, ToT can mitigate the shallow exploration issue seen in some other multi-agent
systems. However, while ToT encourages exploration, it also introduces a new challenge: the risk of
generating flawed reasoning branches. Without proper validation, these erroneous paths could lower
the overall trustworthiness of the final answer.
To address these challenges, we propose a novel approach that combines the strengths of multi-agent
reasoning with ToT while introducing a critical validation mechanism. In our framework, multiple
Reasoner agents operate in parallel, each employing ToT to explore different reasoning paths. These
Reasoner agents are supported by a Thought Validator agent, which evaluates the proposed reasoning
branches produced by the Reasoners. The Validator discards faulty reasoning branches, ensuring that
only logically sound paths contribute to the final decision. This approach allows for both exploration
of the problem space and increased reliability of the answers by eliminating flawed reasoning paths
before they can impact the outcome. Our proposed approach is evaluated on the GSM8K dataset
Cobbe et al., a benchmark known for its challenging arithmetic reasoning tasks. Results show that
our method outperforms existing techniques, demonstrating improved accuracy and trustworthiness
in solving complex reasoning problems.
Our key contributions are as follows:
- The integration of ToT into a multi-agent reasoning framework.
- The introduction of a novel Thought Validator agent that evaluates and filters reasoning
branches produced by Reasoner agents.
- Experimental results on the GSM8K dataset demonstrating improved accuracy and performance in complex arithmetic reasoning tasks compared to existing techniques.
**2** **Background**
**2.1** **Multi-agent Systems for Enhancing LLM Reasoning**
By distributing tasks among multiple agents, multi-agent systems aim to improve performance on
reasoning-based tasks Du et al., Talebirad and Nadiri. For example, CausalGPT Tang et al. introduces
evaluative layers to verify the reasoning branches produced by LLMs, while the Counterfactual
Multi-Agent Debate (CFMAD) framework Fang et al. provides an innovative method to mitigate
the potentially biased reasoning branches of LLMs by assigning agents to fixed roles to generate
justifications from specific perspectives, and a third-party judge evaluates these arguments to decide
the most rational outcome. Despite these advancements, current methods still suffer from shallow
sampling of reasoning paths or majority vote schemes. These techniques can overlook critical inferential errors and are especially prone to early-stage errors, which can propagate through multiple rounds
of reasoning. This limitation is especially problematic in complex scenarios where systematically
evaluating and eliminating incorrect options is crucial. Recent research has demonstrated that LLMs
can effectively identify both factual and inferential mistakes Li et al., making the integration of a
dedicated verification component in multi-agent systems particularly beneficial for assessing the
faithfulness and reliability of generated solutions.
**2.2** **The Role of the ’Reasoner’ Agent in Multi-Agent Frameworks**
Within multi-agent architectures, the Reasoner agent plays a pivotal role. It serves as the system’s core
decision-maker, ensuring that valid conclusions are derived from the reasoning process. However,
Reasoner agents in current frameworks often struggle to systematically evaluate and eliminate
incorrect reasoning paths, particularly in more challenging problem spaces. This bottleneck highlights
the need for more advanced reasoning strategies to be integrated into the Reasoner agent. CFMAD
has also shown that checking all available options can enhance the overall ability of the multi-agent
systems Fang et al..
-----
Figure 1: The process starts with a query being processed by multiple Reasoner agents. Each Reasoner
agent explores various reasoning paths using the ToT strategy, which includes decomposition of
thought steps, generation of paths, state evaluation, and path selection. The Thought Validator agent
then evaluates the proposed reasoning branches, followed by a consensus-based voting mechanism.
If consensus is not reached, a new reasoning round is initiated with feedback incorporation.
**3** **Method**
We propose a novel multi-agent reasoning framework that integrates the ToT strategy with a robust
validation mechanism to enhance complex problem-solving. Our approach employs multiple concurrent Reasoner agents, each using ToT to explore diverse reasoning paths. At each tree level, a
state evaluation agent scores the generated reasoning, with the highest-scored reasoning expanded in
the subsequent level. Upon reaching the final tree level, each Reasoner agent produces a proposed
reasoning chain composed of the chain of the highest-scored reasoning in the tree. These reasoning
branches are then independently assessed by a Thought Validator agent to either validate or invalidate
the proposed reasoning. We then use a consensus-based voting mechanism, where verified reasoning
paths contribute to the vote, and invalidated ones are abstained. If consensus is not reached, we
initiate a new reasoning round, incorporating feedback from the Thought Validator on the reasoning
branch to refine the next reasoning round. Our proposed framework is illustrated in Figure 1.
**3.1** **Reasoner Agent**
The Reasoner agents in our multi-agent architecture employ the ToT strategy, which enables structured
exploration of reasoning paths in parallel. ToT improves upon Chain of Thought (CoT) prompting Wei
et al. by enabling parallel exploration and dynamic path evaluation. While CoT follows a single,
linear path, ToT actively explores and evaluates multiple reasoning paths, making it better suited
for complex problems that benefit from diverse thought exploration Yao et al.. We formalize the
reasoning process as a search over a tree of states. Let Q denote the input prompt or query, and each
Reasoner agent Ri constructs a Tree of Thoughts Ti(Q), where each node represents a state st, which
is a distinct point along a reasoning path. A state st consists of the problem Q and a sequence of
intermediate reasoning steps up to that point z1, z2, . . ., zt, with each step zj being a coherent unit of
reasoning generated by the language model.
_st = [Q, z1, z2, . . ., zt]_
**Step 1: Decomposition and Generation of Thought Paths**
The process is decomposed into intermediate thought steps using LLM prompting. For each state
_st, the next potential thought zt+1 is generated by the Thought Generator G(pθ, st, k), where pθ_
denotes the language model. The Reasoner agents explore multiple branches from any given state st,
corresponding to different continuations of the reasoning process. This approach ensures that the
exploration process covers a diverse range of possible solutions, avoiding the linearity of CoT and
allowing reconsideration of earlier steps.
**Step 2: State Evaluation and Path Selection**
To evaluate each state st, we introduce a state evaluation agent that assigns a score to the generated
reasoning. This evaluation can be implemented through prompting, where the state evaluation agent
assesses the quality and potential of each reasoning step. At each tree level, the highest-scored
reasoning is selected for expansion in the subsequent level. This process continues until the final tree
-----
level is reached. The selection mechanism can be formalized as:
_s[∗]t+1_ [= arg max]st+1 _[V][ (][p][θ][, s][t][+1][)]_
where V (pθ, st+1) is the evaluation score assigned by the state evaluation agent.
**Step 3: Reasoning Branch Construction**
Upon reaching the final tree level, each Reasoner agent constructs a proposed reasoning chain. This
chain is composed of the highest-scored reasoning steps from each level of the tree. Formally, the
reasoning branch Ci for Reasoner Agent Ri can be represented as:
_Ci = [z1[,]_ _[z]2[,]_ _[. . ., z]T[∗]_ []]
where zt[∗] [is the highest-scored reasoning step at level][ t][ of the tree.]
**3.2** **Thought Validator Agent**
The Thought Validator agent, inspired by the role of a teacher providing feedback to students, plays
a crucial role in assessing the validity of the reasoning branches produced by the Reasoner agents.
Much like a teacher helping students refine their answers, this agent independently evaluates each
proposed reasoning branch to either validate or invalidate it. For each reasoning branch Ci, the
Thought Validator agent performs several key steps. It begins with a logical consistency check to
evaluate the internal logic and coherence of the reasoning chain, similar to how a teacher might
assess a student’s argument. This is followed by a factual accuracy assessment to verify any factual
claims made within the reasoning, akin to a teacher fact-checking a student’s work. Finally, the agent
conducts a completeness evaluation to ensure that the reasoning branch adequately addresses all
aspects of the original query, much as a teacher would ensure a student’s response fully answers the
question. Through this comprehensive process, the Thought Validator agent ensures the robustness
and reliability of the reasoning branches, ultimately helping to improve the quality of the final output.
Based on these assessments, the Thought Validator assigns a binary validation status Vi to each
reasoning chain:
1 if Ci is validated
_Vi =_
0 if Ci is invalidated
**Consensus-Based Voting Mechanism**
After the validation process, we employ a consensus-based voting mechanism to determine the
final outcome. Only validated reasoning branches contribute to the vote, while invalidated ones are
abstained. The consensus solution S[∗] can be represented as:
_S[∗]_ = arg max
_Vi_ _δ(S = Si)_
_i=1_ _·_
X
Where Si represents the solution derived from reasoning branch Ci, Vi is the validation status of Ci,
_δ is an indicator function that returns 1 if the solutions match and 0 otherwise, and N is the total_
number of Reasoner agents.
**3.3** **Iterative Refinement**
If consensus is not reached (i.e., no solution receives a majority of validated votes), we initiate a new
reasoning round. This refinement process incorporates feedback from the Thought Validator on the
reasoning branches to guide the next iteration. This iterative process continues until consensus is
reached or a predefined maximum number of iterations is exceeded.
**4** **Experiments**
**Dataset: GSM8K Cobbe et al. is a dataset of 8.5K high-quality linguistically diverse grade school**
math word problems created by human problem writers. GSM8K is widely recognized as a benchmark
for testing arithmetic reasoning in LLMs. The dataset comprises complex, multi-step mathematical
word problems that challenge both the reasoning and computation capabilities of LLMs. Our
-----
Table 1: Performance comparison of our Multi-agent ToT Reasoner with a Thought Validator
|Method|Gpt-3.5-turbo|Gpt-4o-mini|Llama3.1-8B|Llama3.1-70B|
|---|---|---|---|---|
|Standard IO CoT ToT MA ToT with Thought Validator|60.0 68.0 75.4 84.2|91.2 89.2 91.6 92.2|75.4 76.0 80.2 89.0|93.0 89.4 92.8 94.8|
compared to other LLM reasoning methods on the GSM8K reasoning dataset, evaluated across
different LLMs.
experiments utilized a random subset of 500 samples from the GSM8K dataset as the test set.
Following other works on LLM reasoning on the GSM8K dataset, we evaluated the performance of
reasoning approaches using accuracy as the primary metric Yao et al. [2023].
**Implementation Details: Our experiments cover two versions of OpenAI’s GPT models and two**
versions of Meta’s Llama 3.1 models Dubey et al. [2024]. Specifically, we use GPT-3.5 Turbo OpenAI
[2024b] and GPT-4o-mini OpenAI [2024a] from OpenAI, accessed through their API. For the Llama
3.1 models, we employ the 8B NousResearch [2024b] and 70B NousResearch [2024a] parameter
variants. These models offer a range of capabilities and sizes, allowing us to explore different
trade-offs between model complexity and performance in our experiments. We conduct all of our
experiments on four Nvidia DGX A100 80 GB GPUs, and running all these experiments in parallel
took about 18 hours. For our baseline comparisons, we employed several prompting strategies. We
began with input-output (IO) prompting, a standard approach that transforms a problem input into
an output by conditioning on task instructions. We then implemented more advanced techniques,
including Chain of Thought (CoT) Wei et al. and the original ToT strategy Yao et al. [2023]. For
the ToT implementation, we followed the parameters used by Yao et al. [2023] on the GSM8K
dataset, setting a tree depth of 2 and a width of 5. To ensure consistency across our baseline models,
we used a temperature of 1 and a top_p value of 1 for IO, CoT, and ToT. However, for our novel
Thought Validator Agent, we adjusted these parameters to a temperature of 0.5 and a top_p of 0.4.
This adjustment was made to increase the determinism of the Thought Validator’s outputs, as its role
is to validate existing reasoning rather than generate creative solutions.
**Experimental Results:**
Table 1 shows the performance comparison of these different methods. Our results show that our
proposed multi-agent ToT reasoner with a Thought Validator agent outperforms the other reasoning
methods, showing an improvement of 8.8 percentage points over ToT for GPT-3.5-turbo (from 75.4%
to 84.2%). We also see that while ToT and other techniques showed significant improvements over
standard IO prompting when the LLM struggled with a task (such as with GPT 3.5 Turbo and Llama
3.1 8B), the performance gap narrowed considerably for problems where the model with standard
IO prompting already exhibited strong capabilities (such as with GPT 4o mini, and Llama 3.1 70B).
This observation suggests that the efficacy of ToT may be dependent on the complexity of the task
and capability of the model, with its benefits more pronounced in challenging reasoning tasks that
push the boundaries of the model’s baseline abilities. The effectiveness of ToT in these scenarios can
be likened to a teacher providing feedback to a struggling student, guiding them through complex
problems, and reinforcing correct thought processes.
**5** **Limitations and Conclusion**
While the ToT approach has shown promise in enhancing reasoning capabilities, our observations
of the outputs and reasoning trees revealed several limitations that warrant further investigation. A
key challenge we observed is the lack of dynamic exploration in the search space. The ToT method
proposed by Yao et al. [2023] employs a fixed width and depth for the tree structure, which our analysis
showed can lead to suboptimal performance in certain scenarios. For instance, when examining the
reasoning trees for problems that could be solved efficiently without extensive reasoning, we found
that the predetermined depth of exploration often introduced unnecessary complexity, potentially
leading to errors or confusion in the reasoning process. Conversely, for problems requiring more
in-depth analysis, we observed that the fixed depth proved insufficient, limiting the model’s ability to
fully explore complex reasoning paths. Additionally, our proposed approach, while addressing some
-----
of these limitations, is computationally expensive due to the use of the ToT method, which requires
significant resources for generating and evaluating multiple thought paths.
In conclusion, we have presented a novel approach that combines the ToT strategy with a multi-agent
reasoning framework enhanced by a Thought Validator agent. Our method addresses key limitations
in existing reasoning strategies for LLMs by enabling a more systematic exploration of reasoning
paths while simultaneously improving the reliability of generated solutions. Experimental results on
the GSM8K dataset demonstrate that our approach outperforms state-of-the-art methods, particularly
for complex arithmetic reasoning tasks.
**6** **Social Impact Statement**
By improving the depth of reasoning and enabling more systematic option elimination, our approach
could lead to more trustworthy AI applications. However, these advancements also raise ethical
considerations regarding the deployment of highly autonomous reasoning systems, particularly in
high-stakes domains. It is essential to carefully manage the use of such systems to avoid over-reliance
on AI, ensuring that human oversight and accountability remain integral to decision-making processes.
Additionally, the broader societal implications must be monitored to prevent unintended consequences,
such as biases being amplified through algorithmic decision-making or the replacement of human
expertise in fields where nuanced judgment is required.
**References**
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John
[Schulman. Training verifiers to solve math word problems. URL http://arxiv.org/abs/2110.](http://arxiv.org/abs/2110.14168)
```
14168.
```
Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, and Igor Mordatch. Improving
[factuality and reasoning in language models through multiagent debate. URL http://arxiv.](http://arxiv.org/abs/2305.14325)
```
org/abs/2305.14325.
```
Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha
Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models.
_arXiv preprint arXiv:2407.21783, 2024._
Yi Fang, Moxin Li, Wenjie Wang, Hui Lin, and Fuli Feng. Counterfactual debating with preset
[stances for hallucination elimination of LLMs. URL http://arxiv.org/abs/2406.11514.](http://arxiv.org/abs/2406.11514)
Taicheng Guo, Xiuying Chen, Yaqi Wang, Ruidi Chang, Shichao Pei, Nitesh V. Chawla, Olaf Wiest,
and Xiangliang Zhang. Large language model based multi-agents: A survey of progress and
challenges. In Kate Larson, editor, Proceedings of the Thirty-Third International Joint Conference
_on Artificial Intelligence, IJCAI-24, pages 8048–8057. International Joint Conferences on Artificial_
Intelligence Organization, 8 2024. Survey Track.
Junyi Li, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. HaluEval: A large-scale
[hallucination evaluation benchmark for large language models. URL http://arxiv.org/abs/](http://arxiv.org/abs/2305.11747)
```
2305.11747.
```
[NousResearch. Nousresearch/meta-llama-3.1-70b. https://huggingface.co/NousResearch/](https://huggingface.co/NousResearch/Meta-Llama-3.1-70B)
```
Meta-Llama-3.1-70B, 2024a.
```
[NousResearch. Nousresearch/meta-llama-3.1-8b. https://huggingface.co/NousResearch/](https://huggingface.co/NousResearch/Meta-Llama-3.1-8B)
```
Meta-Llama-3.1-8B, 2024b.
```
[OpenAI. GPT-4o-mini. https://platform.openai.com/docs/models/gpt-4o-mini, 2024a.](https://platform.openai.com/docs/models/gpt-4o-mini)
Accessed via API gpt-4o-mini-2024-07-18.
[OpenAI. GPT-3.5 Turbo. https://platform.openai.com/docs/models/gpt-3-5-turbo,](https://platform.openai.com/docs/models/gpt-3-5-turbo)
2024b. Accessed via API gpt-3.5-turbo-0125.
-----
Yashar Talebirad and Amirhossein Nadiri. Multi-agent collaboration: Harnessing the power of
[intelligent LLM agents. URL http://arxiv.org/abs/2306.03314.](http://arxiv.org/abs/2306.03314)
Ziyi Tang, Ruilin Wang, Weixing Chen, Keze Wang, Yang Liu, Tianshui Chen, and Liang Lin.
Towards CausalGPT: A multi-agent approach for faithful knowledge reasoning via promoting
[causal consistency in LLMs. URL http://arxiv.org/abs/2308.11914.](http://arxiv.org/abs/2308.11914)
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le,
and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models. URL
```
http://arxiv.org/abs/2201.11903.
```
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik
Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. URL
```
http://arxiv.org/abs/2305.10601.
```
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan.
Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural
_Information Processing Systems, 36, 2023._
Eric Zelikman, Qian Huang, Gabriel Poesia, Noah Goodman, and Nick Haber. Parsel: Algorithmic
reasoning with language models by composing decompositions. Advances in Neural Information
_Processing Systems, 36:31466–31523, 2023._
**Appendix**
**Experiment Prompts**
In our experiments, we designed a number of carefully crafted prompts to guide language models
during reasoning tasks. Here are the key prompts used and their purposes:
**Standard Input-Output (IO) Prompt**
The standard IO prompt is used as a baseline approach:
```
Answer the following math problem. Your response should
conclude with "the answer is n", where n is a number:
{input}
```
This prompt directly asks the model to solve the math problem and provide the answer in a specific
format.
**Chain of Thought (CoT) Prompt**
The CoT prompt encourages the model to show its reasoning:
```
Answer the following question: {input}
Make a strategy, then write. Your output should be in
the following format:
Strategy:
Your strategy about how to answer the question.
Answer:
Your answer to the question. It should end with
"the answer is n", where n is a number.
```
This prompt explicitly asks the model to formulate a strategy before providing an answer, leading to
a more structured thought process.
-----
**Tree of Thoughts (ToT) Prompt**
Our implementation of ToT is inspired by the approach described by Yao et al. [2023] but with
specific modifications tailored to our multi-agent framework. The ToT method uses the CoT prompt
as a base and applies it iteratively, allowing for branching and exploration of multiple reasoning paths.
Our implementation includes the following components:
1. Thought Generation: We use the ’sample’ method for generating thoughts. This method
uses the CoT prompt as a base but applies it iteratively, allowing for branching and exploration of multiple reasoning paths. The prompt flow for ToT includes:
```
Answer the following question: {input}
Make a strategy, then write. Your output should be in
the following format:
Strategy:
Your strategy about how to answer the question.
Answer:
Your answer to the question. It should end with
"the answer is n", where n is a number.
```
2. State Evaluation: For evaluating the generated thoughts, we employ the ’vote’ method.
This involves using a prompt to assign votes to different reasoning paths:
```
Given an instruction and several choices, decide which
choice is most promising. Analyze each choice in detail,
then conclude in the last line "The best choice is {s}",
where s the integer id of the choice.
```
3. Path Selection: We use the ’greedy’ method for selecting the most promising paths to
expand further. This doesn’t involve a specific prompt but rather selection of the highestscored paths from the evaluation step.
Each step involves multiple API calls to the language model, with the generated thoughts and their
evaluations guiding the exploration of the reasoning space. This approach allows for a dynamic and
adaptive exploration of potential solution paths, enhancing the model’s ability to tackle complex
reasoning tasks.
**Verifier Prompt**
A crucial component of our approach is the Thought Validator agent, which uses the following
prompt:
```
As a critical mathematical reasoning verifier, evaluate
the following thought process, which builds upon previous
steps to reach a final conclusion. Focus on:
1. **Question Relevance**:
- Ensure the entire reasoning process directly
addresses the original question.
- Check if the final answer actually solves what
was asked.
2. **Reasoning Progression**:
- Assess logical flow and consistency, especially
in final steps.
- Verify mathematical operations’ correctness and
appropriateness.
- Identify logical fallacies or unjustified leaps.
```
-----
```
3. **Factual Accuracy**:
- Check accuracy and relevance of facts and numbers,
particularly in final calculations.
- Spot any misuse of mathematical concepts.
4. **Completeness**:
- Ensure all necessary aspects are addressed,
particularly in concluding thoughts.
- Identify significant omissions that could affect
the result.
5. **Critical Assessment**:
- Actively seek potential errors or weak points.
- Don’t hesitate to invalidate reasoning if
significant issues are found.
Provide a holistic evaluation of the entire reasoning
process, from start to finish. Conclude with
"Reasoning is Valid" only if the entire process is
relevant, logically sound, and error-free. Otherwise,
conclude with "Reasoning is Invalid" and briefly
explain why.
```
This comprehensive prompt guides the Verifier in thoroughly assessing the validity of the reasoning
process, ensuring that the final answer is not only correct but also logically sound and relevant to the
original question.
**Examples**
To demonstrate the effectiveness of our approach, we show a challenging example from the GSM8K
dataset using the gpt-3.5-turbo model. Using this example, we can see how the Thought Validator
Agent prevents incorrect reasoning from the ToT Reasoner agents from leading to errors in the final
answer.
**Problem 1: Last month, Tasha made $80 from selling lemonade and mowing lawns. The first week,**
_she mowed Kamala’s lawn three times as many times as Joe’s. The following week, she mowed Alba’s_
_lawn five times as Joe’s. If Joe paid Tasha $6 for her work, how much did she make from lemonade_
_sales? Answer: 26._
We have three rounds, each involving three Reasoner agents (R1, R2, and R3). After each round, the
Thought Validator Agent evaluates their reasoning.
Table 2: (Round 1) Reasoner Outputs and Verification Status
|Reasoner|Reasoning Summary|Final Answer|Verified|
|---|---|---|---|
|R1|Incorrect reasoning. Algebraic error leads to the incorrect conclusion that Tasha did not mow Joe’s lawn.|$80|False|
|R2|Correct strategy, but incorrect total earnings from mowing lawns ($60x instead of $48x).|$80|False|
|R3|Accurate reasoning. Correctly calculates the total earnings from mowing lawns and finds lemonade income to be $26.|$26|True|
**Reasoner** **Reasoning Summary** **Final Answer** **Verified**
R1 Incorrect reasoning. Algebraic error leads to the incorrect $80 False
conclusion that Tasha did not mow Joe’s lawn.
R2 Correct strategy, but incorrect total earnings from mowing $80 False
lawns ($60x instead of $48x).
R3 Accurate reasoning. Correctly calculates the total earnings $26 True
from mowing lawns and finds lemonade income to be $26.
**Round 1 Analysis**
In Round 1, Reasoner 1 (R1) incorrectly calculated that Tasha did not mow Joe’s lawn, leading to
an invalid final answer of $80. Reasoner 2 (R2) correctly identified the structure of the problem but
made a calculation error, arriving at $80. Reasoner 3 (R3) provided the correct reasoning and final
answer of $26, which was verified as valid. However we have not reached an agreement since the
only valid answer is the R3 response.
-----
Round 1 Conclusion: No Consensus Reached.
Table 3: (Round 2) Reasoner Outputs and Verification Status
|Reasoner|Reasoning Summary|Final Answer|Verified|
|---|---|---|---|
|R1|Repeated the previous error, miscalculating Tasha’s in- come from lawn mowing.|$80|False|
|R2|Corrected earlier miscalculation but again used the wrong mowing total, leading to an incorrect conclusion.|$80|False|
|R3|Maintained the correct reasoning and final answer ($26) as in Round 1.|$26|True|
**Reasoner** **Reasoning Summary** **Final Answer** **Verified**
R1 Repeated the previous error, miscalculating Tasha’s in- $80 False
come from lawn mowing.
R2 Corrected earlier miscalculation but again used the wrong $80 False
mowing total, leading to an incorrect conclusion.
R3 Maintained the correct reasoning and final answer ($26) $26 True
as in Round 1.
**Round 2 Analysis**
In Round 2, Reasoner 1 (R1) repeated its earlier algebraic mistake. Reasoner 2 (R2) adjusted its
calculations but still produced an incorrect final answer. Reasoner 3 (R3) again provided the correct
reasoning and final answer of $26.
Round 2 Conclusion: No Consensus Reached.
Table 4: (Round3) Reasoner Outputs and Verification Status
|Reasoner|Reasoning Summary|Final Answer|Verified|
|---|---|---|---|
|R1|Corrects earlier algebraic error, but still does not address the lemonade sales correctly.|$80|False|
|R2|Adjusts previous mistake but there is a slight issue in the final calculation but the Validator Agent was not able to detect it.|$32|False|
|R3|Maintained the correct reasoning and final answer ($26) as in Round 1.|$26|True|
**Reasoner** **Reasoning Summary** **Final Answer** **Verified**
R1 Corrects earlier algebraic error, but still does not address $80 False
the lemonade sales correctly.
R2 Adjusts previous mistake but there is a slight issue in the $32 False
final calculation but the Validator Agent was not able to
detect it.
R3 Maintained the correct reasoning and final answer ($26) $26 True
as in Round 1.
**Round 3 Analysis**
In Round 3, Reasoner 1 corrected its earlier algebraic errors but still provided an invalid answer ($80).
Reasoner 2 finally corrected its calculations, but still has a small issue in final answer and reach to
$32. Reasoner 3 remained consistent and accurate, providing the correct answer of $26.
Final Conclusion: Most frequent valid answer is $26. Final answer: 26
**Problem 2: Bob is in charge of doing laundry for a large hotel. Each room has two sheets, one**
_comforter, twice as many pillow cases as sheets and twice as many towels as pillow cases. How many_
_pieces of laundry are there in 80 rooms? Answer: 26._
Table 5: (Round 1) Reasoner Outputs and Verification Status
|Reasoner|Reasoning Summary|Final Answer|Verified|
|---|---|---|---|
|R1|Accurate breakdown of laundry items per room and multi- plication across 80 rooms. Correct total of 1200 pieces of laundry.|1200|True|
|R2|CCorrect approach, but overestimated the number of pil- lowcases, leading to an incorrect total of 1280 pieces of laundry.|1280|False|
|R3|Correct breakdown of laundry per room, yielding the cor- rect total of 1200 pieces of laundry.|1200|True|
**Reasoner** **Reasoning Summary** **Final Answer** **Verified**
R1 Accurate breakdown of laundry items per room and multi- 1200 True
plication across 80 rooms. Correct total of 1200 pieces of
laundry.
R2 CCorrect approach, but overestimated the number of pil- 1280 False
lowcases, leading to an incorrect total of 1280 pieces of
laundry.
R3 Correct breakdown of laundry per room, yielding the cor- 1200 True
rect total of 1200 pieces of laundry.
**Round 1 Analysis**
In Round 1, the Thought Validator Agent evaluated the responses from all three Reasoners. Both
Reasoner 1 (R1) and Reasoner 3 (R3) provided correct and consistent reasoning, each arriving at
-----
the total of 1200 pieces of laundry, which was validated as accurate. However, Reasoner 2 (R2)
overestimated the number of pillowcases, leading to an incorrect answer of 1280 pieces, which was
marked as invalid.
Since two verified Reasoners (R1 and R3) agreed on the correct answer, the final result of 1200 pieces
of laundry was confidently returned.
Round 1 Conclusion: At least two verified reasoners agree. Final Answer: 1200
-----
| [
"Mazal, Bethany",
"Fatemeh, Haji",
"Maryam, Tabar",
"Jason, Chiang",
"Anthony, Rios",
"Peyman, Najafirad"
] | 2024-09-17T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2409.11527v1 | https://arxiv.org/abs/2409.11527 | https://www.semanticscholar.org/paper/69ba84363777acefbdfda225204430b232560c72 |
Incorporating Graph Attention Mechanism into Geometric Problem Solving Based on Deep Reinforcement Learning | In the context of online education, designing an automatic solver for geometric problems has been considered a crucial step towards general math Artificial Intelligence (AI), empowered by natural language understanding and traditional logical inference. In most instances, problems are addressed by adding auxiliary components such as lines or points. However, adding auxiliary components automatically is challenging due to the complexity in selecting suitable auxiliary components especially when pivotal decisions have to be made. The state-of-the-art performance has been achieved by exhausting all possible strategies from the category library to identify the one with the maximum likelihood. However, an extensive strategy search have to be applied to trade accuracy for ef-ficiency. To add auxiliary components automatically and efficiently, we present deep reinforcement learning framework based on the language model, such as BERT. We firstly apply the graph attention mechanism to reduce the strategy searching space, called AttnStrategy, which only focus on the conclusion-related components. Meanwhile, a novel algorithm, named Automatically Adding Auxiliary Components using Reinforcement Learning framework (A3C-RL), is proposed by forcing an agent to select top strategies, which incorporates the AttnStrategy and BERT as the memory components. Results from extensive experiments show that the proposed A3C-RL algorithm can substantially enhance the average precision by 32.7% compared to the traditional MCTS. In addition, the A3C-RL algorithm outperforms humans on the geometric questions from the annual University Entrance Mathematical Examination of China. | A novel algorithm, named Automatically Adding Auxiliary Components using Reinforcement Learning framework (A3C-RL), is proposed by forcing an agent to select top strategies, which incorporates the AttnStrategy and BERT as the memory components. | [
"Liang, Xu",
"Shengyuan, Yan",
"Gongqi, Lin",
"Xiuqin, Zhong",
"Hongguang, Fu",
"Siwen, Jiang",
"Lei, Huang",
"Wei, Fang"
] | 2024-03-14T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2403.14690v1 | https://arxiv.org/abs/2403.14690 | https://www.semanticscholar.org/paper/77867ba4efa44716c19641e590e42959b3bf4fde |
|
InfiMM-WebMath-40B: Advancing Multimodal Pre-Training for Enhanced Mathematical Reasoning | Pre-training on large-scale, high-quality datasets is crucial for enhancing the reasoning capabilities of Large Language Models (LLMs), especially in specialized domains such as mathematics. Despite the recognized importance, the Multimodal LLMs (MLLMs) field currently lacks a comprehensive open-source pre-training dataset specifically designed for mathematical reasoning. To address this gap, we introduce InfiMM-WebMath-40B, a high-quality dataset of interleaved image-text documents. It comprises 24 million web pages, 85 million associated image URLs, and 40 billion text tokens, all meticulously extracted and filtered from CommonCrawl. We provide a detailed overview of our data collection and processing pipeline. To demonstrate the robustness of InfiMM-WebMath-40B, we conducted evaluations in both text-only and multimodal settings. Our evaluations on text-only benchmarks show that, despite utilizing only 40 billion tokens, our dataset significantly enhances the performance of our 1.3B model, delivering results comparable to DeepSeekMath-1.3B, which uses 120 billion tokens for the same model size. Nevertheless, with the introduction of our multi-modal math pre-training dataset, our models set a new state-of-the-art among open-source models on multi-modal math benchmarks such as MathVerse and We-Math. We release our data at https://huggingface.co/datasets/Infi-MM/InfiMM-WebMath-40B. | This work introduces InfiMM-WebMath-40B, a high-quality dataset of interleaved image-text documents that significantly enhances the performance of the 1.3B model, and sets a new state-of-the-art among open-source models on multi-modal math benchmarks such as MathVerse and The authors-Math. | [
"Xiaotian, Han",
"Yiren, Jian",
"Ran, He",
"Xuefeng, Hu",
"Haogeng, Liu",
"Yiqi, Wang",
"Qihang, Fan",
"Yuang, Ai",
"Huaibo, Huang",
"Zhenheng, Yang",
"Quanzeng, You"
] | 2024-09-19T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.12568 | https://arxiv.org/abs/2409.12568 | https://www.semanticscholar.org/paper/6c9986551849422223eb08deff01a86a1040b957 |
|
Instance-adaptive Zero-shot Chain-of-Thought Prompting | Zero-shot Chain-of-Thought (CoT) prompting emerges as a simple and effective strategy for enhancing the performance of large language models (LLMs) in real-world reasoning tasks. Nonetheless, the efficacy of a singular, task-level prompt uniformly applied across the whole of instances is inherently limited since one prompt cannot be a good partner for all, a more appropriate approach should consider the interaction between the prompt and each instance meticulously. This work introduces an instance-adaptive prompting algorithm as an alternative zero-shot CoT reasoning scheme by adaptively differentiating good and bad prompts. Concretely, we first employ analysis on LLMs through the lens of information flow to detect the mechanism under zero-shot CoT reasoning, in which we discover that information flows from question to prompt and question to rationale jointly influence the reasoning results most. We notice that a better zero-shot CoT reasoning needs the prompt to obtain semantic information from the question then the rationale aggregates sufficient information from the question directly and via the prompt indirectly. On the contrary, lacking any of those would probably lead to a bad one. Stem from that, we further propose an instance-adaptive prompting strategy (IAP) for zero-shot CoT reasoning. Experiments conducted with LLaMA-2, LLaMA-3, and Qwen on math, logic, and commonsense reasoning tasks (e.g., GSM8K, MMLU, Causal Judgement) obtain consistent improvement, demonstrating that the instance-adaptive zero-shot CoT prompting performs better than other task-level methods with some curated prompts or sophisticated procedures, showing the significance of our findings in the zero-shot CoT reasoning mechanism. | This work introduces an instance-adaptive prompting algorithm as an alternative zero-shot CoT reasoning scheme by adaptively differentiating good and bad prompts, and proposes an instance-adaptive prompting strategy (IAP) for zero-shot CoT reasoning. | [
"Xiaosong, Yuan",
"Chen, Shen",
"Wenxiao, Wang",
"Shaotian, Yan",
"Xiaofeng, Zhang",
"Liang, Xie",
"Jieping, Ye",
"Renchu, Guan",
"Ying, Wang"
] | 2024-09-30T00:00:00 | NeurIPS 2024 | true | 0 | 0 | null | http://arxiv.org/abs/2409.20441 | https://arxiv.org/abs/2409.20441 | https://www.semanticscholar.org/paper/d0da6b0d60d6400e24368d80c5412026a403997e |
|
Integrating graph neural networks into cvc5 | N/A | null | [
"Jelle, Piepenbrock",
"Josef, Urban",
"Jan, Jakub",
"Mikolasˇ, Janota"
] | 2024-09-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
|
Interpretable Contrastive Monte Carlo Tree Search Reasoning | We propose SC-MCTS*: a novel Monte Carlo Tree Search (MCTS) reasoning algorithm for Large Language Models (LLMs), significantly improves both reasoning accuracy and speed. Our motivation comes from: 1. Previous MCTS LLM reasoning works often overlooked its biggest drawback--slower speed compared to CoT; 2. Previous research mainly used MCTS as a tool for LLM reasoning on various tasks with limited quantitative analysis or ablation studies of its components from reasoning interpretability perspective. 3. The reward model is the most crucial component in MCTS, however previous work has rarely conducted in-depth study or improvement of MCTS's reward models. Thus, we conducted extensive ablation studies and quantitative analysis on components of MCTS, revealing the impact of each component on the MCTS reasoning performance of LLMs. Building on this, (i) we designed a highly interpretable reward model based on the principle of contrastive decoding and (ii) achieved an average speed improvement of 51.9% per node using speculative decoding. Additionally, (iii) we improved UCT node selection strategy and backpropagation used in previous works, resulting in significant performance improvement. We outperformed o1-mini by an average of 17.4% on the Blocksworld multi-step reasoning dataset using Llama-3.1-70B with SC-MCTS*. | This work conducted extensive ablation studies and quantitative analysis on components of MCTS, revealing the impact of each component on the MCTS reasoning performance of LLMs and designed a highly interpretable reward model based on the principle of contrastive decoding. | [
"Zitian, Gao",
"Haotian, Xu",
"Xuzheng, He",
"Boye, Niu",
"Hongzhang, Liu",
"Aiwei, Liu",
"Xuming, Hu",
"Lijie, Wen"
] | 2024-10-02T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2410.01707 | https://arxiv.org/abs/2410.01707 | https://www.semanticscholar.org/paper/eb76a0f47202e7d535d78d978d3cb9629e49159f |
|
Iteration of Thought: Leveraging Inner Dialogue for Autonomous Large Language Model Reasoning | Iterative human engagement is a common and effective means of leveraging the advanced language processing power of large language models (LLMs). Using well-structured prompts in a conversational manner, human users can effectively influence an LLM to develop more thoughtful and accurate responses. Motivated by this insight, we propose the Iteration of Thought (IoT) framework for enhancing LLM responses by generating "thought"-provoking prompts vis a vis an input query and the current iteration of an LLM's response. Unlike static or semi-static approaches, e.g. Chain of Thought (CoT) or Tree of Thoughts (ToT), IoT adapts its reasoning path dynamically, based on evolving context, and without generating alternate explorative thoughts which are ultimately discarded. The three components of the IoT framework are (1) an Inner Dialogue Agent (IDA) responsible for generating instructive, context-specific prompts; (2) an LLM Agent (LLMA) that processes these prompts to refine its responses; and (3) an iterative prompting loop that implements a conversation between the former two components. We introduce two variants of our framework: Autonomous Iteration of Thought (AIoT), where an LLM decides when to stop iterating, and Guided Iteration of Thought (GIoT), which always forces a fixed number iterations. We investigate the performance of IoT across various datasets, spanning complex reasoning tasks from the GPQA dataset, explorative problem-solving in Game of 24, puzzle solving in Mini Crosswords, and multi-hop question answering from the HotpotQA dataset. Our results show that IoT represents a viable paradigm for autonomous response refinement in LLMs, showcasing significant improvements over CoT and thereby enabling more adaptive and efficient reasoning systems that minimize human intervention. | The Iteration of Thought (IoT) framework for enhancing LLM responses by generating "thought"-provoking prompts vis a vis an input query and the current iteration of an LLM's response is proposed. | [
"Santosh Kumar, Radha",
"Yasamin Nouri, Jelyani",
"Ara, Ghukasyan",
"Oktay, Goktas"
] | 2024-09-19T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.12618 | https://arxiv.org/abs/2409.12618 | https://www.semanticscholar.org/paper/287aa18cec7808fb5364d6ec25099f59c5014db7 |
|
Key-Point-Driven Mathematical Reasoning Distillation of Large Language Model | Large Language Models (LLMs) have demonstrated exceptional proficiency in mathematical reasoning tasks due to their extensive parameter counts and training on vast datasets. Despite these capabilities, deploying LLMs is hindered by their computational demands. Distilling LLM mathematical reasoning into Smaller Language Models (SLMs) has emerged as a solution to this challenge, although these smaller models often suffer from errors in calculation and semantic understanding. Prior work has proposed Program-of-Thought Distillation (PoTD) to avoid calculation error. To further address semantic understanding errors, we propose Key-Point-Driven Mathematical Reasoning Distillation (KPDD). KPDD enhances the reasoning performance of SLMs by breaking down the problem-solving process into three stages: Core Question Extraction, Problem-Solving Information Extraction, and Step-by-Step Solution. This method is further divided into KPDD-CoT, which generates Chain-of-Thought rationales, and KPDD-PoT, which creates Program-of-Thought rationales. The experiment results show that KPDD-CoT significantly improves reasoning abilities, while KPDD-PoT achieves state-of-the-art performance in mathematical reasoning tasks. Our approach effectively mitigates misunderstanding errors, advancing the deployment of efficient and capable SLMs. | KPDD enhances the reasoning performance of SLMs by breaking down the problem-solving process into three stages: Core Question Extraction, Problem-Solving Information Extraction, and Step-by-Step Solution, and KPDD-PoT achieves state-of-the-art performance in mathematical reasoning tasks. | ## Key-Point-Driven Mathematical Reasoning Distillation of Large Language Model
**Xunyu Zhu[1][,][2], Jian Li[1][,][2][*], Can Ma[1][,][2], Weiping Wang[1][,][2]**
1Institute of Information Engineering, Chinese Academy of Sciences
2School of Cyber Security, University of Chinese Academy of Sciences
{zhuxunyu, lijian9026, macan, wangweiping}@iie.ac.cn
**Abstract**
Large Language Models (LLMs) have demonstrated exceptional proficiency in mathematical
reasoning tasks due to their extensive parameter counts and training on vast datasets. Despite these capabilities, deploying LLMs is hindered by their computational demands. Distilling LLM mathematical reasoning into Smaller
Language Models (SLMs) has emerged as a solution to this challenge, although these smaller
models often suffer from errors in calculation and semantic understanding. Prior work
has proposed Program-of-Thought Distillation
(PoTD) to avoid calculation error. To further
address semantic understanding errors, we propose Key-Point-Driven Mathematical Reasoning Distillation (KPDD). KPDD enhances the
reasoning performance of SLMs by breaking
down the problem-solving process into three
stages: Core Question Extraction, ProblemSolving Information Extraction, and Step-byStep Solution. This method is further divided
into KPDD-CoT, which generates Chain-ofThought rationales, and KPDD-PoT, which
creates Program-of-Thought rationales. The
experiment results show that KPDD-CoT significantly improves reasoning abilities, while
KPDD-PoT achieves state-of-the-art performance in mathematical reasoning tasks. Our approach effectively mitigates misunderstanding
errors, advancing the deployment of efficient
and capable SLMs.
41
40
35
30
28
25
20
Error Count (Num) 15
12
10
5
0
Understanding Error Calculation Error Step Missing Error
Error Type
Figure 1: Error analysis of 50 GSM8K problems with
**incorrect answers returned by CoTD using FlanT5-**
**Base. The experimental results indicate that multiple**
errors may exist in the reasoning process of CoTD, with
understanding errors and calculation errors being the
major factors affecting CoTD’s reasoning performance.
reasoning abilities from LLMs to SLMs. Chainof-Thought Distillation (CoTD) (Ho et al., 2023)
is a representative mathematical reasoning distillation method. CoTD prompts LLMs to generate
reasoning processes for each question, constructing a mathematical reasoning dataset. This dataset
is then used to fine-tune SLMs, enhancing their
mathematical reasoning abilities. However, there
remains a significant performance gap between
SLMs and LLMs. Prior work (Wei et al., 2022)
identifies three main error types in CoT reasoning:
Calculation errors, Missing Step errors, and Semantic misunderstanding errors. To explore the reasons
for the performance gap between SLMs and LLMs,
we conducted the same error analysis on CoTD.
Our preliminary experiments (shown in Figure 1)
reveal numerous error combinations in CoTD, with
calculation and semantic misunderstanding errors
being the most prevalent. Prior work (Zhu et al.,
2023) proposed Program-of-Thought Distillation
(PoTD) to avoid calculation errors by formulating
**1** **Introduction**
Large language models (LLMs) have achieved impressive performance in mathematical reasoning
tasks. Recent work (Wang et al., 2023) further
proposes Chain-of-Thought (CoT) to enhance the
mathematical reasoning abilities of LLMs. However, the massive scale of LLMs presents significant challenges for deployment.
A feasible solution to address this problem is to
use black-box distillation to transfer mathematical
*Corresponding author
-----
the reasoning process as a Python program executed by an external interpreter. This approach
allows the SLM to focus on generating the program, avoiding calculation errors and improving
reasoning performance. Given these circumstances,
our paper focuses on addressing semantic misunderstanding errors in CoTD to further enhance the
reasoning performance of SLMs.
In our paper, we propose a novel mathematical reasoning distillation method called KeyPoint-Driven Mathematical Reasoning Distillation
(KPDD) to enhance the mathematical reasoning
performance of SLMs. KPDD breaks the reasoning process into three parts: (1) Core Question
Extraction: Identifies the core question from the
original problem. (2) Problem-Solving Information
Extraction: Extracts relevant data and information
needed to solve the problem. (3) Step-by-Step Solution: Uses the extracted key points to solve the
problem in a step-by-step manner. The third part
is further divided into two formats, KPDD-CoT
and KPDD-PoT: (1) KPDD-CoT: Generates rationales in the form of Chain-of-Thought (CoT). This
method focuses on reducing misunderstanding errors and explicitly illustrates the reasoning process,
aiding in error analysis. (2) KPDD-PoT: Generates rationales in the form of Program-of-Thought
(PoT). This approach not only reduces misunderstanding errors but also avoids calculation errors,
further enhancing the SLM’s mathematical reasoning performance.
We use KPDD to fine-tune FlanT5 models, and
evaluate these models on several mathematical reasoning dataset, including GSM8K, ASDiv, SVAMP,
and MultiArith. Our experiment results show that
KPDD-CoT significantly enhances SLMs’ reasoning abilities, while KPDD-PoT enables SLMs to
achieve state-of-the-art (SOTA) mathematical reasoning performance. Furthermore, our error analysis on KPDD confirms that KPDD effectively mitigates misunderstanding errors, thereby improving
the mathematical reasoning performance of SLMs.
Our contributions are summarized as follows:
1. Our study reveals that misunderstanding errors and calculation errors are the major factors limiting CoTD’s reasoning.
2. We propose Key-Point-Driven Mathematical
Reasoning Distillation (KPDD) to alleviate
misunderstanding errors and effectively improve the reasoning performance of SLMs.
3. Extensive experiments show that KPDD
outperforms other methods across various
benchmarks and achieves new state-of-theart results on these mathematical reasoning
datasets.
**2** **Related Work**
**2.1** **Chain-of-Thought Reasoning**
Chain-of-Thought refers to prompt LLMs to solve
the question step by step. Prior work (Wei et al.,
2022) find that Chain-of-Thought can effectively
improve the reasoning performance of LLMs.
Based on the findings, Kojima et al. (Kojima et al.,
2022) further introduce zero-shot CoT, which significantly improves the model’s reasoning performance by simply adding the prompt "Let’s think
step by step" before answering. To avoid calculation error in CoT, Chen et al. (Chen et al., 2023)
formulate the reasoning process into program. Furthermore, Wang et al. (Wang et al., 2023) introduce
a self-consistency decoding strategy, which generates diverse reasoning paths and then selects the
most consistent answer by considering these paths
comprehensively. Least-to-most prompting (Zhou
et al., 2023) breaks down complex problems into
a series of simpler subproblems and solves these
subproblems sequentially. Zhong et al. (Zhong
et al., 2024) encourage LLMs to deeply understand
problems and leverage key information for better
reasoning. Inspired by these methods, our work introduces Key-Point-Driven Mathematical Reasoning Distillation (KPDD) to further enhance SLMs’
mathematical reasoning.
**2.2** **Black-box Distillation**
Currently, close-source LLMs usually have
stronger reasoning performance than open-source
LLMs. However, in general, we can only obtain
the output of close-source LLMs. Based on this
situation, black-box distillation is proposed to distill abilities from LLMs to SLMs. Specifically,
black-box distillation first prompts LLMs to generate a distillation dataset, and then this dataset is
used to fine-tune SLMs to improve their reasoning
performance. For example, Ho et al. (Ho et al.,
2023) prompt LLMs to generate a CoT reasoning
dataset, which was then used to fine-tune an SLM,
thereby enhancing its reasoning ability. Shridhar
et al. (Shridhar et al., 2023) utilize a LLM to train
a problem decomposer and a subproblem solver
separately. They used the decomposer to break
-----
down the original problem into multiple subproblems, and then employed the subproblem solver to
solve each subproblem individually. By integrating
the solutions to these subproblems, they derived
the final answer. Fu et al. (Fu et al., 2023) analyze
CoT distillation and find that there is a trade-off between the specific abilities and the general abilities
of SLMs. When CoT distillation is used to enhance the specific abilities of SLMs, their general
abilities decrease correspondingly. Zhu et al. (Zhu
et al., 2023) construct a distillation dataset, where
the reasoning format changed from CoT reasoning to PoT reasoning. Through this method, the
SLM can delegate the calculation process to an
additional Python interpreter to avoid calculation
errors, thereby enhancing the SLM’s mathematical reasoning performance. Zhu et al. (Zhu et al.,
2024) further formalize the reasoning process into
equations and find that a diverse range of reasoning
formats can effectively enhance the mathematical
reasoning performance of SLMs. Our work introduces a novel distillation approach where two
SLMs independently extract the core question and
key problem-solving information from an original
question. These key points are then utilized to
guide another SLM in solving the original question
effectively.
**3** **Method**
In this work, we introduce a novel distillation
method for mathematical reasoning tasks called
Key-Point-Driven Distillation (KPDD), structured
into three stages: (1) Stage 1: KPDD distills the
first SLM to extract the core question from the
original question. (2) Stage 2: KPDD distills the
second SLM to extract problem-solving information from the original question. (3) Stage 3: KPDD
distills the third SLM to solve the original problem using the core question and problem-solving
information. In Stage 3, we prompt the LLM to
construct two types of reasoning datasets: (1) CoT
Rationales: These are more comprehensible to both
humans and LLMs, showcasing a detailed reasoning process. (2) PoT Rationales: These rationales
delegate computational tasks to an external Python
interpreter, thereby avoiding calculation errors.
**3.1** **Data Generation from LLMs**
When given a mathematical dataset, our primary
task is to utilize an LLM to generate a reasoning process for each mathematical problem in the
dataset. By doing so, we augment the mathematical
dataset to construct the distillation dataset. Furthermore, in stage 3, our KPDD method employs two
distillation approaches: one distills the SLM to
generate CoT rationales for problem-solving, and
the other distills the SLM to generate PoT rationales for problem-solving. In other words, our
KPDD method can be divided into two approaches:
KPDD-CoT and KPDD-PoT.
**3.1.1** **Data Generation for KPDD-CoT**
Our method uses few-shot prompting to prompt
the LLM to synthesize the reasoning dataset. Figure 2 reveals the distillation dataset generation process. Specifically, we first randomly sample k data
pairs (x, y) from a mathematical dataset D, where
_x represents the question and y represents the an-_
swer. Then, for these sampled data, we manually
construct the reasoning process c. Each reasoning process c includes a core question, problemsolving information, and rationales in CoT format. These elements are separated by HTML tags:
"<core>{core question}</core><info>{problemsolving information}</info><cot>{rationales in
CoT format}</cot>". Finally, we obtain a demonstration dataset _c. At the same time, we further_
_D_
introduce an instruction "Firstly, let’s extract the
**most comprehensive and detailed key question.**
**Then, let’s identify and list the most useful in-**
**formation related to the question. Finally, let’s**
**understand the key question and the problem-**
**solving information, solve the question step by**
**step, and show the answer.". By leveraging the**
demonstration set and the instruction, we prompt
the LLM to generate the reasoning process for the
mathematical questions. The KPDD-CoT dataset
generation is formalized as:
_ci = f_ (xi, _c),_ (1)
_M_ _D_
where M denotes the LLM, f represents the decoding function, and i is the index in D.
**Data Filtering—To further improve the quality**
of the dataset, we attempt to filter out incorrect
reasoning processes from the KPDD-CoT dataset.
Specifically, for each data point in the KPDD-CoT
dataset, we extract the answer from its reasoning
process. If this answer matches the gold answer,
we retain the data point; otherwise, we discard it
as the reasoning process is deemed incorrect. By
ensuring that the KPDD-CoT dataset contains only
correct reasoning processes, we enhance the LLM’s
mathematical reasoning performance.
-----
**Q: A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total** **<core>**
does it take? How many clips did Natalia sell altogether in April and May, given that she sold
**Firstly, let's extract the most comprehensive and detailed core question. Then, let's identify** 48 clips in April and half as many in May?
**and list the most useful information related to the question. Finally, let's understand the** **</core>**
**core question and the problem-solving information, solve the question step by step, and** **<info>**
**show the answer.** 1. Natalia sold 48 clips in April.
**<core>** 2. She sold half as many clips in May as she did in April.
How many bolts of fiber in total does it take to make a robe, considering it takes 2 bolts of blue **</info>**
fiber and half that amount of white fiber? **KPDD-CoT** **<cot>**
**</core>** In April, Natalia sold 48 clips.
**<info>** In May, she sold half as many clips as she did in April, which is 48 / 2 = 24 clips.
1. A robe takes 2 bolts of blue fiber. To find the total number of clips sold in April and May, we add the number of
2. A robe takes half the amount of white fiber as blue fiber. clips sold in each month: 48 (April) + 24 (May) = 72 clips.
**</info><cot>** **LLMs** The answer is 72</cot>
The robe takes 2 bolts of blue fiber.
Half the amount of 2 bolts is 2 / 2 = 1 bolt of white fiber.
So, the total amount of fiber needed is 2 bolts of blue fiber + 1 bolt of white fiber = 3 bolts of **Data Filtering**
fiber in total.
The answer is 3</cot> **Gold Answer**
**Q: Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in**
May. How many clips did Natalia sell altogether in April and May?Firstly, let's extract the most comprehensive and detailed core question. Then, let's **Generating Correct Answer** **Generating Wrong Answer**
**identify and list the most useful information related to the question. Finally, let's**
**understand the core question and the problem-solving information, solve the**
**question step by step, and show the answer.**
**Saving** **Dropping**
Figure 2: Detailed data generation for KPDD-CoT. Initially, we use few-shot prompting to guide LLMs in
producing reasoning processes. We then extract the correct answers from these processes, discarding any reasoning
process that does not align with the correct answers. In this way, we have constructed a high-quality KPDD-CoT
reasoning distillation dataset.
**Q: A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total** **<core>**
does it take? How many clips did Natalia sell altogether in April and May, given that she sold
**Firstly, let's extract the most comprehensive and detailed core question. Then, let's identify** 48 clips in April and half as many in May?
**and list the most useful information related to the question. Finally, let's understand the** **</core>**
**core question and the problem-solving information, and generate the python code (return** **<info>**
**ans) to solve the question..** 1. Natalia sold 48 clips in April.
**<core>** 2. She sold half as many clips in May as she did in April.
How many bolts of fiber in total does it take to make a robe, considering it takes 2 bolts of bluefiber and half that amount of white fiber? **KPDD-PoT** **</info><pot>**
**</core>** clips_sold_april = 48
**<info>** clips_sold_may = clips_sold_april / 2
1. A robe takes 2 bolts of blue fiber. total_clips_sold = clips_sold_april + clips_sold_may
2. A robe takes half the amount of white fiber as blue fiber. ans = total_clips_sold
**</info>** **LLMs** **</pot>**
**<pot>**
blue_fiber_bolts = 2
white_fiber_bolts = blue_fiber_bolts / 2 **Data Filtering**
total_bolts = blue_fiber_bolts + white_fiber_bolts
ans = total_bolts
**</pot>** **Python Interpreter**
**Q: Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in**
May. How many clips did Natalia sell altogether in April and May? **Compile Error**
**Firstly, let's extract the most comprehensive and detailed core question. Then, let's** **Generating Correct Answer** **Generating Wrong Answer**
**identify and list the most useful information related to the question. Finally, let's**
**understand the core question and the problem-solving information, and generate**
**the python code (return ans) to solve the question.**
**Saving** **Dropping**
Figure 3: Detailed data generation for KPDD-PoT. Similar to KPDD-CoT, we prompt LLMs to generate the
reasoning process in the KPDD-PoT format. We extract the rationale in the PoT format from this process and run it
through a Python interpreter. If there are errors or incorrect answers, we discard the reasoning process. Finally, we
constructed a high-quality KPDD-PoT reasoning dataset.
**3.1.2** **Data Generation for KPDD-PoT**
Similar to KPDD-CoT, we first sample k examples from a mathematical dataset D, then use these
samples to manually create a demonstration set
_p._ Each reasoning process in the demonstra_D_
tion set Dp includes a core question, problemsolving information, and rationales in PoT format.
These elements are also separated by HTML tags:
"<core>{core question}</core><info>{problemsolving information}</info><pot>{rationales in
PoT format}</pot>". Then, we further introduce
the instruction a instruction "Firstly, let’s extract
**the most comprehensive and detailed key ques-**
**tion. Then, let’s identify and list the most useful**
**information related to the question. Finally, let’s**
**understand the key question and the problem-**
**solving information, and generate the python**
**code (return ans) to solve the question.". By uti-**
lizing the demonstration set Dp and the instruction,
we prompt LLMs to generate reasoning processes
in for questions. Figure 3 shows the data synthesis process for KPDD-PoT, and the KPDD-PoT
dataset generation process can be formalized as:
_pi = f_ (xi, _p)._ (2)
_M_ _D_
**Data Filtering—Similar to data filtering for**
KPDD-CoT, we also apply filtering to the KPDD
-----
PoT dataset. Specifically, for each data point in
the KPDD-PoT dataset, we extract the rationale in
PoT format from its reasoning process. An external
Python interpreter is then used to run the rationale.
If the obtained answer do not match the gold answer, it indicate that the reasoning process is incorrect. Consequently, we remove the data point with
the flawed reasoning process from the KPDD-PoT
dataset, thereby enhancing the dataset’s quality.
**3.2** **Fine-tuning SLMs**
After constructing these reasoning datasets, we use
them to fine-tune the SLMs. In the KPDD, we
fine-tune three SLMs: the first SLM, called KPDDCoT/PoT-core, is used to extract the core question
from the original problem, the second SLM, called
KPDD-CoT/PoT-info, extracts the problem-solving
information, and the third SLM, called KPDDCoT/PoT-solve, uses both the core question and
problem-solving information to solve the original
question.
**3.2.1** **Fine-tuning SLMs for KPDD-CoT**
Firstly, we construct a core question subset from
the KPDD-CoT dataset, denoted as _CC. Each_
_D_
sample in this subset can be represented as (x, cc),
where x represents the original question and cc
represents the core question. For each training instance (x, cc) from _CC, we prepend the prompt_
_D_
_pcc "Let’s extract the most comprehensive and_
**detailed core question." to the question x. This**
guides the KPDD-CoT-core in fine-tuning to accurately extract the corresponding core question cc.
The fine-tuning loss function can be represented as
follows:
problem-solving information ci. The fine-tuning
loss function can be represented as follows:
_t=1_ log P (ci[i]t _[|][ ci]<t[i]_ _[, x][i][, p][ci][)][,]_ (4)
X
_L = −_
_i=1_
where N is the number of examples in DCI, pci is
the prompt, and ci:T is the sequence of the problemsolving information.
Finally, we construct a problem-solving subset
from the KPDD-CoT dataset, denoted as _CS._
_D_
Each sample in this subset can be represented
as (x, cc, ci, cs), where x represents the original
question, cc represents the core question, ci represents the problem-solving information, and cs
represents the rationales in CoT format. For each
data (x, cc, ci, cs) from _CS, we integrate the orig-_
_D_
inal question x, the core question cc, the problemsolving information ci and the prompt pcs "Let’s
**understand the core question and the problem-**
**solving information, solve the question step by**
**step, and show the answer." to construct a new**
input. This guides the KPDD-CoT-solve in finetuning to generate rationales cs for solving the origin question in CoT format. The fine-tuning loss
function can be represented as follows:
_N_ _T_
_L = −_ _i=1_ _t=1_ log P (cs[i]t _[|][ cs]<t[i]_ _[, x][i][, cc][i][, ci][i][, p][cs][)][,]_
X X
(5)
where N is the number of examples in DCS, pcs is
the prompt, and cs:T is the sequence of the rationale in CoT format.
**3.2.2** **Fine-tuning SLMs for KPDD-PoT**
In KPDD-PoT, aside from replacing the KPDDCoT dataset with the KPDD-PoT dataset, the finetuning method for KPDD-PoT-core remains consistent with that of KPDD-CoT-core, and the finetuning method for KPDD-PoT-info remains consistent with that of KPDD-CoT-info. However, the
fine-tuning method of KPDD-PoT-solve is different with KPDD-CoT-solve. The main difference
between them is the input instruction. Specifically,
when fine-tuning KPDD-PoT-solve, the input instruction is: "Let’s understand the core question
**and the problem-solving information, and gen-**
**erate the python code (return ans) to solve the**
**question." This instruction guides the model to not**
only understand the core question and the problemsolving information but also to generate Python
code that can compute the answer. This approach
log P (cc[i]t _<t[, x][i][, p][cc][)][,]_ (3)
_t=1_ _[|][ cc][i]_
X
_L = −_
_i=1_
where N is the number of examples in _CC, pcc_
_D_
is the prompt, and cc:T is the sequence of the core
question.
Then, we construct a problem-solving subset
from the KPDD-CoT dataset, denoted as _CI_ .
_D_
Each sample in this subset can be represented as
(x, ci), where x represents the original question
and ci represents the problem-solving information.
For each data point (x, ci) from _CI_, we add the
_D_
prompt pci "Let’s identify and list the most use**ful information related to the question." to the**
question x. This guides the KPDD-CoT-info in
fine-tuning to accurately extract the corresponding
-----
Moreover, the fine-tuning loss functions for the
Core
**KPDD-CoT/PoT-core** question
Rationale in
CoT/PoT format
Question **KPDD-CoT/PoT-solve**
Problem-solving
**KPDD-CoT/PoT-info** information
Figure 4: The inference process of KPDD. When given an original question, KPDD-CoT/PoT-core and KPDDCoT/PoT-info first extract the core question and the problem-solving information. Then, KPDD-CoT/PoT-solve
uses these key points to generate rationales to solve the original question.
leverages the model’s ability to perform code gen- **Dataset** **Size**
eration, which can be particularly effective for solv- Train GSM8K 7473
(+) augmented 29892
ing mathematical problems programmatically.
GSM8K 1319
SLMs in KPDD-PoT are identical to those in
KPDD-CoT. This ensures that the optimization
process remains consistent across both methods,
focusing on minimizing the discrepancies between
the model’s output and the expected solutions.
**3.3** **Inference-time Predictions**
Figure 4 illustrates the inference process of KPDD.
After fine-tuning, the process for solving a given
question involves three main steps:
1. Core Question Extraction: First, we use
the KPDD-CoT/PoT-core model to extract the
core question from the original problem. This
step isolates the essential part of the problem
that needs to be addressed.
2. Problem-Solving Information Extraction:
Next, the KPDD-CoT/PoT-info model extracts the relevant problem-solving information. This model identifies and lists the necessary context and data required to solve the
core question.
3. Solution Generation: Finally, based on the
original question, the core question, and
the problem-solving information, the KPDDCoT/PoT-solve model generates rationales in
either CoT or PoT format to solve the original question. For KPDD-PoT, this involves
generating Python code that can compute the
answer.
This structured approach ensures that each
model focuses on a specific aspect of the problemsolving process, leading to more accurate and reliable solutions.
|Col1|Dataset Size|
|---|---|
|Train|GSM8K 7473 (+) augmented 29892|
|---|---|
|Test|GSM8K 1319 ASDiv 2096 SVAMP 1000 MultiArith 600|
|---|---|
Table 1: Statistics of the datasets used in our exper**iments. Augmented refers that we run 4 times data**
synthesis on the training set of GSM8K.
**4** **Experiments**
**4.1** **Dataset**
In our paper, we generate KPDD distillation
datasets based on the GSM8K training set, which
comprises diverse grade school math word problems (Cobbe et al., 2021). Then, we evaluate the
mathematical reasoning performance of SLM on
the GSM8K test set. Furthermore, to assess the
transferability of SLMs’ mathematical reasoning
capabilities, we evaluate SLM on several additional
mathematical datasets. These datasets include
ASDiv, which contains diverse math word problems (Miao et al., 2020), SVAMP, which features
math word problems with varying structures (Patel et al., 2021), and MultiArith, which consists of
arithmetic word problems (Roy and Roth, 2015).
The statistics of these datasets are summarized in
Table 1. This comprehensive evaluation approach
ensures that the SLMs’ mathematical reasoning capabilities are thoroughly tested across a variety of
problem types and structures, providing a robust
assessment of their performance.
**4.2** **Implementation**
In our paper, we use DeepSeek-V2 (DeepSeek-AI,
2024) as our teacher LLM to generate reasoning
processes for questions. Specifically, we manu
-----
ally construct a demonstration set containing eight
demonstrations. We then use this demonstration
set to prompt DeepSeek-V2 to generate four reasoning paths for each question. Each reasoning
path is subsequently filtered, resulting in the creation of a KPDD dataset. Next, we use FlanT5
models—Small (60M), Base (250M), and Large
(760M) (Chung et al., 2022)—as our student LMs.
By fine-tuning the FlanT5 models with the KPDD
dataset, we aim to enhance their mathematical reasoning abilities. During the fine-tuning process, we
set the learning rate to 5e-4, the batch size to 32,
and the total number of training epochs to 10.
**4.3** **Main Results**
Table 2 showcases our method’s performance on
four mathematical datasets, revealing key insights:
1. KPDD-CoT Enhances Mathematical Rea**soning: When FlanT5-small is used as the**
SLM, KPDD-CoT achieves an average accuracy improvement of 5.01% across several
mathematical reasoning tasks. With FlanT5base as the SLM, KPDD-CoT yields an average accuracy improvement of 11.71%. For
FlanT5-large, KPDD-CoT results in an average accuracy improvement of 15.51%. The
experimental result demonstrates that KPDDCoT can significantly enhance the mathematical reasoning performance of SLMs. We
attribute this experimental result to the reason that baselines often encounters semantic
misunderstanding errors that hinder the improvement of SLMs’ mathematical reasoning
abilities. In contrast, KPDD-CoT employs extra SLMs to extract key points (including the
core question and problem-solving information) of the question and uses these key points
to guide the SLMs’ reasoning. This approach
significantly reduces the semantic misunderstanding errors of CoTD, making KPDD-CoT
better suited for improving the mathematical
reasoning ability of SLMs.
2. KPDD-PoT Outperforms State-of-the-Art:
When FlanT5-small is used as the SLM,
KPDD-PoT achieves an average accuracy improvement of 32.18% across several mathematical reasoning tasks. With FlanT5-base as
the SLM, KPDD-PoT yields an average accuracy improvement of 48.25%. For FlanT5large, KPDD-PoT results in an average accuracy improvement of 54.63%. The experimen
tal result shows that KPDD-PoT make SLMs
achieve state-of-the-art mathematical reasoning accuracy. Furthermore, KPDD-PoT’s accuracy is higher than that of KPDD-CoT, highlighting the advantage of rationales in PoT
format in enhancing SLMs’ reasoning capabilities. Our analysis finds that the mathematical reasoning performance of CoTD is limited not only by semantic misunderstanding
errors but also by calculation errors. PoTD
converts rationales from CoT format into PoT
format, formulating the reasoning process into
a Python program and sending it to an extra
Python interpreter to generate the final answer.
This method transfers numerical computation
from SLMs to a Python interpreter, avoiding
calculation errors. Additionally, by extracting
key points of the question, KPDD-PoT implicitly enhances the SLMs’ understanding of
the question, thereby improving their overall
mathematical reasoning capabilities.
3. Strong Transferability of KPDD: KPDD
exhibits strong transferability. The distillation dataset of KPDD is constructed based on
the GSM8K training dataset, and we evaluate
our SLMs on several mathematical reasoning
datasets, including the GSM8K test dataset,
ASDiv dataset, SVAMP dataset, and MultiArith dataset. Our experimental results show
that KPDD not only achieves good reasoning
performance on the GSM8K test dataset but
also performs well on the ASDiv, SVAMP,
and MultiArith datasets. These results demonstrate that KPDD has strong transferability
and further corroborate that SLMs do not improve their reasoning performance through
data leakage.
**4.4** **Effect of Different Components in KPDD**
In this subsection, we delve into the impact of various components within KPDD. We have considered
five distinct categories, which include: 1. Original SLMs without any fine-tuning; 2. SLMs with
original CoT/PoT distillation; 3. SLMs with core
distillation combined with CoT/PoT distillation;
4. SLMs with problem-solving information distillation combined with CoT/PoT distillation; 5.
SLMs with KPDD. For each of the latter four categories, we have constructed corresponding reasoning datasets, each containing a single reasoning
-----
|Models|#Params|GSM8K ASDiv SVAMP MultiArith|AVG|
|---|---|---|---|
_Proprietary Large Language Models_
|GPT-4 (OpenAI, 2023) ChatGPT Claude-2 (Anthropic, 2023) PaLM-2 (Anil et al., 2023) DeepSeek-V2 (DeepSeek-AI, 2024)|- - - 540B 236B|92.0 91.3 93.1 - 80.8 87.3 83.0 - 85.2 - - - 80.7 - - - 92.2 - - -|92.13 83.7 85.2 80.7 92.2|
|---|---|---|---|
_Open-Source Large Language Models_
|Llama-2 (Touvron et al., 2023) CodeLLaMA (Rozière et al., 2023) Platypus-2 (Lee et al., 2023) WizardMath (Luo et al., 2023) TORA (Gou et al., 2023)|7B 7B 7B 7B 7B|13.3 50.7 38.0 - 34.0 61.4 59.0 - 14.4 47.9 36.7 - 54.9 59.1 57.3 - 68.8 73.9 68.2 -|34 51.46 33 57.1 70.3|
|---|---|---|---|
_Fine-tuned Small Language Models_
|Ho et al. (Ho et al., 2023) Fu et al. (Fu et al., 2023) Fu et al. (Fu et al., 2023) Shridhar et al. (Shridhar et al., 2023) Zhu et al. (Zhu et al., 2023) Zhu et al. (Zhu et al., 2024)|0.3B 0.76B 0.25B 0.77B 0.77B 0.77B|3.11 - - - 20.2 23.8 20.4 38.5 13.4 20.9 14.2 29.7 17.89 - 18.14 - 39.2 51.2 48.2 79.2 42.45 52.81 49.59 85.5|3.11 25.72 19.55 18.01 54.45 57.58|
|---|---|---|---|
_Our fine-tuned Small Language Models_
|FlanT5-Small (+) KPDD-CoT (+) KPDD-PoT|0.06B|2.1 2.8 2.1 4.0 7.58 8.73 6.9 7.83 20.77 40.07 34.1 44.16|2.75 7.76 34.93|
|---|---|---|---|
|FlanT5-Base (+) KPDD-CoT (+) KPDD-PoT|0.25B|3.0 4.2 3.8 7.0 14.63 14.93 13.8 21.5 34.57 52.29 50.5 73.66|4.5 16.21 52.75|
|FlanT5-Large (+) KPDD-CoT (+) KPDD-PoT|0.76B|6.9 10.1 6.8 13.0 21.75 22.51 19.1 35.5 46.32 59.92 61.6 87.5|9.2 24.71 63.83|
Table 2: Overall test set performance.
**Category** **Core** **Info** **Solve** **GSM8K** **ASDiv** **SVAMP** **MultiArith** **AVG**
1 _×_ _×_ _×_ 3.0 4.2 3.8 7.0 4.5
2 _×_ _×_ ✓ 8.71 9.2 8.2 10.33 9.11
3 ✓ _×_ ✓ 9.02 9.25 8.9 11.5 9.66
4 _×_ ✓ ✓ 8.87 9.73 8.9 11.0 9.59
5 ✓ ✓ ✓ 9.17 9.92 9.03 11.83 9.98
Table 3: Effect of Different Components in KPDD-CoT. We consider five different categories to analyse the effect
of different components in KPDD-CoT. The experiment result shows that key points in questions can deepen SLMs’
understanding of the questions, and combining several key points can provide richer information, leading to further
improvements in SLMs’ reasoning abilities.
**Category** **Core** **Info** **Solve** **GSM8K** **ASDiv** **SVAMP** **MultiArith** **AVG**
1 _×_ _×_ _×_ 3.0 4.2 3.8 7.0 4.5
2 _×_ _×_ ✓ 19.40 44.32 40.6 45.33 37.41
3 ✓ _×_ ✓ 23.19 45.89 44.1 53.33 41.62
4 _×_ ✓ ✓ 25.39 46.85 44.6 57.33 43.54
5 ✓ ✓ ✓ 27.06 49.33 46.1 58.33 45.20
Table 4: Effect of Different Components in KPDD-PoT. We consider five different categories to analyse the effect
of different components in KPDD-PoT. The experiment result shows that key points in questions can deepen SLMs’
understanding of the questions, and combining several key points can provide richer information, leading to further
improvements in SLMs’ reasoning abilities.
path per question. Following this, we have utilized FlanT5-base as our foundation for SLMs, and
we have fine-tuned these models using the aforementioned reasoning datasets. To evaluate the reasoning capabilities of these SLMs, we have tested
them on the GSM8K test dataset, as well as on the
ASDiv, SVAMP, and MultiArith datasets.
Tables 3 and 4 present the results of our experiments, from which we make several observations:
(1) We observe a significant performance improvement in Category 2 compared to original SLMs.
Specifically, under CoT reasoning, Category 2
achieves an average accuracy gain of 4.61% across
multiple datasets, while under PoT reasoning, it
achieves a substantial average accuracy improvement of 32.91%. These experimental results indicate that CoTD and PoTD can markedly enhance
the mathematical reasoning ability of SLMs. (2)
We find that Categories 3 and 4 exhibit a further performance increase relative to Category 2. Specifically, in the context of CoT reasoning, Categories 3
and 4 achieve average accuracy gains of 0.55% and
-----
0.45% respectively over Category 2 across multiple datasets. Under PoT reasoning, the gains are
more pronounced with Categories 3 and 4 achieving average accuracy improvements of 4.21% and
6.13% respectively. This suggests that SLMs can
deepen their understanding of questions by focusing on key points, thereby further enhancing their
mathematical reasoning ability. (3) In Category 5,
we combine the core questions with the problemsolving information to guide SLMs in addressing
the questions. The results are promising: Category
5 achieves an average accuracy of 9.98% under
CoT reasoning and a remarkable 45.20% under
PoT reasoning across multiple datasets. This indicates that key points in questions play a crucial role
in boosting the reasoning capabilities of SLMs, and
that combining several key points provides richer
information, leading to further improvements in
their reasoning abilities.
**4.5** **Effect of SLM Quantity in KPDD**
In this subsection, we investigate the impact of
SLM Quantity in KPDD. We consider five distinct
categories: I. Using one SLM to simultaneously
extract the core question and problem-solving information, and solve the original question; II. Using
one SLM to extract the core question and problemsolving information, and another SLM to solve
the original question; III. Using one SLM to extract the core question, another SLM to extract the
problem-solving information, and a third SLM to
solve the original question; IV. Using one SLM to
extract the problem-solving information, another
SLM to extract the core question, and both to solve
the original question; V. Using one SLM to extract the core question, another SLM to extract the
problem-solving information, and a third SLM to
solve the original question. For each category, we
create corresponding reasoning datasets, each containing a single reasoning path per question. We
utilize FlanT5-base as our base SLMs, fine-tuning
them on these reasoning datasets. To assess their
reasoning capabilities, we evaluate these SLMs on
the GSM8K test dataset, as well as on the ASDiv,
SVAMP, and MultiArith datasets.
Table 5 and 6 present the results of our experiments, from which we make several observations: (1) Compared to other categories, Category
I performed worse. For KPDD-CoT, Category I
achieved an average accuracy of 7.16% across multiple datasets, while for KPDD-PoT, it achieved an
average accuracy of 39.61%. This suggests that
the limited model size of a single SLM hinders its
performance across multiple tasks. (2) Category
II outperformed Categories III and IV in reasoning performance. For KPDD-CoT, Category II
achieved an average accuracy of 9.51% across multiple datasets, while for KPDD-PoT, it achieved
an average accuracy of 41.80%. We attribute this
result to the importance of the KPDD-CoT/PoTsolve component, where using a single SLM for
this phase yields the best reasoning performance.
(3) For KPDD-CoT, Category V achieved an average accuracy of 9.98% across multiple datasets,
while for KPDD-PoT, it achieved an average accuracy of 45.20%. This is the highest reasoning
performance among all categories, indicating that
our approach of using a separate SLM for each
component maximizes the performance of each
component, thereby maximizing the reasoning performance of KPDD.
|Col1|50 40 Acc 30 20 10|Col3|
|---|---|---|
GSM8K KPDD-CoT KPDD-PoT ASDiv
35
50
30
40
25
Acc 20 Acc 30
15 20
10 10
1 2 3 4 1 2 3 4
The Number of Reasoning Paths The Number of Reasoning Paths
SVAMP MultiArith
50 70
40 60
50
Acc 30 Acc 40
20 30
20
10
10
1 2 3 4 1 2 3 4
The Number of Reasoning Paths The Number of Reasoning Paths
Figure 5: Effect of Reasoning Paths. We fine-tune
CodeT5-Base with different reasoning paths to analyse
the effect of reasoning paths. The experiment results
shows that diverse reasoning paths can improve SLMs’
reasoning performance.
**4.6** **Diverse Reasoning Paths Improve SLMs’**
**Reasoning Performance**
In this subsection, we primarily explore the impact
of the diversity of reasoning paths on the mathematical reasoning performance of SLMs. Specifically,
we fine-tune FlanT5-base using KPDD datasets
with varying numbers of reasoning paths and then
evaluate the fine-tuned SLMs on several mathematical reasoning datasets. By analyzing their mathematical reasoning performance, we assess how
the diversity of reasoning paths affects the SLMs’
capabilities in mathematical reasoning.
-----
**Category** **Core** **Info** **Solve** **GSM8K** **ASDiv** **SVAMP** **MultiArith** **AVG**
I 1[*] 1 1 7.88 4.72 5.4 10.66 7.16
II 1 1 2 9.09 9.44 8.2 11.33 9.51
III 1 2 2 8.41 7.72 6.7 11.24 8.51
IV 2 1 2 7.80 7.58 7.1 11.16 8.41
V 1 2 3 9.17 9.92 9.03 11.83 9.98
- The index of SLM.
Table 5: Effect of SLM Quantity in KPDD-CoT. We consider five different categories to analyse the effect of
SLM quantity in KPDD-CoT. The experimental results show that for KPDD-CoT, using a separate SLM for each
component is necessary to maximize the reasoning performance of KPDD-CoT.
**Category** **Core** **Info** **Solve** **GSM8K** **ASDiv** **SVAMP** **MultiArith** **AVG**
I 1[*] 1 1 24.18 44.32 41.19 48.66 39.61
II 1 1 2 26.0 42.69 42.69 55.83 41.80
III 1 2 2 24.79 46.37 40.6 49.16 40.23
IV 2 1 2 24.63 45.37 41.3 49.33 40.15
V 1 2 3 27.06 49.33 46.1 58.33 45.20
- The index of SLM.
Table 6: Effect of SLM Quantity in KPDD-PoT. We consider five different categories to analyse the effect of
SLM quantity in KPDD-PoT. The experimental results show that for KPDD-PoT, using a separate SLM for each
component is necessary to maximize the reasoning performance of KPDD-PoT.
(a) GSM8K (b) SVAMP
80 79 77 78 77 Vanilla CoTD(+) Core 80 78 76 77 75 Vanilla CoTD(+) Core
(+) Info (+) Info
70 KPDD-CoT 70 KPDD-CoT
60 60
55
53
50 51 50 48 50 51 50
46
40 40 39 37 38 37
Error Count (Num) 34 34 33 32 Error Count (Num)
30 30
20 20
10 10
0 0
Understanding Error Calculation Error Step Missing Error Understanding Error Calculation Error Step Missing Error
Error Type Error Type
Figure 6: Error Analysis for SLMs. We conducted an error analysis of four different categories of distillation
methods. The experiment results show that integrating multiple key points of the questions can significantly
reduce SLMs’ understanding errors, enhance the comprehension of the questions and further improve the reasoning
performance of SLMs.
Figure 5 reveals the relevant experimental results.
As the number of reasoning paths in the distillation dataset increases, the SLM tends to achieve
better performance. For example, when fine-tuning
FlanT5-base with the KPDD-CoT dataset containing a single reasoning path, it achieved an accuracy
of 9.17% on the GSM8K test dataset, 9.92% on
ASDiv, 9.03% on SVAMP, and 11.83% on MultiArith. However, when fine-tuning FlanT5-base
with the KPDD-CoT dataset containing four reasoning paths, it achieved accuracies of 14.63% on the
GSM8K test dataset, 14.93% on ASDiv, 13.8% on
SVAMP, and 21.5% on MultiArith. Similarly, when
fine-tuning SLMs with KPDD-PoT datasets containing different reasoning paths, the experimental
results were consistent with those of KPDD-CoT.
These results indicate that a mathematical reasoning dataset with diverse reasoning paths can effectively enhance the mathematical reasoning performance of SLMs.
**4.7** **Error Analysis**
In this subsection, our aim is to verify whether
KPDD can indeed reduce semantic misunderstand
-----
ing errors. KPDD-PoT implicitly includes the reasoning process within its rationales, making it challenging to conduct error analysis on rationales in
PoT format. Conversely, rationales in CoT format explicitly contain the reasoning steps, allowing
us to clearly understand how the SLM solves the
questions step by step, thus facilitating error analysis. Therefore, in this part, we focus on error
analysis for rationales in CoT format. To achieve
our goal, we randomly sample 100 examples from
GSM8K/SVAMP and perform error analysis on
the questions with incorrect answers. For a better
understanding of KPDD’s effect, we also consider
three other scenarios: (1) vanilla CoTD, (2) reasoning that combines vanilla CoTD and core question extraction, and (3) reasoning that combines
vanilla CoTD and problem-solving information extraction. Furthermore, to simplify our analysis,
we use flanT5-base as our SLMs, and the corresponding reasoning datasets still contain a single
reasoning path per question.
The detailed quantitative results are illustrated
in Figure 6. By analyzing the experimental results,
we found that: (1) Combination of Multiple Er**rors in SLMs: SLMs tend to exhibit combinations**
of multiple errors, with calculation errors having
the most significant impact on reasoning performance. Specifically, vanilla CoTD on the GSM8K
dataset showed 51 understanding errors, 79 calculation errors, and 34 step missing errors, resulting
in a total of 164 errors. This number far exceeds
the original number of problems, with calculation
errors outnumbering other types of errors. Similar results were observed in the SVAMP dataset.
This explains why PoTD achieves better reasoning
performance than CoTD: PoTD converts vanilla
rationales into Python programs, delegating the calculation process to an external Python interpreter
to avoid calculation errors. (2) Reduction of Un**derstanding Errors with Key Points: Introduc-**
ing key points of the original questions effectively
reduces understanding errors. Specifically, when
core questions were introduced in vanilla CoTD,
the number of understanding errors on the GSM8K
dataset decreased to 50, and on the SVAMP dataset,
it decreased to 53. When problem-solving information was introduced in vanilla CoTD, the number
of understanding errors decreased to 48 on GSM8K
and to 51 on SVAMP. These results indicate that
key points of the original questions help SLMs
better understand the questions, thereby reducing
understanding errors and improving reasoning per
formance. (3) Further Reduction of Understand**ing Errors with Multiple Key Points: Combining**
multiple key points can further reduce understanding errors. Specifically, KPDD reduced the number
of understanding errors to 46 on GSM8K and to 50
on SVAMP. This suggests that KPDD’s method of
integrating multiple key points can deepen SLMs’
understanding of the original questions, further reducing understanding errors and enhancing reasoning performance.
**5** **Conclusion**
In this paper, we propose Key-Point-Driven Distillation (KPDD) for enhancing mathematical reasoning in Small Language Models (SLMs). Our
approach leverages the extraction of key points
from questions to improve understanding and reduce errors in reasoning tasks. Experimental results
demonstrate that KPDD significantly reduces understanding errors compared to conventional mathematical reasoning distillation method. However,
PoTD implicitly embeds the reasoning process
within the generated program, making it difficult to
analyze misunderstandings. In the future, we will
explore error analysis methods to facilitate PoTD
error analysis.
**References**
Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak
Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng
Chen, Eric Chu, Jonathan H. Clark, Laurent El
Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin
Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao,
Yuanzhong Xu, Yujing Zhang, Gustavo Hernández
Ábrego, Junwhan Ahn, Jacob Austin, Paul Barham,
Jan A. Botha, James Bradbury, Siddhartha Brahma,
Kevin Brooks, Michele Catasta, Yong Cheng, Colin
Cherry, Christopher A. Choquette-Choo, Aakanksha
Chowdhery, Clément Crepy, Shachi Dave, Mostafa
Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz,
Nan Du, Ethan Dyer, Vladimir Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier
Garcia, Sebastian Gehrmann, Lucas Gonzalez, and
et al. 2023. [Palm 2 technical report.](https://doi.org/10.48550/ARXIV.2305.10403) _CoRR,_
abs/2305.10403.
[Anthropic. 2023. Model card and evaluations for claude](https://www-files.anthropic.com/production/images/Model-Card-Claude-2.pdf?dm=1689034733)
[models. Anthropic blog.](https://www-files.anthropic.com/production/images/Model-Card-Claude-2.pdf?dm=1689034733)
Wenhu Chen, Xueguang Ma, Xinyi Wang, and
William W. Cohen. 2023. [Program of thoughts](https://openreview.net/forum?id=YfZ4ZPt8zd)
[prompting: Disentangling computation from reason-](https://openreview.net/forum?id=YfZ4ZPt8zd)
[ing for numerical reasoning tasks. Transactions on](https://openreview.net/forum?id=YfZ4ZPt8zd)
_Machine Learning Research._
-----
Hyung Won Chung, Le Hou, Shayne Longpre, Barret
Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang,
Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan
Narang, Gaurav Mishra, Adams Yu, Vincent Y. Zhao,
Yanping Huang, Andrew M. Dai, Hongkun Yu, Slav
Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam
Roberts, Denny Zhou, Quoc V. Le, and Jason Wei.
[2022. Scaling instruction-finetuned language models.](https://doi.org/10.48550/ARXIV.2210.11416)
_CoRR, abs/2210.11416._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
[2021. Training verifiers to solve math word prob-](http://arxiv.org/abs/2110.14168)
[lems. CoRR, abs/2110.14168.](http://arxiv.org/abs/2110.14168)
[DeepSeek-AI. 2024. Deepseek-v2: A strong, economi-](http://arxiv.org/abs/2405.04434)
[cal, and efficient mixture-of-experts language model.](http://arxiv.org/abs/2405.04434)
Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and
[Tushar Khot. 2023. Specializing smaller language](https://proceedings.mlr.press/v202/fu23d.html)
[models towards multi-step reasoning. In Interna-](https://proceedings.mlr.press/v202/fu23d.html)
_tional Conference on Machine Learning, ICML 2023,_
_23-29 July 2023, Honolulu, Hawaii, USA, volume_
202 of Proceedings of Machine Learning Research,
pages 10421–10430. PMLR.
Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen,
Yujiu Yang, Minlie Huang, Nan Duan, and Weizhu
Chen. 2023. [Tora: A tool-integrated reasoning](https://doi.org/10.48550/ARXIV.2309.17452)
[agent for mathematical problem solving.](https://doi.org/10.48550/ARXIV.2309.17452) _CoRR,_
abs/2309.17452.
Namgyu Ho, Laura Schmid, and Se-Young Yun. 2023.
[Large language models are reasoning teachers. In](https://doi.org/10.18653/v1/2023.acl-long.830)
_Proceedings of the 61st Annual Meeting of the As-_
_sociation for Computational Linguistics (Volume 1:_
_Long Papers), pages 14852–14882, Toronto, Canada._
Association for Computational Linguistics.
Takeshi Kojima, Shixiang (Shane) Gu, Machel Reid, Yu[taka Matsuo, and Yusuke Iwasawa. 2022. Large lan-](https://proceedings.neurips.cc/paper_files/paper/2022/file/8bb0d291acd4acf06ef112099c16f326-Paper-Conference.pdf)
[guage models are zero-shot reasoners. In Advances in](https://proceedings.neurips.cc/paper_files/paper/2022/file/8bb0d291acd4acf06ef112099c16f326-Paper-Conference.pdf)
_Neural Information Processing Systems, volume 35,_
pages 22199–22213. Curran Associates, Inc.
Ariel N. Lee, Cole J. Hunter, and Nataniel Ruiz. 2023.
[Platypus: Quick, cheap, and powerful refinement of](https://doi.org/10.48550/ARXIV.2308.07317)
[llms. CoRR, abs/2308.07317.](https://doi.org/10.48550/ARXIV.2308.07317)
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei
[Lin, Shifeng Chen, and Dongmei Zhang. 2023. Wiz-](https://doi.org/10.48550/ARXIV.2308.09583)
[ardmath: Empowering mathematical reasoning for](https://doi.org/10.48550/ARXIV.2308.09583)
[large language models via reinforced evol-instruct.](https://doi.org/10.48550/ARXIV.2308.09583)
_CoRR, abs/2308.09583._
Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su.
[2020. A diverse corpus for evaluating and developing](https://doi.org/10.18653/v1/2020.acl-main.92)
[English math word problem solvers. In Proceedings](https://doi.org/10.18653/v1/2020.acl-main.92)
_of the 58th Annual Meeting of the Association for_
_Computational Linguistics, pages 975–984, Online._
Association for Computational Linguistics.
OpenAI. 2023. [GPT-4 technical report.](https://doi.org/10.48550/ARXIV.2303.08774) _CoRR,_
abs/2303.08774.
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
[2021. Are NLP models really able to solve simple](https://doi.org/10.18653/v1/2021.naacl-main.168)
[math word problems? In Proceedings of the 2021](https://doi.org/10.18653/v1/2021.naacl-main.168)
_Conference of the North American Chapter of the_
_Association for Computational Linguistics: Human_
_Language Technologies, pages 2080–2094, Online._
Association for Computational Linguistics.
[Subhro Roy and Dan Roth. 2015. Solving general arith-](https://doi.org/10.18653/v1/D15-1202)
[metic word problems. In Proceedings of the 2015](https://doi.org/10.18653/v1/D15-1202)
_Conference on Empirical Methods in Natural Lan-_
_guage Processing, pages 1743–1752, Lisbon, Portu-_
gal. Association for Computational Linguistics.
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten
Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi,
Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom
Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton-Ferrer, Aaron Grattafiori,
Wenhan Xiong, Alexandre Défossez, Jade Copet,
Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, and Gabriel Synnaeve.
[2023. Code llama: Open foundation models for code.](https://doi.org/10.48550/ARXIV.2308.12950)
_CoRR, abs/2308.12950._
Kumar Shridhar, Alessandro Stolfo, and Mrinmaya
[Sachan. 2023. Distilling reasoning capabilities into](https://doi.org/10.18653/v1/2023.findings-acl.441)
[smaller language models. In Findings of the Asso-](https://doi.org/10.18653/v1/2023.findings-acl.441)
_ciation for Computational Linguistics: ACL 2023,_
pages 7059–7073, Toronto, Canada. Association for
Computational Linguistics.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, Dan Bikel, Lukas Blecher, Cristian CantonFerrer, Moya Chen, Guillem Cucurull, David Esiobu,
Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan
Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa,
Isabel Kloumann, Artem Korenev, Punit Singh Koura,
Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten,
Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu,
Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas
[Scialom. 2023. Llama 2: Open foundation and fine-](https://doi.org/10.48550/ARXIV.2307.09288)
[tuned chat models. CoRR, abs/2307.09288.](https://doi.org/10.48550/ARXIV.2307.09288)
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V.
Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023. [Self-consistency](https://openreview.net/pdf?id=1PL1NIMMrw)
[improves chain of thought reasoning in language](https://openreview.net/pdf?id=1PL1NIMMrw)
[models. In The Eleventh International Conference](https://openreview.net/pdf?id=1PL1NIMMrw)
_on Learning Representations, ICLR 2023, Kigali,_
_Rwanda, May 1-5, 2023. OpenReview.net._
-----
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le,
[and Denny Zhou. 2022. Chain of thought prompt-](https://openreview.net/forum?id=_VjQlMeSB_J)
[ing elicits reasoning in large language models. In](https://openreview.net/forum?id=_VjQlMeSB_J)
_Advances in Neural Information Processing Systems._
Qihuang Zhong, Kang Wang, Ziyang Xu, Juhua Liu,
[Liang Ding, Bo Du, and Dacheng Tao. 2024. Achiev-](https://doi.org/10.48550/ARXIV.2404.14963)
[ing >97% on GSM8K: deeply understanding the](https://doi.org/10.48550/ARXIV.2404.14963)
[problems makes llms better reasoners.](https://doi.org/10.48550/ARXIV.2404.14963) _CoRR,_
abs/2404.14963.
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei,
Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Claire Cui, Olivier Bousquet, Quoc V Le, and Ed H.
[Chi. 2023. Least-to-most prompting enables com-](https://openreview.net/forum?id=WZH7099tgfM)
[plex reasoning in large language models. In The](https://openreview.net/forum?id=WZH7099tgfM)
_Eleventh International Conference on Learning Rep-_
_resentations._
Xuekai Zhu, Biqing Qi, Kaiyan Zhang, Xingwei Long,
[and Bowen Zhou. 2023. Pad: Program-aided distil-](https://doi.org/10.48550/ARXIV.2305.13888)
[lation specializes large models in reasoning. CoRR,](https://doi.org/10.48550/ARXIV.2305.13888)
abs/2305.13888.
Xunyu Zhu, Jian Li, Yong Liu, Can Ma, and Weiping
[Wang. 2024. Distilling mathematical reasoning capa-](http://arxiv.org/abs/2401.11864)
[bilities into small language models.](http://arxiv.org/abs/2401.11864)
-----
| [
"Xunyu, Zhu",
"Jian, Li",
"Can, Ma",
"Weiping, Wang"
] | 2024-07-30T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2407.10167 | https://arxiv.org/abs/2407.10167 | https://www.semanticscholar.org/paper/64d587539ec00e7d8c4f2c9e5f3354a20660754b |
KnowledgeFMath: A Knowledge-Intensive Math Reasoning Dataset in Finance Domains | We introduce KnowledgeFMath, a novel benchmark designed to evaluate LLMs’ capabilities in solving knowledge-intensive math reasoning problems. Compared to prior works, this study features three core advancements. First, KnowledgeFMath includes 1,259 problems with a hybrid of textual and tabular content. These problems require college-level knowledge in the finance domain for effective resolution. Second, we provide expert-annotated, detailed solution references in Python program format, ensuring a high-quality benchmark for LLM assessment. We also construct a finance-domain knowledge bank and investigate various knowledge integration strategies. Finally, we evaluate a wide spectrum of 26 LLMs with different prompting strategies like Chain-of-Thought and Program-of-Thought. Our experimental results reveal that the current best-performing system (i.e., GPT-4 with CoT prompting) achieves only 56.6% accuracy, leaving substantial room for improvement. Moreover, while augmenting LLMs with external knowledge can improve their performance (e.g., from 33.5% to 47.1% for GPT-3.5), their accuracy remains significantly lower than the estimated human expert performance of 92%. We believe that KnowledgeFMath can advance future research in the area of domain-specific knowledge retrieval and integration, particularly within the context of solving math reasoning problems. | null | # KnowledgeFMATH: Knowledge-Intensive Math Reasoning in Finance Domains
**Yilun Zhao[∗]** [1] **Hongjun Liu[∗][2][,][3]** **Yitao Long[3]**
**Rui Zhang[4]** **Chen Zhao[2][,][3]** **Arman Cohan[1][,][5]**
1Yale University 2NYU Shanghai 3New York University
4Penn State University 5Allen Institute for AI
We introduce KnowledgeFMATH, a novel
benchmark designed to evaluate LLMs’ capabilities in solving knowledge-intensive math
reasoning problems. Compared to prior works,
this study features three core advancements.
First, KnowledgeFMATH includes 1,259 problems with a hybrid of textual and tabular
content. These problems require collegelevel knowledge in the finance domain for
effective resolution. Second, we provide
expert-annotated, detailed solution references
in Python program format, ensuring a highquality benchmark for LLM assessment. We
also construct a finance-domain knowledge
bank and investigate various knowledge integration strategies. Finally, we evaluate a wide
spectrum of 26 LLMs with different prompting
strategies like Chain-of-Thought and Programof-Thought. Our experimental results reveal
that the current best-performing system (i.e.,
GPT-4 with CoT prompting) achieves only
56.6% accuracy, leaving substantial room for
improvement. Moreover, while augmenting
LLMs with external knowledge can improve
their performance (e.g., from 33.5% to 47.1%
for GPT-3.5), their accuracy remains significantly lower than the estimated human expert
performance of 92%. We believe that KnowledgeFMATH can advance future research in
the area of domain-specific knowledge retrieval
and integration, particularly within the context
of solving math reasoning problems.
[§ github.com/yale-nlp/KnowledgeFMath](https://github.com/yale-nlp/KnowledgeFMath)
**1** **Introduction**
Large language models (LLMs) have been increasingly recognized for their potential for complex
problem-solving in real-world scenarios (OpenAI,
2023a; Touvron et al., 2023; Jiang et al., 2023).
Solving math reasoning problems has emerged as a
_∗Equal Contribution_
Knowl
Propo
Defin
The
accou
and
perce
ventu
Math
None
Question: In 2018, Company A had a passive equity ownership
**interest of 15% in Company B. By the close of 2018, Company**
A decided to increase its ownership in Company B to 50%,
effective as of 1st January 2019, through a cash purchase. There
have been no financial transactions between Company A and
Company B. Based on the data in the following table with the
financial statements for both companies, what would be the
changes in the **total liabilities for Company A under the**
**proportionate consolidation method from 2018 to 2019?**
**Company A** **Company B**
**2018** **2019** **2018** **2019**
**Revenue** 5,000 7,000 2,000 2,500
**Cost** 2,000 2,300 1,200 1,300
**Operating**
**income** 3,000 4,700 800 1,200
**Net profit** 1,650 2,300 460 820
**Dividends paid** - - 230 410
**Total assets** 4,000 6,000 1,000 1,100
**Total liabilities** 1,200 900 600 650
**Equity** 2,800 5,100 400 450
Model Output with Chain-of-Thought Prompting:
First, we know from the table that the total
liabilities for company A in 2018 is 1200.
(...abbreviate…)
Therefore, the final answer is 1,200
Model Output with Program-of-Thought Prompting:
```
def solution():
A_liabilities_2018 = 1200
(…abbreviate)
return change
```
Figure 1: An example of KnowledgeFMATH. To answer the given question, LLMs are required to comprehend specialized financial terms, such as “passive equity
ownership interest” and “proportionate consolidation
method”. Additionally, they must interpret tabular data
within the question and accurately identify questionrelevant data points in the table.
key method for assessing LLMs’ capabilities (Roy
and Roth, 2015; Amini et al., 2019; Cobbe et al.,
2021; Chen et al., 2023c), as it demands both understanding contextual information and reasoning
over complex logics.
Recent advancements in LLMs have led to remarkable progress in solving fundamental math
problems (Wei et al., 2022; Lewkowycz et al., 2022;
Chen et al., 2023b; Wang et al., 2023; Luo et al.,
2023a; Azerbayev et al., 2024). However, as illus
12841
-----
Table KnowledgeDataset Domain Level Source # Examples Solution Format
Reasoning? Intensive?
MAWPS (Koncel-Kedziorski et al., 2016) Math Elem. School Generated 3,320 ✗ ✗ Text
ASDiv (Miao et al., 2020) Math Elem. School Internet 2,305 ✗ ✗ Math Equation
SVAMP (Patel et al., 2021) Math Elem. School ASDiv 1,000 ✗ ✗ Math Equation
Math23K (Wang et al., 2017) Math Elem. School Internet 23,162 ✗ ✗ Math Equation
GSM8K (Cobbe et al., 2021) Math Middle School CrowdSource 8,500 ✗ ✗ Text
MATH (Hendrycks et al., 2021) Math High School Competition 12,500 ✗ ✗ Text
AQuA (Ling et al., 2017) Math College GMAT, GRE 100,000 ✗ ✗ Text
MathQA (Amini et al., 2019) Math College AQuA 100,000 ✗ ✗ Math Equation
MathQA-Python (Austin et al., 2021) Math College AQuA 23,914 ✗ ✗ Python Program
MathVista (Lu et al., 2024) Math Elem. to College Internet+Expert 6,141 Few Few Text
TabMWP (Lu et al., 2023) Math Middle School Textbooks 38,431 ✓ ✗ Text
FinQA (Chen et al., 2021) Finance College Expert 8,281 ✓ ✗ Math Program
TAT-QA (Zhu et al., 2021) Finance College Expert 16, 552 ✓ ✗ Text
MultiHiertt (Zhao et al., 2022) Finance College Expert 10,440 ✓ ✗ Math Equation
DocMath-Eval (Zhao et al., 2023a) Finance College Expert 5,974 ✓ few Python Program
TheoremQA(Chen et al., 2023c) STEM College Internet+Expert 800 ✗ ✓ Text
KnowledgeFMATH (ours) Finance College Internet+Expert 1,259 ✓ ✓ Python Program
Table 1: Comparison between KnowledgeFMATH and existing math reasoning datasets. KnowledgeFMATH
is distinguished by three unique characteristics: (1) Knowledge-Intensive: Problems necessitate domain-specific
knowledge, complemented by a financial knowledge bank for research facilitation; (2) Table Reasoning: 39.0%
of problems incorporate table information, requiring models to understand table structure as well as interpret and
reason over tabular data; (3) Expert Annotation: Each problem is accompanied by a detailed, expert-annotated
Python-formatted solution. Such solution annotation combines the explicitness of code execution with the descriptive
power of natural language explanations in python comment format, offering a more effective and adaptable solution
representation for complex math reasoning problems in KnowledgeFMATH.
trated in Table 1, existing math reasoning benchmarks typically do not require specialized domain
knowledge. This becomes a notable shortcoming
when considering practical applications of LLMs.
Measuring progress in specialized areas such as finance and healthcare typically involves addressing
_domain-specific and knowledge-intensive problems,_
which goes beyond the scope of general mathematical reasoning. Recognizing this gap in the
existing benchmarks, we focus on the finance domain. We chose this domain because, as illustrated
in Figure 1, it often involves scenarios requiring
not only basic mathematical skills but also a deep
understanding of financial concepts (Yang et al.,
2023b; Xie et al., 2023; Wu et al., 2023). Additionally, the finance domain frequently employs tables
to represent data (Zhu et al., 2021; Chen et al.,
2021; Zhao et al., 2022; Li et al., 2022; Zhao et al.,
2023b), which adds another layer of complexity to
the knowledge-intensive problem-solving.
We introduce KnowledgeFMATH, the first
benchmark tailored for evaluating LLMs in the
context of Knowledge-intensive Math reasoning in
the Finance domain. The dataset contains 1,259
problems that cover a broad range of finance subareas (e.g., investment analysis, risk assessment,
and financial forecasting), with 39.0% of the prob
lems necessitating data interpretation over tabular
data. Each problem is accompanied by detailed,
expert-annotated solutions and explanations, providing a comprehensive reference for evaluating the
LLMs’ performance. Additionally, we collect and
release a comprehensive knowledge bank, which
includes detailed definitions and explanations for
1,760 financial terms and concepts, facilitating future research on improving knowledge-intensive
problem-solving through knowledge retrieval.
We evaluate a wide spectrum of open-source
and proprietary LLMs, specifically, 26 model models from 14 organizations. Notably, this includes
_math-specific (Luo et al., 2023a), code-based (Xu_
et al., 2023; Luo et al., 2023b; Tunstall et al.,
2023) LLMs, as well as mixture of experts (MoE)
LLMs (Mistral.AI, 2023). Two prompting methods, Chain-of-Thought (CoT) (Wei et al., 2022) and
Program-of-Thought (PoT) (Chen et al., 2023b),
are adopted for experiments. Our experimental results indicate that all evaluated open-source
LLMs scored below 24% in accuracy using various prompting methods, including CoT and PoT
prompting. Proprietary models perform better, with
GPT-4 significantly outperforming other LLMs,
achieving an accuracy of 56.6% when applying
CoT prompting. However, it still lags far behind
12842
-----
human expert performance in the open-book setting, which stands at 92%. This significant gap
between LLMs and human experts demonstrates
the challenges of KnowledgeFMATH, highlighting
the need for further advancements in LLMs for
knowledge-intensive problem-solving capabilities.
Next, we investigate how to integrate domainspecific knowledge to enhance the problem-solving
capabilities of LLMs. We investigate various popular knowledge integration strategies and reveal
that including question-relevant knowledge into
the prompt can consistently improve LLMs’ performance. This provides insights for future work
to develop more advanced knowledge-augmented
strategies to realize higher performance gains.
Our contributions are summarized below:
- We propose KnowledgeFMATH, the first
knowledge-intensive math reasoning benchmark
in finance domains, aimed at evaluating LLMs’
abilities in knowledge-intensive math reasoning.
- We conduct comprehensive evaluations using a
diverse array of LLMs, uncovering a substantial
performance gap between the best-performing
LLM (i.e., GPT-4) and human experts.
- We present a detailed analysis on augmenting
LLMs with various knowledge integration strategies. This provides valuable insights for future
work in knowledge-intensive problem solving.
**2** **KnowledgeFMATH**
In this section, we describe the dataset construction process for KnowledgeFMATH. We begin by
constructing a knowledge bank that includes wellformulated definitions of 1,760 financial terms. We
then instruct expert annotators to use knowledge
terms within the constructed knowledge bank to
create knowledge-intensive questions with a hybrid
of textual and tabular content.
**2.1** **Knowledge Bank Construction**
We construct a knowledge bank that covers a wide
range of 1,760 knowledge terms in the finance
domain. It simplifies the creation of knowledgeintensive questions by annotators and enables the
exploration of various topics within domain knowledge. The knowledge bank includes financedomain-specific terms (e.g., “exchange rate” and
“net present value”) collected from Wikipedia. Each
knowledge term is accompanied with their corresponding textual definitions and, where applicable,
Knowledge Term:
Exchange Rate
Definition:
An exchange rate is the value or price of one country's currency in
relation to another currency. It determines how much of one currency
can be exchanged for another and can fluctuate regularly based on
market conditions, import and export demand, inflation, and a host of
other economic factors.
Mathematical Formula:
def exchange_rate(original_currency, new_currency):
return original_currency / new_currency
Figure 2: An example of knowledge terms “Exchange
Rate” included in the constructed knowledge bank.
_mathematical formulas in python format. An ex-_
ample of included knowledge terms is illustrated
in Figure 2. We detail the process of 1) knowledge
collection, 2) semi-automated knowledge formulation, and 3) knowledge bank update and maintenance in Appendix A.1. It is worth noting that this
knowledge bank is versatile and can be applied to a
variety of finance-relevant tasks for future research.
**2.2** **KnowledgeFMATH Question Annotation**
For each financial term in the knowledge bank, we
instruct annotators to create a corresponding math
reasoning question, if applicable. The answer to the
composed question should be a numeric value. The
annotators are required to adhere to the following
guidelines for a successful question annotation:
**Question Annotation** If the annotators choose
to adapt questions from textbooks or the Internet
instead of creating their own from scratch, they
are asked to adhere to copyright and license regulations, avoiding data from sites prohibiting copy
and redistribution. Furthermore, they are required
not only to modify the surface-level description
of the question but also to change the associated
numeric values. In light of the emerging concerns
about data contamination in LLMs (Shi et al., 2024;
Deng et al., 2024), we instruct annotators to conduct a Google search for each annotated question,
ensuring that no similar question appears on the
first page of the search results. Additionally, we
recognize that many financial problems involve tables, as shown in Figure 1. Such tabular data plays
a crucial role in thoroughly understanding financial problems, and it presents unique challenges for
LLMs in terms of comprehension and interpretation. Therefore, we encourage and reward annotators to include tables that are relevant and accurately represent the data pertinent to the questions.
Finally, out of 1,259 questions, 674 are marked as
12843
-----
having been adapted from existing resources, and
491 are accompanied with tabular data.
**Identifying Question-relevant Knowledge** After a question is annotated, annotators must identify
1-3 key financial concepts for answering this question. They then search for each term in our constructed knowledge bank. If the term is included,
they verify its context and details for relevance.
If a term is absent or with low-quality definition,
annotators receive a bonus for documenting the
term, providing a brief explanation or definition
and outlining its relevance to the problem. These
identified terms are subsequently added or updated
in the knowledge bank, resulting in a total of 346
new inclusions and 83 revisions.
**2.3** **KnowledgeFMATH Solution Annotation**
As illustrated in Table 1, existing math reasoning
benchmarks typically represent solutions using text
or mathematical equations. However, solutions
in text format often lack the precision and unambiguous nature required for computational problemsolving. Solutions in mathematical equations are
explicit, but less descriptive, as the semantic meaning associated with each numeric value in the equations can be ambiguous. Moreover, these two formats are less adaptable for use in automated systems due to variations in language and difficulties
in semantic parsing and execution.
To overcome these limitations, we use Python
programs, starting with “def solution():”, to
represent solutions. Such Python program combines the explicitness of code execution with the
descriptive power of annotated comments, offering a more effective and adaptable solution representation for complex math reasoning problems.
Specifically, annotators are required to first define
variables with meaningful names at the beginning
of the Python function. These variables correspond
to the key elements or quantities mentioned in the
textual or tabular content of questions. The annotators then proceed to write a sequence of Python
statements that logically solve the problem, step by
step. Additionally, annotators are required to write
detailed comments, making the code more readable and understandable. To ensure the accuracy
and functionality of the Python-format solutions,
our annotation interface automatically executes the
Python function. This execution checks that the
return type of the answer is either a float or an int
and verifies that there are no execution errors.
**Annotation Quality** **%S ≥** **4**
Question Fluency 98.0
Question Correctness 95.3
Knowledge Relevance 94.1
Textual Definition Fluency 93.0
Textual Definition Correctness 94.7
Math Formula Correctness 88.0
Final Answer Correctness 98.0
Python Solution Correctness 96.0
Variable Name Meaningfulness 87.7
Comment Comprehensiveness 83.8
Table 2: Human evaluation over 200 samples of KnowledgeFMATH. Three internal evaluators were asked to
rate the samples on a scale of 1 to 5 individually. We
report percent of samples that have an average score ≥ 4
to indicate the annotation quality of KnowledgeFMATH
**Property** **Value**
**Knowledge Bank**
# Knowledge Terms 1,760
Textual Definition Length (Median/Avg) 64.0 / 69.3
% w. Mathematical Definition 62.8%
**KnowledgeFMATH Dataset**
Question Length (Median/Avg) 49.0 / 55.7
% Questions with Table 39.0 %
# Rows per Table (Median/Avg) 3.0 / 3.2
# Columns per Table (Median/Avg) 6.0 / 6.9
# Knowledge Terms per Example (Median/Avg) 2.5 / 2.4
# Math Operations in Python Solution (Median/Avg) 5.0 / 5.3
# Code Lines in Python Solution (Median/Avg) 5.0 / 6.1
# Comment Lines in Python Solution (Median/Avg) 3.0 / 3.5
Validation Set Size 259
Test Set Size 1,000
Table 3: Basic statistics of the constructed knowledge
bank and KnowledgeFMATH dataset.
**2.4** **Data Quality Validation**
We conduct a comprehensive validation protocol
to ensure the high quality of the annotated data.
For each annotated question, we first assign another annotator to validate whether: 1) the question
is meaningful and grammatically correct, 2) the
associated knowledge terms are accurately annotated and complete, 3) the Python-format solution
is logically correct and easy to understand. Validators are asked to revise examples that do not meet
these standards. We also report the human evaluation scores and inter-evaluator agreements over
200 sampled examples. As illustrated in Table 2,
KnowledgeFMATH has a high annotation quality.
12844
-----
**2.5** **Data Statistics and Dataset Release**
Table 3 describes the basic statistics of KnowledgeFMATH, with topic-type distribution shown
in Figure 4 in Appendix. We randomly divide
the dataset into two subsets:TODO validation and test.
The validation set contains 259 examples and is
intended for model development validation. The
_test set comprises the remaining 1,000 examples_
and is designed for standard evaluation. To prevent
data contamination (Shi et al., 2024; Sainz et al.,
2023; Deng et al., 2024), the answer for the test set
will not be publicly released. Instead, we will develop and maintain an online evaluation platform,
allowing researchers to evaluate models and participate in a leaderboard. Following recent LLM
reasoning benchmarks (Chen et al., 2023c; Yue
et al., 2023; Lu et al., 2024), the main evaluation of
KnowledgeFMATH is conducted under a zero-shot
setting on the test set to assess LLMs’ capabilities
to generate accurate answers without fine-tuning or
few-shot demonstrations on our benchmark.
**2.6** **Human-level Performance Evaluation**
To provide a rough but informative estimate of
human-level performance by non-experts and experts on KnowledgeFMATH, we randomly sampled
50 examples from the validation set. We enroll two
experts, both with the CFA license, and two nonexperts to individually solve these questions.
We first evaluate their performance in a closed_book setting, where the evaluators do not have ac-_
cess to the internet or textbooks and are required to
finish the 50 questions within three hours. The nonexpert evaluators achieve accuracy of 54% and 62%
(average 58%), and the expert evaluators achieve
accuracy of 76% and 70% (average 73%).
We then transition to an open-book setting,
where the evaluators are asked to use the internet
and textbooks to correct their initial errors. This setting is designed to assess how external knowledge
resources could enhance human problem-solving
abilities and accuracy. The non-expert evaluators
improved their accuracy to 86% and 82% (average
84%). Similarly, the expert evaluators improved
the accuracy to 94% and 90% (average 92%).
**3** **Evaluated Systems**
**3.1** **Large Language Models**
We evaluate following LLMs on KnowledgeFMATH:
Chain-of-Thought Prompting Method:
[system prompt]
You are a financial expert, you are supposed to to answer the given
question. You need to output the answer in your final sentence like
'Therefore, the answer is ...'. The answer should be a numeric value.
[user input]
Question: {question}
Table: {table}
Let's think step by step to answer the question.
Program-of-Thought Prompting Method:
[system prompt]
You are a financial expert, you are supposed to generate a Python
program to answer the given question. The returned value of the
program is supposed to be the answer.
[user input]
Question: {question}
Table: {table}
Please generate a Python program to answer the given question.
```python
def solution( ):
Figure 3: Examples of zero-shot CoT and PoT prompts.
- General: GPT-3.5&4 (OpenAI, 2022, 2023a),
Gemini-Pro (Google, 2023), Llama-2 (Touvron et al., 2023), Mistral (Jiang et al., 2023),
MPT (Team, 2023), Falcon (Almazrouei et al.,
2023), WizardLM (Luo et al., 2023b), Yi (01.AI,
2023), Baichuan (Yang et al., 2023a), Phi-1.5 (Li
et al., 2023), and DeepSeek (DeepSeek, 2023).
- Math-specific: WizardMath (Luo et al., 2023a).
- Code-based: CodeLlama (Rozière et al., 2023),
WizardCoder (Luo et al., 2023b), and Lemur (Xu
et al., 2023).
- Mixture of Experts (MoE): Mixtral of experts (Mistral.AI, 2023).
By default, we use chat or instruct versions
for each model, when available, otherwise, we used
their base version. Additionally, we select the
most recent, largest, and best-performing checkpoint available as of paper submission (i.e, December 10th, 2023). All the model weights of evaluated open-sourced LLMs can be found at HuggingFace Model Hub[1]. The implementation details
(i.e., LLM parameter setting, tabular data serialization, and final answer extraction and evaluation)
are discussed in Appendix B.1.
**3.2** **Prompting Methods**
Following recent LLM reasoning benchmark
works (Lu et al., 2024; Chen et al., 2023c), we
[1https://huggingface.co/models](https://huggingface.co/models)
12845
-----
evaluate two established prompting methods, with
examples of prompt illustrated in Figure 3.
**Chain-of-Thought** The CoT method (Wei et al.,
2022; Kojima et al., 2022) instructs the LLMs to
articulate a step-by-step reasoning process. This
leads to a detailed explanation that culminates in
the final answer.
**Program-of-Thought** Different from CoT, the
PoT method (Chen et al., 2023b) disentangles computation from the reasoning process by prompting
the LLMs to generate a structured program to represent the reasoning process. The final answer is
then derived by executing the generated program
with an external calculator.
**4** **Experimental Results**
**4.1** **Main Results**
Table 4 illustrates the performance of the evaluated
LLMs using CoT and PoT prompting methods on
the test set of KnowledgeFMATH. From this, we
draw the following conclusions:
**GPT-* significantly outperforms other open-**
**source LLMs** Proprietary models demonstrate
the best performance on KnowledgeFMATH. Notably, as illustrated in Table 4, GPT-4 significantly
outperforms other LLMs, achieving an accuracy
of 56.6% on KnowledgeFMATH with CoT. In contrast, open-source LLMs significantly lag behind.
Furthermore, our case study in Table 7 shows that
while GPT-* models are capable of performing
complex mathematical calculations, other LLMs
often fail to understand financial terms, leading
to incorrect mathematical expressions. This highlights a critical need for future development efforts
to close the performance gap.
**Substantial Discrepancy in Performance Be-**
**tween Human Experts and LLMs** Even the
best-performing LLM, GPT-4, performs much
worse than human experts. For instance, the accuracy of GPT-4 using the CoT prompting method
stands at 56.6%, falling short of the 92% accuracy
achieved by expert evaluators in the open-book
setting. This gap highlights the critical need for further advancements in LLMs, especially in tackling
complex problem-solving tasks within specialized
domains that are knowledge-intensive.
**Analysis of Open-sourced LLMs** Among opensource LLMs with CoT prompting, Mixtral MoE
achieves the best performance, demonstrating the
effectiveness of applying a mixture of experts
framework. Moreover, WizardMath also performs
well as it is further instruction-tuned to learn
mathematical reasoning. Moreover, among opensource LLMs with PoT prompting methods, Lemur
achieves better performance than its backbone (i.e.,
Llama-2), demonstrating the effectiveness of tuning LLMs on code-based tasks for enhanced reasoning and coding capabilities.
**4.2** **Program-of-Thought Analysis – LLMs’**
**Ability to Generate Executable Programs**
We observe that the PoT prompting method consistently improves performance over the CoT method
in GPT-* models and code-based LLMs. In contrast, the performance of several general LLMs,
such as Mistral and WizardLM degrades with PoT
prompting. To better analyze the reasons for these
differing performance outcomes, we examine the
execution rate of each LLM under PoT prompting, measuring how many of the generated Python
programs are executable. Figure 6 illustrates the
relationship between execution rate and accuracy
across different models. It demonstrates that the
degraded performance when applying PoT prompting is attributable to the low execution rate. For
instance, although WizardLM achieves competitive
performance with CoT, it struggles to consistently
generate executable Python solutions, leading to
lower accuracy with the PoT prompting approach.
**4.3** **Case Study and Error Analysis**
In Table 4, we observe that GPT-* models significantly outperform other LLMs. Notably, the latest
GPT-4 version achieves an accuracy of 56.6% using CoT prompting, closely approaching the nonexpert human-level performance in the close-book
setting (i.e., 58%). To gain a deeper insight into the
capabilities and limitations of GPT-* on our dataset,
we conducted a comprehensive error analysis and
case studies. This was based on 100 randomly
sampled examples from KnowledgeFMATH devel_opment set where GPT-3.5-1106 exhibited failures._
We identify four common mistakes that the current
LLMs are likely to make (i.e., misinterpretation of
_required knowledge, computation error, table mis-_
_understanding, and question misunderstanding)._
We provide detailed examples and explanations for
each error type in Table 6 in Appendix. Moreover,
we also present case study of model output from
various LLMs with CoT prompting, as shown in
12846
-----
**Quantitative** **Derivatives** **Accounting** **Management** **Portfolio** **Economics** **Corporate** **Avg.**
**Model** **Size** **Notes**
CoT PoT CoT PoT CoT PoT CoT PoT CoT PoT CoT PoT CoT PoT **CoT PoT**
Close-book
Non-Expert 58.0
Expert 73.0
Open-book
Non-Expert 84.0
Expert 92.0
GPT-4-1106-preview – – 66.8 66.9 50.8 54.1 54.2 34.4 40.5 51.4 66.7 66.7 57.7 50.4 65.2 58.7 56.6 53.1
GPT-4-0613 – – 58.2 67.7 43.1 50.8 43.1 32.1 36.5 50.0 44.1 68.8 43.1 51.1 56.5 63.0 46.3 52.1
GPT-3.5-1106 – – 42.6 56.4 26.2 27.7 32.1 19.5 28.4 28.4 32.3 45.2 35.0 32.8 28.3 32.6 32.4 33.9
GPT-3.5-0613 – – 37.5 49.0 17.7 24.4 24.4 17.2 21.6 24.3 16.1 39.8 27.7 29.2 17.4 26.1 24.3 29.6
Mixtral 8x7B MoE 30.9 15.2 18.7 10.5 23.7 6.1 14.9 13.5 25.8 29.0 24.1 11.7 30.4 8.7 23.5 12.2
Deepseek 67B – 29.7 19.5 20.5 15.4 22.5 22.2 20.3 13.5 24.7 21.6 23.4 7.3 17.4 10.1 23.3 16.7
Gemini-Pro – – 31.6 29.6 15.4 16.7 24.0 17.6 12.2 21.6 21.5 35.5 26.3 19.7 17.4 17.4 22.0 21.5
WizardMath 70B Math 21.1 2.3 15.1 0.9 22.5 1.4 16.2 0.0 21.5 0.0 17.5 5.3 15.2 2.6 18.1 1.7
WizardLM 70B – 23.8 11.3 12.8 10.5 14.5 7.3 16.2 8.1 17.2 14.0 16.8 11.7 13.0 13.0 16.4 10.3
Lemur 70B Code-based 19.9 16.8 12.8 6.8 17.6 8.7 17.6 17.8 12.9 15.1 16.1 11.4 21.7 10.4 16.2 11.3
Llama 2 70B – 19.5 10.5 13.6 6.9 12.6 8.4 8.1 8.1 14.0 8.6 17.5 11.0 17.4 13.0 14.9 8.8
Falcon 180B – 14.1 6.5 13.3 2.5 11.1 2.3 9.5 3.2 8.6 6.5 14.6 5.3 10.9 0.0 12.5 3.8
Llama 2 13B – 8.2 7.0 8.5 3.3 11.5 5.0 16.2 6.8 8.6 8.6 11.7 7.3 10.9 4.4 9.9 5.5
Yi 34B – 10.6 3.5 6.4 2.8 9.9 3.1 8.1 1.4 2.2 5.4 12.4 1.5 13.0 0.0 8.7 2.9
Llama 2 7B – 7.8 2.0 6.9 1.8 8.0 1.5 8.1 0.0 5.4 3.2 11.7 0.7 6.5 2.2 7.8 1.7
WizardCoder-Py 34B Code-based 8.2 4.2 5.6 0.3 9.5 0.5 4.1 1.6 8.6 3.9 10.2 1.8 4.4 0.0 7.6 1.6
MPT 30B – 7.4 3.5 6.7 1.0 5.7 1.5 4.1 0.0 4.3 1.1 12.4 3.7 6.5 2.2 6.9 1.9
Baichuan2 13B – 8.2 3.9 6.4 2.1 8.0 1.2 5.4 0.0 2.2 1.1 8.0 2.2 4.4 2.2 6.8 2.1
Mistral 7B – 10.2 5.1 4.9 1.9 8.0 2.8 2.7 1.6 7.5 2.6 5.1 4.4 4.4 2.6 6.7 3.0
Vicuna 33B – 5.5 5.5 5.4 4.1 5.3 6.5 4.1 4.1 2.2 8.6 5.1 8.0 8.7 4.4 5.2 5.6
Phi-2 2.7B – 9.4 3.5 2.1 1.5 3 1.2 6.8 4.1 2.2 3.2 5.8 1.5 2.2 0.0 4.9 2.1
CodeLlama 34B Code-based 6.3 5.6 5.1 1.9 2.3 2.8 2.7 3.2 2.2 3.9 8.0 0.9 2.2 2.6 4.6 3.0
Llama 1 65B – 3.5 1.6 2.3 1.0 4.6 0.4 0.0 0.0 5.4 0.0 2.9 1.5 6.5 2.2 3.3 1.0
CodeLlama 7B Code-based 3.9 1.9 2.8 2.8 1.9 1.4 1.4 0.1 0.0 1.1 5.8 2.6 4.4 6.5 2.9 2.2
Phi-1_5 1.3B – 3 0.0 2.3 0.3 2.7 0.0 1.4 1.4 0.0 0.0 0.7 2.2 0.0 0.0 1.9 0.4
Llama 1 7B – 1.2 0.4 2.1 1.0 1.9 0.0 0.0 0.0 1.1 0.0 3.7 0.0 0.0 0.0 1.8 0.4
Table 4: Results of Chain-of-Thought and Program-of-Thought prompting on the test set of KnowledgeFMATH. We
use average Accuracy using CoT prompting as the ranking indicator of model performance. Numbers underscored
indicate that models with PoT prompting achieves better results than with CoT prompting.
Table 7 in the Appendix.
**5** **Knowledge Augmentation Analysis**
In this section, we provide a comprehensive analysis to understand the performance of LLMs and
the quality of knowledge incorporated into the input context, aiming to provide insights for future
work on knowledge augmentation in LLMs to solve
knowledge-intensive tasks.
**5.1** **Evaluated Knowledge-Augmented Method**
We develop and evaluate various knowledgeaugmented approaches. For each setting, we include the definition of question-relevant knowledge
terms within the prompts (Figure 5 in Appendix).
- Oracle: To investigate the headroom in knowledge augmentation, we use an oracle setting,
where the ground-truth knowledge terms associated with the question (Section 2.2) are included.
- LLM as Knowledge Base: Recent work (Petroni
et al., 2019; Kang et al., 2023) demonstrates that
LLMs themselves can effectively serve as knowledge bases. Therefore, we prompt LLMs to first
identify the financial terms required to answer the
question. They then generate definitions of each
identified knowledge term using the inherent data
memorization capabilities.
- Knowledge Retrieval: We use the question as
the retrieval query to the constructed knowledge
bank. We investigate 1) BM25 as sparse retriever and 2) OpenAI Ada Embedding[2] as dense
retriever to retrieve the top-n question-relevant
knowledge terms from knowledge bank.
- LLM-Instructed Knowledge Retrieval: While
the method of using “LLM as Knowledge Base”
can effectively identify the knowledge required
to answer a question, it is likely to produce
knowledge definitions that are not entirely accurate (Chen et al., 2023a; Peng et al., 2023).
To address this issue of unfaithfulness, we harness the power of external knowledge retrieval
[2https://platform.openai.com/docs/guides/](https://platform.openai.com/docs/guides/embeddings)
[embeddings, we use the text-embedding-ada-002 version.](https://platform.openai.com/docs/guides/embeddings)
12847
-----
experts in close-book setting (i.e., 92.0%). This
highlights the need for future work on developing
more advanced domain-specific knowledge integration methods. Table 8 and Table 9 in Appendix C
present a case study on effectiveness of various
knowledge integration strategies.
**6** **Related Work**
The development of general-purpose intelligent systems is significantly dependent on the foundational
aspect of mathematical reasoning, a topic that has
garnered considerable attention in the academic
community. As illustrated in Table 1, researchers
have proposed a wide spectrum of math reasoning datasets that cater to a variety of educational
levels, ranging from elementary school to college.
However, these math reasoning benchmarks typically do not require specialized domain knowledge, a notable shortcoming when considering the
practical applications of LLMs. Therefore, recent
work has investigated the LLMs’ capabilities in
knowledge-intensive problem solving. For example, Chen et al. (2023c) collected a theorem-driven
question-answering dataset, designed to evaluate
AI models’ ability to apply theorems in solving
challenging science problems. Contemporary to
our work, MMMU (Yue et al., 2023) and MathVista (Lu et al., 2024) include examples that require
complex visual reasoning in expert domains.
**7** **Conclusion**
This paper introduces KnowledgeFMATH, aimed
at assessing LLMs in knowledge-intensive math
reasoning. Our comprehensive evaluations of 26
LLMs, using both CoT and PoT prompting methods, identify significant areas where LLMs need
to enhance their specialized knowledge for complex problem-solving in expert domains. Additionally, our knowledge augmentation analysis indicate
that integrating domain-specific knowledge can improve LLMs’ problem-solving abilities. We believe this research provides valuable insights for future work in advancing LLMs in complex problemsolving within expert domains.
**Limitations**
In this work, we propose KnowledgeFMATH and
conduct comprehensive analysis of different LLMs’
capabilities in solving knowledge-intensive math
reasoning problems in finance domains. However,
there are still some limitations: (1) Our method for
Setting Llama-2-70B GPT-3.51106
_wo. knowledge augmentation_ 14.3 33.5
LLM as Knowledge Base 13.9 (-0.4) 34.4 (+0.9)
BM25 (n = 3)
Vanilla Retrieval 13.9 (-0.4) 35.1 (+1.6)
LLM as Retrieval Re-Ranker 16.2 (+1.9) 37.1 (+3.6)
LLM-instructed Retrieval 16.2 (+1.9) 40.5 (+7.0)
OpenAI Ada Embed. (n = 3)
Vanilla Retrieval 14.7 (+0.4) 37.1 (+3.6)
LLM as Retrieval Re-Ranker 16.6 (+2.3) 39.7 (+6.2)
LLM-instructed Retrieval 17.0 (+2.7) 41.3 (+7.8)
OpenAI Ada Embed. (n = 5)
Vanilla Retrieval 14.7 (+0.4) 36.9 (+3.0)
LLM as Retrieval Re-Ranker 17.8 (+3.5) 40.5 (+7.0)
LLM-instructed Retrieval 18.9 (+4.6) 41.3 (+7.8)
Oracle 25.1 (+10.8) 47.1 (+13.6)
Table 5: Results of CoT prompting approach under
different knowledge augmentation settings on the devel_opment set of KnowledgeFMATH._
for obtaining more trustworthy knowledge definitions. Specifically, instead of using the original
question as the retrieval query, we utilize each
knowledge term along with its definition generated from the “LLM as Knowledge Base”. This
approach provides a more informative and semantically similar basis for knowledge retrieval.
- LLM as Retrieval Re-Ranker: Recent studies
have demonstrated LLMs’ competitive capabilities in re-ranking retrieved candidates to output a
more precise list (Sun et al., 2023). Therefore, in
this setting, we first use retriever in “Knowledge
Retrieval” to retrieve top-3n candidates. Subsequently, we prompt LLMs to select top-n most
relevant knowledge terms from this candidate set.
**5.2** **Experimental Results**
As illustrated in Table 5, improving the questionrelevance of incorporated knowledge can consistently improve the LLMs’ performance. Specifically, LLMs equipped with retrieved knowledge
from Ada Embedding consistently outperform
those using retrieved knowledge from BM25. This
is due to the more advanced capabilities of the Ada
Embedding-based retriever. Among different LLMaided retrieval strategies, LLM-Instructed Knowl_edge Retrieval achieves the best performance,_
demonstrating the effectiveness of using refined
queries for knowledge retrieval. Nevertheless, it is
worth noting that even when incorporated with the
ground-truth knowledge (i.e., the oracle setting),
GPT-3.5 still performs much worse than human
12848
-----
extracting final answer from model output (Appendix B.1) is still not perfect. In some cases,
this methods fails to locate the answer, leading to
the reported accuracy being an approximate lower
bound. (2) In our experiment, we regard tables in
the question as textual input (Appendix B.1). However, in real-world scenarios, tabular data might
appear as images, where people cannot obtain its
textual content directly. In these cases, OCR tools
to extract table content (Du et al., 2020) or LLMs
with vision capabilities (OpenAI, 2023b; Yue et al.,
2023; Lu et al., 2024) may be required. (3) Due
to computational resource constraints, we do not
tune LLMs on a large-scale finance-domain data
ourselves. However, we believe that training on finance data can help improve LLMs’ capabilities in
solving knowledge-intensive financial problems.
**Acknowledgement**
We are grateful for the compute support provided
by Microsoft Research’s Accelerate Foundation
Models Research (AFMR) program. We would
also like to thank the anonymous reviewers and
area chairs for constructive discussions and feedback. Hongjun Liu and Chen Zhao are supported
by Shanghai Frontiers Science Center of Artificial
Intelligence and Deep Learning, NYU Shanghai.
**References**
[01.AI. 2023. Yi: Open-source llm release.](https://01.ai/)
Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru,
Maitha Alhammadi, Mazzotta Daniele, Daniel Heslow, Julien Launay, Quentin Malartic, Badreddine
Noune, Baptiste Pannier, and Guilherme Penedo.
[2023. The falcon series of language models: To-](https://arxiv.org/abs/2311.16867)
[wards open frontier models.](https://arxiv.org/abs/2311.16867)
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik
Koncel-Kedziorski, Yejin Choi, and Hannaneh Ha[jishirzi. 2019. MathQA: Towards interpretable math](https://doi.org/10.18653/v1/N19-1245)
[word problem solving with operation-based for-](https://doi.org/10.18653/v1/N19-1245)
[malisms. In Proceedings of the 2019 Conference](https://doi.org/10.18653/v1/N19-1245)
_of the North American Chapter of the Association for_
_Computational Linguistics: Human Language Tech-_
_nologies, Volume 1 (Long and Short Papers), pages_
2357–2367, Minneapolis, Minnesota. Association for
Computational Linguistics.
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten
Bosma, Henryk Michalewski, David Dohan, Ellen
Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. 2021.
[Program synthesis with large language models. arXiv](https://arxiv.org/abs/2108.07732)
_preprint arXiv:2108.07732._
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster,
Marco Dos Santos, Stephen Marcus McAleer, Albert Q. Jiang, Jia Deng, Stella Biderman, and Sean
[Welleck. 2024. Llemma: An open language model](https://openreview.net/forum?id=4WnqRR915j)
[for mathematics.](https://openreview.net/forum?id=4WnqRR915j)
Liang Chen, Yang Deng, Yatao Bian, Zeyu Qin, Bingzhe
[Wu, Tat-Seng Chua, and Kam-Fai Wong. 2023a. Be-](https://openreview.net/forum?id=clTPP37Rpu)
[yond factuality: A comprehensive evaluation of large](https://openreview.net/forum?id=clTPP37Rpu)
[language models as knowledge generators. In The](https://openreview.net/forum?id=clTPP37Rpu)
_2023 Conference on Empirical Methods in Natural_
_Language Processing._
[Wenhu Chen. 2023. Large language models are few(1)-](https://doi.org/10.18653/v1/2023.findings-eacl.83)
[shot table reasoners.](https://doi.org/10.18653/v1/2023.findings-eacl.83) In Findings of the Associa_tion for Computational Linguistics: EACL 2023,_
pages 1120–1130, Dubrovnik, Croatia. Association
for Computational Linguistics.
Wenhu Chen, Xueguang Ma, Xinyi Wang, and
William W. Cohen. 2023b. [Program of thoughts](https://openreview.net/forum?id=YfZ4ZPt8zd)
[prompting: Disentangling computation from reason-](https://openreview.net/forum?id=YfZ4ZPt8zd)
[ing for numerical reasoning tasks. Transactions on](https://openreview.net/forum?id=YfZ4ZPt8zd)
_Machine Learning Research._
Wenhu Chen, Ming Yin, Max Ku, Pan Lu, Yixin Wan,
Xueguang Ma, Jianyu Xu, Xinyi Wang, and Tony
[Xia. 2023c. TheoremQA: A theorem-driven question](https://doi.org/10.18653/v1/2023.emnlp-main.489)
[answering dataset. In Proceedings of the 2023 Con-](https://doi.org/10.18653/v1/2023.emnlp-main.489)
_ference on Empirical Methods in Natural Language_
_Processing, pages 7889–7901, Singapore. Associa-_
tion for Computational Linguistics.
Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena
Shah, Iana Borova, Dylan Langdon, Reema Moussa,
Matt Beane, Ting-Hao Huang, Bryan Routledge, and
[William Yang Wang. 2021. FinQA: A dataset of nu-](https://doi.org/10.18653/v1/2021.emnlp-main.300)
[merical reasoning over financial data. In Proceedings](https://doi.org/10.18653/v1/2021.emnlp-main.300)
_of the 2021 Conference on Empirical Methods in Nat-_
_ural Language Processing, pages 3697–3711, Online_
and Punta Cana, Dominican Republic. Association
for Computational Linguistics.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
[Nakano, et al. 2021. Training verifiers to solve math](https://arxiv.org/abs/2110.14168)
[word problems. arXiv preprint arXiv:2110.14168.](https://arxiv.org/abs/2110.14168)
DeepSeek. 2023. Deepseek llm: Let there be
answers. [https://github.com/deepseek-ai/](https://github.com/deepseek-ai/DeepSeek-LLM)
[DeepSeek-LLM.](https://github.com/deepseek-ai/DeepSeek-LLM)
Chunyuan Deng, Yilun Zhao, Xiangru Tang, Mark Ger[stein, and Arman Cohan. 2024. Investigating data](http://arxiv.org/abs/2311.09783)
[contamination in modern benchmarks for large lan-](http://arxiv.org/abs/2311.09783)
[guage models.](http://arxiv.org/abs/2311.09783)
Yuning Du, Chenxia Li, Ruoyu Guo, Xiaoting Yin,
Weiwei Liu, Jun Zhou, Yifan Bai, Zilin Yu, Yehua
[Yang, Qingqing Dang, et al. 2020. Pp-ocr: A prac-](https://arxiv.org/abs/2009.09941)
[tical ultra lightweight ocr system. arXiv preprint](https://arxiv.org/abs/2009.09941)
_arXiv:2009.09941._
[Google. 2023. Gemini.](https://deepmind.google/technologies/gemini/#introduction)
12849
-----
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
[Jacob Steinhardt. 2021. Measuring mathematical](https://openreview.net/forum?id=7Bywt2mQsCe)
[problem solving with the MATH dataset.](https://openreview.net/forum?id=7Bywt2mQsCe)
Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego
de las Casas, Florian Bressand, Gianna Lengyel, Guil[laume Lample, Lucile Saulnier, et al. 2023. Mistral](https://arxiv.org/abs/2310.06825)
[7b. arXiv preprint arXiv:2310.06825.](https://arxiv.org/abs/2310.06825)
Minki Kang, Seanie Lee, Jinheon Baek, Kenji
[Kawaguchi, and Sung Ju Hwang. 2023. Knowledge-](https://openreview.net/forum?id=xJLEQQrFia)
[augmented reasoning distillation for small language](https://openreview.net/forum?id=xJLEQQrFia)
[models in knowledge-intensive tasks.](https://openreview.net/forum?id=xJLEQQrFia) In Thirty_seventh Conference on Neural Information Process-_
_ing Systems._
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yu[taka Matsuo, and Yusuke Iwasawa. 2022. Large lan-](https://openreview.net/forum?id=e2TBb5y0yFf)
[guage models are zero-shot reasoners.](https://openreview.net/forum?id=e2TBb5y0yFf)
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate
[Kushman, and Hannaneh Hajishirzi. 2016. MAWPS:](https://doi.org/10.18653/v1/N16-1136)
[A math word problem repository. In Proceedings of](https://doi.org/10.18653/v1/N16-1136)
_the 2016 Conference of the North American Chapter_
_of the Association for Computational Linguistics: Hu-_
_man Language Technologies, pages 1152–1157, San_
Diego, California. Association for Computational
Linguistics.
Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying
Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E.
[Gonzalez, Hao Zhang, and Ion Stoica. 2023. Effi-](https://arxiv.org/abs/2309.06180)
[cient memory management for large language model](https://arxiv.org/abs/2309.06180)
[serving with pagedattention. In Proceedings of the](https://arxiv.org/abs/2309.06180)
_ACM SIGOPS 29th Symposium on Operating Systems_
_Principles._
Aitor Lewkowycz, Anders Andreassen, David Dohan,
Ethan Dyer, Henryk Michalewski, Vinay Ramasesh,
Ambrose Slone, Cem Anil, Imanol Schlag, Theo
[Gutman-Solo, et al. 2022. Solving quantitative rea-](https://arxiv.org/abs/2206.14858)
[soning problems with language models. Advances](https://arxiv.org/abs/2206.14858)
_in Neural Information Processing Systems, 35:3843–_
3857.
Moxin Li, Fuli Feng, Hanwang Zhang, Xiangnan He,
[Fengbin Zhu, and Tat-Seng Chua. 2022. Learning](https://doi.org/10.18653/v1/2022.acl-long.5)
[to imagine: Integrating counterfactual thinking in](https://doi.org/10.18653/v1/2022.acl-long.5)
[neural discrete reasoning. In Proceedings of the 60th](https://doi.org/10.18653/v1/2022.acl-long.5)
_Annual Meeting of the Association for Computational_
_Linguistics (Volume 1: Long Papers), pages 57–69,_
Dublin, Ireland. Association for Computational Linguistics.
Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie
Del Giorno, Suriya Gunasekar, and Yin Tat Lee. 2023.
[Textbooks are all you need ii: phi-1.5 technical re-](https://arxiv.org/abs/2309.05463)
[port. arXiv preprint arXiv:2309.05463.](https://arxiv.org/abs/2309.05463)
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blun[som. 2017. Program induction by rationale genera-](https://doi.org/10.18653/v1/P17-1015)
[tion: Learning to solve and explain algebraic word](https://doi.org/10.18653/v1/P17-1015)
[problems. In Proceedings of the 55th Annual Meet-](https://doi.org/10.18653/v1/P17-1015)
_ing of the Association for Computational Linguistics_
_(Volume 1: Long Papers), pages 158–167, Vancouver,_
Canada. Association for Computational Linguistics.
Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, KaiWei Chang, Michel Galley, and Jianfeng Gao. 2024.
[Mathvista: Evaluating math reasoning in visual con-](https://openreview.net/forum?id=KUNzEQMWU7)
[texts with gpt-4v, bard, and other large multimodal](https://openreview.net/forum?id=KUNzEQMWU7)
[models.](https://openreview.net/forum?id=KUNzEQMWU7)
Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu,
Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark,
[and Ashwin Kalyan. 2023. Dynamic prompt learning](https://openreview.net/forum?id=DHyHRBwJUTN)
[via policy gradient for semi-structured mathematical](https://openreview.net/forum?id=DHyHRBwJUTN)
[reasoning. In The Eleventh International Conference](https://openreview.net/forum?id=DHyHRBwJUTN)
_on Learning Representations._
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei
[Lin, Shifeng Chen, and Dongmei Zhang. 2023a. Wiz-](https://arxiv.org/abs/2308.09583)
[ardmath: Empowering mathematical reasoning for](https://arxiv.org/abs/2308.09583)
[large language models via reinforced evol-instruct.](https://arxiv.org/abs/2308.09583)
_arXiv preprint arXiv:2308.09583._
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo
Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. 2023b. [Wizardcoder:](https://arxiv.org/abs/2306.08568)
[Empowering code large language models with evol-](https://arxiv.org/abs/2306.08568)
[instruct. arXiv preprint arXiv:2306.08568.](https://arxiv.org/abs/2306.08568)
Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su.
[2020. A diverse corpus for evaluating and developing](https://doi.org/10.18653/v1/2020.acl-main.92)
[English math word problem solvers. In Proceedings](https://doi.org/10.18653/v1/2020.acl-main.92)
_of the 58th Annual Meeting of the Association for_
_Computational Linguistics, pages 975–984, Online._
Association for Computational Linguistics.
[Mistral.AI. 2023. Mixtral of experts: A high quality](https://mistral.ai/news/mixtral-of-experts/)
[sparse mixture-of-experts.](https://mistral.ai/news/mixtral-of-experts/)
[OpenAI. 2022. Chatgpt: Optimizing language models](https://openai.com/blog/chatgpt/)
[for dialogue.](https://openai.com/blog/chatgpt/)
OpenAI. 2023a. [Gpt-4 technical report.](https://api.semanticscholar.org/CorpusID:257532815) _ArXiv,_
abs/2303.08774.
[OpenAI. 2023b. Gpt-4v(ision) system card.](https://api.semanticscholar.org/CorpusID:263218031)
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
[2021. Are NLP models really able to solve simple](https://doi.org/10.18653/v1/2021.naacl-main.168)
[math word problems? In Proceedings of the 2021](https://doi.org/10.18653/v1/2021.naacl-main.168)
_Conference of the North American Chapter of the_
_Association for Computational Linguistics: Human_
_Language Technologies, pages 2080–2094, Online._
Association for Computational Linguistics.
Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng,
Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou
[Yu, Weizhu Chen, and Jianfeng Gao. 2023. Check](http://arxiv.org/abs/2302.12813)
[your facts and try again: Improving large language](http://arxiv.org/abs/2302.12813)
[models with external knowledge and automated feed-](http://arxiv.org/abs/2302.12813)
[back.](http://arxiv.org/abs/2302.12813)
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel,
Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and
12850
-----
[Alexander Miller. 2019. Language models as knowl-](https://doi.org/10.18653/v1/D19-1250)
[edge bases?](https://doi.org/10.18653/v1/D19-1250) In Proceedings of the 2019 Confer_ence on Empirical Methods in Natural Language Pro-_
_cessing and the 9th International Joint Conference_
_on Natural Language Processing (EMNLP-IJCNLP),_
pages 2463–2473, Hong Kong, China. Association
for Computational Linguistics.
[Subhro Roy and Dan Roth. 2015. Solving general arith-](https://doi.org/10.18653/v1/D15-1202)
[metic word problems. In Proceedings of the 2015](https://doi.org/10.18653/v1/D15-1202)
_Conference on Empirical Methods in Natural Lan-_
_guage Processing, pages 1743–1752, Lisbon, Portu-_
gal. Association for Computational Linguistics.
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle,
Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi
Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom
Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish
Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal
Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier,
[Thomas Scialom, and Gabriel Synnaeve. 2023. Code](http://arxiv.org/abs/2308.12950)
[llama: Open foundation models for code.](http://arxiv.org/abs/2308.12950)
Oscar Sainz, Jon Ander Campos, Iker García-Ferrero,
Julen Etxaniz, Oier Lopez de Lacalle, and Eneko
Agirre. 2023. [Nlp evaluation in trouble: On the](http://arxiv.org/abs/2310.18018)
[need to measure llm data contamination for each](http://arxiv.org/abs/2310.18018)
[benchmark.](http://arxiv.org/abs/2310.18018)
Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo
Huang, Daogao Liu, Terra Blevins, Danqi Chen, and
[Luke Zettlemoyer. 2024. Detecting pretraining data](https://openreview.net/forum?id=zWqr3MQuNs)
[from large language models.](https://openreview.net/forum?id=zWqr3MQuNs)
Weiwei Sun, Lingyong Yan, Xinyu Ma, Shuaiqiang
Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, and
[Zhaochun Ren. 2023. Is ChatGPT good at search?](https://doi.org/10.18653/v1/2023.emnlp-main.923)
[investigating large language models as re-ranking](https://doi.org/10.18653/v1/2023.emnlp-main.923)
[agents. In Proceedings of the 2023 Conference on](https://doi.org/10.18653/v1/2023.emnlp-main.923)
_Empirical Methods in Natural Language Process-_
_ing, pages 14918–14937, Singapore. Association for_
Computational Linguistics.
[MosaicML NLP Team. 2023. Introducing mpt-7b: A](https://mosaicml.com/blog/mpt-7b)
[new standard for open-source, commercially usable](https://mosaicml.com/blog/mpt-7b)
[llms. Accessed: 2023-03-28.](https://mosaicml.com/blog/mpt-7b)
Hugo Touvron, Louis Martin, Kevin R. Stone, Peter
Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava,
Shruti Bhosale, Daniel M. Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull,
David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin
Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami,
Naman Goyal, Anthony S. Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor
Kerkez, Madian Khabsa, Isabel M. Kloumann, A. V.
Korenev, Punit Singh Koura, Marie-Anne Lachaux,
Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai
Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov,
Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew
Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan
Saladi, Alan Schelten, Ruan Silva, Eric Michael
Smith, R. Subramanian, Xia Tan, Binh Tang, Ross
Taylor, Adina Williams, Jian Xiang Kuan, Puxin
Xu, Zhengxu Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and
[Thomas Scialom. 2023. Llama 2: Open foundation](https://arxiv.org/abs/2307.09288)
[and fine-tuned chat models.](https://arxiv.org/abs/2307.09288)
Lewis Tunstall, Nathan Lambert, Nazneen Rajani, Edward Beeching, Teven Le Scao, Leandro von Werra, Sheon Han, Philipp Schmid,
and Alexander Rush. 2023. Creating a coding
assistant with starcoder. _Hugging Face Blog._
Https://huggingface.co/blog/starchat.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le,
Ed H. Chi, Sharan Narang, Aakanksha Chowdhery,
[and Denny Zhou. 2023. Self-consistency improves](https://openreview.net/forum?id=1PL1NIMMrw)
[chain of thought reasoning in language models. In](https://openreview.net/forum?id=1PL1NIMMrw)
_The Eleventh International Conference on Learning_
_Representations._
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017.
[Deep neural solver for math word problems. In Pro-](https://doi.org/10.18653/v1/D17-1088)
_ceedings of the 2017 Conference on Empirical Meth-_
_ods in Natural Language Processing, pages 845–854,_
Copenhagen, Denmark. Association for Computational Linguistics.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le,
[and Denny Zhou. 2022. Chain of thought prompt-](https://openreview.net/forum?id=_VjQlMeSB_J)
[ing elicits reasoning in large language models. In](https://openreview.net/forum?id=_VjQlMeSB_J)
_Advances in Neural Information Processing Systems._
Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski,
Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David Rosenberg, and Gideon Mann. 2023.
[Bloomberggpt: A large language model for finance.](https://api.semanticscholar.org/CorpusID:257833842)
_ArXiv, abs/2303.17564._
Qianqian Xie, Weiguang Han, Xiao Zhang, Yanzhao
Lai, Min Peng, Alejandro Lopez-Lira, and Jimin
[Huang. 2023. Pixiu: A large language model, in-](https://api.semanticscholar.org/CorpusID:259129602)
[struction data and evaluation benchmark for finance.](https://api.semanticscholar.org/CorpusID:259129602)
_ArXiv, abs/2306.05443._
Yiheng Xu, Hongjin Su, Chen Xing, Boyu Mi, Qian
Liu, Weijia Shi, Binyuan Hui, Fan Zhou, Yitao Liu,
Tianbao Xie, Zhoujun Cheng, Siheng Zhao, Lingpeng Kong, Bailin Wang, Caiming Xiong, and Tao
[Yu. 2023. Lemur: Harmonizing natural language and](http://arxiv.org/abs/2310.06830)
[code for language agents.](http://arxiv.org/abs/2310.06830)
Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang,
Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang,
Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng
Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao,
Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, JunTao Dai, Kun Fang, Lei Su,
Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang
Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li,
Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong
Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin
Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li,
12851
-----
Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan
[Zhou, and Zhiying Wu. 2023a. Baichuan 2: Open](http://arxiv.org/abs/2309.10305)
[large-scale language models.](http://arxiv.org/abs/2309.10305)
Hongyang Yang, Xiao-Yang Liu, and Christina Dan
Wang. 2023b. Fingpt: Open-source financial large
language models. arXiv preprint arXiv:2306.06031.
Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng,
Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu
Jiang, Weiming Ren, Yuxuan Sun, Cong Wei, Botao
Yu, Ruibin Yuan, Renliang Sun, Ming Yin, Boyuan
Zheng, Zhenzhu Yang, Yibo Liu, Wenhao Huang,
[Huan Sun, Yu Su, and Wenhu Chen. 2023. Mmmu:](http://arxiv.org/abs/2311.16502)
[A massive multi-discipline multimodal understand-](http://arxiv.org/abs/2311.16502)
[ing and reasoning benchmark for expert agi.](http://arxiv.org/abs/2311.16502)
Yilun Zhao, Yunxiang Li, Chenying Li, and Rui Zhang.
[2022. MultiHiertt: Numerical reasoning over multi](https://doi.org/10.18653/v1/2022.acl-long.454)
[hierarchical tabular and textual data. In Proceedings](https://doi.org/10.18653/v1/2022.acl-long.454)
_of the 60th Annual Meeting of the Association for_
_Computational Linguistics (Volume 1: Long Papers),_
pages 6588–6600, Dublin, Ireland. Association for
Computational Linguistics.
Yilun Zhao, Yitao Long, Hongjun Liu, Linyong Nan,
Lyuhao Chen, Ryo Kamoi, Yixin Liu, Xiangru Tang,
[Rui Zhang, and Arman Cohan. 2023a. Docmath-eval:](http://arxiv.org/abs/2311.09805)
[Evaluating numerical reasoning capabilities of llms](http://arxiv.org/abs/2311.09805)
[in understanding long documents with tabular data.](http://arxiv.org/abs/2311.09805)
Yilun Zhao, Zhenting Qi, Linyong Nan, Boyu Mi, Yixin
Liu, Weijin Zou, Simeng Han, Ruizhe Chen, Xiangru
Tang, Yumo Xu, Dragomir Radev, and Arman Cohan.
[2023b. QTSumm: Query-focused summarization](https://doi.org/10.18653/v1/2023.emnlp-main.74)
[over tabular data. In Proceedings of the 2023 Con-](https://doi.org/10.18653/v1/2023.emnlp-main.74)
_ference on Empirical Methods in Natural Language_
_Processing, pages 1157–1172, Singapore. Associa-_
tion for Computational Linguistics.
Yilun Zhao, Haowei Zhang, Shengyun Si, Linyong Nan,
[Xiangru Tang, and Arman Cohan. 2023c. Investi-](https://doi.org/10.18653/v1/2023.emnlp-industry.17)
[gating table-to-text generation capabilities of large](https://doi.org/10.18653/v1/2023.emnlp-industry.17)
[language models in real-world information seeking](https://doi.org/10.18653/v1/2023.emnlp-industry.17)
[scenarios. In Proceedings of the 2023 Conference on](https://doi.org/10.18653/v1/2023.emnlp-industry.17)
_Empirical Methods in Natural Language Processing:_
_Industry Track, pages 160–175, Singapore. Associa-_
tion for Computational Linguistics.
Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao
Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, and Tat[Seng Chua. 2021. TAT-QA: A question answering](https://doi.org/10.18653/v1/2021.acl-long.254)
[benchmark on a hybrid of tabular and textual con-](https://doi.org/10.18653/v1/2021.acl-long.254)
[tent in finance. In Proceedings of the 59th Annual](https://doi.org/10.18653/v1/2021.acl-long.254)
_Meeting of the Association for Computational Lin-_
_guistics and the 11th International Joint Conference_
_on Natural Language Processing (Volume 1: Long_
_Papers), pages 3277–3287, Online. Association for_
Computational Linguistics.
**A** **KnowledgeFMATH Dataset**
**A.1** **Knowledge Bank Construction**
**Knowledge Collection** To construct a knowledge
bank, we first collect knowledge relevant to the finance domain from Wikipedia using “finance” and
Derivatives
31.0%
Accounting
20.8%
Issuance
3.7%
Market
10.9%
Portfolio
7.4%
Management
5.9%
Figure 4: Topic distribution of KnowledgeFMATH.
“economics” as key search terms. After collecting
the raw financial data, we adopt comprehensive
heuristics, embedding-based methods to remove
duplicates. This procedure ensures the uniqueness
of each knowledge term in our bank.
**Automatic Knowledge Formulation** To enhance the adaptability and usability of the knowledge bank, we incorporate a two-step automatic
knowledge formulation process, making each piece
of collected knowledge standardized and distilled
into a clear, concise format. The primary motivation for using automatic knowledge formulation
is cost efficiency and effectiveness. We have observed that GPT-* models are adept at handling
this straightforward task with minimal bias, as this
process does not involve the addition of extraneous
knowledge. We first prompt GPT-3.5 to reformulate the gathered information for each financial
term into a concise, paragraph-long textual definition. Since some financial terms come with mathematical definitions, we address the issue of varied
formula formats in the original sources (e.g., LaTeX and HTML). We instruct GPT-4 to transform
these formulas into a unified python program format. Figure 2 illustrates an example collected in
the knowledge bank.
**Knowledge Bank Update and Maintenance** After formulating knowledge using LLMs, during the
dataset annotation stage (Section 2.2), we dynamically update and maintain the constructed knowledge bank, incorporating new knowledge that, although not initially covered, is essential for answering the annotated questions. Additionally, we
remove any duplicate entries identified by the annotators. We eventually collect 1,760 pieces of
12852
-----
financial knowledge in the knowledge bank, with
52% of the terms including Python-formatted mathematical definitions.
**B** **Experiment Setup**
**B.1** **Implementation Details**
**LLM Experiment** The experiments for opensourced LLMs were conducted using vLLM framework (Kwon et al., 2023). For all the experiments,
we set temperature as 1.0, Top P as 1.0, and maximum output length as 512. For questions involving
tabular data, we converted the tables into Markdown format for model input.
**Final Answer Extraction** For LLM with CoT Chain-of-Thought Prompting Method:
prompting, we adopt the answer extraction pipeline
from Chen et al. (2023c) to identify the final an- [system prompt] You are a financial expert, you are supposed to to answer the given
swer from the model’s output. For LLM with PoT question. You need to output the answer in your final sentence like 'Therefore, the answer is ...'. The answer should be a numeric value.
prompting, we first extract the generated python [user input]
solution from the model’s output. If this pythonTODO Relevant Knowledge: {knowledge}
solution is executable, we execute it to obtain the
final answer. Once we obtain the final answer from
model’s output, we compare it with the groundtruth answer for accuracy measurement.
Chain-of-Thought Prompting Method:
[system prompt]
You are a financial expert, you are supposed to to answer the given
question. You need to output the answer in your final sentence like
'Therefore, the answer is ...'. The answer should be a numeric value.
[user input]
Relevant Knowledge:
{knowledge}
Question: {question}
Table: {table}
Let's think step by step to answer the question.
**Tabular Data Serialization** Following previous
work on table-relevant tasks (Chen, 2023; Zhao [system prompt] You are a financial expert, you are supposed to generate a Python
et al., 2023c), we use Markdown format to present program to answer the given question. The returned value of the program is supposed to be the answer.
tabular data in math reasoning problems. In our [user input]
preliminary study, we discovered that GPT-* and Relevant Knowledge:
llama-2 can effectively understand such table representations.
**C** **Experimental Results and Analysis** Please generate a Python program to answer the given question.```python
Program-of-Thought Prompting Method:
[system prompt]
You are a financial expert, you are supposed to generate a Python
program to answer the given question. The returned value of the
program is supposed to be the answer.
[user input]
Relevant Knowledge:
{knowledge}
Question: {question}
Table: {table}
Please generate a Python program to answer the given question.
```python
def solution( ):
Figure 5: Examples of knowledge-augmented CoT
Prompts we used in Section 5.
12853
-----
**Error Type (# Appear)** **Representative Question** **Representative Cot Result** **Ground-truth Python-format Solution** **Explanation**
Misinterpretation of
Required Knowledge
(42/100)
What is the immediate ZAR/HKD exchange rate
given by a broker?
Ratio Spot rate
—— ——
CNY/HKD 0.8422
CNY/ZAR 0.9149
CNY/SEK 1.0218
...The ZAR/CNY spot rate is 0.9149 and the
HKD/CNY spot rate is 0.8422. Let’s calculate: 0.9149 / 0.8422 = 1.0863(wrong formula)
Therefore, the answer is 1.0863.
The change in spread here is -100 basis points
(or -1.0 when we express this as a percent). We
can plug this into the formula along with the
given duration and convexity values: Change in
Return = -6.0 (-1.0) + 0.5 * 55.0 * (-1.0)**2.
Solving the above equation gives us an increase
in return of 3 percent. Therefore, the answer is
3.(wrong answer)
...Given the net income and the adjustments
we need to make, we can calculate FFO as
follows: FFO = Net Income + Depreciation
and Amortization + Other Expenses (Losses
on Sale of Properties) FFO = 35 (Net Income)
**(wrong number for Net Income) + 45 (Depreci-**
ation) + 22 (Losses on sale of properties) = 102
US$ million...We were given Price/FFO multiple as 13x and the weighted average shares as
92 million(wrong number for weighted aver**age shares). Using this we can calculate the**
share price: Share Price = 13 * (102 / 92) Share
Price = 13 * (1.1086) Share Price = 14.41 (approx.)(wrong answer)
Given the fixed interest rate is annualized at
3%, we can calculate the quarterly interest
payment as ( frac3%4 = 0.75% ) per quarter. On a nominal amount of $100 million,
the fixed payment per quarter would be (
100,000,000 times 0.75% = $750,000)...we calculate the present value (PV) of the fixed side
like this:( PV_fixed = $750,000 times (0.992556
+ 0.985222 + 0.975610) )( PV_fixed = $750,000
times 2.953388 = $2,215,041 )...Now let’s calculate the present value (PV) of the equity side:
if the equity index grows to (E), then ((E - 100) /
100 ) would be the growth rate of the equity side
of the swap.(not correct answer to the ques**tion**
def solution():
CNY_ZAR = 0.9149
CNY_HKD = 0.8422
ZAR_HKD = (1/CNY_ZAR) * CNY_HKD
return ZAR_HKD
def solution():
modified_duration = 6.0
delta_spread = -0.01
convexity = 55.0
return_impact = -(modified_duration
- delta_spread)
+ 0.5 * convexity
- (delta_spread**2)
return return_impact
def solution():
multiple = 13
net_income = 92
depreciation_and_amortization = 45
loss_from_property_disposal = 22
shares_outstanding = 118
FFO = net_income
+ depreciation_and_amortization
+ loss_from_property_disposal
FFO_per_share = FFO / shares_outstanding
stock_price = multiple * FFO_per_share
return stock_price
def solution():
fixed_rate = 0.03
nominal_amount = 100000000
current_spot_rates = [0.997506, 0.992556,
0.985222]
number_of_days = 90
denominator = 360
value_fixed_leg = fixed_rate
- (number_of_days / denominator)
- nominal_amount
- sum(current_spot_rates)
+ (nominal_amount
- current_spot_rates[-1])
equity_index_price = value_fixed_leg / nominal_amount
return equity_index_price
For the given problem,
the formula chosen for
the solution is incorrect and not the correct formula for the
corresponding financial
knowledge.
The calculation process
is correct, but the final
result given is incorrect.
The inclusion of the table led to a deviation
in the selection of the
formula, resulting in the
use of an incorrect formula during the calculation. Additionally, an incorrect number was chosen in one of the steps
of the calculation due to
an error in reading data
from the table, causing
the inserted number to
be inconsistent with the
requirements of the formula.
Failed to correctly information from the table, therefore misunderstanding the question
that needed to be answered, leading to the
selection of a formula
that calculates another
financial value, not the
parameter asked in the
question.
Error in Complex Com- The Novartis Company noted a notable rise in
putation (24/100) its profitability, which led to a material increase
in its credit score. Consequently, the market
demanded a 100 basis point tighter spread to
Gilts on Novartis’s 8-year bond. In case the
bond’s altered duration is 6.0 and its convexity
is 55.0, what would be the effect of this change
on the return?
Table Misunderstanding A specialist compiled the financial information
(12/100) for a Real Estate Investment Trust. Using the
income statement, what was the Real Estate Investment Trust’s share price at the end of 2017
according to the Price / FFO multiple?
Income Statement End of 2017
—— ——
Total rental income 450
Total property expenses 228
Net operating income 222
Other expenses 22
General expenses 28
Depreciation 45
Net interest expenses 35
Net income 92
Weighted average shares 118
Earnings per share 0.78
Price/FFO 13x
Question Misunder- Three months ago, a supervisor initiated a
standing (7/100) receive-fixed and pay-equity swap. The annualized fixed interest is 3% and the equity index
was at 100 at the time of the swap. The expiry of
the swap is one year with a quarterly reset, and
the nominal amount is valued at $100 million.
The present spot rates are as stated: At what
equity index level would this equity swap value
stand at zero?
Maturity Years Value Factor
———- ———0.25 0.997506
0.5 0.992556
0.75 0.985222
1 0.975610
Other case (15/100)
Table 6: Case study on GPT-3.5-turbo-1106’s failure cases.
12854
-----
**GPT-4**
**Output: Firstly, under the proportionate consolidation method, we consider the portion of assets, liabilities, revenue, and**
expenses we proportionately own in relation to the company we invested in. In the case of PZ, they increased their stake
in T-Internet from 15% to 50% in 2019. Therefore, for 2019, the proportionate consolidation of liabilities for company
PZ would be its own liabilities plus 50% of T-Internet’s liabilities. From the figures, we have PZ’s total liabilities for
2019 as 900 and T-Internet’s total liabilities for 2019 as 650. Calculating 50% of T-Internet’s liabilities: 0.50 * 650 = 325
**(right formula). To get the total liabilities under the proportionate consolidation method, we add PZ’s own liabilities to**
its 50% share of T-Internet’s liabilities.Doing this, we have: 900 (PZ’s liabilities) + 325 (50% of T-Internet’s liabilities) =
1225 (right formula). Therefore, the answer is 1225 (right answer).
**GPT-3.5**
**Output: To determine the total liabilities for company PZ under the proportionate consolidation method for 2019, we**
need to use the formula: Total liabilities = PZ liabilities + (T-internet liabilities * ownership ratio) In 2018, PZ had a
passive equity ownership interest of 15% in the T-internet company. So, for 2018, the ownership ratio is 15%. In 2019,
PZ increased its ownership to 50% as of 1st January 2019. So, for 2019, the ownership ratio is 50%. From the given
table, we can find the liabilities for PZ in 2018 and 2019: PZ liabilities in 2018 = 1200 PZ liabilities in 2019 = 900
We also have the liabilities for the T-internet company in 2018 and 2019: T-internet liabilities in 2018 = 600 T-internet
liabilities in 2019 = 650 Using the formula and the given values, we can calculate the total liabilities for company PZ
under the proportionate consolidation method for 2019: Total liabilities = 900 + (650 * 50%) (right formula) Therefore,
the answer is 1225 (right answer).
**Llama-2-70B**
**Output: 1. First, we need to calculate the total liabilities for company PZ in 2019 under the proportionate consolidation**
method. We can do this by using the formula: Total liabilities = PZ liabilities + (T-internet liabilities * ownership ratio)
We know that PZ liabilities in 2019 were 1,200,000 (from the table). We also know that T-internet liabilities in 2019
were 900,000 (from the table). The ownership ratio is 50%, which means that PZ owns half of T-internet’s shares. So,
we can plug these values into the formula: Total liabilities = 1,200,000 + (900,000 * 0.5) Total liabilities = 1,200,000 +
450,000 Total liabilities = 1,650,000 (wrong formula) 2. Next, we need to add the dividends paid by T-internet in 2019,
which were 230,000 (from the table). We add this to the total liabilities we calculated above: Total liabilities = 1,650,000
+ 230,000 Total liabilities = 1,880,000 3. Finally, we need to convert the answer to the proportionate consolidation
method, which means we need to subtract the portion of T-internet’s liabilities that are not owned by PZ. We can do this
by multiplying T-internet’s liabilities by the ownership ratio and subtracting the result from the total liabilities: Total
liabilities = 1,880,000 - (900,000 * 0.5) Total liabilities = 1,880,000 - 450,000 Total liabilities = 1,430,000 (wrong
**formula)**
**WizardMath**
**Output: Step 1: Determine the cost of acquiring the 35% stake in T-internet. We need to calculate the cost of purchasing**
the additional 35% (50% - 15%) equity stake in T-internet. We can do this by multiplying the total book value of
T-internet’s equity by the percentage of the stake acquired. Cost of acquiring 35% stake = (35/100) * Equity value
of T-internet = (35/100) * $450,000,000 =$157,500,000 (wrong formula) Step 2: Calculate the total consolidated
liabilities for 2019. Since there have been no transactions between the two companies, the consolidated liabilities for
both companies will be the sum of their individual liabilities. Total consolidated liabilities (2019) = PZ’s liabilities (2019)
+ T-internet’s liabilities (2019) Step 3: Find the total liabilities of PZ and T-internet for 2019. We can calculate this by
adding the liabilities of both companies as reported in their individual financial statements for 2019. PZ’s total liabilities
for 2019 amount to $900,000, and T-internet’s total liabilities for 2019 amount to $650,000. Total consolidated liabilities
(2019) = PZ’s liabilities + T-internet’s liabilities = $900,000 + $650,000 = $1,550,000 (wrong formula) Therefore, the
total liabilities for company PZ under the proportionate consolidation method for 2019 is $1,550,000. The answer is: the
total liabilities for company PZ under the proportionate consolidation method for 2019 is $1,550,000 (wrong answer).
**Mistral-7B**
**Output: To determine the total liabilities for company PZ under the proportionate consolidation method for 2019, we**
need to calculate 50% of the total liabilities of the T-internet company as of January 1, 2019. We can find this information
in the table provided. Therefore, the answer is: 450 (which is 50% of the total liabilities of the T-internet company, as of
January 1, 2019) (wrong answer).
Table 7: Case study of different LLMs with CoT prompting methods.
12855
-----
**Question:**
"Assuming a 30% marginal tax rate and an additional 5% risk premium for shares as opposed to bonds, what is the
cost of equity using the bond-yield-plus-risk-premium method for a 20-year, $1,000 non-callable bond with 8% annual
coupons that is presently valued at $1,105.94?"
**Ground Truth Knowledge:**
**Bond-yield-plus-risk-premium method**
**Definition:**
"The bond-yield-plus-risk-premium method is a method used to estimate the cost of equity. It involves taking the yield
on a company’s debt, which represents the risk investors are willing to take for lending money to the company, and
adding a risk premium that represents the extra risk of investing in the company’s equity. This method is based on the
assumption that equity holders require a higher return than bondholders due to the higher risk associated with equity
investments."
**Formula:**
def Bond_yield_plus_risk_premium_method(risk_premium, yield_to_maturity):
# Define constants
risk_premium = risk_premium
yield_to_maturity = yield_to_maturity
# Calculate cost of equity
cost_of_equity = yield_to_maturity + risk_premium
return cost_of_equity
**Yield to maturity:**
**Definition:**
"Yield to maturity (YTM) is the total return anticipated on a bond if it is held until it matures. YTM is expressed
as an annual percentage rate (APR). It takes into account both the interest payments you receive each year, usually
semi-annually, and any capital gain you may receive if you hold the bond until maturity and it is redeemed at a price
above or below its purchase price."
**Formula:**
def Yield_to_maturity(FV, P, t, C):
”’
FV: Face value of the bond
P: Price of the bond
t: Time to maturity
C: Coupon payment
”’
import sympy as sp
newline r = sp.symbols(’r’)
# Formula for bond price
price_eq = C*(1 - (1 + r)**-t)/r + FV/(1 + r)**t - P
# Equation solution
yield_to_maturity = sp.solve(price_eq, r)[1]
return yield_to_maturity
**Risk premium:**
**Definition: "A risk premium refers to the expected additional return on an investment that an investor requires as**
compensation for investing in a risky asset, over and above the risk-free rate. It is essentially the reward for taking on a
higher level of risk."
**Formula:**
def Risk_premium(bond_yield, risk_premium):
# Calculate cost of equity
cost_of_equity = bond_yield + risk_premium
return cost_of_equity
**Python Solution:**
def solution():
import numpy_financial as npf
# Given values
FV = -1000 # Future Value
PV = 1105.94 # Present Value
N = 20 # Number of periods
PMT = -80 # Payment per period
# Calculating the interest rate
i = npf.rate(N, PMT, PV, FV)
# Converting to percentage
i_percent = i * 100
# Define constants
risk_premium = 5
yield_to_maturity = i_percent
# Calculate cost of equity
cost_of_equity = yield_to_maturity + risk_premium
return cost_of_equity
Table 8: Original question used in the case study of knowledge integration 9. We also present its ground-truth
knowledge definitions and Python-format solution.
12856
-----
**Wo. knowledge augmentation**
**Output: "First, we need to calculate the yield to maturity (YTM) of the bond. The YTM is the rate of return anticipated**
on a bond if it is held until maturity and all payments are made as scheduled. To calculate the YTM, we need to use
the present value of the bond, the annual coupons, and the number of years to maturity. Present value of the bond =
$1,105.94 Annual coupons = 8% of $1,000 = $80 Number of years to maturity = 20 Using these values, we can calculate
the YTM using financial formulas or by using an online calculator. Assuming the YTM is calculated to be 6%, we can
now calculate the cost of equity using the bond-yield-plus-risk-premium method. Cost of equity = YTM + risk premium
YTM = 6% (Wrong calculated result for yield to maturity) Risk premium for shares = 5% Cost of equity = 6% + 5%
= 11% Therefore, the answer is 11%.”
**BM25 as retriever**
**Output: "To calculate the cost of equity using the bond-yield-plus-risk-premium method, we need to add the yield**
on the bond (representing the risk investors are willing to take) to the risk premium for shares. First, we need to
calculate the yield on the bond. The bond has an 8% annual coupon rate and is currently valued at $1,105.94. Using
the coupon payments formula, we can calculate the total coupon payments over the lifespan of the bond:Coupon
payments = Coupon_Payments(n=20, pmt=1000*0.08, i_y=(1105.94/1000)-1) Next, we need to calculate the yield
to maturity on the bond. The yield to maturity can be obtained using the spot yield formula:Yield to maturity =
Spot_yield(current_price=1105.94, previous_price=1000)Now we can calculate the yield on the bond by dividing the
total coupon payments by the present value of the bond:Yield on bond = Coupon payments / Present value of bond.
Finally, we can calculate the cost of equity by adding the yield on the bond to the risk premium for shares:Cost of equity
= Yield on bond + risk premium. Therefore, the answer is the cost of equity calculated using the bond-yield-plus-riskpremium method.” (Not give the direct answer to the question)
**OpenAI Ada Embedding as retriever**
**Output: "Step 1: Calculate the yield to maturity of the bondTo calculate the yield to maturity, we need to use the present**
value formula for a bond. Given that the present value is $1,105.94, the annual coupon is 8%, and the bond has 20 years
until maturity, we can calculate the yield to maturity using financial software or a financial calculator. Let’s assume the
yield to maturity is 7%.Step 2: Calculate the risk premiumGiven that there is an additional 5% risk premium for shares
as opposed to bonds, the risk premium is 5%.Step 3: Use the bond-yield-plus-risk-premium method formulaUsing the
formula of the bond-yield-plus-risk-premium method, we can calculate the cost of equity as follows:cost_of_equity =
yield_to_maturity + risk_premiumcost_of_equity = 7% + 5% cost_of_equity = 12% Therefore, the cost of equity using
the bond-yield-plus-risk-premium method is 12%.” (Right answer)
**LLM as Knowledge Base**
**Output: "To determine the cost of equity using the bond-yield-plus-risk-premium method, we need to find the yield**
on comparable bonds and add the risk premium. First, let’s calculate the yield on the bond. The bond has a face value
of $1,000 and is currently valued at $1,105.94. The coupon rate is 8% and the bond has a 20-year maturity. Using
the present value formula, we can calculate the yield:$1,105.94 = ($80 / (1 + yield)1) + ($80 / (1 + yield)[ˆ] 2) + ... +[ˆ]
($80 + $1,000 / (1 + yield)20)[ˆ] Simplifying this equation, we can solve for the yield using a financial calculator or
software:Yield = 6.5%(Wrong calculated result for yield to maturity) Now, let’s calculate the cost of equity using
the bond-yield-plus-risk-premium method. We add the risk premium of 5% to the yield we calculated:Cost of Equity =
Yield + Risk Premium = 6.5% + 5% = 11.5% Therefore, the answer is 11.5%.”
Table 9: Case study of knowledge integration.
12857
-----
60
40
WizardCoder phi-1.5 GPT-4-1106
GPT-4-0613
Llama 2 (7B)
WizardMath
Baichuan 2
Llama 1 Falcon
LLaMA
Yi Codellama (7B)
GPT-3.5-0613
Mistral
Codellama (34B)
Lemur Gemini Deepseek
Mixtral
WizardLM
Llama 2 (13B)
Llama 2 (70B)
phi-2 Vicuna
MPT
20 40 60 80 100 120
20
Execution Rate
Figure 6: Relationship between execution rate and accuracy across different models. The degraded performance
when applying PoT prompting is attributable to the low execution rate.
12858
-----
| [
"Yilun, Zhao",
"Vivek, Srikumar",
"Hongjun, Liu",
"Rui, Zhang",
"Yitao, Long",
"Chen, Zhao",
"Arman, Cohan",
"Lun-Wei, Ku",
"Andre, Martins"
] | 2024-08-01T00:00:00 | ACL 2024 Long Papers | true | 0 | 0 | null | https://aclanthology.org/2024.acl-long.693 | null | https://www.semanticscholar.org/paper/3a9a77df79772f1a0566fbf3809675c72bc47aaa |
LISA: Language models of ISAbelle proofs | We introduce an environment that allows interaction with an Isabelle server in an incremental manner. With this environment, we mined the Isabelle standard library and the Archive of Formal Proofs (AFP) and extracted 183K lemmas and theorems. We built language models on this large corpus and showed their effectiveness in proving AFP theorems. | null | # LISA: Language models of ISAbelle proofs
Albert Qiaochu Jiang
University of Oxford
[email protected]
Jesse Michael Han
OpenAI
[email protected]
**ABSTRACT**
We introduce an environment that allows interaction with an Isabelle server in an incremental manner. With this environment,
we mined the Isabelle standard library and the Archive of Formal
Proofs (AFP) and extracted 183K lemmas and theorems. We built
language models on this large corpus and showed their effectiveness
in proving AFP theorems.
**1** **INTRODUCTION**
There has been a surge of interests recently in applying machine
learning models for theorem provers. Examples include [3, 6–8, 12,
14], all of which demonstrate great promises of machine learning
models in proving new theorems. In this work, we propose to mine
the libraries used by the Interactive Theorem Prover (ITP) Isabelle,
namely, the Isabelle standard library and the Archive of Formal
Proofs. The libraries have been mined previously for proof method
recommendations based on hand-crafted features [9, 10].
**Contributions**
Wenda Li
University of Cambridge
[email protected]
Yuhuai Wu
University of Toronto
[email protected]
```
theorem I:
```
**theorem**
```
"A ⟶ A"
```
proof (prove) proof (prove) proof (prove)
**proof** theorem I:
goal (1 subgoal): goal (1 subgoal): goal:
**state** ?A ⟶ ?A
1. A ⟶ A 1. A ⟹ A No subgoals!
**proof step** `apply` `apply` `done`
```
(rule impI) assumption
```
**Figure 1: An illustration of the relationship between theo-**
**rems, proof states, and proof steps.**
"done" consist of "apply (rule impI)" and "apply assumption". Because Isabelle provides a Partially Observable Markov Decision
Process (POMDP) with the proof states being the observation, conditioning on the previous steps of the proof helps the agent to
reconstruct the state of the proof.
The unique feature that Isabelle enables in our system is that we
can execute proofs token by token. The benefits brought by this
feature include that we can make copies of a certain proof state and
try multiple different methods very conveniently. This also allows
us to change the order in which a proof is written, which makes
proof sketching possible: we can potentially first sketch a proof
skeleton containing the keyword “sorry”, which assumes that the
given statement before it can be proven. Then, by saving all the
states before the “sorry” command and attempting them after the
skeleton has been completed, we allow a machine to write proofs
in the same order a human sometimes would.
With this environment, we mined a total of 183K theorems from the
Isabelle standard library [11] and the Archive of Formal Proofs (AFP) [1].
We then extracted a total of 2.16 million pairs of inputs and proof
steps. This forms a dataset useful for theorem proving: if an agent
can produce the correct proof step when prompted with an arbitrary proof state, it will be able to prove the theorem. We used
a 95%/1%/4% random split to divide the proof corpus into the
train/valid/test sets. We show some dataset statistics in Table 1.
**3** **EXPERIMENTS**
**3.1** **Setup**
We started by taking a language model pre-trained on the WebMath
dataset for 72B tokens, similar to the GPT-f models applied to
Metamath [12] and Lean [5]. We then fine-tuned the language
- We built an environment where agents can interact with the
Isabelle theorem prover in an incremental manner. This enables
learning-based agents to conjecture in the Isar language.
- We mined the Archive of Formal Proofs and the standard library
of Isabelle. We extracted 183K theorems and 2.16M proof steps.
This is one of the largest proof corpora for interactive theorem
provers.
- We trained large language models on this corpus and obtained
the first results of using such models to prove theorems in this
new dataset.
**2** **ENVIRONMENT AND DATASET**
We created an environment where theorem proving is modelled as
a sequential decision process. Initially, the environment will load a
selected theorem and we have access to the top level state. At each
time-step, the agent produces a proof step of arbitrary length. The
environment then applies the proof step to the top level state and
iterates the process if the theorem has not been proved. We show
the proof process of a simple theorem in Figure 1. The theorem
declaration initialises the first proof state. The proof states in the
middle row represent the stage of the proof progress and the proof
steps in the bottom row are what the agent should produce. We
support three different kinds of inputs: with proof states only, with
previous steps only, and with both proof states and previous steps.
For example, the previous steps when the agent should produce
-----
**Table 1: Sequence length in characters**
|Col1|Source length Target length|
|---|---|
||min max mean median min max mean median|
|With proof states only|7 227831 379.6 187.0 17 138581 3223.6 980.0 2 6522 34.2 19.0 60 229885 3612.2 1328.2|
|With previous steps only||
|With both proof states and previous steps||
models only on the AFP part of the dataset, due to time constraints.
The architecture we chose was a decoder-only transformer similar
to GPT-3 [4]. All models have 163M non-embedding parameters. We
use the same BPE encoding as GPT-3 [4]. For fine-tuning, we used
a batch size of 2048, a learning rate of 0.005, a 100-step ramp-up,
and decayed the learning rate according to a cosine schedule over
64B tokens; we early-stopped according to validation perplexity
after 35B elapsed tokens.
**3.2** **Evaluation**
We used a best-first search strategy at evaluation, similar to that
of [5, 12]. We initialise and maintain a priority queue of top level
states, sorted by their cumulative log probability. The cumulative
log probability of a top level state is the sum of log probabilities of
all the previous proof steps the agent takes to arrive at the current
state. Initially, the priority queue contains only the top level state
right after the declaration of the theorem, with a cumulative log
probability of 0. At each search step, we pop the head of the priority
queue to retrieve the top level state with the highest probability.
We then query the language model for a set of 16 proof step candidates, with a temperature of 1.0. For each of the candidates, we
duplicate the top level state, apply the candidate to it, and calculate
the updated cumulative log probability. If the application of the
candidate is successful, we add the resulted top level state to the
queue. The queue has a length of 16 (i.e. it only maintains 16 entries
with the highest cumulative log probabilities). If one of the resulted
top level state shows that the proof is complete, we consider the
proof attempt successful. If the queue is empty, or a timeout of 120s
is spent on one attempt, or the number of queries exceeds 100, we
consider the attempt a failure.
**3.3** **Results**
We evaluated our language model with the best-first search strategy on a test set of 4000 theorems. 33.2% of the theorems were
successfully proved. We analysed the failure causes of the rest of
the theorems. 59.1% of the attempts failed because of the time limitation, 0.2% of the attempts failed because of the query number
limitation and 7.6% of the attempts failed because the priority queue
was empty at some point in the proving process. We show two successful proofs generated by our language model, and contrast them
with the proofs in the AFP.
Theorem 1 is a lemma in Utility.thy from the AFP entry Executable
_Matrix Operations on Matrices of Arbitrary Dimensions [13]. Our_
proof is a one-liner and much simpler than the original proof. We
checked the validity of some generated proofs manually by writing
them in Isabelle with the same dependency as the original proofs.
**Theorem 1 lemma foldr_foldr_concat:**
"foldr (foldr f) m a = foldr f (concat m) a"
**Original proof**
proof (induct m arbitrary: a)
case Nil show ?case by simp
next
case (Cons v m a)
show ?case
unfolding concat.simps foldr_Cons o_def Cons
unfolding foldr_append by simp
qed
**Our proof**
by (induct m arbitrary: a) simp_all
Theorem 2 is a lemma in Word_Lemmas.thy from the AFP entry
_Finite Machine Word Library [2]. Although our proof is longer than_
the original, it utilises a different set of lemmas to finish the proof,
and is written in a very different style compared to the original. This
demonstrates that our proof search agent with language models is
capable of discovering novel and interesting proofs.
**Theorem 2 lemma scast_ucast_1:**
"[[ is_down (ucast :: ’a word ⇒ ’b word);
is_down (ucast :: ’b word ⇒ ’c word) ]] =⇒
(scast (ucast (a :: ’a::len word) :: ’b::len word) :: ’c::len word) =
ucast a"
**Original proof**
by (metis down_cast_same ucast_eq ucast_down_wi)
**Our proof**
using unat_ucast
apply apply (simp add:ucast_def unat_ucast)+
apply (subst down_cast_same[symmetric])
apply (simp add: is_down)+
apply (rule word_eqI)
apply (simp add: nth_ucast)
apply safe
apply simp
done
As a baseline, we also considered using greedy search. This is equivalent to best-first search with the queue length = 1. This agent, as a
consequence, only proved 28.3% of the theorems.
**4** **CONCLUSIONS AND FUTURE WORK**
We extracted a large corpus from Isabelle proofs and examined
the performance of language models in proving theorems on the
dataset. We showed that a non-trivial proportion of problems on
AFP can be solved by the application of a language model and a
-----
best-first search. The successful proofs demonstrated the language
model’s ability to compose succinct, or novel proofs.
The proof assistant Isabelle provides a very convenient command
that allows users to conjecture ("have"). With our environment that
interacts with the proof assistant in a very flexible manner, and
our rich dataset, we can set out to further study how machines
can learn to conjecture, and to reason about the proof construction
more generally. Specifically, by learning from human conjectures,
computer-assisted theorem provers are endowed with the ability
to sketch proofs. This can be organically integrated with symbolic
methods such as “nitpick” and “sledgehammer”.
**REFERENCES**
[[1] AFP 2021. Archive of Formal Proofs. Retrieved Feb 11, 2021 from https://www.isa-](https://www.isa-afp.org/index.html)
[afp.org/index.html](https://www.isa-afp.org/index.html)
[2] Joel Beeren, Matthew Fernandez, Xin Gao, Gerwin Klein, Rafal Kolanski, Japheth
Lim, Corey Lewis, Daniel Matichuk, and Thomas Sewell. 2016. Finite Machine
[Word Library. Archive of Formal Proofs (June 2016). https://isa-afp.org/entries/](https://isa-afp.org/entries/Word_Lib.html)
[Word_Lib.html, Formal proof development.](https://isa-afp.org/entries/Word_Lib.html)
[3] James P. Bridge, Sean B. Holden, and Lawrence C. Paulson. 2014. Machine
Learning for First-Order Theorem Proving - Learning to Select a Good Heuristic. J.
_[Autom. Reason. 53, 2 (2014), 141–172. https://doi.org/10.1007/s10817-014-9301-5](https://doi.org/10.1007/s10817-014-9301-5)_
[4] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan,
Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan,
Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter,
Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin
Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya
Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners.
In Advances in Neural Information Processing Systems 33: Annual Conference on
_Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020,_
_virtual, Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina_
[Balcan, and Hsuan-Tien Lin (Eds.). https://proceedings.neurips.cc/paper/2020/](https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html)
[hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html](https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html)
[5] Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward W. Ayers, and Stanislas Polu.
2021. Proof Artifact Co-training for Theorem Proving with Language Models.
_[CoRR abs/2102.06203 (2021). arXiv:2102.06203 https://arxiv.org/abs/2102.06203](https://arxiv.org/abs/2102.06203)_
[6] Geoffrey Irving, Christian Szegedy, Alexander A. Alemi, Niklas Eén, François
Chollet, and Josef Urban. 2016. DeepMath - Deep Sequence Models for Premise
Selection. In Advances in Neural Information Processing Systems 29: Annual Confer_ence on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona,_
_Spain, Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon,_
[and Roman Garnett (Eds.). 2235–2243. https://proceedings.neurips.cc/paper/](https://proceedings.neurips.cc/paper/2016/hash/f197002b9a0853eca5e046d9ca4663d5-Abstract.html)
[2016/hash/f197002b9a0853eca5e046d9ca4663d5-Abstract.html](https://proceedings.neurips.cc/paper/2016/hash/f197002b9a0853eca5e046d9ca4663d5-Abstract.html)
[7] Dennis Lee, Christian Szegedy, Markus N. Rabe, Sarah M. Loos, and Kshitij Bansal.
2020. Mathematical Reasoning in Latent Space. In 8th International Conference
_on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020._
[OpenReview.net. https://openreview.net/forum?id=Ske31kBtPr](https://openreview.net/forum?id=Ske31kBtPr)
[8] Wenda Li, Lei Yu, Yuhuai Wu, and Lawrence Paulson. 2021. IsarStep: a Benchmark
for High-level Mathematical Reasoning. (2021).
[9] Yutaka Nagashima. 2020. Simple Dataset for Proof Method Recommendation in
Isabelle/HOL. In International Conference on Intelligent Computer Mathematics.
[10] Yutaka Nagashima and Yilun He. 2018. PaMpeR: Proof Method Recommendation
[System for Isabelle/HOL. CoRR (2018). http://arxiv.org/abs/1806.07239](http://arxiv.org/abs/1806.07239)
[11] Tobias Nipkow, Lawrence C Paulson, and Markus Wenzel. 2002. Isabelle/HOL:
_a proof assistant for higher-order logic. Vol. 2283. Springer Science & Business_
Media.
[12] Stanislas Polu and Ilya Sutskever. 2020. Generative Language Modeling for
[Automated Theorem Proving. CoRR abs/2009.03393 (2020). arXiv:2009.03393](https://arxiv.org/abs/2009.03393)
[https://arxiv.org/abs/2009.03393](https://arxiv.org/abs/2009.03393)
[13] Christian Sternagel and René Thiemann. 2010. Executable Matrix Operations
on Matrices of Arbitrary Dimensions. Archive of Formal Proofs (June 2010).
[https://isa-afp.org/entries/Matrix.html, Formal proof development.](https://isa-afp.org/entries/Matrix.html)
[14] Josef Urban, Jirí Vyskocil, and Petr Stepánek. 2011. MaLeCoP Machine Learning
Connection Prover. In Automated Reasoning with Analytic Tableaux and Related
_Methods - 20th International Conference, TABLEAUX 2011, Bern, Switzerland, July_
_4-8, 2011. Proceedings (Lecture Notes in Computer Science), Kai Brünnler and_
[George Metcalfe (Eds.), Vol. 6793. Springer, 263–277. https://doi.org/10.1007/978-](https://doi.org/10.1007/978-3-642-22119-4_21)
[3-642-22119-4_21](https://doi.org/10.1007/978-3-642-22119-4_21)
-----
| [
"Jesse Michael, Han",
"Albert Q., Jiang",
"Wenda, Li",
"Yuhuai, Wu"
] | 2021-01-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
LLM vs ITP | Wiedijk's list of 100 theorems provides a benchmark for comparing interactive theorem provers (ITPs) and their mathematics libraries. As shown by the GHOSTS dataset, large language models (LLMs) can also serve as searchable libraries of mathematics, given their capacity to ingest vast amounts of mathematical literature during their pre-training or finetuning phases. ITP libraries are the only other repositories of comparable size and range of mathematical intricacy. This paper presents the first comparison between these two unique mathematical resources, centered on Wiedijk's list. Beyond the intrinsic interest of such a comparison, we discuss the importance of analyzing whether knowledge contained in LLMs (represented by GPT-4 and Claude 2) matches that encoded in ITPs. This analysis contributes thus further to advance the intersection between LLM and ITP technology (examples being tasks like autoformalization, LLM-guided proof generation, or proof completion) by ensuring LLMs possess, beyond ITP code generation capabilities, sufficient mathematical knowledge to carry out the desired formalization. The dataset with our findings, called "LLMKnow", is made available to the public. | null | # LLM vs ITP
**S. Frieder[∗]**
Department of Computer Science
University of Oxford
```
[email protected]
```
**M. Trimmel**
RISE Research Institutes of Sweden
```
[email protected]
```
**R. Alawadhi** **K. Gy**
```
[email protected] [email protected]
```
**Abstract**
Wiedijk’s list of 100 theorems provides a benchmark for comparing interactive
theorem provers (ITPs) and their mathematics libraries. As shown by the GHOSTS
dataset, large language models (LLMs) can also serve as searchable libraries of
mathematics, given their capacity to ingest vast amounts of mathematical literature
during their pre-training or finetuning phases. ITP libraries are the only other repositories of comparable size and range of mathematical intricacy. This paper presents
the first comparison between these two unique mathematical resources, centered
on Wiedijk’s list. Beyond the intrinsic interest of such a comparison, we discuss
the importance of analyzing whether knowledge contained in LLMs (represented
by GPT-4 and Claude 2) matches that encoded in ITPs. This analysis contributes
thus further to advance the intersection between LLM and ITP technology (examples being tasks like autoformalization, LLM-guided proof generation, or proof
completion) by ensuring LLMs possess, beyond ITP code generation capabilities,
sufficient mathematical knowledge to carry out the desired formalization. The
dataset with our findings, called “LLMKNOW”, is made available to the public.
```
https://llmknow.friederrr.org
```
**1** **Introduction**
Interactive theorem provers (ITPs), such as Lean (introduced in [16], currently at version 4 [30]) or
Isabelle (introduced originally in [32, 33] subsequently updated and expanded [34, 20]), which are
some of the most well-known examples of ITPs, have large libraries of formal proofs of mathematical
theorems associated to them. E.g., in the case of Isabelle, these are core libraries included in the main
distribution as well as external proof developments contained in the Archive of Formal Proof[2]; in
the case of Lean’s latest version, the Lean Mathematical Library [28], mathlib4[3], contains all the
pertinent mathematics.
Progress in ITPs has been steady, and as the burden on the person converting a natural-language proof
to a proof in the formal system of an ITP was alleviated in time [20], the libraries of formally verified
proofs grew to the point where a sufficient amount of mathematics is encoded that allows ITPs to be
used as a teaching support for undergraduate curricula (and beyond [7]). For example, undergraduate
mathematical textbooks exist where taught mathematics is formalized as it is developed [27].
In some instances, formalization has advanced to cover research-level mathematics. Some of the
notable examples are Szemerédi’s Regularity Lemma in extremal graph theory [17], hyper-dual
_∗Corresponding author. The other authors are listed in random order._
[2https://www.isa-afp.org/](https://www.isa-afp.org/)
[3https://github.com/leanprover-community/mathlib4](https://github.com/leanprover-community/mathlib4)
37th Conference on Neural Information Processing Systems (NeurIPS 2023) Workshop on MATH-AI.
-----
numbers in second-order automatic differentiation [37], schemes in algebraic geometry [9, 5] or the
Liquid Tensor Experiment [36, 10].
With the introduction of the first version of ChatGPT in November, which was widely adopted,
investigating LLMs mathematical abilities has received renewed impetus, e.g. [6, 19, 26, 3]. All of
these works have focused on the abilities of ChatGPT (also known as “GPT-3.5”) or GPT-4, which
were shown to trump all other models at the time.
In [19], it has been noted that LLMs are able to function well as mathematical search engines.
This begs the question: Where do they search? LLMs obtain their knowledge during pre-training
and finetuning - and it is plausible that LLMs have been trained on the entirety of public sources
[of mathematical data, such as the arxiv.org preprint repository, or various digitized books. For](https://arxiv.org)
the most prominent LLMs, e.g. GPT-4 [31], the precise collections of training data has not been
revealed. LLMs, therefore, can be assumed to have encoded most of the digitally accessible books on
mathematics in their architectures and weights.
Whereas for ITPs the mathematical knowledge they encode is completely transparent (even if it may
be hard to parse their formal language), for LLMs it is unknown what and how much mathematics
they have seen, which raises the question of whether the (implicit) mathematical knowledge base of
an LLM can (at least) match the (explicit) knowledge base of ITP.
Aside from an academic interest in comparing the knowledge bases, there is a second reason, more
pertinent reason for such comparison, coming from the growing trend of merging LLM technology
with ITP technology: LLMs have been used to perform autoformalization (converting a naturallanguage proof to a formal proof that an ITP can ingest) as well as proof completion and generation
(suggesting the next step in a partially elaborated proof/generation an entire formal proof from a
formal statement). It is unreasonable to expect LLMs to carry out these tasks successfully if an LLM
does not have a sufficiently advanced level of mathematics encoded to comprehend its formalization
task. E.g., if an LLM does not have any understanding of the notions such as “compact” and “closed”,
it is unreasonable to assume it will be able to autoformalize the natural-language statement: “A subset
_of a compact metric space is compact if and only if it is closed.”. We elaborate further on these_
matters in Appendix D.
The fairest comparison of the knowledge contained in LLMs vs ITPs would be a brute-force comparison, where their entire knowledge is compared. Because evaluating LLM output that consists of
university-level mathematics is difficult, it cannot be outsourced to some of the many commercial
crowdsourcing services. Brute-force comparison, by comparing long lists of statements and proofs
sourced from books, is thus not possible. To keep the evaluation effort reasonable, we, therefore, focus
on a proxy benchmark for assessing knowledge: Freek Wiedijk’s list, Formalizing 100 Theorems[4].
We motivate this choice and elaborate on how we test for knowledge inclusion in Section 3 and
Appendix B, where we present a more detailed motivation for the use of Wiedijk’s list.
In summary, our contributions are:
- A first analysis of how an LLM approach that digitizes and distills knowledge from many
textbooks in an opaque manner compares to the largest formal libraries of mathematics;
- The first evaluation of LLMs on university-level mathematics using best-practice prompt
engineering;
- A dataset that accompanies this LLM evaluation, where we collect information on how well
the LLM did on each ITP-related item.
**2** **Related Work**
LLMs are typically evaluated on high school-level or lower undergraduate-level mathematics [14,
21, 42]. Few articles evaluate LLMs on graduate-level mathematics [19]. Recently, there have been
attempts to integrate LLMs with ITPs by designing datasets that contain formal and natural-language
proofs side-by-side [1], respectively, to use LLMs to complete formal proofs [18]. Turning ITP
systems into “dojos”, e.g. [46], that allow for convenient extraction of proof traces (as well as other
metrics that can help guide proof search) will further accelerate the convergence of LLMs and ITPs
to automate mathematics.
[4https://www.cs.ru.nl/%7Efreek/100](https://www.cs.ru.nl/%7Efreek/100)
-----
**3** **Methodology**
**3.1** **ITP Proof Sources**
The best way to compare the knowledge encoded in an LLM to the knowledge encoded in libraries
accompanying an ITP would be, as mentioned, a complete comparison: For each item in an ITP
library, a series of prompts are submitted to the LLMs of choice, testing whether the LLM is
knowledgable about that item.
Human evaluation of advanced mathematics that approaches research level is expensive. We, therefore,
focus on evaluating a smaller dataset of ITP library items to lower the evaluation effort: Our approach
centers on Freek Wiedijk’s list, Formalizing 100 Theorems[5], which is a popular progress tracker for
formalization progress. This benchmark contains both well-known and difficult theorems, making
their formalization highly non-trivial: at the time of writing, only 88 theorems from the list have been
proven with the best-ranked ITP, Isabelle. (However, all ITPs taken together have proved 99 out of
the 100 formalized theorems. The only exception is Fermat’s Last Theorem[6].) Some of the theorems
from this list, even though they are of low mathematical difficulty and can be found in undergraduate
textbooks, have only recently been formalized, which highlights the importance of this benchmark.
Examples of such easy theorems formalized late are Ptolemy’s theorem or Stirling’s formula[7]. From
this set of theorems, we select 50 theorems randomly (see Figure 2 for a list of the precise theorems
that were selected).
**3.2** **LLM Evaluation Protocol**
The evaluation of the output of the language model is performed by the authors, who are all mathematicians. 10% of data points were randomly checked and verified in order to make sure that the
evaluators’ grade agrees with the verifiers’ grade. We use GPT-4, via the API with temperature 0.7,
as the LLM of choice since it has the best-reported performance among LLMs [19, 6], and Claude 2
(via the web[8]) as a fallback.
For each theorem, GPT-4 was asked, using standardized prompt templates, to complete the following
three tasks:
1. Statement. State the full theorem (given only the name of the theorem from Wiedijk’s list).
2. Items. Explain and define all constituent items from the theorems, i.e., all definitions that
go beyond foundational mathematical objects like numbers or functions (for example, the
concept of a derivative), which are non-trivial to formulate in ITP. We have noted a priori in
our dataset what we understand to be non-foundational items that we expect to be given.
3. Proof. Prove the theorem. Then, reflect back if the proof was correct; otherwise, make
corrections.
These are wrapped in the following prompt engineering approaches, which comprises some of best,
recommended practices, as recommended in OpenAI’s cookbook[9], OpenAI’s GPT practices[10], see
also [24], that we detail below in the complete pipeline:
A. Impersonate. At initialization time, we ask it to impersonate a professional mathematician.
B. Proceed step by step. We instruct the model twice to proceed step by step: The first time,
when initializing it in the API (only available GPT-4), it should proceed in a stepwise manner
throughout its entire interaction. We reinforce this by asking it for a second time when
[5https://www.cs.ru.nl/%7Efreek/100](https://www.cs.ru.nl/%7Efreek/100)
6Which is in the process of being formalized and was arguably added to this list merely as a joke [8].
7In case of Stirling’s formula, in descending order of recency, a formalization was added to Lean’s mathlib4
in 2023, to MetaMath in 2017, to Isabelle’s Archive of Formal Proofs in 2016, and to the Coqtail library
associated to Coq and to HOL Light in 2010, respectively – according to GitHub commit history of the
relevant formalization file (except MetaMath, where the formalization of Stirling’s formula is located at
```
https://us.metamath.org/mpeuni/stirling.html.
```
8At the time of writing an API for Claude 2 is not publicly available.
[9https://github.com/openai/openai-cookbook](https://github.com/openai/openai-cookbook)
[10https://platform.openai.com/docs/guides/gpt-best-practices](https://platform.openai.com/docs/guides/gpt-best-practices)
-----
we present the proof to it. We further reinforce it by asking it to present a skeleton of the
theorem before producing the actual theorem.
C. Reflect on the output. After the LLM was asked to carry out the proof, we asked it to reflect
on it in order to allow it to make any amendments. Asking the model to review its output
has been shown to be a strategy that, in some cases, leads to correct answers on a second
try [22]. Our rating protocol was such that a second, corrected attempt was rated as a 1.
We refer to Appendix C for a diagram illustrating the full pipeline outlined above, as well as the full
prompts submitted to the LLM(s).
For each of these three sections (statement, items, proof[11]), we rate each output on a binary 0-1 scale,
representing incorrect-correct. The reason for this choice of course rating is that either a piece of
knowledge is in the LLM’s mathematical library or it is not. Therefore, a fine-grained scale, such
as that introduced in [19, 43] to rate how well an LLM responds to a prompt, is not required. We
have evaluated 50 randomly selected theorems from the list of 100 theorems by F. Wiedijk in this
way. For each of the 50 theorems, four prompts were provided to the LLM, with additional follow-up
questions. In total, we, therefore, have rated more than 200 outputs of GPT. Additionally, if the output
was not convincing, we followed up with questions as described in the next paragraph. Unless the
response was perfect, in order to make sure authors engaged with the task, a short justification was
asked from raters for the reason why a 0 or 1 was given.
Suppose the output is rated as 0 on one of the three sections (statement, item, proof[11]). In that case,
our policy is to ask the prompt corresponding to that section again at most three times until it gets
it right: We ask two times using ChatGPT, increasing temperature by 0.1 each time, preserving the
previous chat interaction so that the LLM can make use of all the previous interactions. We customize
these subsequent prompts to the error that was made and provide individualized feedback to assess
whether we can steer the LLM in the right direction. On the last attempt, if no sufficiently good
answer was given so far, our policy is to use Claude 2 to ask for a new generation of all the sections,
as described above, until the desired one (because API access is not available at the time of writing
for Claude 2, and we do not have a separate way to initialize it, we simply ask the “impersonation” as
the first prompt, before we start prompting the LLM for the full statement etc.).
We elaborate in Appendix A on why we have not used techniques such as Chain-of-Thoughts nor
why we haven’t implemented voting strategies to generate proofs. See Appendix F for a specific
example how a datapoint from the LLMKNOW dataset looks like.
We note that formal proofs are always larger than non-formalized proofs (which is captured by the
de Bruijn factor [15, 44]). Moreover, during the process of formalization, small omissions from the
original are regularly observed. In particular, this happens if the result is new or complex, as in the
case of the more recent formalizations within the Archive of Formal Proofs:
- For undergraduate-level mathematics: For the proof of Gödel’s Incompleteness Theorem,
L.C. Paulson mentions in [35]: “[...] other technical problems had to be solved in order to
complete the argument”.
- For research-level mathematics: The authors mention in [17]: “Much of the effort in this
project had not to do with the formalization itself but with ascertaining precisely what to
formalize. Although this material is considered mathematics of central importance, sources
are conflicting about the basic definitions”.) One of the major benefits of using ITPs is to
uncover such omissions and fix them.
Because the pen-and-paper proofs on which LLMs are trained also suffer from such omissions, an
approach where an LLM is asked to construct a proof at a similar level of detail to that of an ITP,
which contains various fixed omissions, would not be a fair comparison. The level of detail in proofs
we aim for should, therefore, be similar to the level of detail in a pen-and-paper proof.
The total score per item is the mean of the individual scores. We note that it is necessary to collect
all these scores independently: A correct response regarding, e.g., the proof of a theorem, does not
–perhaps counterintuitively– a priori imply that the subsequent, easier questions will be answered
correctly as well. The reason is that LLMs are known to act as stochastic parrots occasionally, which
11We rate the combined effect of both the first prompt asking for a proof, as well the second prompt, asking it
to reflect on the proof, which sometimes leads to new, improved proofs.
-----
in this case would mean they have simply memorized a proof but contain no “working knowledge”
about the item that makes up that proof. (This is unlike the case for humans, where, if they quote the
proof of a theorem correctly, in all likelihood, they will be familiar with mathematical concepts that
that proof employs.)
We do not penalize an LLM for dubious outputs such as:
- returning more information than what was asked for (including if that information is wrong).
- getting minor details wrong, such as describing a counterexample that is, in effect, not a true
counterexample.
- Being slightly vague at some stages in the proof.
The reason for this lower bar of tolerance is that our aim is not to have an LLM devise a perfect proof
with logically flawless chains of reasoning. Rather, we allow it to get some of the reasoning wrong
_as long as it can convince us that it has a good grasp of the knowledge of the mathematical objects_
_involved in the statement of the theorem and its proof. If we are confronted with proofs that aren’t_
a clear 0 or 1 because some mistakes are present, we adopt the criterion of marking it as a 1 if we
believe that progress was sufficiently good and understanding of the mathematical object sufficiently
deep that by having longer interactions with the LLM we could ultimately get it to output a longer
proof. What matters to us is that the right mathematical information can be elicited.
**4** **Results and Conclusion**
Because both LLMs and ITPs are being rapidly developed, we do not focus on evaluating a single
(LLM, ITP) pair but investigate whether, for each datapoint in the ITP library (specifically, the
portion that overlaps with the scope of our benchmark), some LLM exists that matches it. In
mathematical terms, we are interested in clarifying whether the following holds: ∀ ITP ∃ LLM :
knowledge(LLM) ≈ knowledge(ITP).[12]
Please see the end of the Appendix for an overview of the performance of GPT-4 on each of the 50
theorems after the first proof, which was the most challenging section. The average score on the proof
section was 0.68, i.e., 68% of all the proofs were satisfactory to our standards. On the statements
and items sections, scores were 94% and 98%, respectively. We have followed the majority in this
case. We refer to Appendix E for further noteworthy observations about how well the LLM proved
theorems.
Some theorems of the list of 100 theorems are actually collections of theorems, for example, in the
case of “Lebesgue measure and integration”. In this case, one ITP, HOL, seems to prove a specific
lemma about the open halfplane[13], whereas the other ITPs define lebesgue measure and integral (as
expected).
On Wiedijk’s list, we are satisfied that the mathematical knowledge encoded in LLMs matches that of
ITP libraries. Nonetheless, further investigations need to be carried out to ensure that LLMs possess
sufficient mathematical knowledge (in order to carry out autoformalization, for example): Beyond for
raw knowledge, it is important to assess whether LLMs are also capable of applying proof techniques.
This is particularly relevant for the miniF2F dataset [48], which is relevant for autoformalization and
uses problems from mathematical olympiads, on which it is known that LLMs struggle [19].
We conclude that future work is required to assess how strongly an LLM needs to be conditioned on
mathematical knowledge in order to be able carry out various formalization tasks (autoformalization,
proof generation, proof completion) successfully.
12The converse analysis, whether ∀ LLM ∃ ITP : knowledge(LLM) ≈ knowledge(ITP) is not relevant for
autoformalization, but would be appropriate for an effort of what could be called “autonaturalization”, which
would investigate whether an ITP can match the level of knowledge of an LLM. This is of inferior interest
because almost all formal ITP theorems in existence are of human invention and thus are represented in natural
language in books or texts. (Exceptions to this exist in the form of theorems of combinatorial flavor that are
well-suited for automated theorem provers and have been discovered with their use, such as the solution to the
Robbins Conjecture [29].) Since the informal, natural-language representation of a theorem is already available
somewhere, it does not need to be deduced from formal material. Furthermore, some ITPs, such as Mizar,
already have some support for presenting formal mathematics in a language similar to natural-language [2].
[13https://www.cs.ru.nl/~freek/100/hol.html#86](https://www.cs.ru.nl/~freek/100/hol.html#86)
-----
**References**
[1] Zhangir Azerbayev, Bartosz Piotrowski, Hailey Schoelkopf, Edward W Ayers, Dragomir Radev,
and Jeremy Avigad. ProofNet: Autoformalizing and formally proving undergraduate-level
mathematics. arXiv preprint arXiv:2302.12433, 2023.
[2] Grzegorz Bancerek, Czesław Byli´nski, Adam Grabowski, Artur Korniłowicz, Roman Matuszewski, Adam Naumowicz, and Karol P ˛ak. The role of the Mizar Mathematical Library for
interactive proof development in Mizar. Journal of Automated Reasoning, 61:9–32, 2018.
[3] Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, et al. A multitask, multilingual, multimodal evaluation
of ChatGPT on reasoning, hallucination, and interactivity. arXiv preprint arXiv:2302.04023,
2023.
[4] Alexander Bogomolny. Pythagorean theorem, Retrieved 2023-08-10. `https://www.`
```
cut-the-knot.org/pythagoras.
```
[5] Anthony Bordg, Lawrence Paulson, and Wenda Li. Simple type theory is not too simple:
Grothendieck’s schemes without dependent types. Experimental Mathematics, 31(2):364–382,
2022.
[6] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece
Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general
intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712, 2023.
[7] Kevin Buzzard. The future of mathematics?, 2019. Accessed on September 28, 2023.
[8] Kevin Buzzard. Formalising fermat, 2022. Accessed on October 28, 2023.
[9] Kevin Buzzard, Chris Hughes, Kenny Lau, Amelia Livingston, Ramon Fernández Mir, and
Scott Morrison. Schemes in lean. Experimental Mathematics, 31(2):355–363, 2022.
[10] Davide Castelvecchi et al. Mathematicians welcome computer-assisted proof in ‘grand unification’theory. Nature, 595(7865):18–19, 2021.
[11] Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts
prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv
_preprint arXiv:2211.12588, 2022._
[12] Wenhu Chen, Ming Yin, Max Ku, Elaine Wan, Xueguang Ma, Jianyu Xu, Tony Xia, Xinyi
Wang, and Pan Lu. Theoremqa: A theorem-driven question answering dataset. arXiv preprint
_arXiv:2305.12524, 2023._
[13] Elizabeth Clark, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, and Noah A
Smith. All that’s’ human’is not gold: Evaluating human evaluation of generated text. arXiv
_preprint arXiv:2107.00061, 2021._
[14] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to
solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
[15] N.G. de Bruijn. A survey of the Project Automath. reprinted from: Seldin, j. p. and hindley, j. r.,
eds., to h. b. curry: Essays on combinatory logic, lambda calculus and formalism, p. 579-606,
by courtesy of academic press inc., orlando. In R.P. Nederpelt, J.H. Geuvers, and R.C. de Vrijer,
editors, Selected Papers on Automath, volume 133 of Studies in Logic and the Foundations of
_Mathematics, pages 141–161. Elsevier, 1994._
[16] Leonardo de Moura, Soonho Kong, Jeremy Avigad, Floris van Doorn, and Jakob von Raumer.
The Lean theorem prover (system description). In Amy P. Felty and Aart Middeldorp, editors, Automated Deduction - CADE-25, pages 378–388, Cham, 2015. Springer International
Publishing.
-----
[17] Chelsea Edmonds, Angeliki Koutsoukou-Argyraki, and Lawrence C Paulson. Formalising
Szemerédi’s regularity lemma and Roth’s theorem on arithmetic progressions in Isabelle/HOL.
_Journal of Automated Reasoning, 67(1):2, 2023._
[18] Emily First, Markus N Rabe, Talia Ringer, and Yuriy Brun. Baldur: whole-proof generation
and repair with large language models. arXiv preprint arXiv:2303.04910, 2023.
[19] Simon Frieder, Luca Pinchetti, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz,
Philipp Christian Petersen, Alexis Chevalier, and Julius Berner. Mathematical capabilities of
ChatGPT. arXiv preprint arXiv:2301.13867, 2023.
[20] John Harrison, Josef Urban, and Freek Wiedijk. History of interactive theorem proving. In
_Computational Logic, volume 9, pages 135–214, 2014._
[21] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn
Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset.
In J. Vanschoren and S. Yeung, editors, Proceedings of the Neural Information Processing
_Systems Track on Datasets and Benchmarks, volume 1. Curran, 2021._
[22] Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven Zheng, Adams Wei Yu, Xinying
Song, and Denny Zhou. Large language models cannot self-correct reasoning yet. arXiv preprint
_arXiv:2310.01798, 2023._
[23] Albert Q Jiang, Sean Welleck, Jin Peng Zhou, Wenda Li, Jiacheng Liu, Mateja Jamnik, Timothée
Lacroix, Yuhuai Wu, and Guillaume Lample. Draft, sketch, and prove: Guiding formal theorem
provers with informal proofs. arXiv preprint arXiv:2210.12283, 2022.
[24] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large
language models are zero-shot reasoners. Advances in Neural Information Processing Systems,
35:22199–22213, 2022.
[25] Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay
Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving
quantitative reasoning problems with language models. _Advances in Neural Information_
_Processing Systems, 35:3843–3857, 2022._
[26] Hanmeng Liu, Ruoxi Ning, Zhiyang Teng, Jian Liu, Qiji Zhou, and Yue Zhang. Evaluating the
logical reasoning ability of ChatGPT and GPT-4. arXiv preprint arXiv:2304.03439, 2023.
[27] Patrick Massot and Jeremy Avigad. Mathematics in Lean. GitHub repository, 2023.
[28] The mathlib Community. The Lean mathematical library. In Proceedings of the 9th ACM
_SIGPLAN International Conference on Certified Programs and Proofs, CPP 2020, page 367–381,_
New York, NY, USA, 2020. Association for Computing Machinery.
[29] William Mccune. Solution of the Robbins Problem. Journal of Automated Reasoning, 19(3):263–
276, Dec 1997.
[30] Leonardo de Moura and Sebastian Ullrich. The Lean 4 theorem prover and programming
language. In André Platzer and Geoff Sutcliffe, editors, Automated Deduction – CADE 28,
pages 625–635, Cham, 2021. Springer International Publishing.
[31] OpenAI. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
[32] Lawrence C Paulson. Natural deduction as higher-order resolution. The Journal of Logic
_Programming, 3(3):237–258, 1986._
[33] Lawrence C. Paulson. A preliminary user’s manual for Isabelle. Computer Laboratory, Univer_sity of Cambridge, Report 133, 1988._
[34] Lawrence C Paulson. Isabelle: The next 700 theorem provers. In Logic and computer science,
volume 31, pages 361–386. Citeseer, 1990.
-----
[35] Lawrence C. Paulson. Gödel’s incompleteness theorems. Archive of Formal Proofs, November
[2013. https://isa-afp.org/entries/Incompleteness.html, Formal proof develop-](https://isa-afp.org/entries/Incompleteness.html)
ment.
[36] Peter Scholze. Half a year of the Liquid Tensor Experiment: Amazing developments, 2022.
Accessed on October 3, 2023.
[37] Filip Smola and Jacques D. Fleuriot. Hyperdual numbers and forward differentiation. Archive
_[of Formal Proofs, December 2021. https://isa-afp.org/entries/Hyperdual.html,](https://isa-afp.org/entries/Hyperdual.html)_
Formal proof development.
[38] Christian Szegedy. A promising path towards autoformalization and general artificial intelligence. In Christoph Benzmüller and Bruce Miller, editors, Intelligent Computer Mathematics,
pages 3–20, Cham, 2020. Springer International Publishing.
[39] Qingxiang Wang, Cezary Kaliszyk, and Josef Urban. First experiments with neural translation
of informal to formal mathematics. In Intelligent Computer Mathematics: 11th International
_Conference, CICM 2018, Hagenberg, Austria, August 13-17, 2018, Proceedings 11, pages_
255–270. Springer, 2018.
[40] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha
Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language
models. arXiv preprint arXiv:2203.11171, 2022.
[41] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed Chi,
Quoc V Le, and Denny Zhou. Chain-of-Thought prompting elicits reasoning in large language
models. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors,
_Advances in Neural Information Processing Systems, volume 35, pages 24824–24837. Curran_
Associates, Inc., 2022.
[42] Sean Welleck, Jiacheng Liu, Ronan Le Bras, Hannaneh Hajishirzi, Yejin Choi, and Kyunghyun
Cho. NaturalProofs: Mathematical theorem proving in natural language. arXiv preprint
_arXiv:2104.01112, 2021._
[43] Sean Welleck, Jiacheng Liu, Ximing Lu, Hannaneh Hajishirzi, and Yejin Choi. NaturalProver:
Grounded mathematical proof generation with language models. Advances in Neural Informa_tion Processing Systems, 35:4913–4927, 2022._
[44] Freek Wiedijk. The "de bruijn factor", 2012. Accessed on September 28, 2023.
[45] Yuhuai Wu, Albert Qiaochu Jiang, Wenda Li, Markus Rabe, Charles Staats, Mateja Jamnik,
and Christian Szegedy. Autoformalization with large language models. Advances in Neural
_Information Processing Systems, 35:32353–32368, 2022._
[46] Kaiyu Yang, Aidan M Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad
Godil, Ryan Prenger, and Anima Anandkumar. LeanDojo: Theorem proving with retrievalaugmented language models. arXiv preprint arXiv:2306.15626, 2023.
[47] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik
Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXiv
_preprint arXiv:2305.10601, 2023._
[48] Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. Minif2f: a cross-system benchmark for
formal olympiad-level mathematics. arXiv preprint arXiv:2109.00110, 2021.
-----
**A** **Issues with In-Context Learning and Voting for Open-Ended LLM Outputs**
In our evaluation pipeline, we ask the model to output a statement of the theorem, its constituent
items, and a proof (see Figure 1 from Section C. Here, we indicate why in-context learning and voting
do not apply to enhancing proof performance.
Various prompt-engineering techniques have emerged in the past year, some the most effective being
in-context learning ones such as Chain-of-Thoughts (CoT) [41], or Tree-of-Thoughts (ToT) [47]
that have established themselves as performance-enhancing techniques for various tasks, including
math word problems [41] from the GSM8K dataset [14], games based on reasoning such as the 24
puzzle[14] [47], or numerical reasoning tasks [11]. These in-context learning techniques all revolve
around letting a model elaborate longer on a task by making its reasoning explicit and using examples.
This approach, therefore, works best when problem categories are large / task instances are many
so that the model can be given a few examples of (annotated) problem-solution pairs before being
prompted with another problem for which a solution is sought. Implementing such prompt-enhancing
techniques by letting the model make its reasoning more explicit is difficult in a setting like ours,
where open-ended outputs are needed. In particular, when outputting proofs, it is not straightforward
how one could give the LLM an “example proof”, as required by such an in-context learning technique,
that does not yet contain all the essential ideas of the proof. Supposing this were possible, the diversity
of proofs that the same statement can have[15] complicates matters further since it is not clear which
proof to aim for when learning it in-context.
Voting [40, 25] has shown to offer performance benefits if an LLM has difficulties with a specific
type of task. This technique works best where there is an easy criterion to judge whether the
output in one run was similar to the output in another run, as aggregating them and establishing
a majority automatically is straightforward. This is difficult for open-ended approaches like ours
since LLM outputs are highly heterogeneous and cannot be reduced to a single answer. Executing
such a voting technique would require significant manual inspection of the output: In two different
runs, proofs might be produced that are both correct but different, and therefore, an automatic
aggregation procedure will not work. Nonetheless, repeating a prompt can help elicit a good response.
Therefore, we have used the approach to allow the LLM to reflect on its output, as this does not
require comparisons of outputs.
We note that mathematical problems from datasets like MATH [21] or TheoremQA [12] have a much
more constrained scope; in particular, both support the concept of a “final answer”, which can easily
be compared to a ground truth, which does not exist in the present article.
**B** **Motivating Wiedijk’s List of 100 Theorems**
We argue on the following grounds that using this list of theorems is a reasonable test of whether
LLMs’ breadth of mathematical knowledge rivals that of ITPs:
1. To formulate each theorem in an ITP, all the concepts appearing in the proof statement
need to be formalized first (and various properties about each of these concepts, as well
as relations between them, need to be known as well, as they are used in the proof of
the theorem[16]). We, therefore, require our assessment (outlined below, in Section 3.2)
to recursively enquire the LLM about all concepts required to formulate the statement,
similar to what a library of an ITP would contain. Because this list touches diverse areas of
mathematics, it highlights the coverage of mathematics of ITP libraries well.
2. Because these 100 theorems are not tied to any specific ITP, we get a sense of how an
LLM compares against all of the ITP libraries (Archive of Formal Proofs, mathlib4, etc.)
combined. Furthermore, they are an entrenched metric that is often used by the community.
3. Previous research on the mathematical capabilities of LLMs involved creating datasets of
mathematics that span high-school (GSM8K), undergraduate-level (MATH), and graduatelevel or olympiad-style mathematics (GHOSTS [19]). From these, it is known that LLMs are
[14https://en.wikipedia.org/wiki/24_(puzzle)](https://en.wikipedia.org/wiki/24_(puzzle))
15An extreme example is the Pythagorean theorem with hundreds of known proofs [4].
16For example, in order to prove that a function f : K → **R on a compact, topological space K attains a**
minimum and a maximum, properties about compact spaces need to be known.
-----
Figure 1: This figure shows the pipeline that was executed to evaluate the LLM output. The yellow
boxes show which type of prompt engineering was performed and where it was applied. The blue
boxes show which of these methods made use of API initialization. The “theorem description” is
the description as taken from Wiedijk’s list. The “2×” emphasizes that a second proof output is
generated after asking the LLM to reflect on the proof. The dotted arrows indicate that, as described
in Section 3, this part was re-rated by human assessors if the output was not satisfactory.
fairly proficient at reproducing undergraduate mathematics: E.g., the MATH and GHOSTS
datasets cover a significant part of mathematical objects that typically appear in an undergraduate curriculum. This alleviates us of the duty to re-check whether LLMs possess such
knowledge of undergraduate mathematics encoded by ITPs, which can be assumed to be
present. It frees us to investigate major theorems that are present in Wiedijk’s list (which
are not present in MATH or GHOSTS, that focus mostly on LLMs proving much smaller,
“unnamed” theorem).
**C** **Prompt Templates and Data Generation Pipeline**
Figure 1 shows the entire pipeline that was used to obtain the LLM output, starting from Wiedijk’s
list, as well as where prompt engineering was performed and of which type it was. The human
component in this pipeline solely pertains to rating the output.
The following prompt templates were used for each of the three sections, statement-items-proof
(where for the proof section, we use two prompts in order to allow the LLM to reflect on the output):
_Statement-prompt (where {theorem} is the theorem name from Wiedjik’s list):_
```
The following is a well-known statement or theorem: "{theorem}". Provide
a full, complete, and explicit formulation of it, of the form "*Theorem.
...*", as you would encounter it in a textbook.
```
_Items-prompt:_
```
Please explain all the individual, constituent concepts that make up
this complete and explicit formulation of the theorem. If all of the
constituent concepts are basic, such as numbers of functions, do not
elaborate on them.
```
_Proof-prompt-main:_
```
Now provide a proof of the statement. First, describe in a single
paragraph a skeleton of the proof. Then try to create a proof step by step
until you either succeed or have no idea how to proceed further.
```
_Proof-prompt-reflect:_
```
Now, take stock of what you have proved previously, reflect on it and make
any potential corrections.
```
-----
**D** **On the Importance of Assessing LLMs’ Ability to Autoformalize**
Autoformalization [39, 38] has seen a big push in the last years, and this trend is expected to accelerate,
as LLMs have recently been used to perform autoformalization [45, 23]. We recall:
1. The goal of autoformalization is to turn a mathematical text written in the usual, informal
style of natural language into a formal text that is parseable by an ITP;
2. Mathematical scenarios frequently arise in which humans make implicit operations (e.g.,
type conversions between the same number that is both a natural and a real number) that
have to be made explicitly for ITP (which is only possible if the proof that is to be formalized
is well understood in the first place).
Therefore, it is highly plausible that any LLM-like model, were it able to autoformalize a naturallanguage statement/proof in a target ITP, would need to be intimately familiar[17] with the mathematical
concepts and objects that are involved in that statement/proof in order to fill in the various gaps that
any natural-language statement/proof invariable contains and to connect the mathematical objects
presented in natural language with the corresponding formal ones, already present in the (library
associated to) an ITP.
To the best of our knowledge, existing research works on LLM-guided autoformalization and proof
completion/generation have operated under the assumption that LLMs possess sufficient mathematics
to carry out the assigned tasks. Nonetheless, even advanced models such as GPT-4 have been shown
to perform poorly on olympiad-level mathematical problems (e.g., see the GHOSTS dataset [19]), so
it is not plausible that older LLMs, such as PaLM and Codex, which are used by [45] as baselines,
have a deep understanding of the natural-language version of the proofs of the statements from the
miniF2F dataset, which deals with problems sourced from mathematical competitions. Therefore,
it is unclear whether the generally modest state-of-the-art performance that has been achieved so
far (e.g., autoformalization success rates were less than 35% on the miniF2F dataset, see Table 3
from [45]) is due to the inherent complexity related to formalization, or whether LLMs simply do not
encode a sufficiently advanced body of mathematics (definitions, proof techniques, etc.), on top of
which various formalization-related tasks are to be carried out.
**E** **Noteworthy Observations**
In just three cases, GPT-4 wasn’t able to generate a convincing formulation of the theorem directly
from the theorem name, as given on the list of 100 theorems. In just one case, it was not able to
explain all constituent items from a theorem formulation. The fault in this case was the ambiguous
name of the theorem, Ascending or Descending Sequences, which, after inspecting the ITP source,
revealed itself to be actually the Erd˝os–Szekeres theorem[18]
Even though ChatGPT does well on some more complicated theorems, it could not produce a
satisfactory proof for the Pythagorean theorem. It was very close on its second try, though it did not
produce the correct picture from its own instruction. Interestingly, on the third try, it attempted to mix
two different methods of proving the theorem but failed in the end.
The Königsberg problem can be understood as showing that no Eulerian path exists for the concrete
example of Königsberg or that an Eulerian path exists whenever the path is connected and every
vertex has an even degree. ChatGPT tried the latter, but the second part of the proof was incomplete.
We also noted that sometimes it did not went into sufficient detail in proofs, such as for the The Area
_of a Circle theorem, where it initially gave only an intuitive argument from which we were not able_
to assess whether it had some operational knowledge of the concepts involved in proving what the
area of the circle was (see Appendix F).
17We note that the wording that an LLM is “familiar” with a piece of mathematics is, strictly speaking, a case
of anthropomorphization. Nonetheless, we feel this abuse of language is acceptable since 1) it is difficult to avoid
it, as even the expression that an LLM “learns” is a case of it, and 2) it is clear what is meant: It is plausible that
if the training data contains sufficiently many cases where the involved mathematical objects appear, the LLM
will learn some form of the true mathematical relationships between the objects and be able to manipulate them
in a somewhat mathematically consistent way—similar to how sufficiently large and advanced LLMs learn to
use correct grammar [13].
[18https://en.wikipedia.org/wiki/ErdÅŚsâĂŞSzekeres_theorem](https://en.wikipedia.org/wiki/Erdős–Szekeres_theorem)
-----
Proof Ratings Proof Ratings
0 1 0 1
The Number of Subsets of a Set The Irrationality of the Square Root of 2
Pi is Trancendental Fundamental Theorem of Algebra
Konigsberg Bridges Problem The Denumerability of the Rational Numbers
The Laws of Large Numbers Pythagorean Theorem
Bezout s Theorem Gödel s Incompleteness Theorem
The Impossibility of Trisecting the Angle
L Hôpital s Rule
and Doubling the Cube
Isosceles Triangle Theorem The Area of a Circle
Sum of a Geometric Series Euler s Generalization of Fermat s Little Theorem
e is Transcendental The Infinitude of Primes
Sum of an arithmetic series Polyhedron Formula
Greatest Common Divisor Algorithm Euler s Summation of 1 + (1/2)^2 + (1/3)^2 + ....
Order of a Subgroup Fundamental Theorem of Integral Calculus
Ascending or Descending Sequences Insolvability of General Higher Degree Equations
The Principle of Mathematical Induction De Moivre s Theorem
The Mean Value Theorem Green s Theorem
Fourier Series The Non-Denumerability of the Continuum
The Cauchy-Schwarz Inequality Formula for Pythagorean Triples
The Intermediate Value Theorem Schroeder-Bernstein Theorem
The Fundamental Theorem of Arithmetic Leibnitz s Series for Pi
Divisibility by 3 Rule Sum of the Angles of a Triangle
The Triangle Inequality Taylor s Theorem
The Birthday Problem The Solution of a Cubic
The Law of Cosines Arithmetic Mean/Geometric Mean
Principle of Inclusion/Exclusion The Binomial Theorem
Cramer s Rule The Central Limit Theorem
Figure 2: The ratings of all 50 theorems after the first proof attempt, carried out on GPT-4. An
incorrect proof is denoted by 0, while a proof that is convincing enough that the LLM understood the
concepts it was talking about and combined them in a meaningful manner (even if the proof may be
slightly erroneous) is denoted by 1.
-----
**F** **Dataset**
We highlight how one datapoint from the LLMKNOW dataset looks like:
- The value of the theorem_name key is the name of the theorem from Wiedijk’s list.
- The value of the expected_items key is a list of items the raters expected the LLM to find.
- The value of statement is a list of list, where the first entry is 1 if the statement was correct,
```
0 otherwise. The second entry is a comment the rater may or may not make. There may be,
```
in total, at most four lists within this list, one for each attempt. If a 1 is encountered, then no
further lists should be present.
Analogous considerations hold for the items and proof key.
Here is how a single JSON datapoint related to the theorem The Area of a Circle looks like from our
dataset.
```
"theorem_name" : "The Area of a Circle",
" expected_items" : [ "real numbers", "measure" ],
"statement" : [[ 1,"" ]],
"items" : [[ 1, "" ]],
"proof" : [
[ 0, "it gives the intuitive proof, not the formal proof" ],
[ 0, "it claims that small section of the area has certain form (r^2 *
d\theta) without justification (also incorrectly), then
proceeds to integrate and changes the result once again to land on
the correct result in the end. Just integrate 2* sqrt(r^2 - t^2)
dt from -r to r without polar coordinates and stuff, you can do
this!" ],
[ 1, "" ]]
```
-----
| [
"Simon, Frieder",
"Martin, Trimmel",
"Rashid, Alawadhi",
"Klaus, Gy"
] | 2023-10-28T00:00:00 | NeurIPS 2023 MATH-AI Workshop | false | 0 | 0 | null | https://openreview.net/forum?id=EUoe9ujR0C | null | null |
LLMs Are Not Intelligent Thinkers: Introducing Mathematical Topic Tree Benchmark for Comprehensive Evaluation of LLMs | Large language models (LLMs) demonstrate impressive capabilities in mathematical reasoning. However, despite these achievements, current evaluations are mostly limited to specific mathematical topics, and it remains unclear whether LLMs are genuinely engaging in reasoning. To address these gaps, we present the Mathematical Topics Tree (MaTT) benchmark, a challenging and structured benchmark that offers 1,958 questions across a wide array of mathematical subjects, each paired with a detailed hierarchical chain of topics. Upon assessing different LLMs using the MaTT benchmark, we find that the most advanced model, GPT-4, achieved a mere 54\% accuracy in a multiple-choice scenario. Interestingly, even when employing Chain-of-Thought prompting, we observe mostly no notable improvement. Moreover, LLMs accuracy dramatically reduced by up to 24.2 percentage point when the questions were presented without providing choices. Further detailed analysis of the LLMs' performance across a range of topics showed significant discrepancy even for closely related subtopics within the same general mathematical area. In an effort to pinpoint the reasons behind LLMs performances, we conducted a manual evaluation of the completeness and correctness of the explanations generated by GPT-4 when choices were available. Surprisingly, we find that in only 53.3\% of the instances where the model provided a correct answer, the accompanying explanations were deemed complete and accurate, i.e., the model engaged in genuine reasoning. | null | [
"Arash Gholami, Davoodi",
"Seyed Pouyan Mousavi, Davoudi",
"Pouya, Pezeshkpour"
] | 2024-06-07T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2406.05194 | https://arxiv.org/abs/2406.05194 | https://www.semanticscholar.org/paper/a763e20be42f01cb1492e24035786d96c419e3bb |
|
LLaMA-Berry: Pairwise Optimization for O1-like Olympiad-Level Mathematical Reasoning | This paper presents an advanced mathematical problem-solving framework, LLaMA-Berry, for enhancing the mathematical reasoning ability of Large Language Models (LLMs). The framework combines Monte Carlo Tree Search (MCTS) with iterative Self-Refine to optimize the reasoning path and utilizes a pairwise reward model to evaluate different paths globally. By leveraging the self-critic and rewriting capabilities of LLMs, Self-Refine applied to MCTS (SR-MCTS) overcomes the inefficiencies and limitations of conventional step-wise and greedy search algorithms by fostering a more efficient exploration of solution spaces. Pairwise Preference Reward Model~(PPRM), inspired by Reinforcement Learning from Human Feedback (RLHF), is then used to model pairwise preferences between solutions, utilizing an Enhanced Borda Count (EBC) method to synthesize these preferences into a global ranking score to find better answers. This approach addresses the challenges of scoring variability and non-independent distributions in mathematical reasoning tasks. The framework has been tested on general and advanced benchmarks, showing superior performance in terms of search efficiency and problem-solving capability compared to existing methods like ToT and rStar, particularly in complex Olympiad-level benchmarks, including GPQA, AIME24 and AMC23. | An advanced mathematical problem-solving framework, LLaMA-Berry, for enhancing the mathematical reasoning ability of Large Language Models (LLMs), which combines Monte Carlo Tree Search (MCTS) with iterative Self-Refine to optimize the reasoning path and utilizes a pairwise reward model to evaluate different paths globally. | [
"Di, Zhang",
"Xiaoshui, Huang",
"Jianbo, Wu",
"Dongzhan, Zhou",
"Jingdi, Lei",
"Yuqiang, Li",
"Tong, Che",
"Jiatong, Li",
"Tong, Xie",
"Shufei, Zhang",
"Marco, Pavone",
"Wanli, Ouyang"
] | 2024-10-03T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2410.02884v1 | https://arxiv.org/abs/2410.02884 | https://www.semanticscholar.org/paper/d084517f14ee247883de0f4dd58bb923e418157d |
|
LLaMa-SciQ: An Educational Chatbot for Answering Science MCQ | Large Language Models (LLMs) often struggle with tasks requiring mathematical reasoning, particularly multiple-choice questions (MCQs). To address this issue, we developed LLaMa-SciQ, an educational chatbot designed to assist college students in solving and understanding MCQs in STEM fields. We begin by fine-tuning and aligning the models to human preferences. After comparing the performance of Mistral-7B and LLaMa-8B, we selected the latter as the base model due to its higher evaluation accuracy. To further enhance accuracy, we implement Retrieval-Augmented Generation (RAG) and apply quantization to compress the model, reducing inference time and increasing accessibility for students. For mathematical reasoning, LLaMa-SciQ achieved 74.5% accuracy on the GSM8k dataset and 30% on the MATH dataset. However, RAG does not improve performance and even reduces it, likely due to retriever issues or the model's unfamiliarity with context. Despite this, the quantized model shows only a 5% loss in performance, demonstrating significant efficiency improvements. | null | [
"Marc-Antoine, Allard",
"Matin, Ansaripour",
"Maria, Yuffa",
"Paul, Teiletche"
] | 2024-09-25T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.16779 | https://arxiv.org/abs/2409.16779 | https://www.semanticscholar.org/paper/38cf7441a3c0ff8769826b3f3b020a3b0173499d |
|
Language Models Do Hard Arithmetic Tasks Easily and Hardly Do Easy Arithmetic Tasks | The ability (and inability) of large language models (LLMs) to perform arithmetic tasks has been the subject of much theoretical and practical debate. We show that LLMs are frequently able to correctly and confidently predict the first digit of n-digit by m-digit multiplication tasks without using chain of thought reasoning, despite these tasks require compounding operations to solve. Simultaneously, LLMs in practice often fail to correctly or confidently predict the last digit of an n-digit by m-digit multiplication, a task equivalent to 1-digit by 1-digit multiplication which can be easily learned or memorized. We show that the latter task can be solved more robustly when the LLM is conditioned on all of the correct higher-order digits, which on average increases the confidence of the correct last digit on 5-digit by 5-digit multiplication tasks using Llama 2-13B by over 230% (0.13→0.43) and Mistral-7B by 150% (0.22→0.55). | It is shown that LLMs are frequently able to correctly and confidently predict the first digit of n-digit by m-digit multiplication tasks without using chain of thought reasoning, despite these tasks require compounding operations to solve. | # Language Models Do Hard Arithmetic Tasks Easily and Hardly Do Easy Arithmetic Tasks
**Andrew Gambardella[*]** **Yusuke Iwasawa** **Yutaka Matsuo**
University of Tokyo
**Abstract**
The ability (and inability) of large language
models (LLMs) to perform arithmetic tasks has
been the subject of much theoretical and practical debate. We show that LLMs are frequently
able to correctly and confidently predict the
first digit of n-digit by m-digit multiplication
tasks without using chain of thought reasoning, despite these tasks require compounding
operations to solve. Simultaneously, LLMs in
practice often fail to correctly or confidently
predict the last digit of an n-digit by m-digit
multiplication, a task equivalent to 1-digit by 1digit multiplication which can be easily learned
or memorized. We show that the latter task can
be solved more robustly when the LLM is conditioned on all of the correct higher-order digits,
which on average increases the confidence of
the correct last digit on 5-digit by 5-digit multiplication tasks using Llama 2-13B by over
230% (0.13→0.43) and Mistral-7B by 150%
(0.22→0.55).
**1** **Introduction**
The development of large language models
(LLMs) (Brown et al., 2020) has given new life
to the deep learning revolution, and seen mass
adoption within not just the scientific community,
but also society at large. These LLMs, being the
first known “general” machine learning model developed by humanity (Morris et al., 2024), have
been applied to various tasks dealing with natural language such as those commonly encountered
in school curricula (Hendrycks et al., 2021), and
even branching off into tasks such as text-to-image
generation (Saharia et al., 2022) and hierarchical
planning (Wang et al., 2023).
Despite the generality and far-reaching consequences of LLMs, there are still many significant
limitations making difficult the direct application
of LLMs to certain tasks. One such limitation is
[*Correspondence: [email protected]](mailto:[email protected])
the poor performance of LLMs on arithmetic tasks,
such as elementary addition, subtraction, multiplication, and division (Nogueira et al., 2021). Not
only do modern LLMs perform poorly on these
tasks, but some tasks such as n-digit by m-digit
multiplication and division, which require compounding operations to solve, appear to be unlearnable by pure autoregressive transformer architectures unless they decompose the problem into
multiple steps, such as with chain of thought reasoning (Wies et al., 2022; Liu et al., 2023). As
such, several solutions have been proposed, such
as fine-tuning so that chain of thought reasoning
is automatically used for problems which require
compounding operations (Liu et al., 2023; Kojima
et al., 2022) or fine-tuning to call outside tools,
such as a calculator (Schick et al., 2024).
While we most likely cannot expect simply training models with more parameters to allow for the
solving of tasks which require compounding operations without chain of thought, we believe that
analyzing the limitations and abilities of autoregressive LLMs when attempting to solve these tasks
directly may shed light on unknown properties of
LLMs. We therefore use Monte Carlo Dropout
(MC Dropout) (Gal and Ghahramani, 2016) to analyze the performance of LLMs which were trained
with dropout and which have open weights available, such as Llama 2 (Touvron et al., 2023) and
Mistral (Jiang et al., 2023), in carrying out arithmetic tasks.
MC Dropout allows one to interpret neural networks which were trained with dropout as Bayesian
neural networks, as neural networks trained with
dropout have been shown to be equivalent to a
Bayesian approximation to a Gaussian process.
This allows one to obtain empirical Bayesian confidence distributions over neural network weights or
outputs by doing multiple forward passes through
the neural network with dropout on, during test
time (Gal and Ghahramani, 2016). MC Dropout
85
-----
is one of many ensemble-based methods for uncertainty quantification (Ovadia et al., 2019; Ashukha
et al., 2020), and has been applied to analyze
the confidence of transformer architectures (Shelmanov et al., 2021) and to implement tree-based
LLM prompting (Mo and Xin, 2023).
Our results when applying MC Dropout to
Llama 2 and Mistral in arithmetic tasks were surprising. We found that all models could confidently
and correctly predict the first digit result of n-digit
by m-digit multiplication problems, despite it most
likely being impossible for any autoregressive LLM
to have learned a general algorithm for doing so
without decomposing the problem into multiple
steps, as finding this digit in general requires solving the entire multiplication problem[1]. We also
found that all models struggled to correctly output
the last digit of n-digit by m-digit multiplication
problems, despite it being very easy to learn an
algorithm for doing so, as calculating the last digit
is equivalent to 1-digit by 1-digit multiplication.
Finally, we show that the confidence of LLMs in
predicting the last digit can be increased by conditioning the generation of the last digit on the correct
intervening digits, despite the computation of the
last digit not depending on the correct computations of the higher-order digits at all.
**2** **Experiments**
We evaluate the HuggingFace (Wolf et al., 2019)
implementations of Llama 2-7B, Llama 2-13B, and
Mistral-7B (Touvron et al., 2023; Jiang et al., 2023)
in 2-shot settings, where the 2-shot examples are
of correct n-digit by m-digit multiplications. Sections 2.1 and 2.2 show results on the 3-digit by
3-digit multiplication task 592 ∗ 392, and averages over multiple problems with varying digit
length are provided in Section 2.3. Details about
the prompt and hyperparameters are given in Appendix A, details about the tokenizers for the models are given in Appendix B, and details about the
use of dropout in the training of the models is given
in Appendix C.
**2.1** **Unconditional Answer Generation**
We first study a version of the problem in which
the answer is generated with the language model
conditioned on the few shot examples and the problem to be solved, but is provided with none of
1Consider that the highest-order digit of
31622776601683793319[2] is 9, but the highest-order
digit of 31622776601683793320[2] is 1.
the digits to be generated (i.e., the normal few-shot
arithmetic scenario), which we refer to as “unconditional” generation in an abuse of terminology. Our
main results for these experiments are in Figures 1
and 2.
In Figure 1 we can see that both Llama 2-7B and
Llama 2-13B can confidently and correctly predict
the first digit of the 3-digit by 3-digit multiplication task 592 ∗ 392, which equals 232064. This
should be surprising as it is not immediately apparent from the problem that the first digit of the
solution should be 2, and the only way to discover
this is to compute the multiplication. As LLMs
most likely cannot perform n-digit by m-digit multiplication in the general case without decomposing
the problem into steps, the output of the first digit
in this case is unlikely to be the output of a multiplication algorithm learned by the LLM.
Figure 1: Confidence and accuracy of Llama 2-7B and
Llama 2-13B predicting the first digit of the result of
592 _∗_ 392. Both language models are able to confidently
and correctly predict that the first digit should be 2,
despite this not being immediately apparent from the
problem.
Conversely, in Figure 2, we can see that both
Llama 2-7B and Llama 2-13B can neither confidently nor correctly predict the last digit of the
same problem, despite doing so being equivalent
to 1-digit by 1-digit multiplication. This is a case
in which any reasonable model should be able to
confidently and correctly solve the task, as not only
86
-----
Figure 2: Confidence and accuracy of Llama 2-7B and
Llama 2-13B predicting the sixth digit of the result of
592 ∗ 392. Neither are able to predict this digit confidently, with the mode of the distribution on the “end
string” character in both cases. Both only output 4 in
about 20% of samples, despite it being immediately apparent that the final digit should be 4.
could the algorithm to solve the task be learned by
an autoregressive language model, but the information needed to solve this task could also very easily
be memorized by language models with billions of
weights.
**2.2** **Conditional Answer Generation**
Finally, we contrast the experiments given in Figures 1 and 2 with a third experiment, in which the
LLM is given all digits from the answer except for
the final digit, and is tasked with outputting solely
the final digit, which we refer to as “conditional”
generation in an abuse of terminology. Results
for this experiment are given in Figure 3. In this
case the confidence in the correct output doubles
for Llama 2-7B and triples for Llama 2-13B, with
Llama 2-13B now having most of its probability
mass on the correct last digit, whereas it did not do
so when generating the entire string at once (and
therefore often conditioning on incorrect prior digits). The fact that in both cases, more probability
mass is being put on the correct answer should be
surprising, as the computation of this digit does not
depend on the correctness of the higher-order digits
Figure 3: Confidence and accuracy of Llama 2-7B and
Llama 2-13B predicting the last digit of the result of
592 ∗ 392, when conditioned on the first five correct
digits. The confidence in the correct answer being 4
doubles for Llama 2-7B and more than triples for Llama
2-13B, despite the computation of the last digit not
depending on the prior digits being correct at all.
in any way.
**2.3** **Ablation Over Digit Length**
We provide further ablations over digit length with
Llama 2-7B and 13B in Table 1. Each subtable
gives the confidence of the correct digit, averaged
over 10 different n-digit by m-digit multiplication problems each. We find that the conclusions
shown for a single example in Sections 2.1 and
2.2 hold over varying multiplication problems and
digit lengths in general. We further provide similar Mistral-7B experiments in Table 2. While
Mistral-7B is stronger at arithmetic tasks than both
Llama 2-7B and 13B, the same patterns and conclusions found for Llama 2-7B and 13B also hold
for Mistral-7B.
**3** **Discussion of Results**
**3.1** **First Digit**
It is most likely impossible for autoregressive
LLMs to compute the first digit of an n-digit by mdigit multiplication problem without decomposing
the problem into steps, especially given that the answer is being written starting with the highest-order
87
-----
Llama 2-7B Llama 2-13B
|aa|Llam|ma 2-7B|Col4|Col5|
|---|---|---|---|---|
|a a a m a n a aa|2|3|4|5|
|2|0.81|0.90|0.82|0.82|
|3|0.91|0.78|0.88|0.92|
|4|0.88|0.83|0.92|0.77|
|5|0.89|0.74|0.89|0.87|
|aa|Llam|ma 2-13B|Col4|Col5|
|---|---|---|---|---|
|a a a m a n a aa|2|3|4|5|
|2|0.84|0.85|0.79|0.73|
|3|0.87|0.72|0.85|0.86|
|4|0.84|0.83|0.78|0.78|
|5|0.86|0.71|0.84|0.86|
|aa|Col2|(a)|Col4|Col5|
|---|---|---|---|---|
|a a a m a n a aa|2|3|4|5|
|2|0.52|0.34|0.16|0.20|
|3|0.39|0.22|0.16|0.19|
|4|0.40|0.21|0.20|0.15|
|5|0.33|0.20|0.15|0.11|
|aa|Col2|(c)|Col4|Col5|
|---|---|---|---|---|
|a a a m a n a aa|2|3|4|5|
|2|0.64|0.41|0.24|0.51|
|3|0.55|0.45|0.38|0.40|
|4|0.43|0.33|0.38|0.36|
|5|0.44|0.41|0.26|0.25|
(e)
|aa|Col2|(b)|Col4|Col5|
|---|---|---|---|---|
|a a a m a n a aa|2|3|4|5|
|2|0.78|0.50|0.32|0.30|
|3|0.56|0.40|0.24|0.17|
|4|0.63|0.37|0.29|0.22|
|5|0.52|0.30|0.24|0.13|
|aa|Col2|(d)|Col4|Col5|
|---|---|---|---|---|
|a a a m a n a aa|2|3|4|5|
|2|0.82|0.66|0.48|0.57|
|3|0.66|0.68|0.49|0.51|
|4|0.73|0.54|0.56|0.47|
|5|0.70|0.54|0.50|0.43|
(f)
m
n
aaaaaaa
2
3
4
5
m
n
aaaaaaa
Table 1: Llama 2-7B and 13B generation average confidence of the correct first digit (a, b), unconditional average
confidence of the correct last digit (c, d), and conditional average confidence of the correct last digit (e, f).
digit, and calculating the first digit depends on the
correct calculations of the lower-order digits.
LLMs can, however, perform 1-digit by 1-digit
multiplication. If these LLMs were to internally
round 592 to 600 and 392 to 400, it could approximately solve for the highest-order digit in this
way, as 600 ∗ 400 is a computation that can be performed by autoregressive language models. We
find it likely that such a computation is occurring
inside these LLMs, especially as stochastic gradient descent is likely to find such “shortcuts.”
**3.2** **Last Digit**
Both LLMs failing to predict the last digit when
generating the entire string autoregressively, and
their confidence and accuracy in predicting the last
digit increasing when conditioned on correct prior
digits, seem to be related, and could stem from the
view that autoregressive language models are “exponentially diverging diffusion processes,” a view
that several researchers have argued informally (LeCun et al., 2023), and has also recently been more
formally proven (Dziri et al., 2023). The argument
is essentially that if an autoregressive LLM has
some non-zero chance of making a mistake, then
repeated application of that LLM to generate a long
string will cause errors to compound exponentially.
This argument is not fully satisfying, however, for explaining the behavior of LLMs
in predicting the last digit. Not only should
_p(last_digit|wrong_intervening_digits)_
be the same as
_p(last_digit|correct_intervening_digits)_
due to the computation involved (the last digit not
depending on any other digits of the answer at
all), but the fact that LLMs are more correct and
more confident when conditioned on correct digits
rather than wrong digits means that LLMs are able
to internally distinguish between the two states,
despite not being able to generate the entire correct
string in the first place.
This finding may be related to recent results in
the hallucination detection literature, where it has
been noted that the internal states of LLMs can be
used to detect when the conditioning text, including
its own outputs, are wrong (Azaria and Mitchell,
2023; Chen et al., 2024). It stands to reason that
if the internal states of an LLM differ depending
88
-----
|aa|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
|a a a m a n a aa|2|3|4|5|
|2|0.97 ± 0.03|0.98 ± 0.03|0.98 ± 0.02|1.00 ± 0.00|
|3|0.98 ± 0.03|1.00 ± 0.00|0.94 ± 0.09|0.93 ± 0.04|
|4|0.99 ± 0.01|0.87 ± 0.15|0.98 ± 0.04|0.82 ± 0.09|
|5|0.89 ± 0.1|0.94 ± 0.11|0.95 ± 0.06|0.99 ± 0.01|
|aa|Col2|(a)|Col4|Col5|
|---|---|---|---|---|
|a a a m a n a aa|2|3|4|5|
|2|0.74 ± 0.06|0.57 ± 0.26|0.52 ± 0.29|0.41 ± 0.21|
|3|0.87 ± 0.10|0.70 ± 0.13|0.20 ± 0.12|0.11 ± 0.07|
|4|0.44 ± 0.14|0.70 ± 0.14|0.28 ± 0.23|0.30 ± 0.15|
|5|0.70 ± 0.10|0.33 ± 0.09|0.20 ± 0.13|0.22 ± 0.07|
|aa|Col2|(b)|Col4|Col5|
|---|---|---|---|---|
|a a a m a n a aa|2|3|4|5|
|2|0.85 ± 0.23|0.83 ± 0.13|0.73 ± 0.21|0.76 ± 0.23|
|3|0.86 ± 0.13|0.85 ± 0.11|0.75 ± 0.22|0.57 ± 0.32|
|4|0.76 ± 0.17|0.62 ± 0.27|0.77 ± 0.26|0.59 ± 0.26|
|5|0.80 ± 0.18|0.68 ± 0.21|0.65 ± 0.26|0.55 ± 0.35|
(c)
Table 2: Mistral-7B generation average and standard deviation confidence of the correct first digit (a), unconditional
average and standard deviation confidence of the correct last digit (b), and conditional average and standard deviation
confidence of the correct last digit (c).
on whether its conditioning is correct or not, then
further outputs which are autoregressively generated based on these internal states may also differ.
In other words, while previous results show that
LLMs may experience exponentially compounding
errors, our finding suggests this may occur not only
due to faulty reasoning when using incorrect intermediate steps, but also when the LLM “realizes”
that it had generated incorrect output, and then “believes” that its task is to continue to do so. While
out of the scope of this paper, we are interested in
further study of this property in particular, and its
potential implications.
**4** **Conclusion**
Here we present findings on the application of
LLMs to arithmetic tasks, seen through the lens of
Monte Carlo Dropout. We found that the abilities
of what LLMs can do in practice, versus what the
theory dictates should be possible for LLMs to do,
can be reversed in several cases. In particular, we
found that Llama 2 and Mistral could confidently
and correctly output the first digit of the result of ndigit by m-digit multiplication tasks despite most
likely being unable to in the general case, whereas
they struggled with outputting the last digit either
correctly or confidently, a task which should be easily learnable. We also found that accuracy and confidence in outputting the last digit increases when
the prior digits are correct, and we believe that this
finding is related to, and could have implications
for, recent results in hallucination detection.
**5** **Limitations**
MC Dropout is a technique that is only applicable when neural network weights are available
and the neural network was trained with dropout.
These restrictions limit the number of language
models that can be analyzed with the techniques
in this paper significantly, and crucially, state of
the art language models such as GPT-4 (OpenAI,
2023), Gemini (Gemini Team et al., 2023), and
Claude (Anthropic, 2023) cannot be analyzed in
this way by researchers outside of OpenAI, Google,
and Anthropic respectively. Such limitations make
clear the need for researchers to have access to
language models with open weights.
As we have restricted our analysis to Llama 2
and Mistral (which share similar architectures), it
is possible that our findings do not generalize to
89
-----
other large language models, but given the very
small number of existing language models that can
be analyzed in this way, it will be difficult to gauge
the generality of our findings until more language
models which were trained with dropout and have
open weights are released.
**References**
Anthropic. 2023. Model Card and Evaluations for
Claude Models.
Arsenii Ashukha, Alexander Lyzhov, Dmitry
Molchanov, and Dmitry Vetrov. 2020. Pitfalls of
in-domain uncertainty estimation and ensembling in
deep learning. In 8th International Conference on
_Learning Representations, ICLR 2020._
[Amos Azaria and Tom Mitchell. 2023. The Internal](https://doi.org/10.18653/v1/2023.findings-emnlp.68)
[State of an LLM Knows When It’s Lying. In Find-](https://doi.org/10.18653/v1/2023.findings-emnlp.68)
_ings of the Association for Computational Linguistics:_
_EMNLP 2023._
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss,
Gretchen Krueger, Tom Henighan, Rewon Child,
Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu,
Clemens Winter, Christopher Hesse, Mark Chen, Eric
Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess,
Jack Clark, Christopher Berner, Sam McCandlish,
Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners. In
_Advances in neural information processing systems_
_33, pages 1877–1901._
Chao Chen, Kai Liu, Ze Chen, Yi Gu, Yue Wu,
Mingyuan Tao, Zhihang Fu, and Jieping Ye. 2024.
INSIDE: LLMs’ Internal States Retain the Power
of Hallucination Detection. In The Twelfth Interna_tional Conference on Learning Representations._
Nouha Dziri, Ximing Lu, Melanie Sclar, Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D Hwang, Soumya
Sanyal, Sean Welleck, Xiang Ren, Allyson Ettinger,
Zaid Harchaoui, and Yejin Choi. 2023. Faith and
Fate: Limits of Transformers on Compositionality.
In Thirty-seventh Conference on Neural Information
_Processing Systems._
Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a
Bayesian approximation: Representing model uncertainty in deep learning. In 33rd International Confer_ence on Machine Learning, ICML 2016, volume 3,_
pages 1651–1660.
Gemini Team, Rohan Anil, Sebastian Borgeaud,
Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu
Soricut, Johan Schalkwyk, Andrew M Dai, Anja
Hauth, and others. 2023. Gemini: a family of
highly capable multimodal models. arXiv preprint
_arXiv:2312.11805._
Gemma Team. 2024. Gemma: Open models based
on gemini research and technology. arXiv preprint
_arXiv:2403.08295._
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou,
Mantas Mazeika, Dawn Song, and Jacob Steinhardt.
2021. Measuring Massive Multitask Language Understanding. Proceedings of the International Con_ference on Learning Representations (ICLR)._
Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego
de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud,
Marie-Anne Lachaux, Pierre Stock, Teven Le Scao,
Thibaut Lavril, Thomas Wang, Timothée Lacroix,
and William El Sayed. 2023. Mistral 7B. arXiv
_preprint arXiv:2310.06825._
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in
_neural information processing systems, 35:22199–_
22213.
Yann LeCun, Brenden Lake, Jacob Browning, David
Chalmers, Ellie Pavlick, and Gary Lupyan. 2023.
[Debate: Do language models need sensory grounding](https://youtu.be/x10964w00zk?si=EbXqIwg_ilD6JaC8)
[for meaning and understanding?](https://youtu.be/x10964w00zk?si=EbXqIwg_ilD6JaC8)
Tiedong Liu, Bryan Kian, and Hsiang Low. 2023. Goat:
Fine-tuned LLaMA Outperforms GPT-4 on Arithmetic Tasks. arXiv preprint arXiv:2305.14201.
Shentong Mo and Miao Xin. 2023. Tree of Uncertain
Thoughts Reasoning for Large Language Models.
_arXiv preprint arXiv:2309.07694._
Meredith Ringel Morris, Jascha Sohl-Dickstein, Noah
Fiedel, Tris Warkentin, Allan Dafoe, Aleksandra
Faust, Clement Farabet, and Shane Legg. 2024. Levels of AGI: Operationalizing Progress on the Path to
AGI. arXiv preprint arXiv:2311.02462.
Rodrigo Nogueira, Zhiying Jiang, Jimmy Lin, and
[David R Cheriton. 2021. Investigating the Limita-](https://github.com/castorini/transformers-arithmetic)
[tions of Transformers with Simple Arithmetic Tasks.](https://github.com/castorini/transformers-arithmetic)
Technical report.
OpenAI. 2023. GPT-4 Technical Report. Technical
report.
Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado,
D. Sculley, Sebastian Nowozin, Joshua V. Dillon,
Balaji Lakshminarayanan, and Jasper Snoek. 2019.
Can you trust your model’s uncertainty? evaluating
predictive uncertainty under dataset shift. In Ad_vances in Neural Information Processing Systems,_
volume 32.
Chitwan Saharia, William Chan, Saurabh Saxena, Lala
Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed
Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi,
Raphael Gontijo-Lopes, Tim Salimans, Jonathan
Ho, David J. Fleet, and Mohammad Norouzi. 2022.
Photorealistic Text-to-Image Diffusion Models with
90
-----
**B** **Tokenization**
Both the Llama 2 and Mistral tokenizers have one
single token for each digit, 0 to 9, and no digits
appear in any tokens other than these. This property
has been shown to be necessary to consistently
perform even simple addition tasks (Nogueira et al.,
2021).
**C** **Dropout**
The use of MC Dropout to model uncertainty in
neural networks requires, as a prerequisite, that the
neural networks were trained with dropout. As we
do not know the exact training details of Llama 2 or
Mistral, we cannot be fully assured that they used
dropout in training. We do, however, have very
strong reason to believe that they did use dropout
during training, due to the fact that both of these
models still output reasonable text when dropout
is turned on. Conversely, the Gemma (Gemma
Team, 2024) HuggingFace code also has dropout,
but when dropout is turned on even to only 10%,
the model outputs are entirely nonsensical (when
attempting these experiments with Gemma, we
do not even get numbers as output when dropout
is turned on, but do get reasonable output with
dropout turned off). The sort of robustness to neurons being dropped out that can be seen in Llama
2 and Mistral only occurs in models that were actually trained with dropout, and thus we can be
fairly confident that the use of MC Dropout here is
appropriate.
Deep Language Understanding. In Advances in Neu_ral Information Processing Systems, volume 35._
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta
Raileanu, Maria Lomeli, Eric Hambro, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2024.
Toolformer: Language models can teach themselves
to use tools. Advances in Neural Information Pro_cessing Systems, 36._
Artem Shelmanov, Evgenii Tsymbalov, Dmitri Puzyrev,
Kirill Fedyanin, Alexander Panchenko, and Maxim
[Panov. 2021. How certain is your transformer? In](https://doi.org/10.18653/v1/2021.eacl-main.157)
_EACL 2021 - 16th Conference of the European Chap-_
_ter of the Association for Computational Linguistics,_
_Proceedings of the Conference._
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, and others. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint
_arXiv:2307.09288._
Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and
Anima Anandkumar. 2023. Voyager: An OpenEnded Embodied Agent with Large Language Models. arXiv preprint arXiv:2305.16291.
Noam Wies, Yoav Levine, and Amnon Shashua. 2022.
Sub-Task Decomposition Enables Learning in Sequence to Sequence Tasks. In The Eleventh Interna_tional Conference on Learning Representations._
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien
Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and others. 2019. Huggingface’s transformers:
State-of-the-art natural language processing. arXiv
_preprint arXiv:1910.03771._
**A** **Prompt Format and Hyperparameters**
The exact prompt used in Sections 2.1 and
2.2 is “111 ∗ 472 = 52392. 362 ∗ 194 =
70228. _{math_question}_ = _{given_str}”_
where math_question is the multiplication task,
and given_str is the empty string in Section 2.1
and all but the last digit of the correct answer in Section 2.2. In Section 2.3 the prompts are randomly
generated 2-shot n-digit by m-digit multiplication
examples in the same format.
We set the dropout rate to be 0.1, which is the
dropout rate commonly used in GPT applications,
and appears to be the dropout rate used to train
Llama 2 and Mistral. All sampling from LLMs
is done deterministically other than the stochasticity induced by dropout (i.e., we take argmax over
logits). We collect 100 samples for each output.
91
-----
| [
"Yutaka, Matsuo",
"Andrew, Gambardella",
"Vivek, Srikumar",
"Yusuke, Iwasawa",
"Lun-Wei, Ku",
"Andre, Martins"
] | 2024-08-01T00:00:00 | ACL 2024 Short Papers | false | 0 | 0 | null | https://aclanthology.org/2024.acl-short.8 | https://arxiv.org/abs/2406.02356 | https://www.semanticscholar.org/paper/c6ab06736fd1ed5c19aed84de33c738f2788af5f |
Language Models, Mathematics, Embeddings | N/A | null | [
"Zsolt, Zombori",
"Pal, Zsamboki",
"Andras, Kornai"
] | 2024-09-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
|
Large Language Models are Contrastive Reasoners | Prompting methods play a crucial role in enhancing the capabilities of pre-trained large language models (LLMs). We explore how contrastive prompting (CP) significantly improves the ability of large language models to perform complex reasoning. We demonstrate that LLMs are decent contrastive reasoners by simply adding "Let's give a correct and a wrong answer." before LLMs provide answers. Experiments on various large language models show that zero-shot contrastive prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks without any hand-crafted few-shot examples, such as increasing the accuracy on GSM8K from 35.9% to 88.8% and AQUA-RAT from 41.3% to 62.2% with the state-of-the-art GPT-4 model. Our method not only surpasses zero-shot CoT and few-shot CoT in most arithmetic and commonsense reasoning tasks but also can seamlessly integrate with existing prompting methods, resulting in improved or comparable results when compared to state-of-the-art methods. Our code is available at https://github.com/yao8839836/cp | This work explores how contrastive prompting (CP) significantly improves the ability of large language models to perform complex reasoning and demonstrates that LLMs are decent contrastive reasoners by simply adding "Let's give a correct and a wrong answer" before LLMs provide answers. | ## Large Language Models are Contrastive Reasoners
**Liang Yao**
Tencent Inc.
Shenzhen, China
```
[email protected]
```
**Abstract**
Prompting methods play a crucial role in enhancing the capabilities of pre-trained
large language models (LLMs). We explore how contrastive prompting (CP)
significantly improves the ability of large language models to perform complex
reasoning. We demonstrate that LLMs are decent contrastive reasoners by simply
adding “Let’s give a correct and a wrong answer.” before LLMs provide answers.
Experiments on various large language models show that zero-shot contrastive
prompting improves performance on a range of arithmetic, commonsense, and
symbolic reasoning tasks without any hand-crafted few-shot examples, such as
increasing the accuracy on GSM8K from 35.9% to 88.8% and AQUA-RAT from
41.3% to 62.2% with the state-of-the-art GPT-4 model. Our method not only
surpasses zero-shot CoT and few-shot CoT in most arithmetic and commonsense
reasoning tasks but also can seamlessly integrate with existing prompting methods,
resulting in improved or comparable results when compared to state-of-the-art
[methods. Our code is available at the following GitHub repository: https://](https://github.com/yao8839836/cp)
```
github.com/yao8839836/cp.
```
**1** **Introduction**
Recent studies [28, 3, 13] have shown that large language models (LLMs) exhibit impressive performance across a wide range of tasks. In particular, the chain-of-thought (CoT) prompting technique
has demonstrated the capability of LLMs to handle complex tasks, including math problem solving,
by guiding them to generate intermediate reasoning steps [22, 10, 27]. These studies spotlight the significance of developing efficient techniques to direct LLMs in their reasoning processes [11, 1, 5, 25].
Nevertheless, the current chain-of-thought (CoT) paradigm encounters two main challenges: offering
_correct guidance or examples of reasoning and reducing the reliance on manual labeling. In particular,_
zero-shot CoT [10] provides general reasoning guidance by providing instructions like “Think step
by step.”, but the generated reasoning steps may not be correct and adequate for tasks such as
commonsense question-answering (Section 4). On the other hand, few-shot CoT [22] offers more
detailed guidance but necessitates labeled examples of the reasoning process, which can be expensive
to obtain for each task. This raises an important research question: Is it possible to generate a more
accurate reasoning process without relying on human labeling?
In this work, we introduce contrastive prompting, a novel prompting approach that automatically
directs the reasoning process of large language models. Our inspiration stems from how humans can
learn from both their correct and incorrect actions [16]. For instance, when confronted with a math
problem (as in Figure 1), people may ask "How can we prevent mistakes in each step?" By identifying
the steps that are prone to mistakes on their own, they can enhance their ability to avoid mistakes and
provide accurate solutions. Our idea is to prompt LLMs to emulate this reasoning process, enabling
them to effectively solve new problems.
Preprint.
-----
Figure 1: Example inputs and outputs of GPT-4 with (a) standard Zero-shot, and (b) ours (Zero
Q: Tim grows 5 trees. Each year he collects 6 lemons from each Q: Tim grows 5 trees. Each year he collects 6 lemons from each
tree. How many lemons does he get in a decade? tree. How many lemons does he get in a decade?
A:The answer (arabic numerals) is A: Let's give a correct and a wrong answer.
_(Output) 3000_ _(Output) Correct Answer:_
Tim grows 5 trees and each tree gives him 6 lemons per year. So,
in one year, he gets 5 * 6 = 30 lemons. A decade is 10 years, so
in a decade, he would get 30 * 10 = 300 lemons.
Incorrect Answer:
In one year, Tim gets 5 * 6 = 30 lemons. A decade is 20 years, so
in a decade, he would get 30 * 20 = 600 lemons. (This is incorrect
because a decade is 10 years, not 20 years.)
(a) Zero-shot (b) Zero-shot-CP (Ours)
shot-CP). In contrast to Few-shot-CoT, which requires step-by-step reasoning examples for each task,
our approach does not rely on any examples. Instead, we use the same prompt "Let’s give a correct
and a wrong answer" for all tasks, including arithmetic, symbolic, commonsense, and other logical
reasoning tasks.
Specifically, when presented with a problem to solve, we instruct LLMs to generate both correct
and incorrect answers within the given context. To achieve this, we provide prompts such as "Let’s
give a correct and a wrong answer." Following this, we verify and confirm the correct answer. Our
proposed approach offers multiple advantages. It not only generates incorrect answers autonomously
but also places a greater emphasis on ensuring the accuracy of the answers. This eliminates the
need for manually labeling reasoning examples for each task and problem, effectively addressing the
challenges faced by zero-shot and few-shot CoT.
We evaluate the proposed approach across various reasoning-intensive tasks, including arithmetic
reasoning (SingleEq, AddSub, MultiArith, AQUA-RAT, GSM8K, SVAMP), commonsense reasoning
(CommonsenseQA, StrategyQA), symbolic reasoning (Last Letter Concatenation, Coin Flip), and
other logical reasoning tasks (Date Understanding, Tracking Shuffled Objects). We employ two
state-of-the-art base LLMs GPT-3.5 and GPT-4 [13] and four popular open source LLMs. The
experimental findings demonstrate significant improvements in scores compared to the zero-shot
baseline across all 12 datasets. Moreover, our method not only surpasses zero-shot CoT and few-shot
CoT in most arithmetic and commonsense reasoning tasks but also achieves better results when
combined with zero-shot or few-shot CoT, approaching or even surpassing the performance of existing
state-of-the-art methods. These results indicate the effectiveness of generating incorrect answers for
individual problems to guide the reasoning process of LLMs.
**2** **Related Works**
We provide a brief overview of the two fundamental research areas that form the basis of our work:
the advent of large language models (LLMs) and prompting, and learning from negative examples.
**Large language models and prompting** Recently, LLMs [28] like ChatGPT and GPT-4 [13] have
gained significant attention. Researchers find that scaling pre-trained language models often leads to
an improved model capacity on downstream tasks. These large-sized models show different behaviors
from smaller models and display surprising abilities in solving a series of complex tasks.
Prompt engineering is an emerging field dedicated to the development and optimization of prompts,
enabling efficient utilization of LLMs across diverse applications and research domains [1, 17].
Zero-shot prompting involves querying the LLM without any examples while few-shot prompting
provides models with a few input-output examples [3]. Chain-of-thought (CoT) [22, 10] prompting
enables complex reasoning capabilities through intermediate reasoning steps. Despite its success,
few-shot CoT [22] needs human-labeled reasoning steps for each example, while zero-shot CoT [10]
may generate incorrect reasoning steps (especially for commonsense and arithmetic reasoning).
Several X-of-thought approaches [23, 24, 7, 4] extend CoT on reasoning tasks, where X can be a tree,
-----
a graph, or a program. Auto-CoT [27] improves zero-shot CoT by providing similar questions as
few-shot examples for the target question. Self-consistency [20] sample multiple, diverse reasoning
paths through few-shot CoT, and use the generations to select the most consistent answer. Analogical
prompting [25] leverages LLMs to automatically generate relevant few-shot examples for each
question. In contrast to these works, our method emphasizes eliciting self-awareness in LLMs
regarding potential errors and actively avoiding them.
**Learning from Negative Examples** Contrastive learning, a widely adopted technique in deep
learning, aims to enhance the quality of learned representations by training models to differentiate
between "positive" and "negative" samples [8]. In the LLMs area, reinforcement learning from human
feedback (RLHF) [14] and direct preference optimization (DPO) [15] fine-tune LLMs with relative
human judgments of response quality. Self-reflection [18, 9, 12, 26] incorporates "critic" or review
steps to identify errors made by the LLM itself and improve upon them. However, it is important to
note that the initial output of the LLM may not contain any errors, and there is a potential risk of the
model reinforcing its own errors if it inaccurately evaluates the quality of its responses or generates
invalid principles. The closest work to ours is the Contrastive CoT [5] that extends few-shot CoT by
creating wrong reasoning processes from annotated correct reasoning steps. The main distinction is
that the erroneous answers generated by Contrastive CoT still require human-annotated examples,
and the random reordering of entities during the reasoning process may not align with the patterns of
errors made by LLMs themselves. On the contrary, our approach enables LLMs to generate erroneous
answers on their own, which aligns better with their intrinsic knowledge. It does not require human
annotation.
**3** **Contrastive Prompting**
We propose Contrastive Prompting (CP), a template-based prompting approach for contrastive
reasoning. Our method can seamlessly integrate with any prompting technique by incorporating a
trigger sentence before the LLM provides answers. In the following, we first illustrate our method
using Zero-shot-CP as an example, which only uses the original question without supporting examples.
Next, we will discuss how to combine our method with other prompting techniques.
**3.1** **Two-stage prompting**
Although Zero-shot-CP is straightforward in concept, it utilizes prompting twice to extract both
reasoning and answer, as illustrated in Figure 2.
**1st prompt: reasoning extraction** In this step we begin by transforming the input question x
into a prompt x[′] using a simple template “Q: [X]. A: [T]”. Here [X] represents the input slot for x
and [T] represents a slot for a manually crafted trigger sentence t that would extract the reasoning
process to answer the question x. For instance, if we use “Let’s give a correct and a wrong answer.”
as a trigger sentence, the prompt x[′] would be “Q: [X]. A: Let’s give a correct and a wrong answer.”.
Additional trigger examples can be found in Table 4. Prompted text x[′] is then inputted into a LLM,
which generates the subsequent sentence z. While various decoding strategies can be employed, we
have used greedy decoding throughout the paper for simplicity.
**2nd prompt: answer extraction** In the second step, we utilize the generated sentence z in
conjunction with the prompted sentence x[′] to extract the ultimate answer from the LLM. To provide a
more specific explanation, we combine three elements by concatenating them as "[X’] [Z] [A]". Here,
[X’] represents the 1st prompt x[′], [Z] represents the sentence z generated in the first step, and [A]
represents a trigger sentence used to extract the answer. The prompt for this step is self-augmented,
meaning that it includes the sentence z generated by the same LLM. During the experiment, we
employed slightly different answer triggers based on the format of the answer. For instance, for
multiple-choice question-answering, we use "Therefore, among A through E, the correct answer is."
For math problems that require a numerical answer, we use "Therefore, the correct answer (arabic
numerals) is." Please refer to Appendix A.2 for the answer trigger sentences we used in each task.
Subsequently, the prompted text is inputted into the LLM to generate sentences y and extract the
final answer.
-----
Figure 2: The complete process of Zero-shot-CP involves two steps: Firstly, we utilize the initial
**【1st prompt】** **【2nd prompt】**
**Reasoning Extraction** **Answer Extraction**
Q: Tim grows 5 trees. Each year he collects 6 Q: Tim grows 5 trees. Each year he collects 6
lemons from each tree. How many lemons does lemons from each tree. How many lemons does he
he get in a decade? get in a decade?
**A: Let's give a correct and a wrong answer.** A: Let's give a correct and a wrong answer.
Correct Answer:
Tim grows 5 trees and each tree gives him 6 lemons
LLM per year. So, in one year, he ...
**Therefore, the correct answer (arabic numerals) is**
Correct Answer:
Tim grows 5 trees and each tree gives him 6 LLM
lemons per year. So, in one year, he gets 5 * 6 =
30 lemons. A decade is 10 years, so in a decade,
he would get 30 * 10 = 300 lemons.
Incorrect Answer:
In one year, Tim gets 5 * 6 = 30 lemons. A 300..
decade is 20 years, so in a decade, he would get
30 * 20 = 600 lemons. (This is incorrect because
a decade is 10 years, not 20 years.)
"reasoning" prompt to extract a comprehensive reasoning process from a LLM. Secondly, we employ
the subsequent "answer" prompt to extract the correct answer from the reasoning text.
**3.2** **Integrating with other prompting methods**
We can easily integrate our CP with any advanced prompting methods. We name the combined
method X-CP, where X can be Zero-shot-CoT, Few-shot-CoT, or any other method. X-CP also has
two steps: reasoning extraction and answer extraction. For Zero-shot-CoT-CP, the only distinction is
we replace the trigger sentence “Let’s give a correct and a wrong answer.” with “Let’s think step by
step and give both a correct answer and a wrong answer.”. For Few-shot-CoT-CP, the distinction is
that k few-shot examples with reasoning steps are added before “Q: [X]. A: Let’s give a correct and a
wrong answer.”, the resulting prompt x[′] will be “Q: [X1] A: [Z1]. The answer is [Y1]. Q: [X2] A:
[Z2]. The answer is [Y2]. ... Q: [Xk] A: [Zk]. The answer is [Yk]. Q: [X]. A: Let’s give a correct
and a wrong answer.”, where Xi, Zi and Yi are the question, reasoning steps and the final answer for
each example i.
**4** **Experiment**
**4.1** **Settings**
**Datasets** We evaluate the effectiveness of our proposal on 12 datasets[1] encompassing four categories
of reasoning tasks: arithmetic (SingleEq, AddSub, MultiArith, AQUA-RAT, GSM8K, SVAMP),
commonsense (CommonsenseQA, StrategyQA), symbolic (Last Letter Concatenation, Coin Flip),
and other logical reasoning tasks (Date Understanding, Tracking Shuffled Objects). The detailed
description of each dataset can be found in [10]. We use the few-shot examples with reasoning steps
provided by [22].
**Baselines** We conducted a comprehensive comparison of our CP method with various types
of prompting techniques. These include simple zero-shot methods such as Zero-shot and Zeroshot-CoT [10], Few-shot and Few-shot-CoT [22], X-of-thought approaches like Tree of Thoughts
(ToT) [23], Graph of Thoughts (GoT) [24], Program-aided Language models (PAL) [7], and Program
of thoughts prompting (PoT) [4]. Additionally, we compared our method with other prompting
techniques such as Analogical prompting (Self-generated Exemplars) [25] and Self-consistency
[1The datasets are available at https://github.com/kojima-takeshi188/zero_shot_cot/tree/](https://github.com/kojima-takeshi188/zero_shot_cot/tree/main/dataset)
```
main/dataset.
```
-----
Table 1: Accuracy (in percentage) comparison of Zero-shot-CP with Zero-shot, Zero-shot-CoT and
Zero-shot-CoT-CP on each dataset. The values on the left-hand side of each task represent the results
obtained using GPT-4, while the values on the right-hand side represent the results obtained using
gpt-35-turbo.
Arithmetic
Method SingleEq AddSub MultiArith GSM8K AQUA SVAMP
Zero-shot 90.6/81.7 **92.4/82.8** 96.5/61.2 35.9/14.3 41.3/29.9 86.4/69.7
Zero-shot-CoT 91.7/91.1 89.6/86.6 97.7/94.8 **90.9/75.1** 70.1/55.9 90.4/81.9
Zero-shot-CP 91.7/91.7 91.6/90.6 **97.8/95.2** 88.8/73.2 62.2/40.2 91.5/83.2
Zero-shot-CoT-CP **92.7/92.3** 91.4/88.6 97.2/96.2 89.5/73.5 **71.3/60.6** **91.6/85.9**
Common Sense Other Reasoning Tasks Symbolic Reasoning
Common Strategy Date Shuffled Last Letter Coin Flip
SenseQA QA Understand Objects (4 words) (4 times)
Zero-shot 82.9/71.3 64.8/65.0 73.2/40.4 40.7/33.9 5.0/4.2 36.6/49.6
Zero-shot-CoT 78.3/67.8 69.8/60.9 79.4/62.1 **93.1/73.1** **90.2/88.0** **98.6/94.0**
Zero-shot-CP **83.5/73.9** 73.4/67.3 71.5/51.8 44.4/51.5 23.4/41.8 33.2/56.8
Zero-shot-CoT-CP 82.9/71.3 **73.8/66.7** **80.8/62.6** 75.4/50.9 86.2/70.8 94.4/55.2
(SC) [20]. Furthermore, we evaluated the effectiveness of self-reflection methods, including Recursive
Criticism and Improvement (RCI) [9], Self-Refine [12] and Learning Principles from Mistakes
(LEAP) [26], as well as the closest related work, Contrastive CoT [5]. We also experimented with
running CP using Self-consistency (SC). Specifically, we set the temperature parameter of LLM to
0.7 and sampled 10 correct and incorrect answers. Then, we selected the answer that appeared most
frequently among the 10 correct answers as the final answer.
**Models** We use GPT-4 and gpt-35-turbo (0613) as our base models (accessed between Feb
22nd–May 22nd 2024) for main experiments. We also tested our CP on various open LLMs:
LLaMA3-8B, LLaMA3-70B [19], ChatGLM3-6B [6] and Qwen1.5-72B-Chat [2]. All generations
(except experiments with Self-consistency) are done by greedy decoding (i.e., sampling with zero
temperature) as in the original CoT work [22]. For GPT models, we use Azure OpenAI services. For
open LLM models except ChatGLM3-6B, we use LlamaAPI [2]. For ChatGLM3-6B, we downloaded
the model and perform the inference on a Linux server with an A100 GPU.
**Answer filtering** We follow Zero-shot-CoT [10] work and use its original implementation to pick
up the final answers.
**4.2** **Results**
**Zero-shot Results** Table 1 presents a summary of the accuracy achieved by our method (Zero-shotCP and Zero-shot-CoT-CP), standard zero-shot prompting (Zero-shot), and Zero-shot-CoT across
different datasets. Notably, Zero-shot-CP demonstrates significant improvements over Zero-shot
on all 12 datasets across various tasks using gpt-35-turbo. For instance, Zero-shot-CP achieves
score gains ranging from 14.3% to 73.2% on GSM8K, from 61.2% to 95.2% on MultiArith and
from 4.2% to 41.8% on Last Letter Concatenation. Moreover, Zero-shot-CP outperforms Zero-shot
on the majority (9 out of 12) of datasets when using GPT-4, with improvements ranging from
35.9% to 88.8% on GSM8K and from 41.3% to 62.2% on AQUA-RAT. These results indicate that
eliciting self-awareness in LLMs to compare incorrect and correct answers can help prevent incorrect
responses.
Zero-shot-CP outperforms Zero-shot-CoT in the majority (4 out of 6) of arithmetic reasoning tasks,
suggesting that the self-awareness of LLMs regarding incorrect answers may be more crucial than
their self-awareness regarding steps in mathematical reasoning. Furthermore, in commonsense
reasoning tasks, Zero-shot-CP consistently outperforms Zero-shot, while Zero-shot-CoT exhibits
inferior results. This is likely because commonsense reasoning tasks require fewer steps, making
awareness of individual pieces of commonsense knowledge more crucial. However, Zero-shot CoT
[2https://docs.llama-api.com/quickstart](https://docs.llama-api.com/quickstart)
-----
Table 2: An example question from SVAMP and responses from Zero-shot, Zero-shot-CoT, Zeroshot-CP and Zero-shot-CoT-CP. We use GPT-4 as the base model.
|Question|Q: Danny collects bottle caps and wrappers. He found 46 wrappers and 50 bottle caps at the park. Now he has 21 bottle caps and 52 wrappers in his collection. How many more bottle caps than wrappers did danny find at the park?|
|---|---|
|Zero-shot|A: The answer (arabic numerals) is -4. Danny found 50 bottle caps and 46 wrappers at the park, so he found 4 more bottle caps than wrappers.|
|---|---|
|Zero-shot-CoT|A: Let’s think step by step. First, we need to find out how many bottle caps and wrappers Danny had before he found the new ones at the park. We know that after finding the new ones, he has 50 bottle caps and 52 wrappers. Before finding the new ones, he had 21 bottle caps and 46 wrappers. So, the number of bottle caps he found at the park is 50 (total now) - 21 (total before) = 29 bottle caps. The number of wrappers he found at the park is 52 (total now) - 46 (total before) = 6 wrappers. Therefore, Danny found 29 - 6 = 23 more bottle caps than wrappers at the park. Therefore, the answer (arabic numerals) is 23|
|---|---|
|Zero-shot-CP|A: Let’s give a correct and a wrong answer. Correct Answer: Danny found 50 bottle caps and 46 wrappers at the park. So, he found 50 - 46 = 4 more bottle caps than wrappers at the park. Incorrect Answer: Danny found 50 bottle caps and 46 wrappers at the park. So, he found 46 - 50 = -4 more bottle caps than wrappers at the park. Therefore, the correct answer (arabic numerals) is 4|
|---|---|
|Zero-shot-CoT-CP|A: Let’s think step by step and give both a correct answer and a wrong answer. Correct Answer: Danny found 50 bottle caps and 46 wrappers at the park. So, he found 50 - 46 = 4 more bottle caps than wrappers at the park. Incorrect Answer: If we mistakenly subtract the number of bottle caps and wrappers in his collection from the number he found at the park, we would get: 50 - 21 = 29 bottle caps and 46 - 52 = -6 wrappers. This would incorrectly suggest that Danny found 29 - (-6) = 35 more bottle caps than wrappers at the park. Therefore, the correct answer (arabic numerals) is 4|
|---|---|
|Ground Truth|4|
|---|---|
performs worse than Zero-shot-CP in symbolic reasoning and other reasoning tasks, indicating that
for tasks like Coin Flip and Shuffled Objects, the steps involved are more crucial. Moreover, breaking
down the reasoning process into individual steps makes it less likely for LLMs to make mistakes
in these tasks, as the action space is very limited. Nevertheless, by combining Zero-shot-CP and
Zero-shot-CoT, Zero-shot-CoT-CP achieves better results in most tasks.
Table 8 in Appendix presents the results of using various open-source LLMs: LLaMA3-8B, LLaMA370B, ChatGLM3-6B, and Qwen1.5-72B-Chat as base models. The results demonstrate that Zeroshot-CP not only performs well with state-of-the-art GPT models, but also exhibits significant
improvements across multiple sizes of open-source models.
**Qualitative Analysis** Table 2 and Table 6 illustrates examples from SVAMP and CommonsenseQA.
For the example from the arithmetic reasoning task SVAMP, we found that the reasoning process of
zero-shot is correct, but it produces an incorrect answer "-4". Zero-shot-CoT is disrupted by irrelevant
information, resulting in incorrect reasoning processes and answers being generated. Zero-shot-CP,
on the other hand, is not disrupted and provides both the correct answer and explanation. Combining
Zero-shot-CoT with contrastive prompting also yields correct answers. For the example from the
common sense reasoning task CommonsenseQA, contrastive prompting is able to recognize the word
"work" in the question and provide the correct answer, while Zero-shot and Zero-shot-CoT cannot.
In Appendix B, we present responses generated by Zero-shot-CP for each dataset. Figure 4–15 gives
both a positive example and a negative example of Zero-shot-CP on each dataset. From positive
examples, we found that Zero-shot-CP can generate "wrong" answers that are indeed incorrect in most
cases (11/12), except for Tracking Shuffled Object (Figure 12). Incorrect answers are generated by
intentionally calculating inaccurately (Figure 11), disregarding important details (Figure 9), searching
for descriptions that are not present in the question (Figure 8), or deliberately providing explanations
-----
Table 3: Categorization results of Zero-shot-CP output (with GPT-4) on 240 problems. We manually
annotated 10 solved problems and 10 unsolved problems for each of the 12 datasets. GT means
Ground Truth. See Appendix B for the link of the examples.
**Category** **# Examples**
The given "correct" answer is the GT, and the given "wrong" answer is indeed incorrect. 112
The given "correct" answer is the GT, and the given "wrong" answer is also the GT. 4
The given "correct" answer is the GT, no "wrong" answer is given. 4
The given "correct" answer is incorrect, and the given "wrong" answer is the GT. 23
The given "correct" answer is incorrect, and the given "wrong" answer is also incorrect. 91
The given "correct" answer is incorrect, no "wrong" answer is given. 6
Table 4: Comparison prompting templates using accuracies (in percentage) on AQUA-RAT, GSM8K,
AddSub and MultiArith in zero-shot setting. GPT-4 is used as the model. Bolded numbers indicate
the best results within each block’s column, while underlined numbers indicate the best results across
the entire column.
GPT-4 AQUA GSM8K AddSub MultiArith
Let’s give a correct and a wrong answer. 62.2 88.8 **91.6** **97.8**
Let’s first give a wrong answer, then give the correct answer. 69.3 86.1 90.9 95.0
Let’s first give the correct answer, then give a wrong answer. 58.7 **89.7** 91.6 95.0
Let’s give a correct and an incorrect answer. 66.5 88.7 91.6 97.7
Please give a correct and a wrong answer. 57.5 82.0 88.9 94.0
Let’s give a correct answer. **71.7** 75.9 89.4 97.0
Let’s think step by step and give both a correct answer and a wrong answer. **71.3** 89.5 **91.4** 97.2
Let’s give a correct and a wrong answer. Let’s also think
52.8 88.9 89.4 96.7
step by step for the correct and the wrong answer.
Let’s think step by step. (Zero-shot-CoT) 70.1 **90.9** 89.6 **97.7**
that contradict common sense (Figure 14). From negative examples, We found that the "wrong
answers" provided by Zero-shot-CP can actually be valid answers (Figure 5, 6, 7, 11, 13 and 14).
In some other negative examples, both the "correct answers" and "incorrect answers" provided by
Zero-shot-CP are inconsistent with the ground truth (Figure 4, 8, 9, 10 and 15). Furthermore, we
manually annotated 10 solved problems and 10 unsolved problems of Zero-shot-CP for each of the
12 datasets. Table 3 provides the categorization and counts of these 120 solved problems and 120
unsolved problems. We found that for the solved problems, the majority (112/120) of the given
"wrong" answers were indeed incorrect. For the unsolved problems, the majority (91/120) of both
the "correct" and "wrong" answers were incorrect, with a portion (23/120) of the "wrong" answers
actually being the ground truth. This situation typically occurs in yes or no questions.
**The impact of prompt selection on Zero-shot-CP** We explore different contrastive prompts and
their combination with Zero-shot-CoT. Table 4 outlines performance using 9 different templates with
two classes. The first category is related to correct and wrong answers. We found "Let’s give a correct
and a wrong answer." achieves the best results in general. "Let’s first give a wrong answer, then give
the correct answer." performs well on AQUA-RAT but it performs worse on other datasets. "Let’s
first give the correct answer, then give a wrong answer." generally performs well on the four datasets,
meaning that providing the correct answer first and then the incorrect answer generally leads to better
results. The trigger word "incorrect" performs similarly to "wrong", and the trigger word "Please"
performs much worse than "Let’s". This is likely because, in the pre-training and fine-tuning data,
there are slightly fewer occurrences of "incorrect" compared to "wrong" in samples related to correct
and incorrect answers, and "Please" is rarely present as this type of data is generally not dialogue data.
"Let’s give a correct answer." performs well on the multiple-choice question dataset AQUA-RAT, but
the performances on other three mathematical reasoning tasks are not satisfactory. The second type
of template connects to Zero-shot-CoT, and we found that starting with the steps performs better than
starting with the correct and wrong answers. Overall, it appears that Zero-shot-CoT-CP ("Let’s think
step by step and give both a correct answer and a wrong answer.") performs the best.
**The impact of number of wrong answers on Zero-shot-CP** We explored the impact of the number
of incorrect answers on accuracy. We vary the number of wrong answers from 0 to 4, where 0 means
standard zero-shot prompting. For k = 1, 2, 3, 4, we use the template "Let’s give a correct and k[′]
wrong answer(s).", where k[′] can be "a", "two", "three" and "four". Figure 3 plots the results. We
-----
|Col1|GPT-4 gpt-35-turbo|
|---|---|
0.7 GPT-4gpt-35-turbo 0.90.8 GPT-4gpt-35-turbo 0.92 0.95
0.6 0.7 0.90 0.90
0.6 0.85
0.5 0.5 0.88 0.80
0.4 0.86 0.75
0.4 0.3 0.70
0.2 0.84 GPT-4 0.65 GPT-4
0.3 gpt-35-turbo 0.60 gpt-35-turbo
0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0
(c) AddSub
(d) MultiArith
(a) AQUA
(b) GSM8K
Figure 3: Accuracy scores by varying the number of wrong answers. We test GPT-4 and gpt-35-turbo
on (a) AQUA-RAT, (b) GSM8K, (c) AddSub and (d) MultiArith. The range of the number of wrong
answers is from 0 (Zero-shot) to 4.
Table 5: Comparison with baseline methods using accuracies (in percentage) on MultiArith, GSM8K,
StrategyQA, AQUA-RAT and SVAMP. gpt-35-turbo is used as the model if not specified. The
baseline results with citations are obtained from corresponding papers. Bolded numbers indicate the
best results within each block’s column, while underlined numbers indicate the best results across the
entire column.
MultiArith GSM8K StrategyQA AQUA SVAMP
Zero-shot 61.2 14.3 65.0 29.9 69.7
Zero-shot-CoT 94.8 75.1 60.9 55.9 81.9
Zero-shot-CP 95.2 73.2 67.3 40.2 83.2
Zero-shot-CP + SC **98.3** **80.3** **67.9** 48.4 **87.6**
Zero-shot-CoT-CP 96.2 73.5 66.7 **60.6** 85.9
Few-shot 87.3 58.2 56.7 37.4 78.2
Few-shot-CoT 98.0 71.1 62.2 55.5 81.0
Few-shot-CoT-CP 97.5 72.7 68.7 52.0 82.2
Few-shot-CoT (GPT-4) 98.3 89.5 **79.1** 58.7 83.3
Few-shot-CoT-CP (GPT-4) **98.7** 90.3 78.2 66.9 91.8
Few-shot-CoT-CP (GPT-4) + SC 97.5 **91.9** 78.8 **70.9** **93.1**
Contrastive CoT [5] – 79.0 66.2 57.5 81.6
Self-consistency (Code-davinci-002) [20] **100.0** 78.0 79.8 52.0 86.8
PAL (Codex) [7] 99.2 80.4 – – 79.4
Zero-shot-PoT (Codex) [4] 92.2 57.0 – 43.9 70.8
Few-shot-PoT (Codex) [4] – 71.6 – 54.1 85.2
Few-shot-PoT-SC (Codex) [4] – 80.0 – **58.6** **89.1**
ToT (GPT-4) [23] – **90.0** **83.0** – –
GoT (T5-large) [24] – 82.2 – – –
Self-generated Exemplars [25] – 77.8 – – –
Self-Refine [12] – 75.1 – – –
LEAP [26] – 77.4 – – –
Zero-Shot-CoT + RCI [9] 97.2 86.2 – – 85.8
Few-Shot-CoT + RCI [9] 99.2 84.3 – – 87.4
found that providing 1-2 incorrect answers yielded the best results in general. The only exception is
on AQUA-RAT, where providing more incorrect answers resulted in higher accuracy. This is because
the task involves a multiple-choice question with five options, and excluding more incorrect answers
makes the LLMs more certain about the correct answer. For mathematical reasoning tasks with an
infinite number of answers, providing just one incorrect answer seems to be sufficient.
**Comparison with other baselines** Table 5 compares the performances on four mathematical
reasoning datasets (MultiArith, GSM8K, AQUA-RAT and SVAMP) and one common sense reasoning
dataset (StrategyQA) across CP and baselines. We find that Zero-shot-CP not only outperforms
Few-shot, but also achieves comparable or even superior results to Few-shot-CoT. For instance, on
GSM8K, the absolute accuracy has improved by 2.1%, and on StrategyQA, the absolute accuracy
has improved by 5.1%. This suggests that in certain cases, the provided examples and reasoning
steps may not be as effective as directly triggering the LLM’s self-awareness of errors. By combining
CP and Few-shot-CoT, we can achieve even better results. Furthermore, if we utilize the GPT-4
-----
model, we can attain performance that is comparable to or even superior to the current state-of-the-art
methods. For example, in AQUA, SVAMP, and GSM8K, we have achieved higher accuracy scores
compared to recently published results. When running CP with Self-consistency (SC), the scores can
be further improved in both zero-shot and few-shot settings.
For a more in-depth performance analysis, we note that X-of-thought methods can improve the effectiveness of Few-shot-CoT, indicating that trees, graphs, and code indeed provide richer information
and greater flexibility compared to simple chains of thought. Among them, the results reported by
the ToT work seem to be more prominent. By sampling multiple reasoning paths and selecting the
most consistent answer, Self-consistency (SC) demonstrates excellent performance in mathematical
and commonsense reasoning tasks. It can also be effectively combined with other methods such
as PoT. Self-generated Exemplars also show better performance than CoT, indicating that allowing
the LLM to recall relevant questions and answer them before responding to the original question is
helpful. The performance of Self-reflection methods, such as Self-Refine and LEAP, is similar to
that of Self-generated Exemplars. RCI performs even better, primarily due to its direct combination
with the CoT method. Compared to these methods, our approach is simpler and can also yield
comparable results. Compared to the most relevant method Contrastive CoT, our Zero-shot-CP
performs better on the StrategyQA and SVAMP datasets. Zero-shot-CoT-CP performs better on
AQUA-RAT. However, on GSM8K, Contrastive CoT performs better, indicating that generating
incorrect answers by swapping the order of entities is useful for this task.
The main reasons why CP works well are threefold: 1) the pre-training data of LLMs is very likely to
contain a variety of correct and incorrect answers to different types of questions. For instance, many
web pages and books provide correct and incorrect answers to math reasoning[3 4 5] and common sense
reasoning [6 7] questions. Answers to questions on social media platforms like Reddit, Quora, and
Zhihu can be voted on by others through “upvotes” or “downvotes”. Highly upvoted answers are
more likely to be correct answers while others may be incorrect. Pre-training LLMs with massive
text containing these correct and wrong answers can encode general patterns (token probability) of
these answers in LLM parameters. When prompted with contrastive prompts, LLMs can leverage
these patterns to generate both a correct and a wrong answer. The "correct" answer is more likely to
align with ground truth, as the model has learned to eliminate possible wrong answers. 2) Instruction
tuning unlocks the abilities of LLMs to give correct and incorrect answers by fine-tuning on various
natural language processing tasks including reasoning tasks [21]. 3) RLHF fine-tunes LLMs using
human feedback data, which offers relative judgments on the quality of answers. This feedback is
valuable for enhancing the LLMs’ capability to distinguish between correct and incorrect answers.
**5** **Limitations and future directions**
Our work has some limitations and there is room for further exploration and improvement. Firstly, we
have not yet validated the effectiveness of CP on smaller models such as Gemma-2B and Qwen1.50.5B. Secondly, we can further explore the combination of contrastive prompting with other prompting
methods, such as X-of-thought approaches. Lastly, exploring the impact of contrastive prompting on
LLM parameters and visualizing it would be an interesting future direction.
**6** **Conclusion**
We propose Contrastive Prompting (CP), a template-based prompting approach for contrastive reasoning. Quantitative and qualitative results indicate that Zero-Shot-CoT shows significant improvements
across various reasoning tasks. Our method can seamlessly integrate with any prompting technique
by incorporating a trigger sentence before the LLM provides answers. CP not only outperforms
zero-shot CoT and few-shot CoT in the majority of arithmetic and commonsense reasoning tasks, but
also achieves comparable or even superior results when compared to state-of-the-art methods.
[3https://prek-math-te.stanford.edu/operations/analyzing-thinking-underlying-wrong-answers](https://prek-math-te.stanford.edu/operations/analyzing-thinking-underlying-wrong-answers)
[4https://mathmistakes.org/category/elementary-school/](https://mathmistakes.org/category/elementary-school/)
[5https://www.gutenberg.org/ebooks/38769](https://www.gutenberg.org/ebooks/38769)
[6https://www.proprofs.com/quiz-school/story.php?title=common-sense-quiz_1](https://www.proprofs.com/quiz-school/story.php?title=common-sense-quiz_1)
[7https://www.wikihow.com/Common-Sense-Quiz](https://www.wikihow.com/Common-Sense-Quiz)
-----
**References**
[1] X. Amatriain. Prompt design and engineering: Introduction and advanced methods. arXiv
_preprint arXiv:2401.14423, 2024._
[2] J. Bai, S. Bai, Y. Chu, Z. Cui, K. Dang, X. Deng, Y. Fan, W. Ge, Y. Han, F. Huang, B. Hui, L. Ji,
M. Li, J. Lin, R. Lin, D. Liu, G. Liu, C. Lu, K. Lu, J. Ma, R. Men, X. Ren, X. Ren, C. Tan,
S. Tan, J. Tu, P. Wang, S. Wang, W. Wang, S. Wu, B. Xu, J. Xu, A. Yang, H. Yang, J. Yang,
S. Yang, Y. Yao, B. Yu, H. Yuan, Z. Yuan, J. Zhang, X. Zhang, Y. Zhang, Z. Zhang, C. Zhou,
J. Zhou, X. Zhou, and T. Zhu. Qwen technical report. arXiv preprint arXiv:2309.16609, 2023.
[3] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam,
G. Sastry, A. Askell, et al. Language models are few-shot learners. Advances in neural
_information processing systems, 33:1877–1901, 2020._
[4] W. Chen, X. Ma, X. Wang, and W. W. Cohen. Program of thoughts prompting: Disentangling
computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588,
2022.
[5] Y. K. Chia, G. Chen, L. A. Tuan, S. Poria, and L. Bing. Contrastive chain-of-thought prompting.
_arXiv preprint arXiv:2311.09277, 2023._
[6] Z. Du, Y. Qian, X. Liu, M. Ding, J. Qiu, Z. Yang, and J. Tang. Glm: General language model
pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of
_the Association for Computational Linguistics (Volume 1: Long Papers), pages 320–335, 2022._
[7] L. Gao, A. Madaan, S. Zhou, U. Alon, P. Liu, Y. Yang, J. Callan, and G. Neubig. Pal: Programaided language models. In International Conference on Machine Learning, pages 10764–10799.
PMLR, 2023.
[8] A. Jaiswal, A. R. Babu, M. Z. Zadeh, D. Banerjee, and F. Makedon. A survey on contrastive
self-supervised learning. Technologies, 9(1):2, 2020.
[9] G. Kim, P. Baldi, and S. McAleer. Language models can solve computer tasks. Advances in
_Neural Information Processing Systems, 36, 2023._
[10] T. Kojima, S. S. Gu, M. Reid, Y. Matsuo, and Y. Iwasawa. Large language models are zero-shot
reasoners. Advances in neural information processing systems, 35:22199–22213, 2022.
[11] P. Liu, W. Yuan, J. Fu, Z. Jiang, H. Hayashi, and G. Neubig. Pre-train, prompt, and predict:
A systematic survey of prompting methods in natural language processing. ACM Computing
_Surveys, 55(9):1–35, 2023._
[12] A. Madaan, N. Tandon, P. Gupta, S. Hallinan, L. Gao, S. Wiegreffe, U. Alon, N. Dziri,
S. Prabhumoye, Y. Yang, et al. Self-refine: Iterative refinement with self-feedback. Advances in
_Neural Information Processing Systems, 36, 2023._
[13] OpenAI. Gpt-4 technical report, 2023.
[14] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal,
K. Slama, A. Ray, et al. Training language models to follow instructions with human feedback.
_Advances in Neural Information Processing Systems, 35:27730–27744, 2022._
[15] R. Rafailov, A. Sharma, E. Mitchell, C. D. Manning, S. Ermon, and C. Finn. Direct preference
optimization: Your language model is secretly a reward model. Advances in Neural Information
_Processing Systems, 36, 2023._
[16] H. L. Roediger and B. Finn. Getting it wrong: Surprising tips on how to learn. Scientific
_American, pages 499–504, 2009._
[17] P. Sahoo, A. K. Singh, S. Saha, V. Jain, S. Mondal, and A. Chadha. A systematic survey of
prompt engineering in large language models: Techniques and applications. arXiv preprint
_arXiv:2402.07927, 2024._
-----
[18] N. Shinn, F. Cassano, A. Gopinath, K. Narasimhan, and S. Yao. Reflexion: Language agents
with verbal reinforcement learning. Advances in Neural Information Processing Systems, 36,
2023.
[19] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal,
E. Hambro, F. Azhar, et al. Llama: Open and efficient foundation language models. arXiv
_preprint arXiv:2302.13971, 2023._
[20] X. Wang, J. Wei, D. Schuurmans, Q. V. Le, E. H. Chi, S. Narang, A. Chowdhery, and D. Zhou.
Self-consistency improves chain of thought reasoning in language models. In The Eleventh
_International Conference on Learning Representations, 2023._
[21] J. Wei, M. Bosma, V. Zhao, K. Guu, A. W. Yu, B. Lester, N. Du, A. M. Dai, and Q. V. Le.
Finetuned language models are zero-shot learners. In International Conference on Learning
_Representations, 2022._
[22] J. Wei, X. Wang, D. Schuurmans, M. Bosma, F. Xia, E. Chi, Q. V. Le, D. Zhou, et al. Chain-ofthought prompting elicits reasoning in large language models. Advances in Neural Information
_Processing Systems, 35:24824–24837, 2022._
[23] S. Yao, D. Yu, J. Zhao, I. Shafran, T. Griffiths, Y. Cao, and K. Narasimhan. Tree of thoughts:
Deliberate problem solving with large language models. Advances in Neural Information
_Processing Systems, 36, 2023._
[24] Y. Yao, Z. Li, and H. Zhao. Beyond chain-of-thought, effective graph-of-thought reasoning in
large language models. arXiv preprint arXiv:2305.16582, 2023.
[25] M. Yasunaga, X. Chen, Y. Li, P. Pasupat, J. Leskovec, P. Liang, E. H. Chi, and D. Zhou. Large
language models as analogical reasoners. arXiv preprint arXiv:2310.01714, 2023.
[26] T. Zhang, A. Madaan, L. Gao, S. Zheng, S. Mishra, Y. Yang, N. Tandon, and U. Alon. In-context
principle learning from mistakes. arXiv preprint arXiv:2402.05403, 2024.
[27] Z. Zhang, A. Zhang, M. Li, and A. Smola. Automatic chain of thought prompting in large
language models. In The Eleventh International Conference on Learning Representations (ICLR
_2023), 2023._
[28] W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou, Y. Min, B. Zhang, J. Zhang, Z. Dong,
et al. A survey of large language models. arXiv preprint arXiv:2303.18223, 2023.
**A** **Details of Experimental Setup**
**A.1** **Code, Prompts, Logs**
[All code is available at https://github.com/yao8839836/cp.](https://github.com/yao8839836/cp)
[All prompts are available at https://github.com/yao8839836/cp/blob/master/main.py.](https://github.com/yao8839836/cp/blob/master/main.py)
[Our experimental logs are available at https://github.com/yao8839836/cp/tree/master/](https://github.com/yao8839836/cp/tree/master/log)
```
log.
```
**A.2** **Prompts For Answer Extraction**
Table 7 summarizes the answer extraction prompt for each task used for the CP experiments.
**B** **Additional Experimental Results**
In this section, we provide a summary of additional example texts generated by Zero-shot-CP. gpt-35turbo is used as the model if not specified. Table 6 illustrates example outputs of zero-shot prompting
methods from CommonsenseQA. Figure 4–15 show a positive example and a negative example of
Zero-shot-CP on each dataset. "GT" in the figures means "Ground Truth".
-----
Table 6: An example question from CommonsenseQA and responses from Zero-shot, Zero-shot-CoT,
Zero-shot-CP and Zero-shot-CoT-CP. We use GPT-4 as the base model.
|Question|Q: A bald eagle is likely to be found on what kind of work? Answer Choices: (A) rural area (B) book (C) canada (D) painting (E) aviary|
|---|---|
|Zero-shot|A: Among A through E, the answer is (E) aviary. However, it should be noted that bald eagles can also be found in rural areas (A), in Canada (C), and depicted in books (B) and paintings (D). The question seems to be asking where a bald eagle is likely to be found physically, and an aviary is a place where birds are kept.|
|---|---|
|Zero-shot-CoT|A: Let’s think step by step. A bald eagle is a bird, so it could be found in an aviary (E), which is a place where birds are kept. It could also be found in a rural area (A), as these birds often live in areas with less human activity. A bald eagle could be depicted in a book (B) or a painting (D), as it is a popular symbol in various cultures, especially in the United States. Lastly, bald eagles are also found in Canada (C), as their habitat ranges across most of North America. Therefore, all of the options could be correct depending on the context of the question. Therefore, among A through E, the answer is (A) rural area, (B) book, (C) Canada, (D) painting, (E) aviary.|
|---|---|
|Zero-shot-CP|A: Let’s give a correct and a wrong answer. Correct Answer: (D) painting Wrong Answer: (A) rural area Therefore, among A through E, the correct answer is (D) painting|
|---|---|
|Zero-shot-CoT-CP|A: Let’s think step by step and give both a correct answer and a wrong answer. Correct Answer: (D) painting. A bald eagle can be depicted in a painting, as it is a common symbol of strength and freedom, especially in American culture. Incorrect Answer: (A) rural area. While it’s true that bald eagles can be found in rural areas, the phrasing of the question suggests a type of work or creation where a bald eagle might be represented, not its natural habitat. Therefore, among A through E, the correct answer is (D) painting.|
|---|---|
|Ground Truth|D|
|---|---|
Table 7: Answer extraction prompts used in our CP experiments.
No Task CP answer extraction prompts
1 SingleEq Therefore, the correct answer (arabic numerals) is
2 AddSub Therefore, the correct answer (arabic numerals) is
3 MultiArith Therefore, the correct answer (arabic numerals) is
4 GSM8K Therefore, the correct answer (arabic numerals) is
5 AQUA-RAT Therefore, among A through E, the correct answer is
6 SVAMP Therefore, the correct answer (arabic numerals) is
7 CommonsenseQA Therefore, among A through E, the correct answer is
8 StrategyQA Therefore, the correct answer (Yes or No) is
9 Date Understanding Therefore, among A through F, the correct answer is
10 Shuffled Objects Therefore, among A through C, the correct answer is
11 Last Letters Therefore, the correct answer is
12 Coin Flip Therefore, the correct answer (Yes or No) is
The 240 examples, along with our annotations, can be accessed at the following
[link: https://github.com/yao8839836/cp/blob/master/results/zero_shot_cp_gpt4_](https://github.com/yao8839836/cp/blob/master/results/zero_shot_cp_gpt4_240_examples_labeled.txt)
```
240_examples_labeled.txt.
```
Table 8 presents the results of using various open-source LLMs: LLaMA3-8B, LLaMA3-70B,
ChatGLM3-6B, and Qwen1.5-72B-Chat as base models.
-----
Table 8: Accuracy (in percentage) comparison of Zero-shot-CP with Zero-shot using open LLMs.
MultiArith GSM8K StrategyQA AQUA SVAMP
_LLaMA3-8B_
Zero-shot 31.0 38.1 – – 52.8
Zero-shot-CP **57.3** **54.9** – – **61.4**
_LLaMA3-70B_
Zero-shot 86.5 63.7 54.5 38.2 –
Zero-shot-CP **97.0** **66.1** **57.5** **62.2** –
_ChatGLM3-6B_
Zero-shot 5.3 4.3 – – –
Zero-shot-CP **67.0** **40.0** – – –
_Qwen1.5-72B-Chat_
Zero-shot 54.7 19.3 71.2 31.1 65.2
Zero-shot-CP **75.5** **52.1** **73.5** **45.3** **77.4**
Q: Mary found 2 seashells and Keith found 5 seashells on the beach . When they cleaned them,
they discovered that 9 were cracked . How many seashells did they find together ?
A: Let's give a correct and a wrong answer. Correct answer: To find the total number of seashells
they found together, we need to add the number of seashells Mary found (2) to the number of
seashells Keith found (5).
2 + 5 = 7
Therefore, Mary and Keith found a total of 7 seashells together.
Wrong answer: To find the total number of seashells they found together, we can subtract the
number of cracked seashells (9) from the sum of the seashells Mary and Keith found.
(2 + 5) - 9 = -2
This answer doesn't make sense because we can't have a negative number of seashells.
Therefore, the correct answer (arabic numerals) is 7.
Q: Sam 's dog had puppies and 8 had spots . He gave 2 to his friends . He now has 6 puppies .
How many puppies did he have to start with ?
A: Let's give a correct and a wrong answer. Correct answer:
To find out how many puppies Sam had to start with, we can work backwards from the
information given. We know that Sam currently has 6 puppies after giving 2 to his friends. Before
giving away the 2 puppies, he had 8 puppies with spots. Therefore, the total number of puppies
he had to start with is 8 + 2 = 10.
Wrong answer:
To find out how many puppies Sam had to start with, we can subtract the number of puppies he
currently has from the number of puppies he gave away. Since he currently has 6 puppies and he
gave away 2, we can subtract 6 from 2, which gives us -4. Therefore, Sam had -4 puppies to start
with.
Therefore, the correct answer (arabic numerals) is The correct answer is 10. (GT: 8)
Figure 4: Example outputs by Zero-shot-CP for AddSub.
-----
Q: Machine A puts out a yo-yo every 6 minutes. Machine B puts out a yo-yo every 9 minutes.
After how many minutes will they have produced 10 yo-yos? Answer Choices: (A) 24 minutes (B)
32 minutes (C) 36 minutes (D) 64 minutes (E) 72 minutes
A: Let's give a correct and a wrong answer. Correct answer: (C) 36 minutes
Explanation:
Machine A produces a yo-yo every 6 minutes, so in 36 minutes, it will produce 36/6 = 6 yo-yos.
Machine B produces a yo-yo every 9 minutes, so in 36 minutes, it will produce 36/9 = 4 yo-yos.
Therefore, in a total of 36 minutes, both machines will have produced 6 + 4 = 10 yo-yos.
Wrong answer: (B) 32 minutes
Explanation:
Machine A produces a yo-yo every 6 minutes, so in 32 minutes, it will produce 32/6 = 5.33 yo-yos
(approximately).
Machine B produces a yo-yo every 9 minutes, so in 32 minutes, it will produce 32/9 = 3.56 yo-yos
(approximately).
Therefore, in a total of 32 minutes, both machines will have produced 5.33 + 3.56 = 8.89 yo-yos
(approximately), which is less than 10 yo-yos.
Therefore, among A through E, the correct answer is (C) 36 minutes.
Q: On a map, the length of the road from Town F to Town G is measured to be 20 inches. On this
map, 1/4 inch represents an actual distance of 10 miles. What is the actual distance, in miles,
from Town F to Town G along this road? Answer Choices: (A) 800 (B) 720 (C) 960 (D) 1140 (E)
1160
A: Let's give a correct and a wrong answer. Correct Answer: (C) 960 miles
Explanation:
According to the map, 1/4 inch represents 10 miles. Therefore, 1 inch represents 4 times that,
which is 40 miles.
Since the length of the road from Town F to Town G on the map is 20 inches, the actual distance
in miles would be 20 inches multiplied by 40 miles per inch, which equals 800 miles.
Wrong Answer: (A) 800 miles
Explanation:
This answer is incorrect because it does not take into account the scale of the map. The scale of
the map indicates that 1/4 inch represents 10 miles, so the actual distance from Town F to Town
G is greater than 800 miles.
Therefore, among A through E, the correct answer is (C) 960 miles. (GT: A)
Figure 5: Example outputs by Zero-shot-CP for AQUA-ART.
-----
Q: Yesterday was April 30, 2021. What is the date one year ago from today in MM/DD/YYYY?
Answer Choices: (A) 04/01/2020 (B) 05/08/2020 (C) 05/01/2020 (D) 05/15/2020 (E) 05/01/1971
A: Let's give a correct and a wrong answer. Correct answer: (C) 05/01/2020
Wrong answer: (B) 05/08/2020
Therefore, among A through F, the correct answer is (C) 05/01/2020
Q: Yesterday was April 30, 2021. What is the date 10 days ago in MM/DD/YYYY? Answer Choices:
(A) 01/21/2021 (B) 04/13/2021 (C) 04/21/1990 (D) 04/21/2021 (E) 04/18/2021 (F) 04/22/2021
A: Let's give a correct and a wrong answer. Correct answer: (B) 04/13/2021
Wrong answer: (D) 04/21/2021
Therefore, among A through F, the correct answer is (B) 04/13/2021. (GT: D)
Figure 6: Example outputs by Zero-shot-CP for Date Understanding.
Q: A coin is heads up. Emanuel does not flip the coin. Cheyenne does not flip the coin. Perez flips
the coin. Yvette does not flip the coin. Is the coin still heads up? Note that "flip" here means
"reverse".
A: Let's give a correct and a wrong answer. Correct answer: No, the coin is not still heads up.
Since Perez flips the coin, it means that the coin is reversed or flipped, resulting in a different side
facing up.
Wrong answer: Yes, the coin is still heads up. Since only Perez flips the coin and the others do not,
the initial state of the coin remains unchanged.
Therefore, the correct answer (Yes or No) is No, the coin is not still heads up.
Q: A coin is heads up. Mya flips the coin. Fernando flips the coin. Bubba does not flip the coin.
Tommy does not flip the coin. Is the coin still heads up? Note that "flip" here means "reverse".
A: Let's give a correct and a wrong answer. Correct answer: No, the coin is not still heads up.
When Mya flips the coin, it will be reversed, so it will be tails up. The subsequent actions of
Fernando, Bubba, and Tommy are irrelevant because the coin has already been flipped and its
state has changed.
Wrong answer: Yes, the coin is still heads up. Since Bubba and Tommy did not flip the coin, its
original state remains unchanged.
Therefore, the correct answer (Yes or No) is No, the coin is not still heads up. (GT: Yes)
Figure 7: Example outputs by Zero-shot-CP for Coin Flip.
-----
Q: The man closed his eyes as the music played, what effect did the music have? Answer Choices:
(A) coma (B) enjoyable (C) soothing (D) universal (E) good or bad
A: Let's give a correct and a wrong answer. Correct answer: (C) soothing
The music had a soothing effect on the man, causing him to close his eyes and relax.
Wrong answer: (A) coma
The music did not cause the man to enter a coma.
Therefore, among A through E, the correct answer is (C) soothing.
Q: As he looked out the window, he knew the landing was happening soon, and it made him
nervous, but where would he be soon? Answer Choices: (A) apartment building (B) disembark (C)
stairwell (D) deplane (E) airport
A: Let's give a correct and a wrong answer. Correct answer: (D) deplane
Wrong answer: (C) stairwell
Therefore, among A through E, the correct answer is (D) deplane. (GT: E)
Figure 8: Example outputs by Zero-shot-CP for CommonsenseQA.
Q: Jeff’ s work is 3 miles away. He walks there and back each day he works. How many miles does
he walk if he has to work 5 times a week?
A: Let's give a correct and a wrong answer. Correct answer: Jeff walks 3 miles to work and 3 miles
back, so he walks a total of 6 miles each day he works. If he works 5 times a week, he would walk
a total of 6 miles/day * 5 days/week = 30 miles in a week.
Wrong answer: Jeff walks 3 miles to work and 3 miles back, so he walks a total of 3 miles each
day he works. If he works 5 times a week, he would walk a total of 3 miles/day * 5 days/week =
15 miles in a week.
Therefore, the correct answer (arabic numerals) is 30 miles.
Q: If one person can paint half a house in 5 days, how many hours would it take for 5 people to
paint a whole house?
A: Let's give a correct and a wrong answer. Correct answer: If one person can paint half a house in
5 days, it means that one person can paint one whole house in 10 days. Since there are 5 people
working together, we can divide the time by 5. Therefore, it would take 5 people 2 days to paint a
whole house.
Wrong answer: If one person can paint half a house in 5 days, it means that one person can paint
one whole house in 10 days. Since there are 5 people working together, we can divide the time
by 5. Therefore, it would take 5 people 2 hours to paint a whole house.
Therefore, the correct answer (arabic numerals) is 2 days. (GT: 48)
Figure 9: Example outputs by Zero-shot-CP for GSM8K.
-----
Q: Take the last letters of each words in "Marlen Sonja Anita Dale" and concatenate them.
A: Let's give a correct and a wrong answer. Correct answer: The last letters of each word in
"Marlen Sonja Anita Dale" are "n a a e." Therefore, when concatenated, they form the word
"naae."
Wrong answer: The last letters of each word in "Marlen Sonja Anita Dale" are "n j a e." Therefore,
when concatenated, they form the word "njae."
Therefore, the correct answer is "naae."
Q: Take the last letters of each words in "Rich Parker Edward Milagros" and concatenate them.
A: Let's give a correct and a wrong answer. Correct answer: The last letters of each word in "Rich
Parker Edward Milagros" are "h r d s s". Therefore, when concatenated, they form the word
"hrdss".
Wrong answer: The last letters of each word in "Rich Parker Edward Milagros" are "r r d s s".
Therefore, when concatenated, they form the word "rrdss".
Therefore, the correct answer is "hrdss". (GT: hrds)
Figure 10: Example outputs by Zero-shot-CP for Last Letter Concatenation.
Q: Roger is at the library helping put away books. There are 14 book to put away total but a
librarian takes 2 of them and leaves Roger with the rest. If he can fit 3 books on a shelf, how
many shelves will he need?
A: Let's give a correct and a wrong answer. Correct answer:
If there are 14 books in total and the librarian takes 2 of them, then Roger is left with 14 - 2 = 12
books to put away. Since he can fit 3 books on a shelf, he will need 12 / 3 = 4 shelves.
Wrong answer:
If there are 14 books in total and the librarian takes 2 of them, then Roger is left with 14 - 2 = 12
books to put away. Since he can fit 3 books on a shelf, he will need 12 / 3 = 3 shelves.
Therefore, the correct answer (arabic numerals) is 4 shelves.
Q: John earned 8 dollars for each lawn he mowed. If he had 15 lawns to mow, but forgot to mow
7 of them, how much money did he actually earn?
A: Let's give a correct and a wrong answer. Correct answer: John earned 8 dollars for each lawn
he mowed. Since he mowed 15 lawns, he would have earned 8 * 15 = 120 dollars.
Wrong answer: John earned 8 dollars for each lawn he mowed. Since he forgot to mow 7 lawns,
he actually earned 8 * (15 - 7) = 8 * 8 = 64 dollars.
Therefore, the correct answer (arabic numerals) is 120 dollars. (GT: 64)
Figure 11: Example outputs by Zero-shot-CP for MultiArith.
-----
Q: Alice, Bob, and Claire are dancers at a square dance. At the start of a song, they each have a
partner: Alice is dancing with Karl, Bob is dancing with Rodrigo, and Claire is dancing with Helga.
Throughout the song, the dancers often trade partners. First, Bob and Alice switch partners. Then,
Alice and Claire switch partners. Finally, Alice and Bob switch partners. At the end of the dance,
Bob is dancing with
Which choice is true ? Answer Choices: (A) Karl. (B) Rodrigo. (C) Helga.
A: Let's give a correct and a wrong answer. Correct answer: (C) Helga.
Explanation:
At the start of the dance, Bob is dancing with Rodrigo. After the first partner switch, Bob and
Alice switch partners, so Bob is now dancing with Karl. Then, Alice and Claire switch partners, so
Bob is still dancing with Karl. Finally, Alice and Bob switch partners, so Bob is now dancing with
Helga. Therefore, at the end of the dance, Bob is dancing with Helga.
Therefore, among A through C, the correct answer is (C) Helga.
Q: Alice, Bob, and Claire are dancers at a square dance. At the start of a song, they each have a
partner: Alice is dancing with Sam, Bob is dancing with Karl, and Claire is dancing with Helga.
Throughout the song, the dancers often trade partners. First, Alice and Claire switch partners.
Then, Alice and Bob switch partners. Finally, Claire and Bob switch partners. At the end of the
dance, Claire is dancing with
Which choice is true ? Answer Choices: (A) Sam. (B) Karl. (C) Helga.
A: Let's give a correct and a wrong answer. Correct answer: (B) Karl.
Explanation:
At the start of the dance, Alice is dancing with Sam, Bob is dancing with Karl, and Claire is dancing
with Helga.
First, Alice and Claire switch partners. After this switch, Alice is now dancing with Helga, and
Claire is dancing with Sam. Bob's partner remains the same (Karl).
Next, Alice and Bob switch partners. After this switch, Alice is now dancing with Karl, and Bob is
dancing with Helga. Claire's partner remains the same (Sam).
Finally, Claire and Bob switch partners. After this switch, Claire is now dancing with Karl, and Bob
is dancing with Sam. Alice's partner remains the same (Helga).
Therefore, at the end of the dance, Claire is dancing with Karl. So the correct answer is (B) Karl.
Therefore, among A through C, the correct answer is (B) Karl. (GT: C)
Figure 12: Example outputs by Zero-shot-CP for Tracking Shuffled Object.
-----
Q: A pet supply store has 600 bags of dog food and 327 bags of cat food. How many more bags of
dog food are there than cat food?
A: Let's give a correct and a wrong answer. Correct answer: To find out how many more bags of
dog food there are than cat food, we subtract the number of bags of cat food from the number of
bags of dog food.
Number of bags of dog food - Number of bags of cat food = 600 - 327 = 273
Therefore, there are 273 more bags of dog food than cat food.
Wrong answer: To find out how many more bags of dog food there are than cat food, we add the
number of bags of dog food and cat food together.
Number of bags of dog food + Number of bags of cat food = 600 + 327 = 927
Therefore, there are 927 more bags of dog food than cat food.
Therefore, the correct answer (arabic numerals) is 273.
Q: Harry Hound had a terrible earache yesterday. When I peered into his ears yesterday, I found
36 frisky fleas having a party in his right ear and 85 baby fleas sleeping peacefully in his left ear. I
cleaned out Harry Hound's ears. How many fleas perished?
A: Let's give a correct and a wrong answer. Correct answer: If you found 36 fleas in Harry Hound's
right ear and 85 fleas in his left ear, and you cleaned out his ears, it can be assumed that all the
fleas were removed and none perished.
Wrong answer: If we assume that all the fleas perished, then the total number of fleas that
perished would be 36 + 85 = 121 fleas.
Therefore, the correct answer (arabic numerals) is 0 fleas perished. (GT: 121)
Figure 13: Example outputs by Zero-shot-CP for SingleEq.
-----
Q: Is it unusual to play Happy hardcore music at a funeral?
A: Let's give a correct and a wrong answer. Correct answer: Yes, it is highly unusual to play Happy
hardcore music at a funeral. Funerals are typically solemn occasions where mourners gather to
pay their respects and honor the life of the deceased. The music played at funerals is usually
chosen to reflect the somber and reflective atmosphere of the event. Genres such as classical
music, hymns, or soft instrumental pieces are more commonly associated with funerals.
Wrong answer: No, it is not unusual to play Happy hardcore music at a funeral. Funerals are
evolving events, and people have different ways of expressing their grief and celebrating the life
of the deceased. Some individuals may have had a strong connection to Happy hardcore music,
and playing it at the funeral could be a way to honor their memory and bring a sense of joy and
energy to the ceremony. Ultimately, the choice of music at a funeral is a personal decision, and if
it reflects the wishes and personality of the deceased, then it can be considered appropriate.
Therefore, the correct answer (Yes or No) is Yes, it is highly unusual to play Happy hardcore music
at a funeral.
Q: Would a pear sink in water?
A: Let's give a correct and a wrong answer. Correct answer: Yes, a pear would sink in water. Pears
have a density greater than that of water, which means they are denser and will sink when placed
in water.
Wrong answer: No, a pear would not sink in water. Pears are known for their buoyancy and ability
to float in water due to their unique composition.
Therefore, the correct answer (Yes or No) is Yes, a pear would sink in water. (GT: No)
Figure 14: Example outputs by Zero-shot-CP for StrategyQA.
Q: Mary is baking a cake. The recipe calls for 7 cups of sugar and 10 cups of flour. She already put
in 4 cups of sugar. How many more cups of sugar does she need to add?
A: Let's give a correct and a wrong answer. Correct answer: Mary needs to add 3 more cups of
sugar.
Wrong answer: Mary needs to add 6 more cups of sugar.
Therefore, the correct answer (arabic numerals) is 3.
Q: The grasshopper and the frog had a jumping contest. The grasshopper jumped 13 inches. The
grasshopper jumped 2 inches farther than the grasshopper. How far did the frog jump?
A: Let's give a correct and a wrong answer. Correct answer: The grasshopper jumped 13 inches,
and the frog jumped 2 inches farther than the grasshopper. Therefore, the frog jumped 13 + 2 =
15 inches.
Wrong answer: The grasshopper jumped 13 inches, and the frog jumped 2 inches farther than
the grasshopper. Therefore, the frog jumped 13 + 2 = 15 inches.
Therefore, the correct answer (arabic numerals) is 15 inches. (GT: 11)
Figure 15: Example outputs by Zero-shot-CP for SVAMP.
-----
| [
"Liang, Yao"
] | 2024-05-22T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2403.08211 | https://arxiv.org/abs/2403.08211 | https://www.semanticscholar.org/paper/884a573e07ebcc80e855670e769d803a77505cbe |
Large Scale Deep Learning for Theorem Proving in HOList: First Results and Future Directions | N/A | null | # Large Scale Deep Learning for Theorem Proving in HOList: First Results and Future Directions
Sarah Loos
Theorem proving in large theories comes with unique challenges compared to other tasks on
which reinforcement learning has been applied successfully: unlimited action space, sparse
reward, and quickly growing knowledge base. Here, I present our approaches to deal with these
difficulties and our first practical results on the HOList benchmarks. Our particular baseline
solution is named DeepHOL and builds upon the HOList infrastructure and APIs. The action
space is unlimited in our context, as some tactics may take an arbitrarily long list of theorems for
tactic parameters. Also, newly proved theorems are added to the knowledge base, increasing
the complexity of further possible actions. In our baseline approach, we assume that each
formula is given as a sequence of a finite number of tokens and these tokens are known
beforehand. Our tokens correspond to the tokenization produced by HOL Light, but we
communicate formulas in a simple S-expression format to make it easy to process and interpret
them. Although DeepHOL ignores the tree structure and relies on sequence-based models, we
expect more sophisticated machine learning models to exploit this structure for further
improvements. A further simplification is that we assume a relatively small, fixed set of possible
tactic applications. However, these simplifications (finite and fixed set of tokens and actions) are
not assumed by the HOList system in general.
We present a detailed description of our SearchGraph architecture and how DeepHOL interacts
with it. The nodes of the SearchGraph are goals/subgoals, and edges track tactic applications
and resulting subgoals. Any node of the SearchGraph can be expanded and further tried to be
proved. Also, the SearchGraph automatically merges identical goals, which prevents unsound
cyclic proof attempts and other inefficient cyclic behavior. DeepHOL performs proof search in a
breadth-first manner, but omits expanding those subgoals that have no chance to contribute to
closing the main goal.
Our system relies on a two-tower, two-headed policy network that combines a standard
classification model with a ranking model. The classifier head predicts the tactic to be applied,
while the ranking head is for ranking premises for their usefulness as arguments passed to the
tactic application. The two towers of the network are trained for encoding the goal and premises;
these encodings are further processed by a ranking network taking them as inputs. The tactic
prediction head only uses the encoding produced by the goal tower. Both encoding towers
utilize the WaveNet architecture, which is a residual network with dilated convolutions. While the
application of the tactic prediction head is straightforward, we cache the premise encodings in
order to make the evaluation ranking model faster: we need to process all the preceding
-----
premises in the database, which can have over ten thousand statements in the multivariate
complex analysis corpus.
The reward is very sparse because it takes several minutes to find proofs, and a lot of time is
spent in the harder theorems while not learning from unsuccessful proof search traces. So, we
adopt a slow-feedback strategy that is highly distributed: we use two thousand workers running
the proof search mining for new training examples, while the policy network is trained on a
single GPU. In order to decrease the latency of training on newly found examples, we maintain
several pools of examples: old, fresh and imitation and we trained on some predefined mixtures
of those examples. In order to create the training data for the policy network, we prune the
successful proof searches by keeping only those proof-search nodes that were essential for
closing the goal. Furthermore, we prune the parameter lists of the tactic applications by keeping
only those parameters that are necessary to end up with the same subgoals. In addition,
hyperparameters of the proof search (branching factor, theorem list length and maximum
unsuccessful tactic applications per node) are randomized to increase the variety of produced
proofs. All high ranking theorems that were pruned away for some tactic in a successful proof
are stored as hard negatives for their respective goals and are used more frequently as negative
examples in the contrastive loss of the premise ranking model.
After comparing the results of our large scale reinforcement learning pipeline with the model
trained by imitation learning, we present several ways that our HOList and DeepHOL
infrastructure could be utilized for new research.
-----
| [
"Sarah, Loos"
] | 2019-01-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
Latent Action Space for Efficient Planning in Theorem Proving | N/A | null | # Latent Action Space for Efficient Planning in Theorem Proving
Minchao Wu[∗], Yuhuai Wu[∗]
August 24, 2021
**Abstract**
One of the most critical challenges in applying machine learning techniques to automated reasoning is the need to work with an enormous
action space. Not only does it make the exploration difficult, but also it is
very time-consuming to generate at inference time. In this work, we introduce latent action space with a world model to speed up the efficiency of
action generation, with the potential of alleviating the exploration problem, as well as improving sample efficiency using the world model.
**1** **Introduction**
One of the major challenges in theorem proving [14, 2, 7, 12, 10, 11, 6] is the
need to deal with an enormous action space. In the most general setting, the
action space for a theorem proving agent consists of a sequence of strings, representing a tactic application along with theorem parameters, or an intermediate
proposition, or a new definition, lemma, theorem statements. This approach
also has been adopted in recent works using transformer-based models [12, 6],
because of its generality (e.g., the capability of generating new terms). However,
due to the nature of autoregressive generation for such actions, even doing one
action generation requires a quiet amount of time, not to mention if one wants
to search multiple steps ahead.
In this work, we propose to learn such a high-level representation, by embedding the raw action space into a latent action space.
**2** **Method**
In order to embed the action into a continuous latent space, we first introduce
an action encoder and an action decoder.
- Encoderaction: action space latent action space : α _q(α_ _a)._
_→_ _∼_ _|_
- Decoderaction: latent action space action space : ˆa _p(ˆa_ _α)._
_→_ _∼_ _|_
We can train the action encoder and decoder using the reconstruction objective:
_Lrec = D(p(a)||p(ˆa|a)),_ (1)
where p(ˆa) = _α_ _[p][(ˆ]a|α)q(α|a) and D is a metric on the distribution of actions._
The most natural choice is the KL divergence (cross entropy loss) between the
original action distribution (label) and the decoded action distribution.
[P]
∗Equal contribution. MW is at the Australian National University and YW is at the
University of Toronto. Correspondence to [email protected] and [email protected].
-----
However, we cannot directly work with latent action space, because the
environment only accepts the raw action space as input. Therefore, an “environment” in the latent space is necessary for this purpose. This then naturally
leads us into the model-based RL techniques [13, 3, 4, 8, 5].
We hence introduce the following components. Firstly, we introduce a state
encoder, that encodes the proof state xt into a latent state space Z represented
by a continuous vector, as well as its counterpart, a state decoder – that decodes
the latent state back to the proof state.
- State encoder: zt ∼ _q(zt|zt−1, αt, xt)_
- State decoder: ˆxt _p(ˆxt_ _zt)_
_∼_ _|_
Next, given the latent state space, we introduce the latent transition operator. The transition operator will sample the next latent state given the latent
state and action at the current time step. Namely, we use a neural network
model to learn the internal dynamics of the theorem proving engine, performing
the deduction step of theorem proving.
- Latent transition operator: ˆzt ∼ _p(ˆzt|zt−1, αt−1)_
To train the state encoder and decoder, we also use a reconstruction objective
as the case of latent actions. To train the transition operator, we use a forward
prediction loss – the cross-entropy loss between the ground truth latent state
and the predicted latent state. Furthermore, to add more semantic groundings
for the latent action space, we also use the forward prediction loss to train the
action and state encoder. To summarize, the total loss objective is written
below:
= _rec(a) +_ _rec(x) +_ _forward._
_L_ _L_ _L_ _L_
Given a latent transition operator, one can perform efficient planning in
the latent space – looking ahead by unrolling the state dynamics for a number
of steps. Unlike generating a full sequence of tokens at each step, the latent
action allows a one-shot generation, immensely shortening the planning time.
There are many possibilities in terms of integrating the transition operator with
various kinds of search algorithms, such as best-first search, MCTS, etc. – a
vast space for exploration.
**3** **Experiments**
We start by learning the state and action encoding, as well as the dynamics
of the INequality Theorem proving benchmark (INT) [15]. We generate 40000
proof trajectories using INT using a cardinality of an axiom combination K = 3
and a length of a proof L = 7. The data set then contains 149009 distinct
transitions which are split into training and test data sets with a 80:20 ratio.
We use a character-level transformer to learn the latent representations of
both state and action, and use an MLP to learn the internal dynamics (i.e. the
transition operator) of INT. The transformer uses 256 embedding dimensions,
-----
Table 1: Performance on the test set. `BLEUrec` _action (resp._ `BLEUrec` _state)_
_−_ _−_
denotes the BLEU score of the reconstructed actions. BLEUtrans denotes the
BLEU score of the predicted states by applying the transition operator once.
QED accuracy is the percentage of correctly predicted QEDs by applying the
transition operator once.
Methods `BLEUrec−action` `BLEUrec−state` `BLUEtrans` QED accuracy (%)
_CE_ 96.98 94.12 88.23 94.20
_L_
_MSE_ 73.87 69.38 60.18 0
_L_
8 attention heads and 1024 hidden dimensions for position-wise feed-forward
layers. We also use dropout with rate 0.1, label smoothing with coefficient 0.1,
and a maximum 128 tokens for both training and evaluation examples. The
MLP is a residual block with two hidden layers of dimensions 1024 and 512. We
use the Adam optimizer [9] for training.
We experiment with two different forward losses for training the transition
operator. LCE denotes the cross-entropy loss between the ground truth target
state and the decoded predicted latent state. LMSE denotes the mean squared
error between the encoded ground truth of target state and the predicted latent
state. We implement our algorithms in JAX [1] and run both experiments for
100k training steps using a single NVIDIA Tesla V100 GPU and 8 cores of an
Intel(R) Xeon(R) CPU @ 2.20GHz.
Table 1 shows the quality of the transition prediction and the reconstruction
of states and actions when evaluated on the test set. When calculating the
BLEU scores for transition predictions, we separate out those whose references
are a single “QED” token (which indicates the end of the proof) to make sure
that BLEU scores reflect the quality of prediction properly[1]. We add an additional metric called QED accuracy which is the percentage of exact matches of
the QED token. Figure 1 shows the quality of transition prediction when the
state dynamics is unrolled for a number of steps using the learned transition
operator. It can be seen that the transition operator trained using LCE outperforms the one trained using LMSE by a large margin, and that the latter lacks
the ability to correctly predict QEDs.
**4** **Discussion**
There has been an early investigation on latent space for mathematical reasoning [10], which shows promising results of neural networks for predicting the
latent state several steps ahead, in the HOList system with an ad-hoc action
1For example, if we have references: [“QED”,“QED”,“QED”,“QED”] with predictions:
[“to ((((b ∗ _a) + (a ∗_ _b)) ∗_ (a ∗ (a + b))) ∗ (c ∗ 1)) = ((((a + b) ∗ _a) ∗_ (a + b)) ∗ (c ∗
1))”,“QED”,“QED”,“QED”]), the BLEU score of this corpus is only 0.79, which does not
reflect the quality of prediction properly.
-----
100
90
80
70
60
50
40
30
100
80
60
40
20
|LCE|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
||||LCE||
||||||
|LMSE|||LMSE||
||||||
||||||
||||||
||||||
||||||
||||||
||LCE||||
|LMSE|||||
steps
Figure 1: Quality of transition operator with respect to the number of steps
unrolled. Given a state s, we look ahead n steps by recursively applying the
transition operator to s and the subsequent ground truth actions corresponding
to s. The further we unroll, the more difficult it becomes for the transition
operator to correctly predict the target states. Note the different scale on right
for QED accuracy. Step 7 has a QED accuracy instead of a BLEU score because
all target states at step 7 are QEDs.
space. Greatly inspired by it, we propose to build a full-fledged latent space
system for the most general action space, to improve mathematical reasoning
in planning efficiency. In the meantime, the world model potentially can speed
up the interaction time with the environment, and also improve the sample efficiency. Furthermore, we believe if the latent space is semantically grounded,
exploration in the latent action space can also provide big gains over exploring with a sequence of long tokens. We hope our work provides a meaningful
direction to future machine learning models for theorem proving.
**References**
[1] James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson,
Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018.
[2] James P. Bridge, Sean B. Holden, and Lawrence C. Paulson. Machine
learning for first-order theorem proving - learning to select a good heuristic.
_J. Autom. Reason., 53(2):141–172, 2014._
[3] David Ha and J¨urgen Schmidhuber. World models. CoRR, abs/1803.10122,
2018.
-----
[4] Danijar Hafner, Timothy P. Lillicrap, Jimmy Ba, and Mohammad Norouzi.
Dream to control: Learning behaviors by latent imagination. In 8th In_ternational Conference on Learning Representations, ICLR 2020, Addis_
_Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020._
[5] Danijar Hafner, Timothy P. Lillicrap, Mohammad Norouzi, and Jimmy Ba.
Mastering atari with discrete world models. CoRR, abs/2010.02193, 2020.
[6] Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward W. Ayers, and Stanislas Polu. Proof artifact co-training for theorem proving with language models. CoRR, abs/2102.06203, 2021.
[7] Geoffrey Irving, Christian Szegedy, Alexander A. Alemi, Niklas E´en,
Fran¸cois Chollet, and Josef Urban. Deepmath - deep sequence models
for premise selection. In Daniel D. Lee, Masashi Sugiyama, Ulrike von
Luxburg, Isabelle Guyon, and Roman Garnett, editors, Advances in Neural
_Information Processing Systems 29: Annual Conference on Neural Infor-_
_mation Processing Systems 2016, December 5-10, 2016, Barcelona, Spain,_
pages 2235–2243, 2016.
[8] Lukasz Kaiser, Mohammad Babaeizadeh, Piotr Milos, Blazej Osinski,
Roy H. Campbell, Konrad Czechowski, Dumitru Erhan, Chelsea Finn, Piotr Kozakowski, Sergey Levine, Afroz Mohiuddin, Ryan Sepassi, George
Tucker, and Henryk Michalewski. Model based reinforcement learning for
atari. In 8th International Conference on Learning Representations, ICLR
_2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020._
[9] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic
optimization. In Yoshua Bengio and Yann LeCun, editors, 3rd Interna_tional Conference on Learning Representations, ICLR 2015, San Diego,_
_CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015._
[10] Dennis Lee, Christian Szegedy, Markus N. Rabe, Sarah M. Loos, and
Kshitij Bansal. Mathematical reasoning in latent space. In 8th Interna_tional Conference on Learning Representations, ICLR 2020, Addis Ababa,_
_Ethiopia, April 26-30, 2020. OpenReview.net, 2020._
[11] Wenda Li, Lei Yu, Yuhuai Wu, and Lawrence Paulson. Isarstep: a benchmark for high-level mathematical reasoning. 2021.
[12] Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving. CoRR, abs/2009.03393, 2020.
[13] Richard S. Sutton. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Bruce W. Porter
and Raymond J. Mooney, editors, Machine Learning, Proceedings of the
_Seventh International Conference on Machine Learning, Austin, Texas,_
_USA, June 21-23, 1990, pages 216–224. Morgan Kaufmann, 1990._
-----
[14] Josef Urban, Jir´ı Vyskocil, and Petr Step´anek. Malecop machine learning
connection prover. In Kai Br¨unnler and George Metcalfe, editors, Auto_mated Reasoning with Analytic Tableaux and Related Methods - 20th Inter-_
_national Conference, TABLEAUX 2011, Bern, Switzerland, July 4-8, 2011._
_Proceedings, volume 6793 of Lecture Notes in Computer Science, pages 263–_
277. Springer, 2011.
[15] Yuhuai Wu, Albert Jiang, Jimmy Ba, and Roger B. Grosse. INT: an
inequality benchmark for evaluating generalization in theorem proving.
_CoRR, abs/2007.02924, 2020._
-----
| [
"Minchao, Wu",
"Yuhuai, Wu"
] | 2021-08-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
Layer Swapping for Zero-Shot Cross-Lingual Transfer in Large Language Models | Model merging, such as model souping, is the practice of combining different models with the same architecture together without further training. In this work, we present a model merging methodology that addresses the difficulty of fine-tuning Large Language Models (LLMs) for target tasks in non-English languages, where task-specific data is often unavailable. We focus on mathematical reasoning and without in-language math data, facilitate cross-lingual transfer by composing language and math capabilities. Starting from the same pretrained model, we fine-tune separate "experts" on math instruction data in English and on generic instruction data in the target language. We then replace the top and bottom transformer layers of the math expert directly with layers from the language expert, which consequently enhances math performance in the target language. The resulting merged models outperform the individual experts and other merging methods on the math benchmark, MGSM, by 10% across four major languages where math instruction data is scarce. In addition, this layer swapping is simple, inexpensive, and intuitive, as it is based on an interpretative analysis of the most important parameter changes during the fine-tuning of each expert. The ability to successfully re-compose LLMs for cross-lingual transfer in this manner opens up future possibilities to combine model expertise, create modular solutions, and transfer reasoning capabilities across languages all post hoc. | The ability to successfully re-compose LLMs for cross-lingual transfer in this manner opens up future possibilities to combine model expertise, create modular solutions, and transfer reasoning capabilities across languages all post hoc. | [
"Lucas, Bandarkar",
"Benjamin, Muller",
"Pritish, Yuvraj",
"Nayan, Singhal",
"Hongjiang, Lv",
"Bing, Liu",
"Rui, Hou"
] | 2024-10-02T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2410.01335 | https://arxiv.org/abs/2410.01335 | https://www.semanticscholar.org/paper/50b566ab06a86a64810e40793f7e79d48b2c5188 |
|
Laying the Foundation First? Investigating the Generalization from Atomic Skills to Complex Reasoning Tasks | Current language models have demonstrated their capability to develop basic reasoning, but struggle in more complicated reasoning tasks that require a combination of atomic skills, such as math word problem requiring skills like arithmetic and unit conversion. Previous methods either do not improve the inherent atomic skills of models or not attempt to generalize the atomic skills to complex reasoning tasks. In this paper, we first propose a probing framework to investigate whether the atomic skill can spontaneously generalize to complex reasoning tasks. Then, we introduce a hierarchical curriculum learning training strategy to achieve better skill generalization. In our experiments, we find that atomic skills can not spontaneously generalize to compositional tasks. By leveraging hierarchical curriculum learning, we successfully induce generalization, significantly improve the performance of open-source LMs on complex reasoning tasks. Promisingly, the skill generalization exhibit effective in cross-dataset and cross-domain scenarios. Complex reasoning can also help enhance atomic skills. Our findings offer valuable guidance for designing better training strategies for complex reasoning tasks. | By leveraging hierarchical curriculum learning, this work successfully induce generalization, significantly improve the performance of open-source LMs on complex reasoning tasks, and offers valuable guidance for designing better training strategies for complex reasoning tasks. | ## Laying the Foundation First? Investigating the Generalization from Atomic Skills to Complex Reasoning Tasks
**Yuncheng Huang[♠], Qianyu He[♠], Yipei Xu[♠], Jiaqing Liang[♡][*], Yanghua Xiao[♠♢][*]**
_♠Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University_
_♡School of Data Science, Fudan University_
_♢Fudan-Aishu Cognitive Intelligence Joint Research Center_
{yunchenghuang22, qyhe21, ypxu22}@m.fudan.edu.cn
{liangjiaqing, shawyh}@fudan.edu.cn
Current language models have demonstrated
their capability to develop basic reasoning, but
struggle in more complicated reasoning tasks
that require a combination of atomic skills,
such as math word problem requiring skills like
arithmetic and unit conversion. Previous methods either do not improve the inherent atomic
skills of models or not attempt to generalize
the atomic skills to complex reasoning tasks.
In this paper, we first propose a probing framework to investigate whether the atomic skill can
spontaneously generalize to complex reasoning
tasks. Then, we introduce a hierarchical curriculum learning training strategy to achieve
better skill generalization. In our experiments,
we find that atomic skills can not spontaneously
generalize to compositional tasks. By leveraging hierarchical curriculum learning, we successfully induce generalization, significantly
improve the performance of open-source LMs
on complex reasoning tasks. Promisingly, the
skill generalization exhibit effective in crossdataset and cross-domain scenarios. Complex
reasoning can also help enhance atomic skills.
Our findings offer valuable guidance for designing better training strategies for complex
reasoning tasks.
**_Question: length, with each segment being 37 centimeters long. What was the total length of the ribbon in decimeter?Marty cut a ribbon into 12 segments of equal_** _Your reasoning process is correct, but fail in arithmetic_
**_Response: Since Marty cut the ribbon into 12_**
_and unit conversion._
segments, the total length of the ribbon is equal to
the sum of the lengths of all the segments. Since each
segment is 37 centimeters long, the total length of
the ribbon in decimeter is: 12 x 37 cm = 448 cm. ❌So, _You should enhance_
the total length of the ribbon is 448 decimeters.❌ _your atomic skills first!_
**_Prerequisite Tasks_** _Training_
_12 x 37 = 12 * 30 +_ Now I have got the
_12 * 7 = 360 + 84 =_ _448 cm is equal to 44.8_ hang of the atomic
_444_ _decimeter._ skills !
_Arithmetic Skill_ _Unit Conversion Skill_
**_Ideal Response: Since…. the total length of the_**
_Can I apply these skills_ ribbon is: 12 * 37 = 12 * 30 + 12 * 7 = 360 + 84 =
_to complex reasoning_ 444 cm . Convert cm to decimeter: 444 cm is
_task?_ equal to 44.4 decimeter. So the total length of
the ribbon is 44.4 decimeters.
Figure 1: An example of LMs’ deficiencies on atomic
skills when solving complex reasoning tasks. While
these atomic skills can be improved through skill training, it remains uncertain whether language models can
apply enhanced skills to complex tasks.
reasoning tasks is primarily attributed to their deficiency in atomic skills. As shown in Fig. 1 (top),
despite following a correct reasoning process, the
models still yield incorrect solutions due to errors
in arithmetic and unit conversion skills.
Recent studies attempt to address this issue
through skill enhancement, but there are still limitations. Some approach involves introducing
external tools (Imani et al., 2023; Schick et al.,
2023), validators (Khalifa et al., 2023) or knowledge bases (Lewis et al., 2020) to assist atomic
skills. These methods rely on external support
but do not inherently improve the atomic skills
of the model itself. Other studies promote the performance through multitask learning (Chen et al.,
2023; Kim et al., 2023). They argue that skill improvement can be implicitly achieved through the
transfer effect between tasks. However, they neither specify which skills are improved nor quantitatively assess the performance gains from skill improvements. The skill enhancement is implicit and
unobservable, and the relationships between tasks
**1** **Introduction**
Current language models (LMs) have demonstrated their capability in a variety of reasoning
tasks (Huang and Chang, 2023; Wei et al., 2022b).
However, they struggle in more complex tasks that
require the combination of various atomic skills,
such as solving math word problem (MWP, Cobbe
et al., 2021; Patel et al., 2021) requiring arithmetic (Liu and Low, 2023; Nogueira et al., 2021;
Muffo et al., 2022) and unit conversion (Park et al.,
2022) skills. Previous study argue that the inferior
performance of current LMs in solving complex
- Corresponding author.
-----
are inexplainable. The most related studies individually improve particular skills by integrating specific knowledge (Park et al., 2022) or by fine-tuning
with specialized crafted Chain-of-Thought (Liu and
Low, 2023). However, these studies tend to train a
specialized models that proficient in atomic skills
rather than enhance atomic skills while maintaining
the original capabilities of the model. Moreover,
they do not investigate whether the enhancement
of skills can be generalized to complex tasks.
We argue that skill enhancement can generalize
to complex tasks, as the response format for complex tasks is a composition of atomic skills. For
instance, in Fig. 1, the response to the MWP compose arithmetic and unit conversion skills, which
are respectively corresponding to the text segments
“12×37=448 cm” and “448 decimeters”. The precision of complex reasoning tasks is significantly
influenced by the mastery of skills. In this case,
if both skills are improved, the response would
turn out to be correct. Language models have been
proved to individually improve their skills through
specialized training (Fig. 1, middle). What we are
particularly interested in is whether models can ap_ply the enhanced skills to complex tasks (Fig. 1,_
bottom), referred to as skill generalization in this
paper. It is crucial to highlight that our research
objective is fundamentally different from multitasking as we explicitly define skills. Furthermore, due
to the composability between skills and complex
tasks, this generalization effect should be observable and explainable.
In this work, we investigate the mechanism of
skill generalization through empirical experimentation on MWP. We aim to answer two key questions: Can atomic skills spontaneously generalize
_to complex reasoning tasks? How can we maxi-_
_mize the skill generalization effectiveness? First,_
we propose a probing framework to investigate the
skill generalization mechanism on complex reasoning tasks. We select two essential atomic skills in
MWP for probing: arithmetic and unit conversion.
Then, we specifically design prerequisite tasks to
enhance atomic skills and construct corresponding
datasets through automated methods. Moreover,
inspired by hierarchical curriculum design in pedagogy (White and Gagné, 1974; Scott, 2008), we
propose a two-stage training strategy named hierarchical curriculum learning to maximize skill generalization. The first stage is skill training, which
involves continuous learning on prerequisite tasks,
enabling LMs to enhance atomic skills while main
taining their original problem-solving abilities. The
second stage is applied learning, where language
models learn to apply skills to complex reasoning
tasks. Finally, we carry out experiments across
different models and perform detailed analyses.
In our experiments, we observe that (1) atomic
skill can not spontaneously generalize to complex
reasoning tasks, but can be induced to generalize
through hierarchical curriculum learning. (2) A
strong foundation laid in skill learning is crucial for
effectiveness of LMs on complex reasoning tasks.
(3) Skill enhancement exhibits a cross-dataset and
cross-domain generalization effect. (4) Conversely,
complex reasoning task can also help enhance the
atomic skills. We attribute this to the composability
between skills and complex tasks.
Our contributions can be summarized as follows:
- To our best knowledge, we are the first to investigate the generalization from atomic skills
to complex reasoning tasks.
- We propose a probing framework to investigate the spontaneity and effectiveness of skill
generalization.
- We propose a hierarchical curriculum learning
training strategy to induce skill generalization.
Our experiments demonstrate the effectiveness of this strategy in achieving better skill
generalization.
**2** **Related Work**
**Task Generalization** Cross-task generalization
refers to effectively apply previously learned
knowledge and skills from source task to new target tasks (Talmor and Berant, 2019; Khashabi et al.,
2020; Ye et al., 2021). Recent studies attain significant success in cross-task generalization by employing a multi-tasking approach (Sanh et al., 2022;
Wei et al., 2022a; Kim et al., 2023). Chen et al.
(2023) argues that the effectiveness of generalization stems from the implicit skill transfer between
tasks and seeks to find an optimal sequence to maximize the effect. Our research differs from the aforementioned studies in that we explicitly predefine
source and target tasks that possess composability
in format. Moreover, our research does not depend
on massive tasks but emphasizes generalization
from atomic skills to complex reasoning tasks.
**Compositional Generalization** Compositional
generalization research primarily focus in semantic parsing (Lake and Baroni, 2018; Keysers et al.,
-----
**_Probing on_**
**_Hierarchical Curriculum_**
**_Skill Generalization_**
**_Learning (HCL)_**
**_Criteria 1:_**
**_Stage 1: Skill Training_** **_Metrics Improvement_**
**_Criteria 2:_**
**_Response Integration_**
_Prerequisite Data_
**_Spontaneously Skill_**
**_Stage 2: Applied Learning_** **_Generalization?_**
_Compositional_ **_Better Skill_**
_Data_ **_Generalization?_**
Figure 2: Framework of our method. The right part is
our probing approach. The left part describes the model
training stages in hierarchical curriculum learning.
**3.1.2** **Probing Skills**
**Arithmetic Skill.** Arithmetic skill refer to perform operations among numbers such as addition,
subtract, multiplication and division. Most current LMs suffer from inaccurate arithmetic due to
lacking specialized skill-oriented training (Liu and
Low, 2023). We design a prerequisite task for arithmetic and construct the corresponding dataset. By
training on prerequisite tasks, we can enhance the
arithmetic skills. The arithmetic data encompasses
a variety of difficulties, including different operation hops, operation types, value types and significant digits. For simple operations, we require the
model to directly provide the arithmetic result. For
complex operations, we design Chain-of-Thought
responses, following Liu and Low (2023), due to
the challenges in directly deriving the answers for
these tasks. As shown in the example in Tab. 1,
when answering “12 * 43.5”, we require the model
to present the process of splitting, expansion, producting, and adding term by term before providing
the final answer.
**Unit Conversion Skill.** Similar to arithmetic,
unit conversion is necessary when dealing with
values of different units in MWP. Current LMs lack
sufficient knowledge of units, making it difficult to
accurately perform unit conversions (Huang et al.,
2023). Therefore, we also propose a prerequisite
task and corresponding training data for unit conversion. We first extract all quantity types in MWP
based on a comprehensive unit knowledge base
DimUnitKB (Huang et al., 2023). As shown in
Tab. 1, the units involved in MWP include seven
quantity types such as length, time, speed, etc.
Then we construct the unit conversion dataset for
unit pair under the same quantity type. For example, “meters” and “centimeters” are both denote
length, so it can be naturally stated as “1 meter is
2020; Kim and Linzen, 2020). They explore generalizing simple data to complex data through composition within an inter-dataset distribution. In
contrast, our study explores cross-dataset generalization, especially skill generalization in complex
reasoning tasks.
**Atomic Skill Learning** Numerous studies focus
on individually enhancing specific skills. Liu and
Low (2023) enhance arithmetic skill of LMs by
specialized designed COT prompting. Huang et al.
(2023) improve unit conversion skills through dimensional perception pretraining tasks. However,
these studies do not investigate generalizing the
enhanced skills to complex reasoning tasks.
**Curriculum Learning** Curriculum learning suggests that a structured and progressively challenging learning path can improve the learning effectiveness (Bengio et al., 2009; Wu et al., 2021). Previous work focus on ordinal training on a single
task based on the difficulty of the data (Jiang et al.,
2015; Xu et al., 2020; Elgaar and Amiri, 2023).
Our research advances the field by applying hierarchical curriculum learning to multitasks guided by
_composability among these tasks and investigates_
their generalization effects.
**3** **Method**
In this section, we first propose a probing framework to investigate generalization from atomic
skills to complex reasoning tasks (§ 3.1). Then,
we propose a hierarchical curriculum learning strategy to maximize the generalization effect (§ 3.2).
The framework is shown in Fig. 2.
**3.1** **Skill Generalization Probing**
**3.1.1** **Task Selection**
We chose math word problem (MWP, Cobbe et al.,
2021; Patel et al., 2021) as the investigated task,
as it is a common-used benchmark for complex
reasoning and the correctness can be objectively
assessed. We select arithmetic and unit conversion
as atomic skills because LMs display weaknesses
in these skills when addressing MWP (Imani et al.,
2023; Schick et al., 2023; Huang et al., 2023). To
obtain a model proficient in skills, we need to design prerequisite tasks and conduct skill training
first (§ 3.1.2). After that, we can investigate the
skill generalization on the enhanced model (§ 3.1.4)
-----
**Task** **Type** **Example**
AddSub _5520.8 + 1.34 = 5522.14; 5494 + 26.8 + 1.34 = 5520.8 + 1.34 = 5522.14;_
S-Mul _12 * 40 = 480; 12 * 3 = 36; 12 * 0.5 = 6; 12 * 0.01 = 0.12_
C-Mul **_12 * 43.5 = 12 * 40 + 12 * 3 + 12 * 0.5 = 480 + 36 + 6 = 516 + 6 = 522_**
S-Div _123 / 2 = 61.5; 214 / 3 = 80.33 123 / 10 = 12.3_
Arithmetic
Length **_522 meter is equal to 0.522 kiloteters._** _Two inches is equal to 5.08 centimeters._
Unit Conversion Time _1 hour is equal to 60 minutes._ _3 hours is equal to 10800 seconds._
Speed _1 m/s is equal to 3.6 km/h._ _72 kilometer per hour is equal to 20 meter per second._
**Question: James decides to run 3 sprints 4 times a week. He runs 43 meters each sprint.**
Math Word Problem - How many total meters does he run a week?
**Response: He sprints 3 * 4 = 12 times. So he runs 12 * 43 = 516 meters a week.**
**Question: James decides to run 3 sprints 4 times a week. He runs 43.5 meters each**
sprint. How many total kilometers does he run a week?
**Response:He sprints 3 * 4 = 12 times. So he runs 12 * 43.5 = 12 * 40 + 12 * 3 + 12 * 0.5**
**_= 480 + 36 + 6 = 516 + 6 = 522 meters a week. 522 meters is equal to 0.522 kilometers._**
So the answer is 0.522.
Applied Learning Mixture
Table 1: Examples for prerequisite tasks and complex reasoning tasks. The response for compositional tasks presents
arithmetic and unit conversion skill and they are highlighted in orange and green respectively. S- refers to simple
operation where the significant digit of the second number is 1. C- refers to complex operation.
_equal to 100 centimeters". We detail the constru-_
tion method in Appendix A.2.
**3.1.3** **Skill Training (ST)**
mat we use for atomic skills in prerequisite tasks
differs from how the original model performs these
skills. Therefore, we can assess by determining
whether there has been an implementation of response integration. For example, successful skill
generalization should involve performing C-Mul
in a Chain of Thought (COT) format rather than
providing the answer directly.
**3.2** **Hierarchical Curriculum Learning (HCL)**
Since the data for arithmetic and unit conversion
are both automatically constructed, we can generate them in large quantities. It is straight-forward
to enhance the atomic skills into language models
through continuous training. However, continuous
training may lead to catastrophic forgetting (McCloskey and Cohen, 1989). To address this, we
employ the replay strategy (Ke and Liu, 2022) that
is widely used in continuous training. We retain
some training examples from MWP and mix them[1]
with prerequisite task data to ensure the model retains its original problem-solving abilities in skill
training. We conduct individual training for each
skills as well as training with a mixture of skills.
**3.1.4** **_How to determine whether skill_**
**_generalization has been achieved?_**
Probing experiments show that atomic skills can
not spontaneously generalize to complex tasks (results are detailed in § 5.1). Therefore, we propose
hierarchical curriculum learning (HCL) to induce
skill generalization.
Our approach is primarily inspired by hierarchical curriculum design in pedagogy (White and
Gagné, 1974; Scott, 2008). In most of education
system, student complete prerequisite course before enrolling in a more advanced course (Huang
et al., 2005). Prerequisites are to ensure that students possess necessary foundational knowledge
and skills and advanced courses enable students to
learn how to apply these skills in complex scenarios (Rovick et al., 1999). Likewise, we design a
two-stage hierarchical curriculum learning framework in our setting, shown in Fig. 2 left. The first
stage is skill training, which has already been implemented (§ 3.1.3). We introduce the second phase
of applied learning to enable LMs to apply their
acquired skills to complex tasks.
Skill generalization refers to being able to apply
skills learned from prerequisite tasks to complex
reasoning tasks. Therefore, we can assess this by
testing the skill-enhanced model in § 3.1.2 on MWP.
We can consider the following aspects.
**Metrics Improvement: Skills improvement can**
fix mistakes caused by the deficiency of atomic
skills of language models when solving reasoning tasks. Therefore, ideally, skill generalization
should be reflected in an improvement in metrics.
**Response Integration: As seen in Tab. 1, the for-**
1We discuss the mixing ratio in Appendix C.1.
-----
**Dataset** **Operartion AddSub Mixmul MixAll**
2-Hop 43.10 44.24 34.88
GSM8KRAW
3-Hop 22.72 35.00 14.32
2-Hop 31.90 15.20 5.30
GSM8KHARD
3-Hop 21.50 8.04 2.63
Table 2: Arithmetic accuracy(%) of LLaMA-2 on RAW
and HARD. AddSub involves only addition and subtraction operations, MixMul includes multiplication, and
MixAll involves all operations.
difficulty gradient, demonstrating the effectiveness
of the enhanced dataset.
**Unit Conversion Augmentation.** The main challenge in unit conversion lies in the diverse ways
units are represented. Statistical analysis of the
data in GSM8K shown in Appendix B shows that
the representation of these units in GSM8K is quite
uniform, leading to an incomplete evaluation of
unit conversion skills. Without altering the original
meaning of the questions, we have diversified the
representations of units within the same quantity
type. Tab. 8 demonstrates that the enhanced data
better tests unit conversion skills.
**4.2** **Models and Baselines.**
We investigate the skill generalization on two models from different families, namely LLaMa-2 (7B;
Touvron et al., 2023) and Mistral (7B; Jiang et al.,
2023). The baselines we compared include the
following two types: (1) Vanilla Model, which
is a model without any special modifications or
enhancements. (2) SFT Model, which has been
supervised fine-tuned on the training set of MWP.
We test all the model with zero-shot and few-shot
prompting. The few-shot examples are drawn from
the training set in applied learning.
**5** **Experimental Analysis and Findings**
**5.1** **_RQ1: Can atomic skills generalize from_**
**_prerequisite tasks to compositional tasks_**
**_spontaneously?_**
**Atomic skill CANNOT generalize to composi-**
**tional tasks spontaneously.** Tab. 3 illustrates the
overall performance of different language models in compositional tasks. We observe that language models do not get noticeable gain on the
MWP after skill training. LLaMA-2 only improve
from 13.60% to 13.76% with zero-shot prompting
in HARD, even experiencing a slight decrease on
RAW. This phenomenon is model-independent, so
Operation
Hops
Operation
Types
Value
Types Difficulty Level
Significant
Digits
0 20 40 60 80 100 0 20 40 60 80 100
RAW HARD
Figure 3: The distribution of data difficulty across four
dimensions. Darker colors mean greater difficulty and
larger areas mean more data.
**3.2.1** **Applied Learning (AL)**
In this stage, we first construct compositional data
(shown in Tab. 1 bottom) for applied learning. Next,
we further train the model in the first stage with
compositional data. In Tab. 1, the response of the
original MWP directly provides the result when
performing arithmetic. Moreover, the responses
usually do not show the process of converting units.
We incorporates the response format from prerequisite tasks into the problem-solving process for
MWPs, aiming to induce the model to apply atomic
skills to complex tasks. We detail the data construction method in Appendix A.3.
**4** **Experimental Settings**
**4.1** **Evaluation Datasets.**
We choose GSM8K (Cobbe et al., 2021), a widely
used benchmark for complex multi-step reasoning,
requiring arithmetic and unit conversion skills. We
compile statistics of arithmetic and unit conversion
in the test set and observe that it lacks comprehensiveness in terms of difficulty and knowledge
coverage. Therefore, we enhance the difficulty of
GSM8K test set to demonstrate skill generalization
more significantly. We denote the origin dataset as
RAW and the augmented dataset as HARD.
**Arithmetic Augmentation.** The difficulty of
arithmetic skills can be assessed from four dimensions: operation hops, operation type, value type,
and significant digit. RAW has reasonable settings
in the first three dimensions, but its inclusion of
short significant digit resulting in lower demands
on arithmetic skills. Therefore, we extend the significant digits in RAW without changing the logic of
the original problems. In Tab. 2 we showcase the
performance of LLaMA-2 (Touvron et al., 2023)
on the test set before and after enhancement. The
difficulty of these three operations increases progressively, but testing on RAW can not distinguish
the difficulty. The enhanced test set aligns with this
-----
**Arithmetic** **Unit Conversion** **Mixture**
**Model** **Method**
**Zero-shot** **Few-shot** **Zero-shot** **Few-shot** **Zero-shot** **Few-shot**
RAW HARD RAW HARD RAW HARD RAW HARD ARITH UNIT ARITH UNIT
Vanilla 22.97 8.67 23.73 10.03 23.12 1.72 23.50 6.90 8.67 1.72 10.03 6.90
SFT 38.67 13.60 37.23 12.76 38.67 6.03 36.77 6.90 13.60 6.03 12.76 6.90
LLaMa-2-7B
ST 35.78 13.76 33.51 13.61 37.22 8.62 **37.91** 8.62 13.79 14.45 12.41 6.90
AL 41.77 24.66 39.42 23.30 37.30 18.86 36.39 15.52 23.13 20.69 **23.64** 23.27
HCL **48.36 28.06 47.76 28.57 38.28 20.68 36.92 22.41** **25.85** **23.27** 22.62 **28.44**
Vanilla 44.50 16.49 40.10 15.98 43.97 11.21 44.95 17.24 16.49 11.21 15.98 17.24
SFT 55.72 20.41 54.89 21.60 55.72 21.55 56.71 20.67 20.41 21.55 21.60 20.67
Mistral-7B
ST 55.72 22.45 55.34 22.11 57.16 25.00 54.97 28.45 22.28 25.00 22.62 18.97
AL 56.63 32.65 55.95 32.31 57.69 37.93 54.97 28.49 34.01 38.79 33.33 33.62
HCL **57.92 36.56 57.01 35.88 57.39 40.51 53.98 31.03** **35.54** **44.82** **35.37** **37.93**
Table 3: Accuracy(%) of different LMs with different training strategies on MWP. ST, AL, HCL refer to skill
training, applied learning and hierarchical curriculum learning respectively. RAW refers to testing on the origin
GSM8k test set, HARD refers to testing on the augmented test set on specific atomic skill. In Mixture column, ARITH
and UNIT refer to testing on the augmented test set on arithmetic and unit conversion skills respectively.
tasks to compositional tasks, they can be induced
through applied learning. Furthermore, we emphasize that this induced generalization needs to be
achieved through training and cannot be replaced
by few-shot prompting, as the metrics for few-shot
learning do not surpass those for zero-shot learning.
**The enhancement of compositional tasks stem**
**from the improvement of atomic skills.** We extract the atomic skill part from the responses on
MWP and calculate their accuracy, shown in Fig. 4.
Hierarchical curriculum learning results in significant improvements in arithmetic accuracy for all
types of operations. The improvements are particularly striking for MixMul and MixAll, suggesting
that current LMs are struggling to perform these
arithmetic operations. These improvement is consistent with the gain in answer accuracy for compositional tasks, as detailed in Appendix D.1.
Furthermore, we conduct an error analysis of the
responses, as seen in Fig. 5. We first determine
if a response involves an atomic skill error, and
subsequently categorize other mistakes. The majority of errors made by vanilla model stem from
deficiencies in atomic skills. After applying HCL,
a few of errors shift to be question misunderstood
and reasoning errors, while most errors are fully
corrected to the right answers. This demonstrates
that the lack of atomic skills in a model masks
its superior reasoning capabilities, and HCL can
effectively address this.
**Mixture training is also effective for skill gener-**
**alization.** As seen in the third column of Tab. 3,
Method Response
... The salesman sold 31 shoes for
ST 31 * $25 = 775 .Thus, the salesman made a
profit of 775 - 340 = 435. So the answer...
The salesman sold 31 sneakers for
AL 31 * $25 = 31 * 25 = 31 * 20 + 31 * 5 = 620 ✓
+ 155 = 775 throughout the rest of the...
The salesman sold 31 shoes for $25 each, so
HCL his profit was 31 * $25 = 31 * 25 = 31 * 20 ✓
+ 31 * 5 = 620 + 155 = 775. In total ...
Table 4: Example of LM’s response to MWP with different training strategies. The last column indicates
whether the prerequisite task format has been integrated.
skill does not generalize from metrics perspective.
Moreover, as shown in Tab. 4, ST model employs
a format entirely distinct from that of the prerequisite task. This clearly indicates that atomic skills
actually do not generalize from prerequisite tasks
to complex tasks at all.
**Atomic skills can be induced to generalize**
**through applied learning.** HCL introduce the
second phase of applied learning to induce skill
generalization in hierarchical curriculum learning.
As seen in Tab. 3, LLaMA-2 significantly improve
from 13.60% to 28.76% with zero-shot prompting
in HARD, demonstrating the successful generalization of skills. Case studies from Tab. 4 further
show that models after applied learning (AL and
HCL) are capable of integrating data from prerequisites into their responses to MWP, thus performing
accurate calculations. Therefore, although skills
do not spontaneously generalize from prerequisite
-----
80
Accuracy (%) 60
LLaMa-2. Left figure shows the results on RAW and right
|Col1|Col2|Vanilla SFT ST AL HCL|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|Col12|Col13|Col14|Col15|Col16|Col17|Col18|Col19|Col20|Col21|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
||||||||||||||||||||||
|e|Add 4|Sub : A|c|c|Mix u|Mu ra|l c|y|(|Mix %|Al )|l o||Ad|dSu|Mu|l||Al|l|
figure shows the results on HARD.
|Col1|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
|||||||
|||w/ w/|o ST ST|RA HA|W RD|
|Col1|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
|||||||
|||||ST- ST- HC HC|AddSub Overall L-RAW L-HARD|
70 100
40 60 90
50
30 80
40
20 ST-AddSub 70
30 ST-Overall
w/o ST RAW HCL-RAW 60
10 w/ ST HARD 20 HCL-HARD
0.0 2.5 5.0 7.5 10.0 0 200 400 600
Training Steps (/K) Token (/K)
Figure 6: Accuracy(%) of LLaMa-2 as training increases in different setting.
ifying atomic skills, while swift applied learning
afterwards boosts the model’s practical application
capabilities. In practical applications, it is often
more challenging to obtain a large scale of heterogeneous compositional data than to acquire prerequisite data. The demand for data aligns with the
challenges of data collection, which further demonstrates the feasibility of our approach.
**5.2** **_RQ2: Do atomic skills exhibits cross dataset_**
**_or cross domain generalization?_**
Given that the data in applied learning is sourced
from GSM8k, we aim to ascertain whether
skill generalization is also effective in out-ofdistribution (OOD) data. We categorize the OOD
data into two types: inter-domain and cross-domain.
For inter-domain data, we use SVAMP (Patel et al.,
2021) and MathQA (Amini et al., 2019), both
of which comprise math word problems as well.
As for cross-domain data, we use the MMLUPhysics (Hendrycks et al., 2021), as it is a physical
task but also relies on arithmetic skill.
**Skill generalization is effective in inter-domain**
**data.** As seen in Tab. 5, with applied learning,
models generates responses using step-wise format,
unlike the original model which answered directly.
This demonstrates models can effectively utilize
the skills even if the questions originating from a
cross-dataset distribution.
**Atomic skills can generalize across domains and**
**show selective adaptability.** In Tab. 5, the model
perform arithmetic among physical quantities in
the same format as in prerequisite tasks, indicating
that skill generalization still exhibits effectiveness
in cross-domain scenarios. LMs also exhibit selective adaptability when processing unseen data.
For instance, when dealing with the exponential
value “10[−][6]” that is unseen in prerequisite tasks,
LMs opt to answer in its original format. Our findings reveal that while we need to introduce some
Atomic Skill Fail Correct Reason Error Question Misunderstood
12.0%12.0% 21.0%
8.0%
14.0% 55.0%
68.0% 10.0%
Figure 5: Error analysis on Vanilla model (left) and
HCL model (right).
mixture training yields similar results to achieve
skill generalization as individual training. On
the LLaMA-2 model with zero-shot prompting,
arithmetic performance increase from 13.60% to
25.85%, and unit conversion performance increase
from 6.03% to 23.27%. Compared to individual
training, mixture training leads to interactive effects, which depend on the intrinsic characteristics
of the skills themselves. From the results, the improvement in arithmetic from mixed training is not
as significant as individual training. However, improvement in unit conversion was higher, possibly
because enhanced arithmetic skills positively affected unit conversion, as accurate calculations aid
in better conversions.
**Skill learning is indispensable in HCL.** In
Tab. 3, the performance of applied learning alone
significantly lower than the full HCL. Fig. 6 left
shows the accuracy of the applied learning and
HCL with training increase and demonstrates that
HCL reaches a better upper bound. We argue that
this is because applied learning only teaches the
formal application of the skills without imparting
the associated knowledge.
Furthermore, we observe that applied learning
requires much less data compared to skill training.
In Fig. 6 right, applied learning converge before
200K tokens, whereas skill training necessitates
over 600K tokens. This suggests that enhancing
skills is more difficult than learning how to apply
them. It underscores the educational principle that
mastering prerequisite tasks is essential for solid
-----
Dataset Response
**Method** AddSub S-Mul C-Mul S-Div
_w/o Skill Training_
Vanilla **76.84** 51.65 22.91 70.67
AL 74.03 (-2.81) **52.32** (+0.67) **40.72** (+17.81) 78.94 (+8.27)
_w/ Skill Training_
ST 97.54 **95.75** 93.45 78.19
HCL **98.59** (+1.05) 82.27 (-13.48) 93.82 (+0.37) **81.20** (+3.01)
Table 6: Accuracy(%) in prerequisite tasks of LMs with
and without skill training.
Appendix D.2. Moreover, the gains brought by
compositional tasks are minimal. Applied learning
only results in an increase from 22.91% to 40.72%
on C-Mul while skill training leads to a skyrocketing increase to 93.45%. We attribute this to the
limited heterogeneous data for applied learning.
The prerequisite tasks can automatically generate a
large amount of heterogeneous data, but the compositional data is limited by the training set of the original training set for complex reasoning task. Therefore, applied learning alone can not sufficiently
enhance atomic skills, further highlighting the importance of a skill training stage in hierarchical
curriculum learning.
**Continued training on compositional tasks does**
**not lead to catastrophic forgetting on atomic**
**skills.** Models with skill learning already possess
proficient atomic skills but there is a risk that training on heterogeneous data in applied learning may
lead to catastrophic forgetting. However, as seen
in Tab. 6, it striking is that HCL remains at a comparable level to skill training for most arithmetic
operation. This illustrates that continuous training
on a compositional tasks, can spontaneously prevent catastrophic forgetting of atomic skills. We
suggest this due to the limited operations on compositional data in S-Mul, which may lead the model
to be confused by other data within C-Mul. However, an anomaly is observed in S-Mul operation,
which we further discuss in Appendix D.3.
**6** **Conclusion**
In this work, we are the first to investigate the generalization from atomic tasks to complex reasoning
tasks. We propose a probing framework, in which
we select math word problems as research example and arithmetic and unit conversion as related
atomic skills. By empirical experiments, we reveal
that atomic skills can not spontaneously generalize to complex reasoning tasks. Furthermore, we
propose a hierarchical curriculum learning strategy
**SFT: They harvest 66 sacks per day and each**
sack contains 28 oranges so they harvest 66*28
= 1808 oranges per day.
**HCL: They harvest 66 sacks per day, and**
each sack contains 28 oranges, so they harvest
66 * 28 = 66 * 20 + 66 * 8 = 1320 + 528 = 1848
oranges per day.
SVAMP
MathQA
MMLU
-Physics
**SFT: If selling an article for Rs. 1620 gives a**
25% profit, then the percentage profit earned is
25/100*1620 = 405.
**HCL: When the article is sold for Rs. 1620, the**
seller makes a profit of 25/100*1620 = 25 / 100
- 1620 = 0.25 * 1620 = 0.25 * 1000 + 0.25 *600
+ 0.25 * 20 + 0.25 * 0 = 250 + 150 + 5 + 0 =
400 + 5 + 0 = 405 + 0 = 405. When the article
is sold for Rs. 1280, the ...
**SFT: The stone’s speed in the air is 24 m/s * 9.8**
m/s ^2 = 24 * 9.8 = 22.8= 22.8 m/s. The ...
**HCL: The horizontal component of the stone’s**
speed is 24 m/s * 9.81 m/s^2 = 24 * 9.81 = 24 *
9 + 24 * 0.8 + 24 * 0.01 = 216 + 19.2 + 0.24 =
235.2 + 0.24 = 235.44 = 235.44 m/s. The ...
**HCL: ... Thus, R = PV/nT = (1.105 * 20 * 10^**
-6) / (0.0451 * 273) = 8.314 J/mol*K. Since ...
Table 5: Example of response in cross-dataset and crossdomain senarios.
compositional data during applied learning to induce skill generalization, it is not necessary to provide for every tasks. However, it is crucial to have
well-designed prerequisite data that can be applied
across a broad spectrum of scenarios.
**5.3** **_RQ3: Can complex tasks help enhance_**
**_atomic skills conversely?_**
Considering that applied learning data itself combines multiple atomic skills, it suggests that compositional data may also have a positive effect on
atomic skills. We construct test dataset to assess the
arithmetic skill, and then evaluate on the models
with different training strategies.
**Training with compositional data benefits the**
**atomic skills, but the effect is limited.** As shown
in Tab. 6, applied learning achieve better performance in all operations compared to SFT, showing
that compositional task have a positive effect on
prerequisite data conversely. This lead to a promising conclusion that training on complex reasoning dataset not only improve the performance of
the specific task but also benefits its prerequisite
tasks, as long as they exhibit composability in response. We further discuss the improvement in
-----
to induce skill generalization and show effectiveness. Our experimental findings provide valuable
guidance for designing better training strategies for
complex reasoning tasks in future work.
**Limitations**
In this work, we choose math word problems as
the research task, yet there are numerous more
complicated reasoning tasks that rely on atomic
skills, such as task planning, scenario modelling,
decision making, and so on. Although we do not
delve deeply into more complicated and pluralistic
reasoning tasks, these areas emerge a particularly
interesting direction for future research. Another
limitation is that our proposed skill generalization
depends on atomic skills that can be explicitly
demonstrated in the response. It remains uncertain whether implicit atomic skills can also have a
positive effect on complex reasoning tasks. Moreover, the definition of atomic skills and the method
for prerequisite data generation are based on manual specification. It is worth exploring automated
methodologies to design a complete framework for
hierarchical curriculum learning in future work.
**Ethical Considerations**
All the data sources and language models used in
this paper is available. In this paper, most of the
data generation and evaluation are automated, except for the error analysis in § 5.1 where human
evaluation is used. The details about human evaluation are provided in Appendix C.2. We protect the
privacy rights of annotators. All annotators have
been paid above the local minimum wage and consented to use the evaluation dataset for research
purposes covered in our paper. Our work does not
raise any ethical considerations regarding potential
risks and does not involve the research of human
subjects.
**References**
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik
Koncel-Kedziorski, Yejin Choi, and Hannaneh Ha[jishirzi. 2019. MathQA: Towards interpretable math](https://doi.org/10.18653/v1/N19-1245)
[word problem solving with operation-based for-](https://doi.org/10.18653/v1/N19-1245)
[malisms. In Proceedings of the 2019 Conference](https://doi.org/10.18653/v1/N19-1245)
_of the North American Chapter of the Association for_
_Computational Linguistics: Human Language Tech-_
_nologies, Volume 1 (Long and Short Papers), pages_
2357–2367, Minneapolis, Minnesota. Association for
Computational Linguistics.
Yoshua Bengio, Jérôme Louradour, Ronan Collobert,
and Jason Weston. 2009. Curriculum learning. In
_Proceedings of the 26th annual international confer-_
_ence on machine learning, pages 41–48._
Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue
Wang, Ce Zhang, Frederic Sala, and Christopher Ré.
[2023. Skill-it! a data-driven skills framework for](https://api.semanticscholar.org/CorpusID:260203057)
[understanding and training language models. ArXiv,](https://api.semanticscholar.org/CorpusID:260203057)
abs/2307.14430.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
Mohamed Elgaar and Hadi Amiri. 2023. [HuCurl:](https://doi.org/10.18653/v1/2023.acl-long.104)
[Human-induced curriculum discovery. In Proceed-](https://doi.org/10.18653/v1/2023.acl-long.104)
_ings of the 61st Annual Meeting of the Association for_
_Computational Linguistics (Volume 1: Long Papers),_
pages 1862–1877, Toronto, Canada. Association for
Computational Linguistics.
Dan Hendrycks, Collin Burns, Steven Basart, Andy
Zou, Mantas Mazeika, Dawn Song, and Jacob Stein[hardt. 2021. Measuring massive multitask language](https://openreview.net/forum?id=d7KBjmI3GmQ)
[understanding. In 9th International Conference on](https://openreview.net/forum?id=d7KBjmI3GmQ)
_Learning Representations, ICLR 2021, Virtual Event,_
_Austria, May 3-7, 2021. OpenReview.net._
[Jie Huang and Kevin Chen-Chuan Chang. 2023. To-](https://doi.org/10.18653/v1/2023.findings-acl.67)
[wards reasoning in large language models: A survey.](https://doi.org/10.18653/v1/2023.findings-acl.67)
In Findings of the Association for Computational
_Linguistics: ACL 2023, pages 1049–1065, Toronto,_
Canada. Association for Computational Linguistics.
Jiunn Huang, John O’shaughnessy, and Robin Wagner.
2005. Prerequisite change and its effect on intermediate accounting performance. Journal of Education
_for Business, 80(5):283–288._
Yuncheng Huang, Qianyu He, Jiaqing Liang, Sihang
Jiang, Yanghua Xiao, and Yunwen Chen. 2023. Enhancing quantitative reasoning skills of large language models through dimension perception. arXiv
_preprint arXiv:2312.17532._
Shima Imani, Liang Du, and Harsh Shrivastava. 2023.
[MathPrompter: Mathematical reasoning using large](https://doi.org/10.18653/v1/2023.acl-industry.4)
[language models. In Proceedings of the 61st An-](https://doi.org/10.18653/v1/2023.acl-industry.4)
_nual Meeting of the Association for Computational_
_Linguistics (Volume 5: Industry Track), pages 37–_
42, Toronto, Canada. Association for Computational
Linguistics.
Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego
de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral
7b. arXiv preprint arXiv:2310.06825.
Lu Jiang, Deyu Meng, Qian Zhao, Shiguang Shan, and
Alexander Hauptmann. 2015. Self-paced curriculum
learning. In Proceedings of the AAAI Conference on
_Artificial Intelligence, volume 29._
-----
Zixuan Ke and Bing Liu. 2022. Continual learning of
natural language processing tasks: A survey. arXiv
_preprint arXiv:2211.12701._
Daniel Keysers, Nathanael Schärli, Nathan Scales,
Hylke Buisman, Daniel Furrer, Sergii Kashubin,
Nikola Momchev, Danila Sinopalnikov, Lukasz
Stafiniak, Tibor Tihon, Dmitry Tsarkov, Xiao Wang,
[Marc van Zee, and Olivier Bousquet. 2020. Measur-](https://openreview.net/forum?id=SygcCnNKwr)
[ing compositional generalization: A comprehensive](https://openreview.net/forum?id=SygcCnNKwr)
[method on realistic data. In 8th International Confer-](https://openreview.net/forum?id=SygcCnNKwr)
_ence on Learning Representations, ICLR 2020, Addis_
_Ababa, Ethiopia, April 26-30, 2020. OpenReview.net._
Muhammad Khalifa, Lajanugen Logeswaran, Moon[tae Lee, Honglak Lee, and Lu Wang. 2023. Grace:](https://openreview.net/forum?id=2MiTZxLFA9)
[Discriminator-guided chain-of-thought reasoning.](https://openreview.net/forum?id=2MiTZxLFA9)
Daniel Khashabi, Sewon Min, Tushar Khot, Ashish
Sabharwal, Oyvind Tafjord, Peter Clark, and Han[naneh Hajishirzi. 2020. UNIFIEDQA: Crossing for-](https://doi.org/10.18653/v1/2020.findings-emnlp.171)
[mat boundaries with a single QA system. In Find-](https://doi.org/10.18653/v1/2020.findings-emnlp.171)
_ings of the Association for Computational Linguistics:_
_EMNLP 2020, pages 1896–1907, Online. Association_
for Computational Linguistics.
Joongwon Kim, Akari Asai, Gabriel Ilharco, and Han[naneh Hajishirzi. 2023. TaskWeb: Selecting better](https://doi.org/10.18653/v1/2023.emnlp-main.680)
[source tasks for multi-task NLP. In Proceedings](https://doi.org/10.18653/v1/2023.emnlp-main.680)
_of the 2023 Conference on Empirical Methods in_
_Natural Language Processing, pages 11032–11052,_
Singapore. Association for Computational Linguistics.
[Najoung Kim and Tal Linzen. 2020. COGS: A compo-](https://doi.org/10.18653/v1/2020.emnlp-main.731)
[sitional generalization challenge based on semantic](https://doi.org/10.18653/v1/2020.emnlp-main.731)
[interpretation. In Proceedings of the 2020 Confer-](https://doi.org/10.18653/v1/2020.emnlp-main.731)
_ence on Empirical Methods in Natural Language_
_Processing (EMNLP), pages 9087–9105, Online. As-_
sociation for Computational Linguistics.
Brenden Lake and Marco Baroni. 2018. Generalization
without systematicity: On the compositional skills
of sequence-to-sequence recurrent networks. In In_ternational conference on machine learning, pages_
2873–2882. PMLR.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio
Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation
for knowledge-intensive nlp tasks. Advances in Neu_ral Information Processing Systems, 33:9459–9474._
Tiedong Liu and Bryan Kian Hsiang Low. 2023. Goat:
Fine-tuned llama outperforms gpt-4 on arithmetic
tasks. arXiv preprint arXiv:2305.14201.
Michael McCloskey and Neal J Cohen. 1989. Catastrophic interference in connectionist networks: The
sequential learning problem. In Psychology of learn_ing and motivation, volume 24, pages 109–165. Else-_
vier.
Matteo Muffo, Aldo Cocco, and Enrico Bertino. 2022.
[Evaluating transformer language models on arith-](https://aclanthology.org/2022.lrec-1.30)
[metic operations using number decomposition. In](https://aclanthology.org/2022.lrec-1.30)
_Proceedings of the Thirteenth Language Resources_
_and Evaluation Conference, pages 291–297, Mar-_
seille, France. European Language Resources Association.
Rodrigo Nogueira, Zhiying Jiang, and Jimmy Lin.
2021. Investigating the limitations of transformers with simple arithmetic tasks. _arXiv preprint_
_arXiv:2102.13019._
Sungjin Park, Seungwoo Ryu, and Edward Choi. 2022.
[Do language models understand measurements? In](https://doi.org/10.18653/v1/2022.findings-emnlp.128)
_Findings of the Association for Computational Lin-_
_guistics: EMNLP 2022, pages 1782–1792, Abu_
Dhabi, United Arab Emirates. Association for Computational Linguistics.
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
[2021. Are NLP models really able to solve simple](https://doi.org/10.18653/v1/2021.naacl-main.168)
[math word problems? In Proceedings of the 2021](https://doi.org/10.18653/v1/2021.naacl-main.168)
_Conference of the North American Chapter of the_
_Association for Computational Linguistics: Human_
_Language Technologies, pages 2080–2094, Online._
Association for Computational Linguistics.
Allen A Rovick, Joel A Michael, Harold I Modell,
David S Bruce, Barbara Horwitz, Thomas Adamson, Daniel R Richardson, Dee U Silverthorn, and
Shirley A Whitescarver. 1999. How accurate are
our assumptions about our students’ background
knowledge? _Advances in Physiology Education,_
276(6):S93.
Victor Sanh, Albert Webson, Colin Raffel, Stephen H.
Bach, Lintang Sutawika, Zaid Alyafeai, Antoine
Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey,
M Saiful Bari, Canwen Xu, Urmish Thakker,
Shanya Sharma Sharma, Eliza Szczechla, Taewoon
Kim, Gunjan Chhablani, Nihal V. Nayak, Debajyoti
Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han
Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong,
Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Févry, Jason Alan Fries, Ryan
Teehan, Teven Le Scao, Stella Biderman, Leo Gao,
[Thomas Wolf, and Alexander M. Rush. 2022. Multi-](https://openreview.net/forum?id=9Vrb9D0WI4)
[task prompted training enables zero-shot task gener-](https://openreview.net/forum?id=9Vrb9D0WI4)
[alization. In The Tenth International Conference on](https://openreview.net/forum?id=9Vrb9D0WI4)
_Learning Representations, ICLR 2022, Virtual Event,_
_April 25-29, 2022. OpenReview.net._
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta
Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola
Cancedda, and Thomas Scialom. 2023. Toolformer:
Language models can teach themselves to use tools.
_arXiv preprint arXiv:2302.04761._
Shaun Eric Scott. 2008. Student academic performance
in skills-based technology courses delivered through
different scheduling formats. Dissertations & Theses
_- Gradworks._
-----
[Alon Talmor and Jonathan Berant. 2019. MultiQA: An](https://doi.org/10.18653/v1/P19-1485)
[empirical investigation of generalization and trans-](https://doi.org/10.18653/v1/P19-1485)
[fer in reading comprehension. In Proceedings of the](https://doi.org/10.18653/v1/P19-1485)
_57th Annual Meeting of the Association for Computa-_
_tional Linguistics, pages 4911–4921, Florence, Italy._
Association for Computational Linguistics.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann
Dubois, Xuechen Li, Carlos Guestrin, Percy Liang,
and Tatsunori B. Hashimoto. 2023. Stanford alpaca:
An instruction-following llama model. [https://](https://github.com/tatsu-lab/stanford_alpaca)
[github.com/tatsu-lab/stanford_alpaca.](https://github.com/tatsu-lab/stanford_alpaca)
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint_
_arXiv:2307.09288._
Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin
Guu, Adams Wei Yu, Brian Lester, Nan Du, An[drew M. Dai, and Quoc V. Le. 2022a. Finetuned](https://openreview.net/forum?id=gEZrGCozdqR)
[language models are zero-shot learners. In The Tenth](https://openreview.net/forum?id=gEZrGCozdqR)
_International Conference on Learning Representa-_
_tions, ICLR 2022, Virtual Event, April 25-29, 2022._
OpenReview.net.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou,
et al. 2022b. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural
_Information Processing Systems, 35:24824–24837._
Richard T White and Robert M Gagné. 1974. Past and
future research on learning hierarchies. Educational
_psychologist, 11(1):19–28._
Xiaoxia Wu, Ethan Dyer, and Behnam Neyshabur. 2021.
[When do curricula work? In 9th International Con-](https://openreview.net/forum?id=tW4QEInpni)
_ference on Learning Representations, ICLR 2021, Vir-_
_tual Event, Austria, May 3-7, 2021. OpenReview.net._
Benfeng Xu, Licheng Zhang, Zhendong Mao, Quan
Wang, Hongtao Xie, and Yongdong Zhang. 2020.
[Curriculum learning for natural language understand-](https://doi.org/10.18653/v1/2020.acl-main.542)
[ing. In Proceedings of the 58th Annual Meeting of](https://doi.org/10.18653/v1/2020.acl-main.542)
_the Association for Computational Linguistics, pages_
6095–6104, Online. Association for Computational
Linguistics.
Qinyuan Ye, Bill Yuchen Lin, and Xiang Ren. 2021.
[CrossFit: A few-shot learning challenge for cross-](https://doi.org/10.18653/v1/2021.emnlp-main.572)
[task generalization in NLP. pages 7163–7189.](https://doi.org/10.18653/v1/2021.emnlp-main.572)
-----
**A** **Detail for Training Data**
**A.1** **Arithmetic Prerequisite Task**
We construct data for arithmetic prerequisite task
by a rule-based approach, seen in Algorithm 1. The
difficulty of the data considers four aspects: the
number of hops, the length of a significant digit, the
type of value and the type of operation. The number of hops ranges from two to five, the more hops
means the harder it is. The significant digit length
ranges from one to eight. Longer length means the
more complex and difficult for calculations. Value
types include: all integers, all floating, and mixed
data types, with increasing difficulty. Operation
types include AddSub, W-Mul, C-Mul and S-Div.
Among them, AddSub consists of only addition and
subtraction, S-Mul involves simple multiplication
operations (the second value with a significant figure of 1), C-Mul encompasses complex multiplication operations where we break down the second
number and perform step-wise calculations, and
S-Div represents simple division.
**Algorithm 1: Arithmetic Data Generation**
**Data: Operation set O, Significant Digit set D, Value**
type set V, Hop set H
**Result: Arithmetic Expression E, Response R**
// Initialization
**1 h ←** Random(H); o ← Random(O);
**2 d ←** Random(D); v ← Random(V );
**3 num// Expression generation0 ←** NumberGenerator(d, v); E ← _num0;_
**4 for i ←** 1 to h do
**5** _op ←_ OperationGenerator(o);
**67** _nEi ←+1 ←E ◦NumberGeneratorop ◦_ _ni+1;_ (d, v, op);
**8 end**
**9 R ←** ResponseGeneration (E);
**10 return E, R;**
// Iterative response generation
**11 Function ResponseGeneration(exp):**
**12** _i ←_ _idx if exists opidx in MulDiv else 0;_
**13** _Ep_ (n0, op0, . . ., opi 1, ni);
**1514** _SREs ← ← ←(OneHopResponseopi+1, . . ., nn);_ _−(ni, opi, ni+1);_
**1716** _MRnnew ← ←MergeResponseeval(ni, opi, ni(+1E)p;, SR, Es);_
**1819** _Ereturn ←_ _E MR +p ◦_ _nnew ResponseGeneration ◦_ _Es;_ (E);
In Algorithm 1, reponse demonstrates a stepwise arithmetic process based on hop. For C-Mul
operation, the second value is split and then perform bit-wise multiplication and results addition.
SplitCOT, MulCOT and AddCOT represent the process of digits splitting, bit-wise multiplication and
results addition respectively.
**A.2** **Unit Conversion Prerequisite Task**
We construct data for unit conversion prerequisite
task based on DimUnitKB (Huang et al., 2023).
We first extract the quantity types of units contained in the MWP, including seven type in total:
length, time, speed, mass, volume, area and power.
We construct pair-wise conversion data for units
representing the same quantity types, detailed in
Algorithm 2.
**Algorithm 2: UnitConv. Data Generation**
**Data: Quantity Type Set Q, DimUnitKB K**
**Result: Unit Conversion Text Data T**
// Select random quantity type and units
**1 q ←** Random(S);
**23 U u// Calculate conversion ratio0q, u ←{1 ←u ∈RandomK | u.(typeUq, 2) =; q};**
**4 conv ←** _u0.conv/u1.conv;_
// Generate conversion text
**5 T ←** TextGeneration(u0, u1, conv);
**6 return T** ;
**A.3** **Compositional Data**
In applied learning, we need to construct data that
integrates atomic skills into compositional tasks.
The construction of compositional data consists of
the following steps:
1. Sample data items from the train set of complex reasoning tasks.
2. Extract all segments related to atomic skills
from the response of the item.
3. Use the ResponseGeneration function and
TextGeneration function in Algorithm 1 and
Algorithm 2 to generate the new format of
response for atomic skill.
4. Replace all segments related to atomic skills
with the new answer response to construct
compositional data.
For example, in arithmetic, we extract the arithmetic segments “12 * 37 = 448” from the response
“the total length of the ribbon is: 12 * 37 = 448 cm”
and then replace it with “12 * 37 = 12 * (30 + 7)
// One Hop COT response generation
**20 Function OneHopResponse(n0, op, n1):**
|21 22 23 24 25 26 27 28 29|if op = “C-MUL” then w 0, w 1, ..., w ←SplitDigits(n 1); m SR ←SplitCOT(n 0, w 0, w 1, ..., w m); SR += MulCOT(n , w , w , ..., w ); 0 0 1 m SR += AddCOT(n ∗w0, ..., n ∗w m); 0 0 else SR ←Eval(n 0, op, n 1); end return SR;|
|---|---|
-----
Dataset # w/ unit conv. # Length # Mass # Power
GSM8kRAW 61 3 5 2
GSM8kHARD 116 10 13 4
Table 7: Partial statistic of RAW and HARD set.
**Algorithm 3: Augmenting MWP on Arith.**
**Input: Original math problem (QRAW, ARAW )**
**Output: Enhanced math problem**
(QHARD, AHARD)
**1 N ←** ExtractNumber(QRAW );
**2 I ←** ExtractNumber(ARAW ) \ N ;
// Determine the computational relationship for I
**3 f ←** ExtractMapping(ARAW );
**4 D ←** Random a maximum significant digit length;
// Generate new numbers for N with length <= D
**5 for i ←** 1 to Length(N ) do
**6** _N_ [i] ← RandomNumber(D);
**7 end**
**GSM8kRAW** **GSM8kHARD**
**Model**
overall w/ unit conv. overall w/ unit conv.
LLaMa-2-7B 27.16 21.31 14.44 1.72
Mistral-7B 45.10 47.54 29.46 13.79
Table 8: Performance of LLaMa-2 on RAW and HARD
dataset in unit conversion skill.
**Algorithm 4: Augmenting MWP on Unit**
Conversion.
**Input: Original math problem (QRAW, ARAW ),**
DimUnitKB K
**Output: Augmented problem (QHARD, AHARD)**
**12 u q ←0 ←GetQuantityKindExtractTargetUnit(u(0Q, KRAW);** _, K);_
// Randomly select a new unit with the same quantity type
**34 U conv1 ← ←RandomSelectu0.conv/u1.(conv;{u ∈** _K | u.type = q});_
**5 (QHARD, AHARD)**
_←_
Substitude(QRAW, ARAW, u0, u1, conv);
**6 return QHARD, AHARD;**
without changing the logic of the original problems and denote as HARD set. The algorithm is
outlined in Algorithm 3. Initially, we extract the
numbers in the question, along with the intermediate numbers in the answer, and the computation
relationships between them. Subsequently, we randomly get new numbers based on the maximum
significant value length, and compute new intermediate numbers accordingly. These updated numbers
are then integrated into the original question and
answer, resulting in an enhanced question-answer
pairs. As shown in Fig. 3 (right), the HARD set covers a broader range of significant digits. The new
data distribution is reasonable with the proportion
decreasing appropriately as the difficulty increases.
As for unit conversion, we augment the test data
by including a wider variety of unit representations.
The algorithm is outlined in Algorithm 3. We evaluate LLAMA-2 on both RAW and HARD. As shown
in Table 8, the results suggest that the HARD set
provides greater differentiation.
**C** **Experimental Details**
The implementations of all the LMs in our paper
are based on the HuggingFace Transformers[2] and
Deepspeed[3]. We set the learning rate in 1e-5, 1e-6
with a WarmupLR scheduler, batch size of 32, max
sequence length of 1024 and train for 8 epochs. All
of our experiments are conducted on the worksta
2https://github.com/huggingface/transformers/
3https://github.com/microsoft/DeepSpeed
// Compute new values for I based on f and updated N
**8 I ←** _f_ (N, I);
// Substitute the new numbers into the original problem
**9 QHARD, AHARD**
Substitute(QRAW ←, ARAW, N, I);
**10 return QHARD, AHARD;**
= 12 * 30 + 12 * 7 = 360 + 84 = 444” to get the
compositional data “the total length of the ribbon
is: 12 * 37 = 12 * (30 + 7) = 12 * 30 + 12 * 7 =
360 + 84 = 444 cm”.
**B** **Detail for Evaluation Data**
**B.1** **Deficiency of GSM8K in Skill Assessment**
GSM8K (Cobbe et al., 2021) is the current widely
used evaluation benchmark for math word problem. However, it does not comprehensively cover
the application of atomic skills across various difficulty levels. We analyze the difficulty coverage
of GSM8k on arithmetic, taking into account four
aspects mentioned in Section A.1, as depicted in
Fig. 3 (left). Darker colours indicate more difficult, and a greater area signifies a larger volume
of data. It shows that the GSM8k test set comprehensively covers various operation hops, operation
types, and value types. However, the significant
digits involved are primarily focused on simple
data. GSM8k dataset also lacks comprehensiveness
on unit conversion skills. The statistic is shown in
Tab. 7. It covers only a few unit expressions in
various types.
**B.2** **Method for Data Augmenting**
As for arithmetic, we augment the evaluation data
through increasing the significant digit lengths
-----
tions of NVIDIA A800 PCIe with 80GB memory
and the environment of Ubuntu 20.04.6 LTS and
torch 2.0.1. In the evaluation, we employ vllm [4] for
inference. All the result are generate with a greedy
decoding strategy.
We do not utilize additional prompts for prerequisite data but employ the prompt from Alpaca (Taori
et al., 2023) for training and testing on MWP.
**C.1** **The Ratio Selection for Replay Strategy**
incorrect representation due to an error in the reasoning process. 4.Correct: indicating that both the
reasoning process and the answer are completely
correct; The priority of these four types of classifications decreases in order.
**D** **Additional Results**
**D.1** **The correlation of atomic skills**
Fig. 8 illustrates the correlation between the accuracy gains in prerequisite tasks and compositional
tasks. A higher performance on prerequisite tasks
indicates a stronger atomic skill of the model, revealing a positive correlation between the model’s
atomic skills and its capability to solve compositional tasks. This supports our conclusion that the
gains brought by our method are attributable to the
enhancement of atomic skills.
We retain some training examples from MWP and
mix them with prerequisite task data to ensure the
model retains its original problem-solving abilities
in skill training. Fig. 7 shows the performance of
the model at different mixing ratios on prerequisite
tasks (atomic skills) and complex reasoning tasks.
90
80
70
60
50
40
30
20
10
|Prerequisite T ask Complex Reasoning T ask|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|
|---|---|---|---|---|---|---|---|---|
|50 40 30 20 10|||||||||
||||||||||
||||||||||
||||||||||
||||||||||
||||||||||
||||||||||
||||||||||
0.11 10 Arithmetic50 100
0.10.5 1 Unit Conversion5 10
Figure 7: Accuracy of LMs on prerequisite tasks and
complex reasoning tasks with different mixing ratio.
Overall, as the proportion of prerequisite data
increases, the model become better in atomic skills
and worse in solving MWP. The balanced mixing
ratios vary for different skills. Arithmetic requires
more prerequisite data than unit conversion. Ultimately, we chose to set the mixing ratio for arithmetic to 10 and for unit conversion to 1 in skill
training, to achieve a relatively effective continual
learning.
Prerequisite Task Compositional Task
HCL
CT
MT
SFT
25 20 15 10 5 0 5 10 15 20 25 30
Figure 8: Accuracy gain (%) on prerequisite tasks and
composisiton tasks.
**D.2** **Compositional task is not enough to**
**enhanced atomic skills**
We mention in Section 5.3 that the compositional
task improves atomic skills, but the effect is limited. It can be seen from Tab. 9 that although the
AL method learn the COT form to answer, it has
not acquired complete knowledge, hence it cannot
result in a completely correct answer.
**D.3** **Abnormal Results in S-Mul**
**C.2** **Human Evaluation for Error Analysis**
We recruite human evaluators to do error analysis
on math word problem. All evaluators possess sufficient knowledge of mathematics and are provided
with the necessary background for the evaluation
criteria. Each item is annotated by at least three
evaluators and inconsistencies will lead to a reassessment.
The evaluation of the response is divided into
four types: 1.Atomic skill error: an error occurs
within the atomic skill segment, e.g., an error in performing complex multiplication; 2.Question Misunderstood: the wrong value is used due to a misunderstanding of the question; and 3.Reason error:
We mention in Section 5.3 that HCL exhibit abnormal decline in S-Mul compared to ST. We observe
that most of the error comes from the incorrect of
magnitude. For example, HCL model reponse in
“0.00261” when calculating “2.61 * 0.01” while the
correct answer is “0.0261”. We argue that this is primarily because the special operation S-Mul rarely
occurs in compositional data, as we sample all values uniformly. Whereas in the skill learning phase,
4https://github.com/vllm-project/vllm
-----
we construct a large amount of training data for the
S-Mul operation. It indicates that the property of
avoiding catastrophic forgetting mentioned in Section 5.3 is conditional. The distribution of atomic
skills in compositional data being the same as in
the prerequisite task. The distribution of atomic
skills within composite data needs to be consistent
with the distribution in the prerequisite task, otherwise the phenomenon of catastrophic forgetting
may still occur.
**D.4** **Additional Examples on MWP**
We supplement § 5.1 with more complete examples,
as shown in Tab. 10.
-----
# Method Response COT CORRECT
AL 495.3 + 996.8 = 1491.1 - ✗
1
HCL 495.3 + 996.8 = 1492.1 - ✓
AL 6.8 0.1 = 68.33 - ✗
2
HCL 68 0.1 = 68 - ✓
AL 18.4 * 9.945 = 18.4 * 9 + 18.4 * 0.9 + 18.4 * 0.04 + 18.4 * 0.005 = 166.6 + 16.66 + ✓ ✗
0.736 + 0.092 = 183.26 + 0.736 + 0.092 = 183.996 + 0.092 = 184.088
3
HCL 18.4 * 9.945 = 18.4 * 9 + 18.4 * 0.9 + 18.4 * 0.04 + 18.4 * 0.005 = 165.6 + 16.56 + ✓ ✓
0.736 + 0.092 = 182.16 + 0.736 + 0.092 = 182.896 + 0.092 = 182.988
AL 69.4 * 81.8 = 69.4 * 80 + 69.4 * 1 + 69.4 * 0.8 = 5552 + 6.94 + 55.52 = 5618.94 + 55.52 ✓ ✗
= 5674.462
4
HCL 69.4 * 81.8 = 69.4 * 80 + 69.4 * 1 + 69.4 * 0.8 = 5552 + 69.4 + 55.52 = 5621.4 + 55.52 ✓ ✓
= 5676.92
AL 2.94 * 8.21 = 2.94 * 8 + 2.94 * 0.2 + 2.94 * 0.01 = 23.48 + 0.588 + 0.0294 = 24.068 + ✓ ✗
0.0294 = 24.0974
5
HCL 2.94 * 8.21 = 2.94 * 8 + 2.94 * 0.2 + 2.94 * 0.01 = 23.52 + 0.588 + 0.0294 = 24.108 + ✓ ✓
0.0294 = 24.1374
AL 539.25 - 418.4 = 120.8 - ✗
6
HCL 539.25 - 418.4 = 120.85 - ✓
Table 9: Example of the response in prerequisite task of different training strategies. COT indicates whether the
model answer with a COT format and CORRECT indicates whether the final answer is correct. We mark the areas
that led to errors in red.
-----
Method Response SKILL CORRECT
Jill gets paid $631 per hour to teach and $30 to be a cheerleading coach. If she works
Question 38 weeks a year, 32 hours a week as a teacher and 15 hours a week as a coach, what’s
her annual salary?
Jill works 38 weeks a year * 32 hours a week = 1304 hours a year as a teacher. She
earns 1304 hours a year * $631 a hour = $800800 a year as a teacher. As a coach,
Jill works 15 hours a week * 38 weeks a year = 570 hours a year. She earns 570
hours a year * $30 a hour = $17100 a year as a coach. Therefore, Jill’s annual salary
is $17100 + $800800 = $802900.
As a teacher, Jill works 38*32 = 38 * 30 + 38 * 2 = 1140 + 76 = 1216 weeks. She
earns 1216*$631 = 1216 * 600 + 1216 * 30 + 1216 * 1 = 731600 + 36480 + 1216
= 768080 + 1216 = 769296. As a coach, Jill works 38*15 = 38 * 10 + 38 * 5 = 380 +
190 = 570 weeks. She earns 570*$30 = 570 * 30 + 570 * 0 = 17100 + 0 = 17100.
Annually, she earns $769296+$17100 = $786396
Jill earns 38 weeks * 32 hours/week = 38 * 30 + 38 * 2 = 1140 + 76 = 1216 hours a
year as a teacher. She earns 1216 hours/year * $631/hour = 1216 * 600 + 1216 * 30
+ 1216 * 1 = 729600 + 36480 + 1216 = 766080 + 1216 = 767296 a year as a teacher.
She earns 15 hours/week * $30/hour = 15 * 30 = 15 * 30 + 15 * 0 = 450 + 0 = 450 a
week as a coach. In a year, she earns $450/week * 38 weeks/year = 450 * 38 = 450
- 30 + 450 * 8 = 13500 + 3600 = 17100 as a coach. Her total salary is $767296 +
$17100 = 784396.
ST
AL
HCL
Question [A raspberry bush has 911 clusters of 637 fruit each and 797 individual fruit scattered]
across the bush. How many raspberries are there total?
There are 911 clusters of raspberries, each with 637 fruit, so there are 911 * 637 =
ST 545702 raspberries in the clusters. There are also 797 individual raspberries, so
there are a total of 545702 + 797 = 553699 raspberries on the bush.
There are 911 clusters of raspberries on the bush, so there are 911 * 637 = 911 *
600 + 911 * 30 + 911 * 7 = 546600 + 27330 + 6377 = 573930 + 6377 = 580307
raspberries in the clusters. There are also 797 individual raspberries scattered across
the bush, so there are a total of 580307 + 797 = 581104 raspberries.
First find the total number of fruit in the clusters: 911 clusters * 637 fruit/cluster
= 911 * 600 + 911 * 30 + 911 * 7 = 546600 + 27330 + 6377 = 573930 + 6377 =
580307 fruit. Then add the number of individual fruit to find the total number of
fruit: 580307 fruit + 797 fruit = 581104 fruit.
AL
HCL
Question [A glass of milk is 668 ounces of milk. John drinks 671 glasses of milk. If milk has]
949 calories per ounce how many calories did he consume?
He drank 671*668= 405508 ounces of milk. That means he consumed
ST
405508*949=36545720 calories.
He drank 668*671= 668 * 600 + 668 * 70 + 668 * 1 = 400800 + 46760 + 668 =
447560 + 668 = 448228 ounces of milk. That means he consumed 448228*949 =
448228 * 900 + 448228 * 40 + 448228 * 9 = 393405200 + 17929120 + 3934052
= 411334320 + 3934052 = 415268372 calories.
He drank 671*668 = 671 * 600 + 671 * 60 + 671 * 8 = 402600 + 40260 + 5368 =
442860 + 5368 = 448228 ounces of milk. So he consumed 448228*949= 448228 *
900 + 448228 * 40 + 448228 * 9 = 403405200 + 17929120 + 4034052 = 421334320
+ 4034052 = 425368372 calories.
AL
HCL
Table 10: Example of the response in complex reasoning task of different training strategies. SKILL indicates
whether the model fail in atomic skills (arithmetic). CORRECT indicates whether the final answer is correct. We
mark the areas that led to errors in red.
-----
| [
"Yuncheng, Huang",
"Qianyu, He",
"Yipei, Xu",
"Jiaqing, Liang",
"Yanghua, Xiao"
] | 2024-03-14T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2403.09479v1 | https://arxiv.org/abs/2403.09479 | https://www.semanticscholar.org/paper/7de1c68121c19529ef5610bb4041f215db464f6b |
Lean4trace: Data augmentation for neural theorem proving in Lean | Integrating large language models as proof assistants with theorem provers has shown great promise. However, one of the major challenges in this field is the scarcity of training data. To address this, we release a new open-source tool, *Lean4trace*, for training data extraction from Lean 4 sources. Unlike previous approaches, *Lean4trace* is deeply integrated into the Lean elaborator, allowing us to modify proofs on-the-fly. Leveraging this feature, we propose two methods of data augmentation in Lean: (1) decomposing composite proof steps into multiple simpler steps; (2) testing existing proof automation tactics at each proof state and collecting the successful ones. Models trained on this augmented data are capable of proving 58.0% of theorems from a hold-out subset of Mathlib and 35.6% of the test subset of the MiniF2F benchmark. | null | # Lean4trace: Data augmentation for neural theorem proving in Lean
**Vasilii Nesterov** [1] **Yermek Kapushev** [2] **Mikhail Burtsev** [3]
**Abstract**
Integrating large language models as proof assistants with theorem provers has shown great
promise. However, one of the major challenges
in this field is the scarcity of training data. To
address this, we release a new open-source tool,
_Lean4trace, for training data extraction from_
Lean 4 sources. Unlike previous approaches,
_Lean4trace is deeply integrated into the Lean elab-_
orator, allowing us to modify proofs on-the-fly.
Leveraging this feature, we propose two methods
of data augmentation in Lean: (1) decomposing
composite proof steps into multiple simpler steps;
(2) testing existing proof automation tactics at
each proof state and collecting the successful ones.
Models trained on this augmented data are capable of proving 58.0% of theorems from a hold-out
subset of Mathlib and 35.6% of the test subset of
the MiniF2F benchmark.
**1. Introduction**
One of the advantages of mathematics over other sciences
is that the correctness of its results can, in principle, be
verified mechanically. This is particularly desirable because
the standard peer review process inevitably sometimes results in invalid proofs. However, in practice, formalizing
mathematics is a labor-intensive and time-consuming task.
The standard approach involves using interactive theorem
proving (ITP) systems, among which it is worth mentioning
Lean (de Moura et al., 2015), Isabelle (Nipkow et al., 2002),
Coq (Barras et al., 1997), and Metamath (Megill & Wheeler,
2019). The process of theorem proving in such a system is
similar to programming in an IDE: a user interacts with the
system via commands in a formal language, and the system
provides feedback on whether the proof is successful.
In recent years, significant efforts have been made to simplify the formalization process. The most developed li
1Moscow Institute for Physics and Technology 2Yandex
3London Institute for Mathematical Sciences. Correspondence
to: Vasilii Nesterov <[email protected]>.
_AI for MATH Workshop at ICML 2024. Copyright 2024 by the_
author(s)
braries of formalized mathematics now contain more than
100,000 theorems. One example is Mathlib (mathlib Community, 2020), a user-maintained mathematical library for
the Lean theorem prover, which covers a wide range of
mathematical fields.
From another perspective, formal theorem proving, as well
as reasoning in general, remains a significant challenge
for AI systems. Recently, several approaches to this task
have been proposed, all of which are based on transformer
(Vaswani et al., 2023) language models. Modern natural
language models are typically trained on large corpora of
data, while the training data extractable from proofs is relatively scarce. In our paper, we address this challenge by
proposing two methods of data augmentation specific to the
task of formal theorem proving.
In our study, we focus on the Lean[1] theorem prover as an
ITP system and Mathlib as the main source of training data.
Theorems in Lean are typically proved using a sequence of
commands called tactics[2]. When Lean processes a proof
code, it initially parses it into a syntax tree, then elaborates
the tree into an expression, and finally sends the expression
to the kernel to verify that it has the correct type, ensuring
that the provided proof actually proves the claimed theorem.
Specifically, the elaborator processes the sequence of tactics
and constructs a proof term.
Extracting data from Lean source code is technically complex. Among recent projects that facilitate data extraction, notable examples include lean-training-data (Morrison, 2023a) and LeanDojo (Yang et al., 2023). Both projects
are implemented in Lean and utilize Lean’s internal structure
known as InfoTree. The Lean elaborator uses this structure
to store various pieces of information, such as intermediate
proof states and positional information, which are later used
in the user interface. This method offers the advantage of
not requiring the recompilation of the source code; however, it also has two disadvantages. Firstly, certain internal
temporal information that could be useful for training is not
available. Secondly, this setup does not allow us to modify
the proof. For a more reliable and customizable option, we
1In our paper we work only with Lean version 4.6.0.
2For a list of tactics with descriptions, please refer to [https://github.com/haruhisa-enomoto/](https://github.com/haruhisa-enomoto/mathlib4-all-tactics)
[mathlib4-all-tactics](https://github.com/haruhisa-enomoto/mathlib4-all-tactics)
-----
**Lean4trace: Data augmentation for neural theorem proving in Lean**
chose to integrate tracing code into the Lean elaborator’s
source code. This approach enables us to apply various
proof modifications and trace training data during the elaboration process, with full access to the elaborator’s state. We
release our extraction tool called Lean4trace.
Most prior studies (Polu et al., 2023; Lample et al., 2022;
Yang et al., 2023) in theorem proving in Lean exclusively
relied on human-written proofs from Mathlib as the main
source of training data. Although it is the largest and highestquality source of formalized math in Lean, the proofs in
Mathlib are often compressed, meaning that we can potentially extract more than one proof state from a single tactic.
For example, the most frequently used tactics in the library
are rw and simp, both of which take a list of lemmas and
apply them one by one. Thus, they can be replaced by a
sequence of individual tactic applications. Even humans,
when searching for a proof, would likely try to apply lemmas one by one. Therefore, such proof rewriting not only
provides more training data for the model but also makes
the data more meaningful and easier for the model to understand. This could be especially useful for retrieval models,
as the model is then trained to retrieve lemmas that are
conceptually related to the current state, rather than some
intermediate one. We refer to this as tactic decomposition
**data augmentation.**
Some tactics in Lean, such as rw, apply, and exact,
perform basic operations corresponding to single reasoning
steps. Others, which we refer to as ’automatic’ in this
paper, are capable of more complex reasoning. Examples of
such tactics include solve by elim, which recursively
discharges the goal using the local context; tauto, which
derives the goal from hypotheses in propositional logic; and
aesop, which performs tree-based proof search using a set
of predefined rules. There are many other automatic tactics,
including domain-specific ones. Although they are relatively
rare in Mathlib, we found that these tactics can prove a
notable fraction of goals. Several factors contribute to their
limited presence in human-written proofs: some of them are
relatively new, proofs using automatic tactics are less robust,
and these tactics generate longer proof terms. Nonetheless,
they appear useful for automated proof search. Therefore,
we tested each automatic tactic against each proof state in
the data and collected all successful examples. We refer to
this process as automatic tactic data augmentation.
In summary, this paper makes the following contributions:
- We introduce Lean4trace, a novel tool for data extraction from Lean 4 source code, seamlessly integrated
into the Lean elaborator. It enables interaction with
existing proofs (e.g., testing automatic tactics) and extracts more proof states than previous extraction tools.
- We propose two methods of data augmentation in au
tomated theorem proving: automatic tactic data aug**mentation and tactic decomposition data augmenta-**
**tion.**
- We demonstrate that the proposed data augmentations enhance the performance of the ReProver model
on Mathlib (+9.4% Pass@1) and MiniF2F (+9.1%
Pass@1) benchmarks.
**2. Related work**
The available data for Lean is rather scarce but complex to
learn as it requires advanced reasoning abilities. To tackle
this problem, some papers try to improve, modify or enlarge
training data. For example, in GPT-f (Polu et al., 2023)
they used Expert Iteration to generate new proofs for the
theorems from the training set using the model at the current
iteration. Some of the generated proofs are shorter than
the original ones. This makes the proof search faster as
the model tends to find shorter proofs. In (Wu et al., 2022)
they tried to mine new theorems with proofs by utilizing
large language model (LLM). The idea is to take theorems
in natural language and prompt LLM to translate them to
formal language, and then search for the proofs using pretrained theorem prover. As a result, they obtained larger
dataset with theorems from different domain. Some papers
generate synthetic data (most commonly, equalities and
inequalities as it is easy in this case to generate the proof),
e.g. (Polu et al., 2023), (Lample et al., 2022).
The GPT-f model was trained on additional proof artifacts
collected as described in (Han et al., 2022). The artifacts
are some artificially generated problems, for example, predicting missing proof term or type. Such data is not directly
connected to proof generation but allows to pretrain the
language model on formal language domain.
Another interesting idea was proposed in (Jiang et al., 2022).
The authors modified the proofs by using hammers (Paulsson & Blanchette, 2012) where possible. As a result, the
model trained on such data learns to call hammer and finds
more proofs.
Most recent papers almost fully rely on LLM’s capabilities
to few-shot learning. In (Thakur et al., 2023) the pre-trained
language model serves as an intellectual proof step generator.
It is prompted to generate proof steps for the current proof
state given previous attempts, error messages and retrieved
lemmas.
While most of the previously described models generate
proof step-by-step, there are few papers that attempt to
generate the whole proof at once. In (Jiang et al., 2023)
they assumed that each formal theorem is equipped with
informal statement. Having this they first prompt LLM
to generate informal proof sketch, i.e. high-level proof
-----
**Lean4trace: Data augmentation for neural theorem proving in Lean**
plan in natural language, and then use SledgeHammer to
proof individual high-level steps in the proof. The papers
(Zheng et al., 2023; First et al., 2023) prompts LLM to
interact with the proof assistant in a chat manner. The idea
is to provide feedback to the language model to fix the
error in the proof. The work (Xin et al., 2023) follows
(Jiang et al., 2023) assuming that each formal statement has
its corresponding informal statement and informal proof.
They decompose the proof into steps, formalize and prove
each step using LLM and evolving library of skills (verified
lemmas database, problems statements and newly generated
lemmas by prover).
The authors of (Azerbayev et al., 2023) collect large dataset
of math and code related texts and train LLM called Llemma.
The model can be used to solve various math problems
beyond formal theorem proving.
The vast majority of recent papers rely on large language
models. While they are very useful at generating proof
plans, they can be too expensive to solve individual steps.
This can be done more efficiently with smaller language
models, like in (Yang et al., 2023), as we can generate and
check more tactics at the same amount of time. To improve
such models, we aim to generate larger and simpler training
dataset.
**3. Methods**
**3.1. Experimental setup and baseline**
The main goal of our paper is to propose and evaluate various training data modifications for automated theorem proving in Lean. We chose LeanDojo and ReProver, proposed in
(Yang et al., 2023), as the baseline data extraction tool and
prover, respectively, for two reasons: the experiments do
not require large computational resources, and the code is
open-sourced, making it easy to build upon. Note, however,
that we use LeanDojo only for interaction with Lean, while
utilizing Lean4trace for data extraction.
In our experiments we follow the simplified pipeline of ReProver (Yang et al., 2023). We fine-tune the ByT5-small
model (Xue et al., 2022) on the data extracted from Mathlib. The model is trained to generate a proof step, i.e., a
single tactic application, conditioned on a proof state that
includes the local context—information that appears in the
InfoView as a user proves the theorem. Note, that in ReProver the prover consists of 2 models: tactic generator and
retriever that tries to retrieve relevant lemmas. However, in
our work use only tactic generator. We evaluate the model
on the LeanDojo benchmark, which consists of 2,000 randomly selected theorems from Mathlib, and on the MiniF2F
benchmark (Zheng et al., 2021). During the proving process,
the model generates multiple tactic candidates at each step,
which are used in a standard best-first search algorithm to
find a proof with a cumulative log-prob as a ranking criterion, see details in (Polu & Sutskever, 2020). We evaluate
the Pass@1 metric: the fraction of theorems which can be
proven by the prover within 10 minutes in one attempt.
**3.2. Data extraction in Lean**
The elaborator is a part of the Lean system that infuses
syntactic objects with meaning. In particular, the elaborator
processes tactic blocks, applying tactics and constructing a
proof term as a result. We modify the Lean elaborator so
that at each tactic invocation, it traces:
- The current proof state, obtained from the elaborator’s
state.
- The proof step (tactic) that the elaborator is going to
process.
- The used premises, which we extract by finding all
named constants in the proof step.
Additionally, we trace some meta information such as the
name of the file, module, theorem being processed, and the
position of the proof step.
To enable the use of Lean4trace for generating training data
for retrieval-augmented models as described in (Yang et al.,
2023), we take the following steps. First, we utilize the
**import-graph (Morrison, 2023b) tool to extract the import**
structure, which is necessary for determining which lemmas
are accessible within a given theorem. Finally, to build
the corpus of all declared definitions that can be used as
premises, we use the lean-training-data (Morrison, 2023a)
tool.
From the traced data, we build a canonical dataset, where
”canonical” means that this data is obtained from the original human-written proofs for further comparison with augmented data. It contains all proof states visible to the user.
**3.3. Tactic decomposition data augmentation**
To test if tactic decomposition can help the model learn
better, we focus on the two most frequent tactics in Mathlib:
simp and rw. Both of these tactics take a list of rules as
an argument but process them differently. The rw tactic
takes a list of rules and rewrites the goal by applying them
in a given order. Meanwhile, the simp tactic, given a set of
rules, applies them in an order determined by its heuristics
and can apply each rule multiple times. Therefore, we
decompose proof steps containing these tactics in different
ways, which we describe below. These two tactics are the
most frequent tactics in Mathlib, and in most cases they are
applied to the list of multiple rules, so their decomposition
gives a notable amount of new data.
-----
**Lean4trace: Data augmentation for neural theorem proving in Lean**
The decomposition procedure for rw is straightforward: we
replace a proof step of the form rw [h1, ..., hn] with a
sequence of proof steps rw [h1], . . ., rw [hn]. The rw
tactic is implemented as a thin wrapper over rewrite,
which attempts to close the goal with rfl at the end. To
decompose rw, we modify the code of rewrite so that
when it takes a list of rules containing more than one rule, it
applies single-rule rw multiple times, tracing intermediate
proof states. As a result, our modification affects rw and
some of its variations, such as erw and nth rw.
_Figure 1. Process of expanding the simp tactic. In the original_
proof, the proof state s2 is obtained by applying simp [h1, h2,
h3] in state s1. Using BFS, we find a sequence simp [h2];
simp [h1]; simp [h2] which also leads to s2. In this example, h2 is used twice, and h3 can be omitted. Such situations
actually occur in Mathlib proofs.
The case of simp is more complex because the order of rule
applications is not determined by the user. While the order
could be extracted after the tactic’s invocation using Lean
meta-programming, we use a different approach here. Given
the list of rules, we employ a Breadth-First Search algorithm,
treating proof states as vertices with edges corresponding to
single-rule simp applications with rules from the list, see
the Figure 1. Once a sequence leading to the target proof
state is found, the search stops, and the proof states forming
the sequence are traced. This method has the additional
advantage of finding the shortest such sequence.
**3.4. Automatic tactics data augmentation**
To train our model to use automatic tactics, we mine additional data using the following procedure: at each tactic
invocation, before applying the original tactic, we attempt
to apply tactics from a fixed list of automatic tactics. If
the application is successful, meaning the tactic has closed
the goal, we trace this automatic proof step. If multiple
automatic tactics are able to close the goal, we trace all of
them.
In some cases, a single automatic tactic can replace a sequence of multiple tactics in the original proof. Typically,
the automatic tactic can close the goal from each intermediate proof state. Instead of keeping only the first automatic
tactic application that closes the goal, we retain all these data
points to ensure that the model learns when the automatic
tactic is applicable.
Sometimes, tactics can take a very long time or even hang
indefinitely. To address this, we set a time limit of 10 seconds for each automatic tactic application. Since Lean 4
does not yet support timeouts internally, we use an external
Python script as a temporary solution. This script monitors
the building process and restarts it if it gets stuck in a proof
state, adding the problematic proof state to a blacklist. Proof
states on the blacklist are not tested again.
**4. Experiments and results**
**4.1. Dataset’s statistics**
Firstly, to give an overview of our dataset, we provide the
frequencies of the most popular tactics in Mathlib in Table 1.
Notably, rw and simp cover more than 28% of all proof
states extracted from Mathlib, which motivates us to focus
on them in tactic decomposition augmentation.
Tactic Frequency, %
rw 15.2
simp 13.1
_· (cdot)_ 10.1
exact 9.8
have 5.1
apply 3.6
refine´ 3.4
intro 3.0
simpa 2.1
rfl 2.0
ext 1.9
obtain 1.8
rintro 1.8
rcases 1.6
dsimp 1.4
simp rw 1.4
cases 1.3
refine 1.1
let 1.0
_Table 1. Most frequent tactics in Mathlib_
Next, we present statistics on the usability of each automatic
tactic in Table 2. Note that the aesop tactic alone can close
more than 20% of proof states in Mathlib. Unfortunately,
different general-purpose automatic tactics tend to close
-----
**Lean4trace: Data augmentation for neural theorem proving in Lean**
similar sets of goals. In total, the automatic tactics we used
can close 23.6% of proof states. This demonstrates that
such tactics are quite powerful, and any prover could benefit
from using them. Additionally, they can be considered as a
baseline in automated theorem proving in Lean.
Automatic tactic Solved goals, %
aesop 21.8
simp all 16.6
simp [*] 13.5
simp arith 9.6
tauto 8.9
solve by elim 7.8
continuity 5.7
norm num 5.6
abel 1.5
omega 1.4
nlinarith 1.1
linarith 1.0
ring 0.8
decide 0.6
group 0.5
_Table 2. Number of goals can be solved by auto tactics._
**4.2. Comparison with LeanDojo extraction**
We compare our approach with LeanDojo, our main baseline. The number of extracted proof states and the resources
required for each augmentation are provided in Table 3.
We refer to the dataset extracted by our tracing without
augmentations as canonical. Note that our approach requires considerably less RAM, making it possible to run
on a regular PC. The key factor that slows down our data
augmentation mining is the lack of timeout mechanisms in
Lean 4, the implementation of which would greatly speed
up the process.
The difference in the number of traced proof states between
the LeanDojo approach and ours mainly comes from tactic combinators. For example, the construction tac1 <;>
tac2 in Lean means that tac2 should be applied to all
goals produced by tac1. In LeanDojo tracing, such a construction is treated as a single proof step, while we trace
separate proof steps for each produced goal.
The last three rows in Table 3 refer to augmentations, which
we mix with the canonical subset during training. The
provided number of proof states represents the number of
**new proof states in relation to the canonical subset.**
3The time is measured using 32-core CPU.
4This does not include time spent for building Mathlib.
5LeanDojo tracing requires 48 GB even when running on a
single core.
Dataset # proof states Time[3] RAM, GB
LeanDojo tracing 273k 1 h [4] 48[5]
Canonical 352k 31 min 17
rw decomposition 110k 34 min 18
simp decomposition 37.7k 11 h 24
Automatic tactics 318k 7 days 10
_Table 3. Resources required for tracing._
**4.3. Theorem proving**
In this subsection we discuss models trained on augmented
datasets and their proving capabilities. The Pass@1 metric
is provided in Table 4 (note, that the numbers are for the
model without retrieval). Automatic tactics augmentation
provides +1.7% improvement over canonical tracing, while
tactics decomposition gives +2% improvement on Mathlib
dataset.
Model & training data Mathlib MiniF2F
ReProver
LeanDojo data[6] 48.6 26.5[7]
Canonical 56.3 **35.6**
Canonical + Tactics decomposition **58.0** 30.0
Canonical + Automatic tactics 57.6 33.6
Thor + expert iteration (Wu et al., 2022) 35.2
COPRA + GPT-4 (Thakur et al., 2024) 30.7
Thor (Jiang et al., 2022) 29.9
Lean Expert Iteration (Polu et al., 2023) 29.6
_Table 4. Pass@1 for theorem proving._
In addition to Mathlib, we test our models on a test subset
of the MiniF2F benchmark, which consists of formalized
Olympiad-level mathematics problems. A notable feature of
this benchmark is that its Lean 4 version was released after
all the data the model was trained on (including pre-training)
had been gathered. Therefore, there are no Lean 4 proofs for
MiniF2F available on the internet so far, ensuring that the
model has never encountered these proofs during training.
There is a notable increase in metrics on MiniF2F, even
though no extra fine-tuning was performed specifically for
it. We achieve our best results with the model trained on data
without augmentations: it proves 87 of the 244 theorems
presented in the MiniF2F test subset. We examined the
successful proofs found by the model and discovered that
most of them rely on automatic tactics (see the Figure 2).
6The results are taken from the paper (Yang et al., 2023) and
pertain to a model that also relies on an additional model, which
retrieves potentially useful auxiliary theorems and definitions at
each step.
7This results were obtained using the Lean 3 version of the
MiniF2F benchmark.
-----
**Lean4trace: Data augmentation for neural theorem proving in Lean**
This demonstrates that modern tools for proof automation
in Lean are powerful enough to enable a model equipped
with them to prove a significant portion of the dataset with
minimal inherent reasoning.
At the same time, the quality degrades when we apply augmentations. This requires further investigation, but it might
be explained as follows. The domain of MiniF2F problems differs from that of Mathlib, and when augmentations
are applied, the proportion of proof steps that are useful
for MiniF2F decreases: for example, simp is very rare in
found proofs for MiniF2F but occur frequently in both augmentations. In contrast, norm num is widely used tactic in
MiniF2F which is rare in Mathlib. This reduction in relevant proof states could lead to the observed degradation in
performance.
For comparison we also provide the Pass@1 metric on
MiniF2F reported in prior studies.
**5. Conclusion**
In this paper, we present Lean4trace, a novel tool for data
extraction and augmentation tailored for training neural theorem provers in Lean. Our experimental results demonstrate
that models trained using our dataset achieve a 9% higher
performance on the MiniF2F benchmark compared to ReProver (Yang et al., 2023), when trained and evaluated under
identical conditions. While proposed augmentations provides improvements on Mathlib dataset, they may degrade
the model when evaluated on a dataset from different distribution. Nevertheless, our tool allows gathering a more complete set of proof states in canonical setup and significantly
reduces computational resource requirements compared to
LeanDojo (Yang et al., 2023), making it feasible to run on a
modern PC. We believe that these advancements will lower
the barrier to entry in this field, fostering more accessible
and widespread research in neural theorem proving.
**6. Acknowledgements**
This work was supported by a grant for research centers in
the field of artificial intelligence, provided by the Analytical
Center for the Government of the Russian Federation in
accordance with the subsidy agreement (agreement identifier 000000D730324P540002) and the agreement with the
Moscow Institute of Physics and Technology dated November 1, 2021 No. 70-2021-00138.
**References**
Azerbayev, Z., Schoelkopf, H., Paster, K., Dos Santos, M.,
McAleer, S., Jiang, A. Q., Deng, J., Biderman, S., and
Welleck, S. Llemma: An open language model for mathematics. arXiv preprint arXiv:2310.06786, 2023.
theorem mathd_numbertheory_135
(n A B C : Nat)
(h0 : n = 3ˆ17 + 3ˆ10)
(h1 : 11 | (n + 1))
(h2 : [A,B,C].Pairwise (· ̸= ·))
(h3 : {A,B,C} ⊆ Finset.Icc 0 9)
(h4 : Odd A ∧ Odd C)
(h5 : ¬ 3 | B)
(h6 : Nat.digits 10 n = [B,A,B,C,C,A,C,B,A]) :
100 * A + 10 * B + C = 129 := by
aesop -- general-purpose automatic tactic
theorem mathd_numbertheory_229 :
(5ˆ30) % 7 = 1 := by
decide -- tactic that proves some "decidable" goals
_-- here it just computes (5ˆ30) % 7_
theorem mathd_algebra_33
(x y z : Real)
(h0 : x ̸= 0)
(h1 : 2 * x = 5 * y)
(h2 : 7 * y = 10 * z) :
z / x = 7 / 25 := by
field_simp [h0, h1]
linarith -- tactic that solves linear equations
_Figure 2. Examples of proofs found by our model. Presented_
proofs almost completely rely on automatic tactics.
Barras, B., Boutin, S., Cornes, C., Courant, J., Filliatre, J.-C.,ˆ
Gimenez, E., Herbelin, H., Huet, G., Mu´ noz, C., Murthy,˜
C., Parent-vigouroux, C., Paulin-Mohring, C., Sa¨ıbi, A.,
and Werner, B. The Coq proof assistant reference manual
: Version 6.1. 06 1997.
de Moura, L. M., Kong, S., Avigad, J., van Doorn, F.,
and von Raumer, J. The lean theorem prover (system
[description). In CADE, 2015. URL https://api.](https://api.semanticscholar.org/CorpusID:232990)
[semanticscholar.org/CorpusID:232990.](https://api.semanticscholar.org/CorpusID:232990)
First, E., Rabe, M., Ringer, T., and Brun, Y. Baldur: Wholeproof generation and repair with large language models.
In Proceedings of the 31st ACM Joint European Software
_Engineering Conference and Symposium on the Founda-_
_tions of Software Engineering, pp. 1229–1241, 2023._
Han, J. M., Rute, J., Wu, Y., Ayers, E. W., and Polu, S.
Proof artifact co-training for theorem proving with language models. In International Conference on Learning
_Representations, 2022._
Jiang, A. Q., Li, W., Tworkowski, S., Czechowski, K.,
Odrzygo´zd´ z, T., Miło´ s, P., Wu, Y., and Jamnik, M. Thor:´
Wielding hammers to integrate language models and automated theorem provers. Advances in Neural Information
_Processing Systems, 35:8360–8373, 2022._
Jiang, A. Q., Welleck, S., Zhou, J. P., Li, W., Liu, J., Jamnik,
M., Lacroix, T., Wu, Y., and Lample, G. Draft, Sketch,
-----
**Lean4trace: Data augmentation for neural theorem proving in Lean**
and Prove: Guiding formal theorem provers with informal proofs. In International Conference on Learning
_[Representations, 2023. URL https://doi.org/10.](https://doi.org/10.48550/arXiv.2210.12283)_
[48550/arXiv.2210.12283.](https://doi.org/10.48550/arXiv.2210.12283)
Lample, G., Lacroix, T., Lachaux, M.-A., Rodriguez, A.,
Hayat, A., Lavril, T., Ebner, G., and Martinet, X. Hypertree proof search for neural theorem proving. Advances in
_neural information processing systems, 35:26337–26349,_
2022.
mathlib Community, T. The lean mathematical library. In
_Proceedings of the 9th ACM SIGPLAN International Con-_
_ference on Certified Programs and Proofs. ACM, jan_
[2020. doi: 10.1145/3372885.3373824. URL https:](https://doi.org/10.1145%2F3372885.3373824)
[//doi.org/10.1145%2F3372885.3373824.](https://doi.org/10.1145%2F3372885.3373824)
Megill, N. and Wheeler, D. A. Metamath: a computer
_language for mathematical proofs. Lulu. com, 2019._
Morrison, K. lean-training-data. [https://github.](https://github.com/semorrison/lean-training-data)
[com/semorrison/lean-training-data,](https://github.com/semorrison/lean-training-data)
2023a.
Morrison, K. import-graph. [https://github.](https://github.com/leanprover-community/import-graph)
[com/leanprover-community/import-graph,](https://github.com/leanprover-community/import-graph)
2023b.
Nipkow, T., Paulson, L., and Wenzel, M. Isabelle/HOL —
_A Proof Assistant for Higher-Order Logic. 01 2002. doi:_
10.1007/3-540-45949-9.
Paulsson, L. C. and Blanchette, J. C. Three years of experience with sledgehammer, a practical link between automatic and interactive theorem provers. In Proceedings
_of the 8th International Workshop on the Implementa-_
_tion of Logics (IWIL-2010), Yogyakarta, Indonesia. EPiC,_
volume 2, 2012.
Polu, S. and Sutskever, I. Generative language modeling for automated theorem proving. _arXiv preprint_
_arXiv:2009.03393, 2020._
Polu, S., Han, J. M., Zheng, K., Baksys, M., Babuschkin, I.,
and Sutskever, I. Formal mathematics statement curriculum learning. In The Eleventh International Conference
_[on Learning Representations, 2023. URL https://](https://openreview.net/forum?id=-P7G-8dmSh4)_
[openreview.net/forum?id=-P7G-8dmSh4.](https://openreview.net/forum?id=-P7G-8dmSh4)
Thakur, A., Wen, Y., and Chaudhuri, S. A language-agent
approach to formal theorem-proving. _arXiv preprint_
_arXiv:2310.04353, 2023._
Thakur, A., Wen, Y., and Chaudhuri, S. A language-agent
[approach to formal theorem-proving, 2024. URL https:](https://openreview.net/forum?id=XCMbagV0No)
[//openreview.net/forum?id=XCMbagV0No.](https://openreview.net/forum?id=XCMbagV0No)
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones,
L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention
is all you need, 2023.
Wu, Y., Jiang, A. Q., Li, W., Rabe, M., Staats, C., Jamnik, M., and Szegedy, C. Autoformalization with large
language models. Advances in Neural Information Pro_cessing Systems, 35:32353–32368, 2022._
Xin, H., Wang, H., Zheng, C., Li, L., Liu, Z., Cao, Q.,
Huang, Y., Xiong, J., Shi, H., Xie, E., et al. Lego-prover:
Neural theorem proving with growing libraries. arXiv
_preprint arXiv:2310.00656, 2023._
Xue, L., Barua, A., Constant, N., Al-Rfou, R., Narang,
S., Kale, M., Roberts, A., and Raffel, C. ByT5: Towards a token-free future with pre-trained byte-to-byte
models. _Transactions of the Association for Com-_
_putational Linguistics, 10:291–306, 2022._ doi: 10.
[1162/tacl a 00461. URL https://aclanthology.](https://aclanthology.org/2022.tacl-1.17)
[org/2022.tacl-1.17.](https://aclanthology.org/2022.tacl-1.17)
Yang, K., Swope, A. M., Gu, A., Chalamala, R., Song,
P., Yu, S., Godil, S., Prenger, R., and Anandkumar, A.
LeanDojo: Theorem proving with retrieval-augmented
language models, 2023.
Zheng, C., Wang, H., Xie, E., Liu, Z., Sun, J., Xin, H.,
Shen, J., Li, Z., and Li, Y. Lyra: Orchestrating dual
correction in automated theorem proving. arXiv preprint
_arXiv:2309.15806, 2023._
Zheng, K., Han, J. M., and Polu, S. minif2f: a cross-system
benchmark for formal olympiad-level mathematics. In
_International Conference on Learning Representations,_
2021.
-----
| [
"Vasilii, Nesterov",
"Yermek, Kapushev",
"Mikhail, Burtsev"
] | 2024-06-13T00:00:00 | null | false | 0 | 0 | null | https://openreview.net/forum?id=sjLWmLeJ6R | null | null |
Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning | Supervised fine-tuning enhances the problem-solving abilities of language models across various mathematical reasoning tasks. To maximize such benefits, existing research focuses on broadening the training set with various data augmentation techniques, which is effective for standard single-round question-answering settings. Our work introduces a novel technique aimed at cultivating a deeper understanding of the training problems at hand, enhancing performance not only in standard settings but also in more complex scenarios that require reflective thinking. Specifically, we propose reflective augmentation, a method that embeds problem reflection into each training instance. It trains the model to consider alternative perspectives and engage with abstractions and analogies, thereby fostering a thorough comprehension through reflective reasoning. Extensive experiments validate the achievement of our aim, underscoring the unique advantages of our method and its complementary nature relative to existing augmentation techniques. | reflective augmentation is proposed, a method that embeds problem reflection into each training instance, thereby fostering a thorough comprehension through reflective reasoning and its complementary nature relative to existing augmentation techniques. | ## Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning
**Zhihan Zhang[ ][1][†], Zhenwen Liang[1][†], Wenhao Yu[2], Dian Yu[2],**
**Mengzhao Jia[1][†], Dong Yu[2], Meng Jiang[1]**
1University of Notre Dame 2Tencent AI Lab, Seattle
[email protected]
**Abstract**
Supervised fine-tuning enhances the problemsolving abilities of language models across various mathematical reasoning tasks. To maximize such benefits, existing research focuses
on broadening the training set with various data
augmentation techniques, which is effective for
standard single-round question-answering settings. Our work introduces a novel technique
aimed at cultivating a deeper understanding of
the training problems at hand, enhancing performance not only in standard settings but also in
more complex scenarios that require reflective
thinking. Specifically, we propose reflective
**augmentation, a method that embeds prob-**
lem reflection into each training instance. It
trains the model to consider alternative perspectives and engage with abstractions and analogies, thereby fostering a thorough comprehension through reflective reasoning. Extensive
experiments validate the achievement of our
aim, underscoring the unique advantages of our
method and its complementary nature relative
to existing augmentation techniques.[1]
stacking more training instances does not necessarily lead to a deeper understanding of each problem. Moreover, the scope of resulting models is
confined to single-round question-answering (QA)
settings that primarily require basic forward reasoning skills. Consequently, these methods provide
limited benefits for more complex reflective reasoning scenarios that involve reviewing past steps
for further reasoning, such as addressing follow-up
questions, correcting errors, or leveraging external
feedback (Liang et al., 2024; Wang et al., 2024a).
Similarly, the strategy in human learning is not
always to practice an increasing number of problems (Rohrer and Taylor, 2006). Instead of merely
memorizing superficial solutions to more problems,
it can be more advantageous to gain a deep understanding of the existing problems (Semerci, 2005).
_Reflection, therefore, becomes an essential accom-_
paniment to practice. Stacey et al. (1982) define
reflection as “to review thoughtfully, consider alternatives and follow extensions”, which encourages
learners to contemplate their previous actions to
engage in deeper reasoning, thereby fostering reflective thinking capabilities (Kagan et al., 1964;
Anderson and Fincham, 2014).
Inspired by such human cognition, we propose a
novel training strategy for LMs that integrates reflection into each math problem. Unlike traditional
data expansion methods which operate on the instance dimension by adding more training examples (see Figures 1b & 1c), our approach targets a
complementary direction, i.e., the sequence dimension of the training data. We introduce reflective
_augmentation (RefAug), which appends a reflec-_
tive section to the original answer of each training
instance, advancing model learning beyond mere
answer generation (see Figure 1d). Such a design
not only strengthens the model’s understanding of
the associated knowledge and methodologies in
training problems, but also maintains the inference
efficiency as the model ceases generation before
**1** **Introduction**
The ability to engage in step-by-step reasoning is
pivotal for language models (LMs) to solve mathematical problems (Wei et al., 2022; Kojima et al.,
2022). Supervised fine-tuning, particularly on data
with detailed reasoning paths, effectively advances
the problem-solving performance of LMs (Fu et al.,
2023; Yue et al., 2023). To enlarge such benefits,
most previous efforts focus on creating additional
instances to augment model training (Luo et al.,
2023a; Yu et al., 2024; Mitra et al., 2024; Li et al.,
2024a). While these data expansion approaches
allow LMs to handle a broader range of math problems by increasing the diversity of training data,
† This work was done when Zhihan, Zhenwen, and
Mengzhao were interns at Tencent AI Lab, Seattle.
[1Code and data are available at https://github.com/](https://github.com/ytyz1307zzh/RefAug)
[ytyz1307zzh/RefAug.](https://github.com/ytyz1307zzh/RefAug)
-----
Question Answer Question Answer _Train_ Question Answer Reflection
Question Answer Aug. Aug. Aug. 𝑄!, [𝐴!; 𝑅!]
Question
Question Answer Answer
𝑄!, 𝐴! 𝑄!, 𝐴! & 𝑄!", 𝐴"! 𝑄!, 𝐴! & 𝑄!, 𝐴"! _Test_ Question Answer _(Early Stop)_
(a) Standard data (b) Question augmentation (c) Answer augmentation (d) Reflective augmentation
Figure 1: Question augmentation creates new questions based on existing ones. Answer augmentation re-samples
answers for each problem to increase diversity. Both methods expand the size of the training set. Reflective augmentation appends the original answer with a reflective section, which is complementary to traditional approaches.
Corresponding training sequences are shown in an (input, output) format, where augmented parts are in red.
|Test Question: The unknown number 𝑥satisfies (7𝑥+ 1)(9𝑥+ 1) = 61. Find 𝑥.|Col2|
|---|---|
|After Standard Training Let 𝑡= 8𝑥+ 1. The equation becomes 𝑡−𝑥+ 1 (𝑡+ 𝑥+ 1) = 61. Next, we…??? If we set 𝑡= 7𝑥+ 1, the original equation turns into 𝑡(𝑡+ 2𝑥+ 1) = 61. Then we can get…???|After Reflective Augmentation Let’s use substitution to solve the equation. To create a common structure for easier substitution, we multiply the expressions to unify the coefficient of 𝑥: (7𝑥+ 1)×9 and (9𝑥+ 1)×7, yielding 63𝑥+ 9 63𝑥+ 7 = 61×63. Observing the equation, we notice that by letting 𝑡= 63𝑥+ 8, it can be rewritten as 𝑡+ 1 𝑡−1 = (62 + 1)(62 −1), which simplifies to 𝑡! = 62! ⇒𝑡= ±62. Therefore, 𝑥=" or 𝑥= −$% . (Early stopping: no reflection needed during inference) # &|
Figure 2: The model that learned the standard solution does not fully understand when and how to apply substitution
**_Training Question: Solve the equation 𝑥−99_** 𝑥−101 = 8.
**_Standard Training_** **_Training with Reflective Augmentation_**
Let 𝑡= 𝑥−100. … [omitted, same as standard training] … we get 𝑥= 97 or 103.
**_(Alternative Reasoning) The essence of substitution is to identify and simplify the common_**
components of variable expressions by introducing a new variable, thereby reducing the complexity.
Let’s revisit the original equation. Expressions 𝑥−99 and 𝑥−101 share a similar form: a large
constant offset from 𝑥. Due to the minimal difference between 99 and 101, we can use substitution
**_Training_** Let 𝑡= 𝑥−100 . Then the to transform the expressions into terms with small constants.
equation becomes 𝑡−1 (𝑡+ Therefore, an alternative approach is to substitute 𝑡= 𝑥−99, which transforms the equation
1) = 8 ⇒𝑡[!] −1 = 8. Therefore, into 𝑡𝑡−2 = 8 ⇒𝑡[!] −2𝑡−8 = 0. This can be easily factorized into 𝑡−4 𝑡+ 2 = 0. Hence,
𝑡= ±3, and accordingly we 𝑡= 4as it also simplifies the equation by substituting or −2, leading to the same results 𝑥= 97 or 𝑥 and reducing the scale of the offset terms.103. This alternative approach is equally effective
get 𝑥= 97 or 103.
**_(Follow-up Reasoning) Extending the idea of substitution, consider the equation 𝑥𝑥+ 1 (𝑥+_**
2) 𝑥+ 3 = 360. We notice that 𝑥𝑥+ 3 = 𝑥[!] + 3𝑥, and 𝑥+ 1 𝑥+ 2 = 𝑥[!] + 3𝑥+ 2. Therefore,
to simplify the expression, we set the common term 𝑥[!] + 3𝑥 as 𝑡, which transforms the equation
into 𝑡𝑡+ 2 = 360 ⇒𝑡[!] + 2𝑡−360 = 0 ⇒𝑡= −20 or 18. If 𝑡= −20, then 𝑥[!] + 3𝑥+ 20 = 0. Here,
the discriminant Δ = −71 < 0, resulting in no real solutions for 𝑥. If 𝑡= 18, then 𝑥[!] + 3𝑥−18 = 0,
so 𝑥= 3 or −6. This scenario reiterates the importance of identifying common components of 𝑥 to
streamline the equation through substitution.
**_Test Question: The unknown number 𝑥_** satisfies (7𝑥+ 1)(9𝑥+ 1) = 61. Find 𝑥.
**_After Standard Training_** **_After Reflective Augmentation_**
Let 𝑡= 8𝑥+ 1. The equation Let’s use substitution to solve the equation. To create a common structure for easier substitution,
becomes 𝑡−𝑥+ 1 (𝑡+ 𝑥+ we multiply the expressions to unify the coefficient of 𝑥: (7𝑥+ 1)×9 and (9𝑥+ 1)×7, yielding
**_Inference_** 1) = 61. Next, we…??? 63𝑥+ 9 63𝑥+ 7 = 61×63. Observing the equation, we notice that by letting 𝑡= 63𝑥+ 8, it can
If we set 𝑡= 7𝑥+ 1, the original
equation turns into1) = 61. Then we can get…??? 𝑡(𝑡+ 2𝑥+ be rewritten asTherefore, 𝑥= "# [or][ 𝑥= −]𝑡+ 1 $%&𝑡−1 = (62 + 1)(62 −1)[.][ (Early stopping: no reflection needed during inference)], which simplifies to 𝑡[!] = 62[!] ⇒𝑡= ±62 .
when facing a different scenario. In contrast, the model trained with reflection on the substitution technique gains a
deeper understanding of its principles, patterns, and its flexible application in new contexts.
decoding the reflective section during inference.
Following the definition by Stacey et al. (1982),
these reflective sections include two components:
_alternative and follow-up reasoning. For example,_
Figure 2 shows a scenario where the model struggles to apply the substitution technique in a different context if only rigidly transferring the pattern
from the standard solution. In contrast, training the
model to reflect on an equivalent substitution expression followed by devising a more challenging
equation facilitates a deeper understanding of the
principles and variations of the technique, thereby
enabling flexible adaptation in new contexts.
Extensive experimentation on diverse math reasoning tasks reveals multiple benefits of RefAug:
(1) It boosts the problem-solving performance of
LMs in the standard single-round QA settings,
yielding a +7.2 accuracy gain over direct fine
tuning. (2) It remarkably enhances the LMs’ performance in multiple reflective math reasoning scenarios, where traditional data expansion methods fall
short. (3) Its benefits are complementary to those
of existing data expansion techniques, allowing
for seamless integration that leads to even greater
performance improvements.
**2** **Related Work**
**2.1** **Data Augmentation for Math Reasoning**
Due to the scarcity (Li et al., 2024a) and quality
issues (Fan et al., 2024) of human-annotated data,
data augmentation is a prevalent strategy in math
reasoning tasks. Most research focused on creating additional training instances, typically using
advanced LMs to minimize human effort. This include question augmentation which generates new
questions from existing ones (Yu et al., 2024; Tang
-----
**_Question_** **_Answer_** **_Reflection_**
Original Problem Initial Solution Alternative Reasoning Follow-up: Abstraction
Find the maximum of Complete the square: Find the derivative Find the maximum of
. .
or
The maximum is at the Solve :
Follow-up: Analogy
vertex is the maximum.
Find all extrema of
Figure 3: Relationship between the original instance and the reflective section. Either abstraction or analogy is
annotated for each instance. Core ideas are shown but textual explanations (like those in Figure 2) are omitted.
et al., 2024; Li et al., 2024a; Liu et al., 2024; Huang
et al., 2024b), and answer augmentation which resamples the answer for each question (Yuan et al.,
2023; Li et al., 2023; Yu et al., 2024). Others also
explored answer refinement, aiming to insert additional reasoning details (Anonymous, 2024) or to
restructure answers for clearer reasoning paths (Fan
et al., 2024). Not only is reflective augmentation
complementary to existing approaches, but it also
exhibits unique advantages in reflective reasoning
scenarios, as we will show in §4.
Another branch of research augmented code
_snippets within problem solutions, which trans-_
forms text reasoning into code generation (Wang
et al., 2023a; Gou et al., 2024; Lu et al., 2024). This
method is effective for math problems but is typically considered a separate track since it uses external tools (i.e., the code interpreter). Beyond supervised fine-tuning, some works augmented data for
further preference optimization (Pang et al., 2024;
Yuan et al., 2024), whereas we leave exploring reflective data in preference tuning for future work.
**2.2** **Reflection in LMs**
Previous applications of reflection in LMs primarily focused on enabling LMs to rectify their own
responses during inference (i.e., self-reflect). Some
works equipped the LM with external feedback,
such as code execution or expert critiques (Shinn
et al., 2023; Chen et al., 2024). Others prompted
LMs to use only internal knowledge to correct
answers (Madaan et al., 2023; Li et al., 2024b),
though the effectiveness of this approach is under
debate (Huang et al., 2024a). Some specific tasks
(e.g., math word problems) permit reverse verification, where the generated answer is used to rederive the question to confirm its correctness (Weng
et al., 2023; Wu et al., 2024). These works demonstrate that reflection is a common aspect of language processing. However, RefAug explores augmenting reflective data for better training instead
of answer refinement during inference. Unifying
these approaches is a promising future study.
**3** **Approach**
RefAug extends each training sequence with a reflective section that encourages the LM to reflect
on its initial reasoning process to engage in further
math reasoning. Figure 1 contrasts RefAug with
traditional augmentation methods, and its detailed
implementation is elaborated below.
**Reflection Types** Following the definition by
Stacey et al. (1982) to “review thoughtfully, consider alternatives and follow extensions”, we consider two types of reflection in composing the reflective section: alternative reasoning and follow_up reasoning._
Alternative reasoning involves thinking about
the problem from different perspectives (Kagan
et al., 1964; Wetzstein and Hacker, 2004). Therefore, besides the initial solution, we annotate an
alternative approach that also effectively solves
the problem. This helps the model master related
methodologies and develop critical thinking skills.
Follow-up reasoning associates the initial solution to a broader class of problems (Silver, 1994;
Lim et al., 2020). To fit various contexts, we consider two options: abstraction and analogy. Abstraction refers to creating a generalized form of the
original problem, thereby encouraging the model
to reduce dependency on specific numerical values.
Analogy challenges the model in applying methodologies of solving the original problem to a more
complex situation. Learning to design follow-up
scenarios enables the model to understand the associated math concepts and principles better and
apply them flexibly in new contexts. The relationship between the initial instance and components
of the reflective section is illustrated in Figure 3.
**Data Annotation** Following a common approach (Li et al., 2023; Yu et al., 2024; Li et al.,
2024a), we employ an expert LM, GPT-4-turbo, to
annotate the reflective sections for high-quality rea
-----
soning paths and minimal human effort[2]. This entails reviewing the original problem and solution to
generate a section consisting of the aforementioned
two types of reflective reasoning. We prompt the
expert model to choose between abstraction and
analogy in follow-up reasoning based on the problem context. Figure 2 shows an annotated example
with alternative reasoning and follow-up analogy,
and the full annotation prompt is in Appendix E.
**Training & Inference** During training, given a
math question as input, we include the reflective
section in the training output immediately following the initial answer, starting with a Reflection:
prefix. Thus, the training objective is to learn
_P([a; r]|q), where [; ] denotes sequence concate-_
nation. Loss is calculated on tokens from both the
initial answer and the reflective section. The format of the whole training sequence is detailed in
Appendix D.
During inference, the generation early stops
upon delivering the answer to the input question
and ignores the reflective section, as shown in Figures 1-2. This is achieved by using Reflection:
as a termination string during model generation.
**4** **Experiments**
We test RefAug in a variety of mathematical tasks
that cover both standard single-round QA and reflective reasoning scenarios. We mainly evaluate
two aspects: the influence of RefAug on LMs’
**math reasoning abilities and its interaction with**
**existing augmentation techniques. Besides, we**
extend our approach to code generation tasks and
perform comprehensive analyses.
**4.1** **Standard Math Reasoning**
**4.1.1** **Settings**
Standard math reasoning tasks follow a singleround QA format. Following a popular approach,
we use the training sets of GSM8k (Cobbe et al.,
2021) and MATH (Hendrycks et al., 2021b). We additionally include out-of-distribution test sets from
MAWPS (Koncel-Kedziorski et al., 2016), Mathematics (Davies et al., 2021), SVAMP (Patel et al.,
2021), plus the math subsets of MMLU (Hendrycks
et al., 2021a) and SAT (Zhong et al., 2023). We
mainly experiment with two LMs known for superior reasoning performance: Mistral-7B (Jiang
et al., 2023a) and Gemma-7B (Mesnard et al.,
2We also tried LLaMA-3-70B for data annotation in Appendix A.1 but its performance lags behind GPT-4-turbo.
2024), and have also tested LLaMA-3-8B (Meta,
2024) in Appendix A.2. Models are trained for
3 epochs with batch size 128. The learning rate
peaks at 1e-5 with a 3% warmup period followed
by linear decay. Greedy decoding is applied during inference. Additional details of datasets and
training settings are in Appendix B.1.
**4.1.2** **Existing Training Methods**
- Standard Fine-tuning (Figure 1a): Utilizes original problem solutions from GSM8k and MATH,
each containing a chain-of-thought reasoning process before reaching the final prediction.
- Question Augmentation (Q-Aug, Figure 1b):
Involves training on both original and GPTaugmented questions. We adopt the augmentation prompt from Li et al. (2024a), detailed in
Appendix C. We also explore Q-Aug + RefAug
by applying RefAug to all questions after Q-Aug,
and Q-Aug×2 by adding a second augmentation
round to further expand the dataset.
- Answer Augmentation (A-Aug, Figure 1c): Resamples the solution for each problem using
GPT-4-turbo, following the approach of Yu et al.
(2024). We also explore its combination with QAug (A-Aug + Q-Aug), RefAug (A-Aug + Re**fAug), and another round of A-Aug (A-Aug×2).**
- MetaMath Augmentation: MetaMath (Yu et al.,
2024) creates a training set of 400K instances
using various augmentation techniques. Due
to budget constraints, we examine the following subsets: (1) A uniformly sampled 40K subset (MetaMath40k), which we augment with
RefAug to compare against an 80K sample
(MetaMath80k); (2) The entire 400K dataset,
of which 40K instances are augmented with RefAug (MetaMath400k+RefAug40k), to compete
with the public MetaMath checkpoint; (3) A oneepoch continual training (CT) from the public
checkpoint on the same dataset as (2).
The augmentation prompt for Q-Aug and A-Aug,
along with the sampling strategy on MetaMath can
be found in Appendix C.
**4.1.3** **Results**
Table 1 lists the QA accuracy of fine-tuned LMs.
We summarize several findings on RefAug:
**Enhancement in Single-Round Math Reason-**
**ing: RefAug boosts model performance across**
both in-distribution and out-of-distribution tasks,
outscoring the direct fine-tuning approach by +7.2
across two base LMs. As the reflective section is
-----
_In-Distribution_ _Out-Of-Distribution_
**Model** **Training Data** **Avg.**
**GSM** **MATH** **Mathematics** **MAWPS** **SVAMP** **MMLU-Math** **SAT-Math**
_Standard Training Data_
|Mistral|Standard Standard + RefAug|56.25 13.96 60.05 17.36|14.80 73.07 53.50 37.68 31.82 19.40 80.25 59.30 43.63 48.64|40.15 46.95|
|---|---|---|---|---|
|Gemma|Standard Standard + RefAug|60.05 17.06 64.59 23.04|19.80 76.81 57.10 39.32 42.73 26.70 85.64 64.70 46.61 55.00|44.70 52.33|
_Question Augmentation Data_
|Mistral|Q-Aug Q-Aug×2 Q-Aug + RefAug|56.03 18.06 59.14 21.26 63.00 21.66|18.00 79.99 59.10 38.19 36.16 20.90 80.84 61.50 40.86 46.82 20.50 81.78 60.20 42.20 50.91|43.65 47.33 48.61|
|---|---|---|---|---|
|Gemma|Q-Aug Q-Aug×2 Q-Aug + RefAug|61.11 21.98 63.68 24.42 68.61 26.38|23.90 81.78 59.70 40.45 48.18 23.50 82.12 59.50 42.71 48.18 28.70 85.39 66.00 48.05 51.82|48.16 49.16 53.56|
_Answer Augmentation Data_
|Mistral|A-Aug A-Aug×2 A-Aug + Q-Aug A-Aug + RefAug|66.19 23.08 67.93 27.12 69.67 24.32 72.93 29.40|23.90 81.10 62.20 37.78 40.91 28.30 83.26 66.50 42.61 45.91 26.90 81.82 61.20 38.50 46.82 31.20 84.41 71.50 47.74 60.45|47.88 51.66 49.90 56.80|
|---|---|---|---|---|
|Gemma|A-Aug A-Aug×2 A-Aug + RefAug|68.31 28.78 70.66 31.14 74.15 33.60|33.10 83.05 65.10 46.51 61.36 33.30 85.22 69.70 47.13 54.55 38.20 85.68 69.10 52.26 64.09|55.17 55.96 59.58|
_MetaMath Augmentation Data_
|MetaMath40k 68.46 20.96 20.30 85.09 66.50 38.09 42.73 48.88 MetaMath80k 69.29 23.54 23.20 86.75 68.60 41.17 43.64 50.88 MetaMath40k + RefAug40k 73.84 26.60 27.00 87.68 75.30 44.15 53.18 55.39 Mistral MetaMath400k* 77.48 28.42 33.00 90.10 79.10 48.77 55.00 58.84 MetaMath400k + RefAug40k 78.70 32.50 34.50 91.59 77.90 49.69 59.09 60.57 MetaMath400k (CT) 78.39 28.72 32.70 90.87 78.90 49.08 55.91 59.22 MetaMath400k + RefAug40k (CT) 78.92 30.12 36.20 91.46 79.90 49.69 57.27 60.51|MetaMath40k MetaMath80k MetaMath40k + RefAug40k|68.46 20.96 69.29 23.54 73.84 26.60|20.30 85.09 66.50 38.09 42.73 23.20 86.75 68.60 41.17 43.64 27.00 87.68 75.30 44.15 53.18|48.88 50.88 55.39|
|---|---|---|---|---|
||MetaMath400k* MetaMath400k + RefAug40k|77.48 28.42 78.70 32.50|33.00 90.10 79.10 48.77 55.00 34.50 91.59 77.90 49.69 59.09|58.84 60.57|
||MetaMath400k (CT) MetaMath400k + RefAug40k (CT)|78.39 28.72 78.92 30.12|32.70 90.87 78.90 49.08 55.91 36.20 91.46 79.90 49.69 57.27|59.22 60.51|
Table 1: Accuracy on single-round math reasoning tasks. * The public checkpoint released by Yu et al. (2024).
not utilized during inference, this advancement underscores RefAug’s role in enhancing model learning, which strengthens math problem-solving capabilities without providing additional context.
**Complementary Benefits with Existing Meth-**
**ods: While data expansion methods (Q-Aug, A-**
Aug, and MetaMath) have improved model performance, combining RefAug with them leads to
further substantial gains, improving overall accuracy by +6.1 on average. This demonstrates that
RefAug still holds value on high-quality data[3] and
is complementary to data expansion strategies. Furthermore, such synergistic benefits outpace the diminishing returns seen with repeated dataset expansions: these three methods bring +6.8 improvement
initially but only +2.3 in the second round. This
disparity indicates that expanding data does not
always yield proportionate gains, whereas the balance of practicing new problems and reflecting on
existing ones maximizes the learning effect.
**Effectiveness on Large Datasets: Even when**
only 10% of the full-sized MetaMath dataset in
3In Appendix A.4, we show that GPT-written solutions are
of higher quality than those original ones in GSM and MATH.
cludes the reflective section, the resulting model
surpasses the public MetaMath checkpoint by ~2
points. This confirms RefAug’s efficacy on larger
scales of data. Additionally, the MetaMath model
barely benefits from continual training on its original QA data, suggesting a good memorization of
these math problems. Nevertheless, RefAug still
manages to elevate its performance, indicating that
the model has not fully internalized the dataset’s
knowledge and RefAug effectively deepens the
model’s understanding of these problems.
**4.2** **Reflective Math Reasoning**
**4.2.1** **Tasks**
Many realistic math applications require models to
reflect on previous predictions and perform further
reasoning. We employ three tasks of this kind: the
follow-up QA (FQA) and error correction (EC)
tasks of MathChat (Liang et al., 2024), and the
math subset of MINT (Wang et al., 2024a). FQA
involves solving two subsequent questions linked
to each initial query, forming a three-round interaction. EC deliberately writes an erroneous solution
to test the model’s error identification and correc
-----
|Training Data|MathChat-FQA 1st 2nd 3rd|MathChat-EC|MINT-Math k = 1 k = 2 k = 3 k = 4 k = 5 ∆|
|---|---|---|---|
|Standard Standard + RefAug|56.25 25.72 15.25 60.05 35.36 27.54|50.68 72.99|20.88 24.91 27.47 28.57 28.94 8.06 22.34 33.70 37.00 38.10 39.56 17.22|
|Q-Aug Q-Aug×2 Q-Aug + RefAug|56.03 30.65 21.02 59.14 32.70 22.99 63.00 42.19 34.37|65.48 63.51 76.48|21.98 27.47 30.04 31.87 32.60 10.62 27.11 32.60 35.16 36.26 37.73 10.62 26.74 37.36 41.03 42.86 43.22 16.48|
|A-Aug A-Aug×2 A-Aug + Q-Aug A-Aug + RefAug|66.19 34.29 23.60 67.93 36.57 28.00 69.67 37.86 27.31 72.93 44.92 36.19|72.08 71.93 69.58 80.20|23.08 30.77 33.70 35.16 35.53 12.45 25.64 31.87 33.33 34.80 34.80 9.16 23.44 31.87 35.16 37.36 38.10 14.66 28.94 42.12 46.15 47.28 47.99 19.05|
|MetaMath MetaMath×2 MetaMath + RefAug|68.46 37.48 24.89 69.29 38.92 26.10 73.84 43.93 34.98|61.15 60.09 79.51|22.34 27.84 31.50 32.23 33.70 11.36 21.61 25.64 26.74 27.47 27.84 6.23 27.47 36.63 39.93 40.66 41.03 13.56|
Table 2: Accuracy on reflective math reasoning tasks. Each question in MathChat-FQA has two subsequent
questions (2nd and 3rd turns), and the accuracy of each turn is calculated separately. MINT evaluates whether the
model solves the math problem within k interaction turns with the feedback from GPT-4, and we use the difference
(∆) between k = 5 and k = 1 to indicate the model’s ability in leveraging external feedback.
**FQA** actual increase in reflective reasoning skills, which
_2nd_ _3rd_ echos the findings of Liang et al. (2024) that con
MAmmoTH 184K 32.16 19.31 54.15 35.21 ventional training approaches overly focus on the
single-round QA setting and neglect many other
**Superiority of RefAug in Enhancing Reflec-**
**tive Reasoning: RefAug significantly enhances**
the model’s reflective reasoning performance, with
gains of +12.3 in FQA-3rd, +22.3 in EC, +10.6 in
MINTk=5, and +9.2 in MINT∆, far exceeding the
corresponding improvements of +7.9, +15.5, +5.0,
and +3.4 brought by three data expansion methods on average. An effective solution, however,
is to combine RefAug with these methods, which
yields substantial improvements over them, e.g.,
+12 on FQA-3rd and +10.1 on MINTk=5. These
results highlight RefAug’s exceptional capability
to improve LMs’ reflective math reasoning, which
complements the disregard of existing augmentation methods on this dimension.
**Comparison with Existing Open-Source Mod-**
**els: Our RefAug-enhanced models excel in the**
reflective reasoning scenarios of MathChat with
just 30K training instances, surpassing many opensource models trained on larger math datasets or
with reinforcement learning. This further supports
RefAug’s effectiveness in cultivating LMs’ reflective reasoning skills in solving math problems.
Based on findings from §4.1 and §4.2, we conclude the benefits of RefAug on math reasoning
as: Not only does it enhance LMs’ basic problem**_solving skills but also advances their reflective_**
**_reasoning abilities, making it a valuable comple-_**
**_ment to existing augmentation techniques._**
|Model|Data|FQA EC 2nd 3rd|Avg.|
|---|---|---|---|
|MAmmoTH MetaMath WizardMath InternLM2-Math DeepSeek-Math Mistral+A-Aug+RefAug Gemma+A-Aug+RefAug|184K 395K 112K* ~2M 776K 30K 30K|32.16 19.31 54.15 43.98 32.16 56.30 44.81 36.86 68.22 40.20 28.64 72.70 48.19 35.70 74.34 44.92 36.19 80.20 47.80 38.54 81.11|35.21 44.15 49.96 47.18 52.74 53.77 55.82|
Table 3: MathChat results compared with other opensource 7B math models. Baseline scores are from Liang
et al. (2024). The best scores are bolded and the second
bests are underlined. *Including both supervised finetuning and reinforcement learning data.
tion abilities. MINT evaluates the model’s ability
to leverage external language feedback to improve
its reasoning process through up to k turns of interaction. More task details are in Appendix B.2.
**4.2.2** **Results**
Results on reflective math reasoning tasks are displayed in Tables 2-3 for Mistral and Table 11 for
Gemma. We summarize the key findings below.
**Challenges for Data Expansion Methods: De-**
spite improving single-round QA performance,
methods like Q-Aug, A-Aug, and MetaMath fall
short in enhancing LMs’ reflective reasoning abilities. For instance, these methods hurt Mistral’s
error correction performance. Moreover, a second
round of augmentation yields minimal or negative
gains across key metrics on reflective reasoning:
+2.5 in FQA-3rd, -1.1 in EC, -0.5 in MINTk=5, and
-4.2 in MINT∆. This indicates that initial augmentation benefits are mainly due to the improved answer quality from GPT annotation[3] rather than an
-----
|Data|GSM MATH|Mathematics MAWPS SVAMP MMLU-Math SAT-Math|Avg.|
|---|---|---|---|
|Standard + Alternative Reasoning + Follow-up Reasoning + RefAug|56.25 13.96 59.51 16.42 56.25 16.82 60.05 17.36|14.80 73.07 53.50 37.68 31.82 17.90 79.57 58.30 39.63 44.09 18.80 77.10 58.50 38.09 44.05 19.40 80.25 59.30 43.63 48.64|40.15 45.06 44.23 46.95|
Table 4: Accuracy on standard math reasoning tasks when varying the components of the reflective section.
48
46
44
42
40
38
1/8 1/4 1/2
|Col1|Col2|Col3|Col4|46.|95|
|---|---|---|---|---|---|
||||45.|32||
||43.|07 43.|40|||
|40.|15|||||
|||||||
Portion of Data with RefAug
Figure 4: Average accuracy on 7 standard math reasoning tasks when different proportions of data are augmented with reflective sections (remaining data are in
the standard QA form).
**4.4.1** **Ablation Study**
To further assess the efficacy of the reflective section, we conduct an ablation study on its two components: alternative and follow-up reasoning. According to Table 4, incorporating any single reflective component to the original data significantly enhances model performance by an average of +4.5
points. This suggests that the original solutions
lack sufficient information for the model to fully
grasp the math reasoning skills, which is consistent
with the findings of Anonymous (2024). Combining both reflective components further enhances
the model’s comprehension of associated concepts
and methodologies, improving the performance by
+2.3 points over using any single one.
**4.4.2** **The Amount of RefAug Data**
We explore the impact of varying the quantity of
reflection-augmented instances in the whole training set. As depicted by Figure 4, the model’s
overall performance continually improves as more
instances are augmented with reflective sections.
When the model is trained through reflecting on
all instances, the model maximizes its grasp of the
training data and reaches the best performance, underscoring the scalability of RefAug’s benefits.
**4.4.3** **RefAug vs. Chain-of-Thought**
For a deeper understanding of the reflective section, we experiment with positioning it before the
original solution, i.e., modeling P([r; a]|q). This
arrangement can be regarded as augmenting the
chain-of-thought (CoT, Wei et al., 2022) for solv
|Model|HE HE+ MBPP MBPP+|Avg.|
|---|---|---|
|CodeLlama-std CodeLlama-RefAug|53.7 50.6 62.9 51.6 57.9 53.0 65.4 52.4|54.7 57.2|
|Mistral-std Mistral-RefAug|38.4 35.4 53.1 40.1 50.0 45.1 56.4 46.4|41.7 49.5|
|StarCoder2-std StarCoder2-RefAug|54.3 49.4 62.7 51.4 56.7 50.6 66.7 51.6|54.4 56.4|
|DeepSeekCoder-std DeepSeekCoder-RefAug|67.1 59.8 75.4 60.4 67.1 62.2 76.7 63.2|65.7 67.3|
Table 5: Pass@1 on code generation, scored by EvalPlus.
-std denotes training with the standard QA setting.
**4.3** **Code Generation**
Besides math reasoning, we extend the application
of RefAug to code generation. In this task, a query
instructs the model to craft a code snippet that fulfills a specific functionality, which also requires
a step-by-step logical flow. We use HumanEval
(Chen et al., 2021) and MBPP (Austin et al., 2021)
as the evaluation benchmarks, along with their plus
versions provided by EvalPlus (Liu et al., 2023).
Training is conducted using the Python subset of
Magicoder-OSS-Instruct (Wei et al., 2023), which
includes 38K QA instances. Considering the abstractive nature of code, we annotate problem analogies as the follow-up section of RefAug.
The outcomes are summarized in Table 5, covering four different base LMs: CodeLLaMA (Rozière
et al., 2023), Mistral, StarCoder2 (Lozhkov et al.,
2024), and DeepSeekCoder (Guo et al., 2024). The
results demonstrate that RefAug consistently elevates the LMs’ proficiency in following instructions
to generate accurate, reasonable code, as evidenced
by an average improvement of +3.5 in Pass@1
across the evaluated benchmarks. These results
indicate that RefAug is able to enhance LMs’ capabilities in solving code problems, which reaffirms
from another scenario that reflection is an essential
ability for LMs to possess.
**4.4** **Analysis**
In this section, we dive deeper into additional aspects of RefAug. Results are tested on Mistral.
-----
|Data|GSM MATH|Mathematics MAWPS SVAMP MMLU SAT|Avg.|FQA-2nd FQA-3rd EC|
|---|---|---|---|---|
|A-Aug +RefAug-front +RefAug|66.19 23.08 72.78 27.34 72.93 29.40|23.90 81.10 62.20 37.78 40.91 28.30 84.62 70.30 47.23 56.82 31.20 84.41 71.50 47.74 60.45|47.88 55.34 56.80|34.29 23.60 72.08 30.96 20.64 68.29 44.92 36.19 80.20|
Table 6: Comparison between RefAug and prepending the reflective section to the answer (RefAug-front).
|Data|GSM MATH|Mathematics MAWPS SVAMP MMLU-Math SAT-Math|Avg.|
|---|---|---|---|
|Standard + RefAug #1 + RefAug #2 + RefAug #3 + RefAug (Avg.)|56.25 13.96 60.05 17.36 62.70 17.26 60.80 16.86 61.18 17.16 ±1.1 ±0.2|14.80 73.07 53.50 37.68 31.82 19.40 80.25 59.30 43.63 48.64 19.20 82.16 60.40 42.51 44.55 18.60 80.29 59.70 42.92 45.45 19.07 80.90 59.80 43.02 46.21 ±0.3 ±0.9 ±0.4 ±0.5 ±1.7|40.15 46.95 46.97 46.37 46.76 ±0.3|
Table 7: We sample the reflective sections three times using the same annotation prompt in Figure 8, and train a
separate Mistral model using each batch of the augmented data (labeled as #1~#3). The last row lists the average
scores of three runs as well as their standard deviation.
hypothesis that training with reflection enhances
the model’s problem-solving accuracy by deepening its grasp of underlying math reasoning skills.
**4.4.5** **Stability of RefAug Data Annotation**
To verify the stability of the improvements and to
avoid bias from cherry-picking augmented data,
we sampled reflective sections three times using
GPT-4-turbo with the same prompt in Figure 8.
Each batch of augmented data is used to train a
separate model. As shown in Table 7, the performance gains are consistent across all augmentation
samples, with a minimal standard deviation of 0.3
in overall accuracy. These results confirm that reflective practices aid in model learning and that
the observed improvements are not due to the
**variability of data sampling.**
In addition to the above perspectives, further
analyses on the risk of data contamination and efficiency statistics in training and inference are presented in Appendix A.5 and A.6, respectively.
**5** **Conclusion**
This paper proposed reflective augmentation (RefAug) for math reasoning, a method that incorporates reflection into training problems and is
complementary to existing data augmentation approaches. We proved the efficacy of RefAug in
not only enhancing LMs’ basic problem-solving
skills on single-round math problems but also in
cultivating their capabilities to solve more complex
reflective reasoning tasks. We further verified the
effectiveness of RefAug in code generation tasks
and its scalability, along with ablation studies and
analyses of the methodological choices, such as the
impact of data sequencing and the stability of the
annotation process.
**Training** **Reasoning** **Calculation** **Total**
Standard 424 287 577
RefAug 374(-50) 264(-23) 527
Table 8: Error analysis on GSM8k test set. The reduction of errors is denoted in gray parentheses.
ing the original problem. According to Table 6,
since the reflective section contains relevant reasoning steps to the original problem, integrating it
into CoT yields similar improvements as RefAug
on single-round QA. However, such setup hurts
performance in reflective math reasoning, which
supports the original design of RefAug in developing reflective reasoning skills and reaffirms that
**reflective reasoning demands distinct capabili-**
**ties from standard forward reasoning. Besides,**
augmenting CoT increases the token count required
for predicting the final answer, thereby reducing
inference efficiency (see Appendix A.6 for details).
**4.4.4** **Error Analysis**
We analyze how the model’s math capabilities has
been enhanced through the lens of an error analysis.
Following Li et al. (2024a), we classify errors in
GSM8k into calculation errors and reasoning errors. Calculation errors include incorrect identification of arithmetic relationships or wrong numerical
computations. Reasoning errors include mistakes
pertaining to the reasoning logic, e.g., incoherent
reasoning steps, misunderstandings of the problem,
etc. Using the gold reasoning paths from GSM8k
test data as a benchmark, we employ GPT-4 to
determine whether solutions contain calculation
errors, reasoning errors, or both. As shown in Table 8, the improvement mostly comes from the
**reduction of reasoning errors. This supports the**
-----
**Limitations**
Some previous data augmentation studies in math
reasoning created millions of data instances with
OpenAI’s GPT models (Li et al., 2024a; Tang et al.,
2024; Huang et al., 2024b). While testing our
method at a similar scale would be valuable, budget constraints limit our ability to do so. For instance, our augmentation data for MetaMath is
capped at 40K instances. In Appendix A.1, we
note that LLaMA-3-70B shows some promising
performance in annotating RefAug data for math
reasoning tasks, though its capabilities have not
fully matched those of GPT-4 yet. We anticipate
that the development of stronger open-source models will reduce researchers’ dependence on paid
services of proprietary models.
**References**
Shengnan An, Zexiong Ma, Zeqi Lin, Nanning Zheng,
[Jian-Guang Lou, and Weizhu Chen. 2023. Learning](https://doi.org/10.48550/arXiv.2310.20689)
[from mistakes makes LLM better reasoner. Arxiv](https://doi.org/10.48550/arXiv.2310.20689)
_preprint, 2310.20689._
[John R Anderson and Jon M Fincham. 2014. Extend-](https://www.sciencedirect.com/science/article/pii/S0010028514000449?casa_token=baU8Q1OvNTkAAAAA:fLi_iBwCFR45W4yCZg7kHYhzlH_8udSkk4eMPPrCOJkSAFLQYXQfCJjs33li0MHjtMEBNGh3fd4)
[ing problem-solving procedures through reflection.](https://www.sciencedirect.com/science/article/pii/S0010028514000449?casa_token=baU8Q1OvNTkAAAAA:fLi_iBwCFR45W4yCZg7kHYhzlH_8udSkk4eMPPrCOJkSAFLQYXQfCJjs33li0MHjtMEBNGh3fd4)
_Cognitive psychology._
[Anonymous. 2024. Enrichmath: Enriching idea and so-](https://openreview.net/forum?id=5y8tPhUiNJm)
[lution elicit mathematical reasoning in large language](https://openreview.net/forum?id=5y8tPhUiNJm)
[models. OpenReview.net.](https://openreview.net/forum?id=5y8tPhUiNJm)
Jacob Austin, Augustus Odena, Maxwell I. Nye,
Maarten Bosma, Henryk Michalewski, David Dohan,
Ellen Jiang, Carrie J. Cai, Michael Terry, Quoc V. Le,
[and Charles Sutton. 2021. Program synthesis with](https://arxiv.org/abs/2108.07732)
[large language models. Arxiv preprint, 2108.07732.](https://arxiv.org/abs/2108.07732)
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster,
Marco Dos Santos, Stephen McAleer, Albert Q.
Jiang, Jia Deng, Stella Biderman, and Sean Welleck.
[2023. Llemma: An open language model for mathe-](https://doi.org/10.48550/arXiv.2310.10631)
[matics. Arxiv preprint, 2310.10631.](https://doi.org/10.48550/arXiv.2310.10631)
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan,
Henrique Pondé de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg
Brockman, Alex Ray, Raul Puri, Gretchen Krueger,
Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela
Mishkin, Brooke Chan, Scott Gray, Nick Ryder,
[Mikhail Pavlov, and et al. 2021. Evaluating large](https://arxiv.org/abs/2107.03374)
[language models trained on code. Arxiv preprint,](https://arxiv.org/abs/2107.03374)
2107.03374.
Wenhu Chen, Ming Yin, Max Ku, Pan Lu, Yixin Wan,
Xueguang Ma, Jianyu Xu, Xinyi Wang, and Tony
[Xia. 2023. Theoremqa: A theorem-driven question](https://doi.org/10.18653/v1/2023.emnlp-main.489)
[answering dataset. In EMNLP 2023.](https://doi.org/10.18653/v1/2023.emnlp-main.489)
Xinyun Chen, Maxwell Lin, Nathanael Schärli, and
[Denny Zhou. 2024. Teaching large language models](https://doi.org/10.48550/arXiv.2304.05128)
[to self-debug. In ICLR 2024.](https://doi.org/10.48550/arXiv.2304.05128)
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
[2021. Training verifiers to solve math word prob-](https://arxiv.org/abs/2110.14168)
[lems. Arxiv preprint, 2110.14168.](https://arxiv.org/abs/2110.14168)
Tri Dao. 2023. Flashattention-2: Faster attention
[with better parallelism and work partitioning. Arxiv](https://doi.org/10.48550/arXiv.2307.08691)
_preprint, 2307.08691._
Alex Davies, Petar Velickovic, Lars Buesing, Sam
Blackwell, Daniel Zheng, Nenad Tomasev, Richard
Tanburn, Peter W. Battaglia, Charles Blundell, András Juhász, Marc Lackenby, Geordie Williamson,
[Demis Hassabis, and Pushmeet Kohli. 2021. Advanc-](https://doi.org/10.1038/s41586-021-04086-x)
[ing mathematics by guiding human intuition with AI.](https://doi.org/10.1038/s41586-021-04086-x)
_Nature._
Run-Ze Fan, Xuefeng Li, Haoyang Zou, Junlong Li,
Shwai He, Ethan Chern, Jiewen Hu, and Pengfei
[Liu. 2024. Reformatted alignment. Arxiv preprint,](https://doi.org/10.48550/arXiv.2402.12219)
2402.12219.
Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and
[Tushar Khot. 2023. Specializing smaller language](https://proceedings.mlr.press/v202/fu23d.html)
[models towards multi-step reasoning. In ICML 2023.](https://proceedings.mlr.press/v202/fu23d.html)
Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen,
Yujiu Yang, Minlie Huang, Nan Duan, and Weizhu
[Chen. 2024. Tora: A tool-integrated reasoning agent](https://doi.org/10.48550/arXiv.2309.17452)
[for mathematical problem solving. In ICLR 2024.](https://doi.org/10.48550/arXiv.2309.17452)
Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai
Dong, Wentao Zhang, Guanting Chen, Xiao Bi,
Y. Wu, Y. K. Li, Fuli Luo, Yingfei Xiong, and Wen[feng Liang. 2024. Deepseek-coder: When the large](https://doi.org/10.48550/arXiv.2401.14196)
[language model meets programming - the rise of code](https://doi.org/10.48550/arXiv.2401.14196)
[intelligence. Arxiv preprint, 2401.14196.](https://doi.org/10.48550/arXiv.2401.14196)
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou,
Mantas Mazeika, Dawn Song, and Jacob Steinhardt.
[2021a. Measuring massive multitask language under-](https://openreview.net/forum?id=d7KBjmI3GmQ)
[standing. In ICLR 2021.](https://openreview.net/forum?id=d7KBjmI3GmQ)
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
[Jacob Steinhardt. 2021b. Measuring mathematical](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/be83ab3ecd0db773eb2dc1b0a17836a1-Abstract-round2.html)
[problem solving with the MATH dataset. In NeurIPS](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/be83ab3ecd0db773eb2dc1b0a17836a1-Abstract-round2.html)
_Datasets and Benchmarks 2021._
Jie Huang, Xinyun Chen, Swaroop Mishra,
Huaixiu Steven Zheng, Adams Wei Yu, Xiny[ing Song, and Denny Zhou. 2024a. Large language](https://doi.org/10.48550/arXiv.2310.01798)
[models cannot self-correct reasoning yet. In ICLR](https://doi.org/10.48550/arXiv.2310.01798)
_2024._
Yiming Huang, Xiao Liu, Yeyun Gong, Zhibin Gou,
Yelong Shen, Nan Duan, and Weizhu Chen. 2024b.
[Key-point-driven data synthesis with its enhance-](https://doi.org/10.48550/arXiv.2403.02333)
[ment on mathematical reasoning.](https://doi.org/10.48550/arXiv.2403.02333) _Arxiv preprint,_
2403.02333.
-----
Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego
de Las Casas, Florian Bressand, Gianna Lengyel,
Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock,
Teven Le Scao, Thibaut Lavril, Thomas Wang, Timo[thée Lacroix, and William El Sayed. 2023a. Mistral](https://doi.org/10.48550/arXiv.2310.06825)
[7b. Arxiv preprint, 2310.06825.](https://doi.org/10.48550/arXiv.2310.06825)
Weisen Jiang, Han Shi, Longhui Yu, Zhengying Liu,
Yu Zhang, Zhenguo Li, and James T. Kwok. 2023b.
[Forward-backward reasoning in large language mod-](https://doi.org/10.48550/arXiv.2308.07758)
[els for verification. Arxiv preprint, 2308.07758.](https://doi.org/10.48550/arXiv.2308.07758)
Jerome Kagan, Bernice L Rosman, Deborah Day,
[Joseph Albert, and William Phillips. 1964. Informa-](https://web.p.ebscohost.com/ehost/pdfviewer/pdfviewer?vid=0&sid=b180caf5-0158-4ad5-9db7-a80db511748b%40redis)
[tion processing in the child: Significance of analytic](https://web.p.ebscohost.com/ehost/pdfviewer/pdfviewer?vid=0&sid=b180caf5-0158-4ad5-9db7-a80db511748b%40redis)
[and reflective attitudes. Psychological Monographs:](https://web.p.ebscohost.com/ehost/pdfviewer/pdfviewer?vid=0&sid=b180caf5-0158-4ad5-9db7-a80db511748b%40redis)
_General and Applied._
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yu[taka Matsuo, and Yusuke Iwasawa. 2022. Large lan-](http://papers.nips.cc/paper_files/paper/2022/hash/8bb0d291acd4acf06ef112099c16f326-Abstract-Conference.html)
[guage models are zero-shot reasoners. In NeurIPS](http://papers.nips.cc/paper_files/paper/2022/hash/8bb0d291acd4acf06ef112099c16f326-Abstract-Conference.html)
_2022._
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate
[Kushman, and Hannaneh Hajishirzi. 2016. MAWPS:](https://doi.org/10.18653/v1/n16-1136)
[A math word problem repository. In NAACL-HLT](https://doi.org/10.18653/v1/n16-1136)
_2016._
Chen Li, Weiqi Wang, Jingcheng Hu, Yixuan Wei, Nanning Zheng, Han Hu, Zheng Zhang, and Houwen
[Peng. 2024a. Common 7b language models already](https://doi.org/10.48550/arXiv.2403.04706)
[possess strong math capabilities.](https://doi.org/10.48550/arXiv.2403.04706) _Arxiv preprint,_
2403.04706.
Chengpeng Li, Zheng Yuan, Hongyi Yuan, Guanting
Dong, Keming Lu, Jiancan Wu, Chuanqi Tan, Xiang
[Wang, and Chang Zhou. 2023. Query and response](https://doi.org/10.48550/arXiv.2310.05506)
[augmentation cannot help out-of-domain math rea-](https://doi.org/10.48550/arXiv.2310.05506)
[soning generalization. Arxiv preprint, 2310.05506.](https://doi.org/10.48550/arXiv.2310.05506)
Yanhong Li, Chenghao Yang, and Allyson Ettinger.
[2024b. When hindsight is not 20/20: Testing lim-](https://doi.org/10.48550/arXiv.2404.09129)
[its on reflective thinking in large language models.](https://doi.org/10.48550/arXiv.2404.09129)
_Arxiv preprint._
Zhenwen Liang, Dian Yu, Wenhao Yu, Wenlin Yao, Zhihan Zhang, Xiangliang Zhang, and Dong Yu. 2024.
[Mathchat: Benchmarking mathematical reasoning](https://doi.org/10.48550/arXiv.2405.19444)
[and instruction following in multi-turn interactions.](https://doi.org/10.48550/arXiv.2405.19444)
_Arxiv preprint, 2405.19444._
Woong Lim, Ji-Eun Lee, Kersti Tyson, Hee-Jeong Kim,
[and Jihye Kim. 2020. An integral part of facilitating](https://idp.springer.com/authorize/casa?redirect_uri=https://link.springer.com/article/10.1007/s10763-019-09966-3&casa_token=crniCVVOYPsAAAAA:fsFdb8SO-_n7nd_B6SQLl5kY99mU6S4hFlmPEjw6H-wuxzb-emdX5Oi2ZXKYjWhKOznDFMbPUHB4Fci-)
[mathematical discussions: Follow-up questioning.](https://idp.springer.com/authorize/casa?redirect_uri=https://link.springer.com/article/10.1007/s10763-019-09966-3&casa_token=crniCVVOYPsAAAAA:fsFdb8SO-_n7nd_B6SQLl5kY99mU6S4hFlmPEjw6H-wuxzb-emdX5Oi2ZXKYjWhKOznDFMbPUHB4Fci-)
_International Journal of Science and Mathematics_
_Education._
Haoxiong Liu, Yifan Zhang, Yifan Luo, and An[drew Chi-Chih Yao. 2024. Augmenting math word](https://doi.org/10.48550/arXiv.2401.09003)
[problems via iterative question composing. Arxiv](https://doi.org/10.48550/arXiv.2401.09003)
_preprint, 2401.09003._
Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Ling[ming Zhang. 2023. Is your code generated by chatgpt](http://papers.nips.cc/paper_files/paper/2023/hash/43e9d647ccd3e4b7b5baab53f0368686-Abstract-Conference.html)
[really correct? rigorous evaluation of large language](http://papers.nips.cc/paper_files/paper/2023/hash/43e9d647ccd3e4b7b5baab53f0368686-Abstract-Conference.html)
[models for code generation. In NeurIPS 2023.](http://papers.nips.cc/paper_files/paper/2023/hash/43e9d647ccd3e4b7b5baab53f0368686-Abstract-Conference.html)
Anton Lozhkov, Raymond Li, Loubna Ben Allal, Federico Cassano, Joel Lamy-Poirier, Nouamane Tazi,
Ao Tang, Dmytro Pykhtar, Jiawei Liu, Yuxiang Wei,
Tianyang Liu, Max Tian, Denis Kocetkov, Arthur
Zucker, Younes Belkada, Zijian Wang, Qian Liu,
Dmitry Abulkhanov, Indraneil Paul, Zhuang Li, Wen[Ding Li, Megan Risdal, and et al. 2024. Starcoder 2](https://doi.org/10.48550/arXiv.2402.19173)
[and the stack v2: The next generation. Arxiv preprint,](https://doi.org/10.48550/arXiv.2402.19173)
2402.19173.
Zimu Lu, Aojun Zhou, Houxing Ren, Ke Wang,
Weikang Shi, Junting Pan, Mingjie Zhan, and Hong[sheng Li. 2024. Mathgenie: Generating synthetic](https://doi.org/10.48550/arXiv.2402.16352)
[data with question back-translation for enhancing](https://doi.org/10.48550/arXiv.2402.16352)
[mathematical reasoning of llms.](https://doi.org/10.48550/arXiv.2402.16352) _Arxiv preprint,_
2402.16352.
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei
[Lin, Shifeng Chen, and Dongmei Zhang. 2023a. Wiz-](https://doi.org/10.48550/arXiv.2308.09583)
[ardmath: Empowering mathematical reasoning for](https://doi.org/10.48550/arXiv.2308.09583)
[large language models via reinforced evol-instruct.](https://doi.org/10.48550/arXiv.2308.09583)
_Arxiv preprint, 2308.09583._
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo
Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. 2023b. [Wizardcoder:](https://doi.org/10.48550/arXiv.2306.08568)
[Empowering code large language models with evol-](https://doi.org/10.48550/arXiv.2306.08568)
[instruct. Arxiv preprint, 2306.08568.](https://doi.org/10.48550/arXiv.2306.08568)
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler
Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon,
Nouha Dziri, Shrimai Prabhumoye, Yiming Yang,
Shashank Gupta, Bodhisattwa Prasad Majumder,
Katherine Hermann, Sean Welleck, Amir Yazdan[bakhsh, and Peter Clark. 2023. Self-refine: Iterative](http://papers.nips.cc/paper_files/paper/2023/hash/91edff07232fb1b55a505a9e9f6c0ff3-Abstract-Conference.html)
[refinement with self-feedback. In NeurIPS 2023.](http://papers.nips.cc/paper_files/paper/2023/hash/91edff07232fb1b55a505a9e9f6c0ff3-Abstract-Conference.html)
Thomas Mesnard, Cassidy Hardin, Robert Dadashi,
Surya Bhupatiraju, Shreya Pathak, Laurent Sifre,
Morgane Rivière, Mihir Sanjay Kale, Juliette Love,
Pouya Tafti, Léonard Hussenot, Aakanksha Chowdhery, Adam Roberts, Aditya Barua, Alex Botev, Alex
Castro-Ros, Ambrose Slone, Amélie Héliou, Andrea
Tacchetti, Anna Bulanova, Antonia Paterson, Beth
Tsai, Bobak Shahriari, and et al. 2024. [Gemma:](https://doi.org/10.48550/arXiv.2403.08295)
[Open models based on gemini research and technol-](https://doi.org/10.48550/arXiv.2403.08295)
[ogy. Arxiv preprint, 2403.08295.](https://doi.org/10.48550/arXiv.2403.08295)
[Meta. 2024. Introducing meta llama 3: The most capa-](https://ai.meta.com/blog/meta-llama-3/)
[ble openly available llm to date. Blog.](https://ai.meta.com/blog/meta-llama-3/)
Arindam Mitra, Hamed Khanpour, Corby Rosset, and
[Ahmed Awadallah. 2024. Orca-math: Unlocking](https://doi.org/10.48550/arXiv.2402.14830)
[the potential of slms in grade school math. Arxiv](https://doi.org/10.48550/arXiv.2402.14830)
_preprint, 2402.14830._
Richard Yuanzhe Pang, Weizhe Yuan, Kyunghyun Cho,
He He, Sainbayar Sukhbaatar, and Jason Weston.
[2024. Iterative reasoning preference optimization.](https://doi.org/10.48550/arXiv.2404.19733)
_Arxiv preprint, 2404.19733._
-----
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
[2021. Are NLP models really able to solve simple](https://doi.org/10.18653/v1/2021.naacl-main.168)
[math word problems? In NAACL-HLT 2021.](https://doi.org/10.18653/v1/2021.naacl-main.168)
Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase,
[and Yuxiong He. 2020. Zero: memory optimizations](https://doi.org/10.1109/SC41405.2020.00024)
[toward training trillion parameter models. In SC](https://doi.org/10.1109/SC41405.2020.00024)
_2020._
Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase,
[and Yuxiong He. 2020. Deepspeed: System opti-](https://doi.org/10.1145/3394486.3406703)
[mizations enable training deep learning models with](https://doi.org/10.1145/3394486.3406703)
[over 100 billion parameters. In KDD 2020.](https://doi.org/10.1145/3394486.3406703)
[Doug Rohrer and Kelli Taylor. 2006. The effects of](https://onlinelibrary.wiley.com/doi/abs/10.1002/acp.1266?casa_token=sO-IA5rUBwMAAAAA:F1u9nybhxRPgG03peppCtvdaK2K5QCmXQJ4SvXtQUGUtbyC2GfopvlJnFo8mMK9A-oUfclGIUaLv8Qw)
[overlearning and distributed practise on the reten-](https://onlinelibrary.wiley.com/doi/abs/10.1002/acp.1266?casa_token=sO-IA5rUBwMAAAAA:F1u9nybhxRPgG03peppCtvdaK2K5QCmXQJ4SvXtQUGUtbyC2GfopvlJnFo8mMK9A-oUfclGIUaLv8Qw)
[tion of mathematics knowledge. Applied Cognitive](https://onlinelibrary.wiley.com/doi/abs/10.1002/acp.1266?casa_token=sO-IA5rUBwMAAAAA:F1u9nybhxRPgG03peppCtvdaK2K5QCmXQJ4SvXtQUGUtbyC2GfopvlJnFo8mMK9A-oUfclGIUaLv8Qw)
_Psychology: The Official Journal of the Society for_
_Applied Research in Memory and Cognition._
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten
Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi,
Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom
Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton-Ferrer, Aaron Grattafiori,
Wenhan Xiong, Alexandre Défossez, Jade Copet,
Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, and Gabriel Synnaeve.
[2023. Code llama: Open foundation models for code.](https://doi.org/10.48550/arXiv.2308.12950)
_Arxiv preprint, 2308.12950._
[Nuriye Semerci. 2005. The effects of problem-based](https://journals.sagepub.com/doi/pdf/10.1177/105678790501400405?casa_token=ow7Ea5bb60EAAAAA:R2HgnNVnd-7z0JVKUwaS6APx3udaiEWmZ9E_pbEdBWd7wZzeJODYrfTCH6mzKrLM1VivC_3-bkY)
[learning on the academic achievement of students in](https://journals.sagepub.com/doi/pdf/10.1177/105678790501400405?casa_token=ow7Ea5bb60EAAAAA:R2HgnNVnd-7z0JVKUwaS6APx3udaiEWmZ9E_pbEdBWd7wZzeJODYrfTCH6mzKrLM1VivC_3-bkY)
[development and learning. International Journal of](https://journals.sagepub.com/doi/pdf/10.1177/105678790501400405?casa_token=ow7Ea5bb60EAAAAA:R2HgnNVnd-7z0JVKUwaS6APx3udaiEWmZ9E_pbEdBWd7wZzeJODYrfTCH6mzKrLM1VivC_3-bkY)
_Educational Reform._
Noah Shinn, Federico Cassano, Ashwin Gopinath,
[Karthik Narasimhan, and Shunyu Yao. 2023. Re-](http://papers.nips.cc/paper_files/paper/2023/hash/1b44b878bb782e6954cd888628510e90-Abstract-Conference.html)
[flexion: language agents with verbal reinforcement](http://papers.nips.cc/paper_files/paper/2023/hash/1b44b878bb782e6954cd888628510e90-Abstract-Conference.html)
[learning. In NeurIPS 2023.](http://papers.nips.cc/paper_files/paper/2023/hash/1b44b878bb782e6954cd888628510e90-Abstract-Conference.html)
[Edward A Silver. 1994. On mathematical problem pos-](https://scholar.google.com/scholar_url?url=https://www.jstor.org/stable/pdf/40248099.pdf%3Fcasa_token%3DwoSILagGE1oAAAAA:ODTbie8LX2q3q8YnRYoldJAP8g4xON2toVkGhWUJZzydUMfYdYvAZJyhMdw9bGKEWkXxE7p8n9rI00e4I4xEDmGT4woj5bfLsKMrsnrsVyMN4x90cqM&hl=en&sa=T&oi=gsr-r-gga&ct=res&cd=0&d=11208674570557059518&ei=GQlcZqbBO-KB6rQPmZHNkA0&scisig=AFWwaeYO0TS_6RpdwfqmTLFpQBze)
[ing. For the learning of mathematics.](https://scholar.google.com/scholar_url?url=https://www.jstor.org/stable/pdf/40248099.pdf%3Fcasa_token%3DwoSILagGE1oAAAAA:ODTbie8LX2q3q8YnRYoldJAP8g4xON2toVkGhWUJZzydUMfYdYvAZJyhMdw9bGKEWkXxE7p8n9rI00e4I4xEDmGT4woj5bfLsKMrsnrsVyMN4x90cqM&hl=en&sa=T&oi=gsr-r-gga&ct=res&cd=0&d=11208674570557059518&ei=GQlcZqbBO-KB6rQPmZHNkA0&scisig=AFWwaeYO0TS_6RpdwfqmTLFpQBze)
[Kaye Stacey, L Burton, and J Mason. 1982. Thinking](http://zahrahimi.ir/wp-content/uploads/2021/04/Thinking-Mathematically.pdf)
_[mathematically. Addison-Wesley.](http://zahrahimi.ir/wp-content/uploads/2021/04/Thinking-Mathematically.pdf)_
Zhengyang Tang, Xingxing Zhang, Benyou Wang, and
Furu Wei. 2024. [Mathscale: Scaling instruction](https://doi.org/10.48550/arXiv.2403.02884)
[tuning for mathematical reasoning. Arxiv preprint,](https://doi.org/10.48550/arXiv.2403.02884)
2403.02884.
Ke Wang, Houxing Ren, Aojun Zhou, Zimu Lu, Sichun
Luo, Weikang Shi, Renrui Zhang, Linqi Song,
[Mingjie Zhan, and Hongsheng Li. 2023a. Mathcoder:](https://doi.org/10.48550/arXiv.2310.03731)
[Seamless code integration in llms for enhanced math-](https://doi.org/10.48550/arXiv.2310.03731)
[ematical reasoning. Arxiv preprint, 2310.03731.](https://doi.org/10.48550/arXiv.2310.03731)
Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen,
[Lifan Yuan, Hao Peng, and Heng Ji. 2024a. MINT:](https://doi.org/10.48550/arXiv.2309.10691)
[evaluating llms in multi-turn interaction with tools](https://doi.org/10.48550/arXiv.2309.10691)
[and language feedback. In ICLR 2024.](https://doi.org/10.48550/arXiv.2309.10691)
Yejie Wang, Keqing He, Guanting Dong, Pei Wang, Weihao Zeng, Muxi Diao, Yutao Mou, Mengdi Zhang,
Jingang Wang, Xunliang Cai, and Weiran Xu. 2024b.
[Dolphcoder: Echo-locating code large language mod-](https://doi.org/10.48550/arXiv.2402.09136)
[els with diverse and multi-objective instruction tun-](https://doi.org/10.48550/arXiv.2402.09136)
[ing. Arxiv preprint, 2402.09136.](https://doi.org/10.48550/arXiv.2402.09136)
Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack
Hessel, Tushar Khot, Khyathi Chandu, David Wadden, Kelsey MacMillan, Noah A. Smith, Iz Beltagy,
[and Hannaneh Hajishirzi. 2023b. How far can camels](http://papers.nips.cc/paper_files/paper/2023/hash/ec6413875e4ab08d7bc4d8e225263398-Abstract-Datasets_and_Benchmarks.html)
[go? exploring the state of instruction tuning on open](http://papers.nips.cc/paper_files/paper/2023/hash/ec6413875e4ab08d7bc4d8e225263398-Abstract-Datasets_and_Benchmarks.html)
[resources. In NeurIPS 2023.](http://papers.nips.cc/paper_files/paper/2023/hash/ec6413875e4ab08d7bc4d8e225263398-Abstract-Datasets_and_Benchmarks.html)
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le,
[and Denny Zhou. 2022. Chain-of-thought prompt-](http://papers.nips.cc/paper_files/paper/2022/hash/9d5609613524ecf4f15af0f7b31abca4-Abstract-Conference.html)
[ing elicits reasoning in large language models. In](http://papers.nips.cc/paper_files/paper/2022/hash/9d5609613524ecf4f15af0f7b31abca4-Abstract-Conference.html)
_NeurIPS 2022._
Yuxiang Wei, Zhe Wang, Jiawei Liu, Yifeng Ding, and
[Lingming Zhang. 2023. Magicoder: Source code is](https://doi.org/10.48550/arXiv.2312.02120)
[all you need. Arxiv preprint, 2312.02120.](https://doi.org/10.48550/arXiv.2312.02120)
Yixuan Weng, Minjun Zhu, Fei Xia, Bin Li, Shizhu He,
Shengping Liu, Bin Sun, Kang Liu, and Jun Zhao.
[2023. Large language models are better reasoners](https://doi.org/10.18653/v1/2023.findings-emnlp.167)
[with self-verification. In Findings of EMNLP 2023.](https://doi.org/10.18653/v1/2023.findings-emnlp.167)
[Annekatrin Wetzstein and Winfried Hacker. 2004. Re-](https://onlinelibrary.wiley.com/doi/pdf/10.1002/acp.949?casa_token=BrGyaQvIWmEAAAAA%3ARMPtQxXvv5nc13VjrQgYeuDxVLHkX4A912PHD6NB6VL_pnCCgGKsks66z7U68M6f-gtLEtD-YVfOeT0)
[flective verbalization improves solutions—the effects](https://onlinelibrary.wiley.com/doi/pdf/10.1002/acp.949?casa_token=BrGyaQvIWmEAAAAA%3ARMPtQxXvv5nc13VjrQgYeuDxVLHkX4A912PHD6NB6VL_pnCCgGKsks66z7U68M6f-gtLEtD-YVfOeT0)
[of question-based reflection in design problem solv-](https://onlinelibrary.wiley.com/doi/pdf/10.1002/acp.949?casa_token=BrGyaQvIWmEAAAAA%3ARMPtQxXvv5nc13VjrQgYeuDxVLHkX4A912PHD6NB6VL_pnCCgGKsks66z7U68M6f-gtLEtD-YVfOeT0)
[ing. Applied Cognitive Psychology.](https://onlinelibrary.wiley.com/doi/pdf/10.1002/acp.949?casa_token=BrGyaQvIWmEAAAAA%3ARMPtQxXvv5nc13VjrQgYeuDxVLHkX4A912PHD6NB6VL_pnCCgGKsks66z7U68M6f-gtLEtD-YVfOeT0)
Zhenyu Wu, Qingkai Zeng, Zhihan Zhang, Zhaoxuan
[Tan, Chao Shen, and Meng Jiang. 2024. Large lan-](https://doi.org/10.48550/arXiv.2405.14092)
[guage models can self-correct with minimal effort.](https://doi.org/10.48550/arXiv.2405.14092)
_Arxiv preprint, 2405.14092._
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu,
Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo
[Li, Adrian Weller, and Weiyang Liu. 2024. Meta-](https://doi.org/10.48550/arXiv.2309.12284)
[math: Bootstrap your own mathematical questions](https://doi.org/10.48550/arXiv.2309.12284)
[for large language models. In ICLR 2024.](https://doi.org/10.48550/arXiv.2309.12284)
Lifan Yuan, Ganqu Cui, Hanbin Wang, Ning Ding,
Xingyao Wang, Jia Deng, Boji Shan, Huimin Chen,
Ruobing Xie, Yankai Lin, Zhenghao Liu, Bowen
Zhou, Hao Peng, Zhiyuan Liu, and Maosong Sun.
[2024. Advancing LLM reasoning generalists with](https://doi.org/10.48550/arXiv.2404.02078)
[preference trees. Arxiv preprint, 2404.02078.](https://doi.org/10.48550/arXiv.2404.02078)
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting
[Dong, Chuanqi Tan, and Chang Zhou. 2023. Scaling](https://doi.org/10.48550/arXiv.2308.01825)
[relationship on learning mathematical reasoning with](https://doi.org/10.48550/arXiv.2308.01825)
[large language models. Arxiv preprint, 2308.01825.](https://doi.org/10.48550/arXiv.2308.01825)
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen.
[2023. Mammoth: Building math generalist models](https://doi.org/10.48550/arXiv.2309.05653)
[through hybrid instruction tuning. Arxiv preprint,](https://doi.org/10.48550/arXiv.2309.05653)
2309.05653.
Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang,
Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen,
[and Nan Duan. 2023. Agieval: A human-centric](https://doi.org/10.48550/arXiv.2304.06364)
[benchmark for evaluating foundation models. Arxiv](https://doi.org/10.48550/arXiv.2304.06364)
_preprint, 2304.06364._
-----
|Data|GSM MATH|Mathematics MAWPS SVAMP MMLU SAT|Avg.|FQA-2nd FQA-3rd EC|
|---|---|---|---|---|
|Standard + RefAug (GPT) + RefAug (LLaMA)|56.25 13.96 60.05 17.36 62.02 17.00|14.80 73.07 53.50 37.68 31.82 19.40 80.25 59.30 43.63 48.64 17.80 80.29 61.60 39.43 44.55|40.15 46.95 46.10|25.72 15.25 50.68 35.36 27.54 72.99 32.63 23.90 50.00|
Table 9: Training Mistral-7B with data where reflection sections are annotated by GPT-4-turbo or LLaMA-3-70BInstruct. Data annotated by LLaMA-3 yields similar improvements in standard math reasoning tasks, but fails to
match GPT-annotated data in enhancing Mistral’s reflective reasoning capabilities.
|Data|GSM MATH|Mathematics MAWPS SVAMP MMLU-Math SAT-Math|Avg.|
|---|---|---|---|
|Standard + RefAug|64.59 19.86 67.10 22.08|20.20 81.35 66.00 45.59 47.73 25.60 83.64 69.40 48.97 55.00|49.33 53.11|
|GPT-Written Solutions + RefAug|71.72 28.04 75.74 31.64|32.90 85.26 73.20 47.84 55.00 32.00 87.38 75.80 51.75 69.09|56.28 60.49|
Table 10: Results on LLaMA-3-8B. We test integrating RefAug with (1) the original training data, and (2) the data
where answers are re-written by GPT-4-turbo (see Appendix A.4 for GPT answer re-writing).
**A** **Additional Experiments** **Training Data** **MathChat-FQA** **MathChat-EC**
In this section, we present more experimental re- Standard 60.05 30.05 20.56 61.99
sults in addition to those in §4. Standard + RefAug **64.59** **40.44** **33.16** **77.47**
**A.1** **Data Annotation with Open-Source** Q-Aug 2 63.68 34.45 26.40 70.41
**Models** Q-Aug + RefAug **68.61** **42.64** **34.22** **79.97**
The RefAug data used in the main experiments
are annotated by GPT-4-turbo. However, such annotation requires paid API calls to OpenAI’s service, which comes with a restrictive license. To
this end, we explore whether state-of-the-art opensource models can also serve as data annotators.
We employ the recently released LLaMA-3-70BInstruct model (Meta, 2024) for data annotation
using the same prompt shown in Figure 8, and train
a Mistral-7B model based on this data. According
to results in Table 9, RefAug data annotated by
LLaMA-3-70B-Instruct yields a similar improvement in Mistral’s performance on standard math
reasoning tasks. However, the reflective reasoning
capability of the resulting model falls short of its
counterpart trained with GPT-annotated data. This
suggests that developing models with advanced
**reflective math reasoning skills demands higher**
**quality data than what is typically required for**
**standard forward reasoning in single-round QA.**
**A.2** **Results on LLaMA-3**
In addition to training Mistral-7B and Gemma-7B
with RefAug, we also test LLaMA-3-8B (Meta,
2024) on the RefAug data. According to the results
in Table 10, RefAug enhances the math reason**ing capabilities of LLaMA-3 as well, no matter if**
integrating with the original solutions or with solu
|Training Data|MathChat-FQA 1st 2nd 3rd|MathChat-EC|
|---|---|---|
|Standard Standard + RefAug|60.05 30.05 20.56 64.59 40.44 33.16|61.99 77.47|
|Q-Aug Q-Aug×2 Q-Aug + RefAug|61.11 34.67 26.25 63.68 34.45 26.40 68.61 42.64 34.22|67.68 70.41 79.97|
|A-Aug A-Aug×2 A-Aug + RefAug|68.31 41.05 29.59 70.66 42.79 32.25 74.15 47.80 38.54|73.98 77.39 81.11|
Table 11: Results of Gemma on reflective math reasoning tasks. The general trend is similar to that of Mistral
(Table 2).
tions re-written by GPT-4-turbo. This again shows
the generalizability of the RefAug method, which
leads to consistent improvements across various
base models.
**A.3** **Gemma on Reflective Math Reasoning**
Besides evaluating Mistral-based models on reflective reasoning tasks (shown in Table 2, we report
scores on our Gemma-based models as well. As
shown in Table 11, the performance trends for
**Gemma models align with those observed on**
**Mistral models. RefAug demonstrates a clear ad-**
vantage over traditional augmentation methods in
enhancing reflective math reasoning capabilities of
LMs. For instance, RefAug outscores both Q-Aug
and A-Aug in the third round of follow-up QA and
in the accuracy of error correction. Furthermore,
as shown in Table 3, a combination of A-Aug and
RefAug data results in the best-performing model
on the reflective reasoning scenarios of MathChat,
outperforming many open-source models that are
-----
|Data|GSM MATH|Mathematics MAWPS SVAMP MMLU-Math SAT-Math|Avg.|
|---|---|---|---|
|Original Solutions GPT-4-turbo Solutions + RefAug|56.25 13.96 65.73 23.10 71.80 26.12|14.80 73.07 53.50 37.68 31.82 23.90 81.14 68.80 40.25 41.36 29.50 82.84 70.80 44.76 57.73|40.15 49.18 54.79|
Table 12: Comparison between using synthetic solutions written by GPT-4-turbo and using the originally annotated
ones in GSM8k and MATH training sets, as well as applying RefAug on the synthetic solutions. Solutions written
by GPT-4-turbo are of much higher quality than the original ones.
|Dataset|Source Target Overlap|
|---|---|
|GSM8k|Train Question Test Question 1 Train Answer Test Answer 0 RefAug Test Answer 0|
|---|---|
|MATH|Train Question Test Question 228 Train Answer Test Answer 167 RefAug Test Answer 5*|
|---|---|
Table 13: The contamination check on GSM8k and
MATH: the number of instances from the test set (target)
sharing n-gram overlaps with the training data (source).
We use n = 20 for questions and n = 30 for answers.
- The 5 test instances that overlap with the augmented
reflective sections were already contaminated by the
original MATH training set.
trained on substantially larger math datasets.
**A.4** **Quality of GPT-Written Answers**
In Table 1, we find that answer augmentation significantly enhances performance. It improves the
overall accuracy by +9.1 over the use of original
training data, when averaged across Mistral and
Gemma models. This surpasses the improvement
of +7.2 on average seen with RefAug over the original data. A deeper analysis reveals that the rea**soning paths generated by GPT-4-turbo are of**
**significantly higher quality than those originally**
**provided in the GSM8k and MATH datasets.**
As demonstrated in Table 12, merely replacing the
original solutions with those generated by GPT-4_turbo increased the accuracy from 40.15 to 49.18_
on Mistral. However, RefAug does not receive such
benefits as it does not alter the original reasoning
paths during augmentation. Given the complementary nature of these two augmentation methods,
their combination further improves the model accuracy to 54.79. This echoes the synergistic performance advantage achieved by A-Aug+RefAug
over both A-Aug and A-Aug×2 in Table 1.
**A.5** **Risk of Data Contamination**
To prevent the augmented data from contaminating
the test sets, we check the n-gram overlap between
**Training** **Data** **Time**
Standard 15K 60 min
Q-Aug / A-Aug 30K 123 min
RefAug 15K 90 min
Table 14: The impact of various augmentation methods
on dataset size and training time. These stats are tested
on 8×A100 GPUs.
**Training** **Train Tokens** **Test Tokens**
Standard 171.4 185.5
GPT Solutions 358.3 423.5
RefAug-front 910.1 980.5
RefAug 892.3 219.1
Table 15: The resulting sequence lengths of each augmentation method during training and testing.
the augmented reflective sections and the gold solutions within the test sets of GSM8k and MATH. Following a common approach (Huang et al., 2024b;
Liu et al., 2024), we utilize the test script provided
by Azerbayev et al. (2023) and conduct a 20-gram
check for questions and a 30-gram check for solutions. According to the results in Table 13, RefAug
does not contaminate any test instances in GSM8k.
In the MATH dataset, there is a pre-existing contamination issue: 228 questions and 167 solutions
in the test set are already contaminated by the original training set. On the other hand, our RefAug
data overlaps with only 5 instances in the test set,
and these 5 instances were already contaminated
by the training set. In other words, RefAug does
not introduce new contamination to both test sets.
In summary, there is minimal contamination risk
**associated with RefAug in our experiments.**
**A.6** **Training and Inference Efficiency**
For a deeper understanding of RefAug, we analyze
its impact on the efficiency of model training and
inference. To begin with, according to Table 14,
while RefAug does introduce additional time overhead during model training, this increase is less
significant than that caused by Q-Aug or A-Aug
-----
levels (Mathematics, MATH, MMLU), providing
an exhaustive assessment of the mathematical capabilities of language models.
**Training Settings** During model training, we
first tune the hyper-parameters using the original
data under the standard fine-tuning recipe. then,
these settings remain fixed across all models to
avoid extensive hyper-parameter tuning for each
variant. This approach is common in studies comparing models fine-tuned on varied datasets (Yuan
et al., 2023; Li et al., 2023; An et al., 2023). Specifically, we train models for 3 epochs with a batch
size of 128. The learning rate starts at 1e-5, including a warmup for the initial 3% of steps, and then
linearly decreases to 20% of its initial value by the
end of training. Training sequences are truncated
to 4096 tokens. To speed up training, our model
utilize bfloat16 precision and are supported by
FlashAttention-2 (Dao, 2023), DeepSpeed (Rasley
et al., 2020), and ZeRO-3 optimization (Rajbhandari et al., 2020). For training on the full set of
MetaMath, we follow the original authors’ recommendation[4] to lower the learning rate to 2e-6,
and for continued training on the public MetaMath
checkpoint, we use a reduced learning rate of 1e-6
to be more consistent with its initial fine-tuning.
**Evaluation** To facilitate answer extraction during
evaluation, we append The answer is XXX. to the
reasoning path of each training instance so that the
final predicted answer is explicitly stated. We adopt
the evaluation script from Yue et al. (2023) that first
extracts the predicted answer and then checks for
an exact match with the ground-truth. Exceptions
are MMLU and SAT which use multiple-choice
formats instead of numerical answers. Since our
training data does not contain multiple-choice questions, the model may predict the content of an option rather than its letter identifier. Thus, on these
datasets, we leverage GPT-3.5-turbo to match the
predicted content to the appropriate option before
computing accuracy.
**B.2** **Reflective Math Reasoning**
Reflective math reasoning encompasses scenarios
where models must consider previously provided
answers to engage in further reasoning. However,
benchmarks that adequately capture this dynamic
are scarce in the existing literature. Utilizing the
[4https://huggingface.co/meta-math/](https://huggingface.co/meta-math/MetaMath-Mistral-7B)
[MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B)
**Dataset** **Train** **Test**
GSM8k (Cobbe et al., 2021) 7473 1319
MATH (Hendrycks et al., 2021b) 7500 5000
Mathematics (Davies et al., 2021) - 1000
MAWPS (Koncel-Kedziorski et al., 2016) - 2354
SVAMP (Patel et al., 2021) - 1000
MMLU-Math (Hendrycks et al., 2021a) - 974
SAT-Math (Zhong et al., 2023) - 220
MathChat-FQA (Liang et al., 2024) - 1319
MathChat-EC (Liang et al., 2024) - 1319
MINT-Math (Wang et al., 2024a) - 273
Magicoder (Wei et al., 2023) 38284 -
HumanEval (Chen et al., 2021) - 164
MBPP (Austin et al., 2021) - 399
Table 16: Statistics of all datasets used in our training
and evaluation.
which doubles the optimization steps due to dataset
expansion. Additionally, although RefAug results
in longer sequence lengths in training instances,
it does not impair inference efficiency, as shown
by the average number of tokens generated in Table 15. This is due to the early stopping feature that
eliminates the need to generate reflective sections
during inference. Overall, the efficiency impact
**brought by RefAug is minimal.**
**B** **Detailed Task Settings**
In this section, we detail the datasets, training
hyper-parameters, and evaluation settings of each
task used in our experiments. We list the size of all
datasets in Table 16.
**B.1** **Standard Math Reasoning**
**Datasets** In standard math reasoning, we follow a common approach (Wang et al., 2023a; Yu
et al., 2024; Li et al., 2024a) to adopt the training data from GSM8k (Cobbe et al., 2021) and
MATH (Hendrycks et al., 2021b) as they are paired
with human-labeled reasoning paths. For evaluation, we employ a comprehensive suite of benchmarks that span a wide range of mathematical topics. Specifically, GSM8k, SVAMP (Patel et al.,
2021), and MAWPS (Koncel-Kedziorski et al.,
2016) focus mainly on arithmetic math word problems, while datasets such as MATH, Mathematics (Davies et al., 2021), MMLU (Hendrycks et al.,
2021a), and SAT (Zhong et al., 2023) encompass a
broader scope including algebra, geometry, number
theory, probability, and formal logic. By difficulty
levels, they cover elementary (MAWPS, SVAMP),
middle school (GSM8K, SAT), and more advanced
-----
currently available resources, we evaluate our models on three tasks: follow-up QA, error correction,
and feedback utilization.
The follow-up QA (FQA) task is assessed using
the MathChat dataset (Liang et al., 2024). Each
test instance consists of three turns of questions.
The first turn uses the original GSM8k test set, and
subsequent turns contain follow-up questions based
on earlier turns. These follow-ups often require a
deeper understanding of the problem, such as performing subsequent calculations based on previous
answers or introducing new constraints to the original question. The solutions generated by the model
for each turn are incorporated into the input for the
next turn, creating a multi-turn interaction. The
accuracy of each turn is evaluated separately.
The error correction (EC) task, also sourced
from the MathChat dataset and derived from the
GSM8k test set, pairs each question with an intentionally incorrect answer. The model is then tasked
with identifying and correcting errors in the reasoning process. Accuracy is determined by comparing
the model’s corrected answer to the ground truth.
For both tasks from MathChat, we follow the
approach of Liang et al. (2024) to concatenate all
previous turns into the instruction part of the input
sequence. For example, in the third round of FQA,
the model decodes (a3 [q1; a1; q2; a2; q3]); In EC,
_P_ _|_
it decodes (a [q; awrong; f ]), where f is binary
_P_ _|_
feedback indicating that awrong is incorrect.
The MINT (Wang et al., 2024a) benchmark evaluates the ability of LMs to leverage natural language feedback to improve their predictions. We
utilize the math subset from the original benchmark, which includes 273 carefully selected instances from four datasets: 48 from GSM8k, 100
from MATH, 76 from MMLU, and 49 from TheoremQA (Chen et al., 2023). We adhere to the same
evaluation protocols as the original paper except
that we omit the code execution step as our math
models are based on text reasoning. At each interaction turn, the model proposes a solution, and
we collect binary feedback on answer correctness
along with natural language feedback from an expert (i.e., GPT-4). This feedback is then provided to
the model in the subsequent turn of prediction. The
model have at most k = 5 chances to propose solutions, and the accuracy of each turn is calculated
independently. We also measure the improvement
in accuracy (∆) from the first to the fifth turn to
assess the model’s efficacy in leveraging feedback.
**B.3** **Code Generation**
**HumanEval (Chen et al., 2021) and MBPP**
(Austin et al., 2021) are the most popular benchmarks for evaluating code generation capabilities
of LMs (Luo et al., 2023b; Wang et al., 2024b).
Each test instance within these benchmarks includes a natural language prompt, based on which
LMs generate a corresponding code snippet. The
correctness of the code is verified using test cases.
Additionally, EvalPlus (Liu et al., 2023) has developed enhanced versions of these benchmarks
(HumanEval+ / MBPP+) that include more comprehensive test cases for a more rigorous evaluation.
Therefore, we utilize the evaluation suite provided
by EvalPlus on these benchmarks, where MBPP is
reduced to 399 instances for quality control.
For the training dataset, we use the OSS**Instruct dataset collected by Magicoder (Wei et al.,**
2023), which consists of synthetic instruction-code
pairs generated from random code snippets sourced
from GitHub. Since HumanEval and MBPP focus
on Python code, we extracted the Python subset
from OSS-Instruct to reduce annotation costs, resulting in a total of 38K training instances. Given
the abstractive nature of code generation, we opt
for analogy annotations in the follow-up reasoning
part of RefAug.
We adhere to the training settings outlined in
the Magicoder paper for our experiments. Models
are trained over two epochs with a batch size of
512. The learning rate is initiated at 5e-5, with 15
warm-up steps followed by a linear decay. Greedy
decoding is employed during inference.
**C** **Baseline Implementation**
In this section, we detail our implementation of
the major baseline methods that we compare with
in the main paper, including question augmentation (Q-Aug), answer augmentation (A-Aug), and
MetaMath augmentation.
**C.1** **Question Augmentation**
A single round of Q-Aug enerates a new question
from each existing question in the training set, effectively doubling the dataset (illustrated in Figure 1b). Both the augmented question and its solution are annotated by GPT-4-turbo. During the
annotation, we employ a temperature of 0.7 and a
top_p of 1.0 to ensure the diversity of math reasoning paths for both Q-Aug and A-Aug. we largely
follow the question generation prompt from Li et al.
-----
full-dataset training (MetaMath400k+RefAug40k).
**D** **Training Prompt**
The prompt we use to build training sequences
is shown in Figure 5. The format mainly follows Wang et al. (2023b), and the reflection section is appended to the original answer as the
output. Loss is only calculated to tokens after
<|assistant|>.
**E** **RefAug Annotation Prompt**
The prompt we use for annotating reflective sections are detailed in Figure 8, which includes a description of the general principles of reflective reasoning and two in-context examples. We use temperature=0.7 and top_p=1.0 when sampling with
GPT-4-turbo.
**F** **License of Artifacts**
All training and evaluation datasets used in our experiments are publicly accessible and have been
used in many prior researches. We note that the
collection of RefAug data, if annotated by an external model, should comply with its terms of use.
For example, using GPT-generated data is subject
to the terms of use of OpenAI services[5], and using LLaMA-generated data is subject to Meta’s
LLaMA license agreement[6].
[5https://openai.com/policies/terms-of-use/](https://openai.com/policies/terms-of-use/)
[6https://llama.meta.com/llama3/license/](https://llama.meta.com/llama3/license/)
Training Prompt
<|system|>
Below is an instruction that describes a task. Follow the
instruction to complete the request.
<|user|>
{Question}
<|assistant|>
{Answer}
Reflection:
{Reflection}
Figure 5: Prompt used for training the model. Text
in gray are placeholders and will be replaced by the
corresponding sections in the training instance.
(2024a) with minor adjustments. The detailed annotation prompt is provided in Figure 6.
**C.2** **Answer Augmentation**
A single round of A-Aug involves re-sampling a
solution for each math problem in the training set.
The new solution, paired with the original question, forms a new training instance (illustrated in
Figure 1c). Consistent with other methods, the augmented solution is generated by GPT-4-turbo. If
the sampled solution diverges from the gold answer,
it is discarded and re-sampled; And if a correct answer is not produced after five attempts, we retain
the last sampled solution. Following the methodology described by Yu et al. (2024), the prompt for
A-Aug simply instructs the model to solve an arbitrary math problem, which is detailed in Figure 7.
**C.3** **MetaMath**
MetaMath (Yu et al., 2024) introduces a comprehensive suite of augmentation methods tailored for
math reasoning tasks, which has received much
attention. This suite includes answer augmentation,
question rephrasing, and two backward reasoning
augmentation techniques: self-verification (Weng
et al., 2023) and FOBAR (Jiang et al., 2023b). Each
method is sampled for multiple rounds to generate
a large set of 400K training data. Please refer to Yu
et al. (2024) for more details on these methods.
When creating the MetaMath40k subset for our
experiments in §4.1, we randomly select one instance from each of the four augmentation techniques for every seed math question, which we
believe is the most uniform sampling strategy. For
the MetaMath80k subset, we add one more instance from each technique for every seed question. The initially sampled 40K instances are further equipped with RefAug to be included in the
-----
Question Augmentation Prompt
Please act as a professional math teacher. Your goal is to create high quality math problems to help students learn math. You will be
given a math question. Please generate a similar but new question according to the Given Question.
You have four principles to do this.
# Ensure the new question only asks for one thing, be reasonable, be based on the Given Question, and have a definite answer. For
example, DO NOT ask, "what is the amount of A, B and C?".
# Ensure the new question is in line with common sense of life. For example, the amount someone has or pays must be a positive
number, and the number of people must be an integer.
# Ensure your student can answer the new question without the given question. If you want to use some numbers, conditions or
background in the given question, please restate them to ensure no information is omitted in your new question.
# Ensure your created question is solvable. Write the solution to it after the question.
Given Question: $$QUESTION$$
Now write a new question and its solution. The question must begin with "New Question:" and the solution must begin with "Solution
to the New Question:". The solution must end with "The answer is XXX" where XXX should be the final answer to the question.
Figure 6: Prompt for question augmentation, adopted from Li et al. (2024a). The only difference is that we combine
question generation and solution annotation into a single prompt to save costs.
Answer Augmentation Prompt
Your task is to solve a math word problem. You should solve the problem step by step. At the end of your solution, write the final
answer in the form of "The answer is X". Here are two examples:
## Example 1
Question:
Let 𝐹! [= (0,1) ][and ][𝐹]" [= (4,1)][. Then the set of points ][𝑃] [such that ][𝑃𝐹]! [+ 𝑃𝐹]" [= 6 ][form an ellipse. The equation of this ellipse can be ]
($%&)[!] ()%*)[!]
written as + = 1. Find ℎ+ 𝑘+ 𝑎+ 𝑏.
([!] +[!]
Solution:
We have that 2𝑎= 6, so 𝑎= 3. The distance between the foci is 2𝑐= 4, so 𝑐= 2. Hence, 𝑏= 𝑎["] −𝑐["] = 5. The center of the
($%")[!] ()%!)[!]
ellipse is the midpoint of 𝐹![𝐹]"[, which is ][(2,1). ][Thus, the equation of the ellipse is ],[!] + ( -)[!][ = 1][. Hence, ][ℎ+ 𝑘+ 𝑎+ 𝑏= 2 + 1]
+ 3 + 5 = 6 + 5. The answer is 6 + 5.
## Example 2
Question:
Each bird eats 12 beetles per day, each snake eats 3 birds per day, and each jaguar eats 5 snakes per day. If there are 6 jaguars in a
forest, how many beetles are eaten each day?
Solution:
First find the total number of snakes eaten: 5 snakes/jaguar × 6 jaguars = 30 snakes. Then find the total number of birds eaten per
day: 30 snakes × 3 birds/snake = 90 snakes. Then multiply the number of snakes by the number of beetles per snake to find the total
number of beetles eaten per day: 90 snakes × 12 beetles/snake = 1080 beetles. The answer is 1080.
Now solve the following problem. The solution must end with "The answer is XXX" where XXX should be the final answer to the
question.
Question:
$$QUESTION$$
Solution:
Figure 7: Prompt for answer augmentation, which is basically an in-context learning prompt for solving a given
math problem. Two in-context examples come from MATH and GSM8k training sets, respectively.
-----
Data Annotation Prompt
You are a professional math teacher, and your goal is to teach your student to learn a given math problem. Now that your student has successfully solved the original
problem, in order to make the student thoroughly understand the involved knowledge and problem-solving methodology, your task is to write a reflection section that
go through the problem-solving process and provide additional insights. The reflection section should include the following components:
1. Alternative Reasoning: Present an alternative approach to solve the original problem. This alternative approach should be distinct from the original solution and
still lead to the correct answer. While writing the alternative reasoning approach, consider explaining the principle of the methodology used in the original solution,
how the alternative approach differs from the original method, and why it leads to the same correct answer.
2. Follow-up Reasoning: Associate the solution to a broader class of problems. You can either create a general form of the original problem to encourage the student
to reduce reliance on specific values (e.g., use letters or variables to replace specific numbers in the original problem), or apply the concepts and methodologies from
the original problem to a more challenging situation. Please do not just replace the original numbers in the question with new numbers, because that is essentially the
same problem. The follow-up problem must also be solvable, and you need to provide the solution for it. Besides, please explain briefly how the new scenario
associates with the original problem.
Example 1:
Original Problem:
Youngsville had a population of 684 people. The town had a growth spurt and the population increased by 25% then they witnessed that 40% of the population moved
away. What is the current population?
Solution to the Original Problem:
The town had 684 people, and then had a 25% growth spurt, so the population increased by 684×0.25 = 171 people. This increase brought the population to 684 +
171 = 855 people. 40% of the population moved away, so 855×0.40 = 342 people moved away. The new population is 855 −342 = 513 people. The answer is
513.
Alternative Reasoning:
The key to solve the problem is to understand the concept of relative increase and decrease percentages. Increasing by 𝑎% means the population grows to (100 +
𝑎)% of the original, while decreasing by 𝑏% means the population reduces to (100 −𝑏)% based on the increased population. Therefore, this is essentially a problem
of consecutive multiplication: multiply the initial total population by the percentage of change twice.
Therefore, an alternative calculation involves deriving a single effective percentage change of the whole process. A 25% increase is equivalent to multiplying by 1.25,
and a 40% decrease is equivalent to multiplying by 0.60. Combining these two changes, the effective percentage change is 1.25×0.60 = 0.75, which corresponds to
a 25% decrease from the original population. Therefore, the current population is 684×0.75 = 513. The alternative approach leads to the same result because the
associative property of multiplication: (684×1.25)×0.60 = 684×(1.25×0.60) = 684×0.75 = 513.
Follow-up reasoning:
Let‘s think of a more general scenario. Suppose a town has a population of 𝑃 people. The population increases by 𝑎 percent, then 𝑏 percent of the population moves
away, and we would like to know the final population. In this context, the first increase corresponds to multiplying by (1 + 𝑎/100), and the subsequent decrease
corresponds to multiplying by (1 − 𝑏/100). So the total population change is (1 + 𝑎/100)(1 − 𝑏/100). Therefore, the final population is 𝑃(1 + 𝑎/100)(1 −
𝑏/100). This abstract problem allows us to apply the same principles of relative percentage changes to calculate the final population based on the initial population
and the two percentage changes. This generalization helps to understand the problem conceptually and apply it to various scenarios.
Example 2:
Original Problem:
Solve the equation (𝑥−99)(𝑥−101) = 8.
Solution to the Original Problem:
Let t=x-100. Then the equation becomes (𝑡−1)(𝑡+ 1) = 8, which transforms into 𝑡[!] −1 = 8. Therefore, 𝑡= 3 or 𝑡= −3, and accordingly we get 𝑥= 97 or 𝑥=
103. The answer is 97 or 103.
Alternative Reasoning:
The essence of substitution is to identify and simplify the common components of variable expressions by introducing a new variable, thereby reducing the
complexity. Let's revisit the original equation. Expressions 𝑥−99 and 𝑥−101 share a similar form: a large constant offset from 𝑥. Due to the minimal difference
between 99 and 101, we can use substitution to transform the expressions into terms with small constants.
Therefore, an alternative approach is to substitute 𝑡= 𝑥−99, which transforms the equation into 𝑡(𝑡−2) = 8 ⇒𝑡[!] −2𝑡−8 = 0. This can be easily factorized into
(𝑡−4)(𝑡+ 2) = 0. Hence, 𝑡= 4 or 𝑡= −2, leading to the same results 𝑥= 97 or 𝑥= 103. This alternative approach is equally effective as it also simplifies the
equation by substituting 𝑥 and reducing the scale of the offset terms.
Follow-up Reasoning:
Extending the idea of substitution, consider the equation 𝑥(𝑥+ 1)(𝑥+ 2)(𝑥+ 3) = 360. We notice that 𝑥(𝑥+ 3) = 𝑥^2 + 3𝑥, and (𝑥+ 1)(𝑥+ 2) = 𝑥[!] + 3𝑥+ 2.
Therefore, to simplify the expression, we set the common term 𝑥[!] + 3𝑥 as 𝑡, which transforms the equation into 𝑡(𝑡+ 2) = 360 ⇒𝑡[!] + 2𝑡−360 = 0 ⇒ 𝑡=
−20 or 𝑡= 18. If 𝑡= −20, then 𝑥[!] + 3𝑥+ 20 = 0. Here, the discriminant Δ = −71 < 0, resulting in no real solutions for 𝑥. If 𝑡= 18, then 𝑥[!] + 3𝑥−18 = 0, so
𝑥= 3 or 𝑥= −6. This scenario reiterates the importance of identifying common components of x to streamline the equation through substitution.
Now write a reflection section for the following case based on the examples above. Make sure to use "Alternative Reasoning:" and "Follow-up Reasoning:" to
separate the two components.
Original Problem:
$$QUESTION$$
Solution to the Original Problem:
$$RESPONSE$$
Figure 8: Prompt for annotating the reflective section. The prompt first explains the contents to annotate within
the reflective section, and then presents two in-context examples for demonstration. GPT-4-turbo is employed for
annotation.
-----
| [
"Zhenwen, Liang",
"Dian, Yu",
"Wenhao, Yu",
"Zhihan, Zhang",
"Meng, Jiang",
"Mengzhao, Jia",
"Dong, Yu"
] | 2024-06-17T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2406.12050 | https://arxiv.org/abs/2406.12050 | https://www.semanticscholar.org/paper/5ea68952a55a0d47a58a409b66c9daee35a237b8 |
Learning Goal-Conditioned Representations for Language Reward Models | Techniques that learn improved representations via offline data or self-supervised objectives have shown impressive results in traditional reinforcement learning (RL). Nevertheless, it is unclear how improved representation learning can benefit reinforcement learning from human feedback (RLHF) on language models (LMs). In this work, we propose training reward models (RMs) in a contrastive, $\textit{goal-conditioned}$ fashion by increasing the representation similarity of future states along sampled preferred trajectories and decreasing the similarity along randomly sampled dispreferred trajectories. This objective significantly improves RM performance by up to 0.09 AUROC across challenging benchmarks, such as MATH and GSM8k. These findings extend to general alignment as well -- on the Helpful-Harmless dataset, we observe $2.3\%$ increase in accuracy. Beyond improving reward model performance, we show this way of training RM representations enables improved $\textit{steerability}$ because it allows us to evaluate the likelihood of an action achieving a particular goal-state (e.g., whether a solution is correct or helpful). Leveraging this insight, we find that we can filter up to $55\%$ of generated tokens during majority voting by discarding trajectories likely to end up in an "incorrect" state, which leads to significant cost savings. We additionally find that these representations can perform fine-grained control by conditioning on desired future goal-states. For example, we show that steering a Llama 3 model towards helpful generations with our approach improves helpfulness by $9.6\%$ over a supervised-fine-tuning trained baseline. Similarly, steering the model towards complex generations improves complexity by $21.6\%$ over the baseline. Overall, we find that training RMs in this contrastive, goal-conditioned fashion significantly improves performance and enables model steerability. | This work proposes training reward models in a contrastive, goal-conditioned fashion by increasing the representation similarity of future states along sampled preferred trajectories and decreasing the similarity along randomly sampled dispreferred trajectories, and finds that these representations can perform fine-grained control by conditioning on desired future goal-states. | ## Learning Goal-Conditioned Representations for Language Reward Models
**Vaskar Nath[∗]** **Dylan Slack[∗]** **Jeff Da**
**Yuntao Ma** **Hugh Zhang** **Spencer Whitehead[†]** **Sean Hendryx[†]**
Scale AI
**Abstract**
Techniques that learn improved representations via offline data or self-supervised
objectives have shown impressive results in traditional reinforcement learning
(RL). Nevertheless, it is unclear how improved representation learning can benefit reinforcement learning from human feedback (RLHF) on language models
(LMs). In this work, we propose training reward models (RMs) in a contrastive,
_goal-conditioned fashion by increasing the representation similarity of future states_
along sampled preferred trajectories and decreasing the similarity along randomly
sampled dispreferred trajectories. This objective significantly improves reward
model performance by up to 0.09 AUROC across challenging benchmarks, such as
MATH and GSM8k. These findings extend to general alignment as well – on the
Helpful-Harmless dataset, we observe 2.3% increase in accuracy. Beyond improving reward model performance, we show this way of training RM representations
enables improved steerability because it allows us to evaluate the likelihood of
an action achieving a particular goal-state (e.g., whether a solution is correct or
helpful). Leveraging this insight, we find that we can filter up to 55% of generated
tokens during majority voting by discarding trajectories likely to end up in an
“incorrect” state, which leads to significant cost savings. We additionally find
that these representations can perform fine-grained control by conditioning on
desired future goal-states. For example, we show that steering a Llama 3 model
towards helpful generations with our approach improves helpfulness by 9.6% over
a supervised-fine-tuning trained baseline. Similarly, steering the model towards
complex generations improves complexity by 21.6% over the baseline. Overall,
we find that training RMs in this contrastive, goal-conditioned fashion significantly
improves performance and enables model steerability. [1]
**1** **Introduction**
Aligning Language Models (LMs) with human preferences has proven to be an essential step for the
adoption and safe use of these models, with the dominant paradigm being Reinforcement Learning
from Human Feedback (RLHF) [50]. To accomplish this, a standard setup is to collect labels from
humans for generated responses (e.g., preferences, quality ratings) [50]. These labels can then be
used to train a reward model to produce a ranking/scoring of a given sequence or set of sequences.
The policy LM is then trained to maximize the expected returns from this reward model using a
Reinforcement Learning (RL) algorithm.
_∗Equal contribution_
_†Equal senior authorship_
[1Code available at https://github.com/vaskarnathscale/goal-conditioned-rm](https://github.com/vaskarnathscale/goal-conditioned-rm)
Preprint. Under Review.
-----
Prompt
If Alice has 10 apples and gives away 2, how many apples does Alice have?
Preferred Response Dispreferred Response
|Preferred Response Since Alice started off with ten apples, we need to remove 2 from 10. Hence, Alice has 10-2=8 apples.|Col2|
|---|---|
|||
|Dispreferred Response Alice began with a total of 10 apples, so we need to add two to ten. Hence, Alice has 10+2=12 apples.|Col2|
|---|---|
|||
|Col1|Col2|RM|
|---|---|---|
||||
|Hidden State Representations h h h h Since ten remove apples|Hidden State Representations h h h h Alice need add apples||
|P Ro es pi rti ev se e S nto au tir oc ne h ten Maximize Similarity h remove RP eo ps reiti sv ee n G tao tia ol n N Re eg pa rt eiv se e nS to au tiorc ne h ten Minimize Similarity h add RN ee pg ra et siv ee n tG ato ioal|||
Figure 1: Overview of contrastive goal-conditioned learning for text. Pictured is a prompt with
a preferred and dispreferred response. Both source state tokens (ten) for the positive and negative
trajectory are sampled from the preferred response. For illustrative purposes, the positve and negative
source states are sampled as the same token, but in practice they can be different. The positive
goal state is sampled as some future token (subtract) from the preferred response, and the negative
goal state is sampled from any token (add) from the dispreferred response. The corresponding
representations are retrieved from the last hidden state of the reward model. The training objective
is then to maximize and minimize the similarity of the positive and negative representation pairs,
respectively.
High-quality representations have been shown to be an important piece for the success of RL
algorithms [6, 42]. Although such representations can be learned during end-to-end training, many
efforts have found it important to integrate more explicit representation learning components into
RL algorithms, such as via data augmentation [39] or auxiliary losses [21]. Some work even casts
certain RL algorithms as representation learning methods where using the similarity between state
representations serves as a value function, demonstrating success on manipulation and navigation
tasks [20]. Despite these successes in different areas, representation learning for aligning LMs has
been less explored, while more emphasis has been placed on, e.g., pre-training reward models [7, 37]
or learning from different types of rewards [15, 38].
In this paper, we present a simple yet effective approach to improve the representations learned
by reward models for aligning LMs. We train LM-based reward models to learn representations
that capture the expected reward or likelihood of achieving a goal state (e.g., correct solution to a
problem, helpful response) at intermediate steps of the input sequence, inspired by goal-conditioned
RL [5, 13, 20]. To do so, we use a contrastive objective and apply it to the reward model’s hidden
representations from desirable and undesirable sequences. Enforcing this loss on representations
from intermediate steps of the sequence helps encode a dense signal as to which trajectories are more
promising at different points in the sequence, which we show offers several useful properties, such as
helping to localize errors or evaluating partial completed sequences. This method is flexible enough
to support different kinds of alignment data and does not require further annotations beyond common
sequence-level annotations.
We find that this approach improves the reward model’s ability to identify correct/incorrect solutions
in mathematical reasoning, boosting the AUROC on the task by up to 0.09 over standard preference
ranking training. Towards natural language alignment, we find this method is able to increase the
reward model’s accuracy of identifying helpful versus harmless responses by 2.3% on the HelpfulHarmless dataset [8].
We also show the utility of the learned representations themselves, e.g., for filtering solutions
to improve accuracy and steering the outputs towards responses with certain attributes in guided
decoding (e.g., helpfulness, coherence, and complexity). For mathematical reasoning, we show that
we are able to filter up to 55% of generated tokens by discarding trajectories that are likely to lead
to incorrect solutions as deemed by the learned representations while achieving similar or better
-----
performance. Similarly, using these representations to steer a Llama 3 model by conditioning on
desired future goal-states, we improve helpfulness by 9.6%, correctness by 12.2%, coherence by
16.5%, and complexity by 21.6% over the supervised-fine-tuning trained baseline.
In summary, our contributions are as follows: 1) We explore improving the learned representations
of reward models and its effect on LM alignment. Towards this, we present a simple and effective
representation learning method based on a goal-conditioned contrastive objective. 2) We demonstrate
that training reward models with this method can improve reward model performance on mathematical
reasoning and helpfulness/harmlessness benchmarks. 3) We show that simply utilizing a reward
model trained with this method can improve policy LM alignment on math reasoning benchmarks. 4)
We investigate using these representations as a mechanism to evaluate the likelihood of achieving a
desired goal state by filtering generations in a majority-vote scheme and guided decoding, showing
that they can be used to increase accuracy and control policy LM outputs.
**2** **Related Work**
**Language model (LM) alignment and Reinforcement Learning from Human Feedback (RLHF).**
The goal of RLHF [50, 70, 74] for aligning LMs is to train a policy LM that generates responses
that are preferred by humans. This framework has proven instrumental for increasing the utility and
generalization of LMs, improving their capabilities on a wide variety of tasks, and their safety [8, 9,
45, 60, 49]. Critically, RLHF improves the policy LM along a scalar reward signal, which indicates
the value of a generated piece of text, and their efforts have focused on different means of providing
such signals [7, 55]. Typically, researchers train a reward model that learns to score generations
based on pair-wise human preference data. Subsequently, the model provides reward signals during
reinfocement learning (RL) training of the policy, often via Proximal Policy Optimization (PPO) [58].
While reward models have often been trained to score text with just a single scalar value, several
works have shown that training the reward model to provide fine-grained reward signals over spans
of text can help improve model performance by providing intermediate rewards for the policy LM to
learn from during RL [43, 67]. Nevertheless, fine-grained reward methods often require extensive and
costly human annotations on the sentence or token level to train the model, which can be prohibitive.
**Representation learning for RL. Reinforcement learning (RL) methods often learn representations**
as part of policy learning from reward signals. However, learning representations from solely
reward signals can be highly sample inefficient, leading to poor generalization, and struggles in
high-dimension state and action settings [41, 62, 72]. Consequently, techniques that learn improved
representations have been shown to significantly improve RL performance [71, 19, 25, 40, 24].
Typically, these methods construct additional supervision via offline data or self-supervised objectives
to drive representation learning [12, 11]. These methods have achieved significant success in a
variety of settings, such as vision-based control and multi-task RL [10, 44, 30]. One such way of
constructing supervision for representation learning is by leveraging future states and actions sampled
from offline trajectories [48, 18, 59]. In particular, prior work has shown that increasing the similarity
between state-action pairs sampled from the same trajectory can serve as a simple yet powerful way
to learn improved representations for RL, that additionally acts as a type of goal-conditioning by
making representations more similar along the same trajectory [20, 22]. Still, representation and
goal-conditioned RL largely have not been considered for natural language domains.
**3** **Method**
Drawing on recent efforts on goal-conditioned reinforcement learning [5, 13] and representation
learning [20, 31] in other areas, we show that we can learn representations that approximate a Qfunction within a language reward model using contrastive learning. Since the Q-function quantifies
the expected cumulative future rewards of taking a specific action from a given state [47], forcing
the reward model’s output scores to be entangled with an approximated Q-function has the potential
to improve the reward model’s credit assignment and preference ranking accuracy. We first present
preliminaries on reward modeling and contrastive learning in Section 3.1 and present our method in
Section F.
-----
**3.1** **Preliminaries**
**Preference ranking reward modeling for LMs. Typically, a reward model is parameterized**
_r(x, y) →_ R to return a scalar reward, given prompt x and completion sequence of tokens
_y = [y0, ..., yT ]. Given a dataset D consisting of triples (x, y[w], y[ℓ]) of preferred and dispreferred_
completions, y[w] and y[ℓ], respectively, the reward model is optimized via a paired loss:
_L[R]_ = − [1] [E][(][x,y][w][,y][ℓ][)][∈D][ log] _σ(r(x, y[w]) −_ _r(x, y[ℓ]))_ _._ (1)
_|D|_
Note, the reward model, r(x, y), is trained to provide a scalar feedback on the entire completion y.
**Approximating Q-functions via contrastive learning. In deep reinforcement learning, the Q-**
function quantifies the expected cumulative future rewards of taking an action from a given state and
is parameterized by a neural network [47, 20]. Let st be the current state from which a model can
take an action at. Taking action a at the current state and continuing on that trajectory will lead to
some future state sk. The future states reached along this trajectory can be positive, s[+]k [, by reaching]
a target goal state or negative, s[−]k [, by reaching a different or undesired state. A critic function][ f]
can approximate a Q-function up to a multiplicative constant by optimizing a contrastive learning
objective [20]:
_L = log(σ(f_ (st, at, s[+]k [))) +][ log][(1][ −] _[σ][(][f]_ [(][s][t][, a][t][, s]k[−][)))][.] (2)
Here, the critic function f is a scoring function between a state-action encoder and state encoder.
**3.2** **Learning Goal-Conditioned Representations in Language Reward Models**
In this section, we describe how to efficiently learn goal-conditioned representations using decoderonly transformer models [63, 54]. Additionally, we describe how to use these representations as
Q-values that can be leveraged to detect generation errors and in guided decoding [33].
**Reward modeling through the bottleneck of a Q-function. Given the prompt x and a corresponding**
completion sequence y, we seek to learn a language reward model that both scores y and has
representations that can be used to approximate a goal-conditioned Q-function. This Q-function
_Q[π]yg_ [(][y] 0,...,t 0,...,t
_{_ _}[, y][t][+1][)][,][ t][ ∈]_ [[0][, ..., T] []][, captures the expected reward for a policy][ π][(][y][t][+1][|][y]{ _}[, x][)][, for]_
some goal state yg and next token yt+1, where rewards are defined as likelihood of achieving the
goal at the next token generation. Intuitively, the goal state could be an optimal generation from
the policy that a human would find satisfactory, such as the correct solution to a math problem or
a helpful response. The Q-function captures the expected rewards from choosing token yt+1 at the
current step, considering the prompt x and tokens generated so far y{0,...,t}.
We initialize our reward model, r, from an instruction-tuned causal LM. We consider this reward
model to be decomposed into two components: 1) feature extractor ϕ, which produces a representation
for a given token; 2) a reward projection head r[′] that predicts a reward score based on that token
representation. We use ϕ to obtain representations of our state-action pair, ht+1, and arbitrary future
states, hk:
_ht+1 = ϕ(yt+1_ _y_ 0,...,t _, x)_ and _hk = ϕ(yk_ _y_ 0,...,k 1 _, x)._ (3)
_|_ _{_ _}_ _|_ _{_ _−_ _}_
Hence, we parameterize our reward model as
_r(x, y) = r[′](ϕ(yT |y{0,...,T −1}, x))._ (4)
In practice, the feature extractor is the LM decoder and the representations that we use are the hidden
token representations prior to predicting an output token. Meanwhile, the reward projection is either
a linear layer or multi-layered perceptron (MLP) in our experiments.
With this reward model, we train the representations from the feature extractor ϕ using a contrastive
loss [20], which becomes:
= log(σ(f (y 0,...,t _, yt+1, yg[+][))) +][ log][ (1][ −]_ _[σ][(][f]_ [(][y] 0,...,t _g_ [)))][,] (5)
_L[C]_ _{_ _}_ _{_ _}[, y][t][+1][, y][−]_
where f (y{0,...,t}, yt+1, yk) is the cosine similarity between the representations of the state-action
pair (y 0,...,t _, yt+1) and a future state yk. In Equation 5, yg[+]_ [and][ y]g[−] [are our positive and negative]
_{_ _}_
-----
goal states, which we discuss how to obtain in practice later in this section. By forcing the reward
scores predicted by r[′] to depend on the representations from ϕ, which is learned via this contrastive
objective, the reward score incorporates a signal as to which actions are closer to a desired goal state.
We jointly train the reward model on the contrastive objective in Eq. 5 and the preference ranking
objective optimized by Eq. 1. Thus, our final objective is given by
_L = L[R]_ + λ · L[C], (6)
where λ is a hyperparameter to balance the tradeoff between optimizing the contrastive loss and the
rewards.
In the context of RLHF, given that we have preference labels between two or more generations for the
same prompt and recalling that L[C] approximates a Q-function, we can improve the term’s alignment
with human preferences by sampling positive goal states from preferred generations y[w] and negative
goal states from dispreferred generations y[ℓ].
**Computing contrastive loss in practice. Given a preference-ranking instance from a training batch,**
let the preferred completion, y[w], and dispreferred completion, y[ℓ], have lengths T and T _[′], respectively._
The representations of the positive state-action and goal state pair, (ys, yg[+][), are given by sampling]
two points, i, j ∼ [0, T ], i < j, from the preferred completion and retrieving hidden states h[w]i [and]
_h[w]j_ [. Similarly, the representations of the negative pair, (][y]s[′] [,][ y]g[−][), are given by sampling any index]
_m ∼_ [0, T ] from the preferred completion and any index n ∼ [0, T _[′]] from the dispreferred completion_
and then retrieving hidden states h[w]m [and][ h]n[ℓ] [. We note that source states][ h][w]i [and][ h]m[w] [are sampled]
independently and are not necessarily equal. The cosine similarities between positive and negative
pair representations are passed through the sigmoid function and averaged across the training batch.
We will refer to this method as the Single Goal State (SGS) contrastive loss.
**Computing Q-values. Once the reward model has been trained, we can use it to compute an**
additional Q-value, which gives us a measure proportional to the expected future rewards given
the current state and a specific action. The Q-value at a token t is the cosine similarity between
the trained reward model’s hidden state, ht, and a goal state. The the choice of goal state here is
flexible. For our experiments in Section 4.1, unless specified otherwise, we set the goal state to
be the mean representation of all the preferred completion in the reward model training set. Thus,
we do not require any additional annotations aside from the starting preference ranking dataset. In
experiments on LM steering, we construct a prototype to be used as the goal state. The prototype is
constructed using generations labeled as helpful, complex, coherent, and correct by annotators in
Nvidia’s HelpSteer dataset [66]. Out of the highly-scored generations, we sample a subset and use
the mean represetations of the sample as the goal state for the reward model. We ablate these choices
in Appendix F.
**4** **Experiments**
To explore our method, we experiment with two settings: First, mathematical reasoning with code
(Section 4.1), where LMs must use reasoning and coding to answer mathematical questions. The
ability to code has become increasingly important for LLMs as it enables a broad set of capabilities
for reasoning (e.g., tool use [57]). This line of experiments elucidates the ability of our method,
and the resulting representations, to guide and improve step-by-step reasoning capabilities. Second,
natural language alignment (Section 4.2), in which an LM’s responses must align with certain desired
properties, such as helpfulness and harmlessness. These experiments show the capacity of reward
models trained with our method to steer LM outputs in a more open-ended setting. In particular, we
focus on helpfulness, harmlessness, and other desirable characteristics (e.g., correctness, coherence).
**4.1** **Mathematical Reasoning with Code**
**4.1.1** **Experimental Setup**
For training reward models, we use the OpenMathInstruct-1 dataset [61], a math instruction tuning
dataset with 1.8M problem-solution pairs on the training sets of GSM8k and MATH, which are
generated by the Mixtral-8x7B model [32]. To form our reward model training set, we create pairs
of solutions for the same problem, where one solution is correct (i.e., preferred) and the other is
incorrect (i.e., dispreferred).
-----
Figure 2: AUROC scores comparing the baseline Codellama 7b Reward vs. our proposed method
Q-Function 7b Reward on the rewards attributed to the base-model greedy generations across several
math benchmarks.
Our base model is OpenMath-CodeLlama-7b-Python-hf [61], a strong decoder-only LM that has been
instruction tuned on OpenMathInstruct-1 to use a Python interpreter in order to solve mathematical
problems, which we add a reward head to predict reward scores (Section F). This model trained with
the traditional preference ranking objective serves as our baseline (Codellama 7b Reward), which
we compare with the same model trained using our method (Q-Function 7b Reward).
We evaluate on several popular math benchmarks. Since the problems in the preference ranking
dataset come from the training sets of GSM8k and MATH, we consider their respective test splits to
be in-distribution (ID). We also evaluate on test sets we consider to be out-of-distribution (OOD),
namely algebra222 [27], GSM-Hard [23], Asdiv [46], mawps [35], and svamp [52], to test for further
generalization.
Detailed information on data, models, and training can be found in Appendix A.
**4.1.2** **Reward Modeling Evaluations**
We obtain completions from the base model by greedy decoding, following prior work [61, 65], and
measure the reward score AUROC. This quantifies the ability of a model’s reward score to discern
between correct/incorrect sequences across different thresholds.
**Learned representations help reward scores discern correct/incorrect solutions. In Figure 2, we**
see that the reward model trained with our representation learning approach consistently achieves
higher AUROC scores compared to the model trained with only the standard preference ranking loss.
On the ID datasets, we see a gain of approximately 0.05 and 0.09. Meanwhile, on the OOD datasets,
the average improvement is 0.04, with maximum of roughly 0.08 on svamp. Although these datasets
are math problems, they vary in the types and difficulties of questions asked. Despite being OOD, we
see that our method improves the performance when judging these prompts and generations. Overall,
these results suggest that the representations learned using our approach help improve the reward
score predictions such that they are more accurate and generalizable.
**These representations also help reward scores judge which partial solutions are promising. Part**
of the motivation and design of our approach is to encode the likelihood of reaching a certain goal
state at intermediate steps in the generated sequence. To examine how well our method accomplishes
this, we measure reward score AUROC when the model is only shown up to a certain percentage of
the generated sequence (i.e., percentile). Here, we show this on the ID datasets, where we consider the
50 sample generations from the base model and plot the mean AUROC across the 50 samples. Plots
for other benchmarks can be found in Appendix C. Looking at Figure 3, we find that the AUROC
scores for both reward models tend to increase as they see more of the generated sequence, but
Q-Function 7b Reward largely maintains higher AUROC across different percentiles and generally
-----
Figure 3: AUROC scores on the rewards attributed to partial base-model generations across 50
samples on GSM8k and MATH. The error bars depict the 95% confidence intervals (with sample size
_n = 50) at each percentile of generation considered. The Q-Function reward model has incremental_
increase in performance with more information, whereas, the traditional reward model’s performance
is a lot more varied in attributing intermediate rewards.
GSM8k
Accuracy (%) Prop. Correct (%) Avg. Filtered (%) Avg. Tokens Saved (%)
Maj. @ 50 84.8 74.9 - -
Filtered **86.0** **84.0** 35.9 25.5
MATH
Accuracy (%) Prop. Correct (%) Avg. Filtered (%) Avg. Tokens Saved (%)
Maj. @ 50 55.6 40.2 - -
Filtered **59.6** **52.0** 65.6 55.6
Table 1: Majority vote versus Q-value filtering performance. Accuracy shows the final majority voting
accuracy on the respective benchmarks. Prop. Correct is the proportion of the correct class in the
final sample set considered. Avg. Filtered is the average percentage (out of 50 total generations) that
were discarded across all problems in the benchmark. Avg. Tokens Saved is the average percentage
of tokens that are saved with the assumption that the remaining tokens after the first negative Q-value
is discarded across all the problems in the benchmark. Both the difference in accuracy and proportion
of correct class are t-test significant.
exhibits fewer significant drops between percentiles. These improvements across percentiles imply
that the reward scores do indeed better capture which generated sequence are likely to be correct or
incorrect.
**4.1.3** **Directly Using the Learned Representations**
Our experiments thus far illustrate the effect of the learned representations on the reward scores.
Here, we turn our attention to utilizing these representations directly to explore some of the encoded
properties. In these experiments, we filter generations from the base model using the Q-values (i.e.,
cosine similarities between intermediate and goal state representations). We take 50 base model
generations for all the GSM8k and MATH test problems, which are provided by Toshiniwal et al.,
and we filter any generation that has a Q-value less than zero [61]. We compare this against a baseline
of majority voting over the 50 generations.
**Q-values can prune incorrect sequences. We find that filtering based on Q-values provides**
meaningful improvements in accuracy compared to majority voting over all 50 samples, based on
Table 1. Specifically, the majority vote accuracy improves by 1.2% and 3.9% on GSM8k and MATH,
respectively. Looking at the increased proportion of correct sequences after filtering, it appears that
sequences with the negative Q-values generally correspond to incorrect sequences and are filtered
out in this setting, thereby improving the accuracy. On the other hand, we also see that the average
percentage of sequences that are filtered out can be quite large, such as 65.6% on the MATH dataset,
-----
**Problem: In a dance class of 20 students, 20% enrolled in contemporary dance,**
25% of the remaining enrolled in jazz dance, and the rest enrolled in hip-hop
dance. What percentage of the entire students enrolled in hip-hop dance?
Let ' s solve this problem using Python code .
< ll m - code >
# 2 0 % en rolled in contemporary dance
cont empor ary _ d ance _ percent age = 2 0
# 2 5 % of the remaining en rolled in jazz dance
j azz _ d ance _ percent age = 2 5
# the rest en rolled in hip - h op dance
hip _ h op _ d ance _ percent age = 1 0 0 - ( cont empor ary _ d ance _ percent \
...... age + jazz _ d ance _ percent age )
...
5 5 . 0 0 0 0 0 0 0 0 0 0 0 0 0 1
</ ll m - code - output >
Th us \ box ed [ 5 5 \% ] of the students en rolled in hip - h op dance .
**Problem: Toula went to the bakery and bought various types of pastries. She**
bought 3 dozen donuts which cost 68 per dozen, 2 dozen mini cupcakes which
cost 80 per dozen, and 6 dozen mini cheesecakes for 55 per dozen. How much
was the total cost?
Let ' s solve this problem using Python code .
< ll m - code >
don uts _ per _ do zen = 1 2
don uts _ price _ per _ do zen = 6 8
don uts _ cost = don uts _ per _ do zen * don uts _ price _ per _ do zen * 3
...
8 3 2 8
</ ll m - code - output >
Th us the total cost of the past ries is \ box ed [ 8 3 2 8 ] dollars .
Figure 4: Two examples from the GSM8K dataset that was filtered via the Q-value pruning. The
token level Q-values are portrayed as a heat map where the colors red and blue represents scores
close to −1 and 1, respectively. Both examples illustrate that the Q-values pinpoint the exact logical
error in reasoning. The full version of these examples can be found in Appendix D
and some correct solutions can also be filtered out. Nevertheless, using the Q-values based on these
learned representations can help improve the accuracy of such majority voting schemes.
**Q-values can detect errors in reasoning. Experiments on rewards scores for partial sequences**
(Section 4.1.2) suggest that the learned representations help capture the likelihood that a sequence
will reach a goal state from an intermediate step. We can also qualitatively explore this property using
the Q-values and the representations themselves. In Figure 4, we show examples that are filtered
based on the Q-value filtering process above. In the left example, the Q-value drops when there is a
mistake of multiplying the “donuts_price_per_dozen” by “donuts_per_dozen”. The Q-value
in the right example similarly shifts once the “jazz_dance_percentage” is naively added to the
“contemporary_dance_percentage”. Overall, these examples illustrate the potential effectiveness
of using the representations and resulting Q-values for localizing errors and identifying promising
sequences.
**4.1.4** **Aligning Policy Models with RLHF**
We investigate whether the reward model trained with our proposed method provides a better reward
signal when performing RLHF, which is an important usage of these reward models. We compare
the math problem solving performance of a policy model that is aligned using a preference-ranking
reward model versus our Q-Function reward model in the Proximal Policy Optimization (PPO) [58]
algorithm. The prompt dataset is given by problems in the training sets of GSM8k and MATH. To
introduce diversity in the prompts, which has been shown to improve PPO, we also add problems
from the MathQA dataset - a large scale dataset with real world math problems [4]. We keep the
settings and hyperparameters during PPO constant, and further details are in Appendix A.
Table 2 shows that, given the same settings, utilizing a reward model trained with our method
improves policy model accuracy across several benchmarks. We see an average gain, weighted by
benchmark size, of 1.7%. Intuitively, this shows that the improvements we observe in reward scores
(Section 4.1.2) do translate to improvements in the policy model via RL, which further supports the
value of the representations learned via our approach. Although, the gains in accuracy are not as
significant compared to those observed with the reward scores. A possible explanation for this is that,
due to experimental constraints such as additional data collection, the preference-ranking dataset
used for reward model training are off-policy Mixtral-8x7B model [32] generations. Hence, a clear
next step for this setting would be to retrain the reward model with on-policy preference-rankings
and iterate.
**4.2** **Natural Language Alignment**
While fine-grained feedback (and subsequently, Q-values) are most sensible in reasoning environments with explicitly incorrect logical steps, a natural extension of the experiment is to apply our
approach to reward models in the natural language setting, where we expect models to learn improved
representations with respect to their alignment towards human preferences. We first explore this
for training a reward model for helpfulness and harmlessness [8]. Then, we explore if the learned
representations can be further used for steering language model at inference time, via guided decoding,
-----
Model IID OOD
GSM8k MATH algebra222 GSM-Hard Asdiv mawps svamp
Codellama Base 75.9 43.6 65.6 60.1 77.7 93.5 79.6
Codellama PPO 79.3 43.4 65.8 61.1 77.4 91.6 78.5
Q-Function PPO **80.5** **45.1** **70.9** **62.7** **79.5** **93.6** **81.2**
Table 2: Accuracy of the policy model trained via PPO using a preference ranking reward model vs
our Q-Function reward model. We present average accuracy across 4 independent runs. The base
model results are also shown as a reference point presented in [61]. The full table with confidence
intervals is in the appendix (Table 6).
to generate sequences that are more helpful, complex, or coherent. The backbone of our reward
model is Llama 3-8B-Instruct [1], and detailed information can be found in Appendix B.2.
**4.2.1** **Improving Reward Modeling for Helpfulness and Harmlessness**
We use the Helpful-Harmless dataset [8] for training and evaluation. We select this dataset because
of it’s large scale, established representation in the RLHF community, and general purpose nature
which is helpful to avoid restricting our natural language reward model towards a specific domain.
Similar to our previous experiments, we compare training with our approach versus standard reward
model training. Table 3 (left) shows that training with our method also improves reward scores on
this natural language data, increasing the ability of the reward model to discern between helpful and
unhelpful responses.
**4.2.2** **Reward Model Representations can Steer Language Models Towards Helpful**
**Generations**
A practical extension of learning stronger natural language representations is using the extensions for
steering the LM. To test this, we experiment with guided decoding [33, 69] with Llama-3-8B-Instruct,
which defines a decoding objective function to guide inference during beam search. In traditional
guided decoding (and our SFT baseline), the confidence score of the model is extracted from an
additional inference step on each beam. In our method (Q-Function Prototype), the confidence score
is the cosine similarity between the sequence embedding from the reward model and the hidden state
[20, 2]. We construct the prototype by embedding 20 examples that score highly in all categories
(helpfulness, correctness, coherence, complexity) using HelpSteer, a model alignment and steering
dataset [66]. The reward model is trained using HelpfulHarmless as per previous sections [8]. Details
for guided decoding are in the Appendix B.
Table 3 (right) shows our results, and evaluation is done via LLM-as-a-judge (GPT-4) [73]. Using
the representations fine-tuned by the contrastive loss improves across all categories and generally
steers generations towards more helpful generations. We find that complexity improves the most,
suggesting that the generations produced by the using the prototype helps the model avoid beams
that are too simple. Suprisingly, the correctness of the responses is increased also, suggesting that
although the SFT prototype usually prefers beams that are simple and correct, the prototype succeeds
at picking beams with added complexity without sacrificing correctness or helpfulness.
Model Helpful Correct. Coherent Complex.
SFT 45.2 43.9 41.8 39.2
Prototype **54.8** **56.1** **58.3** **60.8**
Model HH
Llama 8b Reward 68.2
Q-Function 8b Reward **70.5**
Table 3: Comparison of model performances in different settings. (a) Reward model accuracy on the
Helpful-Harmless test set. Our result highlights that the representations learned by the Q-function are
helpful for discerning between desirable and undesirable sequences in the natural language setting.
(b) Winrate (%) vs. SFT model. In the Q-Function prototype setting, we use the cosine similarity of
the sequence and the prototype (made from 20 examples that score high on all 4 categories) to score
each beam. In the SFT setting, the model is fine-tuned on the same examples as the prototype and
guided decoding is run via model scores. Evaluation is done via GPT-4.
-----
**5** **Limitations and Future Work**
**Deriving more informative goal states. In Appendix F, we demonstrate the importance of having a**
precise and meaningful goal state. At train time, our SGS method achieves this by looking at future
tokens of the preferred and dispreferred completions. During inference, however, we take the mean
representation of multiple goal state completions from the training data. Although we see that the
SGS method generalizes despite this discrepancy, future work could investigate other methods to
derive more precise inference-time goal states.
**Partial Q-values correlate with partial reward scores. An interesting observation, shown in**
Appendix G, is that the reward scores for partial completions are correlated with the Q-values. While
our experiments show that this can benefit reward scores, especially for math reasoning, disentangling
these values may be desirable in some instances. For example, one may want to incorporate the
completeness of the generated sequence in the reward score, such as for creative writing, meaning
incomplete sequences should strictly get low reward scores. However, currently, partial sequences
may still score highly if they are likely to reach a goal state. This is expected as the final reward score
is derived by a linear projection from the hidden representation, while the Q-values are derived as
the cosine similarity between that same representation and goal state. In Appendix G, we explore
the effects of learning an MLP projection for the rewards to help decouple the two signals. Further
disentangling these signals in the representations could be an interesting direction for future work.
**Further advancing policy models. A clear line of future work here is to integrate the additional**
Q-value signal that we gain from this method into the RL algorithm to further improve policy model
training. Future work should also investigate the impacts of iteratively retraining the reward model
on on-policy completions and the policy model on the updated reward model.
**6** **Conclusion**
Learning proper representations in the reward model is essential for the model’s generalization and
ability to produce accurate rewards during RLHF. We introduce a method of applying goal-conditioned
Q-functions to learn representations that capture the expected reward for a goal-conditioned policy.
On math benchmarks, such as GSM8k, we improve performance and show that the reward model has
a greater ability to discern between correct and incorrect solutions. On natural language alignment
experiments, we show improvement in reward model performance and that the embedding can be
used for natural language steering. Our findings show that using Q-values during reward model
training can improve the representations of the reward model and suggest promising future directions
for further advancing language models via RLHF.
**References**
[[1] Meta AI. Introducing meta llama 3: The most capable openly available llm to date. https:](https://ai.meta.com/blog/meta-llama-3/)
```
//ai.meta.com/blog/meta-llama-3/.
```
[2] Arwa Alanqary, Gloria Lin, Joie Le, Zhi-Xuan Tan, Vikash K. Mansinghka, and Joshua B.
Tenenbaum. Modeling the mistakes of boundedly rational agents within a bayesian theory of
mind. ArXiv, abs/2106.13249, 2021.
[3] Reza Yazdani Aminabadi, Samyam Rajbhandari, Minjia Zhang, Ammar Ahmad Awan, Cheng
Li, Du Li, Elton Zheng, Jeff Rasley, Shaden Smith, Olatunji Ruwase, and Yuxiong He. Deepspeed inference: Enabling efficient inference of transformer models at unprecedented scale,
2022.
[4] Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh
Hajishirzi. Mathqa: Towards interpretable math word problem solving with operation-based
formalisms. CoRR, abs/1905.13319, 2019.
[5] Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder,
Bob McGrew, Josh Tobin, OpenAI Pieter Abbeel, and Wojciech Zaremba. Hindsight experience
replay. Advances in neural information processing systems, 30, 2017.
[6] Raghuram Mandyam Annasamy and Katia Sycara. Towards better interpretability in deep
q-networks. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages
4561–4569, 2019.
-----
[7] Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy
Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. A general language assistant as a
laboratory for alignment. arXiv preprint arXiv:2112.00861, 2021.
[8] Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn
Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless
assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862,
2022.
[9] Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones,
Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai:
Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073, 2022.
[10] Andrea Banino, Adria Puigdomenech Badia, Jacob C Walker, Tim Scholtes, Jovana Mitrovic,
and Charles Blundell. CoBERL: Contrastive BERT for reinforcement learning. In International
_Conference on Learning Representations, 2022._
[11] André Barreto, Will Dabney, Rémi Munos, Jonathan J. Hunt, Tom Schaul, David Silver, and
H. V. Hasselt. Successor features for transfer in reinforcement learning. ArXiv, abs/1606.05312,
2016.
[12] Diana Borsa, André Barreto, John Quan, Daniel Jaymin Mankowitz, Rémi Munos, H. V.
Hasselt, David Silver, and Tom Schaul. Universal successor features approximators. ArXiv,
abs/1812.07626, 2018.
[13] Elliot Chane-Sane, Cordelia Schmid, and Ivan Laptev. Goal-conditioned reinforcement learning
with imagined subgoals. In International Conference on Machine Learning, pages 1430–1440.
PMLR, 2021.
[14] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework
for contrastive learning of visual representations. In International conference on machine
_learning, pages 1597–1607. PMLR, 2020._
[15] Ching-An Cheng, Andrey Kolobov, Dipendra Misra, Allen Nie, and Adith Swaminathan.
Llf-bench: Benchmark for interactive learning from language feedback. _arXiv preprint_
_arXiv:2312.06853, 2023._
[16] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John
Schulman. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168,
2021.
[17] Tri Dao. Flashattention-2: Faster attention with better parallelism and work partitioning, 2023.
[18] Alexey Dosovitskiy and Vladlen Koltun. Learning to act by predicting the future. ArXiv,
abs/1611.01779, 2016.
[19] Simon S. Du, Sham M. Kakade, Ruosong Wang, and Lin F. Yang. Is a good representation sufficient for sample efficient reinforcement learning? In 8th International Conference on Learning
_Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020._
[20] Benjamin Eysenbach, Tianjun Zhang, Sergey Levine, and Ruslan Salakhutdinov. Contrastive
learning as goal-conditioned reinforcement learning. Advances in Neural Information Process_ing Systems, 2022._
[21] Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine, and Pieter Abbeel. Deep
spatial autoencoders for visuomotor learning. In 2016 IEEE International Conference on
_Robotics and Automation (ICRA), pages 512–519. IEEE, 2016._
[22] Carlos Florensa, Jonas Degrave, Nicolas Manfred Otto Heess, Jost Tobias Springenberg, and
Martin A. Riedmiller. Self-supervised learning of image embedding for continuous control.
_ArXiv, abs/1901.00943, 2019._
[23] Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan,
and Graham Neubig. Pal: Program-aided language models. arXiv preprint arXiv:2211.10435,
2022.
[24] Carles Gelada, Saurabh Kumar, Jacob Buckman, Ofir Nachum, and Marc G. Bellemare. DeepMDP: Learning continuous latent space models for representation learning. In Kamalika
-----
Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Confer_ence on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages_
2170–2179. PMLR, 09–15 Jun 2019.
[25] Dibya Ghosh and Marc G. Bellemare. Representations for stable off-policy reinforcement
learning. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International
_Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research,_
pages 3556–3565. PMLR, 13–18 Jul 2020.
[26] Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen,
David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, Robert Kern,
Matti Picus, Stephan Hoyer, Marten H. van Kerkwijk, Matthew Brett, Allan Haldane, Jaime Fernández del Río, Mark Wiebe, Pearu Peterson, Pierre Gérard-Marchant, Kevin Sheppard, Tyler
Reddy, Warren Weckesser, Hameer Abbasi, Christoph Gohlke, and Travis E. Oliphant. Array
programming with NumPy. Nature, 585(7825):357–362, September 2020.
[27] Joy He-Yueya, Gabriel Poesia, Rose E Wang, and Noah D Goodman. Solving math word problems by combining language models with symbolic solvers. arXiv preprint arXiv:2304.09102,
2023.
[28] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn
Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset.
_Advances in neural information processing systems, 33, 2021._
[29] Jian Hu, Xibin Wu, Weixun Wang, Xianyu, Dehao Zhang, and Yu Cao. Openrlhf: An easy-touse, scalable and high-performance rlhf framework, 2024.
[30] Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo,
David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary
tasks. In International Conference on Learning Representations, 2017.
[31] Ashish Jaiswal, Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Debapriya Banerjee, and Fillia
Makedon. A survey on contrastive self-supervised learning. Technologies, 9(1):2, 2020.
[32] Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris
Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand,
et al. Mixtral of experts. arXiv preprint arXiv:2401.04088, 2024.
[33] Minbeom Kim, Hwanhee Lee, Kang Min Yoo, Joonsuk Park, Hwaran Lee, and Kyomin Jung.
Critic-guided decoding for controlled text generation. arXiv preprint arXiv:2212.10938, 2022.
[34] Hannah Rose Kirk, Alexander Whitefield, Paul Rottger, Andrew M. Bean, Katerina Margatina,
Juan Ciro, Rafael Mosquera, Max Bartolo, Adina Williams, He He, Bertie Vidgen, and Scott A.
Hale. The prism alignment project: What participatory, representative and individualised human
feedback reveals about the subjective and multicultural alignment of large language models.
_ArXiv, abs/2404.16019, 2024._
[35] Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi.
Mawps: A math word problem repository. pages 1152–1157, 01 2016.
[36] Andreas Kopf, Yannic Kilcher, Dimitri von Rutte, Sotiris Anagnostidis, Zhi Rui Tam, Keith
Stevens, Abdullah Barhoum, Nguyen Minh Duc, Oliver Stanley, Rich’ard Nagyfi, ES Shahul,
Sameer Suri, David Glushkov, Arnav Dantuluri, Andrew Maguire, Christoph Schuhmann, Huu
Nguyen, and Alexander Mattick. Openassistant conversations - democratizing large language
model alignment. ArXiv, abs/2304.07327, 2023.
[37] Tomasz Korbak, Kejian Shi, Angelica Chen, Rasika Vinayak Bhalerao, Christopher Buckley,
Jason Phang, Samuel R Bowman, and Ethan Perez. Pretraining language models with human
preferences. In International Conference on Machine Learning, pages 17506–17533. PMLR,
2023.
[38] Minae Kwon, Sang Michael Xie, Kalesha Bullard, and Dorsa Sadigh. Reward design with
language models. arXiv preprint arXiv:2303.00001, 2023.
[39] Misha Laskin, Kimin Lee, Adam Stooke, Lerrel Pinto, Pieter Abbeel, and Aravind Srinivas.
Reinforcement learning with augmented data. 33:19884–19895, 2020.
[40] Tor Lattimore, Csaba Szepesvari, and Gellert Weisz. Learning with good feature representations in bandits and in RL with a generative model. In Hal Daumé III and Aarti Singh,
-----
editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of
_Proceedings of Machine Learning Research, pages 5662–5670. PMLR, 13–18 Jul 2020._
[41] Charline Le Lan, Stephen Tu, Adam Oberman, Rishabh Agarwal, and Marc G. Bellemare. On the
generalization of representations in reinforcement learning. In Gustau Camps-Valls, Francisco
J. R. Ruiz, and Isabel Valera, editors, Proceedings of The 25th International Conference on
_Artificial Intelligence and Statistics, volume 151 of Proceedings of Machine Learning Research,_
pages 4132–4157. PMLR, 28–30 Mar 2022.
[42] Yitao Liang, Marlos C Machado, Erik Talvitie, and Michael Bowling. State of the art control of
atari games using shallow reinforcement learning. arXiv preprint arXiv:1512.01563, 2015.
[43] Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan
Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step, 2023.
[44] Guoqing Liu, Chuheng Zhang, Li Zhao, Tao Qin, Jinhua Zhu, Li Jian, Nenghai Yu, and TieYan Liu. Return-based contrastive representation learning for reinforcement learning. In
_International Conference on Learning Representations, 2021._
[45] Jacob Menick, Maja Trebacz, Vladimir Mikulik, John Aslanides, Francis Song, Martin Chadwick, Mia Glaese, Susannah Young, Lucy Campbell-Gillingham, Geoffrey Irving, et al. Teaching
language models to support answers with verified quotes. arXiv preprint arXiv:2203.11147,
2022.
[46] Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su. A diverse corpus for evaluating and
developing english math word problem solvers, 2021.
[47] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G
Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al.
Human-level control through deep reinforcement learning. nature, 518(7540):529–533, 2015.
[48] Ofir Nachum, Shixiang Shane Gu, Honglak Lee, and Sergey Levine. Data-efficient hierarchical
reinforcement learning. In Neural Information Processing Systems, 2018.
[49] Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. Webgpt: Browser-assisted
question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021.
[50] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin,
Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to
follow instructions with human feedback. Advances in neural information processing systems,
35:27730–27744, 2022.
[51] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan,
Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas
Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy,
Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style,
high-performance deep learning library, 2019.
[52] Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are nlp models really able to solve simple
math word problems?, 2021.
[53] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel,
P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher,
M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine
_Learning Research, 12:2825–2830, 2011._
[54] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language
understanding by generative pre-training. 2018.
[55] Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and
Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model.
_Advances in Neural Information Processing Systems, 36, 2024._
[56] Joshua Robinson, Ching-Yao Chuang, Suvrit Sra, and Stefanie Jegelka. Contrastive learning
with hard negative samples. arXiv preprint arXiv:2010.04592, 2020.
[57] Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Eric Hambro,
Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can
teach themselves to use tools. Advances in Neural Information Processing Systems, 35, 2023.
-----
[58] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal
policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[59] Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, and
Sergey Levine. Time-contrastive networks: Self-supervised learning from video. 2018 IEEE
_International Conference on Robotics and Automation (ICRA), pages 1134–1141, 2017._
[60] Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec
Radford, Dario Amodei, and Paul F Christiano. Learning to summarize with human feedback.
_Advances in Neural Information Processing Systems, 33:3008–3021, 2020._
[61] Shubham Toshniwal, Ivan Moshkov, Sean Narenthiran, Daria Gitman, Fei Jia, and Igor Gitman.
Openmathinstruct-1: A 1.8 million math instruction tuning dataset. arXiv preprint arXiv:
_Arxiv-2402.10176, 2024._
[62] Frederik Träuble, Andrea Dittadi, Manuel Wuthrich, Felix Widmaier, Peter Vincent Gehler,
Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard Schölkopf, and Stefan Bauer.
Representation learning for out-of-distribution generalization in reinforcement learning. In
_ICML 2021 Workshop on Unsupervised Reinforcement Learning, 2021._
[63] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,
Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information
_processing systems, 30, 2017._
[64] Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, Stéfan J. van der
Walt, Matthew Brett, Joshua Wilson, K. Jarrod Millman, Nikolay Mayorov, Andrew R. J. Nelson, Eric Jones, Robert Kern, Eric Larson, C J Carey, [˙]Ilhan Polat, Yu Feng, Eric W. Moore, Jake
VanderPlas, Denis Laxalde, Josef Perktold, Robert Cimrman, Ian Henriksen, E. A. Quintero,
Charles R. Harris, Anne M. Archibald, Antônio H. Ribeiro, Fabian Pedregosa, Paul van Mulbregt, Aditya Vijaykumar, Alessandro Pietro Bardelli, Alex Rothberg, Andreas Hilboll, Andreas
Kloeckner, Anthony Scopatz, Antony Lee, Ariel Rokem, C. Nathan Woods, Chad Fulton,
Charles Masson, Christian Häggström, Clark Fitzgerald, David A. Nicholson, David R. Hagen,
Dmitrii V. Pasechnik, Emanuele Olivetti, Eric Martin, Eric Wieser, Fabrice Silva, Felix Lenders,
Florian Wilhelm, G. Young, Gavin A. Price, Gert-Ludwig Ingold, Gregory E. Allen, Gregory R.
Lee, Hervé Audren, Irvin Probst, Jörg P. Dietrich, Jacob Silterra, James T Webber, Janko Slaviˇc,
Joel Nothman, Johannes Buchner, Johannes Kulick, Johannes L. Schönberger, José Vinícius
de Miranda Cardoso, Joscha Reimer, Joseph Harrington, Juan Luis Cano Rodríguez, Juan NunezIglesias, Justin Kuczynski, Kevin Tritz, Martin Thoma, Matthew Newville, Matthias Kümmerer,
Maximilian Bolingbroke, Michael Tartre, Mikhail Pak, Nathaniel J. Smith, Nikolai Nowaczyk,
Nikolay Shebanov, Oleksandr Pavlyk, Per A. Brodtkorb, Perry Lee, Robert T. McGibbon,
Roman Feldbauer, Sam Lewis, Sam Tygier, Scott Sievert, Sebastiano Vigna, Stefan Peterson,
Surhud More, Tadeusz Pudlik, Takuya Oshima, Thomas J. Pingel, Thomas P. Robitaille, Thomas
Spura, Thouis R. Jones, Tim Cera, Tim Leslie, Tiziano Zito, Tom Krauss, Utkarsh Upadhyay,
Yaroslav O. Halchenko, and Yoshiki Vázquez-Baeza. Scipy 1.0: fundamental algorithms for
scientific computing in python. Nature Methods, 17(3):261–272, February 2020.
[65] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha
Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language
models. arXiv preprint arXiv:2203.11171, 2022.
[66] Zhilin Wang, Yi Dong, Jiaqi Zeng, Virginia Adams, Makesh Narsimhan Sreedhar, Daniel
Egert, Olivier Delalleau, Jane Polak Scowcroft, Neel Kant, Aidan Swope, and Oleksii Kuchaiev.
Helpsteer: Multi-attribute helpfulness dataset for steerlm. ArXiv, abs/2311.09528, 2023.
[67] Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A.
Smith, Mari Ostendorf, and Hannaneh Hajishirzi. Fine-grained human feedback gives better
rewards for language model training, 2023.
[68] Yuxi Xie, Kenji Kawaguchi, Yiran Zhao, Xu Zhao, Min-Yen Kan, Junxian He, and Qizhe Xie.
Decomposition enhances reasoning via self-evaluation guided decoding, 2023.
[69] Yuxi Xie, Kenji Kawaguchi, Yiran Zhao, Xu Zhao, MingSung Kan, Junxian He, and Qizhe Xie.
Self-evaluation guided beam search for reasoning. Advances in Neural Information Processing
_Systems, 2023._
-----
[70] Jing Xu, Megan Ung, Mojtaba Komeili, Kushal Arora, Y-Lan Boureau, and Jason Weston.
Learning new skills after deployment: Improving open-domain internet-driven dialogue with
human feedback. arXiv preprint arXiv:2208.03270, 2022.
[71] Jiaqi Yang, Wei Hu, Jason D. Lee, and Simon Shaolei Du. Impact of representation learning in
linear bandits. In International Conference on Learning Representations, 2021.
[72] Amy Zhang, Clare Lyle, Shagun Sodhani, Angelos Filos, Marta Kwiatkowska, Joelle Pineau,
Yarin Gal, and Doina Precup. Invariant causal prediction for block MDPs. In Hal Daumé III and
Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning,
volume 119 of Proceedings of Machine Learning Research, pages 11214–11224. PMLR, 13–18
Jul 2020.
[73] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang,
Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Haotong Zhang, Joseph Gonzalez, and Ion Stoica.
Judging llm-as-a-judge with mt-bench and chatbot arena. ArXiv, abs/2306.05685, 2023.
[74] Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei,
Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences.
_arXiv preprint arXiv:1909.08593, 2019._
**A** **Mathematical Reasoning with Code Details**
In this appendix, we discuss further experimental details for our results in Section 4.1.
**A.1** **Reward Model Training and Evaluation Details**
**Preference ranking dataset.** To construct the preference ranking dataset, we used the
OpenMathInstruct-1, a math instruction tuning dataset with 1.8M problem-solution pairs generated by the Mixtral-8x7B model. The questions come from a blend of the training subsets for GSM8k
and MATH. We matched up generations that had the final answer output correct vs. incorrect as the
preferred and dis-preferred generation. Some problems are used several times, but all preferred and
all dis-preferred solutions are unique. In total, we curated 257, 584 total preference rankings over a
total of 12, 518 unique problems.
**Prompt format. To prompt the base model to answer a math question, we use the same Nemo**
prompt format that was used to fine-tune the model. This prompt format was used both for reward
model training, PPO training, and all evaluations. The following is the prompt format:
System:
You’re an expert Python programmer and mathematician. Help the user to solve this problem using
code when necessary. Make sure to put the answer (and only answer) inside \boxed[].
User: {problem}
Assistant:
where {problem} is replaced with a math problem from the dataset.
**OpenRLHF library. We implement the reward model training with contrastive loss and the baseline**
of standard preference ranking using the OpenRLHF library [29] in combination with PyTorch [51].
We made the following modification to the OpenRLHF Library:
- A new contrastive loss method that computes the contrastive objective
- Optionally add auxiliary loss to the reward model training objective if experimenting with
our reward model method
- Optionally change the reward model reward head from a linear projection to a multi-layer
perceptron. The MLP layer is a simple 3 fully connect layer with ReLU activation in
between, all with the same hidden size as the representation dimension.
-----
**Reward model training parameters. We train both the baseline reward model and the Q-Function**
reward model as well as any reward model ablations for 1 epoch with batch size of 64.
hyperparameter and settings value
batch size 64
learning rate 9 × 10[−][6]
representation dimension 4096
contrastive lambda (λ) 0.5
num epoch 1
preference ranking loss function sigmoid
optimizer Adam
seed 42
Table 4: Hyper-parameters for Reward Model training
The hyper-parameter for the contrastive lambda of 0.5 was chosen by doing a sweep of 0.1, 0.5, 0.9,
and found that the 0.5 setting had the most stable to training loss as well as training rewards for
preferred vs. unpreferred completions.
**Reward model evaluation. We evaluate the Reward Model performance on the test splits of**
several math benchmarks: GSM8K [16], MATH [4], algebra222 [27], GSM-Hard [23], Asdiv [46],
mawps [35], and svamp [52]. To get a set of completions on each problem in these set of benchmarks,
we use the greedy generation of the base model along with annotations of whether each completion
provides the correct answer, which are provided by Toshiniwal et al. [61]. Then for each completion,
we first format using the Nemo format template described earlier, and use the Reward Model to
predict a reward score. In predicting the reward score, we also retrieve the last hidden state of the
Reward Model as well as the predicted reward score for each token in the completion, not including
the prompt tokens. Utilizing the Reward Model predicted reward score and the annotation of the
correctness of each completion, we compute an AUROC score using the Python scikit-learn package
[53]. Using the same procedure, we plot the partial AUROC scores at every tenth percentile of each
completion. The AUROC scores are compared between Q-Function 7b Reward (our method) and
Codellama 7b Reward and the results are summarized in Section 4.1.2.
**A.2** **Learned Representations Experiment Details**
**Computing Q-values. In order to retrieve the Q-values, we take the cosine similarity between the**
goal state and the hidden states for each token, which is retrieved during the forward pass when
collecting the reward scores. The goal state in all the experiments is the mean representation of all
the preferred completion in the preference ranking dataset. More explicitly, we pass each preferred
problem and response pair through the trained Reward Model and then we take the average across the
last hidden state of the final token of each problem and response pair.
**Filtering sampled generations. In order to understand the value of these learned representations,**
we use these Q-values to filter generations. For each of the In-Distribution datasets, GSM8K [16]
and MATH [4], we use the 50 completion samples provided by Toshiniwal et al. [61]. We take
each completion and we discard it if the Q-value for any of the tokens is less than 0, which indicates
that the expected future rewards from that state is low. This is performed for each problem and each
generation sample. Using the final set of remaining completions, we then compute the majority vote
accuracy, comparing it against the baseline of majority vote at 50. Furthermore, in order to visualize
these Q-values, we plot a heat map of these values on the corresponding tokens. The results and plots
from this experiment is detailed in Section 4.1.3.
**A.3** **RLHF Training and Evaluation Details**
In these set of experiments, we investigate whether the reward model trained with our proposed
method provides a better reward signal when performing RLHF. Here we explain further experimental
details for the results in Section 4.1.4.
**Prompt dataset. The prompt dataset for PPO comprises of the training problems from GSM8k and**
MATH, which constitutes a total of 12,518 problems [16, 28]. We also incorporate all the problems
-----
from the MathQA dataset [4], which has 37,297 unique problems. Thus, in total, the prompt dataset
for PPO has 49,815 unique problems. In addition, at each training step of the policy model, we add a
small pretraining coefficient. The SFT dataset for the pre-training coefficient is comprised of all the
preferred completions in the preference ranking dataset.
**PPO training parameters. We also use the OpenRLHF package and PyTorch for PPO training**
[29, 51]. No major code changes were made to the OpenRLHF package, with the exception of the
dataset loader for our custom prompt dataset. Table 5 summarizes the hyperparameters used for all
PPO training reported in this paper.
hyperparameter and settings value
train batch size 128
number of episodes 1
rollout batch size 512
training epochs 1
actor learning rate 3 × 10[−][6]
critic learning rate 9 × 10[−][6]
init kl coeff 0.02
normalize reward true
generation max length 1024
GAE gamma 1.0
GAE lambda 0.95
policy model temperature 1
pretraining coefficient 0.05
learning rate 9 × 10[−][6]
representation dimension 4096
contrastive lambda (λ) 0.5
optimizer Adam
Table 5: Hyper-parameters for PPO training
**Policy model evaluation. We evaluate the policy-trained model on the test set on all the aforemen-**
tioned math benchmarks. For each problem in these benchmarks, we process the input to use the
prompt format detailed above and perform zero shot evaluation. We perform 4 independent runs to
account for variance. In order to perform this evaluation, we utilize the Nemo-Skill [61] package. As
a pre-requesite to performing evaluation on this package, we first had to convert the model to the
Nemo model format. Table 6 displays the average accuracy scores across 4 independent runs, as well
as the 95% confidence interval. The variability across the 4 runs comes from random seeding, which
influences the initialization of the Value Model as well as dataset shuffling.
Reward Model IID OOD
GSM8K MATH algebra222 GSM-Hard Asdiv mawps svamp
Q-FunctionCodellama 79.380.5±±0.30.2 **45.143.4±±0.10.2** **70.965.8±±1.71.6** **62.761.1±±0.50.3** **79.577.4±±0.50.2** 91.693.6±±0.30.3 **81.278.5±±0.40.9**
Table 6: Average accuracy and 95% confidene interval after the base policy model was trained via the
PPO algorithm using a preference ranking reward model and our Q-Function reward model, across 4
different independent runs.
**Code execution setup. Our sandbox environment is constructed using Python 3.10 and includes**
necessary mathematical libraries such as NumPy to support the execution of LM-generated code
[26]. This environment is containerized and deployed across a Kubernetes cluster with 64 replicas,
designed to handle the training and evaluation workload in parallel. Each replica is allocated 1 CPU
and 1 Gi of memory to efficiently manage the computational demands. Each sandbox establishes
communication and accepts requests through a Flask server.
**Code execution security considerations. In our current evaluation setup, the code generated by**
the Language Model is primarily focused on solving mathematical questions and originates from
trusted sources, allowing us to maintain a controlled and relatively secure environment. However,
-----
extending this methodology to broader applications, where code is executed from varied or unknown
sources, significantly increases security concers such as malware intrusion and unauthorized data
access. To mitigate these risks, it is imperative to create a sandbox environment to isolate the code
execution from the regular operational environment. Key protocols include enforcing least privilege
in the sandbox to limit permissions, minimizing external communications to reduce attack surfaces,
and rigorously validating both incoming code and outgoing artifacts to ensure they adhere to security
best practices.
**A.4** **Compute and Statistical Significance**
For the reward model training and PPO, we used a single node, 8 gpus, and 88 CPU cores, 80 Gi GPU
memory and 1TB system memory. The total reward model training time ranged between 6-7 hours
and the total PPO training was approximately 13-14 hours. We additionally used flash attention 2 and
deepseed with zero_stage 3, for reward model training, and zero_stage 2 for PPO training [17, 3].
Moreover, all the experiment results presented are statistically significant, performed by the scipy
library [64]. Where applicable, all error bars, whether a table or plot, are 95% confidence intervals,
calculated via numpy package [26].
**B** **Natural Language Alignment Details**
In this appendix, we discuss further experimental details for our results in the Section 4.2.
**B.1** **Training a Natural Language Reward Model**
**Dataset. To train our natural language reward model baseline and contrastive reward model, we use**
the training set of Helpful-Harmless [8]. Helpful-Harmless is a popular natural language alignment
dataset created by Anthropic with human-assistant dialog chains. Each pair contains a chosen and
rejected response, where the chosen response generally aligns on axes of helpfulness and harmlessness.
This dataset contains 169,352 rows where each row contains a pair of chosen and rejected samples.
**Training. For training the reward model, we use the OpenRLHF [29] package and Pytorch. No**
major code changes were made to the OpenRLHF package, and we use the dataset loader given by
OpenRLHF to load the Helpful-Harmless dataset for training. Table 7 shows the hyperparameters
used during training for reproducibility.
hyperparameter and settings value
batch size 64
learning rate 1 × 10[−][4]
representation dimension 4096
contrastive lambda (λ) 0.5
num epoch 1
preference ranking loss function sigmoid
optimizer Adam
seed 0
Table 7: Hyper-parameters for Natural Language Reward Model training
**Evaluation. To evaluate the natural language reward model, we use the evaluation script in Open-**
RLHF [29]. We evaluate on the test set of Helpful-Harmless. For each chosen, rejected pair, we count
the reward model score as correct if the chosen generation has a higher score than the reject pair.
**B.2** **Guided Decoding with Contrastive Reward Model**
**Dataset. For guided decoding, we consider several different datasets. We considered OpenAssistant**
[36], a large corpus of assistant dialog and Prism-Alignment [34], a conversation alignment dataset.
We also considered HelpSteer [66], a dataset created by Nvidia with labels on how helpful, coherent,
correct, verbose, and complex a given response is. We settled on HelpSteer as our dataset for
-----
experimentation due to the relatively high-quality of the dataset (human-collected vs automatic) and
the fine-grained category labels. The dataset contains 37,120 examples in the training set.
**Guided decoding. We do not further train the model, but rather randomly sample 20 examples**
from each of the fine-grained category labels in HelpSteer: coherent, complexity, correctness, and
helpfulness. We do not use the verbosity category because the hidden states are averaged at inference
time, thus, there is a worry that we would lose the information needed to identify a sequence as
verbose. We use SelfEval Guided Decoding [68] for running the decoding experiments. Equation 7
shows the objective function ε(s[1:][T] ) used in guided decoding beam search.
_ε(s[1:][T]_ ) =
_PLM_ (s[t] _s[1:][t][−][1])ρC(s[t])_ (7)
_|_
_PLM is the language model distribution, C(s[t]) is the confidence score, and ρ is a hyperparameter._
For the steering experiments, we set the max number of tokens per beam to 52, the beam search
temperature to 0, temperature as 0.2, the number of samples to 2, and the confidence ratio to 0 e.g.
equal weight to the confidence score and LM loss.
**Evaluation. We evaluate on the test set of HelpSteer, which contains 1790 examples. For evaluation,**
we use GPT-4, asking GPT-4 to compare the SFT model and the model with the decoding prototype
which generation is more helpful, correct, coherent, and complex. We ask GPT-4 to give a single
token (0 or 1). For statistical significance, we run GPT-4 5 times for each evaluation example and take
the majority. In addition, we find that for a small percent of cases GPT-4 will not product a single
token even when asked. In this case, we throw away the example that did not produce a properly
formatted response when computing averages.
**B.3** **Compute and Statistical Significance**
Natural language alignment training jobs were done on 1 core, 8 GPUs, 88 CPU cores, 1TB system
memory, and 80GB GPU memory. An average training time for 1 episode of each prompt was around
13-14 hours.
All experiments are statistically significant. For the Helpful-Harmless experiments, the experiments
are evaluated on a test set of 8.55k rows. For HelpSteer, the validation set is 1.79k rows.
-----
**C** **Partial Completion Rewards**
In this appendix, we show the plots for the remaining out-of-distribution datasets, namely, algebra222,
Asdiv, GSM-Hard, mawps, and svamp. The general trend observed in Section 4.1.2 still holds, where,
Q-Function 7b Reward model incrementally improves as percentile of model generation considered
increases. On the other hand, there is a lot more variability in Codellama 7b Reward partial auroc
scores. More importantly, the performance when considering partial solutions is generally more
accurate for the Q-Function 7b Reward. This demonstrates that the improved judgement of partial
solutions that we observed for the in-distribution tasks extend to out-of-distributions tasks as well.
Figure 5: AUROC scores on the rewards attributed to partial base-model greedy generations on the
OOD datasets.
-----
**D** **Q-Value Visualization**
In this appendix, we display the full Q-value heat map for the examples shown in Figure 4. We also
show additional examples that show the Q-values correctly scoring reasoning errors as well as some
examples where the Q-values are noisy, unable to explicitly pin-point the logical error.
**D.1** **Good Examples**
**Problem: Toula went to the bakery and bought various types of pastries. She bought 3 dozen donuts**
which cost 68 per dozen, 2 dozen mini cupcakes which cost 80 per dozen, and 6 dozen mini
cheesecakes for 55 per dozen. How much was the total cost?
Let ' s solve this problem using Python code .
< ll m - code >
don uts _ per _ do zen = 1 2
don uts _ price _ per _ do zen = 6 8
don uts _ cost = don uts _ per _ do zen * don uts _ price _ per _ do zen * 3
cup c akes _ per _ do zen = 1 2
cup c akes _ price _ per _ do zen = 8 0
cup c akes _ cost = cup c akes _ per _ do zen * cup c akes _ price _ per _ do zen * 2
che es ec akes _ per _ do zen = 1 2
che es ec akes _ price _ per _ do zen = 5 5
che es ec akes _ cost = che es ec akes _ per _ do zen * che es ec akes _ price _ per _ do zen * 6
total _ cost = don uts _ cost + cup c akes _ cost + che es ec akes _ cost
total _ cost
</ ll m - code >
< ll m - code - output >
8 3 2 8
</ ll m - code - output >
Th us the total cost of the past ries is \ box ed [ 8 3 2 8 ] dollars .
**Problem: Marissa is hiking a 12-mile trail. She took 1 hour to walk the first 4 miles, then**
another hour to walk the next two miles. If she wants her average speed to be 4 miles
per hour, what speed (in miles per hour) does she need to walk the remaining distance?
Let ' s solve this problem using Python code .
< ll m - code >
distance _ tra ve led = 4 + 2
time _ t aken = 1 + 1
aver age _ speed = 4
rem aining _ distance = 1 2 - distance _ tra ve led
time _ rem aining = remaining _ distance / average _ speed
time _ rem aining
</ ll m - code >
< ll m - code - output >
1 . 5
</ ll m - code - output >
Th us Mar issa needs to walk at \ box ed [ 8 ] miles per hour to achieve her average speed of 4 miles per hour .
-----
**Problem: In a dance class of 20 students, 20% enrolled in contemporary dance, 25% of the**
remaining enrolled in jazz dance, and the rest enrolled in hip-hop dance. What percentage of the
entire students enrolled in hip-hop dance?
Let ' s solve this problem using Python code .
< ll m - code >
# 2 0 % en rolled in contemporary dance
cont empor ary _ d ance _ percent age = 2 0
# 2 5 % of the remaining en rolled in jazz dance
j azz _ d ance _ percent age = 2 5
# the rest en rolled in hip - h op dance
hip _ h op _ d ance _ percent age = 1 0 0 ( cont empor ary _ d ance _ percent age + jazz _ d ance _ percent age )
# total number of students
total _ stud ents = 2 0
# number of students en rolled in hip - h op dance
hip _ h op _ d ance _ stud ents = total _ stud ents * hip _ h op _ d ance _ percent age / 1 0 0
# percentage of students en rolled in hip - h op dance
hip _ h op _ d ance _ percent age _ of _ total = hip _ h op _ d ance _ stud ents / total _ stud ents * 1 0 0
hip _ h op _ d ance _ percent age _ of _ total
</ ll m - code >
< ll m - code - output >
5 5 . 0 0 0 0 0 0 0 0 0 0 0 0 0 1
</ ll m - code - output >
Th us \ box ed [ 5 5 \% ] of the students en rolled in hip - h op dance .
**Problem: When Freda cooks canned tomatoes into sauce, they lose half their volume. Each 16**
ounce can of tomatoes that she uses contains three tomatoes. Freda’s last batch of tomato sauce made
32 ounces of sauce. How many tomatoes did Freda use?
Let ' s solve this problem using Python code .
< ll m - code >
# each can of tom atoes has 3 tom atoes
tom atoes _ per _ can = 3
# each can of tom atoes has 1 6 oun ces
oun ces _ per _ can = 1 6
# Fred a ' s last batch of sau ce made 3 2 oun ces
s au ce _ made = 3 2
# Fred a used M can s of tom atoes
M = sau ce _ made / oun ces _ per _ can
# Fred a used N tom atoes
N = M * tom atoes _ per _ can
N
</ ll m - code >
< ll m - code - output >
6 . 0
</ ll m - code - output >
Th us Fred a used \ box ed [ 6 ] tom atoes .
-----
**D.2** **Noisy Examples**
**Problem: Lee used to be able to run the 400-meter hurdles two seconds faster than Gerald would run**
the 400-meter hurdles. But Gerald changed his diet, which improved his speed by 10%. If Lee runs
the 400-meter hurdles in 38 seconds, how fast can Gerald, with his improved diet, run the 400-meter
hurdles, in seconds?
Let ' s solve this problem using Python code .
< ll m - code >
lee _ time = 3 8
lee _ speed = 4 0 0 / le e _ time
ger ald _ speed = le e _ speed - 2
ger ald _ speed _ with _ im prov ement = ger ald _ speed * 1 . 1
ger ald _ time = 4 0 0 / ger ald _ speed _ with _ im prov ement
ger ald _ time
</ ll m - code >
< ll m - code - output >
4 2 . 6 4 8 7 0 9 3 1 5 3 7 5 9 7 4
</ ll m - code - output >
Th us Ger ald can run the 4 0 0 - meter hur d les in \ box ed [ 4 3 ] seconds .
**Problem: John drives for 3 hours at a speed of 60 mph and then turns around because he realizes he**
forgot something very important at home. He tries to get home in 4 hours but spends the first 2 hours
in standstill traffic. He spends the next half-hour driving at a speed of 30mph, before being able to
drive the remaining time of the 4 hours going at 80 mph. How far is he from home at the end of those
4 hours?
Let ' s solve this problem using Python code .
< ll m - code >
# John drives for 3 hours at a speed of 6 0 m ph
distance _ tra ve led _ first _ part = 3 * 6 0
# then turns around because he real izes he forgot something very important at home
distance _ tra ve led _ second _ part = 0
# tries to get home in 4 hours but sp ends the first 2 hours in stand st ill traffic
distance _ tra ve led _ second _ part += 2 * 0
# sp ends the next half - hour driving at a speed of 3 0 m ph
distance _ tra ve led _ second _ part += 0 . 5 * 3 0
# before being able to drive the remaining time of the 4 hours going at 8 0 m ph
distance _ tra ve led _ second _ part += 2 * 8 0
# total distance tra ve led
total _ distance _ tra ve led = distance _ tra ve led _ first _ part + distance _ tra ve led _ second _ part
total _ distance _ tra ve led
</ ll m - code >
< ll m - code - output >
3 5 5 . 0
</ ll m - code - output >
Th us John is \ box ed [ 3 5 5 ] miles from home at the end of those 4 hours .
-----
**E** **Exploring Sources of Goal States**
A natural question about our approach is: What is the impact on the choice of goal state represen_tation? For sampling negative goal states, we experiment with randomly sampling any arbitrary_
completion (not necessasarily the same prompt) vs. the corresponding dispreferred completion to
show the effect of picking completions that have similar contexts and invalid reasoning steps when
constructing the contrastive loss. Furthermore, we experiment with taking the average of the hidden
states of the positive and negative goal states across the train batch and then apply cosine similarity
with the current token representation. The results of these experiments are in Table 8.
Method IID OOD
GSM8k MATH algebra222 GSM-Hard Asdiv mawps svamp
Pref. Rank 0.805 0.618 0.850 0.768 0.829 0.775 0.761
Q-Function (SGS) **0.852** **0.708** **0.879** **0.781** **0.854** **0.819** 0.841
Q-Function (RS) 0.801 0.626 0.832 0.716 0.723 0.760 0.802
Q-Function (AVG) 0.835 0.707 0.851 0.765 0.839 0.773 **0.863**
Table 8: Comparison of AUROC scores for different training time contrastive goal states on several
popular math benchmarks. Pref. rank is the reward model trained with the standard preference-ranking
objective, Q-Function (SGS) is trained with the method described in Section 3.2, Q-Function (RS)
trained with the randomly sampled goal-state and Q-Function (AVG) trained with the batch averaged
goal state.
The ablation in Table 8 shows that the positive and negative goal state representations we use are
largely more effective than randomly sampled and batch averaged goal representations. Somewhat
similar observations have been made for other contrastive learning approaches where negatives,
especially hard-negatives, can play an important role [14, 56]. We see that when we randomly sample
positive and negative goal states, there is a clear regression in performance on the OOD datasets,
even below the standard preference ranking trained reward model. Meanwhile, the batch averaged
goal states offers competitive but largely lower performance compared to our final setup. Given these
results, we use the SGS contrastive loss method as our final setup.
**F** **Source State and Goal State Sampling Ablation**
In this section, we discuss ablations to source and goal state sampling. In computing the contrastive
loss during training, we need to sample a positive source and goal state pair as well as a negative
source and goal state pair. To investigate how we should sample these tokens, we explore 3 different
approaches.
**Method 1: Random Source and Goal State. In this approach, the positive and negative source states**
and sampled randomly from the preferred response, then the positive goal state is sampled randomly
amongst any token after the positive source state, and last the negative goal state is randomly sampled
anywhere from the dis-preferred response. This is the method we describe in .
**Method 2: Late Goal State. In this approach, we sample the positive and negative goals states only**
towards the end of the preferred and dis-preferred response, respectively. The intuition here is that
the token later in the sequence encode more information about the whole response and thus provide
more information as a goal state. In practice, we restrict the goal state sampling to come after the
90th percentile with respect to the total number of tokens in the response.
**Method 3: Late Source and Goal State. In this approach, we sample both the source state and**
goal state towards the end of the preferred and dis-preferred response (after 90th percentile of tokens
generated).
-----
Sampling Method IID OOD
GSM8k MATH algebra222 GSM-Hard Asdiv mawps svamp
Random Source and Goal State **0.852** **0.708** **0.879** **0.781** 0.854 0.819 **0.841**
Late Goal State 0.841 0.696 0.821 0.775 **0.860** 0.729 0.761
Late Source and Goal State 0.829 0.643 0.867 0.779 0.829 **0.831** 0.833
Table 9: Comparison of AUROC scores for different source and goal state sampling method.
The ablation in Table 9 shows that the Random Source and Goal State sampling on average performs
the best across both in-distribution and out-of-distribution tasks. We also see that with Late Goal
State sampling, we have similar performance to Random Source and Goal State sampling for the
in-distribution tasks but we notice a performance drop across several out-of-distribution benchmarks.
The opposite is true for the Late Source and Goal State sampling where performance on the indistribution benchmark drop and the performance out-of-distribution benchmarks are comparable
to the random sampling. Given these results, we use the Random Source and Goal State sampling
method as our final setup, which is equivalent to the (SGS) method referred to in the rest of the paper.
**G** **Reward Projection Ablation**
In this section, we show high correlation between our learned Q-values and reward scores for partial
completions, as illustrated in Table 10. The partial completions are from the base model on the
GSM8K test split [16]. This is expected since, by design, the reward model outputs are dependent on
the learned features that approximate the Q-function. This presents a potential risk in practice that
the policy model may exploit during PPO. We explore attempting to decouple the reward score from
the Q-value for partial completions by projecting the hidden features which approximate the Q-value
through a multilayer perceptron. Table 10 and Table 11 compares the result of projecting the hidden
features via a MLP layer versus a linear layer. The finding is we can reduce correlation between
rewards and Q-values for partial completions while maintaining high reward model performance.
Iterative retraining of the reward model on on-policy completions would further mitigate any potential
reward model hacking given that our complete loss function also considers the preference ranking
paired loss in 1.
Method IID OOD
GSM8k MATH algebra222 GSM-Hard Asdiv mawps svamp
Pref. Rank 0.805 0.618 0.850 0.768 0.829 0.775 0.761
Q-Function (Linear) **0.852** **0.708** **0.879** **0.781** 0.854 **0.819** 0.841
Q-Function (MLP) 0.832 0.693 0.871 0.770 **0.871** 0.780 **0.841**
Table 10: Comparison of AUROC scores for different reward projection strategy on several popular
math benchmarks. Both Q-Function (Linear) and Q-Function (MLP) are trained with the method
described in Section 3.2, the only difference being the reward head is a single linear layer versus a
3-layer MLP followed by a linear layer.
Model Percentile
0.2 0.4 0.6 0.8
Q-Function (Linear) 0.983 0.986 0.988 0.988
Q-Function (MLP) 0.870 0.908 0.921 0.915
Table 11: Correlation between Q-value and reward score at different percentile of model generations.
We see that the Q-value and reward scores for Q-Function (Linear) has higher correlation than
Q-Function (MLP).
-----
**H** **Examples of Prototype Guided Decoding**
**Question: Consider this reference information delimited in """:**
There is a man who circles the perimeter with a baby in his arms unmoving. Locusts
burn with the silhouettes of saints at dusk. Saints are in the cloud. We are in a dry
storm. The man extends his circles pulling the baby through the cactus scrub. Look
at his melting trainers in the heat, they arent what he asked for. There are black
leather skids on the dry stone wall. People in black cloaks run out of the corner
of your eye. A pig turns on a spit. The prairie is a terrarium for the blaze but the
edge is dry of fire. It is the height of one season, bushes burn. A burnt five-year-old
without eyelids turns quick cartwheels through the heat wave under the big pale
sky, black and blue.
What type of poem is this text from?
**SFT: This text is a prose poem.**
**Prototype: This text is a prose poem, which is a type of poetry that combines the elements of prose**
with those of poetry to create a unique and imaginative style of writing.
The guided decoding successfully produces a response that is still correct while adding complexity
to the response by describing that the poetry combines elements of prose and imaginative writing
styles. We find that, in general, guided decoding responses produce more complex responses without
sacrificing correctness.
**Question: Reference: <start of reference> If you invested 1,000 in Facebook in 2012, here’s how**
much you’d have now Even given the stock market’s recent volatility, a 1,000 investment in Facebook
10 years ago would still prove to be a good bet: You could have quadrupled your money. An initial
investment in Facebook in 2012, when it made its initial public offering, would be worth slightly
more than 4,600 as of Wednesday, according to CNBC calculations. The investment would have
seen a lifetime total return of more than 360 percent, including price appreciation and dividend gains
reinvested. The market’s recent activity has taken a toll, of course: In one month, Facebook’s stock
dipped from 5,051. Its current portfolio value is just above 30 percent now, down from its peak of
more than 40 percent. And the market keeps ebbing and flowing. In the charts below, all data splits
are adjusted and gain-loss figures do not include dividends, interest, distributions or fees except on
cash accounts. The portfolio value represents current holdings and the comparison charts represent
current and historical prices of individual benchmarks, stocks or exchange-traded funds. Meanwhile,
daily users in the United States and Canada dropped in 2017 to 184 million daily average users in the
fourth quarter from 185 million in the third quarter. Facebook chief financial officer David Wehner, on
the company’s earnings call, said usage would "fluctuate" based on market penetration. The revenue
increase despite the usage drop could be due to seasonal changes related to the advertising market. As
the company points out, users in the fourth quarter accounted for 6.39 billion of Facebook’s revenue,
compared to the 5.03 billion users accounted for in the third quarter. Facebook has retreated slightly
from its record price reached earlier this month as investors debate whether changes to the platform to
combat fake news and other problems will hurt advertising growth. But the stock is still up 1 percent
on the year. And some investors remain bullish about the stock. "News feed changes do not impair
the power of the platform," Andy Hargreaves, an analyst at KeyBanc Capital Markets, wrote in a note
to clients. "Facebook offers reach, targeting and ad-unit quality that is difficult to match, and they
are the underlying drivers of its growth. As long as [it] continues to improve its effectiveness, the
foundation for strong growth should remain intact, regardless of adjustments to the news feed." <end
of reference> Instruction: What is a share of stock?
**SFT: A share of stock is a unit of ownership in a company.**
-----
**Prototype: A share of stock is a unit of ownership in a company that represents the right to a**
proportionate share of the company’s assets and profits.
In this response, the prototype response is slightly more correct as it describes the value of a share of
stock. In addition, the prototype response is more complex than the SFT response.
**Question: What is the difference between a pound and an ounce? Please explain with examples.**
**SFT: A: A pound is a unit of measurement for weight, while an ounce is a unit of measurement for**
volume.
**Prototype: A pound is equal to 0.45359237 kilograms, while an ounce is equal to 28.3495 grams. A:**
A pound is a unit of weight in the metric system, while an ounce is a unit of weight in the English
system.
Here, we find that the SFT response is incorrect. The prototype response gives the correct response,
but produces some additional information at the beginning of the response that does not answer the
question. This can be seen as a side effect of the guided decoding process since a more complex
response is not always desired.
**Question: What are some common misunderstandings about the causes and consequences of the**
2008 global financial crisis?
**SFT: One is that it was caused by a lack of regulation in the financial sector, when in reality, the crisis**
was caused by a combination of factors, including excessive risk-taking by financial institutions, poor
risk management practices, and a lack A: There are a few common misunderstandings about the
causes and consequences of the 2008 global financial crisis.
**Prototype: Some believe that the crisis was caused by a lack of regulation or irresponsible lending**
practices. Some also believe that the crisis was caused by a lack of regulation or irresponsible lending
practices. In reality, the crisis was the result of complex and interconnected events that took place
over many years. There are many misconceptions about the 2008 financial crisis, including the belief
that it was caused by a lack of regulation or irresponsible lending practices.
The prototype response is more coherent than the SFT response in this example. In the SFT response,
there is a lack of coherence because the response is separated into two distinct sections, seperated by
"A:". However, this issue does not exist in the prototype response. Each sentence builds off the last,
for example "Some believe..." followed by "in-reality".
-----
**Question: Reference: <start of reference> As a casino player, it would be amazing to predict when**
a slot machine will next hit. You could time exactly when you play and how much to bet perfectly,
maximising your winnings. The problem? If this was widely possible, no casino would exist. Theyd
all go bankrup straight away. Instead, modern slot machines are designed to give determined results,
that are not generated at random. It is incredibly difficult to guess what the result of a spin will be.
There may be slot strategis that might improv your chances of winning. But any legitimate methods
won’t include being able to know when a slot is about to drop. Yet there’s still a lot of talk about
this online, with “tips” and “triks” on how to predict when a slot machine will next payout. We’ll
explain what really happens, along with how you can give yourself a slightly better chance of winning
legitimatly. You cannot tell when a modrn slot machine will hit because the outcome of each spin
is random. No matter how many times a machine has spun, and no matter what the outcomes of
those spins were, the probability the next result remains the same. But people often believe otherwise.
One famous example of why is the gambler’s fallacy. The gambler’s fallacy is an erroneous way of
thinking about the probability of independent events. When people commit the gambler’s fallacy,
they think that the likelihood of a random event is impacted by the occurrence of previous events, or
series of previous events. For example, believing that the result of previous spins on a slot will impact
the result of present or future spins is committing the gambler’s fallacy. The result is not affected by
what’s happened before. So if you win or lose one round, it has no impact on whether you’ll win or
lose the next round. <end of reference> Instruction: Rewrite the passage correcting any grammar and
spelling mistakes that have been made. Also omit any false information that the passage may contain.
Make sure to keep the same tone and message of the original passage.
**SFT: Casino players believe they can predict when a slot machine will next hit, but this is not possible.**
**Prototype: You could time exactly when you play and how much to bet perfectly, maximising your**
winnings. The problem? If this was widely possible, no casino would exist. They’d all go bankrup
straight away. Instead, modern slot machines are designed to give determined results, that are not
generated at random. It is incredibly difficult to guess what the result of a spin will be. There may be
slot strategis that might improv your chances of winning. As a casino player, it would be amazing to
predict when a slot machine will next hit.
The prototype response gives more detail including additional reasoning: "modern slot machines are
designed to give determined results". Interestingly, this response is generally incorrect, as modern
slot machines will give random results that have deterministic rules (for example, no jackpot right
after another jackpot). This shows that, while the prototype can help improve correctness, it can still
suffer from hallucination.
**I** **Assets**
In Table 12, we summarize assets used in this paper along with their corresponding URLs and licenses.
Asset Name URL License Type
Llama 3 [1] `https://github.com/meta-llama/llama3` Llama 2 Community License
OpenMath CodeLlama [61] `https://huggingface.co/nvidia/OpenMath-CodeLlama-7b-Python` Llama 2 Community License
OpenRLHF [29] `https://github.com/OpenLLMAI/OpenRLHF` Apache-2.0
NeMo-Skills [61] `https://github.com/Kipok/NeMo-Skills` Apache-2.0
OpenMathInstruct-1 [61] `https://huggingface.co/datasets/nvidia/OpenMathInstruct-1` NVIDIA License
GSM8k [16] `https://huggingface.co/datasets/gsm8k` MIT License
MATH [28] `https://github.com/hendrycks/math` MIT License
algebra222 [27] `https://huggingface.co/datasets/sirdug/Algebra222` Apache-2.0
GSM-Hard [23] `https://huggingface.co/datasets/reasoning-machines/gsm-hard` MIT License
Asdiv [46] `https://github.com/chaochun/nlu-asdiv-dataset` CC BY-NC 4.0
mawps [35] `https://huggingface.co/datasets/MU-NLPC/Calc-mawps` MIT License
svamp [52] `https://huggingface.co/datasets/ChilleD/SVAMP` MIT License
Helpfulness and Harmlessness [8] `https://huggingface.co/datasets/Anthropic/hh-rlhf` MIT License
HelpSteer [66] `https://huggingface.co/datasets/nvidia/HelpSteer` CC-BY-4.0
Table 12: Asset information and licences.
-----
| [
"Vaskar, Nath",
"Dylan, Slack",
"Jeff, Da",
"Spencer, Whitehead",
"Yuntao, Ma",
"Sean, Hendryx",
"Hugh, Zhang"
] | 2024-07-18T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2407.13887 | https://arxiv.org/abs/2407.13887 | https://www.semanticscholar.org/paper/4da57cd1976b769384e664b13fd2387ac6ce861e |
Learning Goal-Conditioned Representations in Reward Models for Aligning Language Models | Representation learning is important for the success of Reinforcement Learning (RL) algorithms, but has been less explored for Language Model (LM) alignment with Reinforcement learning from Human Feedback (RLHF).In this work, we present a simple yet effective approach to improve the representations learned by reward models for aligning LMs.Our method uses a contrastive loss that encourages reward models to learn goal-conditioned representations which encode the expected reward at intermediate steps of the input sequence.By enforcing this loss on representations from intermediate steps, we can capture which trajectories are likely to reach a desired goal (e.g., correct solution or helpful response) at different points in the sequence.This method is flexible enough to support different kinds of alignment data and does not require extra annotations.We demonstrate the effectiveness of this approach in 2 domains: mathematical reasoning and natural language alignment.On math benchmarks, such as GSM8k, we show that our approach improves the reward model's ability to discern between correct/incorrect solutions, increasing AUROC score by up to 0.11 points, and that the learned representations can help prune undesirable generations.Using this reward model to improve a policy model via RLHF yields accuracy gains of 1.7\% across several math benchmarks when compared to a standard preference-ranking trained reward model.Additionally, we show the that learned representations can be used to steer LMs toward generations that are more aligned with human preferences via guided decoding.Overall, our study underscores the potential of incorporating feedback signals in RLHF frameworks via learned representations, which we believe is a promising avenue for improving the alignment of LLMs. | null | null | null | null | NeurIPS 2024 | true | 0 | 0 | null | https://neurips.cc/virtual/2024/poster/95067 | null | null |
Learning alignment between formal & informal mathematics | N/A | null | # Learning alignment between formal & informal mathematics
Kshitij Bansal[1] and Christian Szegedy[2]
1 Google Research
```
[email protected]
```
2 Google Research
```
[email protected]
## 1 Introduction
```
In this talk, we explore the possibility of training an alignment model between informal and
formal mathematical corpora in a semi-supervised manner. Though there is a lot of informal
mathematics available in natural language (textbooks, papers), the fully formalized and computer checked mathematical content is limited. Availability of alignment information between
the two is even further limited. That said, an alignment model between formal and informal
mathematics would be essential for the task of autoformalization [3] and could result in dramatically growing the corpus of formalized mathematics. This could open up the possibility for an
open-endedly improving system by training proof-guidance and alignment models in lockstep.
We look into the currently available resources for bootstrapping such a system, and share our
findings.
## 2 Learning an alignment model
Unsupervised (and weakly-supervised) neural approaches to machine translation relying on
learning semantic representations for languages and an alignment model between them have
shown great promise (e.g. [4]). We look into various aspects from the point of view of incorporating such ideas for learning an alignment model between formal and informal mathematics.
One of the key aspects is learning semantic representations from large unstructured corpora
in a self-supervised manner. On the natural language side (generally, not specifically for mathematics) this is a well-studied area with a lot of progress over the past few years alone [2, 6, 8].
In general, research has established that training current deep neural network based models on
proxy-tasks for natural language modeling can be fine-tuned to several downstream tasks such
as machine translation, semantic search, sentiment analysis and question answering. Moreover,
these tasks did not need as large amounts of data, yet yielded significant gains. For mathematics, on the informal side, there is also significant semantic information in the formulas,
equations, diagrams, etc. which would be crucial to leverage for autoformalization work. The
availability of large (unlabeled) corpora of informal mathemetics is not necessarily an issue,
even if work is required for collecting such datasets for our puprose.
Perhaps less systematically explored and established, nevertheless, various works on theorem
proving using neural approaches have looked into learning semantic representations on the
formal side. Examples include tasks such as predicting the relevance of premises for proving
a statement [1], predicting latent representations of rewrites [5], and labeling a formula with
symbols using its structure alone [7]. One can argue that formal mathematical content is even
more amenable to unsupervised pretraining as there is a larger number of conceivable selfsupervised tasks than in the case of natural language processing. For that, we can leverage
-----
Learning alignment between formal & informal mathematics Bansal and Szegedy
the well-defined graph structure of formulas and the ability to systematically transform them
(using, say, rewrite rules and substitutions).
Given the success of unsupervised pretraining on the natural language side and encouraging
initial results of semantic embeddings of formal mathematical content, the main task that
remains is to train an alignment model between the two sets of embeddings. One key idea is to
use cycle consistency [10]. We are especially inspired by its use for learning machine translation
models on non-aligned corpora [4]. We propose a similar approach in conjunction with requiring
that the translations should utilize similar notions. We explore models that translate natural
language text to formal mathematical content (in HOL Light) and vice versa, with several
constraints: after back and forth translation the embedding of the resulting statement should
stay close to the input in the embedding space; put a loss on the network to enforce that the
set of notions referred to by the two translations contain similar notions; and we maximize the
probability of the translated sentence looking natural (or being a valid formal sentence). Using
these constraints (that is, a combination of the associated losses) we have trained sequence-tosequence models based on the transformer network [9] with end-to-end backpropagation.
To summarize, using corpora derived from formalization efforts in HOL Light proof assistant
on the formal side, we will dicuss the different aspects of the approach:
_• sources of datasets,_
_• language models for informal mathematics including formulas/equations,_
_• semantic embedding models and discussion of training tasks for formal mathemetics,_
_• training translation models with the various (cycle consistency, notion-similarity and nat-_
urality) requirements,
_• neural network architecture choices and_
_• qualitative evaluation of our first alignment and translation models._
## References
[1] Alex A Alemi, Francois Chollet, Niklas Een, Geoffrey Irving, Christian Szegedy, and Josef Urban.
Deepmath-deep sequence models for premise selection. arXiv preprint arXiv:1606.04442, 2016.
[2] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep
bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[3] Cezary Kaliszyk, Josef Urban, and Jiˇr´ı Vyskoˇcil. Automating formalization by statistical and
semantic parsing of mathematics. In International Conference on Interactive Theorem Proving,
pages 12–27. Springer, 2017.
[4] Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. Unsupervised
machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043, 2017.
[5] Dennis Lee, Christian Szegedy, Markus N Rabe, Sarah M Loos, and Kshitij Bansal. Mathematical
reasoning in latent space. arXiv preprint arXiv:1909.11851, 2019.
[6] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[7] Miroslav Olˇs´ak, Cezary Kaliszyk, and Josef Urban. Property invariant embedding for automated
reasoning. arXiv preprint arXiv:1911.12073, 2019.
-----
Learning alignment between formal & informal mathematics Bansal and Szegedy
[8] Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. _arXiv preprint_
_arXiv:1802.05365, 2018._
[9] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,
Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information
_processing systems, pages 5998–6008, 2017._
[10] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international confer_ence on computer vision, pages 2223–2232, 2017._
-----
| [
"Kshitij, Bansal",
"Christian, Szegedy"
] | 2020-01-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
Learning theorem proving through self-play | N/A | null | # Learning theorem proving through self-play
Stanis law Purga l
University of Innsbruck, Innsbruck, Tirol, Austria
```
[email protected]
## 1 Introduction
```
This work attempts to apply the AlphaZero algorithm [4] to theorem proving. Following the
philosophy of learning without using any human–generated datasets we attempt to learn to
prove theorems without using any database of proofs or theorems. This is different from other
attempts at ML-guided theorem proving in [2], [3], [1].
The only input we expect before starting training is the set of axioms we can use in our
proofs — no theorems or conjectures.
## 2 The theorem-construction game
In the game we are using to learn theorem proving, one player constructs a provable theorem
and the other player tries to prove it:
Construct a theorem
Prove the theorem Adversary wins
Prover wins
The goal of the adversary is to construct such a theorem, that the prover will fail to prove
it. Because of the way the construction works, this theorem will have to be provable.
In the game we use prolog-like terms, where a term can be either a variable, or a pair of an
atom and a list of subterms. In the examples we use the convention of marking variables with
capital letters, and denoting compound terms and an atom name followed by a list of subterms
in brackets (skipped when the list is empty).
Eg. node(A, leaf).
The construction game is defined for a given set on inference rules. An inference rule is a
pair of a term and a list of terms, that can share variables.
Eg. tree(node(A, B)) ← `tree(A), tree(B).`
A state here is a pair consisting of a list of terms that need to be proven and an information
about which player is now in control. During its move a player can choose one of the given
inference rules, and apply it to the first term of the list. The left side of the rule is then unified
-----
Learning theorem proving through self-play Stanis law Purga l
with that term. If the unification fails, the player making the move loses. If it succeeds, the
term is removed from the list, and the right side of the rule (after unification) is added.
The first player (called adversary) starts the game with a list consisting of a single variable
term. It then proceeds to “prove” it using the inference rules. As it is a variable, to begin
with any inference rule can be applied. When the list is empty (meaning that the theorem was
proven), the variable we started with will be unified with some theorem. This theorem is then
given to the other player, after replacing every remaining variable with a fresh ground atom.
The second player then tries to prove the theorem, winning when the list is empty.
To ensure termination of the game, during every move there is a small chance that the
player making the move will immediately lose, so that every game will end with probability 1.
## 3 Monte Carlo tree search modification
The AlphaZero [4] algorithm utilizes the Monte Carlo Tree Search (MCTS) to estimate state
values and policies. As it is used there it works well, when getting a sure value of a game state
is almost impossible. However, when players don’t take turns, and instead can make several
moves in a row, it’s possible to find a path to a winning state, and prove with certainty the
value of state without searching infeasibly large state space.
To allow propagation of sure state values in our implementation of MCTS we keep track of
upper and lower bound for every state. In a non-final game state these are simply (1) and (−1)
(as the reward is always somewhere between −1 and 1), but in a final state they are both equal
to the outcome of the game. These bounds are then propagated up the tree, in accordance
with state ownership (with which player is making a move in which state). This assures that if
the tree search finds a certain way for one player to win in state s, the value of this state will
become exactly 1.
It is worth pointing out that finding a winning path in the MCTS doesn’t necessarily mean
further search is pointless. Eg., the player constructing the theorem can avoid building theorems, for which the MCTS already found a proof.
## 4 Preliminary investigation
When training our model on a toy problem (involving reversing a list) we observed that the
performance does improve in time, although it does not achieve a stable high result. Although
we do not use any set of theorems during training, we do require it to measure the performance.
-----
Learning theorem proving through self-play Stanis law Purga l
For estimating value and policy we currently use a variant of a Graph Attention Network [5],
but we plan on experimenting with different architectures, as well as different axiom sets and
hyperparameters.
## References
[1] Kshitij Bansal, Sarah M. Loos, Markus N. Rabe, and Christian Szegedy. Learning to reason in
large theories without imitation. ArXiv, abs/1905.10501, 2019.
[2] Cezary Kaliszyk, Josef Urban, Henryk Michalewski, and Miroslav Ols´ak. Reinforcement learning
of theorem proving. In NeurIPS, 2018.
[3] Michael Rawson and Giles Reger. A neurally-guided, parallel theorem prover. In FroCos, 2019.
[4] David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur
Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy P. Lillicrap,
Karen Simonyan, and Demis Hassabis. Mastering chess and shogi by self-play with a general
reinforcement learning algorithm. ArXiv, abs/1712.01815, 2017.
[5] Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li`o, and Yoshua
Bengio. Graph attention networks. ArXiv, abs/1710.10903, 2017.
-----
| [
"Stanislaw, Purgal"
] | 2020-01-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
Learning to Identify Useful Lemmas from Failure | N/A | null | # Learning to Identify Useful Lemmas from Failure[∗]
Michael Rawson[1], Christoph Wernhard[2], and Zsolt Zombori[3][,][4]
1
TU Wien, Austria [email protected]
2
University of Potsdam, Germany [email protected]
3
Alfréd Rényi Institute of Mathematics, Hungary [email protected]
4
Eötvös Loránd University, Budapest, Hungary
**Introduction** We investigate learning to identify useful lemmas for ATP, where usefulness is
defined in terms of 1) reducing proof search and 2) shortening the length of the overall proof.
How can ATP performance be improved by the generation and selection of useful lemmas?
In particular, we raise the question of what one can learn from a failed proof attempt. We
present a proposal about how to learn from failed proof attempts of a single problem. This is in
sharp contrast with previous works that rely on a large corpus of problems and aim to improve
performance based on success obtained with easier problems. By way of motivation, we argue
that human mathematicians learn from failed attempts as well.
**Restriction to a class of problems with accessible and simple proof structures** Interested in novel techniques, we work with a restricted class of first-order problems, condensed
_detachment (CD) problems [9, 7], due to Carew A. Meredith [5]. Inference steps can be charac-_
terized by detachment (modus ponens) combined with unification. Proof structures are particularly simple and accessible: full binary trees, or terms with a binary function symbol D, which
we call D-terms. Constants in these terms label axioms. As examples of D-terms consider 1, a
constant representing a use of the axiom labeled by 1; D(1, 1), representing a detachment step
applied to axiom 1 as major and minor premise; or D(1, D(1, 1)), representing a proof with two
detachment steps. These proof terms are closely related to proof structures of the connection
method [3, 4].
**Proof search and data extraction** We rely on theorem prover SGCD [8] which performs
proof search by structure enumeration of binary trees (interwoven with formula unification),
until a suitable D-term is found. Enumeration can be axiom-driven, i.e. starting from axiom set
_As, producing D-terms that represent complete proofs of unit lemmas. We can also enumerate_
goal-driven, starting from conjecture C and obtaining partial proof trees of C. In practice
we interleave goal-driven and axiom-driven phases. Using the idea of Hindsight Experience
Replay [1], we can “pretend” that both sorts of failed proof attempts are in fact successful: In
the axiom-driven case, we change the goal conjecture to the one that was actually proven and
in the goal-driven case, we change the axioms to include whatever is needed to complete the
proof. Hence, we end up enumerating complete proof trees of “some” problems. We note that
the idea of Hindsight Experience Replay has already been applied to theorem proving in [2] in
the context of training a policy model to guide saturation-style forward reasoning.
Given a proof tree P _[′]_ of some formula C _[′]_ from axiom set As[′], any connected subgraph S[′]
of P _[′]_ can be considered as the proof of a lemma candidate L. If S[′] is a full tree, it proves a
unit lemma, which is the formula associated with its root. Otherwise, it proves a Horn clause,
whose head is the root formula of S[′] and whose body consists of the open leaves of S[′]. If we
_∗Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-_
ID 457292495, by the North-German Supercomputing Alliance (HLRN), by the ERC grant CoG ARTIST
101002685, by the Hungarian National Excellence Grant 2018-1.2.1-NKP-00008, the Hungarian Artificial Intelligence National Laboratory Program (RRF-2.3.1-21-2022-00004) and the ELTE TKP 2021-NKTA-62 funding
scheme.
-----
Training Data Extraction for Identifying Useful Lemmas M. Rawson, C. Wernhard, Z. Zombori
can measure how useful lemma L is for proving C _[′]_ given axioms As[′], this can serve as a useful
training signal for a guidance model. For the utility measure U, there are easy-to-compute
logical candidates, such as the compression in tree size, tree height, DAG size etc. A more
refined measure is obtained if we reprove C _[′]_ with the lemma L added to the axioms As[′] and
observe how the number of interence steps change. This is slower to compute, but takes into
account the particularities of the prover, hence providing more focused guidance. In practice,
we find that the best performance is obtained by reproving and then normalising the inference
step reduction into [−1, 1], where −1 means that the problem could not be solved within the
original inference limit and 1 is assigned to the lemma that yields the greatest speedup. We
end up with tuples ⟨C _[′], As[′], L, U_ _⟩_ to learn from.
**Iterating proof search and training** During the proof search of conjecture C from axiom
set As, we keep track of all produced proof trees P _[′]_ and collect ⟨C _[′], As[′], L, U_ _⟩_ tuples, forming
a training dataset D. Once proof search is unsuccessful, we fit a neural lemma selector to D.
This neural model M (conjecture, axioms, lemma) predicts the utility of the input lemma for
proving the conjecture from the axioms. Model M is next evaluated on all collected lemmas,
along with the original conjecture and axioms, i.e. we compute pairs
_{⟨L, U_ _⟩| ⟨_, _, L, _⟩∈_ _D, U = M_ (C, As, L)}
Lemmas with the top k utilities are selected, where k is a hyperparameter to be tuned. The
selected lemmas are added to the problem as axioms[1] and we can start proof search again.
Each iteration produces novel proof attempts and novel training signal, hopefully guiding the
prover closer to solving the target problem.
**Current Status** We have performed experiments on training a unit lemma selector from
successful proof attempts, which is explained in [6]. We are currently working on extending
the codebase to accommodate Horn lemmas and the extraction of training signal from failed
attempts.
## References
[1] Andrychowicz, M., Wolski, F., Ray, A., Schneider, J., Fong, R., Welinder, P., McGrew, B., Tobin,
J., Pieter Abbeel, O., Zaremba, W.: Hindsight experience replay. In: Guyon, I., Luxburg, U.V.,
Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (eds.) Advances in Neural
[Information Processing Systems. vol. 30. Curran Associates, Inc. (2017), https://proceedings.](https://proceedings.neurips.cc/paper_files/paper/2017/file/453fadbd8a1a3af50a9df4df899537b5-Paper.pdf)
```
neurips.cc/paper_files/paper/2017/file/453fadbd8a1a3af50a9df4df899537b5-Paper.pdf
```
[2] Aygün, E., Anand, A., Orseau, L., Glorot, X., Mcaleer, S.M., Firoiu, V., Zhang, L.M., Precup,
D., Mourad, S.: Proving theorems using incremental learning and hindsight experience replay.
In: Chaudhuri, K., Jegelka, S., Song, L., Szepesvari, C., Niu, G., Sabato, S. (eds.) Proceedings
of the 39th International Conference on Machine Learning. Proceedings of Machine Learning Re[search, vol. 162, pp. 1198–1210. PMLR (17–23 Jul 2022), https://proceedings.mlr.press/v162/](https://proceedings.mlr.press/v162/aygun22a.html)
```
aygun22a.html
```
[[3] Bibel, W.: Automated Theorem Proving. Vieweg, Braunschweig (1982). https://doi.org/10.](https://doi.org/10.1007/978-3-322-90102-6)
```
1007/978-3-322-90102-6, second edition 1987
```
[4] Bibel, W., Otten, J.: From Schütte’s formal systems to modern automated deduction. In: Kahle,
[R., Rathjen, M. (eds.) The Legacy of Kurt Schütte, chap. 13, pp. 215–249. Springer (2020). https:](https://doi.org/10.1007/978-3-030-49424-7_13)
```
//doi.org/10.1007/978-3-030-49424-7_13
```
1If the prover has some special mechanism for handling lemmas, they need not be treated as axioms.
-----
Training Data Extraction for Identifying Useful Lemmas M. Rawson, C. Wernhard, Z. Zombori
[5] Prior, A.N.: Logicians at play; or Syll, Simp and Hilbert. Australasian Journal of Philosophy 34(3),
[182–192 (1956). https://doi.org/10.1080/00048405685200181](https://doi.org/10.1080/00048405685200181)
[6] Rawson, M., Wernhard, C., Zombori, Z., Bibel, W.: Lemmas: Generation, selection, application.
[CoRR abs/2303.05854 (2023). https://doi.org/10.48550/ARXIV.2303.05854](https://doi.org/10.48550/ARXIV.2303.05854)
[7] Ulrich, D.: A legacy recalled and a tradition continued. J. Autom. Reasoning 27(2), 97–122 (2001).
```
https://doi.org/10.1023/A:1010683508225
```
[8] Wernhard, C.: CD Tools — Condensed detachment and structure generating theorem proving
[(system description). CoRR abs/2207.08453 (2023). https://doi.org/10.48550/ARXIV.2207.](https://doi.org/10.48550/ARXIV.2207.08453)
```
08453
```
[9] Wernhard, C., Bibel, W.: Investigations into proof structures. CoRR abs/2304.12827 (2023).
```
https://doi.org/10.48550/ARXIV.2304.12827
```
-----
| [
"Rawson, Michael"
] | 2023-01-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
Learning to Love Edge Cases in Formative Math Assessment: Using the AMMORE Dataset and Chain-of-Thought Prompting to Improve Grading Accuracy | This paper introduces AMMORE, a new dataset of 53,000 math open-response question-answer pairs from Rori, a learning platform used by students in several African countries and conducts two experiments to evaluate the use of large language models (LLM) for grading particularly challenging student answers. The AMMORE dataset enables various potential analyses and provides an important resource for researching student math acquisition in understudied, real-world, educational contexts. In experiment 1 we use a variety of LLM-driven approaches, including zero-shot, few-shot, and chain-of-thought prompting, to grade the 1% of student answers that a rule-based classifier fails to grade accurately. We find that the best-performing approach -- chain-of-thought prompting -- accurately scored 92% of these edge cases, effectively boosting the overall accuracy of the grading from 98.7% to 99.9%. In experiment 2, we aim to better understand the consequential validity of the improved grading accuracy, by passing grades generated by the best-performing LLM-based approach to a Bayesian Knowledge Tracing (BKT) model, which estimated student mastery of specific lessons. We find that relatively modest improvements in model accuracy at the individual question level can lead to significant changes in the estimation of student mastery. Where the rules-based classifier currently used to grade student, answers misclassified the mastery status of 6.9% of students across their completed lessons, using the LLM chain-of-thought approach this misclassification rate was reduced to 2.6% of students. Taken together, these findings suggest that LLMs could be a valuable tool for grading open-response questions in K-12 mathematics education, potentially enabling encouraging wider adoption of open-ended questions in formative assessment. | AMMORE, a new dataset of 53,000 math open-response question-answer pairs from Rori, is introduced and two experiments to evaluate the use of large language models (LLM) for grading particularly challenging student answers suggest that LLMs could be a valuable tool for grading open-response questions in K-12 mathematics education. | # Learning to Love Edge Cases in Formative Math Assessment:
Using the AMMORE Dataset and Chain-of-Thought Prompting to Improve Grading Accuracy
Owen Henkel, Hannah Horne-Robinson, Maria Dyshel, Nabil Ch, Baptiste Moreau-Pernet, Ralph Abood
**ABSTRACT**
This paper introduces AMMORE, a new dataset of 53,000 math open-response question-answer pairs
from Rori, a learning platform used by students in several African countries and conducts two
experiments to evaluate the use of large language models (LLM) for grading particularly challenging
student answers. The AMMORE dataset enables various potential analyses and provides an important
resource for researching student math acquisition in understudied, real-world, educational contexts. In
experiment 1 we use a variety of LLM-driven approaches, including zero-shot, few-shot, and chain-of
thought prompting, to grade the 1% of student answers that a rule-based classifier fails to grade
accurately. We find that the best-performing approach – chain-of-thought prompting – accurately scored
92% of these edge cases, effectively boosting the overall accuracy of the grading from 98.7% to 99.9%.
In experiment 2, we aim to better understand the consequential validity of the improved grading
accuracy, by passing grades generated by the best-performing LLM-based approach to a Bayesian
Knowledge Tracing (BKT) model, which estimated student mastery of specific lessons. We find that
relatively modest improvements in model accuracy at the individual question level can lead to
significant changes in the estimation of student mastery. Where the rules-based classifier currently used
to grade student, answers misclassified the mastery status of 6.9% of students across their completed
lessons, using the LLM chain-of-thought approach this misclassification rate was reduced to 2.6% of
students. Taken together, these findings suggest that LLMs could be a valuable tool for grading open
response questions in K-12 mathematics education, potentially enabling encouraging wider adoption of
open-ended questions in formative assessment.
**KEYWORDS: LLMs, Formative Assessment, Math Education**
-----
F. Author et al.
## 1. Introduction
Formative assessment and feedback are crucial components of the learning process, enabling students
and educators to adapt their approach within or in-between lessons to maximize learning [34]. It has
been shown to lead to significant improvements in learning outcomes [18]. Closed-response questions,
such as multiple-choice and true/false, are commonly used in formative assessment, and have the benefit
of being efficient to grade and can provide instant feedback [31]. However, they have several
drawbacks, such as the possibility of students relying on test-taking strategies, a potential lack of face
validity, and the complexity of generating multiple answer options [16, 24]. In contrast, open-ended and
short answer questions require students to answer a question using their own words often with a few
sentences [31]. Many researchers argue that open-response questions decrease the influence of test
taking strategies, have greater face validity, and may be better suited to evaluate certain subprocesses of
the skill being assessed [3, 4, 8, 16, 34]. However, the process of grading open-ended questions can be
resource-intensive and expensive, which limits their widespread use [23]. While educators may prefer
the type of information they can glean from student responses to open-ended questions, the laborious
grading process can overburden educators and compromise the quality of feedback, which may limit
students' comprehension and critical engagement with the subject matter [25]. Therefore, automatic
short answer grading (ASAG) offers a promising solution to address this, but it has historically been
challenging to perform easily and effectively enough for widespread use in educational settings [2, 7,
13]. Most state-of-the-art approaches have relied primarily on handcrafted approaches, or more recently
fine-tuning models for specific tasks [5, 14], which required extensive technical expertise and large
datasets [26, 32].
The field of ASAG has seen significant advancements with the emergence of LLMs, presenting
new opportunities for enhancing educational assessment and personalized learning. There is growing
evidence that these models can complete evaluation tasks on novel datasets with only minimal prompt
engineering [15, 18, 19]. If LLMs can accurately mark open-ended questions, the time savings for
educators would be substantial, and could facilitate more frequent and effective formative assessment.
However, little is known about how LLMs perform across a variety of educational settings and whether
LLMs can be relied upon to generalize to ever more complex use cases. Additionally, there are a limited
-----
number of publicly available datasets from educational settings upon which LLMs can be tested. This
paper makes two contributions in response to these gaps.
First, we introduce a novel dataset, the African Middle-School Math Open REsponse
(AMMORE) dataset, which consists of 53,000 answers to middle school math questions from students
in West Africa. The data for AMMORE was collected from Rori, an AI-powered WhatsApp math-tutor
that allows students in West Africa to independently practice math concepts free of charge. The dataset's
rich structure, which includes question level data, user IDs, learning standard designators and students
self-reported age, enables various potential analyses, such as investigating students' skill mastery across
micro-lessons, analyzing the relative difficulty of specific questions or micro-lessons across students, or
exploring how different grading models' judgments compare to those of humans. This dataset provides a
unique opportunity to explore the challenges of grading diverse student responses in a real-world
educational context, particularly in regions where access to quality education is often limited and where
innovative solutions like AI tutors are being leveraged to bridge educational gaps.
Second, we conduct an extensive empirical evaluation of LLM-based approaches to grade a
challenging subset of AMMORE, using a variety of automated approaches. We explore various
methods, including string matching, text processing, and different LLM prompting techniques, to
evaluate their accuracy and consistency in assessing student responses. We find that LLM-based
approaches, particularly chain-of-thought prompting (CoT), outperform traditional methods in grading
accuracy, demonstrating their ability to handle the complexity and variability of student responses in
open-ended math questions. The superior performance of LLM-based methods is especially evident in
cases where students provide correct answers in unexpected formats or use equivalent mathematical
expressions. We also explore whether improvement in question grading leads to more accurate estimates
of student concept mastery. We find that relatively modest improvements in model accuracy at the
individual question level can lead to significant changes in the estimation of student mastery and
perceived lesson difficulty. These results have important implications for the design of intelligent
tutoring systems (ITS), potentially enabling more accurate adaptive learning pathways and personalized
feedback. Our findings also suggest that the use of LLM-based grading could encourage wider adoption
of open-ended questions in formative assessment, leveraging their pedagogical benefits without
increasing the grading burden on educators.
-----
F. Author et al.
## 2. Prior Work
**2.1 Automatic Short Answer Grading**
Automatic short answer grading has been an active area of research for over a decade. Burrows et. al. [7]
provide a comprehensive overview of approaches up until 2015; while Haller et. al., [17] discuss how
ASAG has more recently moved from models based on handcrafted features to approaches including
word-embedding and representation learning. However, regardless of the paradigm, most models used
for ASAG are explicitly trained or fine-tuned for specific grading tasks [21]. There has been
considerable progress with these types of tasks, for instance, Sultan et al. [36] developed a model that
represents each sentence as the sum of the individual word embeddings. At the time, this model
achieved state-of-the-art performance on the SemEval benchmarking dataset. As these types of models
depend on prompt-specific training data, they often need to be re-trained for each individual short
answer prompt, which is costly, time-consuming, and in most cases simply infeasible.
The recent rise of ever-larger pre-trained LLMs, trained on vast text corpi, has enabled a new
approach: fine-tuning, often referred to as transfer learning. This paradigm of machine learning typically
consists of two steps: pre-training and fine-tuning. In the former, a neural network model learns weights
through unsupervised learning on a large general dataset. In the latter, the model trains on a smaller,
task-specific dataset [20] to update its weights to better align with downstream tasks. As a result of their
significant pre-training, LLMs can achieve far better sample efficiency on the target task(s) [6]. For
example, Sung et al. [37] fine-tuned BERT, a widely used pre-trained transformer-based language model
to grade short-answer responses and found it was able to classify almost at the human-level agreement
and achieve superior results to the previous state-of-the-art on the SemEval dataset. More recently,
Fernandez et al. [14] used a BERT-based model to evaluate open-response reading comprehension
questions and achieved an agreement score with expert raters, as measured by Cohen's Kappa, of 0.84,
where human-to-human scores were 0.88.
While pre-trained language models that have been fine-tuned with small task-specific datasets
have improved ASAG, their practical application to formative assessment in educational settings
remains limited. This is largely due to a few central constraints of this approach: the technical
complexity of the fine-tuning process, the continued (albeit small) need for task-specific data, and these
models' difficulty in generalizing. First, fine-tuning an LLM requires substantial computational power,
-----
which is not widely available in educational contexts. Second, data from educational settings, even in
smaller amounts, is remarkably hard to obtain given sensitivities over data sharing and privacy. Finally,
LLMs' performance across different tasks is variable and often does not generalize well to different
settings, leading to reliability concerns for the overall approach.
**2.2 Potential of Generative LLMs for ASAG**
The current generation of LLMs, including ChatGPT, GPT-4, Claude, Llama, Mistral, Gemini, were
trained similarly to previous generations but with significantly larger datasets and a higher number of
parameters, in some cases by more than an order of magnitude [9, 35]. Additionally, these models
underwent various "instruction fine-tuning" steps to enhance their usability and ability to generalize to
new tasks, often with minimal exposure to examples [30]. This also improved their ability to interpret
human-written natural language instructions (i.e., prompting), allowing non-technical users to make
requests and adapt a model to new tasks by modifying their prompts, rather than requiring further
training or fine-tuning [35]. Therefore, it is unsurprising that evidence is growing that LLMs can be used
for certain types of grading tasks [21]. Current LLMs can perform various linguistic tasks that
previously required the use of task-specific, fine-tuned LLMs [20, 38], and with minimal prompt
engineering can complete evaluation tasks on novel datasets [15, 22]. Instead of using a task-specific
dataset to fine-tune a pre-trained LLM, a user can now simply write an explanation and a few examples
of how they wish the model to grade student answers and achieve reasonable performance.
While there is a growing amount of research on grading essays using generative LLMs, relatively
little is known about their potential for ASAG [21, 27, 33]. Morjaria et al. [28] found that ChatGPT
graded 6 short answer assessments from an undergraduate medical program similarly to a single expert
rater. Cohn et al. [11] found that GPT-4 successfully graded student answers to high school science
questions. However, Kortemeyer [21] found that LLMs fell short in certain aspects of grading
introductory physics assignments. A review by Schneider et al. [33] concluded that "while 'out-of-the
box' LLMs provide a valuable tool to offer a complementary perspective, their readiness for independent
automated grading remains a work in progress."
In all aforementioned cases, the studies were conducted with a small sample of student
responses, with a primary focus on high school and university students. However, there has been little
exploration of generative LLMs’ ability to grade short-answer responses from elementary or middle
-----
F. Author et al.
school students which can be attributed, in part, to the limited number of publicly available short-answer
datasets. As a result, we propose our dataset and empirical analysis to help fill this gap in the literature.
**2.3 Overview of Existing Short Answer Datasets**
While there are several math question datasets in the literature (see Table 1 below for a more detailed
overview), they present many limitations that undermine their relevance in real-world grading contexts,
especially for elementary and middle school students. First, several prominent datasets, e.g., MATH,
contain questions and correct answers but do not contain information about how students answered,
others, e.g., EEDI, MathE, contain information about students’ multiple-choice responses. Second, of
the below datasets, only ASSISTments contains information allowing researchers to track progression
through a curriculum. Third, few of these datasets contain information from lower resource and
underrepresented populations. These limitations are the main motivation behind our proposed dataset
AMMORE, which we discuss in more detail in Section 3.
**Table 1**
_Summary of Publicly Available Math Datasets_
**Student**
**Answers / Age** **Country** **Response Type**
**Number of**
**Responses**
**Dataset** **Topic**
Competition
**MATH** No / N/A N/A Open Response 12,500
Mathematics
Primary School
**GSM8K** No / N/A N/A Open Response 8,000 +
Mathematics
College Mathematics
**MathE**
Yes / No Multiple Multiple Choice 9,546
Primary & Highschool United
**COMAT** Yes / No
Mathematics States
Conversations &
188
Open Response
Primary & Highschool United
**EEDI** Yes / No Multiple Choice 17 million +
Mathematics Kingdom
Grade 4, Grade 8 United
**NAEP** Yes / Yes
Mathematics States
Constructed
250,000 +
Response
Multiple Choice &
1,000,000 +
Open Response
Primary & Highschool
Mathematics
United
Yes/ No
States
**ASSISTMents**
-----
## 3. AMMORE Dataset
In this section, we present the African Middle-School Math Open REsponse (AMMORE) Dataset,
which contains 53,298 student answers to open response practice questions, assembled from a subset
math practice sessions on Rori of 2,508 at-home users that took place between January 1st and April
30th, 2024.
**3.1 Background**
Rising Academies, an educational network based in Ghana, has created Rori, an AI-powered math tutor
available on WhatsApp. Rori can be used at home or in schools free of charge. The Rori curriculum has
one or more micro-lessons for each skill in the math Global Proficiency Framework (GPF), with over
500 micro-lessons to date. Each micro-lesson includes a brief student-friendly explanation of the skill
and ten scaffolded practice questions. Many of these questions require open-ended responses, which was
a decision taken for pedagogical reasons. Students are expected to write their answers into WhatsApp
using the mobile keyboard. If students answer a question incorrectly, they are first shown a hint to help
them solve the question and if their second attempt is unsuccessful, they are shown a worked solution.
When students finish a micro-lesson, they are encouraged to continue with the next, which incrementally
increases in difficulty. Rori will suggest students move either backwards or forwards in the curriculum if
[they find a lesson too difficult or easy. For more context you can watch this 2-minute video.](https://youtu.be/xXg6XRajbbk)
Rori’s curriculum is built upon the comprehensive and evidence-based GPF. The framework was
developed to create uniform global standards for reading and mathematics across the world and was
created by USAID by using inputs from experts representing organizations such as the World Bank, the
Bill and Melinda Gates Foundation, the UK's Foreign, Commonwealth, and Development Office, the
UNESCO Institute for Statistics, and many more. The GPF represents a global standard for the
competencies required for learners at different stages. It covers grades 1 to 9, aligns with national
standards globally, and the standards are linked across grade levels. The math framework has five
domains: “Numbers and operations”, “Measurement”, “Geometry”, “Statistics and probability”, and
“Algebra”. Each domain is split into constructs, then subconstructs, and then in specific skills that a
student in each grade should be able to demonstrate. For example, the domain “Numbers and
-----
F. Author et al.
operations” has a topic “Integers and Exponents” that has skills such as “Add and subtract” and
[“Multiply and divide”. For a more detailed description of the structure of the curriculum see here.](https://github.com/owenhenkel/ammore_dataset)
**3.2 Structure**
Each response in our dataset was scored by a pre-existing, rules-based classification model, native to
Rori, which classifies answer attempts as “correct”, “wrong” or “other”. The latter was typically
returned when a student entered something besides an answer attempt, such as a voice note or a sticker.
These classifications were then manually reviewed by humans, and changed where necessary, meaning
the dataset also has a ground truth score for each student answer. The dataset is comprised of students’
answers to math questions from Rori lessons from grade levels 6 to 9 in the domains “Algebra” and
“Number and operations”. Each student answer is paired with the corresponding question, the expected
response, a ground-truth correct/incorrect score, the specific learning standard evaluated by the question,
the time the student answered, and a UID number that can be used to link student responses across the
dataset.
**Summary Information** **Example attributes of single entry**
**Total Answers** 53,031 **lesson** G9.N5.2.1.1
**Correct Answers** 34,668 **question_number** _2_
**Incorrect Answers** 15,278 **question_text** _3^2 + 3^1 = ___
**Other Answers** 3,085 **expected_answer** 12
=6+6
**Unique Students** 2,508 **student_response**
=12
**Grade Levels Covered** 6-9 **model_grade** wrong
Algebra,
**Domains Covered** **human_grade** correct
Numbers and Operations
**Number of Lessons** 151 **time** 1/9/24 7:57
**Number of Skills** 35 **user_id** 17
**Figure 1 Structure of dataset**
-----
The dataset also includes matched but anonymized demographic data on the 2,508 users, such as when
they first started using Rori, their country code, self-reported age, and number of messages they sent and
active days on Rori. At-home users tend to come from Nigeria, Ghana and South Africa and are mostly
between the ages of 10 and 30 and could be using their own or their family members’ phones. You can
[access the AMMORE dataset and data dictionary here.](https://github.com/owenhenkel/ammore_dataset)
**3.3 Potential Uses of the AMMORE Rising Dataset**
The dataset's structure enables various potential analyses. For example (a) investigating students' skill
mastery across micro-lessons, (b) analyzing the relative difficulty of specific questions or micro-lessons
across students, or (c) exploring how the classification model's judgments compare to those of human
raters.
Expanding on the first example, while there are many ways to evaluate student mastery at the
micro-lesson level, for simplicity, we define mastery as an 80% correct answer rate for questions from a
micro-lesson. As discussed above, a micro-lesson is a set of 10 questions of the same difficulty level
focusing on a specific learning standard. We consider the responses labelled “correct” or “wrong” and
discard those labelled “other” to compute the percentage of micro-lessons that students “mastered”.
Using this threshold, we can determine that students “mastered” 48% of micro-lessons. To further this
analysis, one could combine or “roll up” micro-lesson mastery into skill-level mastery. The dataset
includes 151 different micro-lessons covering 35 different skills. For instance, if we posit that a student
must master at least 75% of the micro-lessons contained within a skill to have mastered that skill, we
can determine how many of the 2,508 students in the dataset have mastered each skill. With this
example, 1,133 of the 2,508 students in the data set (45%) would have mastered a skill.
Also, because the same student practices skills at different grade levels, it is possible to compare
student age to the grade-level of the topics they are practicing. Using the same mastery thresholds as
above, we can determine that amongst the 11% of students who master at least two skills (273 students),
28% of them (76 students) master skills at multiple grade levels. One can also estimate whether students
are performing at “grade-level”. Our dataset’s lessons span grades 6 to 9, with 38% of all answers at
level 9, 29% at level 6, then 20% and 13% at levels 7 and 8 respectively.
Yet another approach could be to use this dataset to test different analytics approaches, such as
Bayesian Knowledge Tracing (BKT), which we explore in experiment 2, or other mastery prediction
models. The rich data available, including question-level responses and progression through micro
-----
F. Author et al.
lessons over time, makes this dataset particularly suitable for such analyses. These are just a few
potential uses for this novel dataset. The combination of detailed student responses, demographic
information, and curriculum structure provides a unique opportunity for researchers to explore various
aspects of learning analytics, from individual student progress to broader trends in mathematical skill
development across grade levels.
## 4. Experiment 1: LLM-based Approaches to Math ASAG
Using a carefully curated subset of challenging student responses from the AMMORE Dataset, we
investigate six different automatic grading strategies, ranging from simple string matching to sophisticated
LLM-based methods, evaluating their respective performance relative to human scores. We also consider
how consistent the models are between repeated runs, if the prompting strategy affects the intra-rater
reliability between the model's responses, and how prompting strategy impacts the model response time.
Our analysis aims to shed light on the potential of these approaches to improve grading accuracy,
particularly when dealing with diverse answer types and formatting variations.
**4.1 Challenges of Grading Open-Response Math Questions**
Accurately grading student answers becomes a complex challenge when moving beyond direct string
matches because practice questions on Rori have a diverse set of expected answer types, including
fractions, floating-point numbers, and expressions with exponents. In Table 2 you can see a subset of
student responses for a given question.
**Table 2**
_Example Student Answers and Labels_
**question_id: G7.N2.2.3.6**
**question_text: Fill in the missing number: 1/5 × 2/3 = _ /15**
**expected_answer: 2**
**student_id** **student_answer** **model_grade** **human_grade**
_514_ _2_ _correct_ _correct_
_1073_ _Hold am solving it_ _other_ _wrong_
_876_ _is 2_ _correct_ _correct_
-----
_1203_ _30_ _wrong_ _wrong_
_549_ _30/15_ _wrong_ _wrong_
_324_ _2/15_ _wrong_ _correct_
A particular challenge is identifying responses that are correct but have some variation in their
formatting or expression that differs from the expected answer. For example, requiring too strict of an
answer match would mean only student 514’s answer would be accepted, too permissive and only
students 1073 and 1203 would be marked incorrect. Accordingly, Rori was already using a relatively
sophisticated bespoke classification model to interpret and score student responses, which had already
undergone several rounds of improvement. It was this model that generated the initial classifications of
student responses for the dataset. However, after human review we found that approximately 1% (1186)
of classifications were false negatives, an example of which is student 324 in Table 2.
From a pedagogical perspective, it is important to avoid misclassifying correct student answers
(i.e. a false negative) as much as possible - particularly in independent learning environments - as telling
a student they made a mistake when they were in fact correct can lead to confusion and frustration.
However, false negatives are particularly challenging to identify. For example, looking at the response
given by student 324, an expert human reviewer can understand that the student did the core methodical
operation correctly and gave the full answer rather than only giving the missing number. While there
might be a pedagogical reason to encourage the student to use the correct formatting, treating their
answer as wrong would be suboptimal. This contrasts with student 549, who also used the wrong
formatting, but performed the operation incorrectly, most likely adding the numerators while
multiplying the denominators. Because they require a more sophisticated degree of interpretation, rule
based approaches are unlikely to be successful with these types of student answers. Therefore, we
explore the incremental benefits of increasingly sophisticated approaches, combining rule-based systems
with LLMs to evaluate the long tail of student answers.
**4.2 Experimental Design**
From the larger AMMORE dataset, we create a smaller dataset, which we refer to as AMMORE-hard.
This dataset is comprised of difficult-to-grade student answers, which we used to evaluate the
performance of different automatic grading strategies. The resulting dataset comprises of 4,463 answers,
including 1528 unique non-trivially correct answers and 2935 unique, non-trivially wrong answers.
-----
F. Author et al.
AMMORE-hard was created using the following steps: (1) remove answers that were labeled as
“other” by a human labeler; (2) remove duplicate occurrences where question, expected answer, and
student answer were identical, leaving only one occurrence of each unique combination; (3) remove
trivially correct answers, where the student’s answer was identical to the expected answer; (4) remove
trivially wrong answers, where the expected answer was one character long and the student’s answer
was one character long (mostly multiple-choice questions with wrong answer); (5) remove trivially
wrong answers, where the student’s answer was an integer different from the integer expected; and
finally, (6) remove answers where either the question was ambiguous or the expected answer was
wrong. Using AMMORE-hard, six approaches were used to classify a student answer as correct or
wrong.
Simple rule-based evaluation of matching the expected answer with the
**“Naive” string matching**
student’s response.
**Text processing** Evaluation with additional text substitutions and symbolic evaluations.
**LLM Zero-shot prompting** Evaluation using an LLM prompt without specific examples.
**LLM Few-shot prompting** Evaluation using an LLM prompt with a small set of examples.
Evaluation using an LLM prompt instructing it to show its reasoning
**LLM Chain-of-thought prompting**
process.
Evaluation proceeded through the evaluations until a correct answer
was found or all three evaluations had run: simple rule-based
evaluation, text substitutions and symbolic evaluations, and an LLM
prompt without examples.
**Naive string matching, text**
**processing, and zero-shot prompting**
**Figure 2 Different approaches to grading student answers**
To make a prediction, each approach was given the same information from the dataset: the question text,
the expected answer to the question, and the student’s response. The evaluation approach would predict
if the answer is “correct” or “wrong”. The resulting prediction was recorded. At the time of writing, the
model with the strongest performance score on math benchmarks is OpenAI’s GPT-4o. Hence, each
experiment of a prompt approach used GPT-4o as the LLM. Its temperature setting was set to 0 to
reduce the variability of model outputs. No student demographic information was fed to the LLM, nor
was it shown the human labels of a student answer.
-----
**4.2.1 Prompting Strategy**
We employ a relatively simple prompting strategy, as the task is straightforward. The base part of the
prompt was similar across all strategies. The zero-shot prompt included a description of the core task
and slots for the dataset values. The few-shot prompt added three examples of correct answers. These
examples represented common student response patterns of equivalent answers: 1) where a student
wrote the answer and 2) where a student wrote out their work to arrive at the answer. Instead of
providing examples, the chain-of-thought prompt instructed the model to think step-by-step and present
a rationale for the classification chosen. The chain-of-thought evaluation used the DSPy framework,
which dynamically created a chain-of-thought prompt. Figure 3 shows the prompts for each strategy.
**Zero-shot Prompt** **Few-shot Prompt** **Chain-of-thought Prompt**
You are a math assistant. You are
evaluating whether a student's
submission to a math question is right
or wrong. The student may have
submitted a correct answer in a variety
of acceptable, equivalent ways. You
must tell whether their submission
correctly solves the problem or
whether their submission contains a
valid answer that is equivalent to the
expected answer. If the student's
submission is correct or equivalent,
write "yes". If the submission is
incorrect and not equivalent, write
"no". You should only write "yes" or
"no".
## Question
{question}
## Expected Answer
{expected_answer}
## Student Submission
{student_message}
You are a math assistant. You are evaluating whether a
student's submission to a math question is right or wrong.
The student may have submitted a correct answer in a
variety of acceptable, equivalent ways. You must tell
whether their submission correctly solves the problem or
whether their submission contains a valid answer that is
equivalent to the expected answer. If the student's
submission is correct or equivalent, write "yes". If the
submission is incorrect and not equivalent, write "no".
You should only write "yes" or "no".
## Examples
### Example 1 - The student gave their work and showed
the correct answer.
- Question: Solve for z in the proportion: 9/3 = 27/z.
- Expected Answer: 9
- Student Submission: 9/3=27/a.9×z=3×27.9z/9=91/9.z=9
- is_correct: yes
### Example 2 - The student wrote the correct answer
option and its value.
- Question: 9 / ___ = 0.25 A) 18 B) 36 C) 81 D) 72
- Expected Answer: B
- Student Submission: B.36
- is_correct: yes
## Question
{question}
## Expected Answer
{expected_answer}
## Student Submission
{student_message}
You are a math assistant. You are evaluating
whether a student's submission to a math
question is right or wrong. The student may
have submitted a correct answer in a variety
of acceptable, equivalent ways. You must tell
whether their submission correctly solves the
problem or whether their submission
contains a valid answer that is equivalent to
the expected answer.
Follow the following format.
Question: the math question
Expected Answer: the student’s response to
the question
Reasoning: Let’s think step by step in order to
produce the correct answer
We...
Answer: correct_answer if the student
correctly solves the problem or whether their
submission contains a valid answer that is
equivalent to the expected answer,
wrong_answer otherwise
Question: {question}
Expected Answer: {expected_answer}
Student Answer: {student_answer}
Reasoning: Let’s think step by step in order to
solve the equation {question}
**Figure 3 System Prompts Used in Experiment**
To establish a baseline and evaluate the individual prompt strategies, we created a script that called the
relevant functions from Rori’s answer evaluation API. The script pulled the question, expected answer,
-----
F. Author et al.
and student answer from the dataset. For the baseline evaluation, the script only ran these values through
various string processing strategies. For each prompt strategy evaluation, the script inserted the values
into appropriate parts of the prompt and passed the complete prompt to the OpenAI API. The script
recorded all evaluation run responses (i.e. the predicted class). Access to prompts and script can be
[found here.](https://github.com/owenhenkel/ammore_dataset)
**4.3 Results**
Table 3 shows the results of the six approaches. As mentioned earlier, each answer evaluation would
label a student's answer as “correct” or “wrong”. These predictions were compared against the label
assigned by a human rater. In Table 3, a result closer to one indicates that the human label and the
prediction were similar (i.e., both labeled a student answer as “wrong_answer”). A lower score would
indicate that the human label and the predicted label differed (i.e., the human label marked
“correct_answer” and the predicted label “wrong_answer”).
We report a set of widely used metrics in classification problems which measure model
performance after accounting for imbalanced classes in the dataset: precision, recall, and F1 score
(Banerjee et al., 2008). We also report the Kappa scores, which are chance-adjusted metrics of
agreement, with values ranging from -1 to 1. A value of 1 indicates perfect agreement, 0 suggests that
the agreement is only what would be expected by chance, and a value less than 0 indicates agreement
worse than random chance. While there are several different measures of chance-adjusted agreement,
because we are evaluating 2-class ratings (wrong/correct), we use Linear Weighted Kappa (LWK).
**Table 3**
_Performance of Answer Evaluation Approaches on 2-Class Task_
**Prediction** **Accuracy** **Precision** **Recall** **F1** **LWK**
_Wrong_ 0.79 0.76 0.99 0.86
**String Matching** 0.44
_Correct_ 0.79 0.97 0.39 0.56
_Wrong_ 0.96 0.96 0.97 0.97
**Text processing** 0.90
_Correct_ 0.96 0.94 0.93 0.94
_Wrong_ 0.94 0.93 0.98 0.95
**LLM Zero-shot** 0.86
_Correct_ 0.94 0.96 0.85 0.90
-----
_Wrong_ 0.93 0.91 0.99 0.95
_Correct_ 0.93 0.97 0.81 0.88
_Wrong_ 0.97 0.97 0.98 0.98
_Correct_ 0.97 0.96 0.94 0.95
_Wrong_ 0.96 0.97 0.97 0.97
_Correct_ 0.96 0.94 0.95 0.95
**LLM Few-shot**
**LLM**
**Chain-of-thought**
**Text process +**
**LLM Zero-shot**
0.83
0.93
0.92
All evaluation techniques generally performed better when attempting to predict a wrong answer rather
than the correct answer. Lower recall and F1 scores in the zero-shot and few-shot results may indicate
that these techniques have trouble handling the variety of unique correct answer formats in the dataset.
This led to more false negatives as a result, which as mentioned is not ideal for educational scenarios.
Surprisingly, enhancing the prompt with examples did not necessarily lead to improved performance.
Evaluating a student message using a combination of text processing and LLM zero-shot prompting had
similar performance to the chain-of-thought prompting.
The performance of chain-of-thought prompting points to improved capability of latest LLMs to
deal with mathematical tasks and solve math problems. Many of the model’s verdicts can therefore be
attributed to differences in pedagogical standards, such as the evaluation presented in Figure 4.
**question_id:** G6.N1.2.2.1.7
**question_text :** 3^2 + 3^1 = __
**expected_answer:** 12
**student_response:** =6+6
=12
**human_category:** correct_answer
**llm_cot_category:** wrong_answer
-----
F. Author et al.
**llm_cot_rationale:** We need to evaluate the student's response to see if it correctly solves the problem or contains
a valid answer that is equivalent to the expected answer.
First, let's solve the problem step by step:
- Calculate \(3^2\): \(3^2 = 9\)
- Calculate \(3^1\): \(3^1 = 3\)
- Add the results: \(9 + 3 = 12\)
The expected answer is 12.
Now, let's examine the student's answer:
- The student wrote "=6+6 =12"
The student's answer is not a correct step-by-step solution to the problem. The student
incorrectly broke down the calculation into \(6 + 6\), which is not how \(3^2 + 3^1\) should be
computed.
**Figure 4 Output of different LLM (GPT-4o) strategies on a sample question-answer pair**
While a more lenient educator might credit the student for arriving at the correct answer, the model
chose to rate the response as wrong due to incorrect solution steps. Similarly, the model chose to mark
wrong responses that contained Unicode analogs of the expected answer (e.g. À instead of A), badly
formatted numbers (e.g. 9.34.5 instead of 9.345), and similar small variations that human labelers tended
to classify as “correct”, taking into account the student’s limitations when using a mobile keyboard to
type an answer. Looking further into chain-of-thought, the responses reveals that the model will
sometimes make a mistake in its mathematical reasoning. This can lead to the model ignoring the
expected answer, resulting in an inaccurate evaluation. Figure 5 provides an example of this behavior.
**question_id :** G6.N1.2.2.1.7
**question_text :** What is the value of 7 in 4,170,348?
A. 7,000,000
B. 7,000
C. 700,000
D. 70,000
**expected_answer:** D
**student_response:** d
**human_grade:** correct_answer
**llm_cot_grade:** wrong_answer
-----
**llm_cot_rationale:** We need to determine the place value of the digit 7 in the number 4,170,348.
1. The digit 7 is in the hundred thousand place.
2. The value of a digit in the hundred thousand place is the digit multiplied by 100,000.
3. Therefore, the value of 7 in 4,170,348 is 7 * 100,000 = 700,000.
The correct answer is C (700,000). The student's answer is d, which corresponds to 70,000.
**Figure 5 Example of LLM with CoT prompt containing faulty mathematical reasoning**
**4.3.1 Performance vs Latency**
Table 4 shows the average and longest processing times each evaluation took to make a prediction.
While chain-of-thought prompting resulted in small but stable improvements over the string processing
and symbolic evaluations, it also significantly increased response latency. On average, chain-of-thought
responses took 2.79 seconds, compared to 0.73 seconds for few-shot LLM calls. The few-shot
evaluation took slightly longer than the zero-shot approach. Text processing evaluations took
considerably less time than all prompt-based approaches, which is expected given that this approach did
not require connection to the model over internet or the execution of a large-scale machine learning
model.
**Table 4**
_Latency of Four Answer Evaluation Approaches on 2-Class Task in Seconds_
**Average Processing Time** **Longest Processing Time**
**Text Processing** _0.006_ _0.269_
**LLM Zero-shot** _0.68_ _5.687_
**LLM Few-shot** _0.73_ _5.937_
**LLM Chain-of-thought** _2.79_ _16.281_
These results indicate that LLM processing time can be affected by the amount of input tokens the
model needs to consume in the case of a longer prompt (such as in a few-shot prompts), and can be
increased significantly when the model needs to generate a significant amount of output tokens (such as
in the case of chain-of-thought prompting). Additionally, prompt-based approaches could experience
more fluctuation in processing time. String processing and symbolic evaluation can reasonably good
performance with less latency and more consistent processing time.
-----
F. Author et al.
**4.3.2 Model Reliability**
While the deterministic approaches like text processing provide consistent results, generative LLMs
generate their output using probabilistic methods, and therefore can return different outputs given the
same inputs. This variation may occur even when the temperature is set to 0. In some respects, this is
similar to human raters, who occasionally will award different ratings to the same student response,
when asked to re-rate it after a period of time. Measures of intra-rater reliability are intended to evaluate
the extent to which a single rater agrees with their own judgment over time.
To investigate the consistency of prompt-based methods, zero-shot and chain-of-thought
approaches were rerun 10 times on a smaller dataset of 100 examples. As shown earlier, these two
approaches scored the highest of the prompt-based approaches. For each run, the model labels were
compared against the predicted labels to get a Fleiss’s Kappa score to measure inter-rater reliability for
the run. Table 5 shows the results of these runs. All runs were then compared against each other to arrive
at a Fleiss Kappa to represent inter-run reliability.
**Table 5**
_Agreement Between Model Runs and Human Labeling Using Fleiss’s Kappa_
**Run**
**Run 1 Run 2 Run 3 Run 4 Run 5 Run 6 Run 7 Run 8 Run 9**
**10**
**Fleiss’s**
**Kappa**
**LLM**
_0.66_ _0.66_ 0.68 0.70 0.66 0.62 0.66 0.70 0.66 0.66 0.90
**Zero-shot**
**LLM**
_0.86_ _0.72_ 0.74 0.74 0.74 0.72 0.74 0.66 0.70 0.72 0.88
**Chain-of-thought**
Both chain-of-thought and zero-shot approaches had relatively high inter-run reliability as measured by
Fleiss Kappa. However, the results indicate that chain-of-thought grading, while showing higher answer
validity (represented by higher agreement with human labeler), has lower reliability between individual
run outcomes and increased possibility for “outlier” runs, occasionally scoring worse than few-shot
prompting.
This suggests that chain-of-thought prompting may experience more variation in how it scores
responses, which may stem from its reasoning differing between runs. It also indicates that the LLM’s
“pedagogical standard” may be less consistent. This could lead to accepting answers with typographical
errors or other discrepancies outlined earlier, while rejecting them in other instances. While a student
-----
may not answer the same question multiple times, this variation could cause student confusion when the
LLM does not consistently handle a particular answer pattern (such as substituting Unicode characters).
## 5. Experiment 2: Impact of Improved Grading on Student Ability Estimates
While improving model performance in grading short answer questions is an important area of research,
we also seek to better understand the impact of such models on the analysis of student learning. In our
second experiment, we investigate whether improved accuracy in model grading corresponded to
changes in our estimates of student ability. In the context of a learning environment, even a small
number of misgraded answers can lead to vastly different judgments of student ability when aggregated
across questions. Tracking a student’s progress and understanding of the subject is an essential part of
ITS [1, 10] . Accurately estimating a student’s current knowledge state enables these systems to deliver
a personalized learning experience. For example, student modeling can be used by ITS for making key
decisions such as which problem a student should attempt, how much practice is needed to master a skill
before moving to a more advanced topic, and when to provide immediate feedback to struggling
students.
Bayesian Knowledge Tracing [12] is one of the most widely used algorithms to model students’
knowledge in ITS [1]. At any given moment BKT assumes that when a student attempts to demonstrate
a skill, they either know the skill or not. Every time a student attempts to demonstrate the skill, the
probability of them knowing the skill is updated based on their performance up to that point and whether
they were able to demonstrate the skill correctly or not.
Standard BKT uses four parameters to model student knowledge. Two parameters are related to
learners’ knowledge. When first attempting to demonstrate a skill, a student has the initial probability
P(L0) of knowing the skill. This probability is updated each time the student attempts to demonstrate the
skill (i.e., after t attempts, the probability of knowing the skill is P(Lt)). At each practice opportunity, a
student has a probability P(T) of learning the skill. The other two BKT parameters are related to
learners’ performance. The probability of a student knowing the skill and yet making a mistake when
attempting to demonstrate the skill is P(S). P(G) represents the probability of a student correctly
guessing the answer even when not knowing the skill.
-----
F. Author et al.
**5.1 Methodology**
To quantify the effect of different automated grading algorithms on predicting individual student
mastery, we apply the algorithms described in the previous section to generate answer correctness labels
for the entire dataset. We exclude questions labeled by human annotators as “other”, as there are no
straightforward ways to incorporate student non-attempts into the BKT evaluation.
We calculate BKT scores for each student on every lesson they attempted, using only their first
attempts to respond to each question. To calculate these scores, we use the following default parameters
for every lesson, as suggested by Nguyen et al., [29] : P(L0)=0.4, P(T)=0.05, P(S)=0.299, and
P(G)=0.299. To determine if a student had mastered a lesson, we use the last BKT score calculated for
that student in each lesson. While mastery thresholds for BKT scores vary between different sources, we
choose a threshold of 0.9 to signify that a student had mastered the lesson.
Next, to investigate the effect of grading mechanisms on evaluating individual student mastery,
we calculate the number of lessons each student mastered according to different grading algorithms. We
then compare these numbers between the worst-performing algorithm (naive string matching) and the
best-performing algorithm (chain-of-thought), using human labels of the students' answers as the gold
standard. Finally, to estimate the effect of grading mechanisms on evaluating micro-lesson difficulty, we
calculate the median mastery score for each micro-lesson and compared this measure across different
grading approaches.
**5.2 How Grading Methods Impact Mastery Predictions**
When comparing the number of lessons that reach our threshold for mastery (BKT score of 0.9)
according to different grading approaches, we find that 6.9% (165 out of 2,388) of students had their
mastery of a completed lesson incorrectly estimated. In contrast, the most successful grading approach,
LLM chain-of-thought grading, only underestimated the number of completed lessons for 2.6% of
students (61 out of 2,388) students.
This difference can be illustrated by looking at a specific lesson, G7.N3.2.2.2, which
demonstrates how dramatic the effect of grading approach on lesson difficulty estimation can be. This
lesson deals with changing forms and asks the student to present a given decimal number as a fraction.
As there are multiple correct answers to this question and string-matching evaluation struggles with
identifying equivalent fractions, the string-matching algorithm would regularly grade mathematically
correct results as wrong. Examples of this difference in terms of a single lesson can be seen in Table 6.
-----
**Table 6**
_Change in BKT Score on Lesson G7.N3.2.2.2 by Grading Method for Example Students_
**user_id** **BKT Estimate with** **BKT Estimate with** **BKT Estimate with**
**String Match Grading** **LLM CoT Grading** **Human Grading**
996 0.349435 0.845858 0.845858
1165 0.629638 0.966567 0.966567
1235 0.173999 0.809262 0.809262
1239 0.895698 0.973051 0.973051
1841 0.128321 0.913219 0.913219
2037 0.295264 0.994347 0.994347
Anecdotally, we observe that while the overall number of misgraded answers by simpler methods like
string-matching was relatively small, these errors tended to be concentrated around certain students or
specific lessons. Students who adapted more slowly to the expected answer format were
disproportionately affected by inaccurate grading. Additionally, certain lessons that allowed for multiple
correct answer formats or required understanding of equivalent expressions (such as fractions) seemed
to be more susceptible to grading errors from simpler methods. For one student, 1190, using string
matching to grade their answers resulted in BKT estimating that they mastered zero lessons, while both
human and LLM-based grading resulted in BKT estimate of over 0.90 for all the lesson they completed.
Another interesting specific case demonstrates the impact of inaccurate grading on both student
experience and behavior, as well as mastery estimation. Student 994 began their practice with multiple
choice questions in lesson G6.N1.3.6.1. However, because they were not following the expected answer
format, their correct answers were graded as wrong. This presumably caused the student to abandon the
lesson midway and start a different lesson, where the situation repeated itself. The student then switched
to another lesson again after just 3 questions. However, once they started a lesson where the answer
format was less ambiguous, the grading quality improved. From that point on, not only did the student
start completing the lessons, solving all 10 questions, but the estimation of their mastery also became
more aligned with their actual performance.
## 6. Discussion
**6.1 Implications**
-----
F. Author et al.
The results of our experiments have significant implications for the field of ASAG and its application in
educational settings. The superior performance of LLM-based approaches, particularly chain-of-thought
prompting, suggests that these models can effectively handle the complexity and variability of student
responses in open-ended math questions.
One of the most important implications of our findings is the potential for more widespread use
of open-ended questions in formative assessment. As noted in the introduction, open-ended questions
have several advantages over closed-response formats, including decreased influence of test-taking
strategies and greater face validity. The ability to accurately grade these questions automatically could
encourage educators to incorporate more open-ended questions into their assessments, potentially
leading to more effective evaluation of student understanding and improved learning outcomes.
The improved accuracy of LLM-based grading approaches also has implications for student
experience and engagement. As demonstrated in our analysis of individual student mastery prediction,
inaccurate grading can significantly impact a student's perceived progress and potentially influence their
behavior. More accurate grading could lead to better alignment between a student's actual performance
and their estimated mastery, potentially increasing motivation and reducing frustration. Furthermore, the
ability of LLMs to handle diverse answer formats and equivalent expressions could promote more
flexible problem-solving among students. Instead of being constrained to a specific answer format,
students could express their solutions in ways that feel most natural to them, knowing that the grading
system can accurately evaluate their responses.
The implications extend beyond the design and implementation of ITS as well. From a resource
perspective, the ability to accurately grade open-ended questions automatically could lead to significant
time savings for educators. This could allow them to focus more on providing personalized feedback and
support rather than spending time on routine grading tasks.
**6.2 Limitations**
Despite the promising results, our study has several limitations that should be considered when
interpreting the findings and planning future research. Firstly, our dataset is limited to middle school
mathematics questions from specific domains (“Algebra” and “Numbers and operations”). The
performance of the grading approaches, particularly the LLM-based methods, may vary for different
subject areas, complexity levels, or age groups.
-----
Secondly, our experiments focused on a binary classification of answers as correct or incorrect.
This simplification, while useful for our analysis, does not capture the full spectrum of partial
understanding that students may demonstrate in their responses. A more nuanced grading approach
might provide richer insights into student comprehension and learning progress.
Thirdly, LLM-based approaches revealed some inconsistency in grading, particularly for the
chain-of-thought method. This variability in “pedagogical standards” between runs could be problematic
in educational settings where consistent evaluation is crucial for fair assessment and student trust in the
system. Relatedly, there is the potential for LLM hallucination or faulty mathematical reasoning, as
demonstrated in some of our examples. While these instances were relatively rare, they highlight the
need for caution when relying solely on LLM-based grading without human oversight.
Lastly, while we demonstrate the impact of grading accuracy on estimates of student mastery and
lesson difficulty, we did not explore how these improved estimates might translate into better learning
outcomes in practice. The real-world educational impact of using LLM-based grading in an ITS remains
to be studied.
**6.3 Further Research**
Our findings open several avenues for future research in the field of ASAG and its applications in
education. One crucial area for further investigation is the expansion of available datasets to include a
wider range of subjects, grade levels, and cultural contexts. This would allow researchers to test the
generalizability of LLM-based grading approaches across different educational domains and student
populations. Additionally, creating datasets that include more complex, multi-step problems could help
push the boundaries of what automatic grading systems can handle. Future studies should also explore
more nuanced grading scales beyond binary classification. Developing and evaluating methods for
assigning partial credit or identifying specific misconceptions in student responses could provide more
detailed insights into student understanding and learning progress.
Research into improving the consistency of LLM-based grading is another important direction.
This could involve experimenting with different prompting strategies, exploring ensemble methods that
combine multiple LLM runs, or investigating ways to fine-tune LLMs for more consistent performance
in educational grading tasks. Another promising direction is the integration of LLM-based grading into
ITS and studying its impact on adaptive learning. Researchers could investigate how more accurate
-----
F. Author et al.
grading and mastery estimation influence the effectiveness of personalized learning paths, problem
selection, and intervention strategies.
Finally, research into hybrid approaches that combine the strengths of rule-based systems,
traditional machine learning, and LLMs could lead to more robust and efficient grading systems. This
could involve developing frameworks that can dynamically select the most appropriate grading method
based on the specific characteristics of each question and response. By pursuing these research
directions, we can continue to advance the field of automatic short answer grading and work towards
more effective, fair, and personalized educational technologies that support both students and educators
in the learning process.
**7. Conclusion**
We make two contributions to the fields of ASAG and LLM evaluation. By presenting AMMORE, we
aim to expand and diversify the range of publicly available datasets. As it includes students from Africa
answering math questions at middle school levels and provides demographic data, it is a unique dataset
that enables a variety of analyses, a few of which we have explored here. We find that chain of thought
prompting is the best LLM-driven approach to grade open-response math answers. Additionally, we find
improving grading accuracy can lead to significant changes in the estimation of student mastery, which
could have considerable impact on the field of ITS and opens up many more questions for future
research.
**Declarations and Acknowledgement**
The first author has an ongoing research partnership with Rising Academies and works as a consultant
on a project related to developing a conversational agent to support students’ math skills. Author 2
works for Rising Academies as Research and Assessment Manager. The remaining authors all work as
research consultants on projects with Rising Academies. We would like to thank John Whitmer and
Alexis for their support in assembling the AMMORE dataset. We would also like to thank Ryan Baker
for guidance on BKT modeling.
-----
**REFERENCES**
[1] Abdelrahman, G., Wang, Q. and Nunes, B. 2023. Knowledge Tracing: A Survey. ACM Computing Surveys.
55, 11 (Nov. 2023), 1–37. DOI:https://doi.org/10.1145/3569576.
[2] Allen, L.K., Snow, E.L., Crossley, S.A., Tanner Jackson, G. and McNamara, D.S. 2014. Reading
comprehension components and their relation to writing. L’Année psychologique. 114, 04 (Dec. 2014), 663–
691. DOI:https://doi.org/10.4074/S0003503314004047.
[3] van den Bergh, H. 1990. On the Construct Validity of Multiple- Choice Items for Reading Comprehension.
_Applied_ _Psychological_ _Measurement._ 14, 1 (Mar. 1990), 1–12.
DOI:https://doi.org/10.1177/014662169001400101.
[4] Black, P. and Wiliam, D. 2009. Developing the theory of formative assessment. _Educational Assessment,_
_Evaluation and Accountability. 21, 1 (2009), 5–31. DOI:https://doi.org/10.1007/s11092-008-9068-5._
[5] Botelho, A., Baral, S., Erickson, J.A., Benachamardi, P. and Heffernan, N.T. 2023. Leveraging natural language
processing to support automated assessment and feedback for student open responses in mathematics. Journal
_of Computer Assisted Learning. 39, 3 (Jun. 2023), 823–840. DOI:https://doi.org/10.1111/jcal.12793._
[6] Brown, T.B. et al. 2020. Language Models are Few-Shot Learners. arXiv:2005.14165 [cs]. (Jul. 2020).
[7] Burrows, S., Gurevych, I. and Stein, B. 2015. The Eras and Trends of Automatic Short Answer Grading.
_International_ _Journal_ _of_ _Artificial_ _Intelligence_ _in_ _Education._ 25, 1 (2015), 60–117.
DOI:https://doi.org/10.1007/s40593-014-0026-8.
[8] Cain, K. and Oakhill, J. 2007. Children’s comprehension problems in oral and written language a cognitive
_perspective. Guilford Press._
[9] Chowdhery, A. et al. 2023. PaLM: Scaling Language Modeling with Pathways. (2023).
[10] Chrysafiadi, K. and Virvou, M. 2013. Student modeling approaches: A literature review for the last decade.
_Expert_ _Systems_ _with_ _Applications._ 40, 11 (Sep. 2013), 4715–4729.
DOI:https://doi.org/10.1016/j.eswa.2013.02.007.
[11] Cohn, C., Hutchins, N., Le, T. and Biswas, G. 2024. A Chain-of-Thought Prompting Approach with LLMs for
Evaluating Students’ Formative Assessment Responses in Science. arXiv.
[12] Corbett, A.T. and Anderson, J.R. 1995. Knowledge tracing: Modeling the acquisition of procedural knowledge.
_User_ _Modelling_ _and_ _User-Adapted_ _Interaction._ 4, 4 (1995), 253–278.
DOI:https://doi.org/10.1007/BF01099821.
[13] Crossley, S.A., Kim, M., Allen, L. and McNamara, D. 2019. Automated Summarization Evaluation (ASE)
Using Natural Language Processing Tools. Artificial Intelligence in Education. S. Isotani, E. Millán, A. Ogan,
P. Hastings, B. McLaren, and R. Luckin, eds. Springer International Publishing. 84–95.
[14] Fernandez, N., Ghosh, A., Liu, N., Wang, Z., Choffin, B., Baraniuk, R. and Lan, A. 2023. Automated Scoring
for Reading Comprehension via In-context BERT Tuning. arXiv.
[15] Gilardi, F., Alizadeh, M. and Kubli, M. 2023. ChatGPT outperforms crowd workers for text-annotation tasks.
_Proceedings_ _of_ _the_ _National_ _Academy_ _of_ _Sciences._ 120, 30 (Jul. 2023), e2305016120.
DOI:https://doi.org/10.1073/pnas.2305016120.
[16] Gurung, A., Vanacore, K., Mcreynolds, A.A., Ostrow, K.S., Worden, E., Sales, A.C. and Heffernan, N.T. 2024.
Multiple Choice vs. Fill-In Problems: The Trade-off Between Scalability and Learning. Proceedings of the 14th
_Learning Analytics and Knowledge Conference (Kyoto Japan, Mar. 2024), 507–517._
[17] Haller, S., Aldea, A., Seifert, C. and Strisciuglio, N. 2022. Survey on Automated Short Answer Grading with
Deep Learning: from Word Embeddings to Transformers. arXiv.
[18] Hattie, J. 2010. Visible learning: a synthesis of over 800 meta-analyses relating to achievement. Routledge.
[19] Henkel, O., Hills, L., Roberts, B. and McGrane, J. 2023. Supporting Foundational Literacy Assessment in
LMICs: Can LLMs Grade Short-answer Reading Comprehension Questions? (2023).
[20] Kojima, T., Gu, S.S., Reid, M., Matsuo, Y. and Iwasawa, Y. 2022. Large Language Models are Zero-Shot
Reasoners. (2022).
[21] Kortemeyer, G. 2023. Performance of the Pre-Trained Large Language Model GPT-4 on Automated Short
Answer Grading. arXiv.
-----
F. Author et al.
[22] Kuzman, T., Mozetič, I. and Ljubešić, N. 2023. ChatGPT: Beginning of an End of Manual Linguistic Data
Annotation? Use Case of Automatic Genre Identification. arXiv.
[23] Magliano, J.P. and Graesser, A.C. 2012. Computer-based assessment of student-constructed responses.
_Behavior Research Methods. 44, 3 (2012), 608–621. DOI:https://doi.org/10.3758/s13428-012-0211-3._
[24] Magliano, J.P. and Millis, K.K. 2003. Assessing Reading Skill With a Think-Aloud Procedure and Latent
Semantic Analysis. _Cognition_ _and_ _Instruction._ 21, 3 (2003), 251–283.
DOI:https://doi.org/10.1207/S1532690XCI2103_02.
[25] Matelsky, J.K., Parodi, F., Liu, T., Lange, R.D. and Kording, K.P. 2023. A large language model-assisted
education tool to provide feedback on open-ended responses. arXiv.
[26] Mayfield, E. and Black, A.W. 2020. Should You Fine-Tune BERT for Automated Essay Scoring? Proceedings
_of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications (Seattle, WA, USA_
→ Online, 2020), 151–162.
[27] Mizumoto, A. and Eguchi, M. 2023. Exploring the potential of using an AI language model for automated essay
scoring. _Research_ _Methods_ _in_ _Applied_ _Linguistics._ 2, 2 (Aug. 2023), 100050.
DOI:https://doi.org/10.1016/j.rmal.2023.100050.
[28] Morjaria, L., Burns, L., Bracken, K., Levinson, A.J., Ngo, Q.N., Lee, M. and Sibbald, M. 2024. Examining the
Efficacy of ChatGPT in Marking Short-Answer Assessments in an Undergraduate Medical Program.
_International Medical Education. 3, 1 (Jan. 2024), 32–43. DOI:https://doi.org/10.3390/ime3010004._
[29] Nguyen, H.A., Hou, X. and Stamper, J. 2020. Moving beyond Test Scores: Analyzing the Effectiveness of a
Digital Learning Game through Learning Analytics. (2020).
[30] Ouyang, L. et al. 2022. Training language models to follow instructions with human feedback. arXiv.
[31] Pearson, P.D. and Hamm, D.N. 2006. The Assessment of Reading Comprehension: A Review of Practices—
Past, Present, and Future. Children’s reading comprehension and assessment. Lawrence Erlbaum Associates.
[32] Pulman, S.G. and Sukkarieh, J.Z. 2005. Automatic short answer marking. Proceedings of the second workshop
_on Building Educational Applications Using NLP - EdAppsNLP 05 (Ann Arbor, Michigan, 2005), 9–16._
[33] Schneider, J., Schenk, B., Niklaus, C. and Vlachos, M. 2023. Towards LLM-based Autograding for Short
Textual Answers. (2023).
[34] Shute, V.J. 2008. Focus on Formative Feedback. Review of Educational Research. 78, 1 (Mar. 2008), 153–189.
DOI:https://doi.org/10.3102/0034654307313795.
[35] Stiennon, N., Ouyang, L., Wu, J., Ziegler, D.M., Lowe, R., Voss, C., Radford, A., Amodei, D. and Christiano,
P. 2022. Learning to summarize from human feedback. arXiv.
[36] Sultan, M.A., Salazar, C. and Sumner, T. 2016. Fast and Easy Short Answer Grading with High Accuracy.
_Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational_
_Linguistics: Human Language Technologies (San Diego, California, 2016), 1070–1075._
[37] Sung, C., Dhamecha, T., Saha, S., Ma, T., Reddy, V. and Arora, R. 2019. Pre-Training BERT on Domain
Resources for Short Answer Grading. Proceedings of the 2019 Conference on Empirical Methods in Natural
_Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-_
_IJCNLP) (Hong Kong, China, 2019), 6070–6074._
[38] Wei, J. et al. 2022. Emergent Abilities of Large Language Models. arXiv.
-----
| [
"Owen, Henkel",
"Hannah, Horne-Robinson",
"Maria, Dyshel",
"Nabil, Ch",
"Baptiste, Moreau-Pernet",
"Ralph, Abood"
] | 2024-09-26T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.17904 | https://arxiv.org/abs/2409.17904 | https://www.semanticscholar.org/paper/761bf477a610975f511a46155c9261067bbdee16 |
Learning to Reason by Failing: Offline RL on Sub-optimal Rollouts Scales Synthetic Data by 8x | Training on model-generated synthetic data is a promising approach for finetuning LLMs, but it remains unclear when it helps or hurts. In this paper, we investigate this for reasoning problems via an empirical study, followed by a theoretical formalization. First, we find that while the typical approach of finetuning a model on synthetic correct or *positive* problem-solution pairs generated by capable models offers modest performance gains, sampling more correct solutions from the finetuned learner **doubles** the sample efficiency of synthetic data. At the same time, training on model-generated positives can amplify spurious correlations, resulting in flat or even inverse scaling trends as the amount of data increases. Surprisingly, we find that several of these issues can be addressed if we also utilize *negative* responses, \ie model-generated responses that are deemed incorrect via final answer checking. Crucially, these negatives must be constructed such that the training can appropriately recover the utility or credit of each intermediate step in the negative response. With this \emph{per-step} scheme, we are able to attain consistent gains over only positive data, attaining performance similar to amplifying the amount of synthetic data by **8x**. We show that training on per-step negatives can help to unlearn spurious correlations in the positive data, and is equivalent to advantage-weighted reinforcement learning (RL), implying that it inherits benefits of RL over imitating positive data alone. | null | 000
001
002
003
004
005
006
007
008
009
010
011
012
013
014
015
016
017
018
019
020
021
022
023
024
025
026
027
028
029
030
031
032
033
034
035
036
037
038
039
040
041
042
043
044
045
046
047
048
049
050
051
052
053
054
## Learning to Reason by Failing: Offline RL on Sub-optimal Rollouts Scales Synthetic Data by 8x
**Anonymous Authors[1]**
**Abstract**
Training on model-generated synthetic data is a
promising approach for finetuning LLMs, but it
remains unclear when it helps or hurts. In this
paper, we investigate this for reasoning problems
via an empirical study, followed by a theoretical
formalization. First, we find that while the typical
approach of finetuning a model on synthetic correct or positive problem-solution pairs generated
by capable models offers modest performance
gains, sampling more correct solutions from the
finetuned learner doubles the sample efficiency
of synthetic data. At the same time, training on
model-generated positives can amplify spurious
correlations, resulting in flat or even inverse scaling trends as the amount of data increases. Surprisingly, we find that several of these issues can
be addressed if we also utilize negative responses,
_i.e., model-generated responses that are deemed_
incorrect via final answer checking. Crucially,
these negatives must be constructed such that the
training can appropriately recover the utility or
credit of each intermediate step in the negative
response. With this per-step scheme, we are able
to attain consistent gains over only positive data,
attaining performance similar to amplifying the
amount of synthetic data by 8×. We show that
training on per-step negatives can help to unlearn
spurious correlations in the positive data, and is
equivalent to advantage-weighted reinforcement
learning (RL), implying that it inherits benefits of
RL over imitating positive data alone.
**1. Introduction**
Training large language models (LLMs) relies on the ability
to train on large amounts of high-quality data. It is predicted that we will run out of high-quality internet data by
1Anonymous Institution, Anonymous City, Anonymous Region,
Anonymous Country. Correspondence to: Anonymous Author
<[email protected]>.
Preliminary work. Under review by the International Conference
on Machine Learning (ICML). Do not distribute.
2026 (Villalobos et al., 2022; Liu et al., 2024), necessitating training on model-generated data, or what is commonly
referred to as synthetic data. Recent trends illustrate that
scaling up synthetic data can lead to improvements (Li et al.,
2024; Chen et al., 2024) on hard reasoning problems, while
other results illustrate that training on synthetic data can
steer the performance of the model into a downward spiral (Shumailov et al., 2023; Alemohammad et al., 2023; Gerstgrasser et al., 2024)—amplying biases, misinformation,
and undesired stylistic properties. Thus while in principle,
synthetic data could potentially address data scarcity, it must
be designed in an appropriate manner to be effective. However, due to a lack of an understanding of how synthetic data
contributes to LLM behavior, it is unclear how to best use
synthetic data in practice.
To provide clarity on the role of synthetic data, we aim to
understand its impact on LLM capabilities via a study on
reasoning problems, a prevalent scenario where synthetic
data is used. Typically, in this setting, synthetic data corresponds to correct or positive model-generated responses
for a novel set of initial problems synthesized by prompting
capable models (Li et al., 2024; Liu et al., 2023). The resulting model is then evaluated on a held-out set of problems
drawn from a test set. Perhaps as expected, we find that performance improves when finetuning models on positive synthetic responses, though the scaling rates for performance
improvement are often substantialy slower than those observed during pretraining. Concretely, we find that under
the scaling law of Zhang et al. (2024a), the error rate scales
as ≈D[−][0][.][05] to D[−][0][.][15] in the size D of synthetic dataset.
Second, we observe that not all types of positive synthetic
data are equally effective: often positive responses sampled
from the learner are as effective as 2× synthetic data from
bigger models in improving performance. This is because
responses from a similar model are “easier-to-fit” than those
from a more capable model, resulting in reduced memorization (Kang et al., 2024; Tirumala et al., 2022) during
finetuning. We also observe that if the positive response
contains incorrect/irrelevant intermediate steps, training on
such data often incentivizes the model to overfit on spurious
correlations, leading to a flat or even inverse scaling with
more data.
-----
**Learning to Reason by Failing: Offline RL on Sub-optimal Rollouts Scales Synthetic Data by 8x**
Negative Data _Filter negative by_
_calculating per-_
_step credit*_
Sampled data with
Synthetic Data incorrect answers
Finetune
(e.g., Llama-2)base model SFT Sample answers for same questions as in synthetic SFT modelFinetune Per-step
Reasoning QnApairs sampled Model data and filter based on synthetic answers DPO Model
from capable
model, e.g., GPT-4 Positive Data
Finetune RFT Model
Sampled data with base model
correct answers (e.g., Llama-2)
**Figure 1: Positive and negative synthetic data: Pictorial repre-**
sentation of positive/negative synthetic data definitions we use and
how they are fed to SFT, RFT and DPO.
Perhaps surprisingly, we find that the aforementioned
pathologies of training on positive data only can be addressed if we also utilize synthetic negative responses: responses generated by the model that do not result in obtaining a correct final answer. One way to utilize negative
responses is via methods such as direct preference optimization (DPO) (Rafailov et al., 2023). While performance of
standard DPO (Rafailov et al., 2023) largely flatlines as the
synthetic problems are scaled up (Figure 5), we are able to
attain consistent improvements if the negative data is generated appropriately. Our intuition is that instead of contrasting arbitrary correct and incorrect responses, we contrast
positive and negative responses that depict good and bad
choices for the more “critical” intermediate steps (Hwang
et al., 2024): steps that the model must carefully produce so
as to succeed at the problem. In other words, critical steps
are those which the model is unable to recover from, and
hence, must be emphasized. With this scheme, we are able
to attain consistent gains over only positive data, attaining
**performance similar to scaling up positive synthetic data**
**by 8×. We also show that training on this sort of negative**
data evades spurious correlations introduced by training on
positive data alone via a controlled study.
To theoretically understand our empirical findings, we build
a conceptual model of how training on this data benefits
performance. Formally, we show that this construction of
negative data, which emphasizes “critical” tokens (Figure 6)
enables us to perform credit assignment, and is equivalent
to training the model with per-step advantage-weighted reinforcement learning (RL) (Peng et al., 2019) on a mixture
of positive and negative synthetic data. Specifically, these
advantage values are computed under an optimal value function induced by sampling multiple responses under the SFT
policy obtained by training on only the positive data. This
reduction of using negative data to advantage-weighted RL
enables us to conceptually compare it to training on positive data, which corresponds to imitation learning (i.e.,
behavioral cloning) on expert data. Building on theoretical
results in RL (Kumar et al., 2022), we are also able to show
that when advantages can be estimated reliably, advantageweighted RL will be significantly more sample-efficient
compared to imitation learning. Overall, this abstraction
and conceptual model explains the utility of negative syn
055
056
057
058
059
060
061
062
063
064
065
066
067
068
069
070
071
072
073
074
075
076
077
078
079
080
081
082
083
084
085
086
087
088
089
090
091
092
093
094
095
096
097
098
099
100
101
102
103
104
105
106
107
108
109
thetic data over only positive synthetic data.
Our contribution is a study of the role of synthetic data in
improving reasoning capabilities of LLMs. We derive scaling laws for positive and negative data on common reasoning benchmarks such as GSM8K (Cobbe et al., 2021) and
MATH (Hendrycks et al., 2021), and observe that: (a) training on positive synthetic data from capable models results in
scaling rates that are significantly slower than standard empirical risk minimization; (b) training on model-generated
positive synthetic data can improve sample efficiency by
2× but also amplifies spurious correlations; (c) appropriate ways of constructing learner-specific negative data with
emphasis on critical steps, results in a performance boost
equivalent to scaling up positive data 8×; (d) training with
negative data provides a mechanism to unlearn spurious
correlations; and (e) we present a conceptual model inspired
from RL to explain our observations for synthetic data.
**2. Synthetic Data Generation Pipeline**
Our goal in this paper is to understand the role of synthetic data in producing strong language model reasoners.
Building on the recipe of Li et al. (2024); Liu et al. (2023),
we collect synthetic data consisting of both novel problems designed by capable models such as GPT4 (Achiam
et al., 2023) and Gemini 1.5 Pro (Reid et al., 2024), and responses to these problems, obtained from the same models.
Concretely, we focus on two mathematical reasoning benchmarks: GSM8K (Cobbe et al., 2021) and MATH (Hendrycks
et al., 2021).
**Synthetic data pipeline. Our synthetic data generation**
is done in two phases. First, given a dataset Dreal =
**_x[r]i_** _[,][ y]i[r]_ _i_ [∼] _[p]real[(][x][)][ and solution traces]_
**_y[r]i_** [∼] _[p]real[(][y][ ∣]_ **_[x]i[)][, we prompt one of the highly-capable]_**
models with a uniformly random sample{( [)}][ of problems][ x][r] **_x[r]i_** _[,][ y]i[r]_ real
and ask the model to generate a new problem xi such that
it is similar to the real problem x[r]i [, in a way that a feasi-] ( [)][ ∈] _[D]_
ble solution exists. Second, we ask the model to provide
a solution trace answer yi with step-by-step reasoning (exact prompts for xi, yi are borrowed from Li et al. (2024),
shown in Appendix E). We assume that the answers generated via this process are accurate, and perform lightweight
filtering step to remove duplicates, badly-formatted answer
traces, and model failures. Based on the above, for any
synthetic problem and solution pair **_x, y_**, we can define a
binary reward function r **_y, ˆy_** ↦ 0, 1, which verifies if
a new solution trace ˆy is correct or not. This is implemented ( )
with a set of answer extraction and string matching tools( ) { }
borrowed from (Yu et al., 2024; Li et al., 2024). We say that
a new trace ˆy is a positive trace if it produces the correct
final answer i.e., r **_yˆ, y_** = 1, and negative if it produces
an incorrect final answer, i.e., r **_yˆ, y_** = 0. By definition,
_r_ **_y, y_** = 1, and the original trace( ) **_y is always positive._**
( )
( )
-----
**Learning to Reason by Failing: Offline RL on Sub-optimal Rollouts Scales Synthetic Data by 8x**
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
**Positive and negative datasets. The above process in-**
duces a joint distribution psyn **_x, y_**, iid samples from which
yields positive synthetic dataset syn. We note that the
_D_
sampling process for syn is designed to ensure that the in-( )
_D_
duced marginal distribution over synthetic problems psyn **_x_**
is close to preal **_x_** . We will use Dπ[+] [to denote the pos-]
itive dataset of **_x, +yˆ_** where +yˆ is a positive solution( )
trace generated from some policy( ) _π_ ⋅ **_x_** . For a positive
+yˆ and negative ( −yˆ trace, sampled from the same policy)
_π_ ⋅ **_x_**, we denote a dataset over problems and solution( ∣ )
pairs: **_x, +yˆ, −yˆ_** as Dπ[±][.]
( ∣ )
**Reasoning steps. The trace yi consists of several inter-**
( )
mediate steps: yi = **_yi,1, . . ., yi,L_** . We assume each
solution trace has at most L steps, an2d use y1∶t to denote
the subsequence of first [ t steps in the trace. Since mathemat-]
ical reasoning problems require step-by-step computation,
simply arriving at an incorrect final answer does not mean
that all individual steps in a negative ˆy are incorrect. In
fact, given previous steps ˆy1∶t−1 the following intermediate calculation ˆyt is often correct. Similarly, a positive ˆy
may also have incorrect reasoning steps. In fact, even the
original answers generated by more capable models in syn
_D_
may also contain incorrect reasoning steps, and training on
such traces may actually lead to unintended consequences
(Section 4).
**3. Learning from Synthetic Data**
In this section, we discuss various algorithms for learning
from the synthetic dataset Dsyn discussed in the previous
section, as well as positive and negative solution traces
generated using a model.
**Supervised and rejection finetuning (SFT and RFT).**
Given positive synthetic syn, perhaps the most straightfor_D_
ward approach (and the most prevalent) is to learn πsft on
this data via supervised next-token prediction: πsft ⋅ **_x_** ∶=
arg maxπ Ex,y∼Dsyn log π **_y_** **_x_** . Another option is to train
via supervised next-token prediction on problems in( ∣ )syn,
_D_
but when using a positive solution trace( ∣ ) ˆy sampled from
_πsft_ ⋅ **_x_**, instead of positive synthetic responses from the
capable models in syn. Akin to rejection finetuning
_D_
(RFT (( ∣ Yuan et al.), 2023)) or STaR (Zelikman et al., 2022),
sampling from πsft ⋅ **_x_** once is not guaranteed to give
a positive response, and we instead sample M times for
each x and construct the dataset( ∣ ) _Dπ[+]sft_ [of SFT policy gen-]
erated positive responses. Then, we apply the next-token
prediction loss on Dπ[+]sft[.]
**Preference optimization. Beyond positive data, we can**
also learn from negative synthetic data generated from the
SFT policy, especially when contrasted with positive responses. However, learning from negative data presents
multiple open design questions pertaining to the construction of negative traces, and the choice of the loss function,
and simple supervised fine-tuning will not be a good choice
since it will incentivize the model to produce more errors.
Therefore, we utilize a contrastive training approach, direct
preference optimization (DPO (Rafailov et al., 2023)) for
incorporating negative data from πsft. In a nutshell, DPO
trains a policy using the following preference optimization
objective:
EDπ[±]sft _πsft_ +y **_x_** −β log _π[π]sft[(][−]−[y]y[ ∣]_ **_[x]x[)]_**
We consider two objectives that construct negative data −yˆ
in distinct ways and subsequently train the model on that[[][σ][ (][β][ log][ π][(][+]( **_[y][ ∣] ∣[x][)])_** ( ∣ ) [)]][ .][ (1)]
data using Equation 1. The first variant we study is naïve
**_DPO, which simply samples negative data −yˆ ∼_** _πsft_ **_y_** **_x_**
from the SFT policy and adds **_x, y, −yˆ_** to Dπ[±]sft[. The]
second variant is per-step DPO (Hwang et al., 2024), which( ∣ )
first samples a complete solution trace ( ˆy1∶)L from πsft and
then determines the “first pit” ˆyc, such that any completion
**_yˆc+1∶L ∼_** _πsft_ ⋅ **_x, ˆy1∶c_**, sampled conditioned on x, and
previous steps ˆy1∶c leads to incorrect answers for a majority
of the Monte-Carlo rollouts. Given the first pit( ∣ ) ˆyc, the triplet
**_x, y, ˆy1∶c_** is added to the negative dataset Dπ[±]sft[.]
**4. Positive Data Improves Coverage, But**
( )
**Amplifies Spurious Correlations**
We first analyze the influence of scaling up positive synthetic data on GSM8K and MATH. In this experiment, we
fine-tune DeepSeek-Math-7B (Bi et al., 2024) and LLama27B (Touvron et al., 2023) models (details in Appendix H)
on varying sizes of syn, constructed out of a 5:1 mix_D_
ture of GPT-4-turbo (Achiam et al., 2023) and Gemini-1.5
Pro (Reid et al., 2024). We obtain a series of SFT policies
on this data scaling ladder. We then train a series of models
by running one iteration of RFT on data obtained from the
SFT policies at each step.
**Scaling results with positive synthetic data GPT-4 and**
**Gemini 1.5 Pro. Since we assume that the more capable**
models generate correct solutions for new problems, by
scaling Dsyn we are increasing coverage under preal, i.e.,
adding new x, y with non-zero probability under preal. In
Figures 2(a,b), we plot the test error rate of the SFT policy
as Dsyn is scaled. As expected, we observe that the test
error rate on both GSM8K and MATH improves with more
positive data. Further, by simply fitting the parametric scaling law from (Zhang et al., 2024a), for D ∶= _Dsyn_, we
find that the scaling trends decay as ≈D[−][0][.][15] on GSM8K
and ≈D[−][0][.][05] on the harder MATH dataset, with similar ∣ ∣
trends for the corresponding pass@5 error rates. Since these
scaling trends are much more underwhelming than those
for pre-training (Hoffmann et al., 2022), this perhaps implies that samples in Dsyn are indeed improving coverage
over samples in preal **_x, y_**, but maybe not as efficiently as
sampling iid samples directly from it.
( )
**Scaling results with positive synthetic data from 7B SFT**
**policy. Previously, we scaled problems in Dsyn by querying**
-----
**Learning to Reason by Failing: Offline RL on Sub-optimal Rollouts Scales Synthetic Data by 8x**
GSM8K
MATH
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
RFT Scaling
0.59
0.43
0.35
0.21
0.90
0.85
0.80
0.60
0.57
0.62
0.60
0.58
0.28
0.26
0.24
0.22
0.39
0.35
0.05
|Col1|Col2|Col3|Col4|SFT Llama2|Col6|
|---|---|---|---|---|---|
|||||||
|||||||
|||||SFT DeepSeek||
|||||RFT DeepSeek||
|||||2||
|×||||||
|||||SFT DeepSeek Pass@5||
|Col1|Col2|Col3|Col4|SFT Llama2|Col6|
|---|---|---|---|---|---|
|||||||
|SFT DeepSeek||||||
|||||RFT DeepSeek||
|||||2||
|||||×||
|||||SFT DeepSeek Pass@5||
|MATH 8k prompts|1|6k promp|ts|
|---|---|---|---|
|||||
|SM8K 8k prompts 16k prompts||||
|||||
|||||
|||||
MATH 8k prompts 16k prompts
GSM8K 8k prompts 16k prompts
SFT Llama2
SFT DeepSeek
RFT DeepSeek
2×
SFT DeepSeek Pass@5
SFT Llama2
SFT DeepSeek
RFT DeepSeek
2×
SFT DeepSeek Pass@5
8k16k 32k 64k 128k 8k16k 32k 64k 128k 10k 20k 30k 40k 50k 60k
Synthetic dataset size (|Dsyn[+] _[|][)]_ Synthetic dataset size (|Dsyn[+] _[|][)]_ RFT dataset size (|Dπ[+]sft[|][)]
(a) (b) (c)
**Figure 2: Positive data scaling laws: On GSM8K (a) and MATH (b), we evaluate SFT trained on Dsyn and RFT that uses SFT policy**
generated positives (Dπ[+]sft [), as we scale][ D]syn[, observing][ D]π[+]sft [to be][ 2][×][ as effective as][ D]syn[. In (c), we plot performance of RFT the]
number of correct solutions in Dπ[+]sft [are scaled, for a fixed set of 8k/16k problems from][ D]syn[, observing that scaling model positives can]
amplify spurious correlations.
8k16k 32k 64k 128k
8k16k 32k 64k 128k
0.07
0.06
0.05
0.04
0.03
0.02
0.01
GPT-4 and Gemini-1.5. Now, for existing problems in syn
_D_
we generate new responses by sampling from the πsft trained
on problems+solutions in Dsyn. For any **_x, y_** ∈ _Dsyn_
we generate verified positive solution traces ˆy ∼ _πsft s.t._
_r_ **_yˆ, y_** = 1. Following Yuan et al. (2024a ( ), to ensure)
we sample enough correct responses, we sample 100 times
from( _π)sft and generate RFT datasets_ _πsft_ [, where each prob-]
_D[+]_
lem has atmost 4 correct and diverse solutions. Next, we
finetune the pretrained DeepSeek-Math-7B model on these
new series of RFT datasets and plot the performance on
GSM8K and MATH (Figure 2(a,b)). First, we observe that
**for any size of** syn, the performance of the RFT model
_D_
**is better than the corresponding SFT model, and the dif-**
ference remains consistent as we scale syn. Surprisingly,
_D_
this indicates that training on positive answer traces from
the 7B πsft **_y_** **_x_** can lead to better performing policies
than capable models.
( ∣ )
**What is the value of positives from πsft** **_y_** **_x_** **? If sam-**
pling from πsft also improves coverage and performance,
then should we scale problems and solutions in( ∣ ) syn, or
_D_
just solutions in Dπ[+]sft[? To answer this, we need to assign]
a value to the RFT dataset Dπ[+]sft [in terms of][ ∣][D]syn[∣][. We]
do this by training SFT policies on Dsyn of sizes 8k and
16k, and then generating RFT datasets from the corresponding SFT policies where we only add more correct solution
traces (for the same problems) and scale RFT data from
10k to 50k (unlike RFT data in Figure 2(a,b) where both
questions and answers scale). In Figure 2(c) we plot the
error rate of DeepSeek-Math-7B finetuned on the different
sizes of Dπ[+]sft[. Comparing the lowest values of the curves]
in Figure 2(c) with Dsyn scaling in Figure 2(a,b), we note
that performance from Dπ[+]sft **[is][ 2][×][ the size of][ D]syn** **[used]**
**to train πsft. We also note that performance can plateau (or**
worsen in the case of GSM8K) as we scale up Dπ[+]sft [by a lot.]
This is because r ⋅, y is unable to verify the correctness of
each step in the positive solution traces in Dπ[+]sft[. Later, we]
see how incorrect steps induce spurious correlations that get( )
0.00
|Col1|SFT da RFT da|ta ta|Col4|
|---|---|---|---|
|||||
|||||
|||||
|||||
|||||
|||||
SFT data
RFT data
0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4
Negative Log Likelihood
**Figure 3: Under base LLM, Dπ[+]sft** [has higher likelihood than]
syn.
_D_
amplified as we scale positive data, explaining this drop.
**Why is self-generated positive data more sample-**
**efficient? From our result above, we find that solutions**
sampled from πsft (trained on Dsyn) yield better models, as
good as those trained on 2 × syn . This finding is sur_D_
prising since one might expect more capable GPT-4/Gemini
models to present better solutions, training on which should ∣ ∣
lead to good performance, akin to distillation (Sharma et al.,
2024), but this is not the case. Our results are consistent
with the study of memorization in LLMs (Kang et al., 2024;
Hartmann et al., 2023; Tirumala et al., 2022), which shows
that pretrained (base) LLMs tend to memorize “hard-to-fit”
and “out-of-pretraining-distribution” responses during finetuning, resulting in imperfect generalization. In contrast,
correct response traces produced by πsft on problems from
_Dsyn are not as hard-to-fit or as out-of-distribution, since_
they are obtained from a model that is “close” to the base
LLM. We confirm this hypothesis with a histogram of negative log-likelihood values of the SFT and RFT data under the
base LLM (Figure 3). Hence, we expect STaR/RFT to alleviate the memorization problem on a large chunk of examples.
This finding also corroborates Yuan et al. (2023)’s result that
lower the perplexity of SFT data under the base model, the
smaller the gap between SFT and RFT performance. Note
that one may also attribute better performance of RFT to
improved coverage from multiple answers in Dπ[+]sft [for each]
question in syn. But, we find that even when RFT data is
_D_
-----
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
**Learning to Reason by Failing: Offline RL on Sub-optimal Rollouts Scales Synthetic Data by 8x**
0.8
SFT on Dsyn[+] [original] still steer the model towards the correct response on some
0.7 RFT on Dπ[+] [spurious] training problems, but derail it otherwise. In this section, we
0.6 per-step DPO on Dπ[±] [spurious] present a conceptual model for constructing negatives that
enables us to perform per-step credit assignment, and show
0.5 that this approach can help us address these failure modes of
Test accuracy
0.4 positive data. We show that per-step DPO from Section 2 is a
variant of this more general approach. We will then analyze
0.3 scaling laws with negative data and empirically demonstrate
that carefully constructed negative data can address issues
with memorization. Finally, we theoretically prove that
negative data improves sample-efficiency of syn.
_D_
**5.1. Conceptual Model: Constructing Negatives to**
**Enable Per-Step Credit Assignment**
While naïvely contrasing an entire positive response +y
against an entire negative response −y will increase the
likelihood of each step that appears in +y (even when incorrect or irrelevant) and reduce likelihood on each step
appearing in −y (even when accurate and relevant), it does
not account for the importance of each step. Formally, given
a negative solution trace −y, we would want to identify the
first critical step where the model introduces a flaw −y,
and emphasize alternate correct completions from this step
that the model could have still produced. Likewise, given
a positive solution trace, +y, we would like to identify if a
given step +yi does not make progress towards the solution
by identifying if there exist alternatives from its predecessor
step, +y1∶i−1, which now presents a key decision-making
point. What are these critical steps and how can we
**identify them procedurally?**
**Value functions. We can formalize this notion of a critical**
step under the notion of value functions from reinforcement
learning (RL). Recall that both +y and −y are sampled from
_πsft. For problem x, with correct solution y, a response ˆy_
with a sequence of steps ˆy1∶i−1, and a candidate step ˆyi,
we define the value function for step yi, and previous steps
under some policy ˜π as:
_Qπ˜[(][x][,][ ˆ]y1∶i−1, ˆyi_
state action
)
ÍÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÑÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÏ= Ey[new]i+1∶ÍÑÏL[∼]π[˜] ⋅ **_x,yˆ_** 1∶i **_y1∶i, y[new]i+1∶L[]][,][ y][) ]]_** (2)
expected future reward under new actions sampled by policy( ∣ )[[][r][ ([][ˆ] ˜π
Intuitively, for any partial solution upto i steps, this Q
ÍÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÑÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÏ
function evaluates the probability of succeeding at solving
the problem given the remaining budget of L − _i more steps,_
in expectation over all possible futures sampled from some
policy ˜π. Our conceptual model treats the policy ˜π as an algorithmic design choice that can differ for algorithms using
negative data. As we see later, choosing ˜π as the Best-of-K
distribution around πsft (denoted as BoK _πsft_ ) enables a
particularly interesting tradeoff between Q-value estimation
and policy improvement. Another common choice is( ) _πsft_
itself. Now, for any given step ˆyi, we can define its advan
outperforms SFT consistently by > 1%. Since verification
|0.8 SFT on Ds+yn original 0.7 RFT on Dπ+ spurious per-step DPO on Dπ± spurious 0.6 accuracy 0.5 Test 0.4 0.3 GSM8K MATH : Spurious correlations in RFT data hurt p d to one solution per question, LLM t|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|
|---|---|---|---|---|---|---|---|---|---|---|
|||||SFT on Ds||||+yn original|||
|||||||RFT on D per-step D||π+ spurious PO on Dπ± spurious|||
||||||||||||
||||||||||||
||||||||||||
||||||||||||
||||||||||||
||||||||||||
|||s cor solut|GSM8K relati ion||ons i per q|n RFT uesti|MATH data on,||hurt LLM|p t|
is cheap, we can sample more solutions and also benefit
from coverage.
**SFT/RFT policy suffers from spurious correlations in**
**positive synthetic data. While RFT data maybe “easier-**
to-fit”, in Figure 2(c) we also note that continuing to scale
RFT data leads to test error saturation, or even worse test error. This is unlike scaling of problems and solutions in SFT
data (in Figure 2(a,b)). This failure can be attributed to the
presence of incorrect/irrelevant steps that are not detected
by our verifier, since it only verifies the final answer (see
Appendix H, F for examples). For a problem x, when the
LLM is trained with supervised next-token prediction on
some positive sub-optimal y in the RFT data, with incorrect
step yk, it is likely to overfit on spurious correlations between the sub-optimal subsequence y1∶k, and the following
valid step yk+1, when trying to maximize π **_yk+1_** **_y1∶k, x_** .
To verify this hypothesis, we amplify the presence of these
spurious steps. Specifically, for each question in( ∣ syn we)
_D_
sample “spurious steps” from πsft trained on it, i.e., steps
which lead to the incorrect answer with high probability
under πsft (we sample multiple completions conditioned on
the same spurious step to check how likely it leads to the
correct final answer). Then, we interleave the solution traces
in the RFT data with these spurious steps. Note, that all
traces in the RFT data are still positive since, they all lead to
the correct answer eventually. We find that the LLM trained
on this sub-optimal spurious RFT data performs worse than
the πsft policy itself.
Takeaways for scaling positive synthetic data
- While positive data from GPT-4/Gemini-1.5 improves coverage over new problems and solutions,
positive data from SFT policy trained on it is 2×
more sample efficient.
- When positive data from πsft contains spurious
steps, scaling leads to worse test errors.
**5. Negative Synthetic Data Enables Per-Step**
**Credit Assignment**
The spurious correlations from Section 4 correspond to
intermediate irrelevant or incorrect steps that are able to
-----
**Learning to Reason by Failing: Offline RL on Sub-optimal Rollouts Scales Synthetic Data by 8x**
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
324
325
326
327
328
329
_tage as the relative change in Qπ˜_ [when adding step][ ˆ]yi in
comparison with other possible candidates for step i:
_Aπ˜[(][x][,]y[ˆ]1∶i−1; ˆyi_
= Qπ˜[(][x][,][ ˆ]y1∶i−1, ˆyi − _Qπ˜[(][x][,][ ˆ]y1∶i−2, ˆyi−1_ _. (3)_
)
Equation 3 is identical to the definition of advantage of
an action (i.e., ˆyi) at a state (x) _, ˆy1∶i−1) from RL (Sutton &)_
Barto, 2018), in that it is the gap between the Q-value of a
state-action pair and the value function of the state (which
itself is equal to the Q-value of the previous step due to
deterministic dynamics).
**Critical steps, per-step DPO, and advantage-weighted**
**RL. We can use advatanges (Equation 3) to characterize**
critical steps. While the advantage values will always be
non-positive by definition, steps that attain a higher advantage value than others are more critical to precisely execute.
In contrast, steps that with very low advantage values are
likely worse and must be unlearned. Our definition of the
advantage function and this connection also implies that
one can calculate advantages for each step in a response
via additional Monte-Carlo rollouts starting from prefixes
defined by partial solutions. These advantage estimates
(Equation 3) can then be used for training the model by
running advantage-weighted RL. However, in Theorem 5.1
we formally show that DPO on per-step pairs, which contrasts positive and negative traces obtained via additional
rollouts from policy ˜π, on prefixes of a response sampled
from πsft is equivalent to advantage-weighted RL. A proof
of Theorem 5.1 is in Appendix C. Note that unlike the standard reduction of DPO to the RL objective under some
reward function (Rafailov et al., 2023; 2024), Theorem 5.1
is stronger in that it identifies the value function induced by
per-step DPO.
**Theorem 5.1 (Equivalence of advantage-weighted RL and**
DPO with per-step pairs). The optimal policy from Equa_tion 1 with Dπ[±]sft_ _[given by][ (][x][,][ [][y]1∶i[,][ +][y]i+1[]][,][ [][y]1∶i[,][ −][y]i+1[])]_
_where the positive and negative traces share prefix_
**_y1∶i_** ∼ _πsft, and −yi+1_ ∼ _πsft_ ⋅ **_x, y1∶i_** _, +yi+1_ ∼
_σ_ _Aπ˜[(][x][,][ y]1∶i[;][ ⋅][)][ −]_ _[A]π˜[(][x][,][ y]1∶i[;][ −][y]i+1[))][, is identical to the]_
_optima of the advantage-weighted RL objective:(_ ∣ )
_L_
(
maxπ Ex∼psyn **_x_** _,y∼πsft_ ⋅ **_x_** ∑ log π **_yi_** **_x, y0∶i−1_**
_i=1_
( ) ( ∣ )
[ ( ∣ )
⋅ exp _Aπ˜[(][x][,][ y]0∶i−1[,][ y]i[)/][β][)]][.]_ (4)
**Practical instantation of DPO with per-step pairs. (** Our
practical implementation of per-step DPO is an approximation of the above scheme, with ˜π chosen to be a particular
policy. Concretely, the practical implementation of per-step
DPO sets ˜π to be the best-of-K policy, BoK _πsft_ where
_K = 5. There are two advantages for choosing a higher_
value of K: (i) estimating the advantage in Equation( ) 3 with
Monte-Carlo rollouts has lower variance; and (ii) QBoK _πsft_
( )
is a non-decreasing function in K for any state-action, which
implies that the solution of advantage-weighted RL objective will only improve, in the neighborhood of the SFT
policy πsft that appears in the regularization term. We will
next discuss scaling results for negative data, and then in
Section 5.3 show how per-step credit assignment improves
generalization and builds robustness to spurious correlations.
**5.2. Scaling Results for Negative Data**
Observe in Figure 5(a,b), that for both DeepSeek-Math-7B
and LLama2-7B models, per-step DPO improves performance beyond the SFT policy and the performance continues to scale favorably as data size increases. In fact, also
note that for any given size of syn, per-step DPO also sub_D_
stantially improves over RFT (Figure 2) on both datasets,
and overall, while RFT improved effective data size of
_Dsyn by 2×, additionally training on negative data ex-_
**tends the performance improvement to 8× the size of**
syn. Additionally, since per-step DPO estimates advantage
_D_
of each step under the Best-of-5 policy, one might expect a
saturation in the pass@5 performance of the per-step DPO
solution. On the contrary, we find that pass@5 performance
also improves consistently.
**Choice of negative data has significant impact. In Fig-**
ure 5(c) we plot negative data scaling laws where the choice
of negative data (and thereby pairs for DPO in Equation 1)
differs. Observe that standard pairing of positive and negative responses in Dπ[±]sft [for DPO (][Rafailov et al.][,][ 2023][) does]
not improve upon the SFT policy. As such, we needed to
tune β in Equation 1 for DPO but could not fully avoid performance degradation. Our conceptual model explains this
result: since contrasting arbitrary positives and negatives
would result in an incorrect induced advantage function,
training with DPO will exacerbate spurious correlations that
maximize this induced advantage function (Saeidi et al.,
2024; Pang et al., 2024; Xu et al., 2024). In fact, Pal et al.
(2024) also find similar concerns with random pairing and
instead pair positives and negatives that with highest edit
distance, which leads to some improvement over standard
DPO (Figure 2(c)) but still performs poorer than per-step
DPO that accounts for credit.
Takeaways for scaling negative synthetic data
- Negative data can identify high-advantage (critical)
steps in model-generated responses.
- We can construct negative data distribution that
equates DPO to advantage-weighted RL. Negative
data used in this way improves the sample efficiency
of synthetic data by 8×.
-----
**Learning to Reason by Failing: Offline RL on Sub-optimal Rollouts Scales Synthetic Data by 8x**
GSM8K
MATH
|Col1|Col2|Col3|SFT|Llama2 per-step DP|O|
|---|---|---|---|---|---|
|||||||
|||||||
|||||||
||||SFT De|epSeek per-step DP|O|
|||||8||
|||||×||
|||||||
|||||||
||||per-st|ep DPO DeepSeek Pass@5||
|||||||
SFT Llama2 per-step DPO
SFT DeepSeek per-step DPO
8×
per-step DPO DeepSeek Pass@5
8k16k 32k 64k 128k
330
331
332
333
334
335
336
337
338
339
340
341
342
343
344
345
346
347
348
349
350
351
352
353
354
355
356
357
358
359
360
361
362
363
364
365
366
367
368
369
370
371
372
373
374
375
376
377
378
379
380
381
382
383
384
Choice of Negative Data on MATH
0.51
0.43
0.35
0.25
0.21
0.13
0.65
0.62
0.86
0.79
0.72
0.61
0.58
0.55
0.52
0.44
0.40
0.36
0.32
0.59
0.56
0.53
0.04
|Col1|Col2|Col3|SFT|Llama2 per-step DP|O|
|---|---|---|---|---|---|
|||||||
|||||||
|||||||
||||SFT De|epSeek per-step DP|O|
|||||||
|||||8×||
|||||||
|||||||
||||per-st|ep DPO DeepSeek Pass@|5|
|Col1|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
||||SFT DP|O (Rafailov et al. (2023)|)|
||||DP per-|O (Pal et al. (2024)) step DPO||
|||||||
|||||||
|||||||
SFT Llama2 per-step DPO
SFT DeepSeek per-step DPO
8×
per-step DPO DeepSeek Pass@5
SFT
DPO (Rafailov et al. (2023))
DPO (Pal et al. (2024))
per-step DPO
8k16k 32k 64k 128k
Synthetic dataset size (|Dsyn[+] _[|][)]_
(a)
8k16k 32k 64k 128k
Synthetic dataset size (|Dsyn[+] _[|][)]_
(c)
Synthetic dataset size (|Dsyn[+] _[|][)]_
(b)
**Figure 5: Negative data scaling laws: We evaluate algorithms that consume negative data as we scale** syn, and compare them with only
_D_
positive training (SFT) on syn. On GSM8K (a) and MATH (b), we observe an 8× gain from per-step DPO (Section 3) which aligns with
_D_
our model of negative data that enables per-step credit assignment. In (c) we compare different negative data construction algorithms, and
particularly note that naïvely pairing positives and negatives (Rafailov et al., 2023) leads to worse performance as we scale syn.
_D_
of the final answer, the model fails to generalize on novel
problems, as we saw in Figure 4. We now explain how
_online model-specific interventions and advantage estima-_
tion would address this issue. Consider ˜π = πsft. As we
show later, in under-trained models memorized steps are
imperfectly cloned under πsft, implying that while teacherforcing loss is low for some spurious, memorized step ys,
sampling paths from πsft, conditioned on y1∶s is likely to
generate incorrect responses. This means ys attains a low
advantage. On the other hand, for a correct step, whp estimated advantage is higher. Thus, training the model with
advantage weighted RL would de-emphasize spurious steps
and emphasize critical steps. Running per-step DPO on data
generated by the RFT model that has overfit on spurious
correlations improves accuracy by >6% (Figure 4). We
visualize advantages in Appendix F. In Figure 8, we plot
the average Q-value of a step for different negative data
schemes, and note that only per-step DPO improves over
SFT at each step, as expected based on the connection to
advantage-weighted RL (Theorem 5.1). Standard DPO fails
to improve performance since it has poor success rate at
earlier (critical) steps.
**2) Generalization depends on low advantage estimation**
**error. The practical efficacy of algorithms that use negative**
data for credit assignment requires the advantage estimation
error to be low with fewer rollouts from ˜π. For discussion,
consider ˜π = πsft. When the initial advantage of a spurious
step is incorrectly over-estimated, negative data algorithms
up-weight the likelihood further. This only leads to further
memorization. Hence, most Monte-Carlo rollouts from πsft
would rely upon the memorized feature. Since the model
generates the correct answer from the memorized feature,
it would estimate higher Aπsft, and this downward spiral
of training with increasing weights on the spurious step
leads to test-time model collapse. On the other hand, when
_π˜ = BoK_ _πsft_ for a higher value of K, the Monte-Carlo
advantage estimator has a lower variance (and error). This
( )
Start W4 . . . Wrong
C1 S1 S2 W5 . . . solutions
Spurious steps
Correct
C2 C3 C4 C5 C6 solution
. = Steps W1 W2 W3 solutionWrong
Informally, step is reasoning operation performed as a part of solving the problem,
e.g. if previous state was 2x-5 = 1, then one step can be calculating value of x.
Advantage function
𝐴!"[(𝐶]#[, 𝐶]# [→𝑆]$[)][ is ][low ][(often leads to a wrong sol.)]
𝐶# →𝐶%
is critical
𝐶% →𝐶&
is critical
𝐴!" [𝐶]#[,][ 𝐶]# [→𝐶]% [is ][midrange][ (leads to correct ]
and wrong sol.)
𝐴!"[(𝐶]%[, 𝐶]% [→𝑊]$[)][ is ][low ][(leads to a wrong sol.)]
𝐴!"[(𝐶]%[,][ 𝐶]% [→𝐶]&[)][ is ][high ][(leads to the correct sol.)]
**Figure 6: Illustration of advantage estimation from negative data**
for identifying critical steps in synthetic model generations.
**5.3. Why Does Credit Assignment Improve Model**
**Generalization?**
Our conceptual model illustrates that per-step DPO can
perform credit assignment, and identify critical steps over
irrelevant ones via advantage estimates. We saw that this
improves test performance and scaling. Now, we attempt to
understand why per-step credit assignment should improve
generalization by understanding the generalization properties of advantage-weighted RL. We present two empirical
studies below, and a formal theoretical guarantee combining
these insights is shown in Appendix D.
**1) Advantage-weighted RL de-emphasizes spurious steps**
**and emphasizes critical steps. Our key insight is that spu-**
rious correlations emerge in monolithic SFT or RFT due to
the well-known issue of causal confusion (De Haan et al.,
2019) in imitation learning: by memorizing incorrect or
irrelevant steps and associating them with the correctness
-----
**Learning to Reason by Failing: Offline RL on Sub-optimal Rollouts Scales Synthetic Data by 8x**
0.6
385
386
387
388
389
390
391
392
393
394
395
396
397
398
399
400
401
402
403
404
405
406
407
408
409
410
411
412
413
414
415
416
417
418
419
420
421
422
423
424
425
426
427
428
429
430
431
432
433
434
435
436
437
438
439
|Col1|SFT test error|
|---|---|
||prefix DPO (iter 580)|
|||
|||
|||
|||
SFT test error
prefix DPO (iter 580)
(a) (b) (c)
0.30 0.8
SFT loss
0.25 per-step DPO (iter 60) loss
0.6
0.20
0.15 SFT critical token Q-value
0.4 Q-value
0.10
Next-token prediction loss 0.05 0.2
0.00
0 100 200 300 400 500
Training iterations
SFT
0.8 pre-step DPO (iter 60)
pre-step DPO (iter 200)
0.6
0.4
Test error
0.2
0.0
0 100 200 300 400 500
Training iterations
**Figure 7: Didactic analysis on star graph: In (a) we plot the SFT loss and Q-value of the critical token (adjacent node) for SFT and**
per-step DPO (starting from iter 60). Indicative of memorization SFT loss decreases at a slow rate, matching the slow rate of increase in
the Q-value. In contrast per-step DPO loss sharply decreases during training. In (b) we notice a corresponding phase transition in the test
error of per-step DPO starting from different under-trained SFT checkpoints, which does not happen for an over-trained SFT checkpoint
in (c).
find that when training with negative data from iteration 60
(under-trained πsft) and iteration 200 (early-stopped πsft),
utilizing per-step DPO reduces the training loss very aggresively. These benefits translate to test losses and performance as well (Figure 7(b), orange and green). In contrast,
supervised finetuning exhibits a nearly-flat test loss landscape, although the train loss reduces slowly. Upon a closer
inspection, we find that training on positive data via SFT
only tends to memorize the critical token in the training data
using non-generalizable features, and hence, the resulting
model does not generalize to novel problems. More training
with SFT is unable to “unlearn” this spurious correlation and
does not reduce the loss function. On the other hand, perstep DPO with negative data is able to unlearn this spurious
feature and drives improvement, as evident by the drastic
improvement on train and test.
**(3) Training on negative data from an over-trained SFT**
**initialization leads to model collapse. When training with**
negative data on an over-trained πsft (iteration 580) in Figure 7(c), we observe that both SFT and per-step DPO exhibit
identical test errors since training with more negative data
exacerbates the model’s dependence on memorizing the
critical token, which manifests in the form of lower train
losses. This is also an example where Monte-Carlo samples
from the over-trained checkpoint estimates a high advantage
since Q-value is already high at iteration 500 (in (a)). Thus,
when the SFT policy has sufficiently memorized the training
data using a spurious feature, training further is unable to
unlearn this dependence. Hence, in this regime, negative
data leads to no improvement, capping performance at what
was attained by fine-tuning on positive data.
0.50
0.45
0.40
0.35
0.30
|Col1|Col2|Col3|Col4|
|---|---|---|---|
|||||
|||||
|SFT DPO (Rafailov et al. (2|023))|||
|DPO (Pal et al. (2024)) per-step DPO||||
SFT
DPO (Rafailov et al. (2023))
DPO (Pal et al. (2024))
per-step DPO
1 2 3 4 5 6 7 8
Step
**Figure 8: Per-step DPO improves Q-values at each step, standard**
DPO only improves at irrelevant steps.
discussion also justifies the choice of K=5, an intermediate
value, in per-step DPO.
**Didactic analysis. With the above insight, we now study**
the influence of πsft on the generalization effects of per-step
DPO. For our analysis, we consider a didactic star graph
problem (Appendix G) from Bachmann & Nagarajan (2024),
where given a graph in the shape of a star and a query (center/end node), the model is asked to output the full path
between the start/end nodes. This task highlights the failure
of SFT at planning problems (akin to math reasoning). They
show that πsft minimizes SFT loss by memorizing the “hardto-predict” node adjacent to the center, and copying the rest
from the input graph. It is clear that the failure stems from
not being able to identify the critical adjacent token. We will
show how credit assignment with negative data accurately
upweights the critical token and unlearns the memorized
token. To vary the choice of πsft, we choose several intermediate checkpoints obtained during supervised finetuning
for synthetic negative data generation. We consider three
initializations: (1) an under-trained SFT model with a large
training and test loss, and (2) an SFT model obtained by
early-stopping based on a held-out validation set, where the
validation loss is the lowest, and (3) an over-trained SFT
checkpoint, with a low training but high validation loss.
**(1) & (2): Training on negative data from an under-**
**trained or early-stopped πsft improves both training loss**
**and test performance. As shown in Figure 7(a,b), we**
Takeaways for generalization with negative data
Advantage-weighted RL unlearns spurious steps and
improves generalization when: (i) advantage estima-
tion error is low; and (ii) the model is under-trained
enough that imperfectly cloned spurious steps have
low advantage, which can then be estimated with
negative data.
-----
**Learning to Reason by Failing: Offline RL on Sub-optimal Rollouts Scales Synthetic Data by 8x**
**References**
Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I.,
Aleman, F. L., Almeida, D., Altenschmidt, J., Altman, S.,
Anadkat, S., et al. Gpt-4 technical report. arXiv preprint
_arXiv:2303.08774, 2023._
Agarwal, A., Jiang, N., Kakade, S. M., and Sun, W. Reinforcement learning: Theory and algorithms. CS Dept.,
_UW Seattle, Seattle, WA, USA, Tech. Rep, 2019._
Alemohammad, S., Casco-Rodriguez, J., Luzi, L., Humayun, A. I., Babaei, H., LeJeune, D., Siahkoohi, A.,
and Baraniuk, R. G. Self-consuming generative models
go mad. arXiv preprint arXiv:2307.01850, 2023.
Bachmann, G. and Nagarajan, V. The pitfalls of next-token
prediction, 2024.
Bi, X., Chen, D., Chen, G., Chen, S., Dai, D., Deng, C.,
Ding, H., Dong, K., Du, Q., Fu, Z., et al. Deepseek llm:
Scaling open-source language models with longtermism.
_arXiv preprint arXiv:2401.02954, 2024._
Bradley, R. A. and Terry, M. E. Rank analysis of incomplete block designs: I. the method of paired comparisons.
_Biometrika, 39(3/4):324–345, 1952._
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D.,
Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G.,
Askell, A., et al. Language models are few-shot learners.
_Advances in neural information processing systems, 33:_
1877–1901, 2020.
Chen, Z., Deng, Y., Yuan, H., Ji, K., and Gu, Q. Self-play
fine-tuning converts weak language models to strong language models. arXiv preprint arXiv:2401.01335, 2024.
Cheng, P., Yang, Y., Li, J., Dai, Y., and Du, N.
Adversarial preference optimization. _arXiv preprint_
_arXiv:2311.08045, 2023._
Chiang, W.-L., Li, Z., Lin, Z., Sheng, Y., Wu, Z., Zhang,
H., Zheng, L., Zhuang, S., Zhuang, Y., Gonzalez, J. E.,
Stoica, I., and Xing, E. P. Vicuna: An open-source
chatbot impressing gpt-4 with 90%* chatgpt quality,
March 2023. [URL https://lmsys.org/blog/](https://lmsys.org/blog/2023-03-30-vicuna/)
[2023-03-30-vicuna/.](https://lmsys.org/blog/2023-03-30-vicuna/)
Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H.,
Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano,
R., Hesse, C., and Schulman, J. Training verifiers to solve
math word problems. arXiv preprint arXiv:2110.14168,
2021.
De Haan, P., Jayaraman, D., and Levine, S. Causal confusion in imitation learning. Advances in neural information
_processing systems, 32, 2019._
440
441
442
443
444
445
446
447
448
449
450
451
452
453
454
455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
486
487
488
489
490
491
492
493
494
Dohmatob, E., Feng, Y., and Kempe, J. Model collapse
demystified: The case of regression, 2024.
Dong, G., Yuan, H., Lu, K., Li, C., Xue, M., Liu, D., Wang,
W., Yuan, Z., Zhou, C., and Zhou, J. How abilities in large
language models are affected by supervised fine-tuning
data composition. _arXiv preprint arXiv:2310.05492,_
2023.
Dziri, N., Lu, X., Sclar, M., Li, X. L., Jiang, L., Lin, B. Y.,
Welleck, S., West, P., Bhagavatula, C., Le Bras, R., et al.
Faith and fate: Limits of transformers on compositionality.
_Advances in Neural Information Processing Systems, 36,_
2024.
Ethayarajh, K., Xu, W., Muennighoff, N., Jurafsky, D., and
Kiela, D. Kto: Model alignment as prospect theoretic
optimization. arXiv preprint arXiv:2402.01306, 2024.
Gerstgrasser, M., Schaeffer, R., Dey, A., Rafailov, R.,
Sleight, H., Hughes, J., Korbak, T., Agrawal, R., Pai,
D., Gromov, A., Roberts, D. A., Yang, D., Donoho, D. L.,
and Koyejo, S. Is model collapse inevitable? breaking
the curse of recursion by accumulating real and synthetic
data, 2024.
Hartmann, V., Suri, A., Bindschaedler, V., Evans, D., Tople,
S., and West, R. Sok: Memorization in general-purpose
large language models, 2023.
Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart,
S., Tang, E., Song, D., and Steinhardt, J. Measuring mathematical problem solving with the math dataset. NeurIPS,
2021.
Hoffmann, J., Borgeaud, S., Mensch, A., Buchatskaya, E.,
Cai, T., Rutherford, E., Casas, D. d. L., Hendricks, L. A.,
Welbl, J., Clark, A., et al. Training compute-optimal
large language models. arXiv preprint arXiv:2203.15556,
2022.
Hong, J., Lee, N., and Thorne, J. Reference-free monolithic
preference optimization with odds ratio. arXiv preprint
_arXiv:2403.07691, 2024._
Hosseini, A., Yuan, X., Malkin, N., Courville, A., Sordoni,
A., and Agarwal, R. V-star: Training verifiers for selftaught reasoners. arXiv preprint arXiv:2402.06457, 2024.
Hwang, H., Kim, D., Kim, S., Ye, S., and Seo, M. Selfexplore to avoid the pit: Improving the reasoning capabilities of language models with fine-grained rewards. arXiv
_preprint arXiv:2404.10346, 2024._
Kääriäinen, M. Lower bounds for reductions. In Atomic
_Learning Workshop, 2006._
-----
**Learning to Reason by Failing: Offline RL on Sub-optimal Rollouts Scales Synthetic Data by 8x**
495
496
497
498
499
500
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
530
531
532
533
534
535
536
537
538
539
540
541
542
543
544
545
546
547
548
549
Kakade, S. and Langford, J. Approximately optimal approximate reinforcement learning. In International Conference
_on Machine Learning (ICML), volume 2, 2002._
Kang, K., Wallace, E., Tomlin, C., Kumar, A., and Levine,
S. Unfamiliar finetuning examples control how language
models hallucinate, 2024.
Kumar, A., Hong, J., Singh, A., and Levine, S. When
Should We Prefer Offline Reinforcement Learning over
Behavioral Cloning? ICLR, 2022.
Li, C., Wang, W., Hu, J., Wei, Y., Zheng, N., Hu, H.,
Zhang, Z., and Peng, H. Common 7b language models
already possess strong math capabilities. arXiv preprint
_arXiv:2403.04706, 2024._
Lightman, H., Kosaraju, V., Burda, Y., Edwards, H., Baker,
B., Lee, T., Leike, J., Schulman, J., Sutskever, I., and
Cobbe, K. Let’s verify step by step, 2023.
Liu, H., Zaharia, M., and Abbeel, P. Exploration with
principles for diverse ai supervision. _arXiv preprint_
_arXiv:2310.08899, 2023._
Liu, R., Wei, J., Liu, F., Si, C., Zhang, Y., Rao, J., Zheng, S.,
Peng, D., Yang, D., Zhou, D., and Dai, A. M. Best practices and lessons learned on synthetic data for language
models, 2024.
Luo, H., Sun, Q., Xu, C., Zhao, P., Lou, J., Tao, C., Geng, X.,
Lin, Q., Chen, S., and Zhang, D. Wizardmath: Empowering mathematical reasoning for large language models
via reinforced evol-instruct, 2023.
McCoy, R. T., Yao, S., Friedman, D., Hardy, M., and Griffiths, T. L. Embers of autoregression: Understanding
large language models through the problem they are
trained to solve. arXiv preprint arXiv:2309.13638, 2023.
Momennejad, I., Hasanbeig, H., Vieira Frujeri, F., Sharma,
H., Jojic, N., Palangi, H., Ness, R., and Larson, J. Evaluating cognitive maps and planning in large language
models with cogeval. Advances in Neural Information
_Processing Systems, 36, 2024._
Munos, R., Valko, M., Calandriello, D., Azar, M. G., Rowland, M., Guo, Z. D., Tang, Y., Geist, M., Mesnard, T.,
Michi, A., et al. Nash learning from human feedback.
_arXiv preprint arXiv:2312.00886, 2023._
Pal, A., Karkhanis, D., Dooley, S., Roberts, M., Naidu, S.,
and White, C. Smaug: Fixing failure modes of preference optimisation with dpo-positive. arXiv preprint
_arXiv:2402.13228, 2024._
Pang, R. Y., Yuan, W., Cho, K., He, H., Sukhbaatar, S., and
Weston, J. Iterative reasoning preference optimization.
_arXiv preprint arXiv:2404.19733, 2024._
Peng, X. B., Kumar, A., Zhang, G., and Levine, S.
Advantage-weighted regression: Simple and scalable
off-policy reinforcement learning. _arXiv preprint_
_arXiv:1910.00177, 2019._
Rafailov, R., Sharma, A., Mitchell, E., Ermon, S., Manning,
C. D., and Finn, C. Direct preference optimization: Your
language model is secretly a reward model. arXiv preprint
_arXiv:2305.18290, 2023._
Rafailov, R., Hejna, J., Park, R., and Finn, C. From r to q[∗]:
Your language model is secretly a q-function, 2024.
Reid, M., Savinov, N., Teplyashin, D., Lepikhin, D., Lillicrap, T., Alayrac, J.-b., Soricut, R., Lazaridou, A., Firat,
O., Schrittwieser, J., et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context.
_arXiv preprint arXiv:2403.05530, 2024._
Ross, S. and Bagnell, D. Efficient reductions for imitation learning. In International Conference on Artificial
_Intelligence and Statistics (AISTATS), pp. 661–668, 2010._
Saeidi, A., Verma, S., and Baral, C. Insights into alignment: Evaluating dpo and its variants across multiple
tasks. arXiv preprint arXiv:2404.14723, 2024.
Seddik, M. E. A., Chen, S.-W., Hayou, S., Youssef, P., and
Debbah, M. How bad is training on synthetic data? a
statistical analysis of language model collapse, 2024.
Sharma, A., Keh, S., Mitchell, E., Finn, C., Arora, K., and
Kollar, T. A critical evaluation of ai feedback for aligning
large language models, 2024.
Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot,
N., and Anderson, R. The curse of recursion: Training
on generated data makes models forget. arXiv preprint
_arXiv:2305.17493, 2023._
Singh, A., Co-Reyes, J. D., Agarwal, R., Anand, A., Patil,
P., Garcia, X., Liu, P. J., Harrison, J., Lee, J., Xu, K.,
Parisi, A., Kumar, A., Alemi, A., Rizkowsky, A., Nova,
A., Adlam, B., Bohnet, B., Elsayed, G., Sedghi, H., Mordatch, I., Simpson, I., Gur, I., Snoek, J., Pennington, J.,
Hron, J., Kenealy, K., Swersky, K., Mahajan, K., Culp,
L., Xiao, L., Bileschi, M. L., Constant, N., Novak, R.,
Liu, R., Warkentin, T., Qian, Y., Bansal, Y., Dyer, E.,
Neyshabur, B., Sohl-Dickstein, J., and Fiedel, N. Beyond
human data: Scaling self-training for problem-solving
with language models, 2024.
Sutton, R. S. and Barto, A. G. Reinforcement learning: An
_introduction. The MIT Press, second edition, 2018._
Swamy, G., Dann, C., Kidambi, R., Wu, Z. S., and Agarwal,
A. A minimaximalist approach to reinforcement learning
from human feedback. arXiv preprint arXiv:2401.04056,
2024.
10
-----
**Learning to Reason by Failing: Offline RL on Sub-optimal Rollouts Scales Synthetic Data by 8x**
550
551
552
553
554
555
556
557
558
559
560
561
562
563
564
565
566
567
568
569
570
571
572
573
574
575
576
577
578
579
580
581
582
583
584
585
586
587
588
589
590
591
592
593
594
595
596
597
598
599
600
601
602
603
604
Tajwar, F., Singh, A., Sharma, A., Rafailov, R., Schneider,
J., Xie, T., Ermon, S., Finn, C., and Kumar, A. Preference
fine-tuning of llms should leverage suboptimal, on-policy
data, 2024.
Tirumala, K., Markosyan, A., Zettlemoyer, L., and Aghajanyan, A. Memorization without overfitting: Analyzing
the training dynamics of large language models. Ad_vances in Neural Information Processing Systems, 35:_
38274–38290, 2022.
Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi,
A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P.,
Bhosale, S., et al. Llama 2: Open foundation and finetuned chat models. arXiv preprint arXiv:2307.09288,
2023.
Villalobos, P., Sevilla, J., Heim, L., Besiroglu, T., Hobbhahn,
M., and Ho, A. Will we run out of data? an analysis of
the limits of scaling datasets in machine learning. arXiv
_preprint arXiv:2211.04325, 2022._
Wang, P., Li, L., Shao, Z., Xu, R. X., Dai, D., Li, Y., Chen,
D., Wu, Y., and Sui, Z. Math-shepherd: Verify and
reinforce llms step-by-step without human annotations,
2024.
Wang, Y., Liu, Q., and Jin, C. Is rlhf more difficult than
standard rl? arXiv preprint arXiv:2306.14111, 2023.
Williams, R. J. and Zipser, D. A learning algorithm for continually running fully recurrent neural networks. Neural
_computation, 1(2):270–280, 1989._
Wu, T., Zhu, B., Zhang, R., Wen, Z., Ramchandran, K., and
Jiao, J. Pairwise proximal policy optimization: Harnessing relative feedback for llm alignment. arXiv preprint
_arXiv:2310.00212, 2023._
Wyllie, S., Shumailov, I., and Papernot, N. Fairness feedback loops: Training on synthetic data amplifies bias,
2024.
Xu, H., Sharaf, A., Chen, Y., Tan, W., Shen, L., Van Durme,
B., Murray, K., and Kim, Y. J. Contrastive preference
optimization: Pushing the boundaries of llm performance
in machine translation. arXiv preprint arXiv:2401.08417,
2024.
Yu, F., Gao, A., and Wang, B. Outcome-supervised verifiers
for planning in mathematical reasoning. arXiv preprint
_arXiv:2311.09724, 2023._
Yu, L., Jiang, W., Shi, H., Yu, J., Liu, Z., Zhang, Y., Kwok,
J. T., Li, Z., Weller, A., and Liu, W. Metamath: Bootstrap your own mathematical questions for large language
models, 2024.
Yuan, L., Cui, G., Wang, H., Ding, N., Wang, X., Deng, J.,
Shan, B., Chen, H., Xie, R., Lin, Y., et al. Advancing
llm reasoning generalists with preference trees. arXiv
_preprint arXiv:2404.02078, 2024a._
Yuan, W., Pang, R. Y., Cho, K., Sukhbaatar, S., Xu, J.,
and Weston, J. Self-rewarding language models. arXiv
_preprint arXiv:2401.10020, 2024b._
Yuan, Z., Yuan, H., Li, C., Dong, G., Tan, C., and Zhou,
C. Scaling relationship on learning mathematical reasoning with large language models. _arXiv preprint_
_arXiv:2308.01825, 2023._
Zelikman, E., Wu, Y., Mu, J., and Goodman, N. Star: Bootstrapping reasoning with reasoning. Advances in Neural
_Information Processing Systems, 35:15476–15488, 2022._
Zhang, B., Liu, Z., Cherry, C., and Firat, O. When scaling meets llm finetuning: The effect of data, model and
finetuning method, 2024a.
Zhang, R., Lin, L., Bai, Y., and Mei, S. Negative preference
optimization: From catastrophic collapse to effective unlearning. arXiv preprint arXiv:2404.05868, 2024b.
Zhao, Y., Khalman, M., Joshi, R., Narayan, S., Saleh, M.,
and Liu, P. J. Calibrating sequence likelihood improves
conditional language generation. In The Eleventh Inter_national Conference on Learning Representations, 2022._
11
-----
**Learning to Reason by Failing: Offline RL on Sub-optimal Rollouts Scales Synthetic Data by 8x**
# Appendices
**A. Related Work**
A standard procedure to finetune a pretrained LLM is teacher-forcing on expert data, i.e., maximizing the likelihood of the
next token given all previous tokens (Williams & Zipser, 1989; Brown et al., 2020). First, we discuss some failure modes of
this procedure for math reasoning that positive or negative synthetic data can address.
**Failure modes for supervised finetuning (SFT). First, since SFT induces an open-loop (Wu et al., 2023) next-token**
prediction loss, prediction errors on even a single token can snowball during inference, leading to poor performance on the
prompts appearing in the dataitself (Kääriäinen, 2006; Ross & Bagnell, 2010). Second, even when an LLM has perfectly
cloned the SFT data, it is prone to memorize “hard to learn” tokens (Tirumala et al., 2022), especially in planning and
lookahead tasks (McCoy et al., 2023; Momennejad et al., 2024), which is critical for math reasoning. This leads to poor
generalization (Bachmann & Nagarajan, 2024; Dziri et al., 2024) and hallucination on new novel, test-tim prompts (Kang
et al., 2024). In this work, we study how synthetic data methods can address these failures via: (i) maximizing likelihood on
positive data generated from both the SFT policy and a stronger teacher that enjoys improved coverage over new states,
and (ii) preference optimization using the negative data generated from the SFT policy. Positive synthetic data. Learning
theory dictates that the SFT policy trained on more SFT data (e.g., 1.5M for DeepSeek-Math (Bi et al., 2024)) would have
improved math reasoning capbabilities. Thus, a common goal for generating synthetic data as close as possible to the
SFT data (Li et al., 2024; Liu et al., 2023; 2024). That said, generating high quality math data can be challenging, since
verification can often be hard. When synthetic data is verified by larger models (Sharma et al., 2024; Wang et al., 2024),
recent works (Luo et al., 2023; Yu et al., 2024) observe scaling similar to finetuning LLMs on expert data (Zhang et al.,
2024a; Yuan et al., 2023), while another work (Dong et al., 2023) notes the compositional gains from SFT data for code
generation. Common sources of “good” synthetic data include responses from stronger teachers (Li et al., 2024; Lightman
et al., 2023), or data generated by the SFT policy itself, in the framework of reinforced self-training (ReST) and STaR
(Zelikman et al., 2022; Singh et al., 2024; Chen et al., 2024; Yuan et al., 2023). In our work, we study and compare the
performance scaling with positive synthetic data from bigger models like GPT-4 and Gemini 1.5 Pro with self-generated
positive data. We connect our findings to evidence showing “ease of learning” generalizable features on self-generated
completions (Kang et al., 2024) which often prevents undesirable memorization (Tirumala et al., 2022). Finally, our work
also sheds light on several concerns about training on synthetic positive data amplifying biases (Seddik et al., 2024; Wyllie
et al., 2024), and leading to model collapse (Dohmatob et al., 2024; Gerstgrasser et al., 2024), especially due to overfitting
on“spurious” intermediate steps. We conceptually explain this phenomenon and also discuss how negative model-generated
responses can help identify and unlearn those spurious steps.
**Benefits and nuances of negative synthetic data. While most works on synthetic data for math reasoning (Yu et al., 2024;**
Li et al., 2024; Liu et al., 2024; Yuan et al., 2023) focus on training on positive (correct) answers, our work also studies
complementary gains from negative (incorrect) completions generated by the SFT policy (Hwang et al., 2024; Pal et al.,
2024; Yuan et al., 2024b; Pang et al., 2024). To leverage sub-optimal negative data, we adopt the generic framework of
offline preference optimization (Rafailov et al., 2023; Ethayarajh et al., 2024; Zhao et al., 2022), where a preference pair
is constructed using correct and incorrect responses for the same problem (Pal et al., 2024). Despite numerous studies on
preference data composition (Chen et al., 2024; Cheng et al., 2023; Tajwar et al., 2024; Chiang et al., 2023; Wang et al.,
2023; Munos et al., 2023; Swamy et al., 2024), it remains unclear what is the best approach to pose a reasoning problem as
a preference optimization problem. Randomly pairing correct and incorrect completions in a preference pair can lead to
poor performance (Pang et al., 2024; Hong et al., 2024; Xu et al., 2024; Pal et al., 2024) due to objective mismatch (Tajwar
et al., 2024; Zhang et al., 2024b) and requires auxilliary losses to perform well. Another option is to utilize negative data
for training verifiers (Hosseini et al., 2024; Yu et al., 2023) but this line of work still only trains the policy using positive
data. We introduce a conceptual model of negative data, where we understand how certain choices of negative data can
assign per-step credits, which we then use to establish the equivalence of preference optimiztion to to advantage weighted
RL. Self-explore method in Hwang et al. (2024) can be viewed as an special instance of our general framework. Another
work exploiting per-step credits is Wang et al. (2024): through tree-based sampling they identify and use the reasoning
subsequence that led to the most incorrect answers under the SFT policy for training a reward model. While this is indeed
related, our conceptual model and analysis also aims to understand why assigning per-step credits can generalize better by
unlearning spurious correlations, e.g., when the credits are given by the Q-function of the “best-of-K” SFT policy.
12
605
606
607
608
609
610
611
612
613
614
615
616
617
618
619
620
621
622
623
624
625
626
627
628
629
630
631
632
633
634
635
636
637
638
639
640
641
642
643
644
645
646
647
648
649
650
651
652
653
654
655
656
657
658
659
-----
**Learning to Reason by Failing: Offline RL on Sub-optimal Rollouts Scales Synthetic Data by 8x**
**B. Limitations and Broader Impact**
While our work provides some results and conceptual models to understand the role of synthetic data for reasoning, there
are still many open questions that need to be answered to fully understand its utility. While synthetic data from LLMs like
Gemini and GPT-4 holds great potential, for more complex reasoning problems (more complicated than the datasets evaluated
in our work), synthetic data generated from more capable models can contain errors and generating negative/positive data
by referencing synthetic data answers can reinforce unwanted spurious correlations highlighted in our work. This means
that other novel recipes for generating synthetic problems may be utilized in the future, and our analysis might need to be
re-done with those novel recipes. That said, we believe that our insights about algorithmic behavior with synthetic data are
still quite general and should transfer to these novel settings as well. Ultimately, we would want that training on synthetic
data improves transfer and generalization abilities of the model in general reasoning scenarios, and to this end, an evaluation
of transfer capabilities is an important avenue that future work should focus on.
**Broader impact. Our work focuses purely on understanding the role of synthetic data in improving reasoning capabilities of**
LLMs. While excessive use of synthetic data can have unintended side effects upon deployment (e.g., fitting onto spurious
correlations as we illustrate in Section 4) and advanced reasoning capabilities may have the potential to affect economy,
human life, and society in both good and bad ways, we believe that these societal impacts are not unique or special to our
work when compared to other works studying similar problems. In fact, with capabilities in foundation models improving
day-by-day, future research and policy decisions would only benefit from more conceptual models to understand how
algorithms operate and how data affects performance, which our work attempts to study.
**C. Proof of Theorem 5.1**
We first restate the theorem statement and then provide a proof for this below. Our main goal in this theorem is to show that
training with per-step DPO is equivalent to running advantage-weighted RL shown in the theoretical result.
**Theorem C.1 (Equivalence of advantage-weighted RL and DPO with per-step pairs). The optimal policy from Equation 1**
_with Dπ[±]sft_ _[given by][ (][x][,][ [][y]1∶i[,][ +][y]i+1[]][,][ [][y]1∶i[,][ −][y]i+1[])][ where the positive and negative traces share prefix][ y]1∶i_ [∼] _[π]sft[, and]_
−yi+1 ∼ _πsft_ ⋅ **_x, y1∶i_** _, +yi+1 ∼_ _σ_ _Aπ˜[(][x][,][ y]1∶i[;][ ⋅][)][ −]_ _[A]π˜[(][x][,][ y]1∶i[;][ −][y]i+1[))][, is identical to the optima of the advantage-]_
_weighted RL objective:_
( ∣ ) (
_L_
maxπ Ex∼psyn **_x_** _,y∼πsft_ ⋅ **_x_** ∑ log π **_yi_** **_x, y0∶i−1_** ⋅ exp _Aπ˜[(][x][,][ y]0∶i−1[,][ y]i[)/][β][)]][ .]_ (5)
_i=1_
( ) ( ∣ )
[ ( ∣ ) (
_Proof. To prove this statement, we make the following observation: DPO (Rafailov et al., 2023) is equivalent to optimizing_
a KL-divergence penalized expected reward objective in an induced Bradly-Terry model of preferences defined by the
reward function. That is, for any reward function r **_x, y_** over contexts x ∼ _µ and responses y, the optimal solution to the_
following RL objective:
( )
maxπ Ex∼µ,y∼π ⋅ **_x_** _r_ **_x, y_** − _βDKL_ _π_ ⋅ **_x_** _πsft_ ⋅ **_x_** _,_ (6)
( ∣ )
is given by the following advantage-weighted optimal policy, [ ( )] π[∗] ⋅ ⋅ : ( ( ∣ )∣∣ ( ∣ ))
∀x, y, π[∗] **_y_** **_x_** ∝ _πsft_ **_y_** (x∣ ⋅) exp _β_ _,_ (7)
and one can learn this optimal policy by running DPO on preference tuples( ∣ ) ( ∣ ) ( _[r][(][x][,][ y]x[)], y)_ 1, y2 sampled by the Bradly-Terry
model (Bradley & Terry, 1952) induced by the reward function r:
( )
_p_ **_y1 ≽_** **_y2_** **_x_** = exp _r_ **_x,exp y1_** _r +x exp, y1_ _r_ **_x, y2_** (8)
( ( ))
Given this background information, we know that the optimal advantage-weighted RL policy optimizing Equation ( ∣ ) 5 is given
( ( )) ( ( )) _[.]_
by:
660
661
662
663
664
665
666
667
668
669
670
671
672
673
674
675
676
677
678
679
680
681
682
683
684
685
686
687
688
689
690
691
692
693
694
695
696
697
698
699
700
701
702
703
704
705
706
707
708
709
710
711
712
713
714
_π[(][x][,][ y]0∶i−1[,][ y]i[)]_
∀x, y0∶i, π **_yi_** **_x, y0∶i−1_** ∝ _πsft_ **_yi_** **_x, y0∶i−1_** ⋅ exp _β_
( ∣ ) ( ∣ ) ( _[A][˜]_
13
_._ (9)
)
-----
**Learning to Reason by Failing: Offline RL on Sub-optimal Rollouts Scales Synthetic Data by 8x**
Combining Equation 9 with the equivalence between Equation 7 and the Bradly-Terry model (Equation 8), we obtain
that, if preference pairs **_x,_** **_y1∶i, +yi+1_** _,_ **_y1∶i, −yi+1_** were sampled from the SFT policy: +yi+1 ∼ _πsft_ ⋅ **_x, y0∶i_** and
−yi+1 ∼ _πsft_ ⋅ **_x, y0∶i_**, and labeled according to Equation 8 applied on advantage estimates, then we obtain the desired
equivalence result. ( [ ] [ ]) ( ∣ )
( ∣ )
**D. Theory: Why Does Negative Data Improve Generalization?**
We saw in Section 5.3 that collecting negative data from an appropriate SFT policy πsft and an appropriate K, and training
on this data improves generalization performance of the resulting model. In this section, building on the equivalence to
advantage-weighted RL (Theorem 5.1), we attempt to formalize this observation into a performance guarantee. In particular,
we show below that training on negative data implies that we are able to improve over the SFT policy, especially via the
detection of critical steps, that attain high advantages, Aπ˜[(][x][,][ y]0∶i−1[,][ y]i[)][, that are otherwise not prioritized by training on]
positive data alone. Our theoretical result extends guarantees from the RL literature (Kumar et al., 2022) comparing RL with
imitation learning to show that indeed the use of RL (and hence negative data) improves over imitation alone.
**Notation and setup. Define the policy obtained after advantage-weighted RL training as πneg. Concretely, πneg** **_y_** **_x_** is
given as:
( ∣ )
1 _Aˆπ˜[(][x][,][ y]0∶j[,][ y]j+1[)]_
∀x, y0∶j+1, πneg **_yj+1_** **_x, y0∶j_** = _πsft_ **_yj+1_** **_x, y0∶j_** ⋅ exp _,_ (10)
Ẑ **_x, y0∶j_** _β_
( ∣ ) ( ∣ ) ( )
where the normalization factoris given by Z **_x, y0(∶j_** for each of the per-step policy distributions. This normalization factor)
is a critical factor that will drive the core of the theoretical result. We also note that the normalization factor in Equation 10
is derived from empirical advantage estimates and not from the expected estimates for the advantage value. Following( )
Agarwal et al. (2019); Kumar et al. (2022), we operate in a tabular setting with a discrete (but combinatorially-large and
variable-length) action space of responses, and our proof follows Theorem 4.4 in Kumar et al. (2022).
**Theorem D.1 (Utility of negative data over positive data.). Let πneg denote the policy obtained after advantage-weighted RL**
_(Equation 10) under an empirical distribution ˆµ over prompts x. Then the difference between the expected reward (i.e., task_
_success rate), J_ ⋅ _, attained by πneg and πsft is lower-bounded as:_
_L_
( )
_J_ _πneg_ − _J_ _πsft_ ≳ _β ⋅_ Exi∼µ,ˆ **_yi,0∶L∼πneg_** ⋅ **_xi_** log Z **_xi, yi,0∶j_**
∑
_j=1_
( ∣ ) _c0_
( ) ( ) − _(overestimation in[[]_ _A[ˆ]π˜[(][x][,][ y](_ 0∶i−1[,][ y]i[))])][ +] _,_
syn
_D_
√
_where Z_ ♣, ◦ _denotes the sum over exponentiated differences of the advantage and log likelihood values under∣_ ∣ _πsft for all_
_possible candidate steps given a problem ♣_ _and a partial solution ◦. That is,_
( )
_π[(][♣][,][ ◦][;][ ♠][)]_
Z ♣, ◦ ∶= exp + log πsft ♠ ♣, ◦ _,_
∑ _β_
♠∈ _step candidates_
( ) ( _[A][˜]_ ( ∣ ))
_c0 is a constant depending upon the Rademacher complexity of the space of policies πneg close to the SFT policy under the_
_KL-divergence,_ syn _denotes the size of synthetic training prompts._
_D_
∣ ∣
_Proof. To begin the proof, we recall that we are operating in a discrete action space of steps yi, although this space is_
exponentially large. Since we operate in discrete action spaces, we invoke Lemma 5 from Agarwal et al. (2019) for analyzing
softmax policy gradient methods (this Lemma was extended by Lemma B.11 from Kumar et al. (2022) for comparing BC vs
offline RL). Denote by _J_ _π_, the reward attained by policy π in expectation over the empirical distribution _µ:_
[̂] ̂
_Ĵ_ _πneg(_ −) _J[̂]_ _πsft_ ∶= Ex∼µ̂ _V_ _[π][neg]_ **_x_** − Ex∼µ̂ _V_ _[π][sft]_ **_x_** ≥ _βEx∼µ̂_ Z **_x_** _._ (11)
We utilize the performance difference lemma (( ) ( ) Kakade & Langford[[][̂] ( )], 2002[[][̂] ) on the MDP induced by the set of initial problems( )] [[][log][ ̂] ( )]
14
715
716
717
718
719
720
721
722
723
724
725
726
727
728
729
730
731
732
733
734
735
736
737
738
739
740
741
742
743
744
745
746
747
748
749
750
751
752
753
754
755
756
757
758
759
760
761
762
763
764
765
766
767
768
769
-----
**Learning to Reason by Failing: Offline RL on Sub-optimal Rollouts Scales Synthetic Data by 8x**
in the training distribution _µ, and the model induced deterministic dynamics distribution:_
̂
_L_
_Ĵ_ _πneg_ − _J[̂]_ _πsft_ = ∑ Ex∼µ,̂ **_y0∶j−1∼πneg_** ⋅ **_x_** ∑ _πneg_ **_yj_** **_x, y0∶j−1_** _A[ˆ]π˜[(][x][,][ y]0∶i−1[,][ y]i[)]_
_j=1_ ⎡ **_yj_** ⎤
( ) = _j∑L=(1_ Ex)∼µ,̂ **_y0∶j−1∼πneg_** ⋅ **_x_** ⎡∑yj _π(neg∣_ )y⎢⎢⎢⎢⎢⎢⎢⎣j **_x, y0∶(j−1_** ∣ log _πneg_ )yπj sftx, yyj0∶jx−,1 y ⋅0∶jZ[̂]−1x⎥⎥⎥⎥⎥⎥⎥⎦, y0∶j ⎤
= β ⋅ ∑L Ex∼µ,̂ **_y0∶j−1∼(π∣neg)_** ⋅⎢⎢⎢⎢⎢⎢⎢⎣x KL( ∣neg[(][⋅][∣][x][,][ y]0)∶j−1[)][, π]sft([(][⋅][∣]∣[x][,][ y]( 0∶j∣−1[))]) [ +][ log]( )[ ̂]Z **_x,) y⎥⎥⎥⎥⎥⎥⎥⎦0∶j_**
_j=1_
_L_ ( ∣ )
[[][D] [(][π] ( )]
≥ _β ⋅_ ∑ Ex∼µ,̂ **_y0∶j−1∼πneg_** ⋅ **_x_** Z **_x, y0∶j_** _._
_j=1_
( ∣ _L)_
[[][log][ ̂] ( )]
= β ⋅ Ex∼µ,ˆ **_yi,0∶L∼πneg_** ⋅ **_x_** log Z **_x, y0∶j_** _._
∑
_j=1_
( ∣ )
[[] ( )]
770
771
772
773
774
775
776
777
778
779
780
781
782
783
784
785
786
787
788
789
790
791
792
793
794
795
796
797
798
799
800
801
802
803
804
805
806
807
808
809
810
811
812
813
814
815
816
817
818
819
820
821
822
823
824
Now, we can prove the desired result by accounting for the gap in the success rate between the actual distribution over
**_x ∼_** _µ and the empirical distribution induced by problems in the dataset_ _µ:_
̂
_J_ _πneg_ − _J_ _πsft_ ≥ _J_ _πneg_ − _J[̂]_ _πneg_ + _J_ _πneg_ − _J[̂]_ _πsft_ − _J_ _πsft_ − _J[̂]_ _πsft_
[̂]
(a) (b) (c)
( ) ( ) ( ) ( ) ( _L_ ) ( ) ( ) ( )
ÍÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÑÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÏ ÍÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÑÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÏ ÍÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÑÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÏc0
≥ _β ⋅_ Ex∼µ,ˆ **_yi,0∶L∼πneg_** ⋅ **_x_** log Z **_x, y0∶j_** −
∑ [̂]
( ∣ ) _j=1_ _Dsyn_
[[] _L_ ( )] √ _c0_
≥ _β ⋅_ Ex∼µ,ˆ **_yi,0∶L∼πneg_** ⋅ **_x_** log Z **_x, y0∶j_** − ∣ ∣ + ∆,
∑
( ∣ ) _j=1_ _Dsyn_
[[] ( )] √
∣ ∣
where c0 is a constant that depends on the Rademacher complexity of the function class of policies πneg (coming from a
uniform bound that we invokve to bound term (a), since πneg depends on the dataset samples), and this term arises since the
empirical distribution over prompts is not the same as the true population. This concentration term decays as the size of the
synthetic data (number of problems) are scaled up. The term ∆ denotes the overestimation error between the estimated
advantages _Aπ˜[(][x][,][ y]0∶i−1[,][ y]i[)][ and the true advantages][ A]π˜[(][x][,][ y]0∶i−1[,][ y]i[)][, in expectation under the distribution of the learned]_
[̂]
policy. The estimation error ∆ depends on πsft and the value of K used if the rollout policy ˜π corresponds to the BoK(πsft)
policy. This proves the theorem.
**Interpretation & perspectives. Also note that the improvement in performance between πneg and πsft depends on the**
advantage estimate: if the advantage estimates are high, then this term is large, meaning that the more the fraction of
high-advantage critical states, the higher the improvement. In addition, the bound also says that if the over-estimation ∆ in
the advantage estimate is large, the performance improvement is small. This is perhaps expected: consider the scenario when
the BoK(πsft) policy is used to collect data, for a large K. In this scenario, the divergence between the empirical advantage
estimate _Aπ˜_ [and the expected estimate][ A]π˜ [is likely large. In the worst case, the estimate][ ̂]Aπ˜ [can arbitrarily overestimate]
[̂]
_Aπ˜[, as it would take on a high value even if there is just][ one][ sequence among the][ K][ rollouts that successfully solves the]_
problem. For example, a spurious step may be labeled incorrectly as critical in this case and training on negative data may
not improve (consistent with running per-step DPO on an over-trained SFT checkpoint in Figure 7). On the other hand, if
advantages are more accurate, training on negative data should improve performance.
15
-----
**Learning to Reason by Failing: Offline RL on Sub-optimal Rollouts Scales Synthetic Data by 8x**
**E. Synthetic Data Generation**
Prompt used for GSM8K/MATH synthetic data (Li et al., 2024)
Please act as a professional math teacher. Your goal is to create high quality math problems to help students learn
math. You will be given a math question. Please create a new question based on the Given Question and following
instructions.
To achieve the goal, you have one job.
# Please generate a similar but new question according to the Given Question.
You have four principles to do this. # Ensure the new question only asks for one thing, be reasonable, be based on
the Given Question, and can be answered with only a number(float or integer). For example, DO NOT ask, ‘what is
the amount of A, B and C?’.
# Ensure the new question is in line with common sense of life. For example, the amount someone has or pays must
be a positive number, and the number of people must be an integer.
# Ensure your student can answer the new question without the given question. If you want to use some numbers,
conditions or background in the given question, please restate them to ensure no information is omitted in your new
question.
# You only need to create the new question. Please DO NOT solve it.
Given Question: <insert question from original dataset here>
Your output should be in the following format:
CREATED QUESTION: <your created question>
For GSM8K, we replace the phrase “Your goal is to create high quality math problems to help students learn math.” with
“Your goal is to create high quality math word problems to help students learn math.”, as we found this to produce problems
that were closer to GSM-style problems.
To generate the synthetic data, we used OpenAI credits worth approximately 3000 US dollars.
**F. Negative Data Identifies Spurious Steps with Advantage Estimates**
In this section, we present several examples from the GSM8K and MATH synthetic datasets, and higlight the steps that were
deemed to be low or high advantage based on the estimated advantage value. The darker the intensity of the color, the lower
the advantage of that step.
Spurious step identified by negative data
**Question: Ken created a care package to send to his brother, who was away at boarding school. Ken placed a box on**
a scale, and then he poured into the box enough jelly beans to bring the weight to 2 pounds. Then, he added enough
brownies to cause the weight to triple. Next, he added another 2 pounds of jelly beans. And finally, he added enough
gummy worms to double the weight once again. What was the final weight of the box of goodies, in pounds?
**Positive response with identified spurious step:**
When the weight was originally on the scale, the box weighed 0 pounds. After Ken added the jelly
beans to the box, its weight was 2 pounds - 0 pounds = «2-0=2»2 pounds. Then Ken added the
brownies to the box, which caused the box to weigh 2 pounds * 3 = «2*3=6»6 pounds. The next
step was to add 2 pounds of jelly beans, which caused the box to weigh 6 pounds + 2 pounds =
«6+2=8»8 pounds. First, we need to figure out what the weight of the box was when Ken put the
jelly beans in it. 2 pounds of jelly beans was enough to triple the weight, so the box was at 2 pounds
/ 3 = «2/3=.6666666666666667».6666666666666667 pounds. Finally, Ken added the gummy worms,
causing the box to weigh 8 pounds * 2 = «8*2=16»16 pounds. The answer is 16
16
825
826
827
828
829
830
831
832
833
834
835
836
837
838
839
840
841
842
843
844
845
846
847
848
849
850
851
852
853
854
855
856
857
858
859
860
861
862
863
864
865
866
867
868
869
870
871
872
873
874
875
876
877
878
879
-----
**Learning to Reason by Failing: Offline RL on Sub-optimal Rollouts Scales Synthetic Data by 8x**
Spurious step identified by negative data
**Question: Betty is saving money for a new wallet which costs $100. Betty has only half of the money she needs.**
Her parents decided to give her $15 for that purpose, and her grandparents twice as much as her parents. How much
more money does Betty need to buy the wallet?
**Positive response with identified spurious step:**
Betty’s parents gave her 15, so her grandparents gave her 2 * 15 = «15*2=30»30. In total, Betty received
30 + 15 = «30+15=45»45. The total amount of money Betty needs is 100 * 2 = «100*2=200»200. Betty
needs 100 / 2 = «100/2=50»50 in total. Betty still needs to save 50 - 45 = «50-45=5»5. The answer is 5
Spurious step identified by negative data
**Question: To make pizza, together with other ingredients, Kimber needs 10 cups of water, 16 cups of flour, and 1/2**
times as many teaspoons of salt as the number of cups of flour. Calculate the combined total number of cups of
water, flour, and teaspoons of salt that she needs to make the pizza.
**Positive response with identified spurious step:**
Since 1 cup is equal to 16 teaspoons, then Kimber needs 128/16 = «128/16=8»8 cups of salt. So in total,
she needs 10+16+8 = «10+16+8=34»34 cups of water, flour, and salt. In cups, the amount of salt is
128/6 = «128/6=21.33»21.33 cups The answer is 34
Spurious step identified by negative data
**Question: Eight friends compete in a pie eating contest. The chart displays the number of pies each person**
consumed. Emily ate the most pies, and Mike ate the least. How many more pies did Emily eat compared to Mike?
[asy]\ndefaultpen(linewidth(1pt)+fontsize(10pt));
\npair[] yaxis = new pair[10];\nfor( int i = 0 ; i < 10 ; ++i )
{\n\nyaxis[i] = (0,i);\n\ndraw(yaxis[i]--yaxis[i]+(17,0));\n}
\ndraw((0,0)--(0,9));\ndraw((17,9)--(17,0));
\n\nfill((1,0)--(1,5)--(2,5)--(2,0)--cycle,grey);
\nfill((3,0)--(3,8)--(4,8)--(4,0)--cycle,grey);
\nfill((5,0)--(5,3)--(6,3)--(6,0)--cycle,grey);
\n\nlabel("0",yaxis[0],W);
\nlabel("1",yaxis[1],W);\nlabel("2",yaxis[2],W);
\nlabel("3",yaxis[3],W);\nlabel("4",yaxis[4],W);
\nlabel("5",yaxis[5],W);\nlabel("6",yaxis[6],W);
\nlabel("7",yaxis[7],W);\nlabel("8",yaxis[8],W);
\nlabel("Friends/Participants",(8.5,0),S);
\nlabel("Results of a Pie Eating Contest",(8.5,9),N);
\nlabel(rotate(90)*"$\\#$ of Pies Eaten",(-1,4.5),W);[/asy]
**Positive response with identified spurious step:**
To solve this problem, we need to identify the number of pies Emily and Mike consumed, and then find
the difference between their consumption. From the chart provided: - Emily: Consumed 8 pies. - Mike:
Consumed 1 pie. To determine how many more pies Emily ate compared to Mike, we first identify how
many pies each person ate. - Emily ate 5 pies. - Mike ate 1 pie. Now, calculating the difference between
Emily’s and Mike’s consumption: [ Difference = Emily’s consumption - Mike’s consumption = 8 - 1 =
7. ] Therefore, Emily ate 7 more pies than Mike. The answer is 7
17
880
881
882
883
884
885
886
887
888
889
890
891
892
893
894
895
896
897
898
899
900
901
902
903
904
905
906
907
908
909
910
911
912
913
914
915
916
917
918
919
920
921
922
923
924
925
926
927
928
929
930
931
932
933
934
-----
**Learning to Reason by Failing: Offline RL on Sub-optimal Rollouts Scales Synthetic Data by 8x**
Spurious step identified by negative data
**Question: When the base-16 number 1AB316 is written in base 2, how many base-2 digits (bits) does it have?**
**Positive response with identified spurious step:**
To solve this problem, we first need to convert the base-16 number 1AB3_{16} to base-2. Step 1:
Convert each digit of the number from base-16 to base-2. The number 1AB3_{16} consists of the
digits 1, A, B, and 3. Step 2: Convert each digit individually to base-2 representation. - 1_{16} =
0001_2 - A_{16} = 1010_2 - B_{16} = 1011_2 - 3_{16} = 0011_2 Step 3: Combine the base-2
representations of each digit. Concatenating the individual base-2 representations, we get: 1AB3_{16}
= 0001101010110011_2 Step 4: Count the total number of digits in the base-2 representation. The
base-2 representation of 1AB3_{16} is: 0001101010110011_2 The base-16 number (1AB3_{16} ) can
be written in base 10 (decimal) as: [ 1 times 163 + A times 16[ˆ] 2 + B times 16 + 3 ] where (A ) and (B )[ˆ]
are decimal digits. This sequence of digits has a total of 16 digits. The answer is boxed{16}
**G. Details on Star Graph Problem**
The star graph problem we study is borrowed from Bachmann & Nagarajan (2024), where given a graph in the shape of a
star and a query (center/end node pair), the model is asked to output the full path between the start/end nodes.
**Goal. Bachmann & Nagarajan (2024) show that πsft minimizes SFT loss by memorizing the “hard-to-predict” node adjacent**
to the center, and copying the rest of the path from the input graph. This task is highlights the failure of SFT at planning
problems (akin to math reasoning). Thus, we use this as a case study to understand:
- when accurate advantage estimation is possible with few negative samples from the πsft model.
- whether there are generalization benefits of advantage-weighted RL when advantage estimates are accurate
- when advantage-weighted RL can unlearn the memorized feature that causes πsft to fail.
**SFT dataset. The data we use for supervised fine-tuning consists of 30000 of random star graphs (see examples below)**
where each graph has a centre node with out degree 2. Hence, there are two paths that originate from the centre node. Each
path from the center to one of the end nodes is of length 4. Each node on the path is denoted with a randomly sampled
number from 0 to 20. For example, in the sample “8,3|3,10|14,13|10,1|17,14|8,17/8,13=8,17,14,13”. The graph is given by
the adjacency list “8,3|3,10|14,13|10,1|17,14|8,17/8,13”, the query is denoted by “8,13”, and the correct path is given by
“8,17,14,13”.
**Test-time inference from the model.** At test time, the input to the LLM is only thw graph and the query:
“8,3|3,10|14,13|10,1|17,14|8,17/8,13=” and the model is expected to generate the full path from start node 8 to end node 13.
When evaluating the test performance of an LLM, we calculate 0 1 accuracy averaged over 1000 test star graphs (that are
different from train star graphs). The accuracy on a sample is 1 when the LLM accurately predicts all nodes in the graph.
/
**Failure models of the SFT model, πsft. A model with perfect accuracy (0 error) would be the one that has accurately**
learned the correct feature of backtracking the path from the end node to the start node, and then producing it in reverse. This
computation is precisely what makes the adjacent token “hard-to-fit”. On the other hand, if the LLM minimizes next-token
prediction loss during SFT by instead memorizing the hard-to-fit adjacent token by overfitting on the random input graph
instance, at test time the accuracy would be zero. An intermediate solution that SFT model instead learns is to output a path
that is adjacent to the node. At training time, it only needs to memorize which of the two possible path to predict. Note that
even this solution does not require the model to backtrack, and is thus easier to quickly learn with a few samples. This would
quickly minimize the loss on all but the adjacent node, which the model memorizes as training progresses. On the test set,
this model would then have 50% test accuracy. Note, that as we increase the size of the graph or the node vocabulary size it
becomes easier for the model to overfit on the hard to predict adjacent token given random combinations of the input graph.
Thus, we choose the vocabulary size to be 20, which is higher than what is needed to represent any input graph of this size.
Below we provide examples from degree two, path length 4, node 20 problem, where
18
935
936
937
938
939
940
941
942
943
944
945
946
947
948
949
950
951
952
953
954
955
956
957
958
959
960
961
962
963
964
965
966
967
968
969
970
971
972
973
974
975
976
977
978
979
980
981
982
983
984
985
986
987
988
989
-----
**Learning to Reason by Failing: Offline RL on Sub-optimal Rollouts Scales Synthetic Data by 8x**
Examples of 20 node path length 4 star graph problem
Example 1: 8,3|3,10|14,13|10,1|17,14|8,17/8,13=8,17,14,13
Example 2: 14,16|8,10|9,5|3,14|9,3|5,8/9,16=9,3,14,16
Example 3: 14,1|10,4|9,7|10,17|4,9|17,14/10,7=10,4,9,7
Example 4: 19,8|7,18|14,15|15,7|14,19|8,10/14,10=14,19,8,10
Example 5: 1,6|10,1|6,12|10,17|17,18|18,5/10,12=10,1,6,12
**SFT Training details. We finetune a pretrained GPT-2 model with 125 million parameters. We train with a batch size of**
128, Adam without any weight decay, and a constant learning rate of 1e − 5 .
**Advantage estimation and per-step DPO training equivalent to advantage-weighted RL. For a sample from πsft, we**
estimate the advantage of each step by sampling 5 rollouts conditioned on the subsequence uptill the step. We then pair
subsequences with shared prefix, y1∶i differing in the last step +yi+1 vs. −yi+1, where the former is the one with the highest
estimated advantage and the latter is the one with the lowest estimated advantage. Note that this preference pair construction,
closely approximates the preference pair distribution in Theorem 5.1, which would imply that the DPO objective being
optimized closely approximates advantage weighred RL in Equation 4.
Given these pairs for a batch of star graph problems in SFT data, we update the model with a single gradient step on the
DPO objective in Equation 1. In the next iteration, advantage is estimated and pairs are constructed on a fresh batch of star
graphs. We set β = 0.1 in the DPO objective and use the same batch size (one preference pair per star graph). Starting from
an SFT checkpoint we train in the above manner for at least 200 iterations. The SFT model is trained for over 600 iterations.
**H. Implementation Details**
**Datasets and pretrained LLMs. We run all our experiments on GSM8K and MATH datasets. Each dataset has about 7.5k**
training examples. The GSM8K has about 1.3k and MATH has 5k test examples. We conduct experiments with DeepSeekMath-7B pretrained LLM and LLama2-7B, both of which have pretrained weights publicly available on Huggingface.
**Details for SFT/RFT training. For our positive data scaling results, the SFT model is trained for 5 epochs with a learning**
rate of 1e − 5, and a batch size of 64 for all sizes of syn. We use a holdout validation set to choose the checkpoint and
_D_
report the performance of the best performing checkpoint across the five epochs. To generate RFT data we only train the
SFT model for 2 epochs (under-trained checkpoint). For each question we sample M = 100 times, with a temperature of 0.7
and following Yuan et al. (2024a) we retain at most 4 most diverse (based on edit distance) and correct completions. This is
for our results in Figure 2(a,b). For Figure 2(c), we sample more than 4 correct solutions and keep sampling responses until
we have a dataset of size 10k, 20k, . . ., 50k, when questions are given by the Dsyn of size 8k and 16k. For our experiment
on the RFT dataset with purposely inserted spurious steps, as we describe in the main paper, we obtain spurious steps by
computing which intermediate steps in a negative response lead to most incorrect solutions and randomly insert this in
between reasoning steps for a positive solution in RFT data. See examples below.
**Details for per-step DPO training. Training data for DPO is generated in the procedure outlined in (Hwang et al., 2024).**
The value of K is 5 for the BoK _πsft_ policy used to estimate the advantage, and a single rollout from the best-of-K policy
is used to estimate the advantage of each step in “rejected” response, where the “chosen” and “rejected” pairs for each
question are generated from the RFT data, using the method in (( ) Pal et al., 2024). We train per-step DPO with β = 0.3 for all
_Dsyn from GSM8K, and β = 0.1 for all Dsyn from MATH. We train for three epochs, with a learning rate of 1e −_ 6 and a
batch size of 64 prompts. We use the rmsprop optimizer and a cosine learning rate decay with a warmpup ratio of 0.1.
Overall, to replicate our experiments, we need approximately 1000 hours of compute on H100 clusters with 8 GPUs.
19
990
991
992
993
994
995
996
997
998
999
1000
1001
1002
1003
1004
1005
1006
1007
1008
1009
1010
1011
1012
1013
1014
1015
1016
1017
1018
1019
1020
1021
1022
1023
1024
1025
1026
1027
1028
1029
1030
1031
1032
1033
1034
1035
1036
1037
1038
1039
1040
1041
1042
1043
1044
-----
**Learning to Reason by Failing: Offline RL on Sub-optimal Rollouts Scales Synthetic Data by 8x**
RFT data with spurious steps highlighted
**Prompt:**
Karen’s work tote bag is twice the weight of her husband Kevin’s briefcase when the briefcase is empty. When
Kevin puts his laptop and work papers in his briefcase, it is twice the weight of Karen’s tote. Kevin’s work papers
are a sixth of the weight of the contents of his full briefcase. If Karen’s tote weighs 8 pounds, how many more
pounds does Kevin’s laptop weigh than Karen’s tote?
**Response:**
If Karen’s tote weighs 8 pounds, and it is twice the weight of Kevin’s empty briefcase, then Kevin’s empty briefcase
weighs 8/2 = «8/2=4»4 pounds. In words, “Karen’s tote weighs 8 pounds” is “T = 8”, so thus Keven is 2 pounds.
When Kevin puts his laptop and work papers in his briefcase, it is twice the weight of Karen’s tote at 8 pounds, so
the briefcase now weighs 8+4 = «8+4=12»12 pounds. Kevin’s work papers are a sixth of the weight of the contents
of his full briefcase, which weighs 12 pounds, so the work papers weigh 12/6 = «12/6=2»2 pounds. If Kevin’s laptop
and work papers weigh 12 pounds in total, and the work papers weigh 2 pounds, then the laptop weighs 12-2 =
«12-2=10»10 pounds. Thus, Kevin’s laptop is 10-8 = «10-8=2»2 pounds heavier than Karen’s tote. The answer is 2.
RFT data with spurious steps highlighted
**Prompt:**
Carl has a jar full of marbles. He takes out 12 marbles to play a game with, but he accidentally drops them and 1/2
the marbles get lost. So Carl takes out 10 more marbles. While he is playing his game his mother comes home
with another bag of marbles for him, which has 25 marbles in it. If Carl can’t find his lost marbles, then how many
marbles is he going to put in the jar after he plays his game, from both his original marbles and the new ones?
**Response:**
First find out how many marbles Carl loses: 12 marbles * 1/2 = «12*1/2=6»6 marbles Then add that number to the
number of marbles Carl takes out of the jar to play his game to find out how many total marbles he takes out: 6
marbles + 10 marbles = «6+10=16»16 marble. Now add that number to the number of marbles in the jar to find out
how many marbles are left: 16 marbles + 12 marbles = «16+12=28»28 marbles. Then add in the marbles Carl gets
from his mother to find out how many marbles he has in total: 16 marbles + 25 marbles = «16+25=41»41 marbles.
The answer is 41.
20
1045
1046
1047
1048
1049
1050
1051
1052
1053
1054
1055
1056
1057
1058
1059
1060
1061
1062
1063
1064
1065
1066
1067
1068
1069
1070
1071
1072
1073
1074
1075
1076
1077
1078
1079
1080
1081
1082
1083
1084
1085
1086
1087
1088
1089
1090
1091
1092
1093
1094
1095
1096
1097
1098
1099
-----
| [
"Saurabh, Garg",
"Amrith, Setlur",
"Xinyang, Geng",
"Naman, Garg",
"Virginia, Smith",
"Aviral, Kumar"
] | 2024-06-13T00:00:00 | null | false | 0 | 0 | null | https://openreview.net/forum?id=v2PV1yCFJk | null | null |
Learning to Solve Geometry Problems via Simulating Human Dual-Reasoning Process | Geometry Problem Solving (GPS), which is a classic and challenging math problem, has attracted much attention in recent years. It requires a solver to comprehensively understand both text and diagram, master essential geometry knowledge, and appropriately apply it in reasoning. However, existing works follow a paradigm of neural machine translation and only focus on enhancing the capability of encoders, which neglects the essential characteristics of human geometry reasoning. In this paper, inspired by dual-process theory, we propose a Dual-Reasoning Geometry Solver (DualGeoSolver) to simulate the dual-reasoning process of humans for GPS. Specifically, we construct two systems in DualGeoSolver, namely Knowledge System and Inference System. Knowledge System controls an implicit reasoning process, which is responsible for providing diagram information and geometry knowledge according to a step-wise reasoning goal generated by Inference System. Inference System conducts an explicit reasoning process, which specifies the goal in each reasoning step and applies the knowledge to generate program tokens for resolving it. The two systems carry out the above process iteratively, which behaves more in line with human cognition. We conduct extensive experiments on two benchmark datasets, GeoQA and GeoQA+. The results demonstrate the superiority of DualGeoSolver in both solving accuracy and robustness from explicitly modeling human reasoning process and knowledge application. | A Dual-Reasoning Geometry Solver (DualGeoSolver) to simulate the dual-reasoning process of humans for GPS and demonstrates the superiority of DualGeoSolver in both solving accuracy and robustness from explicitly modeling human reasoning process and knowledge application. | [
"Tong, Xiao",
"Jiayu, Liu",
"Zhenya, Huang",
"Enhong, Chen",
"Jinze, Wu",
"Jing, Sha",
"Shijin, Wang"
] | 2024-05-09T00:00:00 | IJCAI 2024 Natural Language Processing | false | 0 | 0 | null | http://arxiv.org/abs/2405.06232 | https://arxiv.org/abs/2405.06232 | https://www.semanticscholar.org/paper/e67eb673b3580db48691cc28bb0061a31163be21 |
|
Leveraging Large Language Models for Autoformalizing Theorems: A Case Study | N/A | null | [
"Michail, Karatarakis"
] | 2024-09-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
|
Logic Contrastive Reasoning with Lightweight Large Language Model for Math Word Problems | This study focuses on improving the performance of lightweight Large Language Models (LLMs) in mathematical reasoning tasks. We introduce a novel method for measuring mathematical logic similarity and design an automatic screening mechanism to construct a set of reference problems that integrate both semantic and logical similarity. By employing carefully crafted positive and negative example prompts, we guide the model towards adopting sound reasoning logic. To the best of our knowledge, this is the first attempt to utilize retrieval-enhanced generation for mathematical problem-solving. Experimental results demonstrate that our method achieves a 15.8% improvement over the Chain of Thought approach on the SVAMP dataset and a 21.5 % improvement on the GSM8K dataset. Further application of this method to a large-scale model with 175 billion parameters yields performance comparable to the best results on both aforementioned datasets. Finally, we conduct an analysis of errors during the reasoning process, providing valuable insights and directions for future research on reasoning tasks using large language models. | A novel method for measuring mathematical logic similarity and an automatic screening mechanism to construct a set of reference problems that integrate both semantic and logical similarity are introduced, in the first attempt to utilize retrieval-enhanced generation for mathematical problem-solving. | [
"Ding, Kai",
"Ma, Zhenguo",
"Yan, Xiaoran"
] | 2024-08-29T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.00131 | https://arxiv.org/abs/2409.00131 | https://www.semanticscholar.org/paper/4ab4502a78c615539922a526f7116a8654849a65 |
|
LogicPro: Improving Complex Logical Reasoning via Program-Guided Learning | In this paper, we present a novel approach, called LogicPro, to enhance Large Language Models (LLMs) complex Logical reasoning through Program Examples. We do this effectively by simply utilizing widely available algorithmic problems and their code solutions. First, we constructed diverse test samples input based on algorithmic questions and code solutions. Then, we designed different complex reasoning questions based on algorithmic problems and test samples. Finally, combining the intermediate variable outputs of the code solutions and the complex reasoning questions, we derived the reasoning process and the final answer. With this approach, we can construct a dataset that is sufficiently difficult (all models are ineffective), diverse (synthesized from 2,360 different algorithmic questions), and scalable (building different test samples and collecting more algorithmic questions). In addition, we obtain a high-quality reasoning process guided by the values of intermediate variables. As a result, our approach achieves significant improvements in multiple models for the BBH$^{27}$, GSM8K, HellSwag, Logicqa, Reclor, and RTE datasets, outperforming a wide range of existing reasoning datasets. | This approach achieves significant improvements in multiple models for the BBH$^{27}$, GSM8K, HellSwag, Logicqa, Reclor, and RTE datasets, outperforming a wide range of existing reasoning datasets. | [
"Liangcai, Gao",
"Shuai, Peng",
"Yuchen, Yan",
"Yang, Liu",
"Jin, Jiang",
"Yonggang, Jin",
"Zhi, Tang",
"Mengdi, Zhang",
"Xunliang, Cai",
"Yixin, Cao"
] | 2024-09-19T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.12929 | https://arxiv.org/abs/2409.12929 | https://www.semanticscholar.org/paper/343ce0e86d3047520e7d3945c5dfa4e67a42a728 |
|
Look Before You Leap: Problem Elaboration Prompting Improves Mathematical Reasoning in Large Language Models | Large language models (LLMs) still grapple with complex tasks like mathematical reasoning. Despite significant efforts invested in improving prefix prompts or reasoning process, the crucial role of problem context might have been neglected. Accurate recognition of inputs is fundamental for solving mathematical tasks, as ill-formed problems could potentially mislead LLM's reasoning. In this study, we propose a new approach named Problem Elaboration Prompting (PEP) to enhance the mathematical capacities of LLMs. Specifically, PEP decomposes and elucidates the problem context before reasoning, therefore enhancing the context modeling and parsing efficiency. Experiments across datasets and models demonstrate promising performances: (1) PEP demonstrates an overall enhancement in various mathematical tasks. For instance, with the GPT-3.5 model, PEP exhibits improvements of 9.93% and 8.80% on GSM8k through greedy decoding and self-consistency, respectively. (2) PEP can be easily implemented and integrated with other prompting methods. (3) PEP shows particular strength in handling distraction problems. | A new approach named Problem Elaboration Prompting (PEP) is proposed to enhance the mathematical capacities of LLMs by decomposing and elucidates the problem context before reasoning, therefore enhancing the context modeling and parsing efficiency. | ## Look Before You Leap: Problem Elaboration Prompting Im- proves Mathematical Reasoning in Large Language Models
**Haoran Liao, Jidong Tian, Shaohua Hu, Hao He, Yaohui Jin**
MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University
_{liaohaoran,frank92,hushaohua,hehao,jinyh}@sjtu.edu.cn_
**Abstract**
Large language models (LLMs) still grapple with complex tasks like mathematical reasoning. Despite significant efforts invested in improving prefix
prompts or reasoning process, the crucial role of problem context might
have been neglected. Accurate recognition of inputs is fundamental for
solving mathematical tasks, as ill-formed problems could potentially mislead LLM’s reasoning. In this study, we propose a new approach named
Problem Elaboration Prompting (PEP) to enhance the mathematical capacities of LLMs. Specifically, PEP decomposes and elucidates the problem
context before reasoning, therefore enhancing the context modeling and
parsing efficiency. Experiments across datasets and models demonstrate
promising performances: (1) PEP demonstrates an overall enhancement
in various mathematical tasks. For instance, with the GPT-3.5 model, PEP
exhibits improvements of 9.93% and 8.80% on GSM8k through greedy decoding and self-consistency, respectively. (2) PEP can be easily implemented
and integrated with other prompting methods. (3) PEP shows particular
strength in handling distraction problems (example in Fig. 1).
Figure 1: We proposed Problem Elaboration Prompting (PEP) for enhancing problem context, thereby improving subsequent reasoning. As depicted in the example, PEP decouples
spurious relationships and refines statements, preventing downstream distraction errors.
**1** **Introduction**
Recent large language models (LLMs), such as the GPT-3 model family with 175 billion
parameters (Brown et al., 2020, inter alia), have demonstrated remarkable performance
across various NLP tasks. Chain-of-thought (CoT) prompting (Wei et al., 2022b; Kojima
et al., 2022) successfully elicits reasoning behavior and emergent abilities (Wei et al., 2022a)
by explicitly guiding the model to generate intermediate rationales step by step, further
promoting the development of artificial general intelligence (AGI). Despite the success,
performing multi-hop and compositional reasoning for complex tasks like mathematical
solving can still face challenges (Hendrycks et al., 2021; Lewkowycz et al., 2022), even when
the required knowledge is limited to the scope of primary school (Cobbe et al., 2021).
-----
Figure 2: An overview of the proposed PEP and other problem-related methods. Rather
than creating sub-questions or plans to guide subsequent reasoning, PEP focuses on clarifying and enriching the problem context, i.e., PEP can be integrated with these methods.
One area of research aims to improve the quality of reasoning outputs through diverse
rationale decoding strategies (Wang et al., 2022c; Suzgun et al., 2022) and iteration-based
answer refinement (Saunders et al., 2022; Kim et al., 2023; Zheng et al., 2023). Considering the
sensitivity of the model to inputs (Lu et al., 2022b; Wang et al., 2022a; Shi et al., 2023), another
area of research focuses on augmenting the robustness of prefix-prompts (Fu et al., 2022;
Wang et al., 2022b; Shao et al., 2023). However, the role of problem has been overlooked.
Most studies assume that the provided information is concise and relevant, while in realworld situations, the inputs could be ill-formed. For instance, LMs can be easily distracted
by irrelevant context (Shi et al., 2023) and often struggle to capture intricate implied implicatures (Ruis et al., 2022; Chan et al., 2023). Besides, even when the problem is well-formed, it
may still be complex and unsuitable for the LM’s comprehension. For instance, although
the language model can be knowledgeable, it may encounter difficulties in identifying what
knowledge are required to answer the question (Bian et al., 2023) or correctly integrating
intermediate rationales to generate the overall solution (Press et al., 2022).
Several works have noticed the crucial role of problem, suggesting pre-processing before
reasoning. Least-to-Most (L2M) (Zhou et al., 2022a) proposes to decompose the final asked
question into simpler sub-questions, Plan-and-Solve (PaS) (Wang et al., 2023b) requires
a preliminary global plan, while Self-ask (Press et al., 2022) suggests a dynamic asking
strategy before each step of generating rationales. However, these methods primarily focus
on decomposing questions or creating guidance, without understanding or discernment of
the problem itself. Therefore, they could also potentially be misled by ill-formed problems.
In this study, we introduce a new method named Problem Elaboration Prompting (PEP),
which involves decomposing and elucidating the problem context prior to reasoning. Our
method aims to clarify the problem description and enhance the context modeling, rather
than creating specific guidance. We illustrated an overview of PEP in Fig 2, along with a
comparison with other problem-related prompting methods. Specifically, PEP adopts a
human-like thought process that emphasizes the importance of thoroughly comprehending
the problem’s conditions and requirements before engaging in reasoning: look before you leap.
-----
PEP is also inspired by previous researches on semantic parsing, which suggest parsing the
given problem into specific representations, such as Python code or condensed symbols (Gao
et al., 2022; Chen et al., 2022; Hu et al., 2023), to facilitate subsequent reasoning.
We conduct evaluations of the proposed approach on four mathematical datasets and additionally investigate its performance in addressing the distraction problems (Shi et al., 2023).
Both zero-shot and few-shot PEPs demonstrate an overall enhancement across datasets,
models and answer types, employing greedy decoding and self-consistency settings. For
instance, with GPT-3.5, we observed an improvement of 9.93% and 8.80% on GSM8k (Cobbe
et al., 2021) with greedy decoding and self-consistency, separately. When using ChatGPT,
PEP achieves SOTAs with 98.23% on SingleEq (Koncel-Kedziorski et al., 2015), and 88.7%
on SVAMP (Patel et al., 2021).
In summary, our contributions are listed below:
1. We propose a new method, Problem Elaboration Prompting (PEP), for enhancing
LLM’s reasoning. It is easy to implement and integrate other prompting methods.
2. We evaluate PEP through extensive experiments using both open-source LMs and
GPT model family, exploring the role of problem context for reasoning.
3. We demonstrate PEP is effective in mitigating the distraction problem, indicating
the promising prospect in dealing with ill-formed problems of other types.
**2** **Related Works**
2.1 Emergent Abilities and Prompting
With the large-scale unsupervised pre-training technique, LLMs (Brown et al., 2020; Chowdhery et al., 2022; Touvron et al., 2023a, inter alia) can perform new tasks conditioning on
few in-context inputs via mimicking the format of demonstrations (Webson & Pavlick, 2022;
Min et al., 2022; Pan et al., 2023). Instruction tuning further improves the generalization
of prompts via training a language model to follow the general instructions rather than
specific examples (Wei et al., 2021; Ouyang et al., 2022a; Chung et al., 2022).
Instead of directly generating the final answer, chain-of-thought prompting (CoT) methods (Wei et al., 2022b; Creswell et al., 2022; Jung et al., 2022) suggest guiding LLMs to
reach the final answer through a step-by-step process, eliciting the emergent reasoning
ability (Wei et al., 2022a; Chan et al., 2022; Prystawski & Goodman, 2023). Zelikman et al.
(2022) also proposed to use prompts to augment the trainset with rationales. However, such
methods typically necessitate an LLM larger than 100B (Wei et al., 2022b), making it difficult
to directly apply on small models (Zhang et al., 2023).
2.2 Improving CoT Reasoning
Various techniques have been proposed to improve the standard CoT (Wang et al., 2022c;
Suzgun et al., 2022; Saunders et al., 2022; Gao et al., 2022; Kim et al., 2023; Yao et al., 2023),
most with a particular focus on controlling the decoding process and validating intermediate
answers. Significant efforts have also been made to improve the prompts selection and
ensemble (Fu et al., 2022; Lu et al., 2022a; Zhou et al., 2022b; Shao et al., 2023), to enhance the
generation robustness. Besides, increasing the diversity of prompts, questions, or rationales
shows notable advantages for the reasoning (Zhang et al., 2022; Wang et al., 2022b).
PEP shares similarity with the study conducted by Lampinen et al. (2022), which explores
the impact of explanations in context. However, it focus on the post-answer explanation,
while we focus on elaborating the problem. PEP can also be regarded as a type of LLM-based
agent system, sharing the concepts of analysis and decomposition (Shinn et al., 2023; Park
et al., 2023; Wang et al., 2023a). While they primarily focus on observing environment and
reflecting the feedback (Zheng et al., 2023; Qian et al., 2023; Wu et al., 2023; Miao et al., 2023),
PEP focuses on comprehending and parsing the context.
-----
**Q: Brian’s friend Bobby has 5 fewer than 3 times as many video games as Brian does. If**
Brian has 20 video games but lost 5 right before the comparison was made, how many does
Bobby have?
**Standard CoT:**
**A: Three times as many video games as Brian has is 3*20=60 video games. Five fewer than 3**
times as many video games as Brian has is 60-5=55 video games.
Final answer: 55. (Wrong: ignore the precondition placed at end.)
**PEP + CoT**
(+Elaboration):
Segment 1: Bobby has 5 fewer than 3 times as many video games as Brian does. Rephrased:
Bobby’s number of video games is 3 times Brian’s number minus 5. Equation: B = 3B - 5
Segment 2: Brian has 20 video games. Equation: B = 20
Segment 3: Brian lost 5 video games right before the comparison was made. Equation: B =
20 - 5
**A: Using segment 2’s equation, we know that Brian has 20 video games. Using segment**
3’s equation, we know that Brian lost 5 video games, leaving him with 15 video games. We
can substitute this value into segment 1’s equation and solve for Bobby’s number of video
games. B = 3(15) - 5, B = 40.
Final answer: 40. (Correct: process conditions in suitable order.)
Table 1: Recognizing the problem is imperative for reasoning. As illustrated, CoT wrongly
engages in hasty reasoning (red), neglecting the prerequisite conditions placed afterwards (cyan). Meanwhile, PEP utilizes conditions in the correct order (blue).
**3** **Problem Elaboration Prompting**
A beneficial cognitive paradigm of humanity is the ability of thoughtful contemplation,
as exemplified by a common figurative: look before you leap. In this work, we propose to
simulate such a thinking process by prompting the model to elaborate on the problem before
generating or reasoning. It is crucial for the LLM to ensure it has a thorough understanding
of the problem space, encompassing the given conditions and the asked question, avoiding
a hasty generation and exploration within the answer space (Table 1).
3.1 Method Formulation
Given the problem context denoted by P = (X, y0) ∈ _D, where X = {x1, x2, · · ·, xi}_
represent the statements and y0 represents the asked question, prompting methods attempt to leverage the strength of LLMs to solve the task by generating rationale yi =
_f_ _M(X, y0; pt,_ _P[ˆ]k_ _y<i) step by step until reaching the final answer yt. Specifically, pt denotes_
_|_
zero-shot instructions and _P[ˆ]k indicate k concatenated exemplars (X[ˆ]_ _k, ˆyk,0). Note that such_
prompting methods do not modify the LLM M.
In the PEP-aid language model, we suggest to pre-process the given problem P by decomposing and elucidating the context into smaller and more concise segments to enhance LLMs’
comprehension: P[′] = ({x1[′] [,][ x]2[′] [,][ · · ·][,][ x]m[′] _[}][,][ y]0[′]_ [) =][ f] _[M][(][X][,][ y][0][;][ p][e][)][, where][ m][ ≥]_ _[i][ and][ p][e][ is a spe-]_
cific instruction. Then, the LLM can continue its reasoning by yi = f _M(X, y0; pt,_ _P[ˆ]k|P[′], y<i)_
utill reaching yt. Thus, PEP can be easily combined with previous prompting methods.
3.2 Designing Principles
The design principle of problem elaboration in this study consists of two aspects: (i) de**composing: breaking down the original sentences into distinct segments to disentangle**
complex or intertwined concepts; (ii) elucidating: providing explanations or rephrasing the
segments in a manner that is more conducive to the model’s understanding.
-----
The concept of decomposition is widely spread in many previous works. Except for introduced
problem-related methods, recent community adopts a decomposition approach in different
fields to apply LMs to solve complex problems (Wei et al., 2022b; Khot et al., 2022; Liang
et al., 2023; Hong et al., 2023), bridging the compositionality gap of powerful LMs (Press
et al., 2022). In contrast, PEP takes a different approach by breaking down the entire problem
into simpler segments, rather than creating sub-questions or decomposing the reasoning
process. It focuses on organizing and clarifying the existing information from the problem.
Meanwhile, it could be beneficial to elucidate the segments following decomposition, as it
elicits the model to organize existing information in a comprehensive view of the problem
and recognize the underlying implicatures of the question. Moreover, it introduces diversity
into the context, which has been demonstrated to enhance reasoning (Zhang et al., 2022;
Wang et al., 2022b), thereby mitigating the risk of relying on specific words or descriptions
as shortcuts. A break-down analysis of our designed principles is presented in Sec. 4.4.
3.3 Prompts Generation
Since the elaboration can be diverse, We evaluate and select the instruction of zero-shot
PEP based on a subset of GSM8k-train of 200 points (see Sec A). The selected instruction is
“Decompose the given question into smaller segments, elucidating each segment as you rephrase it.”
We further adopt the exemplars from Zhou et al. (2022a) and adjusted them for different
methods for fairness. All instructions and exemplars can be found in the appendix B.
To investigate the behavior of PEP, we randomly selected 200 instances from the experiments
with a manual categorization. PEP primarily utilizes question-answer pairs, declarative
sentences, and interrogative sentences to review and examine the problem context. There
are around 10.5% of instances that can be mixed with sub-questions, planning instructions,
or intertwined rationales. We find it primarily occurs when the questions themselves contain
instructions or options.
**4** **Experiment**
4.1 Setup
**Datasets.** We evaluate PEP on four elementary datasets, with
a focus on mathematical reasoning: (1) SingleEq (KoncelKedziorski et al., 2015), (2) GSM8k (Cobbe et al., 2021), (3)
**SVAMP (Patel et al., 2021), (4) AQuA (Ling et al., 2017).**
Furthermore, we investigate the distraction problem using
GSMIC (Shi et al., 2023): we randomly sampled 500 examples
separately for 2-step problems and m-step problems, denoted
by (5) GSMIC-1k. The details are shown in Table 2.
Dataset Num Answer
SingleEq 508 free
GSM8k 1319 free
AQuA 254 options
SVAMP 1000 free
GSMIC-1k 1000 free
Table 2: Datasets Stats.
**Baselines & Prompts.** We evaluate PEP with two elementary answer types: (1) Chainof-Thoughts (CoT) (Kojima et al., 2022) for textual reasoning, and (2) Program-ofThoughts (PoT) (Chen et al., 2022) for code-based reasoning. Three problem-related methods
are compared to PEP: (3) Least-to-Most (L2M) (Zhou et al., 2022a), (4) Plan-and-Solve (PaS)
(Wang et al., 2023b) and (5) Self-ask (Press et al., 2022). To investigate the distraction
problem, we also adopt (6) IRR-INST. suggested by Shi et al. (2023). All instructions and
exemplars can be found in the appendix B.
**Language Models & Decoding** We conduct our main experiments on four open-source
LMs (1) two LLama2 models (Touvron et al., 2023b) and (2) two Mistral models (Jiang
et al., 2023a; 2024). We also employ two GPT models (3) “text-davinci-003” (davinci)
and (4) recent released “gpt-3.5-turbo-0125” (turbo) (Brown et al., 2020; Ouyang et al.,
2022b; OpenAI, 2023). We evaluate the performance with both greedy decoding and selfconsistency decoding (SC) (Wang et al., 2022c) to ensure the reproducibility of experiments.
-----
Dataset
PEP
Model AnsType Average
(ours) SingleEq GSM8k AQuA SVAMP
LLama2 COT % 73.82 29.72 21.26 48.1 (+0.33)
`7B-hf-chat` ! 71.87 27.32 23.62 51.4
LLama2 COT % 80.91 41.47 24.41 59.7 (+0.68)
`13B-hf-chat` ! 78.54 42.0 27.17 61.5
Mistral-7B % 79.33 44.49 27.35 66.5
COT (-1.04)
`Instruct-v0.2` ! 78.74 41.71 27.95 65.1
Mistral-8x7B % 90.16 68.61 45.67 77.9
COT (+0.21)
`Instruct-v0.1` ! 89.96 70.74 44.31 78.2
% 87.40 50.19 38.19 72.1
COT (+4.71)
! 90.35 60.12 40.55 75.7
% 95.67 63.84 29.13 78.3
POT (+1.27)
! 96.46 64.75 29.53 81.3
% 96.46 82.56 58.66 78.6
COT (+0.57)
! 97.05 82.03 56.3 83.2
% 96.46 77.48 47.64 81.4
POT (+1.33)
! 96.06 79.68 47.64 84.9
GPT-3.5
```
text-davinci
-003
```
ChatGPT
```
gpt-3.5 turbo-0125
```
Table 3: Accuracies (x100) of proposed PEP against zero-shot CoT and PoT.
Dataset
PEP
Model AnsType Average
(ours) SingleEq GSM8k AQuA SVAMP
GPT-3.5 % 87.99 52.31 39.76 71.7 62.94
`text-davinci` COT+SC ! 89.57 61.11 39.76 73.6 66.01
`-003` (+1.58) (+8.80) (+0.00) (1.90) (+3.07)
ChatGPT % 97.83 87.57 67.72 86.3 84.85
`gpt-3.5-` COT+SC ! 98.23 88.48 67.32 88.7 85.68
`turbo-0125` (+0.40) (+0.91) (-0.40) (+2.40) (+0.83)
Table 4: Solve rates (x100) of proposed PEP against self-consistency CoT. The temperature is
0.7 and each answer are voted from 20 samples.
4.2 Main Results
The main results are presented in Table 3 and Table 4, showing proposed PEP’s performances
on various LMs, answer types and decoding strategies. We only test PoT on two GPT models,
as its templates might not be designed for these smaller LMs. Considering the token cost,
we validate the self-consistency CoT on two GPT models. The comparison and integration
with other problem-related methods are shown in table 5, using greedy decoding and
```
turbo-0125, while verifying two different few-shots settings.
```
**PEP performs well in a variety of situations.** Overall, PEP outperforms the standard
CoT in most cases, with the exception of Mistral-7B, but demonstrating improvements
for Mistral-8x7B. The performance in PoT and self-consistency settings further validates
PEP’s effectiveness and adaptability. The enhancement in PoT could be attributed to PEP
simplifying parsing difficulties, thereby facilitating code generation (Jiang et al., 2023b). It is
also noteworthy that PEP performs remarkably well on davinci, achieving improvements
of 9.93% and 8.80% in greedy search and self-consistency, respectively.
**PEP can be effectively integrated with other prompting methods.** Unlike the mentioned
problem-related methods, PEP aims to enhance the original problem, making it compatible
-----
Dataset
PEP
Methods K-shots Average
(ours) SingleEq GSM8k AQuA SVAMP
% 97.64 80.59 54.33 81.3
CoT k=1 (+0.48)
! 97.24 79.38 54.72 84.4
% 96.85 **82.11** 57.09 83.4
L2M k=1 (+0.65)
! **98.03** 80.44 57.09 **84.5**
% 93.9 77.63 54.33 78.7
PaS* k=1 (-0.01)
! 94.09 77.63 55.51 77.3
% 89.57 61.11 40.94 74.2
Self-Ask* k=1 (+7.63)
! 95.08 59.29 **58.27** 83.7
% **98.03** 82.94 56.69 82.3
CoT k=4 (+0.99)
! **98.03** **83.93** 59.06 82.9
% 97.64 81.65 57.87 82.5
L2M k=4 (+1.57)
! 97.44 81.8 **61.02** 85.7
% 96.26 79.76 57.48 77.9
PaS* k=4 (+1.50)
! 96.85 81.58 57.48 81.5
% 94.49 78.7 53.54 80.0
Self-Ask* k=4 (+3.20)
! 95.47 81.43 55.91 **86.7**
Table 5: Comparison and integration of PEP with problem-related methods. We utilized
exemplars from Zhou et al. (2022a) and adjusted them for other methods* (details in Sec B).
The best and second-best results are highlighted in bold and underlined, respectively.
for integration. As shown in Table 5, incorporating PEP improves the performance of these
problem-related methods in both k=1 and k=4 few-shot settings, while the combination
with the standard CoT also ranks highly in performance. Besides, we observed Self-ask
underperforms when k=1. It’s likely because one example might fail to elicit the dynamic
QA process and LMs could abruptly terminate after generating follow-up questions.
4.3 Distraction Problem
One particular challenge of ill-formed problems is known as distraction problem (Shi et al.,
2023): irrelevant sentences can distract LMs to generate errors. These sentences can be
completely irrelevant or relevant to the problem but should have no impact on inference.
We tested PEP using a subset of GSMIC (Shi et al., 2023). Two metrics are utilized: (1) micro
_accuracy: averaged accuracy per example, and (2) macro accuracy: averaged accuracy per_
base problem. Norm is the accuracy normalized by scores on base problems, measuring how
a method is affected by the distractors.
**PEP effectively mitigates the distraction problems.** As shown in Table 6, PEP surpasses
CoT and L2M of both zero-shot and one-shot settings, in micro- and macro- metrics, indicating superior performance in addressing such ill-formed issues. Beyond overall accuracy,
PEP also exhibits enhanced robustness as evidenced by norm accuracy. From the macro
perspective, the improvements and stability of PEP are also remarkable.
**PEP performs well when prompted with prior knowledge.** When the model is consciously prompted to ignore irrelevant content for the given problem, referred to as IRRINST. (Shi et al., 2023), we observed significant improvements in CoT, L2M and PEP. Despite
this, PEP still outperforms the 0-CoT and 1-L2M by achieving larger improvements for
most cases. PEP particularly excels in 2-step problems and norm accuracies. However, its
performance on macro accuracies is inferior, potentially due to a conflict between Irr-Inst.
and the one-shot exemplar.
-----
PEP **Micro Accuracy** **Macro Accuracy**
Method
(ours)
2 st _≥2 st_ Overall Norm 2 st _≥2 st_ Overall Norm
% 75.4 78.0 76.7 89.08 31.67 40.0 35.0 48.61
0-CoT ! 84.2 81.8 83.0 92.32 56.67 30.0 46.0 61.33
(+8.8) (+3.8) (+6.3) (+3.2) (+25.0) (-10.0) (+11.0) (+12.7)
% 84.2 81.2 82.7 90.98 46.67 32.5 41.0 47.67
1-CoT ! 87.8 81.8 84.8 95.07 56.67 37.5 49.0 61.25
(+3.6) (+0.6) (+2.1) (+4.1) (+10.0) (+5.0) (+8.0) (+13.6)
% 84.6 80.8 82.7 90.98 51.67 27.5 42.0 51.22
1-L2M ! 83.2 84.0 83.6 93.62 50.0 32.5 43.0 58.11
(-1.4) (+3.2) (+0.9) (+2.6) (-1.7) (+5.0) (+1.0) (+6.9)
+ IRR-INST. (Shi et al., 2023)
% 80.6 85.8 83.2 97.65 58.33 55.0 57.0 72.15
0-CoT
! 88.2 87.2 87.7 98.76 61.67 50.0 57.0 76.0
(+7.6) (+1.4) (+4.5) (+1.1) (+3.3) (-5.0) (0.0) (+3.8)
% 87.4 89.4 88.4 94.55 68.33 62.5 66.0 83.54
1-CoT
! 90.6 83.6 87.1 96.89 71.67 57.5 66.0 78.57
(+3.2) (-5.8) (-1.3) (+2.3) (+3.3) (-5.0) (0.0) (-5.0)
% 85.8 85.8 85.8 96.3 61.67 50.0 57.0 74.03
1-L2M
! 88.0 85.0 86.5 96.86 61.67 45.0 55.0 69.62
(+2.2) (-0.8) (+0.7) (+0.6) (0.0) (-5.0) (-2.0) (-4.4)
Table 6: Micro and macro accuracies (×100) on GSMIC-1k. We follow the metrics and
instructions suggested in (Shi et al., 2023).
Figure 3: Breakdown accuracies w.r.t. irrelevant sentence factors (T: Topic, RO: Role Overlap,
NR: Num. Range). Lower accuracy suggests the model is more sensitive to that factor.
4.4 Ablation Study
**Breakdown analysis of distracting factors.** We evaluated the performance of CoT, L2M
and PEP under various distracting factors, as depicted in Fig 3. PEP significantly outperforms the basic baselines on almost all factors, both in micro and macro accuracies,
indicating its potential benefits for downstream reasoning by aiding in problem context
recognition and parsing. When prompted by the Irr-inst., PEP shows consistent improvements for most cases, except for 1-CoT. Overall, PEP demonstrates better improvements
in handling out-of-distribution distraction factors, specifically (1) off-topic sentences, (2)
non-overlapping role names, and (3) out-of-range numbers. The impact of these factors is
more pronounced on the macro- than the micro- metric.
-----
Example 1 (PEP misleads implicit meaning) Example 2 (PEP discards important information)
**Q: An interior design firm offers installation for**
$129.00. It includes hanging 4 mirrors, 2 shelves, 1
chandelier, and 10 pictures. They will install additional items for an extra $15.00 per item. Angela
has 6 mirrors and 2 chandeliers and 20 pictures that
she needs hung. How much will this cost her?
**Q: Mrs. Harrington bought 12 boxes of oranges**
at the market. She gave her mom and her sister
2 boxes of oranges each. Then she kept 1/4 of
the oranges and sold the rest. How many oranges
did Mrs. Harrington sell if each box contains 20
oranges?
**PEP + CoT:** **PEP + CoT:**
S1: What is the base installation fee for the inte- S1: How many boxes of oranges did Mrs. Harringrior design firm? $129.00. S2: What is included ton buy at the market? 12 boxes. S2: How many
in the base installation fee? 4 mirrors, 2 shelves, 1 boxes of oranges did Mrs. Harrington give to her
chandelier, and 10 pictures. S3: How much does mom and her sister? 2 boxes. S3: How many boxes
the interior design firm charge for each additional of oranges did Mrs. Harrington keep for herself?
item installed? Extra $15.00 per item installed. S4: Mrs. Harrington kept 1/4 of the oranges.S4: How
How many additional items does Angela need in- many boxes of oranges did Mrs. Harrington sell?
stalled? Angela needs 6 mirrors, 2 chandeliers, and The rest.
20 pictures installed, which is a total of 28 addi- Solution: 1. Calculate the total number of oranges
tional items. Mrs. Harrington bought: 12x20=240. 2. Calculate
Solution: The cost of the additional items is 28 the total number of oranges she gave to her mom
items x $15.00 per item = $420.00. Therefore, the to- and her sister: 2x2=4, 4x20=80. 3. Total number of
tal cost for Angela’s installation is $129.00 + $420.00 oranges kept for herself: 1/4x240=60. 4. Calculate
= $549.00. the total number of oranges sold: 240-80-60=100.
Table 8: Error Analysis via turbo. When pre-processing the problems, PEP may changes the
underline meaning and discarding information (red) of the original sentence (cyan).
**Break-down analysis of PEP components.** Method GSM8k SVAMP GSMIC Overall
We verify two components in PEP: (1) DEC:
COT 82.56 78.6 76.7 79.60
decomposing only, and (2) ELU: elucidating
PEP 82.03 83.2 83.0 82.67
only, and (3) ETD: elucidating first then de
DEC 82.79 83.6 79.0 82.50
composing. As shown in Table 7, both DEC
ELU 81.80 83.1 82.2 82.57
and ELU are required for PEP, with perfor
ETD 80.14 82.5 81.1 81.59
mance varying across datasets. On GSM8k
and SVAMP, DEC even outperforms PEP,
Table 7: Ablation of PEP components.
while ELU is more effective on GSMIC. Besides, ETD consistently performed worse, suggesting that the coordination and operating
order of components are also crucial for PEP.
4.5 Error Analysis
We present two error cases in Table 8. PEP may ignore the potential implicatures of original
sentence, resulting in ambiguity of rephrasing. It may also focus too much on the local
clause and neglect the nested logical structure and temporal relations for given statements.
Besides, PEP might break the continuous context, thus changing the implicit meanings. The
focus on localities might also constrain required associative thinking. We present these error
cases in Sec C in Appendix. In addition, except increasing the cost of context length, PEP
may also be inefficient for very long descriptions. For certain forms of data, such as short
but challenging questions, structured data in table, it could be difficult to elaborate.
**5** **Conclusion**
In this study, we proposed a novel method, Problem Elaboration Prompting (PEP), to improve the inference capabilities of LLMs. PEP offers several advantages: 1) PEP outperforms
baselines across mathematical datasets, decoding strategies, and answer types; 2) PEP does
not necessitate the complex creation of plans or sub-questions, but just echoes and enriches
the problem context in one pass. It is also compatible with most prompting methods that
enhance prefix-prompts or rationales; 3) PEP helps mitigate the distraction issue, indicating
its potential in tackling other types of ill-formed problems.
-----
**References**
Ning Bian, Xianpei Han, Le Sun, Hongyu Lin, Yaojie Lu, and Ben He. Chatgpt is a knowledgeable but inexperienced solver: An investigation of commonsense problem in large
language models. ArXiv, abs/2303.16421, 2023.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla
Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al.
Language models are few-shot learners. Advances in neural information processing systems,
33:1877–1901, 2020.
Chunkit Chan, Jiayang Cheng, Weiqi Wang, Yuxin Jiang, Tianqing Fang, Xin Liu, and
Yangqiu Song. Chatgpt evaluation on sentence level relations: A focus on temporal,
causal, and discourse relations. ArXiv, abs/2304.14827, 2023.
Stephanie Chan, Adam Santoro, Andrew Lampinen, Jane Wang, Aaditya Singh, Pierre
Richemond, James McClelland, and Felix Hill. Data distributional properties drive
emergent in-context learning in transformers. Advances in Neural Information Processing
_Systems, 35:18878–18891, 2022._
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. Program of thoughts
prompting: Disentangling computation from reasoning for numerical reasoning tasks.
_ArXiv, abs/2211.12588, 2022._ [URL https://api.semanticscholar.org/CorpusID:](https://api.semanticscholar.org/CorpusID:253801709)
```
253801709.
```
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra,
Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann,
Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker
Barnes, Yi Tay, Noam M. Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Benton C. Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy
Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev,
Henryk Michalewski, Xavier Garc´ıa, Vedant Misra, Kevin Robinson, Liam Fedus, Denny
Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov,
Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon
Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta,
Mark D´ıaz, Orhan Firat, Michele Catasta, Jason Wei, Kathleen S. Meier-Hellstern, Douglas
Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. Palm: Scaling language modeling with
pathways. ArXiv, abs/2204.02311, 2022.
Hyung Won Chung, Le Hou, S. Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li,
Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu,
Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Dasha Valter, Sharan
Narang, Gaurav Mishra, Adams Wei Yu, Vincent Zhao, Yanping Huang, Andrew M. Dai,
Hongkun Yu, Slav Petrov, Ed Huai hsin Chi, Jeff Dean, Jacob Devlin, Adam Roberts,
Denny Zhou, Quoc V. Le, and Jason Wei. Scaling instruction-finetuned language models.
_ArXiv, abs/2210.11416, 2022._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. ArXiv,
abs/2110.14168, 2021.
Antonia Creswell, Murray Shanahan, and Irina Higgins. Selection-inference: Exploiting
large language models for interpretable logical reasoning. ArXiv, abs/2205.09712, 2022.
Yao Fu, Hao-Chun Peng, Ashish Sabharwal, Peter Clark, and Tushar Khot. Complexitybased prompting for multi-step reasoning. ArXiv, abs/2210.00720, 2022.
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan,
and Graham Neubig. Pal: Program-aided language models. ArXiv, abs/2211.10435, 2022.
-----
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang,
Dawn Xiaodong Song, and Jacob Steinhardt. Measuring mathematical problem solving
with the math dataset. ArXiv, abs/2103.03874, 2021.
Sirui Hong, Xiawu Zheng, Jonathan P. Chen, Yuheng Cheng, Ceyao Zhang, Zili Wang, Steven
Ka Shing Yau, Zi Hen Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, and Chenglin
Wu. Metagpt: Meta programming for multi-agent collaborative framework. ArXiv,
[abs/2308.00352, 2023. URL https://api.semanticscholar.org/CorpusID:260351380.](https://api.semanticscholar.org/CorpusID:260351380)
Hanxu Hu, Hongyuan Lu, Huajian Zhang, Wai Lam, and Yue Zhang. Chain-of-symbol
prompting elicits planning in large langauge models. ArXiv, abs/2305.10276, 2023.
Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary,
Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Emma Bou Hanna, Florian
Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, L’elio Renard Lavaud,
Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang,
Szymon Antoniak, Teven Le Scao, Theophile Gervet, Thibaut Lavril, Thomas Wang,´
Timothee Lacroix, and William El Sayed. Mixtral of experts.´ _ArXiv, abs/2401.04088, 2024._
[URL https://api.semanticscholar.org/CorpusID:266844877.](https://api.semanticscholar.org/CorpusID:266844877)
Albert Qiaochu Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Florian Bressand, Gianna Lengyel, Guillaume
Lample, Lucile Saulnier, L’elio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock,
Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothee Lacroix, and William El Sayed.´
[Mistral 7b. ArXiv, abs/2310.06825, 2023a. URL https://api.semanticscholar.org/](https://api.semanticscholar.org/CorpusID:263830494)
```
CorpusID:263830494.
```
Xue Jiang, Yihong Dong, Lecheng Wang, Qiwei Shang, and Ge Li. Self-planning code
[generation with large language model. ArXiv, abs/2303.06689, 2023b. URL https:](https://api.semanticscholar.org/CorpusID:257495755)
```
//api.semanticscholar.org/CorpusID:257495755.
```
Jaehun Jung, Lianhui Qin, Sean Welleck, Faeze Brahman, Chandra Bhagavatula, Ronan Le
Bras, and Yejin Choi. Maieutic prompting: Logically consistent reasoning with recursive
explanations. In Conference on Empirical Methods in Natural Language Processing, 2022.
Tushar Khot, H. Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, and
Ashish Sabharwal. Decomposed prompting: A modular approach for solving complex
tasks. ArXiv, abs/2210.02406, 2022.
Geunwoo Kim, Pierre Baldi, and Stephen McAleer. Language models can solve computer
tasks. ArXiv, abs/2303.17491, 2023.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa.
Large language models are zero-shot reasoners. Advances in Neural Information Processing
_Systems, 2022._
Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. Parsing algebraic word problems into equations. Transactions of the Association
_for Computational Linguistics, 3:585–597, 2015._
Andrew Kyle Lampinen, Ishita Dasgupta, Stephanie C. Y. Chan, Kory Matthewson,
Michael Henry Tessler, Antonia Creswell, James L. McClelland, Jane X. Wang, and Felix
Hill. Can language models learn from explanations in context? In Conference on Empirical
_Methods in Natural Language Processing, 2022._
Aitor Lewkowycz, Anders Johan Andreassen, David Dohan, Ethan Dyer, Henryk
Michalewski, Vinay Venkatesh Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag,
Theo Gutman-Solo, et al. Solving quantitative reasoning problems with language models.
In Advances in Neural Information Processing Systems, 2022.
Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang,
Zhaopeng Tu, and Shuming Shi. Encouraging divergent thinking in large language
[models through multi-agent debate. ArXiv, abs/2305.19118, 2023. URL https://api.](https://api.semanticscholar.org/CorpusID:258967540)
```
semanticscholar.org/CorpusID:258967540.
```
-----
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. Program induction by rationale
generation: Learning to solve and explain algebraic word problems. In Annual Meeting of
_the Association for Computational Linguistics, 2017._
Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit,
Peter Clark, and A. Kalyan. Dynamic prompt learning via policy gradient for semistructured mathematical reasoning. ArXiv, abs/2209.14610, 2022a.
Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. Fantastically
ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
_(Volume 1: Long Papers), pp. 8086–8098, 2022b._
Ning Miao, Yee Whye Teh, and Tom Rainforth. Selfcheck: Using llms to zero-shot check
their own step-by-step reasoning. ArXiv, 2023.
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi,
and Luke Zettlemoyer. Rethinking the role of demonstrations: What makes in-context
learning work? In Conference on Empirical Methods in Natural Language Processing, 2022.
OpenAI. Gpt-4 technical report. ArXiv, abs/2303.08774, 2023.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin,
Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language
models to follow instructions with human feedback. Advances in Neural Information
_Processing Systems, 35:27730–27744, 2022a._
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin,
Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language
models to follow instructions with human feedback. Advances in Neural Information
_Processing Systems, 35:27730–27744, 2022b._
Jane Pan, Tianyu Gao, Howard Chen, and Danqi Chen. What in-context learning ”learns”
in-context: Disentangling task recognition and task learning. ArXiv, abs/2305.09731, 2023.
Joon Sung Park, Joseph C. O’Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and
Michael S. Bernstein. Generative agents: Interactive simulacra of human behavior. ArXiv,
2023.
Arkil Patel, S. Bhattamishra, and Navin Goyal. Are nlp models really able to solve simple
math word problems? In North American Chapter of the Association for Computational
_Linguistics, 2021._
Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, and Mike
Lewis. Measuring and narrowing the compositionality gap in language models. ArXiv,
abs/2210.03350, 2022.
Ben Prystawski and Noah D. Goodman. Why think step-by-step? reasoning emerges from
the locality of experience. ArXiv, abs/2304.03843, 2023.
Cheng Qian, Chi Han, Yi R Fung, Yujia Qin, Zhiyuan Liu, and Heng Ji. Creator: Disentangling abstract and concrete reasonings of large language models through tool creation.
_ArXiv, 2023._
Laura Ruis, Akbir Khan, Stella Rose Biderman, Sara Hooker, Tim Rocktaschel, and Edward Grefenstette. Large language models are not zero-shot communicators. ArXiv,
abs/2210.14986, 2022.
William Saunders, Catherine Yeh, Jeff Wu, Steven Bills, Long Ouyang, Jonathan Ward, and
Jan Leike. Self-critiquing models for assisting human evaluators. ArXiv, abs/2206.05802,
2022.
-----
Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, and Weizhu Chen.
Synthetic prompting: Generating chain-of-thought demonstrations for large language
models. ArXiv, abs/2302.00618, 2023.
Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed Huai hsin Chi,
Nathanael Scharli, and Denny Zhou. Large language models can be easily distracted by
irrelevant context. ArXiv, abs/2302.00093, 2023.
Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and
Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning. 2023.
Mirac Suzgun, Luke Melas-Kyriazi, and Dan Jurafsky. Follow the wisdom of the crowd:
Effective text generation via minimum bayes risk decoding. ArXiv, abs/2211.07634, 2022.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux,
Timothee Lacroix, Baptiste Rozi´ ere, Naman Goyal, Eric Hambro, Faisal Azhar, et al.`
Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971,
2023a.
Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, Yasmine
Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Daniel M.
Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David´
Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj
Goswami, Naman Goyal, Anthony S. Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan,
Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel M. Kloumann, A. V. Korenev,
Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich,
Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog,
Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan
Schelten, Ruan Silva, Eric Michael Smith, R. Subramanian, Xia Tan, Binh Tang, Ross
Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zhengxu Yan, Iliyan Zarov, Yuchen
Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert
Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned
[chat models. ArXiv, abs/2307.09288, 2023b. URL https://api.semanticscholar.org/](https://api.semanticscholar.org/CorpusID:259950998)
```
CorpusID:259950998.
```
Boshi Wang, Sewon Min, Xiang Deng, Jiaming Shen, You Wu, Luke Zettlemoyer, and Huan
Sun. Towards understanding chain-of-thought prompting: An empirical study of what
matters. ArXiv, abs/2212.10001, 2022a.
Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi
Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large
language models. ArXiv, 2023a.
Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, and Ee-Peng
Lim. Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by
large language models. ArXiv, abs/2305.04091, 2023b.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Huai hsin Chi, and Denny Zhou.
Rationale-augmented ensembles in language models. ArXiv, abs/2207.00747, 2022b.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Huai hsin Chi, and Denny
Zhou. Self-consistency improves chain of thought reasoning in language models. ArXiv,
abs/2203.11171, 2022c.
Albert Webson and Ellie Pavlick. Do prompt-based models really understand the meaning
of their prompts? In Proceedings of the 2022 Conference of the North American Chapter of
_the Association for Computational Linguistics: Human Language Technologies, pp. 2300–2344,_
2022.
Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan
Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners.
In International Conference on Learning Representations, 2021.
-----
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani
Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed Huai hsin Chi, Tatsunori
Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent abilities
of large language models. Trans. Mach. Learn. Res., 2022, 2022a.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed H Chi, Quoc V
Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language
models. 2022b.
Yiran Wu, Feiran Jia, Shaokun Zhang, Qingyun Wu, Hangyu Li, Erkang Zhu, Yue Wang,
Yin Tat Lee, Richard Peng, and Chi Wang. An empirical study on challenging math
problem solving with gpt-4. ArXiv, 2023.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and
Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language
models. ArXiv, abs/2305.10601, 2023.
Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. Star: Bootstrapping reasoning
with reasoning. Advances in Neural Information Processing Systems, 35:15476–15488, 2022.
Zhuosheng Zhang, Aston Zhang, Mu Li, and Alexander J. Smola. Automatic chain of
thought prompting in large language models. ArXiv, abs/2210.03493, 2022.
Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. Multimodal chain-of-thought reasoning in language models. arXiv preprint arXiv:2302.00923,
2023.
Chuanyang Zheng, Zhengying Liu, Enze Xie, Zhenguo Li, and Yu Li. Progressive-hint
prompting improves reasoning in large language models. ArXiv, abs/2304.09797, 2023.
Denny Zhou, Nathanael Scharli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale
Schuurmans, Olivier Bousquet, Quoc Le, and Ed Huai hsin Chi. Least-to-most prompting
enables complex reasoning in large language models. ArXiv, abs/2205.10625, 2022a.
Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris
Chan, and Jimmy Ba. Large language models are human-level prompt engineers. ArXiv,
abs/2211.01910, 2022b.
**A** **Experiment Details**
**Prompt Design and Selection.** We randomly sampled 200 points from GSM8k dataset for
prompt selections. The alternatives are generated by communicating with ChatGPT. We test
the original zero-shot instruction and the standard CoT used in our experiments. Then, we
test four brief prompts and one particularly detailed prompt. The results are list in Table 9.
An interesting phenomenon is that the overly detailed prompt (P5) resulted in a significant
decrease.
**Model Usage.** We used turbo-0301, an early version of ChatGPT, for instruction selection
and type analysis in PEP. Given the possibility that OpenAI may restrict access to early
models, we tested PEP on four open-source models, text-davinci-003 and the recently
released turbo-0125, as well to validate the generalization of the instructions. We utilized
both Llamma models and Mistral-7B with bfloat16. Due to cost constraints, we loaded
```
Mistral-8x7B in 4 bits.
```
**B** **All used instructions and exemplars**
B.1 Zero-shot prompts
To ensure fairness and clarity of semantics when combining multiple instructions, we use
“Let’s solve the question step by step” as the zero-shot CoT instead of “Let’s think step by
step.”. We compared these two instructions in Table 9. The used prompts are list as follow:
-----
Label Prompt Accuracy
C0 Think it step by step. 80.0
C1 Solve the question step by step 84.0
P1 Decompose the given question into smaller segments, elucidating each 87.0
segment as you rephrase it.
P2 Break down the following question into concise phrases and elaborate on 86.0
each phrase while rewriting.
P3 Rewrite the following question by decomposing it into shorter clauses 85.0
and providing explanations for each clause.
P4 Restructure the subsequent question by dissecting it into more concise 80.0
clauses and enhancing clarity through explanatory rephrasing.
P5 Break down the problem into independent, concise, and complete phrases,
aligning the meaning of each phrase with the original text. Focus on
expressing only one concept, action, or condition in each phrase. Then,
provide detailed explanations for each phrase, analyzing the implicit
messages, defining terms, and using precise professional vocabulary to
accurately convey the meaning, aiming to match the potential intention
of the target problem.
62.0
Table 9: Results of prompt selection. While P1 was selected and employed in our experiments
to demonstrate the effectiveness of the concept of elaboration, there could be better prompts.
CoT = ”Let’s solve the problem step by step. {IRR Inst}{FORMAT Inst}\nQuestion: {qst}”
PEP = ”Decompose the given question into smaller segments, elucidating each segment as you rephrase
it. Then, solve the problem step by step. {IRR Inst}{FORMAT Inst}\nQuestion: {qst}”
IRR Inst = ”Feel free to ignore irrelevant information given in the questions.”
B.2 Extract the answers
In order to extract results better, we add a standardized output instruction after each
prompt during generation, namely
(1) for free-answered questions, we use
FORMAT Inst = ”End the solution in the format: ’Final answer: \boxed{X}’, where X is arabic
numerals or ’N\A’ if the problem is unsolvable.”
(2) for questions with options, we use
FORMAT Inst = ”End the solution in the format: ’Final answer: \boxed{X}’, where X is the choice.”
Finally, we use a one-shot exemplars to extract the answers from generations by turbo-0125.
For those unrecognized solutions, we extract the answers manually.
Extract Template = “‘Given the textual solution or code execution solution, output the numeric answer
that can be converted into float value of the problem. If the solution does not yield a result, output
”unsolved”. Only output the numeric value or ”unsolved”.
### Example:
Solution: The total amount of money Janet makes from selling eggs at the farmers’ market per day is 21
- 4 = 17 eggs x $2 = $34.
Therefore, the final answer is: Janet makes $34 per day at the farmers’ market.
Answer: $34
-----
Result: 34
###
Original Problem: {qst}
Solution: {sol}
Result:“‘
B.3 Few-shot exemplars
In this section, we enumerate all the exemplars used. They are directly adopted for CoT and
L2M from Zhou et al. (2022a), which are specifically designed for GSM8k. For PEP, PaS, and
Self-ask, we re-generate the exemplars using GPT-4 for proper modifications.
In practice, the one-shot setting uniformly employs the first exemplar, while the four-shot
utilizes all examples. When integrating PEP with other problem-related methods, we simply
append the corresponding elaboration part of PEP to the original question of other examples,
before any other generations. When employing IRR-Inst, we position it at the beginning.
The overall template structure is as follows, using the one-shot L2M+PEP+IRR inst as an
example for illustration:
“‘Solve grade school math problems. Feel free to ignore irrelevant information given in the questions.\n
Question: Elsa has 5 apples. Anna has 2 more apples than Elsa. How many apples do they have
together?
Problem Elaboration:
Segment 1: Elsa has 5 apples.
Segment 2: Anna has 2 more apples than Elsa.
Segment 3: How many apples do they have together?
Rephrased question: If Elsa has 5 apples and Anna has 2 more apples than Elsa, how many apples do
they have together?
Answer: Let’s break down this rephrased problem: 1. How many apples does Anna have? 2.
How many apples do Elsa and Anna have together?
1. Anna has 2 more apples than Elsa. So Anna has 2 + 5 = 7 apples.
2. Elsa and Anna have 5 + 7 = 12 apples together.
Question: {qst}
Problem Elaboration:“‘
B.3.1 exemplars for CoT
Question: Elsa has 5 apples. Anna has 2 more apples than Elsa. How many apples do they have
together?
Answer: Anna has 2 more apples than Elsa, so Anna has 2 + 5 = 7 apples. Elsa and Anna have 5 + 7
= 12 apples together. The answer is 12.
Question: Four years ago, Kody was only half as old as Mohamed. If Mohamed is currently
twice 30 years old, how old is Kody?
Answer: We were told that Mohamed is currently twice 30 years old, so he is currently 30 * 2 =
60 years old. That means that four years ago he must have been 60 - 4 = 56 years old. Four
years ago, Kody was half as old as Mohamed, so Kody must have been 56 / 2 = 28 years old
then. Since Kody was 28 years old four years ago, she must now be 28 + 4 = 32 years old. The answer is 32.
Question: Carla bought 2 bags of mini peanut butter cups on clearance. Each bag was $6.00 but was
75% off. How much did she spend on 2 bags of candy?
Answer: Each bag was $6.00 but was 75% off. So each bag cost $6.00 * (1 - 0.75) = $6.00 * 0.25 =
$1.50. Carla bought 2 bags. So she spent $1.50 * 2 = $3.00. The answer is 3.
-----
Question: If Pam is currently twice as young as Rena is, and in 10 years Rena will be 5 years older than
her, how old is Pam now?
Answer: Since Rena will be 5 years older than Pam in 10 years, she must be 5 years older than Pam now
as well. If Pam is currently twice as young as Rena, that means that Rena is currently twice as old as
Pam is. So if P stands for Pam’s age now and R stands for Rena’s age now, then we know that R = 2 *
P And since Rena is 5 years older than Pam now, we know that R = P + 5. By substitution, we have P
+ 5 = 2 * P, which means that P = 5. The answer is 5.
FOUR SHOT CoT = “‘{exemplars}
Question: {qst}
Answer:
B.3.2 exemplars for PEP
Question: Elsa has 5 apples. Anna has 2 more apples than Elsa. How many apples do they have
together?
Problem Elaboration:
Segment 1: Elsa has 5 apples. This segment tells us the number of apples Elsa has.
Segment 2: Anna has 2 more apples than Elsa. This segment tells us that Anna has more apples than
Elsa, specifically 2 more.
Segment 3: How many apples do they have together? This segment is asking us to find the total number
of apples both Elsa and Anna have combined.
Solution:
Step 1: Determine the number of apples Elsa has. Elsa has 5 apples.
Step 2: Determine the number of apples Anna has. Anna has 2 more apples than Elsa, so she has 5 + 2
= 7 apples.
Step 3: Determine the total number of apples they have together. Together, Elsa and Anna have 5 + 7
= 12 apples. The answer is 12.
Question: Four years ago, Kody was only half as old as Mohamed. If Mohamed is currently twice 30
years old, how old is Kody?
Problem Elaboration:
Segment 1: Four years ago, Kody was only half as old as Mohamed. This means that the age difference
between Kody and Mohamed is constant and it is the same four years ago as it is now.
Segment 2: If Mohamed is currently twice 30 years old. This means that Mohamed’s current age is 60
years old.
Solution:
Step 1: Determine Mohamed’s age four years ago. If Mohamed is currently 60 years old, then four years
ago he was 60 - 4 = 56 years old.
Step 2: Determine Kody’s age four years ago. Since Kody was half as old as Mohamed four years ago,
then Kody was 56 / 2 = 28 years old four years ago.
Step 3: Determine Kody’s current age. If Kody was 28 years old four years ago, then Kody is currently
28 + 4 = 32 years old. The answer is 32.
Question: Carla bought 2 bags of mini peanut butter cups on clearance. Each bag was $6.00 but was
75% off. How much did she spend on 2 bags of candy?
Problem Elaboration:
Segment 1: Identify the original price of the bags of candy. The original price of each bag of candy is
$6.00.
Segment 2: Determine the discount on each bag. The bags are 75% off.
Segment 3: Calculate the discounted price of each bag. To find the discounted price, we need to
calculate 75% of $6.00.
Segment 4: Determine the total cost for 2 bags. Once we have the discounted price of one bag, we
multiply it by 2 to find the total cost for 2 bags.
Solution: Step 1: The original price of each bag is $6.00. Step 2: The discount on each bag is 75%.
Step 3: To calculate 75% of $6.00, we multiply 6 by 0.75, which equals $4.50. This means that $4.50 is
-----
the amount of the discount. Step 4: To find the discounted price of each bag, we subtract the discount
from the original price. So, $6.00 - $4.50 = $1.50. Each bag costs $1.50 after the discount. Step 5: To
find the total cost for 2 bags, we multiply the discounted price by 2. So, $1.50 * 2 = $3.00. The answer is 3.
Question: If Pam is currently twice as young as Rena is, and in 10 years Rena will be 5 years older than
her, how old is Pam now?
Problem Elaboration:
Segment 1: Pam is currently twice as young as Rena is. This means that Pam’s current age is half of
Rena’s current age.
Segment 2: In 10 years, Rena will be 5 years older than Pam. This means that if we add 10 years to
both Pam’s and Rena’s current ages, the difference between their ages will be 5 years.
Solution:
Step 1: Let’s denote Rena’s current age as R and Pam’s current age as P. From the first segment, we
know that P = R/2.
Step 2: From the second segment, we know that R + 10 = P + 10 + 5. We can simplify this to R = P
+ 5.
Step 3: Now we can substitute P from the first equation into the second equation. So, R = R/2 + 5.
Step 4: To solve for R, we multiply both sides of the equation by 2 to get rid of the fraction. This gives
us 2R = R + 10.
Step 5: Subtract R from both sides to get R = 10. So, Rena is currently 10 years old.
Step 6: Substitute R = 10 into the first equation to find P. This gives us P = 10/2 = 5. So, Pam is
currently 5 years old. The answer is 5.
FOUR SHOT PEP = “‘{exemplars}
Question: {qst}
Problem Elaboration: “‘
B.3.3 exemplars for L2M
Question: Elsa has 5 apples. Anna has 2 more apples than Elsa. How many apples do they have
together?
Answer: Let’s break down this problem: 1. How many apples does Anna have? 2. How many apples do
Elsa and Anna have together?
1. Anna has 2 more apples than Elsa. So Anna has 2 + 5 = 7 apples.
2. Elsa and Anna have 5 + 7 = 12 apples together. The answer is 12.
Question: Four years ago, Kody was only half as old as Mohamed. If Mohamed is currently twice 30
years old, how old is Kody?
Answer: Let’s break down this problem: 1. How old was Mohamed four years ago? 2. How old is Kody?
1. We were told that Mohamed is currently twice 30 years old, so he is currently 30 * 2 = 60 years old.
That means that four years ago he must have been 60 - 4 = 56 years old.
2. Four years ago, Kody was half as old as Mohamed, so Kody must have been 56 / 2 = 28 years old
then. Since Kody was 28 years old four years ago, she must now be 28 + 4 = 32 years old. The answer is 32.
Question: Carla bought 2 bags of mini peanut butter cups on clearance. Each bag was $6.00 but was
75% off. How much did she spend on 2 bags of candy?
Answer: Let’s break down this problem: 1. How much did she spend on 2 bags of candy?
1. Each bag was $6.00 but was 75% off. So each bag cost $6.00 * (1 - 0.75) = $6.00 * 0.25 = $1.50.
Carla bought 2 bags. So she spent $1.50 * 2 = $3.00. The answer is 3.
Question: If Pam is currently twice as young as Rena is, and in 10 years Rena will be 5 years older than
her, how old is Pam now?
Answer: Let’s break down this problem: 1. How much older is Rena than Pam currently? 2. How old is
Pam now?
1. Since Rena will be 5 years older than Pam in 10 years, she must be 5 years older than Pam now as
-----
well.
2. If Pam is currently twice as young as Rena, that means that Rena is currently twice as old as Pam is.
So if P stands for Pam’s age now and R stands for Rena’s age now, then we know that R =2 * P And
since Rena is 5 years older than Pam now, we know that R = P + 5. By substitution, we have P + 5 =
2 * P, which means that P = 5. The answer is 5.
FOUR SHOT L2M = “‘{exemplars}
Question: {qst}
Answer: Let’s break down this problem:“‘
B.3.4 exemplars for Self-ask
Question: Elsa has 5 apples. Anna has 2 more apples than Elsa. How many apples do they have
together?
Are follow up questions needed here: Yes.
Follow up: How many apples does Anna have?
Intermediate answer: Anna has 5 + 2 = 7 apples.
Follow up: How many apples do Elsa and Anna have together?
Intermediate answer: Elsa and Anna have 5 + 7 = 12 apples together. The answer is 12.
Question: Four years ago, Kody was only half as old as Mohamed. If Mohamed is currently twice 30
years old, how old is Kody?
Are follow up questions needed here: Yes.
Follow up: How old is Mohamed currently?
Intermediate answer: Mohamed is 30 * 2 = 60 years old. Follow up: How old was Kody four years ago?
Intermediate answer: Kody was (60 - 4) / 2 = 28 years old four years ago.
So the final answer is: Kody is 28 + 4 = 32 years old. The answer is 32.
Question: Carla bought 2 bags of mini peanut butter cups on clearance. Each bag was $6.00 but was
75% off. How much did she spend on 2 bags of candy?
Are follow up questions needed here: Yes.
Follow up: How much was the discount for each bag?
Intermediate answer: The discount for each bag is $6.00 * 75% = $4.50.
Follow up: How much did Carla pay for each bag after the discount?
Intermediate answer: Carla paid $6.00 - $4.50 = $1.50 for each bag.
So the final answer is: Carla spent $1.50 * 2 = $3.00 on 2 bags of candy. The answer is 3.00.
Question: If Pam is currently twice as young as Rena is, and in 10 years Rena will be 5 years older than
her, how old is Pam now?
Are follow up questions needed here: Yes.
Follow up: What about Rena and Pam’s current ages?
Intermediate answer: It tells us that Rena’s age is twice Pam’s age. So if P stands for Pam’s age now
and R for Rena’s age now, then R = 2 * P. And since Rena is 5 years older than Pam now, we have R =
P + 5.
Follow up: What is Pam’s age now?
Final answer: By substituting P + 5 in place of R in equation R = 2 * P, we get P + 5 = 2 * P, which
simplifies to P = 5. So, Pam is 5 years old. The answer is 5.
FOUR SHOT SK =“‘{exemplars}
Question: {qst}
Are follow up questions needed here: Yes.
Follow up: “‘
-----
It’s worth noting that the template recommended in self-ask (Press et al., 2022) actually ends
with “Are follow up questions needed here:”, but we found it always generates a “Yes” or
“No” and then stops, especially in one-shot setting. Therefore, we made above adjustments.
B.3.5 exemplars of PaS
Question: Elsa has 5 apples. Anna has 2 more apples than Elsa. How many apples do they have
together?
Plan:
Step 1: Find out how many apples Anna has.
Step 2: Add the number of apples Elsa has to the number of apples Anna has to find the total number
of apples they have together.
Solution:
Step 1: Anna has 5 + 2 = 7 apples.
Step 2: Together, Elsa and Anna have 5 (Elsa’s apples) + 7 (Anna’s apples) = 12 apples. So, Elsa and
Anna have 12 apples together. The answer is 12.
Question: Four years ago, Kody was only half as old as Mohamed. If Mohamed is currently twice 30
years old, how old is Kody?
Plan:
Step 1: Find out how old Mohamed is currently.
Step 2: Find out how old Mohamed was four years ago.
Step 3: Since Kody was half as old as Mohamed four years ago, find out Kody’s age four years ago.
Step 4: Add four years to Kody’s age to find out his current age.
Solution:
Step 1: Mohamed is currently 2 * 30 = 60 years old.
Step 2: Four years ago, Mohamed was 60 - 4 = 56 years old.
Step 3: Four years ago, Kody was 56 / 2 = 28 years old. Step 4: Currently, Kody is 28 + 4 = 32 years
old. So, Kody is 32 years old. The answer is 32.
Question: Carla bought 2 bags of mini peanut butter cups on clearance. Each bag was $6.00 but was
75% off. How much did she spend on 2 bags of candy?
Plan:
Step 1: Find out how much discount Carla got on each bag.
Step 2: Subtract the discount from the original price to find out the price Carla paid for each bag. Step
3: Multiply the price Carla paid for each bag by the number of bags she bought to find out how much
she spent in total.
Solution:
Step 1: The discount on each bag is 75/100 * $6.00 = $4.50.
Step 2: The price Carla paid for each bag is $6.00 - $4.50 = $1.50.
Step 3: Carla spent $1.50 * 2 = $3.00 on 2 bags of candy. So, Carla spent $3.00 on 2 bags of candy.
The answer is 3.00.
Question: If Pam is currently twice as young as Rena is, and in 10 years Rena will be 5 years older than
her, how old is Pam now?
Plan:
Step 1: Set up an equation based on the information that Rena’s age is twice Pam’s age.
Step 2: Set up another equation based on the information that Rena is 5 years older than Pam.
Step 3: Substitute the second equation into the first to solve for Pam’s age.
Solution:
Step 1: If P stands for Pam’s age now and R for Rena’s age now, then R = 2 * P.
Step 2: And since Rena is 5 years older than Pam now, we have R = P + 5.
Step 3: By substituting P + 5 in place of R in equation R = 2 * P, we get P + 5 = 2 * P, which
simplifies to P = 5. So, Pam is 5 years old. The answer is 5.
FOUR SHOT PaS = “‘{exemplars}
-----
Question: {qst}
Plan:“‘
B.4 Implementation of PoT
We implement PoT using the following template. For PEP, we insert null into {ela}. For the
PoT+PEP, we first use zero-shot PEP to generate the content for the Elaboration part, and
then insert it into {ela}.
PoT zeroshot temp = “‘# Question: {qst}
_{ela}_
# Answer the question by implementing a solution() function.
# Generate the code only.
# Let’s write a Python program step by step, and then return the answer
# Firstly, we need write the solution() starting with defining variable:“‘
**C** **Example Input and Output Pairs**
In Table 10 we provide more error cases. As shown, LLMs could also make mistakes during
elaboration, misleading the following reasoning. PEP may break down the continuous
context, thus changing the implicit implicatures. Besides, PEP may focus too much on the
locality of sentences, constraining the associative thinking of CoT.
-----
**Question 1:** It is approximately 1955 kilometers from San Diego, California to New York City, New York. If
Bernice drove 325 kilometers for 4 days, how many kilometers will she still need to drive?
**CoT** To find the total distance Bernice drove, we can multiply the distance she drove each day (325
kilometers) by the number of days she drove (4 days): 325 km/day x 4 days = 1300 kilometers.
Therefore, Bernice drove 1300 kilometers. To find how many kilometers she still needs to drive,
we can subtract the distance she drove from the total distance between San Diego and New
York City: 1955 km - 1300 km = 655 kilometers. Therefore, Bernice still needs to drive 655
kilometers. (Correct)
**Error: The decomposition broke the continuous context into segments, changing the related implicit**
**implicatures.**
**PEP + CoT:** Segment 1: What is the distance between San Diego, California and New York City, New York?
The distance between San Diego, California and New York City, New York is approximately
1955 kilometers. Segment 2: How many kilometers did Bernice drive? Bernice drove 325
kilometers. Segment 3: How many days did Bernice drive for? Bernice drove for 4 days.
Segment 4: How many kilometers will Bernice still need to drive? To find out how many
kilometers Bernice will still need to drive, we need to subtract the distance she has already
driven from the total distance between San Diego and New York. 1955 km - 325 km = 1630 km.
Therefore, Bernice will still need to drive 1630 kilometers.
**Question 2:** Sandy wants to lose as much weight as Joey does but needs 4 weeks to lose the same amount
of weight that Joey loses in a single week. If Joey loses 8 pounds in 4 weeks, how many weeks
will it take Sandy to lose the same amount of weight?
**CoT** First, we need to find out how much weight Joey loses in a single week. We can do this by
dividing the total weight he loses in 4 weeks by 4: 8 pounds ÷ 4 weeks = 2 pounds per week.
So Joey loses 2 pounds per week. Next, we need to figure out how much weight Sandy wants
to lose. We know she wants to lose the same amount as Joey, which is 8 pounds. Now we can
use the information given in the problem to find out how many weeks it will take Sandy to
lose 8 pounds. We know that Sandy needs 4 weeks to lose the same amount of weight that Joey
loses in a single week. So we can set up a proportion: 4 weeks to lose 2 pounds = x weeks to
lose 8 pounds. To solve for x, we can cross-multiply: 4 weeks * 8 pounds = 2 pounds * x weeks;
32 pounds = 2x; Divide both sides by 2: 16 weeks = x. So it will take Sandy 16 weeks to lose the
same amount of weight that Joey loses in 4 weeks. (Correct)
**Error: The elucidation emphasized and rephrased the given condition, changing the underlying**
**implicatures.**
**PEP + CoT:** Segment 1: Sandy wants to lose as much weight as Joey does. Elucidation: Sandy has a weight
loss goal that is equivalent to Joey’s weight loss goal. Segment 2: Sandy needs 4 weeks to lose
the same amount of weight that Joey loses in a single week. Elucidation: Sandy’s weight loss
rate is slower than Joey’s weight loss rate. It takes Sandy 4 weeks to lose the same amount of
weight that Joey loses in just one week. Segment 3: Joey loses 8 pounds in 4 weeks. Elucidation:
Joey’s weight loss goal is to lose 8 pounds, and he achieves this goal in 4 weeks. Segment 4:
How many weeks will it take Sandy to lose the same amount of weight? Elucidation: The
question is asking for the number of weeks it will take Sandy to achieve the same weight loss
goal as Joey.
Solution: Since Joey loses 8 pounds in 4 weeks, his weight loss rate is 8/4 = 2 pounds per week.
Since Sandy needs 4 weeks to lose the same amount of weight that Joey loses in a single week,
her weight loss rate is 1/4 = 0.25 pounds per week. To lose 8 pounds like Joey, Sandy will need
to lose at the same rate of 2 pounds per week. Therefore, it will take Sandy 8/2 = 4 weeks to
lose the same amount of weight as Joey.
**Question 3:** Patty’s Plumbing charges $40 to visit a house to make a repair, plus $35 per hour, or part
thereof, for labor, plus parts. One job took 2.25 hours and used $60 in parts. How much did
Patty charge?
**CoT** 1. Calculate the labor cost: The job took 2.25 hours, so Patty charged for 3 hours (since she
charges for any part of an hour). The labor cost is $35 per hour, so the labor cost for 3 hours
is 3 x $35 = $105. 2. Calculate the total cost: The visit fee is $40. The labor cost is $105. The
parts cost is $60. The total cost is $40 + $105 + $60 = $205. Therefore, Patty charged $205 for the
job. (Correct)
**Error: PEP focuses on known information and may constrain the associative thinking of CoT**
**during reasoning.**
**PEP + CoT:** Segment 1: What is the base fee for Patty’s Plumbing to visit a house for a repair? Patty’s
Plumbing charges $40 to visit a house for a repair. Segment 2: How much does Patty’s Plumbing
charge for labor? Patty’s Plumbing charges $35 per hour, or part thereof, for labor. Segment
3: How much does Patty’s Plumbing charge for parts? The job used $60 in parts. Segment 4:
How long did the job take? The job took 2.25 hours.
Solution: To calculate the total cost of the job, we need to add the base fee, labor cost, and parts
cost. Base fee: $40 Labor cost: $35 per hour x 2.25 hours = $78.75 Parts cost: $60. Total cost: $40
+ $78.75 + $60 = $178.75. Therefore, Patty charged $178.75 for the job.
Table 10: Error cases from GSM8k. For certain sentences in the problem (cyan), CoT can
understand it correctly (blue), while PEP may lead to erroneous reasoning and generate
errors (red)
-----
| [
"Haoran, Liao",
"Jidong, Tian",
"Shaohua, Hu",
"Hao, He",
"Yaohui, Jin"
] | 2024-03-26T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2402.15764 | https://arxiv.org/abs/2402.15764 | https://www.semanticscholar.org/paper/5e5f6e3620a9b472cfa8cb81b264f350cd77d5d4 |
MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit | Large language models (LLMs) have been explored in a variety of reasoning tasks including solving of mathematical problems. Each math dataset typically includes its own specially designed evaluation script, which, while suitable for its intended use, lacks generalizability across different datasets. Consequently, updates and adaptations to these evaluation tools tend to occur without being systematically reported, leading to inconsistencies and obstacles to fair comparison across studies. To bridge this gap, we introduce a comprehensive mathematical evaluation toolkit that not only utilizes a python computer algebra system (CAS) for its numerical accuracy, but also integrates an optional LLM, known for its considerable natural language processing capabilities. To validate the effectiveness of our toolkit, we manually annotated two distinct datasets. Our experiments demonstrate that the toolkit yields more robust evaluation results compared to prior works, even without an LLM. Furthermore, when an LLM is incorporated, there is a notable enhancement. The code for our method will be made available at \url{https://github.com/MARIO-Math-Reasoning/math_evaluation}. | null | ## MARIO Eval: Evaluate Your Math LLM with your Math LLM –A mathematical dataset evaluation toolkit
**Boning Zhang[∗], Chengxi Li[∗], Kai Fan[∗†]**
Zhejiang University, Alibaba Group
```
[email protected], {xiji.lcx,k.fan}@alibaba-inc.com
```
**Abstract**
Large language models (LLMs) have been explored in a variety of reasoning
tasks including solving of mathematical problems. Each math dataset typically
includes its own specially designed evaluation script, which, while suitable for
its intended use, lacks generalizability across different datasets. Consequently,
updates and adaptations to these evaluation tools tend to occur without being
systematically reported, leading to inconsistencies and obstacles to fair comparison
across studies. To bridge this gap, we introduce a comprehensive mathematical
evaluation toolkit that not only utilizes a python computer algebra system (CAS)
for its numerical accuracy, but also integrates an optional LLM, known for its
considerable natural language processing capabilities. To validate the effectiveness
of our toolkit, we manually annotated two distinct datasets. Our experiments
demonstrate that the toolkit yields more robust evaluation results compared to prior
works, even without an LLM. Furthermore, when an LLM is incorporated, there
is a notable enhancement. The code for our method will be made available at
```
https://github.com/MARIO-Math-Reasoning/math_evaluation.
```
**1** **Introduction**
With appropriate prompts or external tools, Large language models (LLMs) have attained human
parity in various tasks. Nonetheless, mathematical reasoning remains a formidable challenge for
LLMs, necessitating a systematic evaluation to accurately compare their performance on such tasks.
However, this field suffers from poor automatic evaluation that is neither robust nor complete enough
for accurate evaluation on math problem answers, owing to their complexity and diversity. Recent
LLMs reasoning methods (ToRA [4] and MathChat [12]) lack unified math evaluation standards even
on the same dataset such as MATH [4]. Such discrepancy between different evaluation scripts may
not accurately reflect their true reasoning capability. Consequently, we are motivated to design a
more comprehensive automatic evaluation toolkit that encompasses various math concepts within
answers. Our aim is to establish a convenient and standardized evaluation framework to support
future research in mathematical reasoning.
Upon conducting a thorough review of existing automatic math evaluation methods, we pinpointed
several key shortcomings. Traditionally, the assessment of mathematical answers has heavily relied on
simplistic methods such as direct string comparisons or simple rules, inadequate to address complex
situations. As illustrated in Figure 1, identical answer expression may imply different math concepts,
while different expressions may be actually equivalent under certain conditions.
In light of such limitations, we propose a novel two-stage math evaluation toolkit that features an
optional integration with LLM. The initial stage involves identifying the type of answer based on
our predefined set of mathematical concepts. Subsequently, the second stage invokes the relevant
type-specific function to evaluate the equivalence between the expected and predicted answers. When
integrating the LLM, it has the potential to significantly enhance the precision of both the classification
of answer types and the determination of answer equivalence. We evaluate the equivalence of answers
using our toolkit in various configurations: without LLM, with LLM, and LLM-only. This evaluation
_∗Equal contribution. This work was done when the first author’s Internship at Alibaba._
_†Corresponding Author_
-----
Figure 1: Most previous evaluation tools judge correctness solely based on the answer, while ours
also takes into account the answer type of the question implied.
|Type|Code Friendly Definition|
|---|---|
|Real|A number can be used to measure a continuous one-dimensional quantity.|
|Complex|A number can be written in the form of a + bi, a, b and i is the imaginary unit. ∈R|
|Set|A collection of different elements which are typically mathematical objects of any kind.|
|Interval(s)|A interval is a set of real numbers that lie between two fixed endpoints without any gaps. Intervals represent the intersection or union of several interval.|
|Vector|A collection of ordered elements which are typically mathematical objects of any kind.|
|Matrix|A rectangular array of numbers, symbols, or expressions, arranged in rows and columns.|
|Expression|A combination of symbols that is well-formed according to rules that depend on the context, including at least one unknown variable, but no equal sign, e.g., x2 + y2 1 −|
|Function|A function from a set X to a set Y is an assignment of an element of Y to each element of X, e.g., f(x, y) = x2 + y2 1. −|
|Equation|A equation states that two quantities are the same, in the form of A = B. In our case, at least one of A and B should be Expression, e.g., x2 + y2 = 1.|
|Inequality|A relation which makes a non-equal comparison between two numbers or expressions, e.g., x2 + y2 < 1.|
|Others|Do not belong to above 10 types.|
Table 1: The type definitions lack complete rigor by design, with the intention to align these types
with those defined in Python or SymPy as Table 2.
reveals that the hybrid approach (with LLM) effectively leverages the numerical precision offered by
Python packages and the natural language comprehension capabilities of the LLM. In summary, our
key contributions are as follows:
1. We construct a mathematical evaluation toolkit with an optionally integratable LLM. It
covers a variety of mathematical reasoning answer types.
2. We curated three mathematical answer evaluation datasets, including original questions
and answers, model predicted answers, and our manually annotated answer types and
equivalences.
3. We plan to make our datasets and evaluation toolkit publicly available to support and advance
the efforts of the research community.
**2** **Main Framework**
In this section, we outline the design pattern of our mathematical evaluation toolkit. It mainly includes
two modules. One is a type classification module that is utilized to ascertain the type of the expected
answer. The other is an evaluation module that is crafted to assess the equivalence between the
predicted response and the expected answer.
-----
Table 2: Our basic design attempts to align defined types in Python language.
|Type|Package|Aligned type|
|---|---|---|
|Set Vector Matrix Interval(s) Expression Function Equation Inequality|Python Python SymPy SymPy SymPy SymPy SymPy SymPy|set list Matrix Interval Expr Function Equality Relational|
**Algorithm 1 Design of our math evaluation**
**Require: question q, answer a, prediction p**
1: atype = rule_classifier(a, p)
2: try
3: `ans = is_equiv(a, p, atype)` _▷_ _False or Error may be re-evaluated later_
4: except
5: `ans = False`
6: if not ans and LLM is not None then
7: **atype = LLMtype_prompt(q, a)**
8: **try**
9: `ans = is_equiv(a, p, atype)` _▷_ _Only Error will be re-evaluated_
10: **except**
11: `ans = LLMequiv_prompt(q, a, p)`
**return ans**
**2.1** **Type Definitions**
Drawing the inspiration from the way mathematical theories are formulated by using a set of
axioms, we establish our types ranging from fundamental concepts (e.g., Real) to complex composite
structures (e.g., Matrix). The type definitions are delineated in Table 1. It is important to note that
these definitions are intentionally not fully rigorous; our primary aim is to ensure consistency with
the types inherent within the standard Python language, as well as with the Python computer algebra
system (CAS) package, SymPy. For instance, the type ’Set’ in our framework specifically corresponds
to the set class in Python, represented with curly braces (“{...}"), which is distinct from the concept
of a mathematical set that also contains intervals. Table 2 provides a comprehensive comparison
between the types defined in our framework and the built-in types in Python and SymPy. For Real
and Complex, we define the customized types, because SymPy considers real number as complex as
well.
**2.2** **Design Pattern**
Based on the definitions, we initially create a rule-based type classifier that depends solely on the
expected and the predicted answers. For example, an answer string that starts with "\beginmatrix" is
classified as a Matrix type. Subsequently, for the "Real" and "Expression" types, we develop two
fundamental equivalence functions. For all other types, their respective equivalence functions can
recursively invoke these two basic functions according to the internal data structures.
However, as Figure 1 illustrates, relying solely on the answer string to determine its type is not reliable,
since the same string can represent vastly different types. Inspired by the remarkable natural language
understanding capabilities of LLMs, we propose incorporating LLMs into the math evaluation process
to eliminate the confusions highlighted in the introduction. With proper prompt, LLMs can analyze
both the question and the answer to discern the intended answer type. Furthermore, they can even
directly assess the equivalence between the answer and the prediction. To counteract the issue of
hallucination, LLMs will be employed exclusively for cases where the rule-based method fails. The
overall design is depicted in Algorithm 1.
-----
|Toolkit|Equivalence Accuracy|Col3|Col4|
|---|---|---|---|
||MATH|GK2023|GK2023-ToRA|
|LLM only|95.03%|93.77%|-|
|---|---|---|---|
|MATH +Algorithm 1|92.51% 95.15%|91.39% 93.18%|92.21% -|
|---|---|---|---|
|ToRA +Algorithm 1|86.08% 96.69%|87.24% 94.07%|95.06% -|
|---|---|---|---|
|DeepSeek-Math +Algorithm 1|86.46% 96.63%|88.43% 94.07%|92.47% -|
|---|---|---|---|
Table 3: Equivalence accuracy. LLM: gpt-3.5-turbo. Numbers in red depend on LLM.
|basic design + LLM type + LLM equiv|97.23% 97.88% 98.59%|96.74% 97.33% 97.03%|98.96% - -|
|---|---|---|---|
**2.3** **Datasets and Setups**
We mainly conducted experiments on two datasets, the MATH testset [5] and GaoKao2023-Math-En
(GK2023) [6], which contain 5,000 and 385 high school level math problems, respectively. We
performed supervised fine-tuning on the DeepSeek-Math [13] base model on MATH trainset and
infer on the two testsets to extract the model-predicted answers. In addition, we downloaded the
math problem solving LLM ToRA-70B [4]. We then directly applied it to the GaoKao2023 dataset
for inference, denoting this specific test as GK2023-ToRA. Subsequently, we manually verified the
correctness of the predicted answers for all three datasets.
For toolkit comparison, we utilized the official repositories from MATH, ToRA, and DeepSeek-Math.
We assess our evaluation toolkit from three perspectives: equivalence accuracy, type classification
**accuracy, and solution accuracy.**
```
equiv_acc = [#][{][human_eval][ =][ toolkit_eval][}]
```
#dataset
```
type_acc = [#][{][human_type][ =][ llm_type][}]
```
#Dataset
```
sol_acc = [#][{][toolkit_eval][ =][ True][}]
```
#Dataset
**2.4** **Main Results**
The equivalence accuracy results are shown in Table 3. There is a significant difference in the
performance of the three open-source mathematical evaluation toolkits when assessing the same
pairs of expected and predicted answers. Remarkably, gpt-3.5-turbo [8] appears to surpass these
toolkits in terms of making correctness judgments. In contrast, our toolkit in the configuration of the
basic design attains superior accuracy compared to gpt-3.5-turbo, and its performance is further
enhanced when integrated with it.
On the two testsets, our basic design can achieve about 97% accuracy, and the incorporation of
LLM technology provides an additional improvement of about 1%. While this improvement may
appear minor, significant accuracy enhancements are observed when integrating prior toolkits with
our method outlined in Algorithm 1. These results demonstrate that LLMs can significantly amplify
the effectiveness of existing tools. On GaoKao2023-ToRA, most of the inferred results are incorrect,
_i.e., the expected and predicted answers are quite difference. Despite this, all toolkits exhibited_
satisfactory performance, indicating that integrating the LLM may not be necessary. However, we
can still observe that our basic design still achieves better than ToRA toolkit, which was specifically
tailored for its output.
-----
Table 4: Type accuracy with different LLMs
|Model|Accuracy on MATH testset|Col3|
|---|---|---|
||Type|Equivalence|
|gpt-3.5-turbo Qwen-Max|93.74% 93.30%|98.59% 97.65%|
0.65
0.6
0.55
0.5
0.45
0.4
True Accuracy MATH ToRA DeepSeek-Math Ours
Figure 2: Solution accuracy with different toolkits. Left: MATH. Right: GK2023
**2.5** **Ablation Studies**
**Type Accuracy In the first study as illustrated in Table 4, we specially evaluate two commercial**
LLMs: gpt3.5-turbo and Qwen-Max [1]. We choose them for two primary reasons. First, their
cost-effective API pricing enables the handling of 5K problems. Because for a comprehensive
assessment of the LLMs in type classification, we utilized the entire dataset with the LLMs instead
of the procedure defined in Algorithm 1, activating LLM type prediction as needed. Second, LLMs
with larger size typically exhibit enhanced abilities in natural language understanding and instructionfollowing, both of which are crucial for accurate type determination. Meanwhile, the accuracy of
equivalence reported in Table 4 reflects the performance adhering strictly to the procedure outlined in
Algorithm 1. In general, we can conclude that better LLMs may bring larger improvement over our
basic design.
**Solution Accuracy In the second study, depicted in Figure 2, we can observe that previous evaluation**
toolkits exhibit a significant discrepancy (e.g., over 10% gap in the MATH dataset) from the true
accuracy, thereby making it less reliable to access the actual mathematical reasoning capabilities
of math LLMs. Someone may argue that the metric of solution accuracy may not be entirely
accurate, as erroneous judgments over the correct and incorrect answers can potentially cancel
each other out. However, we can still observe a strong correlation between equivalence accuracy
and solution accuracy. Nonetheless, a strong correlation exists between equivalence accuracy and
solution accuracy. For instance, in the MATH dataset and with our toolkits, higher equivalence
accuracy (> 90%) correlates with more robust solution accuracy evaluations (less difference to the
true accuracy).
**Case Studies We carried out three case studies using our manually annotated test sets in the Ap-**
pendix 6,7,8.
**3** **Related Works**
In order to explore the reasoning ability of LLMs and their efficiency in understanding special symbols,
researchers utilize annotated data fine-tuning, notably via ‘chain-of-thought’ data approaches [2, 10]
and integration of external knowledge sources [11]. Automated proof assistants exemplified by
GPT-f [9] further underscores this exploration. English math word problem (MWP) corpus, such as
ASDiv [7], GSM8K [3] and MATH [5] are proposed to evaluate the math reasoning capabilities of
LLMs in the past few years. However, as the most widely used dataset, reasoning experiments on
MATH lack unified automatic evaluation methods. Our work releases a normalized math evaluation
work which reaches accuracy above 95% on MATH, further reflects the true capability of LLMs
reasoning work more faithfully.
-----
**4** **Conclusion**
We presented a comprehensive mathematical evaluation toolkit that is able to optionally incorporate
LLMs. This toolkit is designed to take both advantages of the CAS and LLM, and the experiments
demonstrate our toolkit can promote the accuracy and consistency of math evaluation and facilitating
fair cross-comparison of different studies.
**References**
[1] Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin
Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu,
Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren,
Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu,
Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu,
Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang,
Chang Zhou, Jingren Zhou, Xiaohuan Zhou, and Tianhang Zhu. Qwen technical report, 2023.
[2] Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan
Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu,
Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie
Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent
Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob
Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. Scaling instruction-finetuned
language models, 2022.
[3] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John
Schulman. Training verifiers to solve math word problems, 2021.
[4] Zhibin Gou, Zhihong Shao, Yeyun Gong, yelong shen, Yujiu Yang, Minlie Huang, Nan Duan,
and Weizhu Chen. ToRA: A tool-integrated reasoning agent for mathematical problem solving.
[In The Twelfth International Conference on Learning Representations, 2024. URL https:](https://openreview.net/forum?id=Ep0TtjVoap)
```
//openreview.net/forum?id=Ep0TtjVoap.
```
[5] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn
Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset,
2021.
[6] Minpeng Liao, Wei Luo, Chengxi Li, Jing Wu, and Kai Fan. Mario: Math reasoning with code
interpreter output – a reproducible pipeline, 2024.
[7] Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su. A diverse corpus for evaluating and
developing english math word problem solvers. In Annual Meeting of the Association for
_[Computational Linguistics, 2020. URL https://api.semanticscholar.org/CorpusID:](https://api.semanticscholar.org/CorpusID:220047831)_
```
220047831.
```
[8] R OpenAI. Gpt-4 technical report. arXiv, pages 2303–08774, 2023.
[9] Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem prov[ing. ArXiv, abs/2009.03393, 2020. URL https://api.semanticscholar.org/CorpusID:](https://api.semanticscholar.org/CorpusID:221535103)
```
221535103.
```
[10] Christian Szegedy. A promising path towards autoformalization and general artificial intelligence. In International Conference on Intelligent Computer Mathematics, 2020. URL
```
https://api.semanticscholar.org/CorpusID:220729524.
```
[11] Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha,
Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee,
Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun,
Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts,
Maarten Bosma, Vincent Zhao, Yanqi Zhou, Chung-Ching Chang, Igor Krivokon, Will Rusch,
Marc Pickett, Pranesh Srinivasan, Laichee Man, Kathleen Meier-Hellstern, Meredith Ringel
-----
Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen,
Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina,
Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm,
Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise AgueraArcas, Claire Cui, Marian Croak, Ed Chi, and Quoc Le. Lamda: Language models for dialog
applications, 2022.
[12] Yiran Wu, Feiran Jia, Shaokun Zhang, Han-Tai Li, Erkang Zhu, Yue Wang, Yin Tat Lee,
Richard Peng, Qingyun Wu, and Chi Wang. An empirical study on challenging math problem
[solving with gpt-4. ArXiv, abs/2306.01337, 2023. URL https://api.semanticscholar.](https://api.semanticscholar.org/CorpusID:259063798)
```
org/CorpusID:259063798.
```
[13] Qihao Zhu Runxin Xu Junxiao Song Mingchuan Zhang Y.K. Li Y. Wu Daya Guo Zhihong Shao,
Peiyi Wang. Deepseekmath: Pushing the limits of mathematical reasoning in open language
[models, 2024. URL https://arxiv.org/abs/2402.03300.](https://arxiv.org/abs/2402.03300)
**5** **Appendix**
**6** **Case Study on our MATH**
In this case, there the same real number is presented in different form in the answer and predicted
point. Thus the evaluation toolkit must compare the numerical equivalence between the number of
coordinates to correctly assess them.
```
Example 1:
Reason: Different Form
Quesiton: Suppose $P$ is the point $(5,3)$ and $Q$ is the point $(-3,6)$. What
is the midpoint of $\\overline{PQ}$?
Ground Truth: \\left(1,\\frac{9}{2}\\right)
prediction: $(1.0, 4.5)$
Human: True
Tora: False
MATH: False
Ours: True
```
In this case, same expressions are not presented in the exact same form. The evaluation toolkit may
need to simplify the polynomial to the same level to evaluate the them.
```
Example 2:
Reason: Expression
Quesiton: Factor $36-4x^2$ completely.
Ground Truth: 4(3-x)(3+x)
prediction: $-4(x - 3)(x + 3)$
Human: True
Tora: False
MATH: False
Ours: True
```
In the following case, the answer has a followed unit string, which should be cleaned before evaluation.
```
Example 3:
Reason: Unit
Quesiton: Diana can either invest $20,\\!000$ dollars for $4$ years with a simple
interest rate of $6\\%$ or an interest rate of $7\\%$ which compounds quarterly.
How many more dollars, rounded to the nearest dollar, would she get with the better
interest rate than with the worse one?
```
-----
```
Ground Truth: 1599 \\text{ dollars}
prediction: $1,599.00$
Human: True
Tora: False
MATH: False
Ours: True
```
In this case, the answer is a vector, and the comparison should be applied in element-wise manner.
```
Example 4:
Reason: Elementwise comparision
Question: Convert the point $( 1, -1, -6 )$ in rectangular coordinates to cylindrical
coordinates. Enter your answer in the form $(r,\\theta,z),$ where $r > 0$ and
$0 \\le \\theta < 2 \\pi.$
Ground Truth: \\left( \\sqrt{2}, \\frac{7 \\pi}{4}, -6 \\right)
Prediction: (1.4142135623730951,5.497787143782138,-6)
Human: True
Tora: False
MATH: False
Ours: True
```
**7** **Case Study on our GAOKAO**
In this case, the answer and the prediction are both intervals in different form. They should be
converted to a interval range to be properly compared.
```
Example 1:
Reason: Interval
Quesiton: Given sets $M=\\{x|x+2\\geq 0\\},N=\\{x|x-1<0\\}$, find $M \\cap N$.
Ground Truth: x \\in [-2, 1)
prediction: $[-2, 1)$
Human: True
Tora: False
MATH: False
Ours: True
```
For this case, the ground truth are expressed as a set, and it means the order of the prediction does not
matter.
```
Example 2:
Reason: Set
Quesiton: If the universal set is $U=\\{1,2,3,4,5\\}$, and $M=\\{1,4\\},N=\\{2,5\\}$,
find $N \\cup \\overline{M}$.
Ground Truth: 2, 3, 5
prediction: 5, 3, 2
Human: True
Tora: False
MATH: False
Ours: True
```
For this case, line functions are not presented in the exact same form. The evaluation toolkit may need
to simplify the polynomial on the right side to the same level, then evaluate the function equation.
```
Example 3:
```
-----
```
Reason: Expression
Quesiton: Find the tangent line to the function $y=\\frac{e^{x}}{x+1}$ at point
$\\left(1,\\frac{e}{2}\\right)$.
Ground Truth: y=\\frac{e}{4}x+\\frac{e}{4}
prediction: $f(x) = \\frac{e(x + 1)}{4}$
Human: True
Tora: False
MATH: False
Ours: True
```
**8** **Case Study on MATH of ToRA**
In this instance, a set of values should be answered to the question. So the order of numbers separated
by commas does not matter.
```
Example 1:
Reason: Set
Quesiton: A line segment of length $5$ has one endpoint at $(1, 2)$ and
the other endpoint at $(4, b)$. Find all possible values of $b$, separated by commas.
Ground Truth: 6,-2
prediction: -2,6
Human: True
Tora: False
MATH: False
Ours: True
```
-----
| [
"Boning, Zhang",
"Chengxi, Li",
"Kai, Fan"
] | 2024-04-22T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2404.13925 | https://arxiv.org/abs/2404.13925 | null |
MATPROVE Dataset - Mathematical Problem-solving Dataset of Lessons and Exercises | N/A | null | null | [
", Martin"
] | 2024-09-01T00:00:00 | null | false | 0 | 0 | null | https://aitp-conference.org/2024/abstract/AITP_2024_paper_13.pdf | null | null |
MAgICoRe: Multi-Agent, Iterative, Coarse-to-Fine Refinement for Reasoning | Large Language Models' (LLM) reasoning can be improved using test-time aggregation strategies, i.e., generating multiple samples and voting among generated samples. While these improve performance, they often reach a saturation point. Refinement offers an alternative by using LLM-generated feedback to improve solution quality. However, refinement introduces 3 key challenges: (1) Excessive refinement: Uniformly refining all instances can over-correct and reduce the overall performance. (2) Inability to localize and address errors: LLMs have a limited ability to self-correct and struggle to identify and correct their own mistakes. (3) Insufficient refinement: Deciding how many iterations of refinement are needed is non-trivial, and stopping too soon could leave errors unaddressed. To tackle these issues, we propose MAgICoRe, which avoids excessive refinement by categorizing problem difficulty as easy or hard, solving easy problems with coarse-grained aggregation and hard ones with fine-grained and iterative multi-agent refinement. To improve error localization, we incorporate external step-wise reward model (RM) scores. Moreover, to ensure effective refinement, we employ a multi-agent loop with three agents: Solver, Reviewer (which generates targeted feedback based on step-wise RM scores), and the Refiner (which incorporates feedback). To ensure sufficient refinement, we re-evaluate updated solutions, iteratively initiating further rounds of refinement. We evaluate MAgICoRe on Llama-3-8B and GPT-3.5 and show its effectiveness across 5 math datasets. Even one iteration of MAgICoRe beats Self-Consistency by 3.4%, Best-of-k by 3.2%, and Self-Refine by 4.0% while using less than half the samples. Unlike iterative refinement with baselines, MAgICoRe continues to improve with more iterations. Finally, our ablations highlight the importance of MAgICoRe's RMs and multi-agent communication. | This work proposes MAgICoRe, which avoids excessive refinement by categorizing problem difficulty as easy or hard, solving easy problems with coarse-grained aggregation and hard ones with fine-grained and iterative multi-agent refinement, and employs a multi-agent loop with three agents. | [
"Swarnadeep, Saha",
"Elias, Stengel-Eskin",
"Justin Chih-Yao, Chen",
"Mohit, Bansal",
"Archiki, Prasad"
] | 2024-09-18T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2409.12147v1 | https://arxiv.org/abs/2409.12147 | https://www.semanticscholar.org/paper/c7ecb703d810dfb0794b7a168eb5e2a6b381a534 |
|
MM-MATH: Advancing Multimodal Math Evaluation with Process Evaluation and Fine-grained Classification | To advance the evaluation of multimodal math reasoning in large multimodal models (LMMs), this paper introduces a novel benchmark, MM-MATH. MM-MATH consists of 5,929 open-ended middle school math problems with visual contexts, with fine-grained classification across difficulty, grade level, and knowledge points. Unlike existing benchmarks relying on binary answer comparison, MM-MATH incorporates both outcome and process evaluations. Process evaluation employs LMM-as-a-judge to automatically analyze solution steps, identifying and categorizing errors into specific error types. Extensive evaluation of ten models on MM-MATH reveals significant challenges for existing LMMs, highlighting their limited utilization of visual information and struggles with higher-difficulty problems. The best-performing model achieves only 31% accuracy on MM-MATH, compared to 82% for humans. This highlights the challenging nature of our benchmark for existing models and the significant gap between the multimodal reasoning capabilities of current models and humans. Our process evaluation reveals that diagram misinterpretation is the most common error, accounting for more than half of the total error cases, underscoring the need for improved image comprehension in multimodal reasoning. | This paper introduces a novel benchmark, MM-MATH, consisting of 5,929 open-ended middle school math problems with visual contexts, with fine-grained classification across difficulty, grade level, and knowledge points, which incorporates both outcome and process evaluations. | ### MM-MATH: Advancing Multimodal Math Evaluation with Process Evaluation and Fine-grained Classification
**Kai Sun[∗], Yushi Bai[∗], Ji Qi, Lei Hou, Juanzi Li**
Tsinghua University
**Abstract**
To advance the evaluation of multimodal math
reasoning in large multimodal models (LMMs),
this paper introduces a novel benchmark, MMMATH. MM-MATH consists of 5,929 openended middle school math problems with visual contexts, with fine-grained classification
across difficulty, grade level, and knowledge
points. Unlike existing benchmarks relying
on binary answer comparison, MM-MATH incorporates both outcome and process evaluations. Process evaluation employs LMM-as-ajudge to automatically analyze solution steps,
identifying and categorizing errors into specific error types. Extensive evaluation of ten
models on MM-MATH reveals significant challenges for existing LMMs, highlighting their
limited utilization of visual information and
struggles with higher-difficulty problems. The
best-performing model achieves only 31% accuracy on MM-MATH, compared to 82% for
humans. This highlights the challenging nature of our benchmark for existing models and
the significant gap between the multimodal reasoning capabilities of current models and humans. Our process evaluation reveals that diagram misinterpretation is the most common
error, accounting for more than half of the total error cases, underscoring the need for improved image comprehension in multimodal
reasoning. The code and dataset are available
[at https://github.com/kge-sun/MM-Math.](https://github.com/kge-sun/MM-Math)
tasks require understanding multimodal information and interleaving reasoning within this information (Lightman et al., 2023). To further advance
LMM’s mathematical capabilities, we believe the
following two issues urgently need addressing: (1)
**What are the specific reasons that lead to the**
**model’s mistakes, such as misunderstanding the**
diagram or errors in reasoning? (2) How does
**the model perform across different categories**
**of multimodal math problems, and which spe-**
cific types of problems does the model excel at or
struggle with?
In this paper, we introduce MM-MATH benchmark to provide a more fine-grained and reliable
assessment of LMMs’ multimodal math capability. MM-MATH comprises a total of 5,793 openended multimodal math problems from middle
school. We show an overview of the design of
MM-MATH in Figure 1. To address the aforementioned issue (1), MM-MATH combines traditional
outcome evaluation (comparing the model’s answer
to groundtruth and reaching binary result) with
**process evaluation. Process evaluation involves**
using LMM-as-a-judge (Zheng et al., 2023) to automatically identify errors in the model’s output
process and categorize the causes of these errors.
Concretely, we employ GPT-4V (OpenAI, 2023)
to compare the step-by-step solution generated by
the model with our annotated groundtruth solution,
and identify the first error in the model’s process
to determine the main reason that leads to a wrong
answer. We categorize the causes of LMM’s errors
into four types, including diagram misinterpreta_tion, reasoning error, calculation error, and textual_
_condition misunderstanding._
In response to issue (2), MM-MATH includes
**fine-grained classification, where the problems**
are classified along three dimensions: difficulty,
grade level, and knowledge points, to evaluate the
breadth, depth, and specific knowledge for math
reasoning capabilities of LMMs. For difficulty, we
**1** **Introduction**
Due to their exceptional performance in handling
complex text and images, large multimodal models (LMMs) such as GPT-4V (OpenAI, 2023) and
Claude-3 (Anthropic, 2024) have garnered significant interest in both industry and academia. Previous studies suggest that they still underperform
on multimodal math reasoning tasks (Chen et al.,
2021; Lu et al., 2023; Zhang et al., 2024), as such
*Equal contribution
-----
**Q: BC = 1/2 AB, D is the midpoint of AC,** …, we see that BC = 1/2 AB. Since D is the midpoint
and DC = 3cm. What is the length of AB? of AC, we can deduce that AB = 2AD, …, we have AB
= 2AD = 2×3cm = \boxed{6cm}.
**LMM**
Outcome evaluation
**Question**
Evaluation
: 6cm≠4cm
_Knowledge_
_classification_ Process evaluation
**Answer** **Process: Since D is… Furthermore,**
Hard III since BC=1/2 AB, ∴BC=1/3AC=1/3×
Medium II 6=2cm, … =\boxed{4cm}.
**Answer: 4cm** Reasoning error
Easy Grade I - Deduction on
**GPT-4V**
#### MM-MATH AB=2AD is wrong
Figure 1: An overview of the MM-MATH benchmark design. The problems are classified along their difficulty,
grade level, and knowledge point. We include both outcome evaluation and process evaluation to identify and
attribute the error in model’s reasoning process.
**2** **MM-MATH**
**2.1** **Overview of MM-MATH**
**Design Principle. Multimodal mathematical rea-**
soning tasks demand an understanding of both the
problem’s text and the associated diagram, requiring math reasoning to produce a step-by-step solution that leads to the final answer. We adopt an
open-ended format for two reasons: 1) Other formats, such as multiple choice, make it easier for the
model to guess the correct answer by chance (Wang
et al., 2024b). 2) Open-ended format better facilitates step-by-step solution process to help identify
the error in the model’s response. We adhere to
the following principles when constructing MMMATH:
- Comprehensive coverage: We aim to cover as
many types and difficulty levels of problems as
possible. Consequently, we collect all math problems that contain visual content from exams and
textbooks used in secondary schools.
- Computation problems only: While math problems may include proofs, computations, and
drawings, we exclusively select computationtype problems for our dataset.
- Uniform data format: Each problem in the
dataset includes a question statement, an image,
a human-annotated step-by-step solution process,
and multi-dimensional metadata annotations.
- Multi-dimensional metadata annotations: For
each problem, we also provide its grade level, difficulty, and knowledge point tagging from human
classify problems into three levels—easy, medium,
and hard—based on the accuracy of human students on the problems. For grade level, we include
problems in middle school, encompassing all relevant visual math problems taught in each grade.
For knowledge points, each problem is classified
according to a predefined three-level knowledge
taxonomy by experienced teachers. These comprehensive annotations in the MM-MATH dataset
result in clear difficulty distinction, extensive data
coverage, and systematic knowledge organization.
We conduct an extensive evaluation of both opensource and closed-source LMMs on MM-MATH.
Outcome evaluation reveals that our benchmark
poses significant challenges for existing LMMs.
For example, the latest Sota model, GPT-4o (OpenAI, 2024), achieves an accuracy of only 31%,
compared to an 82% accuracy of human students.
Moreover, all models perform poorly on hard-level
problems, with none exceeding 11% accuracy, and
some models even fail to solve any problems correctly. We further find that current LMMs’ multimodal reasoning remains primarily text-based,
lacking effective utilization of graphical information. This is evidenced by the minimal accuracy
difference—only 2-3 percentage points—between
when the model is given only textual input and
when it is provided with both text and images. Our
process evaluations show that diagram misinterpretation accounts for more than 50% of the total
errors for current LMMs, suggesting the most critical direction for improvement is enhancing their
abilities to recognize and interpret math diagrams.
-----
Table 1: Comparison of our MM-MATH benchmark with existing multimodal benchmarks. For the ‘size’ column,
we only include the number of multimodal math problems in each benchmark.
**Benchmark** **Size** **Question Type** **Grade** **Fine-grained Classification** **Process Evaluation**
UniGeo (Chen et al., 2022) 4,998 choice middle school ✓
GeoQA (Chen et al., 2021) 5,010 choice middle school
GeoQA+ (Cao and Xiao, 2022) 2,518 choice middle school
Geometry3K (Lu et al., 2021) 3,002 choice middle school
OlympiadBench (He et al., 2024) 3,102 open-ended Olympiad-level ✓
MathVista (Lu et al., 2023) 6,141 choice & open-ended -
MathVerse (Zhang et al., 2024) 2,612 choice & open-ended - ✓
MM-MATH 5,929 open-ended middle school ✓ ✓
Table 2: Key statistics of MM-MATH.
**Statistic** **Number** **11%Circles** **19%**
Total Problems 5,929
**Difficulty** **Triangles14%** **Properties of Shapes**
*Easy 378 **8%**
*Medium*Hard 4,4881,063 **Quadratic Functions6%** **Function22%** **Shape Trans-formation** **Acute Angle Functions**
**Grade*Grade Seven** 682 **Linear Functions8%** **10%**
*Grade Eight 2,590 **Inversel Functions7%** **Shapes10%**
*Grade Nine 2,657 **Similarity of**
Average Question Length 488
Average Answer Length 275
Max Question Length 2,391
Max Answer Length 2,781
educational taxonomy as its metadata.
**Dataset Overview. MM-MATH is the first multi-**
modal math benchmark to include process evaluation and fine-grained classification, as highlighted
in the comparison of existing multimodal benchmarks in Table 1. Detailed statistics for MMMATH are provided in Table 2, and the distribution
of knowledge points is illustrated in the pie chart
in Figure 2.
**2.2** **Dataset Construction Pipeline**
**Data collection. The problems in MM-MATH**
[dataset are sourced from the 21st Century Edu-](https://www.21cnjy.com/)
[cation Network, which is one of the largest on-](https://www.21cnjy.com/)
line question banks for primary and secondary
schools in China. It provides a comprehensive
collection of challenging, curriculum-aligned, and
exam-relevant questions designed to assess student
learning capabilities. We restrict the problems from
the 2021-2022 academic year, manually filtering
for computational math problems with visual con
**11%Circles**
**19%**
**Quadrilaterals**
**Triangles14%** **Properties of Shapes**
**52%** **Lines**
**8%**
**Quadratic Functions6%** **Function22%** **Shape Trans-formation** **Acute Angle Functions**
**26%** **7%**
**Linear**
**Functions8%**
**10%**
**Axial Symmetry**
**Inversel Functions7%** **Shapes10%**
**Similarity of**
Figure 2: Knowledge point distribution of MM-MATH.
_Properties of Shapes refers to the characteristics of dif-_
ferent shapes, Shape transformation investigates the
deformation and movements of shapes, and Function
refers to the mutual reasoning between algebraic expressions and graphs.
text.
**Format transformation. Most of the problems**
from the original database are in MathML format.
However, considering the widespread use of LaTeX in existing mathematical datasets, we devise
a systematic approach to convert MathML into
standard LaTeX format for easier integration with
other datasets. Specifically, we utilize MathConverter[1] to transform MathML representations of
mathematical formulas into LaTeX. For instance,
1
2 [in MathML is converted to \frac{1}{2} in La-]
TeX. Additionally, we establish string conversion
rules to change symbol elements into LaTeX format. For example, we convert “\text{△}” to “\triangle”. To address the use of non-standard punctuation in Chinese strings, such as full-width plus
signs, we leverage GPT-4 (Achiam et al., 2023)
for conversion, with manual verification of the fi
[1https://github.com/hexinnovation/MathConverter.](https://github.com/hexinnovation/MathConverter)
-----
nal output. This systematic process ensures the
accuracy of numerical values in LaTeX while maintaining readability and standardization in the output. During GPT-4 processing, we also encapsulate
the final answers within \boxed{}, a technique inspired by the construction approach of the MATH
dataset (Hendrycks et al., 2021), which facilitates
comparison with groundtruth answers for outcome
evaluation.
Our collected data contains four distinct question types: multiple-choice, fill-in-the-blank, openended, and composite questions. We convert them
into uniform open-ended questions in the following manners. For multiple-choice and fill-in-theblank questions, we rephrase them into open-ended
forms and extract their explanations as step-by-step
derivations. For composite questions with a common textual problem and multiple sub-questions,
we treat each statement as the premise for subquestions, integrating the conclusions of preceding
sub-questions into the subsequent ones. More details for the transformation process are presented
in Appendix B.
Additionally, since the original data is in Chinese, catering to Chinese students, we translate the
dataset into English using GPT-4. To ensure accuracy, we manually verify the translations. This effort aims for a fairer comparison of LMMs trained
in different languages.
**Fine-grained classification. We categorize our**
dataset across several dimensions, including difficulty, grade level, and knowledge point. Problems are classified by difficulty—simple, medium,
and hard—based on the average accuracy achieved
by students. Simple problems have a scoring rate
above 85%, medium between 70% and 85%, and
hard below 70%. From Table 2, it can be seen
that the number of problems of each difficulty level
follows a Gaussian distribution.
Next, we organize questions by educational
grade: seven, eight, and nine grade, representing
the three years of junior middle school in China.
Since higher-grade knowledge generally requires
an understanding of lower-grade knowledge as a
prerequisite, this classification allows us to better
study whether the LMMs exhibit a similar dependency on prior knowledge when solving problems.
Additionally, each problem is tagged with specific knowledge points, identified based on insights
from teachers. This enables targeted retrieval, application, and analysis of the model’s knowledge
gaps in specific areas. In Figure 2, we present the
knowledge point taxonomy and the proportion of
data in each category.
**3** **Evaluation**
**3.1** **Evaluation Protocols**
Recent advancements in LMMs have enabled the
generation of textual responses for mathematical
problem-solving (Chen et al., 2024; Liu et al.,
2024b; Hong et al., 2023; Qi et al., 2024), a process
that imitates human reasoning in mathematics. This
capability introduces new evaluation criteria focusing on the generative nature of LMMs, especially
concerning the intermediate solving steps. Accordingly, we propose a systematic method for assessing the performance of LMMs in the MM-MATH
datasets in Figure 1, divided into three phases: (1)
LMM generates formatted solutions to math problems, (2) Compare the generated solution against
the groundtruth solution, and (3) Score the result
to evaluate model performance and identify process errors. Specifically, we input the textual problem and associated images, prompting the model
to generate solutions with answers encapsulated
in \boxed{}. Our prompt details are provided in
Appendix C. During the outcome comparison, we
extract the final results from \boxed{}. For process
comparison, we use GPT-4V (OpenAI, 2023) to automatically perform a comparative analysis of the
model-generated solutions against the groundtruth
solutions.
**3.2** **Evaluation Strategy**
Existing large model math benchmarks (He et al.,
2024; Wang et al., 2024a; Liu et al., 2024c) predominantly use binary comparison to assess the
problem-solving capabilities of LLMs or LMMs,
focusing on the final answer as the primary indicator of the model’s capability. However, the accuracy of an answer typically relies on a correct
intermediate reasoning process, with accurate reasoning steps leading to correct answers, and incorrect reasoning leading to erroneous solutions.
By analyzing the model’s solution process, we can
identify the causes of errors and provide a more
accurate assessment. Thus, our evaluation incorporates two methods: outcome evaluation and process
evaluation, designed to assess both the final answer
and the model’s reasoning process.
**Outcome Evaluation.**
Our evaluation requires that answers generated
by LMMs be encapsulated within \boxed{}, en
-----
|Question: … and CB is extended to point F such that BF = BC. … what is the length of AC? Answer: 5|Response: Process evaluation First, since CD is the median, point D is the midpoint of II. Reasoning Error AB. so AD=AB. - Should be AD=DB ! Moreover, we know BF = BC, we have CF = 2BC = 8. " … IV. Textual condition Next, we observe that BCFE is a parallelogram, …, thus misunderstanding AC = 12 = 3. Therefore, the final answer is 3. - Should be BF=BC I. Diagram III. Calculation error misinterpretation - AC = 12 = 2 3 - Not a parallelogram|
|---|---|
Figure 3: Example for four different types of errors in multimodal math reasoning.
abling direct comparison. We judge final answers
according to their category: (1) For numerical answers, we accept the model’s answers as long as
the numerical gap to the groundtruth answer falls
within a permissible error margin, e.g., 1.414 is
acceptable for an answer of _√2 as their difference_
is less than 0.01. (2) For expression-type answers
such as y = ax + b, we utilize the SymPy package to simplify expressions. We then compare the
model’s simplified output with the groundtruth expression for exact matching. (3) For interval-type
answers like (a, b) or a < x < b, we standardize
them into the format (a, b) and verify the equality
of boundary values. Additionally, we address special cases where models append extra signs to final
results (e.g., cm) or generate exponential values
like 2[2024], by removing the extra sign and transforming the values for proper comparison. We
manually verified 500 evaluation results using our
outcome evaluation pipeline and found only 13
errors.
**Process Evaluation. The problem-solving pro-**
cess of the multimodal model involves multiple
factors, including a deep understanding of the problem conditions, extracting information from diagrams, and utilizing the models’ knowledge to derive results. Consequently, our process evaluation
takes the original textual question, associated image, and the groundtruth solution, and uses GPT-4V
to compare the content generated by LMMs, with
the prompt shown in Appendix C. The solutions
generated by LMMs may contain numerous errors.
In our prompt design, we aim to identify the first
**error in the model’s generated process compared**
to groundtruth, since it is often the initial error
that leads to further mistakes, resulting in incorrect
outcomes. We use this first error to classify the
cause of error in our process evaluation. Through
deeper examination, we find that the first identified
error may sometimes not be the main error of the
models’ solution, which we will analyze further in
Appendix D. We classify the errors into four types,
exemplified in Figure 3.
I. Diagram misinterpretation: This refers to the
LMM’s inability to accurately understand the elements and their attributes in diagrams, such as the
shapes, geometries, and their spatial relationships.
II. Reasoning error: This occurs when the model
lacks or incorrectly applies logical reasoning
knowledge. For instance, in the case of Figure 3,
the model incorrectly reasons that AD = AB from
_D is the midpoint of AB, while AD = DB should_
be the correct deduction.
III. Calculation error: This error arises from the
computational step during problem-solving and includes mistakes caused by miscalculations in equations and functions.
IV. Textual condition misunderstanding: This type
of error involves a model misinterpreting the given
conditions of a textual problem. For example, in
Figure 3, the problem states that BF = BC, but
the model mistakenly interprets this condition as
_BF =_ [1]
2 _[BC][ during the solution process.]_
**4** **Experiments**
**4.1** **Experimental Setup**
To comprehensively investigate the challenges of
MM-MATH and the mathematical proficiency of
models, we structure our experiments around two
setups: (1) Text-Only Reasoning and (2) Multimodal Reasoning. For the first setting, we evaluate LMMs, including Gemini-Pro-V (Gemini,
2023), Claude-3-Opus (Anthropic, 2024), GPT4[2] (Achiam et al., 2023), GPT-4V (OpenAI, 2023),
and GPT-4o (OpenAI, 2024) by providing only
the textual contexts (i.e., questions) as inputs. For
2We use the gpt-4-0125-preview version for GPT-4.
-----
Table 3: The outcome performance of both closed-source and open-source large models on MM-MATH in
comparison with the human-level baseline. The evaluation involves three dimensions: difficulty, grade levels, and
_knowledge points, each comprised of three fine-grained classes. The results are presented as percentages of accuracy._
|Model|Easy Medium Hard|Seven Eight Nine|Trans Shape Func|Average|
|---|---|---|---|---|
_Baseline_
|Human|90.7 81.9 47.6|85.6 73.7 77.9|81.1 83.2 77.5|80.4|
|---|---|---|---|---|
_Large Multimodal Models (w/o Image)_
|Gemini-Pro-V Claude-3-Opus GPT-4 GPT-4V GPT-4o|10.1 5.7 1.8 31.7 17.3 7.2 37.0 20.3 7.2 35.2 18.1 7.2 41.4 23.9 3.6|10.0 5.3 6.7 32.5 14.9 2.2 38.7 17.1 26.2 31.2 17.2 22.3 35.0 23.9 30.5|6.6 5.7 6.4 20.8 18.5 12.9 23.3 21.4 18.1 18.4 21.4 13.3 22.8 29.7 19.4|6.2 19.2 22.5 20.4 27.6|
|---|---|---|---|---|
_Large Multimodal Models (w/ Image)_
|DeepSeek-VL-7B-Chat Yi-34B-Chat LLaVA-V1.6-34B InternVL-4B-Chat-1.5 Qwen-VL-Max Gemini-Pro-V Claude-3-Opus GPT-4V GPT-4o|17.4 4.7 1.4 12.9 5.0 1.5 8.8 5.4 1.8 18.5 10.7 1.8 14.5 11.2 3.6 19.3 8.2 0.0 29.5 19.3 3.6 37.8 21.2 1.8 45.8 30.0 10.9|7.5 6.6 3.9 21.3 5.6 3.5 12.6 6.5 4.2 12.5 11.1 11.9 16.2 1.1 11.3 1.5 7.4 11.5 32.5 16.4 23.0 28.7 17.9 28.0 40.0 26.0 36.0|3.4 6.0 3.5 5.0 7.6 3.8 4.0 6.5 3.8 11.4 12.3 5.5 11.0 12.5 10.5 10.4 10.6 7.1 20.6 21.7 16.9 22.2 24.7 19.5 30.7 33.7 26.2|5.4 6.5 5.8 11.6 11.4 9.7 20.3 23.1 31.8|
|---|---|---|---|---|
the second setting, we feed the entire multimodal
contexts (i.e., questions and images) as inputs and
evaluate both closed-source LMMs including GPT4V, GPT-4o, Claude-3-Opus, Qwen-VL-Max (Bai
et al., 2023) and open-source LMMs for DeepSeekVL-7B-Chat (Lu et al., 2024), Yi-34B-Chat (Young
et al., 2024), InternVL-4B-Chat-V1.5 (Chen et al.,
2024), and LLaVA-V1.6-34B (Liu et al., 2024a).
All selected models are capable of generating
responses in the expected format, thus ensuring the
validity of the evaluation.
**4.2** **Outcome Evaluation Results**
We first analyze the performance of all models on
the final outcomes of MM-MATH in comparison to
a human-level baseline (the average performance
of middle-school examinees from the online platform). The experimental results are shown in Table 3. Here are our main findings from the results.
**MM-MATH presents substantial challenges for**
**current LMMs** From the evaluation results, we
find that the most representative closed-source
model to date, GPT-4o, performed the best across
the board, achieving an average accuracy of 31.8%,
which significantly outperformed the best opensource model, InternVL-4B-Chat-1.5, with an average accuracy of 11.6%. However, compared to
the human-level baseline of 80.4%, this best performance of the LMM still remains substantial room
for improvement by 48.6%.
**LMMs gain limited benefits from visual contexts**
Another notable observation is that LMMs with the
text-only setups (i.e., only questions as inputs) exhibit only slight degradation in performance compared to the multimodal setups (i.e., questions and
images as inputs). For example, there are differences of 4.2%, 2.7%, and 0.8% for the models
GPT-4o, GPT-4v, and Claude-3-Opus, respectively.
This result suggests that current LMMs primarily
rely on linguistic knowledge to solve mathematical
problems, and their utilization of visual contexts
is limited. Detailed case studies are provided in
Appendix E.
**Conclusion from discriminative evaluation di-**
**mensions and capability distribution** In the
difficulty dimension of MM-MATH, we can see
the discriminative stepwise degradations in models
performance on progressively challenging subsets
(e.g., the 10.9%, 30.0%, and 45.8% accuracy scores
on Easy, Medium and Hard subsets for GPT-4o).
This result indicates that the proposed evaluation dimensions exhibit a significant differentiation across
three difficulty levels, making it more beneficial for
exploring the capability shortcomings of models.
In addition, the three types of knowledge points
also provide us with opportunities to understand the
capabilities of models from different fine-grained
perspectives.
-----
1000 Diagram MisinterpretationReasoning Error 937 932
800
647 624 683 717 636 579 585
600 529 409 424 468 480 476
400 339 365 317
Number of Errors
200
0
GPT-4oGPT-4VClaude-3-OpusQwen-VL-MaxGemini-Pro-VInternVL-4B-Chat-1.5DeepSeek-VL-7B-ChatYi-34B-ChatLLaVA-V1.6-34B
Figure 5: Number of the first two errors in
evaluated LMMs.
200
Calculation Error
175 Textual Condition Misunderstanding
150
125
100
75
51 55
Number of Errors 5025 12 29 26 28 19 32 24 25 27 33 31 16 28 24 33 22
0
GPT-4oGPT-4VClaude-3-OpusQwen-VL-MaxGemini-Pro-VInternVL-4B-Chat-1.5DeepSeek-VL-7B-ChatYi-34B-ChatLLaVA-V1.6-34B
Figure 6: Number of the last two errors in
evaluated LMMs.
Diagram Misinterpretation Calculation Error
Reasoning Error Textual Condition Misunderstanding
GPT-4o GPT-4V Claude-3-Opus
36.4% 36.8% 38.7%
56.8% 58.2% 57.0%
Qwen-VL-Max Gemini-Pro-V InternVL-4B-Chat-1.5
38.8% 38.4%
53.0%
39.7%
56.6% 57.4%
DeepSeek-VL-7B-Chat Yi-34B-Chat LLaVA-V1.6-34B
27.1% 24.4%
48.0%
47.5%
69.5% 71.6%
Figure 4: Proportion of four types of errors in various LMMs, with
diagram misinterpretation errors and reasoning errors constituting
the majority.
Regarding the evaluation results on different
grade levels, one notable finding is that the accuracy distribution of most models across the three
grade levels is similar to the distribution of human
behaviors. For example, the models GPT-4o, GPT4V, and Claude-3-Opus all showed the best performance on the Seventh-grade subset (with 40.0%,
28.7%, and 32.5% accuracy scores), followed by
the ninth-grade subset (with 36.0%, 28.0% and
23.0% accuracy scores), and were the least accurate
on the seventh-grade subset (with 26.0%, 17.9%
and 16.4% accuracy scores, respectively). This result suggests that the learning curve of LMMs in
solving mathematical problems is similar to that
of humans but falls short of reaching the human
cognitive level.
**4.3** **Process Evaluation Results**
Benefiting from the comprehensive annotation, we
further evaluate the models performance on the solution process to thoroughly investigate the causes
of errors and pinpoint the weaknesses of LMMs.
Considering the variability in natural language expressions, we employ GPT-4V to compare the solutions generated by LMMs with the groundtruth
solutions, and identify the first error in the solutions
to analyze the causes of errors. We empirically find
that this method can effectively align the solutions
from LMMs with the groundtruth, enabling an unbiased validation of the errors. Though effective,
we find that there is still room for improvement in
this measurement, with approximately 9% of errors not being correctly identified (see the detailed
analysis in Appendix F).
Figure 4 illustrates the proportion of different
error types in both open-source and closed-source
LMMs. Figure 5 and 6 further show the number of
errors for each error type. Our main findings are
detailed below.
**Weak comprehension of elements in images is**
**a major cause** It is evident that errors related
to the recognition of image elements or their attributes constitute the highest proportion, exceeding half of the total errors. This indicates that existing LMMs cannot yet sufficiently incorporate
image information into their reasoning processes,
limiting their efficacy in multimodal reasoning.
Intriguingly, among closed-source LMMs—GPT4o, GPT-4V, Claude-3-Opus, Gemini-Pro-V, and
-----
Qwen-VL-Max—the proportion of errors in image recognition are highly consistent, around 57%.
This might imply that the visual encoder modules
used by these models have common issues and cannot handle certain types of images. Additionally,
the much lower proportion of diagram misinterpretation errors in InternVL-4B-Chat-1.5 (39.7%) explains why a 4B small model has even better overall
performance than Gemini-Pro-V (57.4%) or QwenVL-Max (56.6%). Therefore, the key to enhancing
the LMM’s multimodal math problem-solving ability lies in understanding the visual context, and this
step does not necessitate a large model size. Examples of reasoning errors involving image elements
and attributes are provided in Appendix H.
**Multimodal models exhibit poor use of theorems**
**during reasoning** We find that reasoning errors
in large language models (LLMs) are often due to
the incorrect application of theorems, accounting
for about 40% of overall errors. Misuse or omission of theorems misleads these LMMs, leading to
errors (e.g., GPT-4V misuses the cosine rule, resulting in no solution, as detailed in Appendix G).
Unlike image understanding, we find that a larger
model size effectively helps reduce reasoning errors in the model. For instance, while InternVL4B-Chat-1.5 exhibits fewer image understanding
errors even with smaller model size, it still encounters more reasoning errors (636) compared to larger
models such as Gemini-Pro-V (480) and Qwen-VLMax (468).
**Calculation is not a primary issue but reflects**
**a capability gap** In the process evaluation of
LMMs, calculation errors constitute a relatively
lower proportion. However, the error in some models (e.g., GPT-4o, 51 errors) is significantly higher
compared to others (e.g., GPT-4V, 29 errors). This
indicates that while calculation is not the primary
problem, equipping them with more powerful numerical computation capabilities can further boost
the models’ problem-solving success rates.
**Models have an effective understanding of the**
**textual problem** As shown in Figure 4, among
all nine models from both open-source and closedsource, the proportion of errors due to misunderstanding of the textual conditions is extremely
small (less than 2% of the total errors). This suggests that the text-based capabilities of LMMs are
not the bottleneck in solving multimodal mathematical problems. Instead, we should focus more
on fine-grained recognition and reasoning of visual
content to enhance the capabilities of LMMs.
**5** **Related Work**
Using large models to solve mathematical problems has recently become a research hotspot.
GSM8k (Cobbe et al., 2021) has widely been used
to evaluate the mathematical abilities of various
LLMs (Touvron et al., 2023; Anil et al., 2023; Gao
et al., 2023b). However, its problems are relatively
simple, and many models can achieve an accuracy
rate of 90% or higher. Recently, more challenging
mathematical benchmarks (Hendrycks et al., 2021;
Liu et al., 2024c; He et al., 2024) have emerged
to further advance mathematical reasoning in language models, but these are typically text-only
based reasoning.
Multimodal mathematical benchmarks trace
back to the study of geometry problems (Seo et al.,
2015; Chen et al., 2022), where geometric elements are described through a specialized parsing language (Seo et al., 2015; Zhang et al., 2022;
Hao et al., 2022) or text described language (Gao
et al., 2023a). Recent rapid developments in
LMMs (Alayrac et al., 2022; Wang et al., 2023;
Liu et al., 2024b; Qi et al., 2024) have led to numerous multimodal math benchmarks (Lu et al., 2023;
Yue et al., 2024; Ying et al., 2024) to assess their
capabilities. However, these benchmarks primarily
composed of multiple-choice questions, evaluating
model performance based on outcome examination. Given the dual nature of multimodal models—
integrating both images and text—such simplistic
evaluations are inadequate. Although some benchmarks, like MathVerse (Zhang et al., 2024), have
begun to focus on the problem-solving process,
they still rely on a binary evaluation approach. In
comparison, our MM-MATH benchmark is constructed with step-by-step solution which enables
both outcome and process evaluations of LMMs.
**6** **Conclusion**
This paper introduces MM-MATH, a challenging
benchmark for evaluating multimodal math reasoning in LMMs. Our findings reveal while current LMMs demonstrate some reasoning ability,
they heavily rely on textual information and struggle to utilize visual cues. This is evidenced by
the minimal accuracy difference between text-only
and multimodal settings, and the prevalence of diagram misinterpretation errors. MM-MATH’s fine
-----
grained classification highlights the need for models that can handle varying problem difficulties and
leverage knowledge across different grade levels.
**7** **Limitations**
We limit our benchmark’s mathematical knowledge
to the middle school level, representing only a portion of K-12 education. In the future, we plan to
expand the scope of MM-MATH to include high
school and college-level multimodal mathematics.
Our evaluation results highlight the current deficiencies of LMMs in solving mathematical problems. While improvements to LMMs have not yet
been made to address these shortcomings, our next
step involves targeted training to enhance the models’ problem-solving capabilities. We believe our
dataset will significantly aid this process, as they
contain detailed solutions paired with each problem.
**References**
Josh Achiam, Steven Adler, Sandhini Agarwal, Lama
Ahmad, Ilge Akkaya, Florencia Leoni Aleman,
Diogo Almeida, Janko Altenschmidt, Sam Altman,
Shyamal Anadkat, et al. 2023. Gpt-4 technical report.
_arXiv preprint arXiv:2303.08774._
Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc,
Antoine Miech, Iain Barr, Yana Hasson, Karel
Lenc, Arthur Mensch, Katherine Millican, Malcolm
Reynolds, et al. 2022. Flamingo: a visual language
model for few-shot learning. Advances in neural
_information processing systems, 35:23716–23736._
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak
Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng
Chen, et al. 2023. Palm 2 technical report. arXiv
_preprint arXiv:2305.10403._
[Anthropic. 2024. Claude3 system card.](https://www.anthropic.com/news/claude-3-family)
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang,
Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou,
and Jingren Zhou. 2023. Qwen-vl: A frontier large
vision-language model with versatile abilities. arXiv
_preprint arXiv:2308.12966._
Jie Cao and Jing Xiao. 2022. An augmented benchmark
dataset for geometric question answering through
dual parallel text encoding. In Proceedings of the
_29th International Conference on Computational Lin-_
_guistics, pages 1511–1520._
Jiaqi Chen, Tong Li, Jinghui Qin, Pan Lu, Liang Lin,
Chongyu Chen, and Xiaodan Liang. 2022. Unigeo: Unifying geometry logical reasoning via reformulating mathematical expression. arXiv preprint
_arXiv:2212.02746._
Jiaqi Chen, Jianheng Tang, Jinghui Qin, Xiaodan Liang,
Lingbo Liu, Eric P Xing, and Liang Lin. 2021.
Geoqa: A geometric question answering benchmark
towards multimodal numerical reasoning. _arXiv_
_preprint arXiv:2105.14517._
Zhe Chen, Weiyun Wang, Hao Tian, Shenglong Ye,
Zhangwei Gao, Erfei Cui, Wenwen Tong, Kongzhi
Hu, Jiapeng Luo, Zheng Ma, et al. 2024. How far
are we to gpt-4v? closing the gap to commercial
multimodal models with open-source suites. arXiv
_preprint arXiv:2404.16821._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, et al. 2021. Training verifiers to solve math
word problems. arXiv preprint arXiv:2110.14168.
Jiahui Gao, Renjie Pi, Jipeng Zhang, Jiacheng Ye, Wanjun Zhong, Yufei Wang, Lanqing Hong, Jianhua Han,
Hang Xu, Zhenguo Li, et al. 2023a. G-llava: Solving
geometric problem with multi-modal large language
model. arXiv preprint arXiv:2312.11370.
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon,
Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023b. Pal: Program-aided language
models. In International Conference on Machine
_Learning, pages 10764–10799. PMLR._
Team Gemini. 2023. Gemini: a family of highly
capable multimodal models. _arXiv preprint_
_arXiv:2312.11805._
Yihan Hao, Mingliang Zhang, Fei Yin, and Lin-Lin
Huang. 2022. Pgdp5k: A diagram parsing dataset for
plane geometry problems. In 2022 26th International
_Conference on Pattern Recognition (ICPR), pages_
1763–1769. IEEE.
Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu,
Zhen Leng Thai, Junhao Shen, Jinyi Hu, Xu Han,
Yujie Huang, Yuxiang Zhang, et al. 2024. Olympiadbench: A challenging benchmark for promoting agi
with olympiad-level bilingual multimodal scientific
problems. arXiv preprint arXiv:2402.14008.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the math dataset. arXiv preprint
_arXiv:2103.03874._
Wenyi Hong, Weihan Wang, Qingsong Lv, Jiazheng
Xu, Wenmeng Yu, Junhui Ji, Yan Wang, Zihan Wang,
Yuxiao Dong, Ming Ding, et al. 2023. Cogagent: A
visual language model for gui agents. arXiv preprint
_arXiv:2312.08914._
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri
Edwards, Bowen Baker, Teddy Lee, Jan Leike,
John Schulman, Ilya Sutskever, and Karl Cobbe.
2023. Let’s verify step by step. _arXiv preprint_
_arXiv:2305.20050._
-----
Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan
[Zhang, Sheng Shen, and Yong Jae Lee. 2024a. Llava-](https://llava-vl.github.io/blog/2024-01-30-llava-next/)
[next: Improved reasoning, ocr, and world knowledge.](https://llava-vl.github.io/blog/2024-01-30-llava-next/)
Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae
Lee. 2024b. Visual instruction tuning. Advances in
_neural information processing systems, 36._
Hongwei Liu, Zilong Zheng, Yuxuan Qiao, Haodong
Duan, Zhiwei Fei, Fengzhe Zhou, Wenwei Zhang,
Songyang Zhang, Dahua Lin, and Kai Chen. 2024c.
Mathbench: Evaluating the theory and application
proficiency of llms with a hierarchical mathematics
benchmark. arXiv preprint arXiv:2405.12209.
Haoyu Lu, Wen Liu, Bo Zhang, Bingxuan Wang, Kai
Dong, Bo Liu, Jingxiang Sun, Tongzheng Ren, Zhuoshu Li, Yaofeng Sun, et al. 2024. Deepseek-vl:
towards real-world vision-language understanding.
_arXiv preprint arXiv:2403.05525._
Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, KaiWei Chang, Michel Galley, and Jianfeng Gao. 2023.
Mathvista: Evaluating mathematical reasoning of
foundation models in visual contexts. arXiv preprint
_arXiv:2310.02255._
Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan
Huang, Xiaodan Liang, and Song-chun Zhu. 2021.
Inter-gps: Interpretable geometry problem solving
with formal language and symbolic reasoning. In
_Proceedings of the 59th Annual Meeting of the Asso-_
_ciation for Computational Linguistics and the 11th_
_International Joint Conference on Natural Language_
_Processing (Volume 1: Long Papers), pages 6774–_
6786.
[OpenAI. 2023. GPT-4V(ision) system card.](https://openai.com/research/gpt-4v-system-card)
[OpenAI. 2024. Hello gpt-4o. https://openai.com/](https://openai.com/index/hello-gpt-4o/)
[index/hello-gpt-4o/.](https://openai.com/index/hello-gpt-4o/)
Ji Qi, Ming Ding, Weihan Wang, Yushi Bai, Qingsong
Lv, Wenyi Hong, Bin Xu, Lei Hou, Juanzi Li, Yuxiao Dong, et al. 2024. Cogcom: Train large visionlanguage models diving into details through chain of
manipulations. arXiv preprint arXiv:2402.04236.
Minjoon Seo, Hannaneh Hajishirzi, Ali Farhadi, Oren
Etzioni, and Clint Malcolm. 2015. Solving geometry
problems: Combining text and diagram interpretation.
In Proceedings of the 2015 conference on empirical
_methods in natural language processing, pages 1466–_
1476.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint_
_arXiv:2307.09288._
Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Mingjie
Zhan, and Hongsheng Li. 2024a. Measuring multimodal mathematical reasoning with math-vision
dataset. arXiv preprint arXiv:2402.14804.
Weihan Wang, Qingsong Lv, Wenmeng Yu, Wenyi
Hong, Ji Qi, Yan Wang, Junhui Ji, Zhuoyi Yang, Lei
Zhao, Xixuan Song, et al. 2023. Cogvlm: Visual expert for pretrained language models. arXiv preprint
_arXiv:2311.03079._
Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni,
Abhranil Chandra, Shiguang Guo, Weiming Ren,
Aaran Arulraj, Xuan He, Ziyan Jiang, et al. 2024b.
Mmlu-pro: A more robust and challenging multi-task
language understanding benchmark. arXiv preprint
_arXiv:2406.01574._
Kaining Ying, Fanqing Meng, Jin Wang, Zhiqian Li,
Han Lin, Yue Yang, Hao Zhang, Wenbo Zhang, Yuqi
Lin, Shuo Liu, et al. 2024. Mmt-bench: A comprehensive multimodal benchmark for evaluating large
vision-language models towards multitask agi. arXiv
_preprint arXiv:2404.16006._
Alex Young, Bei Chen, Chao Li, Chengen Huang,
Ge Zhang, Guanwei Zhang, Heng Li, Jiangcheng
Zhu, Jianqun Chen, Jing Chang, et al. 2024. Yi:
Open foundation models by 01. ai. arXiv preprint
_arXiv:2403.04652._
Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng,
Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang,
Weiming Ren, Yuxuan Sun, et al. 2024. Mmmu: A
massive multi-discipline multimodal understanding
and reasoning benchmark for expert agi. In Pro_ceedings of the IEEE/CVF Conference on Computer_
_Vision and Pattern Recognition, pages 9556–9567._
Ming-Liang Zhang, Fei Yin, Yi-Han Hao, and ChengLin Liu. 2022. Plane geometry diagram parsing.
_arXiv preprint arXiv:2205.09363._
Renrui Zhang, Dongzhi Jiang, Yichi Zhang, Haokun
Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan
Lu, Kai-Wei Chang, Peng Gao, et al. 2024. Mathverse: Does your multi-modal llm truly see the diagrams in visual math problems? _arXiv preprint_
_arXiv:2403.14624._
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan
Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin,
Zhuohan Li, Dacheng Li, Eric Xing, et al. 2023.
Judging llm-as-a-judge with mt-bench and chatbot
arena. Advances in Neural Information Processing
_Systems, 36._
-----
**A** **Data Source for Human Performance**
The 21st Century Education Network provides academic proficiency reports that analyze students’ knowledge mastery after each exam. We compile the end-of-term exam scores for each problem.
**B** **Open-Ended Transformation**
Our initial collection of MM-MATH problems includes four types: multiple-choice, fill-in-the-blank,
open-ended, and composite questions. For multiple-choice and fill-in-the-blank questions, which include
an answer and a step-by-step solution, we modifies the final part of the questions into descriptive language,
removing single choice or fill-in-the-blank answers, and using the step-by-step solution as the answer
described in Figure 7. For composite questions, we treat the main textual problem as the common stem for
sub-questions and used the conclusion of one sub-question as the textual problem for the next described
in Figure 8.
**Choice Question**
As shown in the figure, given that the diameter of circle ⊙
**or**
𝑂 is 4 and ∠𝐴𝐶𝐵= 45[∘], the length of AB is ()?
**Fill-in-the-Blank:**
**Option:** A. 2, B. 2 2, C. 2 3, D. ["]
#
**Explanation:** Solution: Connect OA and OB, as shown in the figure, ∵
∠AOB = 2∠ACB = 2×45[∘]= 90[∘], ∴△AOB is an isosceles right
triangle, ∴AB = 2OA = 2 2. Therefore, AB = 2 2.
Hence the answer is B or 2 2
As shown in the figure, given that the diameter of circle ⊙
𝑂 is 4 and ∠𝐴𝐶𝐵= 45[∘]. What is the length of AB?
Solution: Connect OA and OB, as shown in the figure, ∵
**Open-End Answer:**
∠AOB = 2∠ACB = 2×45[∘]= 90[∘], ∴△AOB is an isosceles right
triangle, ∴AB = 2OA = 2 2. Therefore, AB = 2 2 .
Figure 7: An example of converting multiple-choice and fill-in-the-blank questions to open-ended format. The final
part of the textual problem “()” is rewritten in descriptive language, and the main content of the explanation is used
as the answer.
**C** **Prompt Design**
Table 4 details the construction of the two types of prompts. For process evaluation prompts, our
repeated experiments highlighted several key points: 1) Use the term “incorrect” for textual condition
_misunderstanding to help GPT-4V classify the errors accurately. 2) Use the term “misinterpretation” for_
_diagram misinterpretation errors to identify recognition mistakes during comparisons. 3) For reasoning_
errors, it is important to include specific examples.
For prompts that instruct the model to generate answers, we ensure the model produces a final answer
enclosed in \boxed{}.
**D** **First Error Identified**
The first error identified by GPT-4V, when comparing the problem-solving process generated by LMMs to
the ground truth, may not necessarily be the initial error in the problem-solving process. As shown in
Figure 9, the first error determined by GPT-4V is △ABD ∼△CBE rather than intial error
_AC_
_BC_ [=][ AD]DE [.]
-----
Table 4: This table presents the prompts used for process evaluation and answer generation by various LMMs in the
MM-MATH benchmark.
**Phase** **Input** **Prompt**
Based on the given question stem, the diagram, and the correct
answer, compare the model’s response to identify the first error
in model’s response. Then determine which of the following
categories the error belongs to, or if there is no error, classify it
as category five:
1. Misinterpretation of diagram elements or properties: For
example, incorrect coordinate recognition, identifying parallel
lines as intersecting lines, or inventing or misusing elements or
properties not present in the diagram (e.g., identifying a shape
as a square when it is not).
2. Incorrect application of math theorems: For instance,
wrongly applying a specific theorem, such as using the
Pythagorean theorem on a non-right triangle, or omitting necessary theorems, such as failing to apply the similarity theorem to
obviously similar triangles.
3. Calculation errors: Such as mistakes in addition, subtraction,
multiplication, division, or square root calculations.
4. Incorrect use of given question stems: For example, if the
stem states AB=1/2CD but the model generates AB=CD, indicating a failure to use the condition correctly.
5. Other: No errors.
Provide a detailed analysis, including the first mistake, the
reason for the classification, and the correct approach to solving
the problem. If there are no errors, only provide the analysis.
The output format should be:
–First error:
–Error category:
–Detailed analysis:
Solve the following mathematics problem, write out the solution
process according to the question, and use the same LaTeX
format as the question in the solution process. Please display
the final answer in the format \boxed{}.
Model’s response
Question
Diagram
Groundtruth Answer
Question
Diagram
**Process**
**Evaluation**
(GPT-4V)
**Answer**
**Generation**
(LMMs)
-----
As shown in the figure, in △ABC, ∠ACB = 90[∘], and CD is
**Composite Textual** the altitude to side AB. Fold side AC in half, and the fold line
**Question** is EF . Connect CE . CD bisects ∠BCE.
**Question 1:** Connect DF, prove that AF = DF.
**Question 2:** Find the measure of ∠A.
Solution: ∵ EF is the axis of symmetry of AC, ∴FA = FC, EA =
EC, ∴∠ECA = ∠A . ∵CD is the altitude to side AB, ∴∠CDE =
**Explanation 2:**
∠CDB = 90[∘]. ∵CD bisects ∠BCE, ∴∠DCE = ∠DCB . Also,
∵ CD = CD, ∴△CDE ≅△CDB (ASA). ∴∠CED = ∠CBD.
∵∠CED = ∠A + ∠ECA, and ∠ECA = ∠𝐴. ∴∠CBD = 2∠A.
∵∠ACB = 90[∘], ∴∠A + ∠B = 90[∘], ∴3∠A = 90[∘], ∴∠A = 30[∘].
As shown in the figure, in △ABC, ∠ACB = 90[∘], and CD is the
altitude to side AB. Fold side AC in half, and the fold line is
**Composite Question**
EF . Connect CE . CD bisects ∠BCE. Connect DF, AF = DF.
Find the measure of ∠A.
**Open-End Answer:** ∵ EF is the axis of symmetry of AC, ∴FA = FC, EA = EC, ∴
∠ECA = ∠A . ∵CD is the altitude to side AB, ∴∠CDE =
∠CDB = 90[∘]. ∵CD bisects ∠BCE, ∴∠DCE = ∠DCB . Also,
∵ CD = CD, ∴△CDE ≅△CDB (ASA). ∴∠CED = ∠CBD.
∵∠CED = ∠A + ∠ECA, and ∠ECA = ∠𝐴. ∴∠CBD = 2∠A.
∵∠ACB = 90[∘], ∴∠A + ∠B = 90[∘], ∴3∠A = 90[∘], ∴∠A = 30[∘]
Figure 8: An example of converting composite question to open-ended format. Since Question 1 is a proof,
we exclude it. We treat the main stem as the stem of Question 2, and incorporate the conclusion of Question 1
(highlighted in red) as a new condition into the stem of Question 2.
**E** **Text Reason First**
Figure 10 and Figure 11 illustrate examples of multimodal reasoning. Regardless of whether all problem
conditions are provided, multimodal models tend to rely solely on textual analytical methods, neglecting
the information in the images. This approach increases the complexity of problem-solving and leads to a
higher likelihood of errors.
**F** **Case of Prompt Effectiveness**
Figure 12 illustrates the effectiveness of our designed prompt. Although there is no textual description of
DOB in the groundtruth problem-solving process, GPT-4V successfully identified and categorized the
error. The error was classified as a mathematical theory error by GPT-4V because the solution did not
apply the similarity theorem for reasoning.
**G** **Misapplication of Theorems**
Figure 13 illustrates a case where GPT-4V incorrectly applied the cosine theorem during problem-solving.
In this case, the sides used in the cosine theorem do not form a triangle in the image. Despite this, the
model forcibly applied the theorem, leading to calculation errors. This improper use of the theorem on
incorrect image elements further resulted in erroneous problem-solving steps.
**H** **Element Recognition Error in Image**
We present two cases where GPT-4V exhibited errors in image element recognition during problemsolving. In Figure 14, the image contains only parallel lines; however, GPT-4V incorrectly identified a
triangle. In Figure 15, AFD is a straight line, but GPT-4V mistakenly perceived it as a triangle. These
cases demonstrate GPT-4V’s deficiencies in accurate image element recognition, leading to erroneous
reasoning.
-----
**Question: As shown in the figure, in△ABC and △ADE,**
∠ACB=∠AED=90[∘], ∠ABC=∠ADE. Connect BD and
CE. If AC:BC=3:4, what is the ratio BD:CE?
𝐴𝐵 5
**Solution: Since ∠ACB=90[∘]** and AC:BC=3:4, we have
𝐴𝐶 [=] 3[.][ ∵∠][ACB=][∠][AED]
𝐴𝐵 𝐴𝐶
=90[∘] and ∠ABC=∠ADE, it follows that △ABC ∼△ADE, thus
𝐴𝐷[=] 𝐴𝐸 [and][ ∠][DA]
𝐵𝐷 𝐴𝐵 5
E=∠BAC. Therefore,∠DAB=∠EAC, and △DAB∼△EAC, hence
𝐶𝐸 [=] 𝐴𝐶 [=] 3 [.]
## First Error
**GPT-4V Answer: From the problem statement, we know △ABC ∼△**
ADE because they are both right triangles and share a common acute angle.
According to the properties of similar triangles, we have
AC AD BD AB
[
BC [=] DE[]][ … Therefore, ][△ABD ∼△CBE][. Thus, we have][ []CE [=] BC[]][ …]
BD 5 5
Therefore, [
CE [=] 4[]][So the final answer ][is] 4 [.]
**First error: "Therefore, △𝐴𝐵𝐷 ∼△CBE".**
**Error category: 1. Misinterpretation of diagram elements or properties.**
**Detailed analysis: The model made a mistake in determining the similarity of**
- ABD and △CBE. In reality, △ABD and △CBE are not similar. …
Figure 9: An example of first error detecting by GPT-4V
**Question: As shown in the figure, the graph of the linear**
function y! = x + b intersects with the graph of the
linear function y" = kx + 4 at point P(1, 3). What is the
solution set for the inequality x + b ≤kx + 4 with
respect to x?
**Solution: According to the graph, when x ≤1, x + b ≤kx + 4, that is, the**
solution set for the inequality x + b ≤kx + 4 with respect to x is x ≤ 1 .
**GPT-4V Answer:First, based on the lines y1 = x + b and y2 = kx + 4**
intersecting at point P(1, 3), we can derive two equations to determine the
values of b and k. For y1 = x + b, substituting the coordinates of point P(1, 3),
we get: 3 = 1 + b Thus, we can solve for b: b = 2 For y2 = kx + 4, similarly
substituting the coordinates of point P(1, 3), we get: 3 = k + 4 Solving for k: k
= -1 Now we know the equations of the lines are y1 = x + 2 and y2 = −x + 4.
Given the inequality x + b ≤kx + 4 from the problem, substituting b = 2
and k = -1 gives: x + 2 ≤−x + 4 Next, solve this inequality: 2x ≤2, x ≤
1, So the solution set is x less than or equal to 1. The final answer is x ≤1 .
Figure 10: A case where LMMs reasoning relies solely on textual analysis, resulting in increased complexity.
-----
**Question: As shown in the figure, the linear function y =**
-x - 2 intersects with the graph of y = kx + b at point
P 2, n .What is the solution set for the inequality kx + b
<-x-2 with respect to x?
**Solution: From the graph, it is observed that when x < 2, the graph of the line y**
= kx + b is below the graph of the line y =-x -2. Therefore, the solution set for
the inequality kx+b <-x-2 with respect to x is x < 2 .
**GPT-4V Answer: According to the problem, point P(2, n) lies on both lines y**
= -x - 2 and y = kx + b, so we can substitute the coordinates of point P into
these two equations to get two equalities. First, substituting into the first
equation gives: [n = −2 −2 = −4]which is the value of n. Next,
substituting x = 2 and n = -4 into the second equation gives: [−4 = k ⋅2 +
b, −4 = 2k + b, b = −4 − 2k] … Therefore, the final answer is
dependent on the value of k .
Figure 11: A case where LMMs reasoning ignores image information, relying only on text and leading to errors.
**Question: As shown in the figure, the diagonals AC and**
BD of rhombus ABCD intersect at point O. Given that AC
= 8 and BD = 6, and point E is the midpoint of BC, what
is the length of OE?
1
**Solution: ∵quadrilateral ABCD is a rhombus,∴OC=OA=**
2[AC=4,OB=OD][=]
1
OA[2] + OB[2] =5.∵point E is midpoint of AB,
2 [BD = 3][,][∠BOC = 90][∘][.][∴][AB][=]
1 5
and O is the midpoint of AC,∴OE is the median of △ABC,∴OE=
2[AB][=] 2 [.]
**GPT-4V Answer: …, We also know that since all sides of the rhombus are**
equal, AB = BC = CD = DA. Because OE is the median of the rhombus‘s
diagonal BD, it bisects triangle DOB, making △BOE ∼△DOE. Now, …, we
%& '& '&×'% *.,×
have = = 2.5. Therefore, the final answer is
'% [=] (% [.][Thus, OE = ] (%
2.5 .
**First error: Because OE is the median of the rhombus's diagonal BD, it bisects**
triangle DOB, making △BOE ∼△DOE.
**Error category: 2. Incorrect application of math theorems**
**Detailed analysis: The model incorrectly assumes that OE is the median of**
diagonal BD and incorrectly applies the properties of similar triangles. ...
Figure 12: A case demonstrating the effectiveness of our prompt. GPT-4V successfully identified the reasoning
error DOB, which was not present in the solution.
-----
**Question: As shown in the figure, △ABC and △ADE are**
both equilateral triangles, point D is on side BC, DE
intersects AC at point F, and if AB=6, AD=5, and CD=4,
what is the length of EF?
**Solution: ∵△ABC and △ADE are both equilateral triangles,∴∠B = ∠C =**
∠ADE = 60[∘], DE = AD = 5, ∵∠ADC = ∠ADE + ∠CDE = ∠B + ∠BAD, ∴
"# "$ ' ) *+
∠CDE = ∠BAD, ∴△ABD ∼△DCF, ∴
$% [=] $& [, ∴] ( [=] $& [, ∴DF =], [, ∴EF =]
)
DE −DF =
, [.]
**GPT-4V Answer: … we have BC = 6. Given that CD = 4, we find BD = BC**
- CD = 6 - 4 = 2. Next, applying the cosine rule to find the measure of ∠BCD:
BC[-] = BD[-] + CD[-] −2×BD×CD× cos ∠BCD, Substituting BC = 6, BD = 2,
and CD = 4 into the formula to solve for cos ∠BCD, …, However, we
obtained a result of cos ∠BCD greater than 1, which is impossible since the
cosine value must be within the range [-1, 1]. This indicates that there might be
an error in the problem's information.
**First error: BC[2]** = BD[2] + CD[2] −2×BD×CD× cos ∠BCD
**Error category: 2. Incorrect application of math theorems**
**Detailed analysis: The model incorrectly applied the cosine rule in the solution**
process. The problem clearly states that $\triangle ABC$ and $\triangle ADE$
are equilateral triangles, …
Figure 13: A reasoning error case: misuse of the cosine rule
**Question: As shown in the figure, given AB ∥𝐶𝐷 ∥EF**
!"
and BD: DF = 2: 5, what is the value of
!# [?]
**GPT-4V Answer: According to the figure, segments AB ∥CD ∥EF and BD:DF**
= 2:5. Since 𝐴𝐵 ∥𝐶𝐷, by the properties of similar triangles, we know that
triangles ABD and ACD are isosceles, thus AD = BD. Similarly, since CD ∥EF,
triangles CDE and EDF are isosceles, so ED = DF.,…
Figure 14: A case of diagram misinterpretation : parallel lines misidentified as a triangle.
**Question: As shown in the figure, quadrilateral ABCD is**
a rectangle, point E is on the extension of BA, AE = AD,
and EC intersects AD and BD at points F and G,
respectively. If AF = AB, what is the value of 𝐴𝐷∶𝐴𝐵?
**GPT-4V Answer: First, since ABCD is a rectangle, 𝐴𝐵 ⊥𝐴𝐷, and AB = CD**
and AD = BC. According to the problem, AE = AD, so △𝐴𝐸𝐷 is an isosceles
right triangle, thus ∠𝐴𝐸𝐷= 45[∘].Additionally, since AF = AB = CD,△𝐴𝐹𝐷 and
- 𝐵𝐶𝐷 are congruent, 𝑠𝑜∠𝐴𝐷𝐹= ∠𝐵𝐷𝐶= 90[∘].Therefore, …
Figure 15: A case of diagram misinterpretation: a straight line misidentified as a triangle.
-----
| [
"Kai, Sun",
"Yushi, Bai",
"Lei, Hou",
"Ji, Qi",
"Juanzi, Li"
] | 2024-04-07T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2404.05091v4 | https://arxiv.org/abs/2404.05091 | https://www.semanticscholar.org/paper/c49df7a70fd13060353d81e6d1cdc1bbbfd58357 |
MWP-BERT: A Numeracy-augmented Pre-trained Encoder for Math Word Problems | N/A | null | # MWP-BERT: A Numeracy-augmented Pre-trained Encoder for Math Word Problems
**Zhenwen Liang[1], Jipeng Zhang[2], Lei Wang[3],**
**Wei Qin[4], Jie Shao[5], and Xiangliang Zhang[ ][1]**
1University of Notre Dame, {zliang6, xzhang33}@nd.edu
2Hong Kong University of Science and Technology, [email protected]
3Singapore Management University, [email protected]
4Hefei University of Technology, [email protected]
5University of Electronic Science and Technology of China, [email protected]
**Abstract**
Math word problem (MWP) solving faces a dilemma in number representation
learning. In order to avoid the number representation issue and reduce the search
space of feasible solutions, existing works striving for MWP solving usually
replace real numbers with symbolic placeholders to focus on logic reasoning.
However, instead of the number value itself, it is the reusable numerical property
that matters more in numerical reasoning. Therefore, we argue that injecting
numerical properties into symbolic placeholders with contextualized representation
learning schema can provide a way out of the dilemma in the number representation
issue here. In this work, we introduce this idea to the popular pre-training language
model (PLM) techniques and build MWP-BERT, an effective contextual number
representation PLM. We demonstrate the effectiveness of our MWP-BERT on
MWP solving and several MWP-specific understanding tasks on both English and
Chinese benchmarks.
**1** **Introduction**
MWP solving system aims to perform symbolic reasoning by searching through a combinatorial
solution space given the text description evidence. Despite the great performance achieved by the
previous methods, there still exists fundamental challenges in number representation for MWP
solving. More exactly, number values are required to be considered as vital evidence in solution
exploration but existing works are known to be inefficient in capturing numeracy information Wallace
et al. [2019]. Intuitively, we could simply treat explicit numbers in the same way with words, i.e.,
assign position for all numbers in the vocabulary. However, there would be an infinite number of
candidates during prediction and it would be impossible to learn their deep representations. In other
words, the solution space will be extremely large and the complexity is unacceptable. Therefore,
almost all existing works follow the number mapping technique Wang et al. [2017] to replace all
numbers with symbolic placeholders (e.g., “x1”, “x2”). The core idea here is to get a reasonable
solution space by restricting neural networks to leave out numerical characteristics and focus on logic
reasoning. However, most of the current MWP solvers do not consider the background knowledge in
the context and are usually inefficient in capturing numeracy properties. An example is shown in
Fig. 1. Small perturbations in the problem description actually bring large variations in reasoning
logic and equation. If the model simply regards “75” and “10%” as the same placeholder “x3”, and
does not notice the small variation in the context, a wrong solution will be generated.
To this end, a group of numeracy grounded pre-training objectives is designed to leverage the
corpus of MWP and encourage the contextual representation to capture numerical information.
36th Conference on Neural Information Processing Systems (NeurIPS 2022) Workshop on Math-AI.
-----
FFN Feedforward Networks C Vector Concatenation Labels: Integer/Non-integer (I/N),Same/Different with answer (S/D)
𝑍 FFN 4 𝑍 FFN I 𝑍2 FFN C ×
𝑍3 FFN
𝑍1 FFN S 𝑍2 FFN
𝑍1 FFN I 𝑍1 FFN < 𝑍3 FFN C 0
**Self-Supervised** **Weak-Supervised** **Fully-Supervised**
**Number Representations** 𝑍 **Mean Pooling**
𝑍1 𝑍2 𝑍3 𝑍4
x1 x2 x3 x4
BERT/RoBERTa
Some workers are producing x1 clothes ... x2 days and x3 clothes … x4 more …
660 75 5 3
**Problem**
Figure 2: The overall architecture of our BERTbased MWP solver. Our method enables the
solver to learn from unlabeled, incompletely
labeled and fully labeled MWPs by different
pre-training tasks.
**Text: Some workers are producing 660 clothes. It has been 5 days and 75 clothes**
are produced per day. But they have to finish all clothes in 3 more days. How
many clothes should be processed per day from now?
**Equation: (660 −75×5) ÷ 3**
**Reasoning Logic:** ÷
− 3
660 ×
75 5
**Text: Some workers are producing 660 clothes. It has been 5 days and 10% of**
the total clothes are produced per day. But they have to finish all clothes in 3
more days. How many clothes should be processed per day from now?
**Equation: 660× 1 −10%×5 ÷ 3**
**Reasoning Logic:** ÷
× 3
660 −
1 ×
10% 5
Figure 1: The second question is obtained from
the first one by minor modifications. However,
their solution equation and corresponding equation tree structure are different from each other.
Experiments conducted on both Chinese and English benchmarks show the significant improvement
of our proposed approach over all competitors.
**2** **Method**
An overview of pre-training objectives and our model architecture is shown in Figure 1. In general,
pre-training objectives are designed to inject contextual priori and numerical properties as soft
constraints for representation learning. They are categorized into three types given provided training
signals, i.e., self-supervised, weakly-supervised, and fully-supervised.
**2.1** **Self-supervised Objectives**
In this part, we only consider input text descriptions for each example. Also, these objectives can
alleviate the costs of collecting MWP corpus by constructing supervision signals without solution
answers and equations.
**Masked Language Modeling.** We follow Devlin et al. [2019] and introduce masked language
modeling (MLM) for basic contextual representation modeling. Specially, we apply masks on 10%
of tokens, randomly replace 10% of tokens with other tokens and keep 80% of tokens unchanged.
Later, the manipulated sentence is utilized to reconstruct the original sentence.
**Number Counting.** Another pre-training objective is to predict the number of numbers that
appeared in MWP description. The amount of a number corresponds with the cardinality of variable
sets. This also reflects the basic understanding of the difficulty of an MWP and can act as a key
contextual MWP number understanding feature.
**Number Type Grounding.** This objective aims at linking contextual number representations with
corresponding number types to tell the difference between discrete and continuous concepts/entities.
For numerical reasoning in MWP solving, we only need to handle whole numbers as well as noninteger numbers (decimal, fraction and percentage). Ideas here are that whole numbers usually
associate with discrete entities (for example, desks, chairs and seats) while non-integer numbers often
connect with continuous concepts (for example, proportions, rate, velocity). Besides, comparisons
among whole numbers got different issues compared with rational numbers. Therefore, we propose a
classification objective to predict if a number is a whole number or a non-integer number.
-----
**2.2** **Weakly-supervised Objectives**
Given both text descriptions of MWPs and corresponding value answers, we can model dependencies
among answer number and numbers in text descriptions so that contextual representation perceive
the existence of the target variable number that does not appear in the text descriptions. In detail,
we design 3 novel pre-training objectives specializing in value-annotated MWPs to improve number
representation in our MWP-BERT.
**Answer Type Prediction.** Determining the type of answer number can provide us discrete/continuous nature of the target entity/concept. Thus, we want to predict the type (whole/non-integer) of the
answer value given global representations of an MWP.
**Context-Answer Type Comparison.** Besides the global context feature, an MWP-BERT also
needs to associate context numbers and answer numbers (the target number does not explicitly appear
in the text). Thus, another objective is proposed to predict if the quantities appeared in the MWP text
fall into the same category as the answer (i.e. they are all whole or non-integer).
**Number Magnitude Comparison.** Beyond type, the magnitude of a number serves as the foundation of numerical reasoning. By associating magnitudes evaluation with contextual representation,
the model can get a better perception of variance over key reasoning cues like time, size and intensity.
**2.3** **Fully-supervised Objectives**
Given both equations and answers for MWPs, we can design fully-supervised training tasks to
associate number representation with reasoning flows (solution equation). Mathematical equations
are known to be binary tree structures with operators on root nodes and numbers on leaf nodes. The
motivation is to encourage models to learn structure-aware number representations that encode the
information on how to make combinations over atomic operators and numbers. We incorporate two
pre-training objectives based on the solution equation tree.
**Operation Prediction.** The first one is a quantity-pair relation prediction task that focuses on the
local feature of the equation tree. The goal is to predict the operator between two quantity nodes in
the solution tree. This is in fact a classification task with 5 potential targets, i.e., +, −, ×, ÷ and ∧.
**Tree Distance Prediction.** Another pre-training objective is to incorporate the global structure of
the equation tree in a quantitative way. Inspired by Hewitt and Manning [2019], we consider the
depth of each number and operator on the corresponding binary equation tree to be the key structure
priori. Thus, we design another fully-supervised objective to utilize this information. More exactly,
given the representation of two number nodes in an equation tree, this is a regression problem that
predicts the distance (difference of their depth) between them.
**3** **Experiment**
For the Chinese initial model, we use an upgraded patch of Chinese BERT which is pre-trained with
the whole word masking (WWM)[1] Cui et al. [2020]. For the English pre-training models, we use the
official source on this website[2].
**3.1** **Dataset**
We conduct experiment based on Math23k Wang et al. [2017], MathQA Amini et al. [2019] and
Ape-210k Zhao et al. [2020]. Since many noisy examples exist in Ape-210k, e.g., examples without
equation annotations or answer values, we re-organize Ape-210k to Ape-clean and Ape-unsolvable,
where the training set of Ape-clean and the whole Ape-unsolvable are used for pre-training. For the
English MWP, we use the training set of MathQA Amini et al. [2019] to perform pre-training.
[1https://github.com/ymcui/Chinese-BERT-wwm](https://github.com/ymcui/Chinese-BERT-wwm)
[2https://huggingface.co/bert-base-uncased and https://huggingface.co/roberta-base](https://huggingface.co/bert-base-uncased)
-----
**3.2** **Probing Evaluation**
We re-run all the pre-training tasks as probing tasks to evaluate our modeling’s understanding ability
and test MWP-BERT in a zero-shot scenario, i.e. without fine-tuning the parameters of MWP-BERT
and MWP-RoBERTa for the sake of fair comparison. Besides, we borrow an MWP-specific sequence
labeling task, quantity tagging Zou and Lu [2019] (“QT”), to further compose MWP understanding
evaluation settings. Briefly speaking, this task requires the model to assign “+”, “-” or “None” for
every quantity in the problem description and can serve as an MWP understanding evaluation tool to
examine the model’s understanding of each variable’s logic role in the reasoning flow. We extract the
corresponding vectors of all quantities according to their positions in the encoded problem. Next, a
2-layer feed-forward block is connected to output the final prediction. Significant improvements can
be observed in Table 1, and demonstrate the effectiveness of our proposed pre-training techniques in
improving the number representation of PLMs.
|Col1|NumCount NTGround ATPred CATComp NumMComp OPred TPred|QT|
|---|---|---|
|Metric|MSE ↓ Acc ↑ Acc ↑ Acc ↑ Acc ↑ Acc ↑ MSE ↓|Acc ↑|
|BERT RoBERTa|3.08 0.87 0.75 0.77 0.77 0.50 0.97 3.20 0.86 0.76 0.78 0.77 0.51 0.99|84.5 84.6|
|MWP-RoBERTa MWP-BERT|0.69 0.92 0.86 0.87 0.86 0.86 0.44 0.67 0.92 0.85 0.87 0.86 0.87 0.45|91.0 91.5|
Table 1: The evaluation results on MWP-specific understanding tasks. All tasks correspond to the
tasks mentioned in section 1. Note that the metric for 2 tasks is mean-squared-error, while others use
classification accuracy. “QT” stands for quantity tagging.
**3.3** **MWP Solving**
Given a textual description of a mathematical problem, which contains several known variables,
MWP solving targets getting the correct answer for the corresponding question. A solver is expected
to be able to predict an equation that can exactly reach the answer value. We adapt our proposed
encoder with multiple different traditional solvers by replacing their RNN encoder with MWP-BERT
to show its generalization ability across various solvers. The results show that our MWP-BERT
outperforms vanilla BERT Devlin et al. [2019] and has great adaptivity on different solvers and we
successfully achieve state-of-the-art accuracy.
|Col1|Math23k|Math23k|MathQA|
|---|---|---|---|
||Math23k|Math23k∗|MathQA|
|State-of-the-art Baselines REAL Huang et al. [2021] BERT-CL Li et al. [2021] Gen&Rank Shen et al. [2021] DeductiveReasoner Jie et al. [2022]|82.3 83.2 85.4 85.1|80.0 − 84.3 83.0|− 76.3 − 78.6|
|Adapting MWP-BERT BERT Devlin et al. [2019] + GTS Xie and Sun [2019] MWP-BERT + GTS Xie and Sun [2019] MWP-BERT + Teacher Liang and Zhang [2021] MWP-BERT + Graph2Tree Zhang et al. [2020b]|83.8 84.7 85.1 85.6|82.0 82.4 82.8 83.8|75.1 76.2 77.3 78.9|
Table 2: Comparison of answer accuracy (%) among our proposed models and different baselines.
Math23k column shows the results on the public test set and Math23k[∗] is 5-fold cross validation on
Math23k dataset. MathQA is adapated from Li et al. [2021], Tan et al. [2021]. “BERT” represent
results without our pre-training.
**4** **Conclusion**
We propose MWP-BERT, an MWP-specific PLM model with 8 pre-training objectives to solve the
number representation issue in MWP. Experimental results show the superiority of our proposed
MWP-BERT across various downstream tasks on generation and understanding. In terms of the
most representative task MWP solving, our approach achieves state-of-the-art. Better numerical
understanding ability is also demonstrated in the probing evaluation. We believe that our study can
serve as a useful pre-trained pipeline and a strong encoder in the MWP community.
-----
**References**
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh
Hajishirzi. Mathqa: Towards interpretable math word problem solving with operation-based
formalisms. In NAACL, pages 2357–2367, 2019.
Daniel Andor, Luheng He, Kenton Lee, and Emily Pitler. Giving BERT a calculator: Finding
operations and arguments with reading comprehension. In Kentaro Inui, Jing Jiang, Vincent Ng,
and Xiaojun Wan, editors, EMNLP, pages 5946–5951, 2019.
Ting-Rui Chiang and Yun-Nung Chen. Semantically-aligned equation generation for solving and
reasoning math word problems. In NAACL, 2018.
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu. Revisiting pretrained models for chinese natural language processing. In EMNLP: Findings, pages 657–668,
2020.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep
bidirectional transformers for language understanding. In NAACL, pages 4171–4186, 2019.
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner.
DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Jill
Burstein, Christy Doran, and Thamar Solorio, editors, NAACL, pages 2368–2378, 2019.
Mor Geva, Ankit Gupta, and Jonathan Berant. Injecting numerical reasoning skills into language
models. In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel R. Tetreault, editors, ACL, pages
946–958, 2020.
John Hewitt and Christopher D. Manning. A structural probe for finding syntax in word representations. In Jill Burstein, Christy Doran, and Thamar Solorio, editors, NAACL, pages 4129–4138,
2019.
Yining Hong, Qing Li, Daniel Ciao, Siyuan Huang, and Song-Chun Zhu. Learning by fixing: Solving
math word problems with weak supervision. In AAAI, 2021a.
Yining Hong, Qing Li, Ran Gong, Daniel Ciao, Siyuan Huang, and Song-Chun Zhu. Smart: A
situation model for algebra story problems via attributed grammar. In AAAI, 2021b.
Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin, and Wei-Ying Ma. How well do computers
solve math word problems? large-scale dataset construction and evaluation. In ACL, pages
887–896, 2016.
Shifeng Huang, Jiawei Wang, Jiao Xu, Da Cao, and Ming Yang. Recall and learn: A memoryaugmented solver for math word problems. In Findings of EMNLP, pages 786–796, 2021.
Zhanming Jie, Jierui Li, and Wei Lu. Learning to reason deductively: Math word problem solving as
complex relation extraction. In ACL, pages 5944–5955, 2022.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
_arXiv:1412.6980, 2014._
Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang.
Parsing algebraic word problems into equations. Transactions of the Association for Computational
_Linguistics, 3:585–597, 2015._
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. Learning to automatically solve
algebra word problems. In ACL, pages 271–281, 2014.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy,
Veselin Stoyanov, and Luke Zettlemoyer. BART: denoising sequence-to-sequence pre-training for
natural language generation, translation, and comprehension. In Dan Jurafsky, Joyce Chai, Natalie
Schluter, and Joel R. Tetreault, editors, ACL, pages 7871–7880, 2020.
-----
Jierui Li, Lei Wang, Jipeng Zhang, Yan Wang, Bing Tian Dai, and Dongxiang Zhang. Modeling
intra-relation in math word problems with different functional multi-head attentions. In ACL, pages
6162–6167, 2019.
Zhongli Li, Wenxuan Zhang, Chao Yan, Qingyu Zhou, Chao Li, Hongzhi Liu, and Yunbo Cao.
Seeking patterns, not just memorizing procedures: Contrastive learning for solving math word
problems. arXiv preprint arXiv:2110.08464, 2021.
Zhenwen Liang and Xiangliang Zhang. Solving math word problems with teacher supervision. In
_IJCAI, 2021._
Qianying Liu, Wenyv Guan, Sujian Li, and Daisuke Kawahara. Tree-structured decoding for solving
math word problems. In EMNLP, pages 2370–2379, 2019.
Jinghui Qin, Lihui Lin, Xiaodan Liang, Rumin Zhang, and Liang Lin. Semantically-aligned universal
tree-structured solver for math word problems. In Bonnie Webber, Trevor Cohn, Yulan He, and
Yang Liu, editors, EMNLP, pages 3780–3789, 2020.
Jinghui Qin, Xiaodan Liang, Yining Hong, Jianheng Tang, and Liang Lin. Neural-symbolic solver
for math word problems with auxiliary tasks. In ACL, pages 5870–5881, 2021.
Jianhao Shen, Yichun Yin, Lin Li, Lifeng Shang, Xin Jiang, Ming Zhang, and Qun Liu. Generate
& rank: A multi-task framework for math word problems. In Marie-Francine Moens, Xuanjing
Huang, Lucia Specia, and Scott Wen-tau Yih, editors, EMNLP, pages 2269–2279, 2021.
Yibin Shen and Cheqing Jin. Solving math word problems with multi-encoders and multi-decoders.
In COLING, pages 2924–2934, 2020.
Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang Liu, and Yong Rui. Automatically solving
number word problems by semantic parsing and reasoning. In EMNLP, pages 1132–1142, 2015.
Minghuan Tan, Lei Wang, Lingxiao Jiang, and Jing Jiang. Investigating math word problems using
pretrained multilingual language models. CoRR, abs/2105.08928, 2021.
Avijit Thawani, Jay Pujara, Filip Ilievski, and Pedro A. Szekely. Representing numbers in NLP: a
survey and a vision. In Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek HakkaniTür, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, and Yichao Zhou, editors,
_NAACL-HLT, pages 644–656, 2021._
Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. Do NLP models know
numbers? probing numeracy in embeddings. In Kentaro Inui, Jing Jiang, Vincent Ng, and Xiaojun
Wan, editors, EMNLP, pages 5306–5314, 2019.
Lei Wang, Yan Wang, Deng Cai, Dongxiang Zhang, and Xiaojiang Liu. Translating a math word
problem to a expression tree. In EMNLP, pages 1064–1069, 2018.
Lei Wang, Dongxiang Zhang, Jipeng Zhang, Xing Xu, Lianli Gao, Bing Tian Dai, and Heng Tao Shen.
Template-based math word problem solvers with recursive neural networks. In AAAI, volume 33,
pages 7144–7151, 2019.
Yan Wang, Xiaojiang Liu, and Shuming Shi. Deep neural solver for math word problems. In EMNLP,
pages 845–854, 2017.
Qinzhuo Wu, Qi Zhang, and Zhongyu Wei. An edge-enhanced hierarchical graph-to-tree network for
math word problem solving. In Findings of EMNLP, pages 1473–1482, 2021a.
Qinzhuo Wu, Qi Zhang, Zhongyu Wei, and Xuanjing Huang. Math word problem solving with
explicit numerical values. In ACL, pages 5859–5869, 2021b.
Zhipeng Xie and Shichao Sun. A goal-driven tree-structured neural model for math word problems.
In IJCAI, pages 5299–5305, 2019.
Weijiang Yu, Yingpeng Wen, Fudan Zheng, and Nong Xiao. Improving math word problems with
pre-trained knowledge and hierarchical reasoning. In EMNLP, pages 3384–3394, 2021.
-----
Ma Yuhui, Zhou Ying, Cui Guangzuo, Ren Yun, and Huang Ronghuai. Frame-based calculus of
solving arithmetic multi-step addition and subtraction word problems. In 2010 Second International
_Workshop on Education Technology and Computer Science, volume 2, pages 476–479. IEEE, 2010._
Jipeng Zhang, Roy Ka-Wei Lee, Ee-Peng Lim, Wei Qin, Lei Wang, Jie Shao, and Qianru Sun.
Teacher-student networks with multiple decoders for solving math word problem. In IJCAI, pages
4011–4017, 2020a.
Jipeng Zhang, Lei Wang, Roy Ka-Wei Lee, Yi Bin, Yan Wang, Jie Shao, and Ee-Peng Lim. Graph-totree learning for solving math word problems. In ACL, pages 3928–3937, 2020b.
Xikun Zhang, Deepak Ramachandran, Ian Tenney, Yanai Elazar, and Dan Roth. Do language
embeddings capture scales? In Trevor Cohn, Yulan He, and Yang Liu, editors, Findings of EMNLP,
volume EMNLP 2020, pages 4889–4896. Association for Computational Linguistics, 2020c.
Wei Zhao, Mingyue Shang, Yang Liu, Liang Wang, and Jingming Liu. Ape210k: A large-scale and
template-rich dataset of math word problems. arXiv preprint arXiv:2009.11506, 2020.
Yanyan Zou and Wei Lu. Text2math: End-to-end parsing text into math expressions. In EMNLP,
pages 5330–5340, 2019.
**Appendix**
**Related Works**
**Math Word Problems Solving.** There exist two major types of MWP, equation set MWP Wang
et al. [2017], Zhao et al. [2020] and arithmetic MWP Qin et al. [2020], Huang et al. [2016]. This
work focuses on arithmetic MWP, which is usually paired with one unknown variable. Along the path
of the MWP solver’s development, the pioneer studies use traditional rule-based methods, machine
learning methods and statistical methods Yuhui et al. [2010], Kushman et al. [2014], Shi et al. [2015],
Koncel-Kedziorski et al. [2015]. Afterward, inspired by the development of sequence-to-sequence
(Seq2Seq) models, MWP solving has been formulated as a neurosymbolic reasoning pipeline of
translating language descriptions to mathematical equations with encoder-decoder framework Wang
et al. [2018, 2019], Li et al. [2019], Zhang et al. [2020b], Yu et al. [2021], Wu et al. [2021a]. By
fusing hard constraints into decoder Chiang and Chen [2018], Liu et al. [2019], Xie and Sun [2019],
Shen and Jin [2020], Zhang et al. [2020a], MWP solvers achieve much better performance then.
Several works propose to utilize multi-stage frameworks Wang et al. [2019], Huang et al. [2021],
Shen et al. [2021], Liang and Zhang [2021] to make more robust solvers. Also, several new works
made attempts to improve MWP solver beyond supervised settings Hong et al. [2021a,b].
Among all these previous studies, the most relevant ones to our work can be categorized into two
groups. First, it has been noted that number values and mathematical constraints play a significant
role in supporting numerical reasoning. Wu et al. [2021b] proposed several number value features to
enhance encoder and Qin et al. [2021] designed new auxiliary tasks to enhance neural MWP solvers.
Compared with their work, we first introduce pre-training language model (PLM) and concentrate on
representation learning to resolve numerical understanding challenges. Second, regarding the usage
of pre-training techniques for MWP solving, Shen et al. [2021] introduced BART-based Lewis et al.
[2020] MWP solver and incorporated specialized multi-task training for obtaining more effective
pre-training Seq2Seq models for MWP. Compared with them, our work focuses on the number
representation learning issue of MWP and achieves a more flexible pre-training representation
module for MWP solving, which can be applied in various MWP-related tasks other than solution
generation.
**Numeracy-aware Pre-training Models.** Number representation has been recognized as one of
the main issues in word representation learning. Existing methods make use of value, exponent,
sub-word and character methods Thawani et al. [2021] to obtain number representations for explicit
number values. These methods are known to be less effective in extrapolation cases like testing with
numbers not appearing in the training corpus.
Previous related works Andor et al. [2019], Wallace et al. [2019], Geva et al. [2020] mainly focus on
shallow numerical reasoning tasks shown in DROP dataset Dua et al. [2019], which usually serves as
-----
a benchmark for evaluating numerical machine reading comprehension (Num-MRC) performance.
Compared with MWP solving, Num-MRC’s main focus is laid on extracting answer spans from a
paragraph, which are more fault-tolerant with no needs to predict number tokens. Besides, their
solution generation tasks only contain simple computations like addition/subtraction and there are
only integers in DROP. More exactly, several research efforts have been made to deal with this
kind of math-related reading comprehension task by synthesizing new training examples Geva et al.
[2020], incorporating special modules considering the numerical operation Andor et al. [2019] and
designing specific tokenization strategies Zhang et al. [2020c]. Since MWP solving requires further
consideration of the complex composition of reasoning logic in MWP text, the symbolic placeholder
is more effective in MWP solving. Thus, instead of dealing with explicit number values, our work
focuses on improving representation for symbolic placeholders by injecting numerical properties in a
probabilistic way.
**Implementation Details**
We pre-train our model on 4 NVIDIA TESLA V100 graphic cards and fine-tune on 1 card. The
model was pre-trained for 50 epochs (2 days) and fine-tuned for 80 epochs (1 day) with a batch size
of 32. Adam optimizer Kingma and Ba [2014] is applied with an initial learning rate of 5e-5, which
would be halved every 30 epochs. A dropout rate of 0.5 is set during training to prevent over-fitting.
During testing, we use a 5-beam search to get reasonable solutions. The hyper-parameters setting of
our BERT and RoBERTa is 12 layers of depth, 12 heads of attention and 768 dimensions of hidden
features. Our code and data have been open-sourced on Github [3].
**Ape-clean Dataset**
Ape210k is a recently released large MWPs dataset Zhao et al. [2020], including 210,488 problems.
The problems in Ape210k are more diverse and difficult than those in Math23k. Not only the stronger
requirement of common-sense knowledge for getting solutions but also the missing of ground-truth
solution equations or answers, will take extra obstacles for MWP solving. Among all these cases,
the problems without answers can not be used in fully-supervised setting. Besides, the problems
without annotated equations but only answer values can be used in the weakly-supervised learning
setting. Therefore, we follow the rules below to select the usable problems from Ape210k to construct
an Ape-clean dataset, which can be used for the fully-supervised learning setting. (i). We remove
all MWPs that have no answer values nor equations. (ii). We remove all MWPs that only have
answer values without equations. (iii). We remove all MWPs with a problem length m > 100 or an
answer equation length n > 20, as they will bring obstacles for training. (iv). We remove all MWPs
requiring external constants except 1 and π. (v). We remove all duplicated problems with the MWPs
in Math23k because almost all problems in Math23k can be found in Ape-210k. After data filtering,
the Ape-clean dataset contains 81,225 MWPs, including 79,388 training problems and 1,837 testing
problems. The remaining 129,263 problems in Ape210k are regarded as Ape-unsolvable, which can
be used in the pre-training tasks in the settings of self-supervised and weakly-supervised learning.
[3https://github.com/LZhenwen/MWP-BERT](https://github.com/LZhenwen/MWP-BERT)
-----
| [
"Zhenwen, Liang",
"Jipeng, Zhang",
"Lei, Wang",
"Wei, Qin",
"Jie, Shao",
"Xiangliang, Zhang"
] | 2022-01-01T00:00:00 | NeurIPS 2022 MATH-AI Workshop | false | 0 | 0 | null | null | null | null |
Making Large Language Models Better Reasoners with Step-Aware Verifier | Few-shot learning is a challenging task that requires language models to generalize from limited examples. Large language models like GPT-3 and PaLM have made impressive progress in this area, but they still face difficulties in reasoning tasks such as GSM8K, a benchmark for arithmetic problems. To improve their reasoning skills, previous work has proposed to guide the language model with prompts that elicit a series of reasoning steps before giving the final answer, achieving a significant improvement on GSM8K from 17.9% to 58.1% in problem-solving rate. In this paper, we present DIVERSE (Diverse Verifier on Reasoning Step), a novel approach that further enhances the reasoning capability of language models. DIVERSE has three main components: first, it generates diverse prompts to explore different reasoning paths for the same question; second, it uses a verifier to filter out incorrect answers based on a weighted voting scheme; and third, it verifies each reasoning step individually instead of the whole chain. We evaluate DIVERSE on the latest language model code-davinci-002 and show that it achieves new state-of-the-art results on six of eight reasoning benchmarks (e.g., GSM8K 74.4% to 83.2%). | null | [
"Yifei, Li",
"Zeqi, Lin",
"Shizhuo, Zhang",
"Qiang, Fu",
"Bei, Chen",
"Jian-Guang, Lou",
"Weizhu, Chen"
] | 2022-01-01T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2206.02336 | https://arxiv.org/abs/2206.02336 | null |
|
Markup Language for Mathematical Reasoning with LLMs | N/A | null | # Markup Language for Mathematical Reasoning with LLMs
Ryutaro Yamauchi[1], Sho Sonoda[2][,][4], Akiyoshi Sannai[3][,][4], and Wataru Kumagai[4][,][2]
1 ALBERT Inc., Shinjuku, Tokyo, Japan
```
ryutaro [email protected]
```
2 Center for Advanced Intelligence Project, RIKEN, Chuo, Tokyo, Japan
```
[email protected]
```
3 Kyoto University, Department of Physics, Graduate School of Science, Kyoto, Japan
```
[email protected]
```
4 The University of Tokyo, Hongo, Tokyo, Japan
```
[email protected]
```
**Introduction – Large language models (LLMs) can solve diverse tasks without additional**
training, despite being trained on a simple task of predicting the next token, when conditioned
on an appropriate prompt [1]. LLMs have shown remarkable performance in various natural language processing (NLP) tasks, including arithmetic inference. And it is reported that effective
prompts can improve LLMs’ performance. For example, we can improve LLMs’ arithmetic reasoning ability by having them output intermediate steps to solve a problem rather than having
them output the final step directly. This technique is called Chain-of-Thought (CoT) [8, 4].
One challenge in mathematical reasoning with LLMs is how to handle errors in calculations
or reasoning that may occur in LLMs’ output [3]. Many previous studies have taken the
approach of reducing the occurrence of errors themselves by increasing the model size [7], tuning
the dataset [5], or integrating with external tools [2, 9]. However, since LLMs are probabilistic
language generative models, we cannot completely eliminate LLMs’ errors. And since LLMs can
make mistakes, the integration of LLMs and external tools does not always work. Therefore, to
further enhance LLMs’ mathematical reasoning abilities, a different approach is needed, which
is to make LLMs recognize and correct errors that occur in their reasoning.
In this study, we attempt to correct errors that occurred within the CoT when using LLMs to
solve mathematical problems by integrating with an external tool (Python REPL). Specifically,
we have LLMs output both CoT and Python codes and feed back the results of Python code
execution to LLMs. Our study differs from previous studies [2] in that we do not have LLMs
directly generate Python codes to solve the problem but rather use Python codes as part of
CoT. However, we found that simply feeding back Python execution results did not lead LLMs
to behave as we expected: when we let them write codes in CoT, LLMs did not wait for the
execution results but also output the fake execution results. Also, when LLMs make mistakes
in CoT, they tend to think that the code is incorrect, even if they write the correct Python
code and get the results of that execution. To avoid these problems, we defined an XML-like
markup language, gave its grammar to LLMs, and used it to interact with LLMs. The markup
language includes THINK tag for CoT, PYTHON tag for Python code description, and OUTPUT
Figure 1: An overview of the mathematical reasoning process. system and assistant continue
to interact until an answer is obtained. All messages are structured in the markup language.
-----
Markup Language for Mathematical Reasoning with LLMs Yamauchi et al.
tag for Python code execution results, etc. By using these tags, the output text of LLMs is
structured. As a result, we can remove fake Python execution result that LLMs output, and
we can make LLMs correct the errors in CoT based on the Python code execution results. Our
method achieved a success rate of 65.6% in the MATH dataset [3].
**Interact with ChatGPT using markup language - In this study, we used OpenAI Chat-**
GPT (GPT-3.5-Turbo) [6] via API, which is an LLM that takes a sequence of messages and
predicts the next message. The API specification allows us to set three roles as the speaker of
a message: assistant, user, and system, where assistant is an AI, user is a human and system
is a conversation context manager. As shown below, we designed the mathematical reasoning
process as an interaction between system and assistant. First, as system, we present the problem to be solved and define the grammar of the markup language and rules of reasoning (m[s]0[).]
Then, when assistant returns an output ( ˆm[a]k[),][ system][ analyzes it, and if it contains Python]
code, system returns the execution result (m[s]k[). If][ assistant][ outputs any undesired elements,]
such as fake Python execution results, system removes them from the assistant’s output (m[a]k[).]
We made system and assistant repeat this process until assistant outputs the answer (Fig. 1).
The markup language we have
defined is based on XML syntax
and is a set of elements consisting
of content enclosed in a start tag
`<TAG> and end tag </TAG>.` Since
the dataset used to train LLMs is
assumed to include text written in
XML and HTML, LLMs can write
the markup language that is similar to them. The markup language
Figure 2: The proposed markup language.
includes the tags DEFINE, PROBLEM,
```
ANSWER, THINK, PYTHON, and OUTPUT. DEFINE tag is for defining tags and rules. The grammar of
```
the language and the tags are defined by DEFINE tag. PROBLEM and ANSWER tags are for describing the problem and answer, and the solving process ends when assistant outputs ANSWER tag.
```
THINK tag is for describing thoughts. By defining THINK tag as ”a tag for describing thinking
```
step-by-step”, we induce Zero-shot-CoT [4]. PYTHON tag is for describing Python code. When
_assistant uses PYTHON tag, system returns the execution result using OUTPUT tag. By using these_
tags, all messages, including the initial system prompt, are written in the markup language,
thus strongly conditioning assistant to output in the markup language.
We instructed LLMs to trust the contents of OUTPUT tags rather than the contents of THINK
tags. This enables LLMs to ignore errors in CoT by referring to the results of Python code
execution. Without this instruction, LLMs would either assume that the Python code was
incorrect and fall into a loop of repeated debugging or ignore the contents of the OUTPUT tag.
**Experiments and Results - We evaluated our method on the MATH dataset [3], which**
contains 12,500 challenging competitive math problems. We sampled 90 problems from the
dataset and solved them using our method. We generated five answers for each question and
evaluated them using two criteria: 1. whether our method answered the problem correctly at
least once, and 2. whether our method answered the problem correctly by a majority vote. As
a result, 75 problems (83.3%) were answered correctly by the proposed method at least once,
and 59 problems (65.6%) were answered correctly by the majority vote. These scores are higher
than the score of Minerva 540B (50.3 %) [5], which was fine-tuned with technical content.
At the conference, we will report on the effects of OUTPUT tag priority instruction in addition
to the details of the above experiment.
-----
Markup Language for Mathematical Reasoning with LLMs Yamauchi et al.
## References
[1] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel
Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and
Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell,
M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33,
pages 1877–1901. Curran Associates, Inc., 2020.
[2] Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and
Graham Neubig. Pal: Program-aided language models. arXiv preprint arXiv:2211.10435, 2022.
[3] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn
Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset.
_NeurIPS, 2021._
[4] Takeshi Kojima, Shixiang (Shane) Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large
language models are zero-shot reasoners. In Advances in Neural Information Processing Systems,
volume 35, pages 22199–22213, 2022.
[5] Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay
Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam
Neyshabur, Guy Gur-Ari, and Vedant Misra. Solving quantitative reasoning problems with language
models. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances
_in Neural Information Processing Systems, volume 35, pages 3843–3857. Curran Associates, Inc.,_
2022.
[[6] OpenAI. OpenAI: Introducing ChatGPT. https://openai.com/blog/chatgpt, 2022.](https://openai.com/blog/chatgpt)
[7] OpenAI. Gpt-4 technical report, 2023.
[8] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed Chi, Quoc V
Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models. In
S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural
_Information Processing Systems, volume 35, pages 24824–24837. Curran Associates, Inc., 2022._
[9] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao.
React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629,
2022.
-----
| [
"Ryutaro, Yamauchi",
"Sho, Sonoda",
"Akiyoshi, Sannai",
"Wataru, Kumagai"
] | 2023-01-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
MathDSL: A Domain-Specific Language for Concise Mathematical Solutions Via Program Synthesis | We present MathDSL, a Domain-Specific Language (DSL) for mathematical equation solving, which, when deployed in program synthesis models, outperforms state-of-the-art reinforcement-learning-based methods. We also introduce a quantitative metric for measuring the conciseness of a mathematical solution and demonstrate the improvement in the quality of generated solutions compared to other methods. Our system demonstrates that a program synthesis system (DreamCoder) using MathDSL can generate programs that solve linear equations with greater accuracy and conciseness than using reinforcement learning systems. Additionally, we demonstrate that if we use the action spaces of previous reinforcement learning systems as DSLs, MathDSL outperforms the action-space-DSLs. We use DreamCoder to store equation-solving strategies as learned abstractions in its program library and demonstrate that by using MathDSL, these can be converted into human-interpretable solution strategies that could have applications in mathematical education. | It is demonstrated that a program synthesis system (DreamCoder) using MathDSL can generate programs that solve linear equations with greater accuracy and conciseness than using reinforcement learning systems. | [
"Sagnik, Anupam",
"Maddy, Bowers",
"Omar, Costilla-Reyes",
"Armando, Solar-Lezama"
] | 2024-09-25T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.17490 | https://arxiv.org/abs/2409.17490 | https://www.semanticscholar.org/paper/565ce1ca601656ccd8e7660821b6e38a669c347b |
|
MathDivide: Improved mathematical reasoning by large language models | Large language models have been proven to be capable of handling complex linguistic and cognitive tasks. Therefore their usage has been extended to tasks requiring logical reasoning ability such as Mathematics. In this paper, we propose a prompting technique called MathDivide that breaks down the mathematical problem into simpler subproblems. Each of the subproblems is formulated as an algebraic expression whose value is evaluated by the Python code generated by the LLM for the corresponding algebraic expression. The values fed to the Python code are the numerical values provided in the problem statement. The solutions for the subproblems are composed together to obtain the final answer for the problem statement. Finally, the final answer is compared to the correct answer. If the final answer matches the correct answer, it is produced as output else a refinement prompt is fed to the LLM. We experiment with this prompting technique on both closed-source LLM models and open-source LLM models using GSM8K dataset. The results obtained demonstrate that MathDivide was able to significantly outperform the leading prompting technique called Math-prompter. | A prompting technique called MathDivide is proposed that breaks down the mathematical problem into simpler subproblems and demonstrates that MathDivide was able to significantly outperform the leading prompting technique called Math-prompter. | ## by large language models
Saksham Sahai Srivastava and Ashutosh Gandhi
Department of Computer Science
University of Colorado Boulder
_{saksham.srivastava, ashutosh.gandhi}@colorado.edu_
**Abstract. Large language models have been proven to be capable of**
handling complex linguistic and cognitive tasks. Therefore their usage
has been extended to tasks requiring logical reasoning ability such as
Mathematics. In this paper, we propose a prompting technique called
MathDivide that breaks down the mathematical problem into simpler
subproblems. Each of the subproblems is formulated as an algebraic expression whose value is evaluated by the Python code generated by the
LLM for the corresponding algebraic expression. The values fed to the
Python code are the numerical values provided in the problem statement. The solutions for the subproblems are composed together to obtain the final answer for the problem statement. Finally, the final answer is compared to the correct answer. If the final answer matches the
correct answer, it is produced as output else a refinement prompt is
fed to the LLM. We experiment with this prompting technique on both
closed-source LLM models and open-source LLM models using GSM8K
dataset. The results obtained demonstrate that MathDivide was able to
significantly outperform the leading prompting technique called Mathprompter.
**Keywords: Prompt Engineering, Large Language Models, Mathemati-**
cal Reasoning
**1** **Introduction**
Recent years have been witness to significant advancement in the field of natural
language processing. The groundbreaking work of ”Attention is all you need”[1]
which utilized the idea of self-attention was the foundation for large language
models. This progression in the field of NLP was focused on how large amounts
of text data can be used to train algorithms such that they become capable of understanding and generating human-like text. The complex architecture and vast
amount of training data empower the LLMs to perform linguistic and cognitive
tasks requiring computational intelligence. However, recent research shows that
LLMs are also proficient in performing tasks that require analytical and logical reasoning[2 3] Solving mathematics problems is one such task that requires
-----
of the pre-trained large language models. The pre-trained LLMs can be both
open-source[4,5,6,7,8,9] as well as closed source[10,11,12,13,14,15,16]. It has been
noticed from previous literature that LLMs are good at in-context learning[17]
which led to the evolution of different prompting techniques such as zero-shot[18],
few-shot[19], chain-of-thought(CoT), etc. Kojima et al.[18] showed that the zeroshot-CoT prompting approach significantly improves the reasoning capability
of LLM, therefore in this paper, we develop a zero-shot-CoT-based prompting
technique. We will extend the idea of how a student when given a mathematical
problem solves it in a step-by-step process to LLMs. At each step, the student
intends to solve a subproblem whose solution is required to move to the next step
and ultimately arrive at the final answer for the original problem. We take this
approach because the hidden potential in solving the problem by breaking it into
smaller subproblems was demonstrated by Shridhar et al.[20] where they showed
that this approach can even aid smaller models such as GPT-2 to surpass the
performance of larger models such as GPT-3 6B for various reasoning datasets.
The mathematical problem is often formulated as an algebraic expression
which is deemed to be compact and more representative of the problem details.
Not only that, Imani et al.[21] showed that crafting the mathematical word
problem as an algebraic equation can even beat the performance of state-of-theart zero-shot-CoT approach[18] for mathematical reasoning tasks. We combine
the strength of the two approaches to design a prompt in such a way that the
LLM splits the problem into subproblems that can be solved in sequential steps
and for each sub-problem an algebraic expression is constructed whose numerical
value needs to be determined(using the numerical values provided in the original
problem). We feed the problem statement as well as this prompt to the LLM.
In this approach, the LLM is also leveraging the power of the chain of thoughts
indirectly.
We add an instruction to our prompt which directs the LLM to produce a
Python code snippet for the subproblem and perform the mathematical operation using the Python code. Finally, the prompt instructs the LLM to compose
the solution of all the sub-problems and produce the final answer for the original
problem. If the final answer does not match the gold correct answer then we
provide a refinement prompt to the LLM highlighting which subproblem(s) was
incorrectly solved. This human-based feedback has a great potential to drive the
LLM towards the right answer.
In our approach, we try to make the LLM mimic the human problemsolving strategy when solving a mathematical problem by innovatively combining a chain-of-thoughts approach with algebraic expression formulation, breaking
down complex problems into simple sub-problems, making use of python-code
snippets to ensure computational precision and making use of human feedback
based refinement. The strength of human-based feedback lies in helping the
LLM to learn in an in-context fashion which adapts it to avoid making the same
mistake for future problems This nice blend of structured problem decompo
-----
problem-solving.
**2** **Related Work**
Noorbakhsh et al.[22] presented a technique where they leveraged pre-trained
LLMs which were originally trained on language tasks, for solving mathematical
symbolic problems like differentiation and integration. They fine-tuned the LLM
for similar mathematical tasks using a small training set size. This resembled the
knowledge transfer capability of the LLMs from natural language processing to
mathematical reasoning. Later, Yuan et al.[23] studied the impact of factors such
as pre-training loss, amount of supervised data, and amount of augmented data
on the mathematical reasoning capability of LLMs and their exploration revealed
that lowering the pre-training loss can help LLM to solve the math problem
accurately. Yamauchi et el.[24] proposed an LLM-prompting markup language
(LPML) which is a structured language like XML that facilitates combining of
chain-of-thoughts approach with an external computation tool(Python REPL).
This way a structured output was produced from the LLM which was effective in
guiding the LLM to rectify any reasoning or calculation mistakes. A novel idea
of tutoring the LLM was also used by Liu et al.[25] to improve its performance
for mathematical reasoning. They assessed the LLM’s conceptual understanding
of mathematical problems both as a learner and a tutor, however, they noticed
that LLMs were not great at identifying correct misconceptions. This suggested
a gap in LLM’s proficiency in mimicking a human-like behavior of learning and
tutoring. In our approach we take advantage of the strength lying in structured
output as shown by [24] and therefore we direct the LLM to produce algebraic
expressions for each subproblem like a grade-school mathematics student would
do. This also enhances the LLM’s ability to behave as a learner and identify
misconceptions.
In the realm of refinement of the output produced by LLM, Madaan et el.[26]
developed a self-refine technique where the proposed algorithm would enable the
LLM to enhance its outputs through self-generated feedback iteratively. This approach is completely autonomous and utilizes only the model’s initial outputs
as a basis for continuous improvement and does not require any additional supervised data or training. But for prompt refinement, Zhang et al.[27] proposed
a technique called PREFER where they employed a cycle of feedback and refinement in such a way that the LLM was able to leverage its capabilities to
generate improved prompts. They took advantage of the process of prompt bagging in their technique which incorporated both forward and backward thinking
for output evaluation. In an attempt to generate effective prompts, Billa[28] proposed a dual LLM system where one of the LLM is a ’generator’ and the other
is a ’corrector’. The ’generator’ LLM would perform the task and the ’corrector’ LLM would provide feedback and generate improved prompts, thus in a
way collaborating over time to generate prompts that guide the LLM towards
-----
identify and correct the errors. It is highly probable that the automated refinement techniques might not be able to recognize complex reasoning errors or
have contextual misunderstanding while our approach allows for more accurate,
context-aware, and user-centric refinements.
**3** **Methods**
Our methodology starts with decomposition of a mathematical problem(M) into
smaller, simpler and manageable sub-problems( 1, 2, . . ., _k). We parse the_
_M_ _M_ _M_
natural language representation of the original problem to the LLM so that it
gets to identify the key components in the problem and their relationship with
each other. A prompt(p) is also appended at the end of the problem statement.
Each component of the problem is considered as a subproblem( _i) to be solved._
_M_
Based on the prompt p, the LLM formulates an algebraic expression(ei) for each
subproblem( _i) such that it encapsulates the core mathematical operation re-_
_M_
quired to solve the task ( _i). After the construction of ei, the LLM generates_
_M_
a Python code(p[c]i [) such that it computes the numerical value of][ e][i][ when each of]
the variables in ei is substituted with their values. This means that each variable
in the expression ei will be a parameter for the Python function p[c]i [. If the sub-]
problems are related to each other then each subproblem is solved in sequential
steps which interprets to first M1 being solved, then M2 is solved and finally,
_Mk is solved. If the computation of subproblems does not depend on the eval-_
uation of other subproblems then each subproblem is computed independently.
The final answer s to the original problem is obtained by composing the answer
obtained from each subproblem(si).
The next step is to compare the value of s with the actual desired output z.
If s = z, then we feed the LLM with a refinement prompt(pr) which is based
_̸_
on human feedback and exactly highlights the component where the LLM went
wrong. This also helps the LLM to have an in-context learning. If s = z, no such
refinement prompt is needed. Figure 1 shows the architecture diagram of our
technique. The prompt p used in our technique is given below.
**Prompt p:**
_Given the mathematical problem: M, your task is to solve it step by_
_step. Start by breaking down the problem into smaller, manageable sub-_
_problems. For each sub-problem identified:_
1. Describe the sub-problem in your own words and identify the key
_components and their relationships._
2. Formulate an algebraic expression that captures the core mathemati_cal operation needed to solve this sub-problem. Denote this expression_
_as ei._
3. Generate Python code that computes the numerical value of ei. As_sume each variable in e as input parameters to your Python func_
-----
**Fig. 1. Architecture Diagram of MathDivide Prompting technique**
```
def compute_ei(var1, var2, ...):
# Your Python code here
return result
```
4. Execute the Python code to solve the sub-problem and provide the
_solution._
_Proceed sequentially if the sub-problems depend on each other. If they are_
_independent, you may solve them in any order. Once all sub-problems are_
_solved, combine their solutions to provide the final answer to the original_
_problem M._
_If your final answer does not match the expected solution, you will re-_
_ceive specific feedback indicating where adjustments are needed. Use this_
_feedback to refine your approach and improve your solution._
The prompt p is a bit long but since the LLMs that we would be considering
for our experimentation have a context window of more than 2000 tokens it will
not lead to an issue where the LLM is only focusing on the later part of the
prompt.
**4** **Experiment**
We compare the performance of our prompting technique i.e. MathDivide with
Mathprompter[21] technique which previously was able to successfully beat the
-----
3.5-turbo[10] ii) GPT-4[11] as well as open-source LLM models - i) Llama2[5] 7B parameters ii) Llama3[29] - 8B parameters.
We utilize ChatGPT for harnessing the power of both GPT-3.5-turbo and
GPT-4 LLM models. Since the GPT models do not provide cheap API access
as well as our technique requires manual human-based feedback, we conducted
the experimentation with only the first 250 math word problems in GSM8K[30]
dataset. Although, the Mathprompter technique was evaluated for MultiArith
dataset[31] which is a subset of MATH dataset[32] and used 175B parameter
LLM for few-shot and few-shot-CoT settings and 540B parameter LLM(PaLm
540B) for zero-shot and zero-shot-CoT settings but in order to have a fair and
direct comparison we evaluate the performance of Mathprompter[21] technique
on the same LLM models and dataset.
To run the Llama2 and Llama3 models we utilized an open-source project
called Ollama[33]. Ollama[33] provides a memory-quantized version of many
open-sourced LLMs through an API. For our project, we used llama2:7b-chatq2 K and llama3:8b-instruct-q2 K versions of Llama2 and Llama3 respectively.
We implemented a simple Python script to parse the GSM8K dataset and extract and question and answer. We append our custom prompt to each question
and call Ollama[33] API to get the LLM response. The response is then stored
and compared to the answer to verify if LLM was able to solve it or not. In
case of the MathDivide method for each incorrect response, we provide specific
feedback prompts to correct the response and keep the count of the number of
refinement prompts required to get the correct response. Since, ChatGPT-3.5
and Llama models do not have the ability to execute python codes, we manually executed the code snippets generated by these LLMs. We fed the numerical
answer obtained after code execution back to the LLMs.
We use accuracy as the evaluation metric as it serves as an essential baseline
for estimating the model’s effectiveness in producing correct answers. We compute accuracy for up to 3 human-based feedback loops. In other words, if the
LLM can output the correct answer for a problem within 3 refinement prompts
we consider the problem to be correctly solved by our technique. The mathematical expression for the accuracy of our technique can be given as:
_Accuracy% = [No. of problems solved within 3 refinement prompts]_ 100%
Total no. of problems fed to the LLM _×_
For the mathprompter technique, the accuracy is evaluated without any refinement loop. The accuracy values obtained for both prompting techniques on
the chosen LLM models and dataset is depicted in Figure-2. We also evaluate the
performance of the MathDivide technique when used without the refinement loop
as shown in Figure-3. The results indicate that the proposed prompting technique MathDivide was able to beat the performance of Mathpromter technique.
We observe that both the proprietary models ChatGPT-3.5 and ChatGPT-4
demonstrated a significantly higher accuracy for both the prompting techniques
-----
matical reasoning. The exact reason for this is unknown due to the unavailability
of architecture and training data information of OpenAI’s GPT models. Since,
MathDivide technique which incorporated a refinement loop showed better accuracy than the Mathprompter technique for all the four LLM models, it can
be deduced that the refinement loops are considerably effective in enhancing the
performance of large language models for tasks involving analytical and logical
reasoning.
**Fig. 3. Accuracy Comparison of MathDi-**
vide without refinement loop with Mathprompter
**Fig. 2. Accuracy Comparison of MathDi-**
vide with Mathprompter
Figure-3 shows that even without the refinement loop, MathDivide technique
was able to perform better as compared to Mathprompter. This indicates that
the fundamental approach of breaking the complex problem into simpler subproblems is a beneficial approach for Mathematical reasoning tasks. The problems solved correctly after using refinement loops generally required a single
refinement prompt: ”Check the calculations”.
The experiment was conducted on small dataset of size 250 problems from
GSM8K dataset[30], therefore testing on a larger and diverse dataset(including
real-world complicated math word problems) could provide more comprehensive insights into the robustness and generalizability of these prompting techniques. Furthermore, the use of human feedback-based refinement and the inability of some LLM models to execute programs forces the act of manually feeding
prompts to the LLM. This presents a challenge to conducting experimentation
on huge amounts of data. Therefore, the exploration of automated refinement
techniques for Math word problem-solving tasks would be immensely beneficial
for making the experimentation process more scalable.
This research can also be extended to studying the real time learning and
-----
out requiring any sort of retraining. This could involve developing techniques
that can aid the LLMs to incrementally learn from the refinement prompts and
immediately apply the learned strategy to the new problem given to them.
**5** **Conclusion**
This paper proposes a novel prompting technique called MathDivide which significantly improved the mathematical reasoning capability of large language
models. This technique solves a complex mathematical problem by breaking it
down into smaller and simpler sub-problems. It also leverages a human-feedbackbased refinement loop to further enhance its accuracy. This technique was able to
beat the performance of a leading prompting technique called Mathprompter[21]
which had previously exhibited a better accuracy compared to the state-of-theart[18] zero-shot-Cot prompting approach. Therefore, the proposed technique
highlights the significance of solving the math problem in a structured manner.
Lastly, it is important to address the ethical implications of our research.
We ensure fairness by comparing the proposed prompting technique with stateof-the-art math prompting technique on the same sets of LLMs and dataset.
We maintain transparency of this research work by providing clear and detailed
descriptions of the methods and dataset set used for experimentation. This allows
for replication and verification of our proposed work by other researchers in the
field. This also facilitates accountability in our processes. Lastly, our research
does not have any adverse social implications. In this manner, we aim to uphold
the highest ethical standards by ensuring that our research contribution to LLMs
is both innovative and responsible.
**References**
1. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser,
L., Polosukhin, I.: Attention is all you need. Advances in neural information
processing systems 30 (2017)
2. Pan, L., Albalak, A., Wang, X., Wang, W.Y.: Logic-lm: Empowering large language models with symbolic solvers for faithful logical reasoning. arXiv preprint
arXiv:2305.12295 (2023)
3. Chen, M., Ma, Y., Song, K., Cao, Y., Zhang, Y., Li, D.: Learning to teach large
language models logical reasoning. arXiv preprint arXiv:2310.09158 (2023)
4. Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.A., Lacroix, T.,
Rozi`ere, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient
foundation language models. arXiv preprint arXiv:2302.13971 (2023)
5. Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S., et al.: Llama 2: Open foundation
and fine-tuned chat models. arXiv preprint arXiv:2307.09288 (2023)
6 T M N I t d i t 7b A t d d f i ll
-----
Pannier, B., Almazrouei, E., Launay, J.: The refinedweb dataset for falcon llm:
outperforming curated corpora with web data, and web data only. arXiv preprint
arXiv:2306.01116 (2023)
8. Wang, B., Komatsuzaki, A.: GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. `https://github.com/kingoflolz/mesh-transformer-jax (May`
2021)
9. Roziere, B., Gehring, J., Gloeckle, F., Sootla, S., Gat, I., Tan, X.E., Adi, Y., Liu,
J., Remez, T., Rapin, J., et al.: Code llama: Open foundation models for code.
arXiv preprint arXiv:2308.12950 (2023)
10. OpenAI: OpenAI GPT-3.5-Turbo Technical Report. Technical report (2022) Technical Report.
11. OpenAI: OpenAI GPT-4 Technical Report. Technical report (2023) Technical
Report.
12. Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A.,
Barham, P., Chung, H.W., Sutton, C., Gehrmann, S., et al.: Palm: Scaling language modeling with pathways. Journal of Machine Learning Research 24(240)
(2023) 1–113
13. Anil, R., Dai, A.M., Firat, O., Johnson, M., Lepikhin, D., Passos, A., Shakeri, S.,
Taropa, E., Bailey, P., Chen, Z., et al.: Palm 2 technical report. arXiv preprint
arXiv:2305.10403 (2023)
14. Lewkowycz, A., Andreassen, A., Dohan, D., Dyer, E., Michalewski, H., Ramasesh,
V., Slone, A., Anil, C., Schlag, I., Gutman-Solo, T., et al.: Solving quantitative
reasoning problems with language models. Advances in Neural Information Processing Systems 35 (2022) 3843–3857
15. Bai, Y., Kadavath, S., Kundu, S., Askell, A., Kernion, J., Jones, A., Chen, A.,
Goldie, A., Mirhoseini, A., McKinnon, C., et al.: Constitutional ai: Harmlessness
from ai feedback. arXiv preprint arXiv:2212.08073 (2022)
16. Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H.P.d.O., Kaplan, J., Edwards,
H., Burda, Y., Joseph, N., Brockman, G., et al.: Evaluating large language models
trained on code. arXiv preprint arXiv:2107.03374 (2021)
17. Dong, Q., Li, L., Dai, D., Zheng, C., Wu, Z., Chang, B., Sun, X., Xu, J., Sui, Z.:
A survey on in-context learning. arXiv preprint arXiv:2301.00234 (2022)
18. Kojima, T., Gu, S.S., Reid, M., Matsuo, Y., Iwasawa, Y.: Large language models
are zero-shot reasoners. Advances in neural information processing systems 35
(2022) 22199–22213
19. Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al.: Language models are few-shot
learners. Advances in neural information processing systems 33 (2020) 1877–1901
20. Shridhar, K., Stolfo, A., Sachan, M.: Distilling reasoning capabilities into smaller
language models. arXiv preprint arXiv:2212.00193 (2022)
21. Imani, S., Du, L., Shrivastava, H.: Mathprompter: Mathematical reasoning using
large language models. arXiv preprint arXiv:2303.05398 (2023)
22. Noorbakhsh, K., Sulaiman, M., Sharifi, M., Roy, K., Jamshidi, P.: Pretrained language models are symbolic mathematics solvers too! arXiv preprint
arXiv:2110.03501 (2021)
23. Yuan, Z., Yuan, H., Li, C., Dong, G., Tan, C., Zhou, C.: Scaling relationship
on learning mathematical reasoning with large language models. arXiv preprint
arXiv:2308.01825 (2023)
24 Y hi R S d S S i A K i W L l ll ti k
-----
expert tutor: Evaluating math reasoning abilities of large language models with
misconceptions. arXiv preprint arXiv:2310.02439 (2023)
26. Madaan, A., Tandon, N., Gupta, P., Hallinan, S., Gao, L., Wiegreffe, S., Alon, U.,
Dziri, N., Prabhumoye, S., Yang, Y., et al.: Self-refine: Iterative refinement with
self-feedback. Advances in Neural Information Processing Systems 36 (2024)
27. Zhang, C., Liu, L., Wang, C., Sun, X., Wang, H., Wang, J., Cai, M.: Prefer:
Prompt ensemble learning via feedback-reflect-refine. In: Proceedings of the AAAI
Conference on Artificial Intelligence. Volume 38. (2024) 19525–19532
28. Billa, J.G., Oh, M., Du, L.: Supervisory prompt training. arXiv preprint
arXiv:2403.18051 (2024)
[29. Meta: Meta llama3 model. https://ai.meta.com/blog/meta-llama-3/](https://ai.meta.com/blog/meta-llama-3/)
30. OpenAI: Openai grade school math repository. `https://github.com/openai/`
```
grade-school-math
```
31. Roy, S., Roth, D.: Solving general arithmetic word problems. arXiv preprint
arXiv:1608.01413 (2016)
32. Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart, S., Tang, E., Song, D.,
Steinhardt, J.: Measuring mathematical problem solving with the math dataset.
arXiv preprint arXiv:2103.03874 (2021)
[33. Ollama: Ollama. https://ollama.com/](https://ollama.com/)
-----
| [
"Saksham Sahai, Srivastava",
"Ashutosh, Gandhi"
] | 2024-05-12T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2405.13004 | https://arxiv.org/abs/2405.13004 | https://www.semanticscholar.org/paper/cca77ff5dd95e395ff0fc725982199340f774c6c |
MathGLM-Vision: Solving Mathematical Problems with Multi-Modal Large Language Model | Large language models (LLMs) have demonstrated significant capabilities in mathematical reasoning, particularly with text-based mathematical problems. However, current multi-modal large language models (MLLMs), especially those specialized in mathematics, tend to focus predominantly on solving geometric problems but ignore the diversity of visual information available in other areas of mathematics. Moreover, the geometric information for these specialized mathematical MLLMs is derived from several public datasets, which are typically limited in diversity and complexity. To address these limitations, we aim to construct a fine-tuning dataset named MathVL, and develop a series of specialized mathematical MLLMs termed MathGLM-Vision by conducting Supervised Fine-Tuning (SFT) on MathVL with various parameter-scale backbones. To extensively evaluate the effectiveness of MathGLM-Vision, we conduct experiments on several public benchmarks and our curated MathVL-test consisting of 2,000 problems. Experimental results demonstrate that MathGLM-Vision achieves significant improvements compared with some existing models, including backbone models and open-source mathematical MLLMs. These findings indicate the importance of diversity dataset in enhancing the mathematical reasoning abilities of MLLMs. | null | [
"Zhen, Yang",
"Jinhao, Chen",
"Zhihuan, Jiang",
"Wenmeng, Yu",
"Zhengxiao, Du",
"Weihan, Wang",
"Bin, Xu",
"Wenyi, Hong",
"Yuxiao, Dong",
"Jie, Tang"
] | 2024-09-09T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.13729 | https://arxiv.org/abs/2409.13729 | https://www.semanticscholar.org/paper/0696511000c091b4eecabc3691714a3c83685e0e |
|
MathLearner: A Large Language Model Agent Framework for Learning to Solve Mathematical Problems | With the development of artificial intelligence (AI), large language models (LLM) are widely used in many fields. However, the reasoning ability of LLM is still very limited when it comes to mathematical reasoning. Mathematics plays an important role in all aspects of human society and is a technical guarantee in the fields of healthcare, transport and aerospace, for this reason, the development of AI big language models in the field of mathematics has great potential significance. To improve the mathematical reasoning ability of large language models, we proposed an agent framework for learning to solve mathematical problems based on inductive reasoning. By emulating the human learning process of generalization of learned information and effective application of previous knowledge in new reasoning tasks, this framework has great performance in the mathematical reasoning process. It improves global accuracy over the baseline method (chain-of-thought) by 20.96% and solves 17.54% of the mathematical problems that the baseline cannot solve. Benefiting from the efficient RETRIEVAL method, our model improves the ability of large language models to efficiently use external knowledge, i.e., the mathematical computation of the model can be based on written procedures. In education, our model can be used as a personalised learning aid, thus reducing the inequality of educational resources. | This work has proposed an agent framework for learning to solve mathematical problems based on inductive reasoning that improves global accuracy over the baseline method (chain-of-thought) and solves 17.54% of the mathematical problems that the baseline cannot solve. | # MathLearner: A Large Language Model Agent Framework for Learning to Solve Mathematical Problems
#### Wenbei Xie[1], Donglin Liu[1, *], Haoran Yan[1, *], Wenjie Wu[1], and Zongyang
Liu[1]
1
Beijing-Dublin International College, Beijing University of Technology, 100 Pingleyuan,
Chaoyang District, 100124, Beijing, China
*These authors contributed equally to this work.
#### 6th August 2024
**Abstract**
With the development of artificial intelligence (AI), large language models (LLM)
are widely used in many fields. However, the reasoning ability of LLM is still very
limited when it comes to mathematical reasoning. Mathematics plays an important
role in all aspects of human society and is a technical guarantee in the fields of
healthcare, transport and aerospace, for this reason, the development of AI big
language models in the field of mathematics has great potential significance. To
improve the mathematical reasoning ability of large language models, we proposed
an agent framework for learning to solve mathematical problems based on inductive
reasoning. By emulating the human learning process of generalization of learned
information and effective application of previous knowledge in new reasoning tasks,
this framework has great performance in the mathematical reasoning process. It
improves global accuracy over the baseline method (chain-of-thought) by 20.96 %
and solves 17.54 % of the mathematical problems that the baseline cannot solve.
Benefiting from the efficient RETRIEVAL method, our model improves the ability
of large language models to efficiently use external knowledge, i.e., the mathematical
computation of the model can be based on written procedures. In education, our
model can be used as a personalised learning aid, thus reducing the inequality of
educational resources.
**Keywords: Large Language Models, Reasoning, Retrieval-augmented Generation**
-----
### 1. INTRODUCTION
With the integration of artificial intelligence (AI) and natural language processing (NLP)
technologies into people’s daily lives, the demand for more advanced and capable language models has become imperative. These language models, often referred to as Large
Language Models (LLMs) have demonstrated remarkable capabilities in dealing with automating intricate tasks, such as understanding and generating human-like text OpenAI et
al., 2023. In the real world, LLMs have been applied in various applications, including AI
chatting systems like ChatGPT to intelligent coding assistant tools like GitHub Copilot
Nat Friedman, 2021; OpenAI et al., 2023.
However, one area where their potential remains largely untapped is in the domain
of mathematics. Mathematics holds immense significance in various aspects of daily life,
serving as the foundation for fields such as science, engineering, finance, and technology.
Mathematics holds immense significance in various aspects of daily life, serving as the
foundation for fields such as science, engineering, finance, and technology. It provides
the language and tools necessary for understanding and modelling natural phenomena,
designing innovative technologies, and making informed decisions in business and finance.
From calculating trajectories in space exploration to optimizing algorithms in computer
science, mathematics permeates every aspect of modern society. However, despite the
importance of mathematics, language models have not been extensively explored in this
domain. Unlike other types of problems, mathematical tasks demand precise and complex
reasoning, pattern recognition, and algorithmic thinking, posing unique challenges for
language models Ananthaswamy, 2023. The ability to accurately solve mathematical
problems is crucial not only for academic success but also for practical applications in
fields such as engineering, finance, and data analysis. Therefore, there is an impending
demand to enhance the reasoning ability of LLMs through training and testing using
Math problems to fit in the growing needs for applications.
Previous efforts to further improve the reasoning ability of LLMs, such as Chain of
Thought (CoT) and Retrieval-Augmented Generation (RAG), have made a significant
impact in enhancing the problem-solving capabilities Lewis et al., 2020; Wei et al., 2022.
CoT, for instance, focuses on guiding language models using a series of reasoning steps
to arrive at more accurate solutions, while RAG leverages retrieval-based techniques to
augment generation, enabling LLMs to learn solutions from existing data. Although
these approaches have made promising results, they also exhibit limitations in solving
problems that are similar to the problems seen before. For example, previous RAG studies
have shown that solutions discovered by LLMs may suffer from overfitting, particularly
when relying on traditional retrievers like BM-25 or Density Paragraph Retrievers (DPRs)
Pradhan, 2023. These traditional retrievers normally rely on keyword match, which will
lead to solutions that have exactly the same keywords as the query problem and struggle
to generalize to similar but unseen problems. Consequently, there is a need for more
-----
advanced techniques that can address the limitations of existing approaches and provide
a generalizing ability to solve similar questions across diverse problem domains.
To address the limitations of previous approaches, we have developed a novel framework called MathLearner, inspired by the principles of human learning, particularly inductive reasoning. Human learning often involves inferring general principles or solutions
from specific examples and applying this knowledge to novel situations. Similarly, MathLearner aims to empower LLMs to learn to resolve math problems by leveraging inductive
reasoning principles.
MathLearner operates in three main stages, mirroring the stages of human learning:
1. Learning from Examples: The framework begins by exposing the LLM to a diverse set of math problems and their solutions, allowing it to learn from annotated
examples.
2. Memorizing Solving Methods: MathLearner then focuses on memorizing various
problem-solving methods and techniques, enabling the LLM to build a repository of
strategies for tackling different types of math problems. approach problem-solving
in a more systematic and structured manner.
3. Recalling Previous Knowledge: Finally, MathLearner equips the LLM with the ability to recall and apply its learned knowledge to solve new math problems, mimicking
the process of retrieving and applying previously learned solutions.
Through these principles and design, MathLearner aims to not only enhance the reasoning ability of LLMs but also to enable continuous and instant learning, ultimately
improving their ability to solve math problems accurately and efficiently. The potential
impact of enhancing LLMs’ math problem-solving abilities extends beyond the realm of
AI research to education and real-world applications. By improving LLMs’ ability to solve
math problems, MathLearner can lower the barrier to self-study among students, providing a more diverse learning platform, and facilitating easier review of incorrect answers.
This can revolutionize education by empowering students to learn independently, offering
varied learning experiences, and simplifying the process of reviewing mistakes.
The dataset we used to train and test is MATH Hendrycks et al., 2021. It has many
challenging mathematics problems that are difficult for LLMs to solve in different categories, such as algebra and geometry, which is useful for us to train and test the problemsolving skills of LLMs for different categories of problems. In addition to that, one of
the key strengths of the MATH dataset is its provision of step-by-step solutions for every
problem, offering detailed insights into the reasoning processes required to arrive at the
correct solutions. By leveraging the step-by-step solutions provided in the dataset, we
can effectively train the model to emulate human-like problem-solving approaches and
improve its overall performance.
In summary, this paper presents several key contributions:
-----
- We propose a new retrieval method based on features to retrieve solutions to similar
problems for the encountered problem.
- We design a learning framework which can effectively reuse previously learned knowledge to solve current problems.
### 2. RELATED WORK
In the field of enhancing LLM reasoning ability, numerous studies have been conducted
to explore the area of LLM reasoning. Step-by-step reasoning, think-in-memory, and
few-shot learning are three hot topics to address this issue. Our framework implements
these three concepts to varying degrees, improving LLM’s mathematical reasoning. The
following is an introduction to these three concepts and related research.
**Step-by-step reasoning Many works indicate that step-by-step reasoning improves**
LLM performance Jiang et al., 2024; Lampinen et al., 2022; Su et al., 2023; Wen et al.,
2024; Y. Zhang et al., 2024; Y. Zhang et al., 2023and correspondingly, that this performance can be further enhanced with appropriate guidance and use of tools Su et al.,
2023; Wen et al., 2024; Y. Zhang et al., 2023. Mingyu and his team found that chain of
thought could significantly improve the predictive accuracy of the LLM’s reasoning across
multiple datasets Jin et al., 2024. They stated that longer reasoning chains containing
misleading information still enhance the reasoning performance of LLMs, indicating that
chain length is more vital than accuracy for problem-solving. Zhang et al. and Zelikman
et al. also point out that LLMs with step reasoning progress can thoroughly and effectively utilise existing conditions to derive novel understandings Zelikman et al., 2023; Y.
Zhang et al., 2024. Wei et al. first introduced the concept of chain of thought (CoT),
a series of intermediate reasoning steps in 2022Wei et al., 2022. Experiments showed
that normalised thought chain cues greatly improved the ability of large language models
to perform complex reasoning across a range of arithmetic, commonsense and symbolic
reasoning tasks. Later Lyu et al. proposed the faithful CoT (FCoT) based on CoT. Unlike CoT, faithful CoT reflects how the model arrives at the answer, which improves the
empirical performance of LLMLyu et al., 2023. It outperforms standard CoT on 9 of 10
benchmarks from 4 diverse domains, with a relative accuracy gain of 6.3% on Math Word
Problems (MWP), 3.4% on Planning, 5.5% on Multi-hop Question Answering (QA), and
21.4% on Relational Inference. These findings and achievements inspire us to enhance the
mathematical problem-solving capabilities of Large Language Models (LLMs) through
step-by-step learning and reasoning applications.
**Think-in-Memory Thinking in memory addresses LLM repetitive reasoning using**
locality-sensitive hashing in dialogue to retrieve long-term memory efficiently. Jiang et
al., 2024; Liu et al., 2023; Luo, Li et al., 2024Referring to useful historical information
not only reduces the process of repetitive reasoning but also increases the accuracy of the
results. Knowledge graphs (KG) and vector databases (VecBD) are two methods that are
-----
widely used to store helpful information<empty citation> KG-based LLM reasoning
uses embedding space to represent the entities and special model architectures for the
reasoning process Jiang et al., 2024; Luo, Yuan-Fang et al., 2024; Yasunaga et al., 2022;
X. Zhang et al., 2022. One earlier work is by Logan IV et al., where they proposed
mechanisms for a neural language model to select and incorporate facts from a knowledge
graph relevant to the text context, enhancing the factual generation capabilities of LLMs
au2 et al., 2019. In the meantime, it is necessary to determine an effective method
for fusing and reasoning over the KG representations and the language context, which
provides situational constraints and nuances. Further research has led to the development
of techniques to enhance the model’s capability for utilizing the knowledge graph (KG).
Abu-Rasheed et al. put forth an approach for LLM prompts to reduce the risk of model
hallucinations and safeguard against including erroneous or imprecise information in 2024
Abu-Rasheed et al., 2024. In 2024, Jiang et al. proposed KG-AgentJiang et al., 2024, an
autonomous LLM-based agent framework. This enables a small LLM to actively make
decisions over the reasoning process aid with KGs. However, in terms of the speed of access
to information, VecDBs have the advantage over KG Mittal et al., 2017, and VecDBs are
well-capable for retrieval applications Asai et al., 2023. VecBD-based LLM reasoning
uses vector databases to build specialized embeddings for special category information
with domain-specific embeddings to support efficient information retrieval during the
processing progress Sacolick, n.d. Numerous studies and practical applications have shown
the significant impact of VecDBs. For instance, Azure AI Search, previously known as
Cognitive Search, enhanced its vector search capabilities using Qdrant‘Qdrant 3’, 2023.
Similarly, Pinecone’s CanopyPinecone Systems Inc., 2023, an open-source framework,
utilizes Pinecone’s VecDB to develop and deploy ready-to-use RAG systems. Given that
vector databases excel in efficiently retrieving similar information, our framework leverages
vector databases for storing learning histories. This think-in-memory method facilitates
more efficient information utilization in the User module of our framework, optimizing
our system’s performance by harnessing the strengths of vector databases for rapid and
relevant data access during the think-in-memory process.
**Few-shot learning It is a method that can predict new classes when only one or a**
few labels are available for each class of training data Fei-Fei et al., 2006; Xu et al., 2022.
This method is particularly valuable in scenarios where data collection costs are high, data
is scarce, or when there is a need for models to adapt to new tasks quickly. Fei-Fei et al.,
2006. Powerful few-shot learning capability also leveraged LLMs to perform reasoning over
KG. In our research, we implement this method with GPT-4.0 to address the high costs in
time and resources required for processing queries, typically 8-10 tokens and 1-2 minutes
each. This method allows us to minimize expenses while achieving comparable results
to extensively annotated methods. By carefully selecting and crafting prompt examples,
we guide the model to generalize from minimal input efficiently, leveraging GPT-4.0’s
advanced training. This approach not only reduces costs but also maintains high-quality
-----
outcomes, demonstrating the effectiveness of few-shot learning in resource-constrained
scenarios.
By applying these three concepts to our framework, the mathematical reasoning of
our framework is enhanced. more details in the framework will be elaborated in the
methodology section.
### 3. METHODOLOGY
In this section, we provide an overview of the MathLearner framework. First, we describe
the problem that this paper desires to solve more accurately. After that, a detailed
description of the learning module is presented. Finally, we introduce the structure of the
application module. Specifically, a further discussion is made about the generation of sets
of features to describe questions, which is a general challenge in the framework.
#### 3.1 Problem Statement
We consider a basic pattern of human learning: For a category of mathematical problems,
one or several problems and their solutions are first given to a student. Based on these
examples the students generalize a general solution to the type of mathematical problems.
Besides, the student would remember a set of generalized features of the given problems.
When the student encounters a new problem, the student would also use a set of previously
learned features to describe the problem. By searching through the memory using feature
match, the student may find a useful solution that previously learned to solve the new
problem. Thus, our objective is to use LLMs to simulate the whole procedure of human
learning and solve mathematical problems. We noticed that the whole pattern mentioned
above can be divided into two parts, learning and applying. Therefore, the MathLearner
framework contains two modules to simulate the pattern.
#### 3.2 Pipeline of Learning Module
**3.2.1** **Modified Solutions Generation and Verification**
As demonstrated in Figure 3.1, the second step of the pipeline is to generate modified
solutions. To effectively be useful for future problem solving, the modified solutions
should be in the form of a program. Therefore, we define a sub-pipeline for generating
the modified solutions under the instruction of Parsel, an efficient program generation
method. First, LLM is asked to divide the given solution into steps. Then, several Parcel
programs, which can be understood as a pseudo-code, are generated based on the steps
in the divided solution. Finally, the Parsel programs are translated into one single real
program containing multiple functions. Compared to the original Parsel implementation,
we decided to enter solutions with questions to simulate the human learning process
better. As the third step, we also learn the validation method from Parsel. After one
-----
Figure 1: a case study on using MathLearner, which performs two main steps: extracting
feature values from the input problem and finding similar solution steps in the database
based on these feature values. If similar solution steps exist in the database, we send
these steps along with the new input problem to GPT so that GPT can generate Python
code to solve the problem. If similar problem-solving steps do not exist in the database,
we send the problem directly to GPT, allowing it to generate the Python code that solves
it. This process makes the problem-solving process more efficient and precise.
modified solution is generated, the program in the solution would be run to check the
correctness of the solution. If it does not pass, the LLM will be required for a new version
of a modified solution.
**3.2.2** **Feature Generation and Storage**
Consider two mathematical problems with similar solutions, one problem is a word problem, which means the information on the problem is presented in ordinary language, while
another problem is expressed by mathematical notation. To match these two problems
into one category can be a challenge by using traditional retrieval methods, which are generally based on keyword match. Thus, a novel retrieval method should be developed to
match problems that have similar solutions. As mentioned before, humans will store the
feature representation of the learned problems. In detail, humans will remember some
features for each step of the solution for the problem. When they encounter a similar
-----
Figure 2: An overview of the pipeline of Learning Module. From left to right, the large
language model 1) receives examples, 2) generates modified solutions, 3) verifies and
modifies possible errors in the solution, and 4) generates features for questions. Then, the
features and solution will be saved into a database.
problem, a set of features will also generated for each of the possible steps of the new
solution. These features will be used to retrieve previously learned solutions. To simulate
this procedure, the Learning module will ask the LLM to generate two types of features.
One is a general description of the type of problem, like algebra or geometry. The other
features are used to describe the operations or theorems used in each of the steps. After
this, the set of features will be translated into vectors and stored in a vector database for
the similarity search in the future.
#### 3.3 Pipeline of Application Module
Figure 3: An overview of the pipeline of Application Module. From left to right, the
large language model 1) receives questions, 2) generates features for entered questions, 3)
retrieves the closest solution, and 4) generates new solutions for entered questions.
-----
**3.3.1** **Extracting Features from Problems**
As demonstrated in Figure 3.2, the second step of the pipeline involves extracting two
types of features from the provided problem: a general description of the problem and
the operations or theorems involved in each step. In step three, these extracted features
are converted to vector form and used to perform a vector search of the database. The
purpose of this step is to quickly match related problems in the database and find similar
solutions. This approach allows LLM to find similar problems faster and to write the code
to solve the problem from the code stored in the database. This process mimics human
problem-solving behaviour by identifying a characteristic description of the problem and
solving it using the appropriate operations or theorems.
**3.3.2** **Feature matching and answer generation**
Matching of features can be done through vector search in the third step. When the
feature matching is successful, the solution ideas stored in the database are sent to the
LLM along with the topic so that it draws on the stored solution ideas to generate solution
ideas for the new problem. This process, which draws on the solution ideas in the database,
helps to improve the correctness of the newly generated solution ideas and speeds up the
generation of solution ideas by the LLM. In this step, we took into account the fact
that when the problem contains completely new features, it means that this step will not
match what is in the database. Therefore, we take the same approach as the training
sub-pipeline and let the LLM generate the corresponding solution. This maximizes the
correctness of the solutions.
#### 3.4 Evaluation Metrics
The effectiveness of the MathLearner framework was quantitatively assessed using a suite
of rigorously defined metrics. For comparison, the Chain of Thought (CoT) framework
served as the baseline. The CoT approach involves breaking down complex problems into
simpler, sequential steps, thereby facilitating deeper reasoning and understanding. This
method has proven effective in enhancing the problem-solving capabilities of language
models by guiding them through a structured thought process.
The evaluation metric we used is also from MATH dataset. We randomly select 150
questions from precalculus section, which is useful to test the performance of our system.
In our study, we categorized each test question into one of four quadrants based on
whether a similar problem was retrieved and whether the final calculated result was correct
(see Figure 3.3). These quadrants serve as a framework for evaluating the performance
of MathLearner and are instrumental in calculating the various metrics detailed in the
subsequent sections.
- Global Accuracy: Global Accuracy measures the proportion of correctly solved problems among all attempted problems. MathLearner’s overall accuracy is calculated as
-----
Figure 4: The category of the questions (U = C&R + C&¬R + ¬C&R + ¬C&¬R)
the ratio of correctly solved problems to the total number of problems. This metric
reflects the proportion of problems correctly solved across all attempts and serves
as a primary indicator of the system’s overall effectiveness. This metric is scaled
between 0 and 1, where 0 indicates no problems solved correctly, and 1 represents
perfect performance across all problems.
Global Accuracy = [Number of Correct Solutions (][C][&][R][ +][ C][&][¬][R][)] (1)
Number of All Question in the Dataset (U ) _[,][ [0][,][ 1]]_
- Accuracy Contribution: This metric evaluates the effectiveness of MathLearner in
finding similar results and producing correct solutions. Specifically, it measures the
proportion of problems where MathLearner finds similar results and provides correct
solutions among all correctly solved problems. It is calculated by:
Accuracy Contribution = [Correct Solutions using Similar Solution (][C][&][R][)]
Number of Correct Solutions (C&R + C&¬R) _[,][ [0][,][ 1]]_
(2)
**– Profitability (Benefit): Profitability quantifies the extent to which finding sim-**
ilar results contributes to correct solutions. It is calculated as the ratio of
global accuracy to the accuracy achieved by CoT, minus 1. It is calculated by:
Global Accuracy
Profitability = (3)
CoT Global Accuracy
_[−]_ [1][,][ [0][,][ +][∞][)]
A higher profitability indicates a higher times contribution to finding similar
results to correct solutions than the baseline.
- Precision Accuracy: This metric assesses the accuracy of the system in scenarios
where the questions find a similar solution. This metric is crucial for applications
that require high precision and is calculated as:
Correct Solutions using Similar Solution (C&R)
Precision Accuracy =
Number of Problem with Similar Solution (C&R + _C&R)[,][ [0][,][ 1)]_
_¬_
(4)
-----
A higher score on this metric reflects the system’s ability to handle precise queries
effectively. When the training set is large enough, most of the problems should find
a similar solution, making the global accuracy approach the precision accuracy.
- Target Achievement Rate: The definition of the target is the framework is expected
to help LLM to answer all the problems using retrieved solutions, which requires
the framework can correctly answer all the problems which CoT failed to do. Thus,
the goal achievement rate can be calculated using the following formula:
Target Achievement Rate = [MathLearner Correct Solutions][ −] [CoT Correct Solutions], [0, 1]
Total Number of CoT Unresolved Problems
(5)
Here, a score of 1 signifies that the framework successfully resolved every problem that CoT failed to solve, while a score of 0 would mean it failed to solve any
additional problems.
By employing these evaluation metrics, we aim to provide a comprehensive assessment
of MathLearner’s performance, considering not only its accuracy but also its effectiveness
in finding similar results and achieving correct solutions across various problem domains.
### 4. Results
We choose "Chain-of-Thought" as our baseline. In this baseline, we do not use any
prompts and let GPT generate the Python code to solve the problem in a step-by-step
approach. In this case, the probability that the code output by GPT can solve the problem
is defined as CoT.
Table 1: Performance of Chain-of-Thought (CoT)
|Evaluation Metrics|Value|Description|
|---|---|---|
|Global Accuracy|41.33%|Number of Precisely Correct Solutions using CoT = 0.4133 Number of All Quetion in the Dataset|
We obtained the relevant test results for our program through multiple tests. We have
used four evaluation criteria to show the performance and accuracy of our program. As
shown in the table below:
- Global Accuracy (Overall): By using our MathLearner for problem-solving, we
successfully solved 75 out of 150 problems. This result demonstrates the overall
performance of MathLearner in problem-solving, which includes solving unknown
problems, i.e., problems that have not been studied, as well as problems that have
been studied similarly.
-----
Table 2: Performance of MathLearner
|Evaluation Metrics|Value|Description|
|---|---|---|
|Global Accuracy|50%|Number of Correct Solutions using MathLearner = 0.5 Number of All Question in the Dataset|
|Profitability (Benefit)|20.96%|Global Accuracy 1 = 0.2096 − CoT Global Accuracy|
|Precision Accuracy|51.55%|Correct Solutions using Similar Solution = 0.5155 Number of Problem with Similar Solution|
|Target Achieve- ment Rate|17.54%|MathLearner Correct Solutions−CoT Correct Solutions = 0.1754 Total Number of CoT Unresolved Problems|
- Profitability (Benefit): This metric quantifies the extent to which the program contributes to having learned a solution to a similar problem, reflecting how effectively
our program uses existing knowledge to find the right solution.
- Precision Accuracy: This metric indicates how effectively the system finds and solves
problems when similar results are available, it can be considered an important indicator of the correctness of our program.
- Target Achievement Rate: Measures the framework’s effectiveness in enabling LLM
to use retrieved solutions to correctly answer all the problems which CoT failed to
address.
By comparing the global accuracy of MathLearner (50%) with the global accuracy
of CoT (41.33%), we can observe a significant improvement in accuracy with the use of
MathLearner, approximately an increase of 10%. In addition, we believe that precision
accuracy better reflects the actual improvement effect of the maths learner, and we observe
that precision accuracy (51.55%) is 1.55% higher than global accuracy of MathLearner
(50%), which indicates that our program has some learning ability to deal with the learnt
maths problems more effectively.
### 5. Discussion
#### 5.1 Advantages
In this thesis, we proposed the "MathLearner" framework, designed to enhance the mathematical problem-solving capabilities of Large Language Models (LLMs). By integrating
principles of human learning, specifically inductive reasoning, MathLearner equips LLMs
to learn and resolve mathematical challenges effectively.
The framework has two key achievements. First, MathLearner has demonstrated
considerable success in boosting LLMs’ capabilities in solving mathematical problems,
-----
especially those requiring precise reasoning and pattern recognition. By introducing a
feature-based retrieval method, the framework effectively enhances the model’s generalization ability, allowing it to approach new and unseen problems with more accurate
solutions. The second Strength of the framework is about diverse training data. By leveraging the MATH dataset, MathLearner has access to a broad spectrum of challenging
mathematical problems. This diversity is crucial, as it mirrors the real-world complexity
and variety of mathematical tasks highlighted in the introduction. The dataset’s comprehensive provision of step-by-step solutions not only deepens the model’s understanding but
also enhances its ability to engage in structured mathematical reasoning, a fundamental
aspect of developing reliable AI tools in educational and technological applications.
#### 5.2 Limitations and Future Work
Although MathLearner has demonstrated an outstanding ability to answer mathematics
problems, we note that there are several limitations in our research.
**Flaws in simulation of human learning process** Although MathLearner can store
learned solutions in an external database through features, and search through the database to retrieve similar solutions using the encountered problem’s feature, the simulation
of the human learning process is still not complete. LLMs still do not have the memory
of what they have "learned" by storing the solutions in an external database. When encountering a new problem, LLMs still use their pre-trained knowledge to generate features
for the problems. Thus, one possible reason why MathLearner can perform better than
the baseline is that LLMs have gained impressions of the problems in their pre-trained
knowledge. To solve this problem, future work can focus on empowering LLMs to perform
real-time updates of knowledge. Using all the possible features to fine-tune LLMs can be
a primary solution to this problem.
**The ambiguity of the definition of feature** In this paper, features for one problem
contain the category of the problem, like geometry or calculus, they also contain the
theorems used in each step of the solution, like the solution formula of quadratic equations
with one variable. However, these features are limited compared with features learned
by humans. Humans can learn specific structures of problems and use these structural
features to put problems into specific question types. All the problems in one question
type can share a common solution. It is difficult to generate this type of feature since
it is hard to describe this kind of feature through words. Furthermore, the lack of pretrained knowledge can result in inconsistency of generated features regarding the same
structure. Future research can try to design a standardised language for regular words in
features and fine-tune LLMs to generate features according to the standardised language
of features.
-----
**Flaws in the format of modified solution** In this paper, all the textual solutions will
be translated into Python programs. However, for some specific categories of problems,
like geometry, it can be difficult to use programs to solve them. In fact, programs are more
fittable to be used to do calculations. Thus, a better practice should use both programs
and natural language to form modified solutions, and only use programs as tools to do
calculation tasks.
### 6. Conclusion
We introduce an agent-based framework designed to tackle mathematical problems through
inductive reasoning. This framework mimics the human ability to generalize from learned
information and apply it effectively to new problem-solving scenarios. Demonstrated to
excel in mathematical reasoning, it has significantly enhanced global accuracy by 20.96%
compared to the baseline Chain of Thought method, and has successfully resolved 17.54%
of the problems that the baseline method failed to solve. Key contributions of MathLearner include its innovative use of feature-based retrieval methods which enhance the
model’s ability to generalize from learned examples and apply this knowledge effectively
to new, similar problems. These advancements have proven particularly effective in educational contexts, where MathLearner can serve as a personalized learning aid, thus democratizing access to quality educational resources.
Despite these advances, the research still points out some limitations. For example,
the framework’s feature generalisation for mathematical problems was not based on the
original learning content, fewer types of features were generalised, and some topics could
not be solved using only procedures to represent the solution process. Future developments will focus on extending the adaptive capabilities of LLM by incorporating more
dynamic data processing and advanced reasoning techniques. This may involve exploring
unsupervised learning paradigms that enable LLMs to independently acquire and apply
new knowledge from unstructured data, thus expanding their range of applications.
### References
Abu-Rasheed, H., Weber, C., & Fathi, M. (2024). Knowledge graphs as context sources
for llm-based explanations of learning recommendations.
[Ananthaswamy, A. (2023). In ai, is bigger better? Nature, 615(7951), 202–205. https:](https://ucd.idm.oclc.org/login?url=https://www.proquest.com/scholarly-journals/ai-is-bigger-better/docview/2786242522/se-2)
[//ucd.idm.oclc.org/login?url=https://www.proquest.com/scholarly-journals/ai-](https://ucd.idm.oclc.org/login?url=https://www.proquest.com/scholarly-journals/ai-is-bigger-better/docview/2786242522/se-2)
[is-bigger-better/docview/2786242522/se-2](https://ucd.idm.oclc.org/login?url=https://www.proquest.com/scholarly-journals/ai-is-bigger-better/docview/2786242522/se-2)
Asai, A., Min, S., Zhong, Z., & Chen, D. (2023, July). Retrieval-based language models
and applications. In Y.-N. ( Chen, M. Margot & S. Reddy (Eds.), Proceedings of
_the 61st annual meeting of the association for computational linguistics (volume 6:_
-----
_[Tutorial abstracts) (pp. 41–46). Association for Computational Linguistics. https:](https://doi.org/10.18653/v1/2023.acl-tutorials.6)_
[//doi.org/10.18653/v1/2023.acl-tutorials.6](https://doi.org/10.18653/v1/2023.acl-tutorials.6)
au2, R. L. L. I., Liu, N. F., Peters, M. E., Gardner, M., & Singh, S. (2019). Barack’s wife
hillary: Using knowledge-graphs for fact-aware language modeling.
Fei-Fei, L., Fergus, R., & Perona, P. (2006). One-shot learning of object categories. IEEE
_[Transactions on Pattern Analysis and Machine Intelligence, 28(4), 594–611. https:](https://ucd.idm.oclc.org/login?url=https://www.proquest.com/scholarly-journals/one-shot-learning-object-categories/docview/67798033/se-2)_
[//ucd.idm.oclc.org/login?url=https://www.proquest.com/scholarly-journals/one-](https://ucd.idm.oclc.org/login?url=https://www.proquest.com/scholarly-journals/one-shot-learning-object-categories/docview/67798033/se-2)
[shot-learning-object-categories/docview/67798033/se-2](https://ucd.idm.oclc.org/login?url=https://www.proquest.com/scholarly-journals/one-shot-learning-object-categories/docview/67798033/se-2)
Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart, S., Tang, E., Song, D., &
Steinhardt, J. (2021). Measuring mathematical problem solving with the math
dataset.
Jiang, J., Zhou, K., Wayne, X. Z., Yang, S., Chen, Z., Zhu, H., & Ji-Rong, W. (2024). Kgagent: An efficient autonomous agent framework for complex reasoning over knowledge graph [Copyright - © 2024. This work is published under http://arxiv.org/licenses/nonexclu
distrib/1.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions,
you may use this content in accordance with the terms of the License; Last updated
[- 2024-02-21]. https://ucd.idm.oclc.org/login?url=https://www.proquest.com/](https://ucd.idm.oclc.org/login?url=https://www.proquest.com/working-papers/kg-agent-efficient-autonomous-framework-complex/docview/2928715176/se-2)
[working- papers/kg- agent- efficient- autonomous- framework- complex/docview/](https://ucd.idm.oclc.org/login?url=https://www.proquest.com/working-papers/kg-agent-efficient-autonomous-framework-complex/docview/2928715176/se-2)
[2928715176/se-2](https://ucd.idm.oclc.org/login?url=https://www.proquest.com/working-papers/kg-agent-efficient-autonomous-framework-complex/docview/2928715176/se-2)
Jin, M., Yu, Q., Shu, D., Zhao, H., Hua, W., Meng, Y., Zhang, Y., & Du, M. (2024). The
impact of reasoning step length on large language models.
Lampinen, A. K., Dasgupta, I., Chan, S. C. Y., Matthewson, K., Tessler, M. H., Creswell,
A., McClelland, J. L., Wang, J. X., & Hill, F. (2022). Can language models learn
from explanations in context? [Copyright - © 2022. This work is published under
http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance
[with the terms of the License; Last updated - 2022-10-12]. https://ucd.idm.oclc.](https://ucd.idm.oclc.org/login?url=https://www.proquest.com/working-papers/can-language-models-learn-explanations-context/docview/2647479013/se-2)
[org/login?url=https://www.proquest.com/working-papers/can-language-models-](https://ucd.idm.oclc.org/login?url=https://www.proquest.com/working-papers/can-language-models-learn-explanations-context/docview/2647479013/se-2)
[learn-explanations-context/docview/2647479013/se-2](https://ucd.idm.oclc.org/login?url=https://www.proquest.com/working-papers/can-language-models-learn-explanations-context/docview/2647479013/se-2)
Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis,
M., Yih, W.-t., Rocktäschel, T., et al. (2020). Retrieval-augmented generation for
knowledge-intensive nlp tasks. Advances in Neural Information Processing Sys_tems, 33, 9459–9474._
Liu, L., Yang, X., Shen, Y., Hu, B., Zhang, Z., Gu, J., & Zhang, G. (2023). Think-inmemory: Recalling and post-thinking enable llms with long-term memory.
Luo, L., Li, Y.-F., Haffari, G., & Pan, S. (2024). Reasoning on graphs: Faithful and
interpretable large language model reasoning.
Luo, L., Yuan-Fang, L., Haffari, G., & Pan, S. (2024). Reasoning on graphs: Faithful and interpretable large language model reasoning [Copyright - © 2024. This
work is published under http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (the
-----
“License”). Notwithstanding the ProQuest Terms and Conditions, you may use
this content in accordance with the terms of the License; Last updated - 2024-02[28]. https://ucd.idm.oclc.org/login?url=https://www.proquest.com/working-](https://ucd.idm.oclc.org/login?url=https://www.proquest.com/working-papers/reasoning-on-graphs-faithful-interpretable-large/docview/2871971639/se-2)
[papers/reasoning-on-graphs-faithful-interpretable-large/docview/2871971639/se-](https://ucd.idm.oclc.org/login?url=https://www.proquest.com/working-papers/reasoning-on-graphs-faithful-interpretable-large/docview/2871971639/se-2)
[2](https://ucd.idm.oclc.org/login?url=https://www.proquest.com/working-papers/reasoning-on-graphs-faithful-interpretable-large/docview/2871971639/se-2)
Lyu, Q., Havaldar, S., Stein, A., Zhang, L., Rao, D., Wong, E., Apidianaki, M., & CallisonBurch, C. (2023). Faithful chain-of-thought reasoning.
Mittal, S., Joshi, A., & Finin, T. (2017). Thinking, fast and slow: Combining vector spaces
and knowledge graphs.
Nat Friedman. (2021). Introducing GitHub Copilot: your AI pair programmer [Accessed:
April 23, 2024].
OpenAI, Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F. L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., Avila, R., Babuschkin, I.,
Balaji, S., Balcom, V., Baltescu, P., Bao, H., Bavarian, M., Belgum, J., . . . Zoph,
[B. (2023). Gpt-4 technical report. https://doi.org/10.48550/ARXIV.2303.08774](https://doi.org/10.48550/ARXIV.2303.08774)
Pinecone Systems Inc. (2023). Canopy [[Software framework]].
Pradhan, R. (2023). Addressing ai hallucinations with retrieval-augmented generation.
_[InfoWorld.com. https://ucd.idm.oclc.org/login?url=https://www.proquest.com/](https://ucd.idm.oclc.org/login?url=https://www.proquest.com/trade-journals/addressing-ai-hallucinations-with-retrieval/docview/2880265392/se-2)_
[trade-journals/addressing-ai-hallucinations-with-retrieval/docview/2880265392/](https://ucd.idm.oclc.org/login?url=https://www.proquest.com/trade-journals/addressing-ai-hallucinations-with-retrieval/docview/2880265392/se-2)
[se-2](https://ucd.idm.oclc.org/login?url=https://www.proquest.com/trade-journals/addressing-ai-hallucinations-with-retrieval/docview/2880265392/se-2)
Qdrant 3 [[Software]]. (2023).
Sacolick, I. (n.d.). Vector databases in llms and search. InfoWorld.com.
Su, X., Le, T., Bethard, S., & Howard, P. (2023). Semi-structured chain-of-thought: Integrating multiple sources of knowledge for improved language model reasoning [Copyright - © 2023. This work is published under http://creativecommons.org/licenses/by/4.0/
(the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use
this content in accordance with the terms of the License; Last updated - 2023-11[17]. https://ucd.idm.oclc.org/login?url=https://www.proquest.com/working-](https://ucd.idm.oclc.org/login?url=https://www.proquest.com/working-papers/semi-structured-chain-thought-integrating/docview/2890528215/se-2)
[papers/semi-structured-chain-thought-integrating/docview/2890528215/se-2](https://ucd.idm.oclc.org/login?url=https://www.proquest.com/working-papers/semi-structured-chain-thought-integrating/docview/2890528215/se-2)
Wei, J., Wang, X., Schuurmans, D., Bosma, M., ichter brian, b., Xia, F., Chi, E., Le,
Q. V., & Zhou, D. (2022). Chain-of-thought prompting elicits reasoning in large
language models. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho
& A. Oh (Eds.), Advances in neural information processing systems (pp. 24824–
[24837, Vol. 35). Curran Associates, Inc. https://proceedings.neurips.cc/paper_](https://proceedings.neurips.cc/paper_files/paper/2022/file/9d5609613524ecf4f15af0f7b31abca4-Paper-Conference.pdf)
[files/paper/2022/file/9d5609613524ecf4f15af0f7b31abca4-Paper-Conference.pdf](https://proceedings.neurips.cc/paper_files/paper/2022/file/9d5609613524ecf4f15af0f7b31abca4-Paper-Conference.pdf)
Wen, Y., Wang, Z., & Sun, J. (2024). Mindmap: Knowledge graph prompting sparks graph
of thoughts in large language models.
Xu, R., Xing, L., Shao, S., Liu, B., Zhang, K., & Liu, W. (2022). Co-learning for few-shot
learning. Neural Processing Letters, 54(4), 3339–3356.
-----
Yasunaga, M., Ren, H., Bosselut, A., Liang, P., & Leskovec, J. (2022). Qa-gnn: Reasoning
[with language models and knowledge graphs for question answering. https://ucd.](https://ucd.idm.oclc.org/login?url=https://www.proquest.com/working-papers/qa-gnn-reasoning-with-language-models-knowledge/docview/2512590465/se-2)
[idm.oclc.org/login?url=https://www.proquest.com/working-papers/qa-gnn-](https://ucd.idm.oclc.org/login?url=https://www.proquest.com/working-papers/qa-gnn-reasoning-with-language-models-knowledge/docview/2512590465/se-2)
[reasoning-with-language-models-knowledge/docview/2512590465/se-2](https://ucd.idm.oclc.org/login?url=https://www.proquest.com/working-papers/qa-gnn-reasoning-with-language-models-knowledge/docview/2512590465/se-2)
Zelikman, E., Huang, Q., Poesia, G., Goodman, N. D., & Haber, N. (2023). Parsel: Algorithmic reasoning with language models by composing decompositions.
Zhang, X., Bosselut, A., Yasunaga, M., Ren, H., Liang, P., Manning, C. D., & Leskovec,
J. (2022). Greaselm: Graph reasoning enhanced language models for question answering.
Zhang, Y., Mao, S., Ge, T., Wang, X., Xia, Y., Lan, M., & Wei, F. (2024). K-level reasoning
with large language models.
Zhang, Y., Yang, J., Yuan, Y., & Yao, A. C.-C. (2023). Cumulative reasoning with large
language models.
-----
| [
"Wenbei, Xie",
"Donglin, Liu",
"Haoran, Yan",
"Wenjie, Wu",
"Zongyang, Liu"
] | 2024-08-03T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2408.01779 | https://arxiv.org/abs/2408.01779 | https://www.semanticscholar.org/paper/80c87ee5c95ee5ea44f9041c2a5c21d435978a3b |
MathPile: A Billion-Token-Scale Pretraining Corpus for Math | High-quality, large-scale corpora are the cornerstone of building foundation models. In this work, we introduce MathPile, a diverse and high-quality math-centric corpus comprising about 9.5 billion tokens. Throughout its creation, we adhered to the principle of “less is more”, firmly believing in the supremacy of data quality over quantity, even in the pre-training phase. Our meticulous data collection and processing efforts included a complex suite of preprocessing, prefiltering, language identification, cleaning, filtering, and deduplication, ensuring the high quality of our corpus. Furthermore, we performed data contamination detection on downstream benchmark test sets to eliminate duplicates and conducted continual pre-training experiments, booting the performance on common mathematical reasoning benchmarks. We aim for our MathPile to boost language models’ mathematical reasoning and plan to open-source its different versions and processing scripts to advance the field. | null | null | null | null | NeurIPS 2024 | true | 0 | 0 | null | https://neurips.cc/virtual/2024/poster/97685 | null | null |
MathPrompter: Mathematical Reasoning using Large Language Models | Large Language Models (LLMs) have limited performance when solving arithmetic reasoning tasks and often provide incorrect answers. Unlike natural language understanding, math problems typically have a single correct answer, making the task of generating accurate solutions more challenging for LLMs. To the best of our knowledge, we are not aware of any LLMs that indicate their level of confidence in their responses which fuels a trust deficit in these models impeding their adoption. To address this deficiency, we propose ‘MathPrompter’, a technique that improves performance of LLMs on arithmetic problems along with increased reliance in the predictions. MathPrompter uses the Zero-shot chain-of-thought prompting technique to generate multiple algebraic expressions or python functions to solve the same math problem in different ways and thereby raise the confidence level in the output results. This is in contrast to other prompt based CoT methods, where there is no check on the validity of the intermediate steps followed. Our technique improves over state-of-the-art on the ‘MultiArith’ dataset (78.7% - 92.5%) evaluated using 175B parameter GPT-based LLM. | null | ## MATHPROMPTER: MATHEMATICAL REASONING
### USING LARGE LANGUAGE MODELS
**Shima Imani,** **Liang Du,** **Harsh Shrivastava**
Microsoft Research, Redmond
Contact: [email protected]
ABSTRACT
Large Language Models (LLMs) have limited performance when solving arithmetic
reasoning tasks and often provide incorrect answers. Unlike natural language
understanding, math problems typically have a single correct answer, making the
task of generating accurate solutions more challenging for LLMs. To the best of our
knowledge, we are not aware of any LLMs that indicate their level of confidence in
their responses which fuels a trust deficit in these models impeding their adoption.
To address this deficiency, we propose ‘MathPrompter’, a technique that improves
performance of LLMs on arithmetic problems along with increased reliance in
the predictions. MathPrompter uses the Zero-shot chain-of-thought prompting
technique to generate multiple Algebraic expressions or Python functions to solve
the same math problem in different ways and thereby raise the confidence level
in the output results. This is in contrast to other prompt based CoT methods,
where there is no check on the validity of the intermediate steps followed. Our
technique improves over state-of-the-art on the MultiArith dataset (78.7% →
92.5%) evaluated using 175B parameter GPT-based LLM.
1 INTRODUCTION
Recent advancements in natural language processing (NLP) can be attributed to massive scaling of
Large Language Models (LLMs) Vaswani et al. (2017); Devlin et al. (2018); Raffel et al. (2020);
Brown et al. (2020); Rae et al. (2021); Chowdhery et al. (2022); Thoppilan et al. (2022). A very
interesting recent discovery that the LLMs are naturally good (in-context) Zero-shot or few-shot
learners turned out to be very useful Brown et al. (2020); Liu et al. (2021; 2023). This led to the
development of ‘prompting’ technique, where the user provides a small context for solving the task athand to the LLM. This conditioning of the models on a few examples is termed as few-shot prompting,
while providing instructions to solve a task is known as Zero-shot prompting. Extensive research
efforts are being poured into designing these prompts, either manually Schick & Schütze (2020);
Reynolds & McDonell (2021) or automatically Shin et al. (2020); Gao et al. (2020). Although quite
successful for single-step system-I tasks Stanovich & West (2000); Liu et al. (2023), the prompting
techniques were inadequate in their performance on system-II tasks where multi-step reasoning is
required Rae et al. (2021). As humans, we tend to break down a problem and attempt to solve them
step-by-step. Extending this intuition to LLMs led to the development of ‘chain-of-thought’ (CoT)
prompting technique Wei et al. (2022); Wang et al. (2022). The use of CoT has led to improved
performance on a range of NLP tasks Talmor et al. (2018); Gao et al. (2020); Patel et al. (2021);
Cobbe et al. (2021); Geva et al. (2021); Chowdhery et al. (2022); Srivastava et al. (2022)
In this work, we investigate Zero-shot-CoT methods for solving mathematical reasoning tasks. To
the best of our knowledge, we found the recent work by Kojima et al. (2022) that proposed a
Zero-shot-CoT technique to be the state-of-the-art where they demonstrated a remarkable accuracy
improvement on the ‘MultiArith’ Roy & Roth (2016) data (17.7% → 78.7%). Now, we identify
two key aspects that lacks in the previous CoT prompting based SOTA, namely (1) Although, the
chain-of-thought followed by the model improved the results, but there is no check on the validity
**of the steps followed by the chain-of-thought prompting and (2) The confidence in the predictions**
of LLMs are often not provided. In order to address these gap to some extent, we derive inspiration
from how we humans solve a math question by breaking it down to a simpler multi-step procedure
and make use of multiple ways to validate our approach at each step. Specifically, given a question Q,
-----
Input Query
(I) Generating
Algebraic template
**Algebraic prompt**
Write a mathematical equation and
generate the answer format starting
with `Answer ='
(II) Math-Prompts LLM
**Python prompt**
Write a python function that returns
the answer.
(III) Compute
Verification
"Eval()"
(IV) Statistical
significance
Figure 1: MathPrompter flow. We outline the MathPrompter process with an example alongside.
(I) Generating Algebraic template: We first generate its corresponding Algebraic expression Qt that
replaces the numerical entries by variables. (II) Math-prompts: Then, we provide multiple prompts P
to the LLM that can solve Qt analytically in different ways. For eg. P can be ‘Derive an Algebraic
expression’ or ‘Write a Python function’ etc. Following this procedure, we end up with P expressions
that analytically solves Qt in terms of its variables. (III) Compute verification: We then evaluate
the P analytical solutions by allotting multiple random values to the Qt variables. (IV) Statistical
_significance: If the solutions of the P analytical functions are in ‘consensus’ over N ∼_ 5 different
variable choices, then we substitute the original values from Q to obtain the final solution. In the case
where there is no definite consensus, we repeat the steps (II), (III) & (IV). Our method, MathPrompter,
uses 175B parameter LLM called GPT3 DaVinci completion engine Brown et al. (2020). We were
able to improve the accuracy on the MultiArith data from 78.7% → 92.5%.
2 METHOD
Since the LLMs are generative models, it becomes very tricky to ensure that the generated answers
are accurate, especially for mathematical reasoning tasks. We take clues from the process followed
by students to solve arithmetic problems. We narrowed down a few steps that students take in order
to verify their solutions, namely
- Compliance with known results: By comparing the solution to a known result, one can assess its
accuracy and make necessary adjustments. This is particularly useful when the question is a standard
problem with a well-established solution.
- Multi-verification: By approaching a problem from multiple perspectives and comparing the results
helps to confirm the validity of the solution and ensure that it is both sound and accurate.
- Cross-checking: The process of solving a problem is just as necessary as the final answer. Verifying
the correctness of the intermediate steps of the process provide a clear understanding of the thought
process behind the solution.
- Compute verification: Utilizing a calculator or computer to perform arithmetic calculations can
assist in verifying the accuracy of the final answer.
-----
2.1 MATHPROMPTER
Our proposed method, MathPrompter, is an attempt to transfer some of this thought process to
the LLM answer generation process. Fig. 1 provides a high-level overview of steps followed by
MathPrompter to solve a mathematical reasoning problem. We use the state-of-the-art GPT-3 DaVinci
completion engine Brown et al. (2020) for the question-answering tasks.
We use the following question ‘Q’ from the MultiArith dataset to demonstrate the problem solving
process followed by MathPrompter.
Q: At a restaurant, each adult meal costs $5 and kids eat free. If a group of 15
people came in and 8 were kids, how much would it cost for the group to eat?
(I) Generating Algebraic template: We begin by transforming the question into its Algebraic form by
replacing the numeric entries with variables using a key-value mapping. In this particular instance,
the modified question ‘Qt’ becomes:
Qt: at a restaurant, each adult meal costs A and kids eat free. if a group of B people
came in and C were kids, how much would it cost for the group to eat?
Mapping: {A:5, B:15, C:8}
(II) Math-prompts: We build up on the intuition provided by the multi-verification and cross-checking
thought processes mentioned above. We generate analytical solutions of Qt using two different
approaches, Algebraic way and Pythonic way. We give the following prompts to the LLM to generate
additional context for Qt
Algebraic prompt: Write a mathematical equation and generate the answer format
starting with ‘Answer =’
Python prompt: Write a Python function that returns the answer.
The LLM model in response to the above prompts generated the following output expressions
# Algebraic expression output
Answer = A*(B-C)
# Python expression output
def total_price(A, B, C):
return A * (B-C)
The above generated analytical solutions gives the user a hint into the ‘intermediate thought process’
of the LLM. Incorporating additional prompts will improve the accuracy and consistency of the
results. This will, in turn, enhance the MathPrompter’s ability to generate more precise and effective
solutions.
(III) Compute verification: We evaluate the expressions generated in the previous step using multiple
randomized key-value mappings of the input variables in Qt. To evaluate the expressions, we used
the Python’s eval() method. We compare the outputs to see if we can find a consensus among the
answers. This also provides us with a higher level of confidence that the answers are correct and
reliable. Once the expressions agree on their outputs, we use the values of the variables in the input
Q to compute the final answer, as below
Algebraic-answer = 35
Pythonic-answer = 35
(IV) Statistical significance: In order to ensure that consensus is reached among various expressions’
output, in our experiments, we repeat the steps (II) & (III) for N ∼ 5 times and report the most
frequent value observed for the answer.
-----
**Model** **Accuracy**
Zero-shot 17.7
Zero-shot (PaLM 540B) 25.5
Zero-shot-CoT 78.7
Zero-shot-CoT (PaLM 540B) 66.1
Zero-shot-CoT + self consistency (PaLM 540B) 89.0
Zero-shot-CoT (MathPrompter) **92.5**
Few-Shot (2 samples) 33.7
Few-Shot (8 samples) 33.8
Few-Shot-CoT (2 samples) 84.8
Few-Shot-CoT (4 samples) 90.5
Few-Shot-CoT (8 samples) 93.0
Zero-Plus-Few-Shot-CoT (8 samples) 92.8
Table 1: Accuracy on MultiArith dataset. MathPrompter outperforms all the Zero-shot & Zero-shot-CoT
baselines. We emphasize that our model’s performance is comparable to 540B parameter models as well as
the SOTA Few-shot-CoT approaches. (If not mentioned explicitly, the models in each row consists of 175B
parameters. Results are borrowed from Kojima et al. (2022). They used Textdavinci-002 (175B) model along
with the same 8 examples as described in Wei et al. (2022) for Few-shot and Few-shot-CoT settings.)
3 EXPERIMENT
3.1 DATASET
We evaluate MathPrompter on MultiArith dataset Roy & Roth (2016), which is a subset of the
Math World Problem Repository Koncel-Kedziorski et al. (2016). This dataset is a collection of
mathematical problems that are specifically designed to test the ability of machine learning models to
perform complex arithmetic operations and reasoning. These problems demand the application of
multiple arithmetic operations and logical reasoning to be sucessfully solved.
3.2 BASELINE
One of the popular baselines is the standard Zero-shot model by Brown et al. (2020). Their train their
models in a way that it is able to recognize and classify new objects or classes that it has never seen
before during training. This was achieved by utilizing the semantic relationships between classes.
We also compared against the state-of-the-art Zero-shot-CoT prompting model by Kojima et al.
(2022). This is a very recent approach that addresses the limitations of the standard Zero-shot learning
by incorporating a ‘context of the task’ using CoT to improve the performance. Briefly, their method
follows this procedure. Given a question Q, the authors use the prompt ‘Lets think step-by-step’
followed by Q to generate a response Z. Then, they use the prompt ‘The answer (Arabic numericals)
is’ followed by Z to get their final result.
3.3 RESULTS
3.3.1 ACCURACY COMPARISONS
Table 1 compares the performance of the MathPrompter against the baseline models. The results
of few-shot & zero-shot learning based approaches are shown. Furthermore, we add the results for
models with different number of parameters to get better highlight the significance of our approach.
Since, MathPrompter is a Zero-shot-CoT (175B parameters) method, we choose the state-of-the-art
Zero-shot-CoT (175B parameters) model by Kojima et al. (2022) and a Zero-shot(175B parameters)
by Brown et al. (2020) for fair comparison. We report an accuracy of 92.5% which is a huge
improvement to the other SOTA models with 78.7% and 17.7% accuracy, respectively.
-----
|Question|SOTA Zero-shot-CoT|MathPrompter|
|---|---|---|
|Q: At the fair Adam bought 13 tickets. After riding the ferris wheel he had 4 tick- ets left. If each ticket cost 9 dollars, how much money did Adam spend riding the ferris wheel?|A (ONE UNNECESSARY STEP ERROR): Let’s think step by step. Adam bought 13 tickets. That means he spent 13 * 9 = 117 dol- lars on tickets. After riding the ferris wheel, he had 4 tickets left. That means he used 9 tickets to ride the ferris wheel. That means he spent 9 * 9 = 81 dollars on riding the ferris wheel. So the answer is 117 - 81 = 36 dollars. Therefore, the answer (arabic numerals) is 36. |Mapping = {A:13, B:4, C:9} # Algabraic answer Answer = A*(B-C) #Python code def total_price(A, B, C): return A * (B-C) Answer = 81 |
|Q: Each chocolate bar in a box cost $3. If a box had 9 bars total and Wendy sold all but 3 bars, how much money would she have made?|A (OTHERS; MAXIMUM LIMIT OF TEXT LENGTH): Let’s think step by step. Each chocolate bar in a box costs $3. If a box had 9 bars total, then Wendy sold all but 3 bars. How much money would she have made? We can solve this problem using alge- bra. Let’s start by creating a variable to rep- resent the number of chocolate bars Wendy sold. We’ll call this variable "x". If Wendy sold "x" chocolate bars, then she would have 9 - x chocolate bars left in the box. We know that each chocolate bar costs $3, so the total cost of the chocolate bars Wendy sold There- fore, the answer (arabic numerals) is: Wendy would have made $12. |Mapping = {A:3, B:9, C:3} # Algabraic answer Answer = A x (B - C) #Python code def money_made(A, B, C): return (B-C)*A Answer = 18 |
|Q: Jerry was helping the cafeteria workers pick up lunch trays, but he could only carry 8 trays at a time. If he had to pick up 9 trays from one table and 7 trays from another, how many trips will he make?|A (COMMONSENSE MISTAKE): Let’s think step by step. Jerry can carry 8 trays at a time. There are 9 trays on the first table. That means Jerry will need to make 2 trips to the first table. There are 7 trays on the second table. That means Jerry will need to make 1 trip to the second table. In total, Jerry will make 3 trips. Therefore, the answer (arabic numerals) is 3. |Mapping = {A:8, B:9, C:7} # Algabraic answer Answer = (B+C)/A #Python code def numTrips(A, B, C): trips = (B + C) // A if (B + C) % A != 0: trips += 1 return trips Answer = 2 |
|Q: Kaleb bought 14 boxes of chocolate candy and gave 5 to his little brother. If each box has 6 pieces inside it, how many pieces did Kaleb still have?|A: Let’s think step by step. Kaleb bought 14 boxes of chocolate candy. This means that he had 84 pieces of chocolate candy. He gave 5 boxes to his little brother, which means that he gave his little brother 30 pieces of chocolate candy. Kaleb still had 54 pieces of chocolate candy. Therefore, the answer (arabic numerals) is 54 .|Mapping = {A:14, B:5, C:6} # Algabraic answer Answer = A*C - B #Python code def candy_pieces(A, B, C): return A*C - B Answer = 79 (COMMONSENSE MISTAKE)|
**Question** **SOTA Zero-shot-CoT** **MathPrompter**
Q: At the fair Adam bought A (ONE UNNECESSARY STEP ERROR): Mapping = {A:13, B:4, C:9}
13 tickets. After riding the Let’s think step by step. Adam bought 13
# Algabraic answer
ferris wheel he had 4 tick- tickets. That means he spent 13 * 9 = 117 dol
Answer = A*(B-C)
ets left. If each ticket cost lars on tickets. After riding the ferris wheel,
#Python code
9 dollars, how much money he had 4 tickets left. That means he used 9
def total_price(A, B, C):
did Adam spend riding the tickets to ride the ferris wheel. That means
return A * (B-C)
ferris wheel? he spent 9 * 9 = 81 dollars on riding the ferris
wheel. So the answer is 117 - 81 = 36 dollars. Answer = 81
Therefore, the answer (arabic numerals) is
36.
Q: Each chocolate bar in A (OTHERS; MAXIMUM LIMIT OF TEXT Mapping = {A:3, B:9, C:3}
a box cost $3. If a box LENGTH): Let’s think step by step. Each # Algabraic answer
had 9 bars total and Wendy chocolate bar in a box costs $3. If a box Answer = A x (B - C)
sold all but 3 bars, how had 9 bars total, then Wendy sold all but #Python code
much money would she 3 bars. How much money would she have def money_made(A, B, C):
have made? made? We can solve this problem using alge- return (B-C)*A
bra. Let’s start by creating a variable to represent the number of chocolate bars Wendy Answer = 18
sold. We’ll call this variable "x". If Wendy
sold "x" chocolate bars, then she would have
9 - x chocolate bars left in the box. We know
that each chocolate bar costs $3, so the total
cost of the chocolate bars Wendy sold Therefore, the answer (arabic numerals) is: Wendy
would have made $12.
Q: Jerry was helping the A (COMMONSENSE MISTAKE): Let’s Mapping = {A:8, B:9, C:7}
cafeteria workers pick up think step by step. Jerry can carry 8 trays
# Algabraic answer
lunch trays, but he could at a time. There are 9 trays on the first table.
Answer = (B+C)/A
only carry 8 trays at a time. That means Jerry will need to make 2 trips to
#Python code
If he had to pick up 9 trays the first table. There are 7 trays on the second
def numTrips(A, B, C):
from one table and 7 trays table. That means Jerry will need to make 1
trips = (B + C) // A
from another, how many trip to the second table. In total, Jerry will
if (B + C) % A != 0:
trips will he make? make 3 trips. Therefore, the answer (arabic
trips += 1
numerals) is 3.
return trips
Answer = 2
Q: Kaleb bought 14 boxes A: Let’s think step by step. Kaleb bought 14 Mapping = {A:14, B:5, C:6}
of chocolate candy and gave boxes of chocolate candy. This means that he
# Algabraic answer
5 to his little brother. If each had 84 pieces of chocolate candy. He gave
Answer = A*C - B
box has 6 pieces inside it, 5 boxes to his little brother, which means
#Python code
how many pieces did Kaleb that he gave his little brother 30 pieces of
def candy_pieces(A, B, C):
still have? chocolate candy. Kaleb still had 54 pieces
return A*C - B
of chocolate candy. Therefore, the answer
(arabic numerals) is 54 . Answer = 79 (COMMONSENSE
MISTAKE)
Table 2: Example-based comparison between current state of art Zero-shot-CoT and MathPrompter.
3.3.2 EXAMPLE COMPARISONS
Table 2 presents a sample set of questions and their respective outputs, intermediate steps, and final
answers generated by both MathPrompterand the current state-of-the-art model Kojima et al. (2022).
For simplicity, only one output of MathPrompter for each question is shown for both the Algebraic
and Pythonic outputs.
The table highlights areas where Kojima et al. (2022) technique falls short, and where these can
be remedied with MathPrompter, which was designed to address these issues. For example, the
generated answers sometimes have one step of error, which can be avoided by running the model
multiple times and reporting the consensus results. Additionally, the reasoning steps in Kojima et al.
(2022) can be excessively lengthy, but the Pythonic or Algebraic methods can address this by typically
requiring fewer tokens. Furthermore, the reasoning steps may be correct, but the final computation is
incorrect. MathPrompter address problem by using the Python’s eval() method function.
-----
In many cases, the MathPrompter generates correct intermediate and final answers. However, there
are a few cases, such as the last question in Table 2, where both the Algebraic and Pythonic outputs
are in agreement, yet erroneous. We plan to address these issues by incorporating additional methods
to further enhance the performance of MathPrompter .
4 CONCLUSIONS & DISCUSSIONS
We introduced MathPrompter, a novel approach that improves LLM performance on mathematical
reasoning problems. It also addresses an important concern of building the user trust to some extent in
the LLM predictions. We translated our intuition on how students solve arithmetic problems to a LLM
model by utilizing the Zero-shot chain-of-thought prompting technique. MathPrompter incorporates
ideas like cross-checking the intermediate steps and solving the same math problem using multiple
approaches in its design. We empirically show that our model is comparable to SOTA Few-shot-CoT
models as well as the larger Zero-shot-CoT models that have 540B parameters. In future, we plan to
further evaluate performance on additional datasets and explore incorporating additional prompts
into MathPrompter.
5 LIMITATION
One of the limitations of our work is that while we are running the MathPrompter multiple times in
different ways to increase the accuracy of our results, this does not always guarantee the correctness
of the output. Both Algebraic and Pythonic expressions have the potential to produce the incorrect
results, even if the prompt outputs match each other. This is the fail case as shown in the last row of
Table 2. Increasing the number of prompts will mitigate this issue. We are currently investigating
techniques that can address this issue in a more principled manner.
REFERENCES
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are
few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam
Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm:
Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve
math word problems. arXiv preprint arXiv:2110.14168, 2021.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep
bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Tianyu Gao, Adam Fisch, and Danqi Chen. Making pre-trained language models better few-shot
learners. arXiv preprint arXiv:2012.15723, 2020.
Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. Did aristotle
use a laptop? a question answering benchmark with implicit reasoning strategies. Transactions of
_the Association for Computational Linguistics, 9:346–361, 2021._
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large
language models are zero-shot reasoners. arXiv preprint arXiv:2205.11916, 2022.
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. Mawps:
A math word problem repository. In Proceedings of the 2016 conference of the north american
_chapter of the association for computational linguistics: human language technologies, pp. 1152–_
1157, 2016.
Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. What
makes good in-context examples for gpt-3? arXiv preprint arXiv:2101.06804, 2021.
-----
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig.
Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language
processing. ACM Computing Surveys, 55(9):1–35, 2023.
Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are nlp models really able to solve simple math
word problems? arXiv preprint arXiv:2103.07191, 2021.
Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John
Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. Scaling language models:
Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446, 2021.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi
Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text
transformer. The Journal of Machine Learning Research, 21(1):5485–5551, 2020.
Laria Reynolds and Kyle McDonell. Prompt programming for large language models: Beyond the
few-shot paradigm. In Extended Abstracts of the 2021 CHI Conference on Human Factors in
_Computing Systems, pp. 1–7, 2021._
Subhro Roy and Dan Roth. Solving general arithmetic word problems. _arXiv preprint_
_arXiv:1608.01413, 2016._
Timo Schick and Hinrich Schütze. It’s not just size that matters: Small language models are also
few-shot learners. arXiv preprint arXiv:2009.07118, 2020.
Taylor Shin, Yasaman Razeghi, Robert L Logan IV, Eric Wallace, and Sameer Singh. Autoprompt:
Eliciting knowledge from language models with automatically generated prompts. arXiv preprint
_arXiv:2010.15980, 2020._
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam
Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. Beyond the
imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint
_arXiv:2206.04615, 2022._
Keith E Stanovich and Richard F West. 24. individual differences in reasoning: Implications for the
rationality debate? Behavioural and Brain Science, 23(5):665–726, 2000.
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. Commonsenseqa: A question
answering challenge targeting commonsense knowledge. arXiv preprint arXiv:1811.00937, 2018.
Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze
Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. Lamda: Language models for dialog
applications. arXiv preprint arXiv:2201.08239, 2022.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz
Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing
_systems, 30, 2017._
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. Self-consistency
improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny
Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint
_arXiv:2201.11903, 2022._
-----
| [
"Shima, Imani",
"Liang, Du",
"Harsh, Shrivastava"
] | 2023-03-03T00:00:00 | ACL 2023 Industry Track | true | 0 | 4 | null | http://arxiv.org/abs/2303.05398 | https://arxiv.org/abs/2303.05398 | https://www.semanticscholar.org/paper/b626560f19f815808a289ef5c24a17c57320da70 |
MathViz-E: A Case-study in Domain-Specialized Tool-Using Agents | There has been significant recent interest in harnessing LLMs to control software systems through multi-step reasoning, planning and tool-usage. While some promising results have been obtained, application to specific domains raises several general issues including the control of specialized domain tools, the lack of existing datasets for training and evaluation, and the non-triviality of automated system evaluation and improvement. In this paper, we present a case-study where we examine these issues in the context of a specific domain. Specifically, we present an automated math visualizer and solver system for mathematical pedagogy. The system orchestrates mathematical solvers and math graphing tools to produce accurate visualizations from simple natural language commands. We describe the creation of specialized data-sets, and also develop an auto-evaluator to easily evaluate the outputs of our system by comparing them to ground-truth expressions. We have open sourced the data-sets and code for the proposed system. | An automated math visualizer and solver system for mathematical pedagogy that orchestrates mathematical solvers and math graphing tools to produce accurate visualizations from simple natural language commands is presented. | ## MathViz-E: A Case-study in Domain-Specialized Tool-Using Agents
**Arya Bulusu[1], Brandon Man[2], Ashish Jagmohan[1], Aditya Vempaty[1],**
**Jennifer Mari-Wyka[1], Deepak Akkil[1]**
1 Emergence AI
2 Massachusetts Institute of Technology
**Abstract**
There has been significant recent interest in harnessing LLMs to control
software systems through multi-step reasoning, planning and tool-usage.
While some promising results have been obtained, application to specific
domains raises several general issues including the control of specialized
domain tools, the lack of existing datasets for training and evaluation,
and the non-triviality of automated system evaluation and improvement.
In this paper, we present a case-study where we examine these issues in
the context of a specific domain. Specifically, we present an automated
math visualizer and solver system for mathematical pedagogy. The system
orchestrates mathematical solvers and math graphing tools to produce accurate visualizations from simple natural language commands. We describe
the creation of specialized data-sets, and also develop an auto-evaluator to
easily evaluate the outputs of our system by comparing them to groundtruth expressions. We have open sourced the data-sets and code for the
proposed system.
**Introduction**
Large Language Models (LLMs) and Large Multimodal Models (LMMs) have had extraordinary recent success in tasks involving natural language generation and code generation.
Spurred by this, there has been significant interest in harnessing LLMs to control software
systems and embodied agents through multi-step reasoning, planning and leveraging tools
and APIs Karpas et al. (2022); Hosseini et al. (2021); Hao et al. (2023); Schick et al. (2023);
Patil et al. (2023). Promising results have been obtained in many tasks, from device and
web-control to game-playing and roboticsWen et al. (2024); Lutz et al. (2024); Wang et al.
(2023a;b); Ahn et al. (2022).
The creation of AI-driven automated systems for specialized domains holds great economic
promise; estimated by a recent study at more than a trillion dollars Company (2024). While
there exist several general multi-agent frameworks that can be built on, such as Autogen
Wu et al. (2023b); Park et al. (2023), the development of LLM-driven agentic workflows in
specialized domains requires overcoming several additional challenges:
- Firstly, such domains come with specialized tools and problems, different from the
general-purpose problems that past work has often focused on.
- Secondly, there is often a paucity of datasets for training or benchmarking. For
example, common math benchmarks like Cobbe et al. (2021), while important, are
of limited value for settings like math pedagogy.
- Thirdly, automated evaluation of such systems is hard, and human evaluation does
not scale. This makes it hard to create continuous improvement loops, which are
essential for robustness.
-----
In this paper, we investigate the above issues in the context of a specialized domain; that
of math pedagogy. Teachers use a variety of in-classroom technological tools in day-today instruction. The variety and complexity of operating these tools imposes a cognitive
and time-overload, that teachers would rather spend with students. Generative AI has
significant potential to simplify the tools available in the classroom, allowing teachers to
spend more time interacting with their students instead of their technology (Company,
2023). The combination of specialized tool-use, and the potential benefits of automation
make classroom pedagogy a well-suited use-case for our exploration.
Reflecting the challenges described above, previous LLM-based math research Mitra et al.
(2024); Yu et al. (2023); Liu et al. (2023); Trinh et al. (2024) has focused on solving math
problems and theorem-proving which, while important, are tangential to in-classroom
teaching. Existing math benchmarks like GSM8K Cobbe et al. (2021) and MATH Hendrycks
et al. (2021) are also of limited value for understanding how LLMs can be applied to the
classroom setting. There are no comprehensive datasets that are purposefully aligned
with educational standards for middle and high school, nor are there datasets for math
visualization pedagogy.
We present an automated math graphing system, that we term MathViz-E, for mathematical
pedagogy. Graphs are an essential tool in the classroom, allowing students to visualize and
interact with mathematical concepts (Donnelly-Hermosillo et al., 2020). Our automated
graphing system takes in voice utterances, converts them to mathematical expressions, and
graphs them with the Desmos graphing calculator (Desmos Studio). This simplifies the
process of creating graphs in the classroom, allowing teachers to more easily incorporate
math visualization techniques into their lessons without disrupting classroom flow.
Our contributions are:
- We present a voice-driven automated graphing system, combining an LLM with
a mathematical solver and a visual graphing calculator, for the domain of inclassroom math pedagogy.
- We design new domain-specific datasets for graphing problems, representative of
the Common Core Math standards (CCSSO & NGA-Center, 2010), focused on a
range of learning objectives that teachers currently use visualization tools to teach.
- Evaluating the accuracy of an automated visual graphing system is non-trivial,
given that the output of the system is a set of math visualizations. We create an
auto-evaluation pipeline to simplify the evaluation of different versions of our
system.
- We present results demonstrating that our proposed system achieves high accuracy
on a wide variety of learning objectives, and show that incorporating multiple tools
significantly out-performs an LLM-only system.
The incorporation of multiple tools, including a solver, provides a foundation of accuracy, as
LLMs alone are incapable of reliably solving several types of math problems (as we will see
in Section 4). This allows the system to produce accurate graphs even for difficult, multi-step
problems requiring complex reasoning. On the other hand, while mathematical solvers
such as Wolfram Alpha (LLC, 2024) can provide accurate answers for many categories of
problems, they are not capable of understanding all types of natural language. An example
of this is anaphora in multi-turn conversations, where a query refers to objects already
graphed in the calculator (e.g. ”Move the function 5 units horizontally”). By using an LLM
to orchestrate across the solver and the visual calculator, we create a robust voice- and
dialog-based system. Thus the combination of an LLM with specialized tools produces an
automated system with strong potential for domain use. We have open-sourced our code
[and datasets at https://github.com/EmergenceAI/MathViz-E.](https://github.com/EmergenceAI/MathViz-E)
**2** **Related Work**
Our work in this paper is related to a large body of recent literature in using LLMs for
tool-usage, multi-step reasoning, and plan execution by agents. There is also related work
-----
in the use of LLMs for mathematical reasoning, and for pedagogy. In this section, we briefly
review this literature.
There has been considerable work in the last couple of years on using LLMs to orchestrate
tools and APIs, motivated by the desire to augment language models’ strong text generation
capabilities with other, more specialized abilities Karpas et al. (2022). Systems like NL2API
Hosseini et al. (2021), ToolkenGPT Hao et al. (2023), Toolformer Schick et al. (2023) and
Gorilla Patil et al. (2023) use language models that learn how to invoke parameterized
APIs. Combining multi-step planning and tool-usage enables the creation of embodied
agents that can plan and act in virtual or real environments. Examples of this include
agents in virtual game environments Baker et al. (2022); Wang et al. (2023a;b) and real-world
robotic agents Ahn et al. (2022); Bousmalis et al. (2023); Wu et al. (2023a); Bhateja et al.
(2023). Beyond single-agent systems, there has also been much interest in multi-agent
systems and frameworks Wu et al. (2023b); Park et al. (2023). In comparison to the above,
our work focuses on a narrow set of tools, but for domain-specific capabilities rather than
general-purpose usage; we investigate math solvers and visual calculators specifically for
mathematical pedagogy.
There has also been significant work in multi-step reasoning and sequential decision making.
Chain-of-thought (CoT) Wei et al. (2023), and its many variants Chu et al. (2023) have shown
gains on several types of reasoning tasks. While initially considered an emergent ability in
large models like PaLM Chowdhery et al. (2022), subsequent work has used techniques like
distillation Fu et al. (2023); Li et al. (2023) to specialize smaller models for specific reasoning
tasks. CoT-style step-by-step reasoning can be further combined with self-critique driven
refinement Madaan et al. (2023); Bai et al. (2022). Another type of approach is to generate
code via LLMs, that can be run via tools like Python interpreters, e.g. Gao et al. (2023). Also
related are approaches like ReAct Yao et al. (2023), Reflexion Shinn et al. (2023) and many
others (e.g. Aksitov et al. (2023)) that employ LLMs for sequential decision making. In the
system we describe in the next section, we use chain-of-thought with a general instructiontuned LLM for multiple purposes, including query reformulation and tool control. While
in this paper we’ve used vanilla CoT and a large model, the use of more specialized CoT
variants and smaller distilled models are intriguing directions for future investigation.
Also related is the literature on training LLMs for mathematical reasoning. A popular
dataset (among many) is the GSM-8K dataset Cobbe et al. (2021) with grade-school math
word problems. Recent work has explored the training of small, parameter-efficient models,
generally through fine-tuning using augmented data; examples include Mitra et al. (2024);
Yu et al. (2023); Liu et al. (2023). Finally the use of LLMs for various pedagogical tasks
includes work on assessment-generation Wang et al. (2022); Elkins et al. (2023); Bulathwela
et al. (2023), learning-content generation Diwan et al. (2023); Rodway & Schepman (2023),
and the use of commercial models like ChatGPT through prompt engineering Adeshola &
Adepoju (2023); Baidoo-Anu & Ansah (2023). In contrast to these, we focus on in-classroom
math pedagogy, wherein a teacher controls math tools through voice and language.
**3** **Methodology**
3.1 Dataset Construction
The Common Core standards (CCSSO & NGA-Center, 2010) are a set of national educational
standards describing what students are expected to know at each grade level, and they have
been widely adopted in the United States. Based on the math Common Core standards,
we identify a set of learning objectives that teachers use visualization tools to teach in the
classroom. We use these categories as the basis of our dataset, creating approximately 10
questions per category to evaluate our system on.
The categories included in our datasets and the style of utterances were refined through
teacher feedback, to reflect language that is commonly used by teachers in math pedagogy
(e.g. ”Graph a unit circle” rather than ”Draw a circle of radius 1”.). This feedback also
informed the type of problems we chose and the learning objectives we targeted in our
datasets. From the identified categories, we create three datasets; the first (referred to as the
-----
utterance-focused dataset) is focused on use cases a teacher might want to have available in
the classroom. The utterances in this dataset are written as commands a teacher might say,
as opposed to written-out problems for a student to solve. The dataset is mainly comprised
of simpler, single-step problems that a teacher might use to demonstrate intermediate steps
in the process of solving a problem. To ensure robust evaluation, we created variants for
utterances in each category.
Table 1: Example row from utterance-focused dataset
Processed Utterance Natural Language Utterance Graph Input
Reflect y = 5x - 4 across the y-axis Reflect y equals five x minus four _y = −5x −_ 4
across the y-axis
Table 2: Example question from multi-turn dataset
Processed Utterance Graph Input
Plot a line that goes through (1, 3) and (4, 8) _y =_ [5]3[x] [+][ 4]3
Plot a parallel line through the origin _y =_ [5]3[x] [,][ y][ =][ 5]3[x] [+][ 4]3
Our second dataset (referred to as the textbook-focused dataset) is focused on multi-step,
complicated problems that require tool use to solve. The main topics in this dataset are
a superset of those in the teacher-focused dataset, but the problems are geared towards
demonstrating the utility of tool use in LLMs. In contrast to the utterance-focused dataset,
we include word problems. The problems in this dataset are based on representative
problems explicitly written for Common Core standards.
Our third dataset, referred to as the multi-turn dataset, is similar to the utterance focused
dataset but includes multiple turns in each question. This requires the system to incorporate
an understanding of the current calculator state into its response.
The datasets include a column for the problems and a column for the graph input associated
with the problem. As the system is meant to be used through natural language commands,
we also included a column with the utterance for the problem (e.g. “Graph y = 5x[2] + 3” vs.
“Graph y equals five x squared plus three”). This column was automatically generated with
GPT-4 (OpenAI, 2023) based on the original problem column and manually checked over
for accuracy. The utterance-focused dataset contains 70 queries across seven categories, the
textbook-focused dataset contains 147 queries across fourteen categories, and the multi-turn
dataset contains 95 utterances across seven categories.
3.2 System
The MathViz-E system consists of four main components: the creation of the solver query,
the generation of a written explanation based on the solver’s output, the generation of the
visual calculator graphing expressions based on the solver’s output, and the validation
and correction of the graphing expressions based on LLM self-critique. We incorporate
multi-turn functionality into the system by including a calculator state in our prompts which
describes the current expressions graphed on the calculator. As a result, the system is able
to understand queries within the context of the expressions that have already been graphed.
In this paper, we use Desmos as visual calculator, Wolfram Alpha as the solver and GPT-4
as the LLM, but the main principles of the system can be broadly applied to other tools and
LLMs. The system is voice-driven; the presented version uses the Web Speech API MDN
Web Docs (2023) for speech recognition, but other ASR pipelines can also be used.
For a given problem, we create the solver query by prompting the LLM with instructions and
a series of examples demonstrating how to write queries for certain math problems. These
examples were chosen by identifying problems the LLM consistently misunderstood. The
LLM is also provided with the spoken-utterance version of the problem and the calculator
-----
Figure 1: Overview of the MathViz-E automated graphing system
state. The calculator state contains the equations that have previously been graphed in the
graphing window, and passing this state allows the system to incorporate this information
into its problem-solving process. Below is a truncated version of the prompt used to create
the Wolfram Alpha query:
Write a Wolfram Alpha query that can be used to solve the problem. The
main purpose of the task is to find the numerical answer to the problem,
not to graph the problem. When writing a query for a word problem, only
include the necessary equation to solve the problem. Ensure that the query
is acceptable by the Wolfram Alpha engine.
For example, if you are asked:
Graph y = 6x[2] + 4 and find the local maxima and minima.
Calculator state: []
You generate:
Find the local maxima and minima of y = 6x[2] + 4
Once the query has been generated, we input it to our solver. Wolfram Alpha provides a
set of pods for each query, with each pod containing a different category of information
related to the query. Wolfram Alpha also provides step-by-step solutions for some problems.
From these results, we extract the solution (generally the second pod, after the “Input
Interpretation” pod) and the step-by-step solution if it is present.
To generate an explanation of the problem, we prompt the LLM with a zero-shot instruction.
Along with the prompt, we provide the natural language utterance version of the problem,
the calculator state, the numerical solution as given by the solver, and the solver’s step-bystep solution, if it is present.
In the cases where Wolfram Alpha provides a step-by-step solution, the LLM only has to
expand upon this solution by providing more detail and explaining the reasoning behind
-----
Figure 2: UI of MathViz-E demonstrated through a multi-turn inverse problem
the steps. When there is no step-by-step solution given, it must write its own explanation
from scratch based on the problem and numerical solution.
In order to generate the Desmos graphing expressions, we prompt the LLM with instructions
and a set of examples. In the prompt, we ask for a chain-of-thought, which helps to generate
more accurate expressions. Chain-of-thought prompting has been shown to improve the
accuracy of LLMs’ reasoning, especially with regards to math problems.(Wei et al., 2022)(Chu
et al., 2023) As with the explanation prompt, we also provide the natural language utterance
version of the problem, the calculator state, the solver’s numerical solution, and the solver’s
step-by-step solution, if there is one. The large number of examples in the prompt helps
guide the LLM towards producing valid Desmos expressions. The provided examples were
created by identifying common points of failure, and writing problems that demonstrate
how to accurately deal with these issues.
In the last self-critique step, we ask the LLM to validate and refine the previously-generated
Desmos expressions by checking for common errors. A majority of these errors arise from
Desmos API deviations from standard latex (e.g. the use of le and ge instead of leq and geq
for inequalities, or using abs(·) instead of | · | for the absolute value function). These checks
also include ensuring that correct graphing variables are used, operations are formatted
correctly, and functions are named properly. This step helps to eliminate basic errors made
by the LLM in the expression-generating step.
**4** **Experiments**
4.1 Autoevaluation
Traditional evaluation metrics for text similarity fail for comparing mathematical statements
due to the precise nature of math statements. Consider the statement 5=2+3. Lexical
-----
similarity metrics, such as Jaccard distance, would consider the statement 5=2+4 more
similar than 5=4+1 since the former statements shares more words in common than the latter.
Furthermore, many existing similarity metrics do not recognize common mathematical
symbols as tokens and thus cannot be converted into a numerical representation. Similarly,
directly examining the visual graph output through a multimodal approach is unlikely to
be precise enough for our purposes. Although LLMs may be able to evaluate equivalence
for simple expressions, their judgement becomes inconsistent for more complex expressions.
As a result, evaluating the equations output by the automated graphing system at a large
scale is nontrivial.
Due to these limiting factors, we create a new autoevaluation pipeline that can precisely
compare two mathematical statements. We use the computer algebra system SymPy (Meurer
et al., 2017) to evaluate the mathematical equations output by the LLM in our LLM+Solver
system when responding to given questions. In order to compare two equations, we use
SymPy to isolate a variable and compare the resulting expressions on the other side of the
equality.
Although this approach leads to accurate checking of math statements, an issue is that
SymPy cannot parse certain formats of equations, which the LLM in the LLM+Solver
system may produce. To combat this, we use an LLM as a backup in the autoevaluation
process, where if SymPy cannot parse an equation, it will let the LLM compare the two math
statements and output a result.
We construct a set of ground truth evaluations by running all the questions in our datasets
through the LLM-only system, and manually evaluating if the system’s output matches the
correct answer. This manually-benchmarked dataset allows us to run different versions of
the autoevaluator on the dataset and check how its evaluations compare to our manuallywritten evaluations.
The simpler version of our autoevaluator only uses an LLM to compare two equations. Using
GPT-4 as the LLM, we compare the results of LLM-only and LLM+SymPy autoevaluators
on the entirety of our utterance-focused and textbook-focused datasets. An older version of
the textbook-focused dataset was used, with the same categories and question styles as the
current version. In the table below, we display the dataset-wide results as well as the results
for selected categories.
Table 3: Accuracy of LLM-only autoevaluator and LLM+SymPy autoevaluator as compared
to manual evaluations
UtteranceFocused
Dataset
TextbookFocused
Dataset
Systems of Linear Graph
Equations Inverse
Functions
Graph
Lines
LLM-Only 77% 76% 50% 40% **92%**
LLM+SymPy **86%** **88%** **100%** **80%** 85%
The addition of SymPy to the autoevaluation pipeline increases the accuracy of evaluations significantly on almost all categories, especially in Systems of Linear Equations. In
general, the LLM+SymPy autoevaluator performs better than the LLM-only autoevaluator
on problems that are well-structured but computationally difficult. These problems are
easily interpretable and can be solved by SymPy, as it can easily handle complex algebraic
manipulations. In contrast, an LLM-only approach would struggle to accurately carry out
algebra.
We see a minor drop in performance for a few categories, such as Graph Lines. This decrease
in accuracy is generally due to SymPy and the LLM both misunderstanding the formatting
of the equation. An important point to note is that the overwhelming majority of incorrectly
evaluated answers are the result of a correct input being marked as incorrect. This occurs
because SymPy will only mark expressions equivalent if they are genuinely equivalent,
and GPT-4 rarely marks inequivalent expressions as being equivalent. As a result, when
using the autoevaluation pipeline, we can trust that nearly all the responses marked as
-----
correct are actually correct, and manually check if the responses marked as incorrect are in
fact incorrect. As a result, this greatly reduces the burden of evaluation when testing the
performance of various iterations of the system.
4.2 Results
For the results reported in this section, we comprehensively evaluate the LLM+Solver system
by manually validating all of the outputs generated for the three datasets (utterance-focused,
textbook-focused, and multi-turn). We compare the performance of the LLM+Solver system
to the results of the LLM-only system. The LLM-only system consists of directly prompting
an LLM with instructions to write Desmos expressions and examples, while also providing
the natural language utterance problem and the calculator state. No solver solution is
provided. Although the framework of the system can be applied to LLMs and solvers
broadly, in this paper we evaluate using GPT-4 as the LLM and Wolfram Alpha as the solver.
Tables 4, 5 and 6 compare the results for the LLM-only system and the LLM+Solver systems.
Table 4: Accuracy of LLM-only and LLM+Solver models
LLM-only LLM+Solver
Utterance-Focused Dataset 66% **90%**
Textbook-Focused Dataset 64% **86%**
Multi-turn Dataset 86% **91%**
Table 5: Accuracy of individual categories in utterance-focused dataset
LLM-only LLM+Solver
Graph Circles 90% **100%**
Transform Shapes 70% 70%
Intersections of Lines 20% **100%**
Transform Functions **100%** 90%
Graph Lines 90% **100%**
Local Minima and Maxima 40% **90%**
X Intercepts, Y Intercepts 50% **80%**
**LLM+Solver vs. LLM-only system Table 4 shows that the addition of the solver results in**
a significant overall performance increase across all three datasets. Tables 5 and 6 further
show that the greatest performance increase occurs in categories such as Local Minima and
Maxima, X and Y intercepts, Intersections of Lines, Systems of Equations, and Tangents
to Circles. These problems require complex calculations which are difficult for GPT-4 to
carry out by itself, meaning that GPT-4 will frequently get them wrong. However, Wolfram
Alpha can solve these problems, given a well-formulated query. As a result, the LLM+Solver
system shows strong improvement over the LLM-only system for these categories.
**Error Modes for LLM+Solver system The proposed system demonstrates excellent accuracy**
across a wide variety of problem types, as demonstrated in Table 5 and Table 6. However,
there are some notable categories on which it has poor accuracy. It is instructive to delve
deeper into these error modes.
- For the category ”Tangents to Parabolas”, the problem arises from the Wolfram
ALpha solver API, which does not return the correct answer in its Step-by-Step
Solution response. This is a specific instance of a broader problem, where it is not
always clear which of the Wolfram Alpha API response fields should be input to the
LLM for further reasoning; fixing this requires conditioning response interpretation
on the problem type (e.g. ”Tangents to Parabola” and ”Tangents to Circles” produce
different types of API responses).
- For ”Rigid Function Transformations”, errors arise from improper handling of
polygon transformations. While the solution is very robust to transformations of
-----
Table 6: Accuracy of individual categories in textbook-focused and multi-turn datasets
LLM-only LLM+Solver
Proportional Relationships 100% 100%
Linear Inequality Systems **100%** 90%
Graph Inequalities 90% **100%**
Graph Lines 100% 100%
Graph Lines (multi-turn) 91% **100%**
Graph Polynomials + Identify Zeros 67% **100%**
Graph Polynomials (multi-turn) 67% **100%**
Systems of Linear + Quadratic Equations 15% **100%**
Systems of Lin + Quad Eqns (multi-turn) 60% **100%**
Graph Circles 100% 100%
Graph Circles (multi-turn) **96%** 83%
Transformations of Functions 100% 100%
Transform Functions (multi-turn) 86% **90%**
Graph Inverse Functions 70% **100%**
Graph Inverse Functions (multi-turn) 90% **100%**
Tangents to Parabolas 0% **11%**
Tangents to Circles 0% **75%**
Linear and Nonlinear Functions 60% **70%**
Linear and Nonlinear Functions (multi) **100%** 70%
Systems of Linear Equations 40% **100%**
Rigid Transformations + Dilations **70%** 60%
parametric shapes, it struggles with non-parametric shape transformations. Improving this is an area of future work.
- For other categories, some common (though occasional) error modes include: incomplete specifications (like ”Draw a circle” without any radius or center specified)
sometimes result in the system producing parametric expressions without associated sliders; a single query containing a list of sub-queries (”Plot the x-intercept,
the y-intercept, the local minima and maxima, and the asymptotes”) occasionally
result in some sub-queries being missed; and tasks of sufficient complexity (”Move
the circle so it is tangent to a line which satisfies property x and property y”) may
be computed incorrectly. The last error mode is not generally an issue for the pedagogical levels that the system is designed for, but could be an issue for higher-ed
(compared to K-12).
**Analysis of the LLM-only system The LLM-only system struggles the most in problems**
that require complicated reasoning and calculations to solve, such as ”Tangents to Circles”,
”Systems of Linear + Quadratic Equations”, ”Intercepts” and others. It tends to fail either
by solving the problem with an incorrect method or executing calculations incorrectly. It
also sometimes plots the underlying functions or shapes, but then does not successfully
compute points of interesections, extrema etc.
For many categories, the performance of the LLM-only system was strong to begin with,
such as Circles, Proportional Relationships, and Graph Lines. These categories contain
simple problems with little mathematical calculation, so GPT-4 is able to succeed at these
problems without the help of a solver. There are some categories for which there are no
Wolfram Alpha queries that can be used to solve the problem, such as Transform Shapes
and Transformations on Functions. These categories do not show much change in the
performance of both systems, as Wolfram Alpha cannot be used to provide answers for
these categories. Incorporation of a more powerful tool, such as Python, could allow the
system to successfully solve these problems.
-----
**5** **Discussion**
The results presented in this paper highlight the potential of domain-specific automation
created via LLM orchestration of specialized tools. The presented case-study highlights
some of the main challenges that need to be overcome in such systems. The paucity of
preexisting datasets requires careful creation of new datasets for benchmarking; there is a
need to identify and control specialized tools; and there is a need for auto-evaluation to
validate and improve system performance.
For the case of mathematical pedagogy, we showed that through the design and development of an LLM-orchestrated system incorporating multiple specialized tools, through the
creation of new benchmark datasets based on Common Core, and through the creation of an
auto-evaluation pipeline, we created an effective automated system and the means to easily
evaluate future iterations. The system has strong performance, and for many categories of
problems it can consistently produce accurate outputs.
For the domain under consideration, there are categories of problems for which the system
is not yet fully reliable. To improve system performance in these categories, there are
several directions that we can take in the future. An especially promising approach is to add
retrieval-based techniques to make adjustments to the system for the individual problem
categories. This individualized approach could improve accuracy for some categories as
compared to our current, one-size-fits-all system. Another key direction is to reduce latency;
we plan to evaluate our system using smaller open-source LLMs, trained via supervised
fine-tuning, instead of a large general model like GPT-4.
More broadly, for domain-specific agentic solutions, the approach presented in this paper
offers a set of patterns that can be generalized. We expect that the generalization and
application of these patterns to other domains and problems will continue to remain fertile
ground for future research.
Acknowledgments
We would like to thank Ravi Kokku, Marc Pickett, Prasenjit Dey, and Paul Haley for helpful
discussions and feedback.
**References**
Ibrahim Adeshola and Adeola Praise Adepoju. The opportunities and challenges of chatgpt
in education. Interactive Learning Environments, pp. 1–14, 2023.
Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David,
Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog,
Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui
Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov,
Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada,
Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre
Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia,
Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan, and Andy Zeng. Do as i can, not as i say:
Grounding language in robotic affordances, 2022.
Renat Aksitov, Sobhan Miryoosefi, Zonglin Li, Daliang Li, Sheila Babayan, Kavya Kopparapu, Zachary Fisher, Ruiqi Guo, Sushant Prakash, Pranesh Srinivasan, Manzil Zaheer,
Felix Yu, and Sanjiv Kumar. Rest meets react: Self-improvement for multi-step reasoning
llm agent, 2023.
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy
Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen,
Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli,
Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua
Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage,
Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam
-----
Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham,
Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman,
Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom
Brown, and Jared Kaplan. Constitutional ai: Harmlessness from ai feedback, 2022.
David Baidoo-Anu and Leticia Owusu Ansah. Education in the era of generative artificial
intelligence (ai): Understanding the potential benefits of chatgpt in promoting teaching
and learning. Journal of AI, 7(1):52–62, 2023.
Bowen Baker, Ilge Akkaya, Peter Zhokhov, Joost Huizinga, Jie Tang, Adrien Ecoffet, Brandon
Houghton, Raul Sampedro, and Jeff Clune. Video pretraining (vpt): Learning to act by
watching unlabeled online videos, 2022.
Chethan Bhateja, Derek Guo, Dibya Ghosh, Anikait Singh, Manan Tomar, Quan Vuong,
Yevgen Chebotar, Sergey Levine, and Aviral Kumar. Robotic offline rl from internet videos
via value-function pre-training, 2023.
Konstantinos Bousmalis, Giulia Vezzani, Dushyant Rao, Coline Devin, Alex X. Lee, Maria
Bauza, Todor Davchev, Yuxiang Zhou, Agrim Gupta, Akhil Raju, Antoine Laurens, Claudio Fantacci, Valentin Dalibard, Martina Zambelli, Murilo Martins, Rugile Pevceviciute,
Michiel Blokzijl, Misha Denil, Nathan Batchelor, Thomas Lampe, Emilio Parisotto, Konrad
Zołna, Scott Reed, Sergio G˙ omez Colmenarejo, Jon Scholz, Abbas Abdolmaleki, Oliver´
Groth, Jean-Baptiste Regli, Oleg Sushkov, Tom Rothorl, Jos¨ e Enrique Chen, Yusuf Ay-´
tar, Dave Barker, Joy Ortiz, Martin Riedmiller, Jost Tobias Springenberg, Raia Hadsell,
Francesco Nori, and Nicolas Heess. Robocat: A self-improving foundation agent for
robotic manipulation, 2023.
Sahan Bulathwela, Hamze Muse, and Emine Yilmaz. Scalable educational question generation with pre-trained language models. In International Conference on Artificial Intelligence
_in Education, pp. 327–339. Springer, 2023._
[CCSSO and NGA-Center. Common core state standards. https://www.thecorestandards.](https://www.thecorestandards.org/Math/)
```
org/Math/, 2010.
```
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra,
Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann,
Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker
Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben
Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari,
Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny
Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov,
Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon
Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta,
Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck,
Jeff Dean, Slav Petrov, and Noah Fiedel. Palm: Scaling language modeling with pathways,
2022.
Zheng Chu, Jingchang Chen, Qianglong Chen, Weijiang Yu, Tao He, Haotian Wang, Weihua Peng, Ming Liu, Bing Qin, and Ting Liu. A survey of chain of thought reasoning:
Advances, frontiers and future, 2023.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and
John Schulman. Training verifiers to solve math word problems, 2021.
McKinsey & Company. What’s the future of generative AI? an early view in
15 charts. `https://www.mckinsey.com/featured-insights/mckinsey-explainers/`
```
whats-the-future-of-generative-ai-an-early-view-in-15-charts, 2023.
```
-----
McKinsey & Company. The economic potential of generative ai: The next productivity frontier. `https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/`
```
the-economic-potential-of-generative-ai-the-next-productivity-frontier,
```
2024.
[PBC Desmos Studio. Desmos graphing calculator. https://www.desmos.com/calculator.](https://www.desmos.com/calculator)
Chaitali Diwan, Srinath Srinivasa, Gandharv Suri, Saksham Agarwal, and Prasad Ram.
Ai-based learning content generation and learning pathway augmentation to increase
learner engagement. Computers and Education: Artificial Intelligence, 4:100110, 2023.
Dermot Francis Donnelly-Hermosillo, Libby F. Gerard, and Marcia C. Linn. Impact of graph
technologies in k-12 science and mathematics education. Computers & Education, 146:
103748, 2020. ISSN 0360-1315. doi: https://doi.org/10.1016/j.compedu.2019.103748.
Sabina Elkins, Ekaterina Kochmar, Iulian Serban, and Jackie CK Cheung. How useful are
educational questions generated by large language models? In International Conference on
_Artificial Intelligence in Education, pp. 536–542. Springer, 2023._
Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and Tushar Khot. Specializing smaller
language models towards multi-step reasoning, 2023.
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan,
and Graham Neubig. Pal: Program-aided language models, 2023.
Shibo Hao, Tianyang Liu, Zhen Wang, and Zhiting Hu. Toolkengpt: Augmenting frozen
language models with massive tools via tool embeddings, 2023.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang,
Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the
[math dataset, 2021. URL https://arxiv.org/abs/2103.03874.](https://arxiv.org/abs/2103.03874)
Saghar Hosseini, Ahmed Hassan Awadallah, and Yu Su. Compositional generalization for
natural language interfaces to web apis, 2021.
Ehud Karpas, Omri Abend, Yonatan Belinkov, Barak Lenz, Opher Lieber, Nir Ratner, Yoav
Shoham, Hofit Bata, Yoav Levine, Kevin Leyton-Brown, Dor Muhlgay, Noam Rozen, Erez
Schwartz, Gal Shachaf, Shai Shalev-Shwartz, Amnon Shashua, and Moshe Tenenholtz.
Mrkl systems: A modular, neuro-symbolic architecture that combines large language
models, external knowledge sources and discrete reasoning, 2022.
Yuanzhi Li, Sebastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, and Yin Tat´
Lee. Textbooks are all you need ii: phi-1.5 technical report, 2023.
Bingbin Liu, Sebastien Bubeck, Ronen Eldan, Janardhan Kulkarni, Yuanzhi Li, Anh Nguyen,
Rachel Ward, and Yi Zhang. Tinygsm: achieving ¿80
[Wolfram Alpha LLC. Wolfram alpha. https://www.wolframalpha.com/, 2024.](https://www.wolframalpha.com/)
Michael Lutz, Arth Bohra, Manvel Saroyan, Artem Harutyunyan, and Giovanni Campagna.
Wilbur: Adaptive in-context learning for robust and accurate web agents. arXiv preprint
_arXiv:2404.05902, 2024._
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and
Peter Clark. Self-refine: Iterative refinement with self-feedback, 2023.
[MDN Web Docs. Mdn web speech api. https://developer.mozilla.org/en-US/docs/](https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API/Using_the_Web_Speech_API)
```
Web/API/Web_Speech_API/Using_the_Web_Speech_API, 2023.
```
-----
Aaron Meurer, Christopher P. Smith, Mateusz Paprocki, Ondrejˇ Cert[ˇ] ´ık, Sergey B. Kirpichev,
Matthew Rocklin, AMiT Kumar, Sergiu Ivanov, Jason K. Moore, Sartaj Singh, Thilina
Rathnayake, Sean Vig, Brian E. Granger, Richard P. Muller, Francesco Bonazzi, Harsh
Gupta, Shivam Vats, Fredrik Johansson, Fabian Pedregosa, Matthew J. Curry, Andy R.
Terrel, St[ˇ] epˇ an Rou´ cka, Ashutosh Saboo, Isuru Fernando, Sumith Kulal, Robert Cimrman,ˇ
and Anthony Scopatz. Sympy: symbolic computing in python. PeerJ Computer Science, 3:
e103, January 2017. ISSN 2376-5992. doi: 10.7717/peerj-cs.103.
Arindam Mitra, Hamed Khanpour, Corby Rosset, and Ahmed Awadallah. Orca-math:
Unlocking the potential of slms in grade school math, 2024.
[OpenAI. Gpt-4 technical report. https://arxiv.org/abs/2303.08774, 2023.](https://arxiv.org/abs/2303.08774)
Joon Sung Park, Joseph C. O’Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and
Michael S. Bernstein. Generative agents: Interactive simulacra of human behavior, 2023.
Shishir G. Patil, Tianjun Zhang, Xin Wang, and Joseph E. Gonzalez. Gorilla: Large language
model connected with massive apis, 2023.
Paul Rodway and Astrid Schepman. The impact of adopting ai educational technologies on
projected course satisfaction in university students. Computers and Education: Artificial
_Intelligence, 5:100150, 2023._
Timo Schick, Jane Dwivedi-Yu, Roberto Dess`ı, Roberta Raileanu, Maria Lomeli, Luke
Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can
teach themselves to use tools, 2023.
Noah Shinn, Federico Cassano, Edward Berman, Ashwin Gopinath, Karthik Narasimhan,
and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning, 2023.
T.H. Trinh, Y. Wu, Q.V. Le, H. He, and T. Luong. Solving olympiad geometry without human
demonstrations. Nature, 36, 2024.
Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi
Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large
language models, 2023a.
Zichao Wang, Jakob Valdez, Debshila Basu Mallick, and Richard G Baraniuk. Towards
human-like educational question generation with large language models. In International
_conference on artificial intelligence in education, pp. 153–166. Springer, 2022._
Zihao Wang, Shaofei Cai, Anji Liu, Yonggang Jin, Jinbing Hou, Bowei Zhang, Haowei
Lin, Zhaofeng He, Zilong Zheng, Yaodong Yang, Xiaojian Ma, and Yitao Liang. Jarvis-1:
Open-world multi-task agents with memory-augmented multimodal language models,
2023b.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed H. Chi, Quoc Le, and
Denny Zhou. Chain of thought prompting elicits reasoning in large language models.
_[CoRR, abs/2201.11903, 2022. URL https://arxiv.org/abs/2201.11903.](https://arxiv.org/abs/2201.11903)_
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi,
Quoc Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models, 2023.
Hao Wen, Yuanchun Li, Guohong Liu, Shanhui Zhao, Tao Yu, Toby Jia-Jun Li, Shiqi Jiang,
Yunhao Liu, Yaqin Zhang, and Yunxin Liu. Autodroid: Llm-powered task automation in
android. In Proceedings of the 30th Annual International Conference on Mobile Computing and
_Networking, pp. 543–557, 2024._
Hongtao Wu, Ya Jing, Chilam Cheang, Guangzeng Chen, Jiafeng Xu, Xinghang Li, Minghuan
Liu, Hang Li, and Tao Kong. Unleashing large-scale video generative pre-training for
visual robot manipulation, 2023a.
-----
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang,
Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White,
Doug Burger, and Chi Wang. Autogen: Enabling next-gen llm applications via multi-agent
conversation, 2023b.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan
Cao. React: Synergizing reasoning and acting in language models, 2023.
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T.
Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own
mathematical questions for large language models, 2023.
-----
| [
"Arya, Bulusu",
"Brandon, Man",
"Ashish, Jagmohan",
"Aditya, Vempaty",
"Jennifer, Mari-Wyka",
"Deepak, Akkil"
] | 2024-07-24T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2407.17544 | https://arxiv.org/abs/2407.17544 | https://www.semanticscholar.org/paper/3e5a8f454b0d5b642281f0a895b5e16aa702d885 |
Mathador-LM: A Dynamic Benchmark for Mathematical Reasoning on Large Language Models | We introduce Mathador-LM, a new benchmark for evaluating the mathematical reasoning on large language models (LLMs), combining ruleset interpretation, planning, and problem-solving. This benchmark is inspired by the Mathador game, where the objective is to reach a target number using basic arithmetic operations on a given set of base numbers, following a simple set of rules. We show that, across leading LLMs, we obtain stable average performance while generating benchmark instances dynamically, following a target difficulty level. Thus, our benchmark alleviates concerns about test-set leakage into training data, an issue that often undermines popular benchmarks. Additionally, we conduct a comprehensive evaluation of both open and closed-source state-of-the-art LLMs on Mathador-LM. Our findings reveal that contemporary models struggle with Mathador-LM, scoring significantly lower than average 3rd graders. This stands in stark contrast to their strong performance on popular mathematical reasoning benchmarks. | It is shown that, across leading LLMs, it is possible to obtain stable average performance while generating benchmark instances dynamically, following a target difficulty level, which alleviates concerns about test-set leakage into training data, an issue that often undermines popular benchmarks. | ## MATHADOR-LM: A DYNAMIC BENCHMARK FOR MATHEMATICAL REASONING ON LARGE LANGUAGE MODELS
**Eldar Kurtic[∗]**
ISTA & Neural Magic, Inc.
```
[email protected]
```
**Amir Moeini[∗]**
ISTA
```
[email protected]
```
**ABSTRACT**
**Dan Alistarh**
ISTA & Neural Magic, Inc.
```
[email protected]
```
We introduce Mathador-LM, a new benchmark for evaluating the mathematical reasoning on large
language models (LLMs), combining ruleset interpretation, planning, and problem-solving. This
benchmark is inspired by the Mathador game, where the objective is to reach a target number using
basic arithmetic operations on a given set of base numbers, following a simple set of rules. We
show that, across leading LLMs, we obtain stable average performance while generating benchmark
instances dynamically, following a target difficulty level. Thus, our benchmark alleviates concerns
about test-set leakage into training data, an issue that often undermines popular benchmarks. Additionally, we conduct a comprehensive evaluation of both open and closed-source state-of-the-art
LLMs on Mathador-LM. Our findings reveal that contemporary models struggle with Mathador-LM,
scoring significantly lower than average 3rd graders. This stands in stark contrast to their strong
performance on popular mathematical reasoning benchmarks. The implementation is available at
```
https://github.com/IST-DASLab/Mathador-LM.
```
**Introduction**
The ability of large language models (LLMs) to approach non-trivial tasks involving both information retrieval and
mathematical reasoning has led to significant research interest in evaluating these properties. Yet, the popularity of
reasoning benchmarks, such as the often-used Grade-School Math (GSM) [1] or MATH [2] datasets, is leading to
performance saturation (see Figure 1), and can potentially lead to training set contamination. Thus, there is a stringent
need to develop new strong benchmarks to evaluate LLM reasoning.
We address this by proposing Mathador-LM, a new benchmark for examining the mathematical reasoning properties of
LLMs. At a high level, Mathador-LM follows the popular Mathador mathematical game [3], in which a human player
is given five base numbers together with a target number, and has to provide a series of calculations, each using one of
the four basic arithmetic operations, which result in the target number.[1] Each base number can only be used once, and
solutions are scored on the number of operations used—a “perfect” solution uses each basic operation and each base
number exactly once.
We define and implement Mathador-LM following the framework for few-shot evaluation of language models [4], and
evaluate leading open and closed LLMs such as LLaMA3 [5], and Qwen2 [6], as well as Claude [7] and GPT3.5/4 [8].
See Figure 4 for a sample of results. Our key observations are:
- Mathador is a hard benchmark for LLMs: state-of-the-art open and closed models score below 15% on average,
relative to the maximum achievable score per instance, and significantly below the mean of 43.7% across
3rd-grade students in 2023 [9].
- We observe clear correlations between model size and game performance, where models below 3B parameters
obtain negligible accuracy, state-of-the-art models in the 7-8B range obtain scores of 5-7%, and 70-72B models
*Equal contribution
1Our game formulation follows the mathematical game organized in France for students between the 4th and 8th grades, to which
more than 10’000 pupils participated in 2023.
-----
Figure 1: Comparative results on Mathador-LM, MMLU, and GSM8k, across the Llama3-Instruct (8B and 70B),
Phi-3-Instruct (small and medium), and Qwen2-Instruct model families. Interpolation lines show very high scores
and clear saturation on MMLU and GSM8k at or beyond the level of specialized humans, whereas on Mathador-LM
contemporary models are significantly below the average 3rd grader. MMLU and GSM8K results obtained from
[10, 11, 6].
reach the top scores of 10-15%, together with Claude-Opus. Remarkably, GPT4 and Claude-Haiku models
both obtain below 7%.
- We also provide detailed breakdowns of performance relative to instance hardness (number of existing
solutions), number of shots (example instances provided), and failure modes.
- Importantly, Mathador-LM has the property that model performance is stable across randomly-generated
_problem instances of the same difficulty, i.e. with the same number of maximum solutions. Thus, we can_
generate one-time dynamic instances of similar difficulty, preventing “over-fitting.”
Our results are especially relevant in the context of recent work [12, 13] raising concerns about contamination
across popular benchmarks used to evaluate the performance of LLMs. Their findings span three different axes: 1)
existing decontamination techniques often fail to identify problematic samples, 2) synthetic data generated by closedsource models (e.g., GPT-3.5/4 [8]) exhibits subtle test-set contamination, and 3) popular open-source datasets (e.g.,
RedPajama [14], StarCoder [15], The Stack [16], FLAN CoT [17]) are also contaminated to varying degrees, ranging
from 0.5% to 19% [12]. This evidence, together with the fact that performance on the few standard benchmarks [1, 2] for
mathematical reasoning is rapidly saturating[2], as described in Figure 1, necessitates enhancing our existing evaluation
protocols and significantly improving the decontamination of existing datasets with static benchmarks.
We propose an alternative pathway towards reliable examination of LLM performance via dynamic, one-time benchmarks
that mitigate contamination by being created on-the-fly, independently for each evaluation run. Mathador-LM satisfies
these properties: given its nature, the benchmark can be programmatically generated and verified, making it ideally
suited for fresh, one-time evaluations of LLMs. This approach mitigates issues such as test-set leakage into training
data and provides a reliable method to evaluate closed-source models, even in the absence of detailed information about
their training data. Moreover, results reveal interesting trends across different model families and sizes, and allowing to
isolate model proficiency across instruction-following, mathematical reasoning, planning, and combinatorial search.
**2** **The Mathador-LM Benchmark**
The informal definition of the Mathador-LM game we use is provided in Figure 2, which coincides with the prompt we
provide to the LLM in the default version of the game. In Table 1 we present the scoring system for the benchmark. An
example instance of the benchmark is provided in Figure 3, together with basic and “optimal” solutions.
be a permutation of a subset of operands and define the set of expressionsFormal Definition. Given a set of operands A = {ai ∈ N|1 ≤ _i ≤_ 5} and target value t ∈ N, let P ∈{S!|S ∈P(A)}
2For instance, the best achieved accuracy on GSM at the time of writing is already of 97.1% [18].EP = n(P _[c], O)|P_ _[c]_ _∈_ _C(P_ ), O ∈{+, ×, −, ÷}[|][P][ |][o]
-----
Figure 3: An example problem demonstrating both simple and best (Mathador)
solutions.
Table 1: Scoring system for Mathador-LM
benchmark. The Mathador Bonus refers
to the optimal solution, achieved by using
all five base numbers and each of the four
operators exactly once.
**Category** **Points**
Target number reached 5 points
**Operators**
Addition 1 point
Multiplication 1 point
Subtraction 2 points
Division 3 points
Mathador Bonus 6 points
**Invalid Solutions**
Target number not reached 0 points
Reuse of numbers 0 points
Negative numbers 0 points
Non-integer numbers 0 points
Figure 2: The prompt for Mathador-LM benchmark.
where C(P ) is the set of all legal parenthesization of P . Consequently the set of all expressions = _P_
expression E ∈E has the value val(E) which is derived by associating the ith opening parenthesis in E _P_ _[c][E]with the[P][ . Each]_
operator Oi. Given the score function s : E → N we are looking for E[∗] = argmaxE∈E s(E) s.t. val(E) =[S] _t._
Each expression E can be represented in an expanded form repr(E) by writing the evaluation of each parenthesis when
both of its nested values have been evaluated. For instance, repr(E) of E = ((17, ((8, 4), 11)), 2), (×, ÷, −, +) is
the Mathador solution illustrated in Figure 3. In Mathador-LM we use repr(E) as the representation since it is more
human-readable and Table 1 for scoring. The accuracy of expression E is defined as s(E)/s(E[∗]).
**Difficulty Measure.** For a specific set of operands, Et = {E ∈E| val(E) = t, s(E) > 0} is the set of all solutions
for target t. We define the difficulty measure of target t as _E_ _Et_ _[s][(][E][)][/][|][E][t][|][2][,][ following the intuition that instances]_
_∈_
with few but higher-scoring solutions are harder.
[P]
**3** **Model Evaluations**
**3.1** **Main Results**
**Evaluation Setup.** A dataset of Mathador-LM problems is generated for each model evaluation by sampling the
operand dataset A based on the official rules [3] and then sampling from possible targets {t|∃E ∈E s.t. val(E) = t}
-----
Figure 4: Detailed results on Mathador-LM across open and closed models, including confidence intervals.
based on the desired difficulty distribution. The prompt in Figure 2 is populated based on a newly generated problem
set to get the final prompt. The model’s generated answer to the prompt is parsed to get the solution block which is then
scored. Models are generally able to follow the instruction format, as shown in Table 4.
**Results and Discussion.** Figure 4 presents evaluations on several popular open and closed models. We observe
that small models (≤ 3B) and Mistral-7B tend to perform below < 2% average accuracy (0.36 points per instance,
on average), meaning that they reach a correct solution (worth ≥ 6 points) less than 6% of the time. Surprisingly,
well-performing medium models such as Qwen2-7B, Llama-3-8B, and Phi-3-medium perform on par with GPT 3.5 and
GPT4, as well as Claude-Haiku (5 to 7%), at a level corresponding to reaching a correct solution less than 20% of the
time. Further, we observe a higher tier for 70B models and Claude-Opus, which reach similar ∼ 12% performance. In
Appendix ?? we expand our analysis, and detail the score distribution across models.
**Stability.** A reliable benchmark must be reproducible, which is why most benchmarks are static. Table 2 shows that
we can obtain consistent scores on Mathador-LM even when we dynamically re-generate the benchmark, by sampling
instances with a similar difficulty mix. The easy, medium, and hard datasets are taken from the beginning, middle, and
end of the sorted list of targets, based on difficulty (see Section 2). The mixed dataset contains equal fractions from
each type.
Table 3: Impact of the number of shots on the evaluation
of Llama-3-70B-Instruct on Mathador-LM.
**# shots** 2 5 10 20
**Accuracy**
13.1 ± 0.6 13.9 ± 0.7 14.25 ± 0.6 14.34 ± 0.9
**(%)**
Table 4: Error types of instruction-following models on
Mathador-LM, in percentages.
Formatting Calculation Missed Illegal
Error Error Target Operand
Qwen2-7B 5.5 20.9 6.8 66.8
Llama-3-8B 0.3 17.3 7.1 75.3
Llama-3-70B 0.9 3.1 32.5 63.5
Table 2: Stability across 5 evaluations of LLama-3-70BInstruct on datasets of varying sizes and difficulties.
Observe that the performance on the standard “mixed”
benchmark is very stable across number of samples.
**# Samples** **Difficulty** **Accuracy (%)**
100 mixed 12.3 ± 1.7
250 mixed 11.8 ± 1.1
500 mixed 11.5 ± 0.5
easy 15.1 ± 0.8
medium 12.1 ± 0.6
hard 4.3 ± 0.2
mixed 11.3 ± 0.5
1000
1500 mixed 12.0 ± 0.5
-----
**3.2** **Ablations**
**Impact of Number of Shots.** We investigate whether increasing the number of “shots” in the few-shot evaluation
setup helps performance on Mathador-LM, as few-shot prompting [19] is known to enhance in-context learning abilities
of LLMs [20]. We report results in Table 3. Surprisingly, for Mathador-LM, we found that two shots are sufficient
to grasp the formatting and evaluation flow. Further increasing of this number only marginally improves results. In
Appendix 3.2 we further explore how the results are affected by different text-generation (decoding) strategies, such as
greedy [21] and nucleus sampling [22].
**Errors Analysis.** In Table 4 we present a breakdown of the errors that LLMs make when evaluated on Mathador-LM
benchmark, categorized into four types: Formatting, Calculation, Missed Target, and Illegal Operand. These results
highlight that the most significant challenges faced by the model are related to the use of illegal operands, which
collectively make up over 60% of the errors. This indicates that existing models still struggle even with moderate
reasoning abilities. (This is in line with the recent findings of [23].) To address the most common error made by LLMs
(Illegal Operand), we augmented our prompting strategy to explicitly show the model the set of allowed operands at
each step of the calculation process. Surprisingly, this did not improve results.
Figure 5: Distribution of scores for several models showing low correlation of higher overall performance with number
of high scoring solutions.
**Score Distribution.** Models are instructed that only their last answer will be scored, and there is no obvious strategy
for reaching a more complicated and higher scoring answer from a lower scoring one, as this is part of the task.
Consequently, it is natural that even similarly performing models may have quite different score distributions as they
may aim to obtain answers with different complexity levels (e.g., one may aim to obtain only highest-scoring answers,
but may fail to obtain one more often than if simply aiming to reach the target). Figure 5 shows the score distribution
for several low and high performing models. For instance, it is interesting to observe that Claude-3-opus outputs several
times more max-scoring solutions than Llama-3-70b-instruct, while the models score about the same on average, based
-----
on Figure 4, or that Phi-3-small focuses on obtaining simple answers correct (just reaching the target, but not focusing
on reaching high scores), which has resulted in a higher overall performance relative to Phi-3-medium, which produces
higher-scoring solutions.
**Text Generation Strategies.** Given that the nature of MathadorLM benchmark is based on generating text to arrive at a solution, we
investigate whether different decoding methods for language generation have any effect on the results. Therefore we consider both, the
simple greedy decoding [21] and the more advanced nucleus sampling [22]. We conduct an extensive search, exploring all possible
combinations of temperature (0.0, 0.3, 0.5, 0.7, 0.9) and Top-p (0.1,
0.3, 0.5, 0.7, 1.0) hyper-parameters. As can be seen from Table 5,
the results are not affected by choices of different text-generation
strategies.
**4** **Discussion and Limitations**
Table 5: Results with Llama-3-70B-Instruct on
Mathador-LM benchmark under different text
decoding techniques, evaluated across three fewshot configurations.
2-shots 5-shots 20-shots
Greedy 12.8 ± 0.5 13.9 ± 0.1 14.2 ± 1.1
Nucleus 13.1 ± 0.6 13.8 ± 0.7 14.2 ± 0.9
We introduced a new challenging LLM mathematical reasoning benchmark. Our benchmark is dynamic, as it can be
generated on-the-fly, mitigating the risks of test-set leakage and overfitting. The current setup can be easily extended to
vary difficulty levels by, for example, adjusting the ranges of base numbers, or the total number of operands.
By design, Mathador-LM is limited to a search-based mathematical task, which has been linked to both conceptual and
procedural skills [3]. Another limitation we plan to investigate in future work is prompting techniques, which might
alleviate the relatively low LLM performance on this task. Additionally, we plan to explore supervised fine-tuning
strategies.
**Acknowledgments**
The authors would like to express their gratitude to TogetherAI for providing the computational resources that made
this research possible.
**References**
[1] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert,
Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv
_preprint arXiv:2110.14168, 2021._
[2] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob
Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874,
2021.
[3] Sébastien Puma, Emmanuel Sander, Matthieu Saumard, Isabelle Barbet, and Aurélien Latouche. Reconsidering
conceptual knowledge: Heterogeneity of its components. Journal of Experimental Child Psychology, 227:105587,
2023.
[4] Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey
Hsu, Kyle McDonell, Niklas Muennighoff, et al. A framework for few-shot language model evaluation. Version
_v0. 0.1. Sept, page 8, 2021._
[5] Meta AI. Llama 3: Advanced Language Models for Open Research. GitHub repository, 2024.
[6] Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei
Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming
Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang,
Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang,
Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru
Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, and Tianhang Zhu. Qwen technical report. arXiv preprint
_arXiv:2309.16609, 2023._
[7] Anthropic. Claude: Conversational Language Understanding AI. Anthropic Website, 2023.
-----
[8] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo
Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint
_arXiv:2303.08774, 2023._
[9] Mathador. Mathador résultats du concours 2023, 2023.
[10] Edward Beeching, Clémentine Fourrier, Nathan Habib, Sheon Han, Nathan Lambert, Nazneen Rajani, Omar
[Sanseviero, Lewis Tunstall, and Thomas Wolf. Open llm leaderboard. https://huggingface.co/spaces/](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
```
open-llm-leaderboard/open_llm_leaderboard, 2023.
```
[11] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt.
Measuring Massive Multitask Language Understanding, January 2021.
[12] Shuo Yang, Wei-Lin Chiang, Lianmin Zheng, Joseph E Gonzalez, and Ion Stoica. Rethinking benchmark and
contamination for language models with rephrased samples. arXiv preprint arXiv:2311.04850, 2023.
[13] Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan
Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, et al. Textbooks are all you need. arXiv preprint
_arXiv:2306.11644, 2023._
[14] Together. Redpajama: An open source recipe to reproduce llama training dataset, 2023.
[15] Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc
Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas
Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh Shliazhko, Nicolas Gontier,
Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Logesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham
Oblokulov, Zhiruo Wang, Rudra Murthy, Jason Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca,
Manan Dey, Zhihan Zhang, Nour Fahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam Singh, Sasha Luccioni,
Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor, Jennifer Ding, Claire
Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Jennifer Robinson, Carolyn Jane
Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite,
Carlos Muñoz Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, and Harm de Vries.
Starcoder: may the source be with you! 2023.
[16] Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li, Chenghao Mou, Carlos Muñoz Ferrandis, Yacine Jernite,
Margaret Mitchell, Sean Hughes, Thomas Wolf, et al. The stack: 3 tb of permissively licensed source code. arXiv
_preprint arXiv:2211.15533, 2022._
[17] Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret
Zoph, Jason Wei, et al. The flan collection: Designing data and methods for effective instruction tuning. In
_International Conference on Machine Learning, pages 22631–22648. PMLR, 2023._
[18] Qihuang Zhong, Kang Wang, Ziyang Xu, Juhua Liu, Liang Ding, Bo Du, and Dacheng Tao. Achieving> 97%
on gsm8k: Deeply understanding the problems makes llms perfect reasoners. arXiv preprint arXiv:2404.14963,
2024.
[19] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances
_in neural information processing systems, 33:1877–1901, 2020._
[20] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten
Bosma, Denny Zhou, Donald Metzler, et al. Emergent abilities of large language models. arXiv preprint
_arXiv:2206.07682, 2022._
[21] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are
unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
[22] Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration.
_arXiv preprint arXiv:1904.09751, 2019._
[23] Marianna Nezhurina, Lucia Cipolina-Kun, Mehdi Cherti, and Jenia Jitsev. Alice in wonderland: Simple tasks showing complete reasoning breakdown in state-of-the-art large language models. arXiv preprint arXiv:2406.02061,
2024.
-----
| [
"Eldar, Kurtic",
"Amir, Moeini",
"Dan, Alistarh"
] | 2024-06-19T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2406.12572 | https://arxiv.org/abs/2406.12572 | https://www.semanticscholar.org/paper/69958f6cacf86537c8fb7e4efaa8fb2d8e519ce2 |
Mathematical Entities: Corpora and Benchmarks | Mathematics is a highly specialized domain with its own unique set of challenges. Despite this, there has been relatively little research on natural language processing for mathematical texts, and there are few mathematical language resources aimed at NLP. In this paper, we aim to provide annotated corpora that can be used to study the language of mathematics in different contexts, ranging from fundamental concepts found in textbooks to advanced research mathematics. We preprocess the corpora with a neural parsing model and some manual intervention to provide part-of-speech tags, lemmas, and dependency trees. In total, we provide 182397 sentences across three corpora. We then aim to test and evaluate several noteworthy natural language processing models using these corpora, to show how well they can adapt to the domain of mathematics and provide useful tools for exploring mathematical language. We evaluate several neural and symbolic models against benchmarks that we extract from the corpus metadata to show that terminology extraction and definition extraction do not easily generalize to mathematics, and that additional work is needed to achieve good performance on these metrics. Finally, we provide a learning assistant that grants access to the content of these corpora in a context-sensitive manner, utilizing text search and entity linking. Though our corpora and benchmarks provide useful metrics for evaluating mathematical language processing, further work is necessary to adapt models to mathematics in order to provide more effective learning assistants and apply NLP methods to different mathematical domains. | null | ## Mathematical Entities: Corpora and Benchmarks
**Jacob Collard, Valeria de Paiva, Eswaran Subrahmanian**
National Institute of Standards and Technology, Topos Institute, Carnegie Mellon University
[email protected], [email protected], [email protected]
**Abstract**
Mathematics is a highly specialized domain with its own unique set of challenges. Despite this, there has been
relatively little research on natural language processing for mathematical texts, and there are few mathematical
language resources aimed at NLP. In this paper, we aim to provide annotated corpora that can be used to
study the language of mathematics in different contexts, ranging from fundamental concepts found in textbooks
to advanced research mathematics. We preprocess the corpora with a neural parsing model and some manual
intervention to provide part-of-speech tags, lemmas, and dependency trees. In total, we provide 182397 sentences
across three corpora. We then aim to test and evaluate several noteworthy natural language processing models
using these corpora, to show how well they can adapt to the domain of mathematics and provide useful tools for
exploring mathematical language. We evaluate several neural and symbolic models against benchmarks that we
extract from the corpus metadata to show that terminology extraction and definition extraction do not easily
generalize to mathematics, and that additional work is needed to achieve good performance on these metrics.
Finally, we provide a learning assistant that grants access to the content of these corpora in a context-sensitive
manner, utilizing text search and entity linking. Though our corpora and benchmarks provide useful metrics for
evaluating mathematical language processing, further work is necessary to adapt models to mathematics in order
to provide more effective learning assistants and apply NLP methods to different mathematical domains.
**Keywords: terminology extraction, definition extraction, entity linking, mathematics, category theory, in-**
formation retrieval
**1.** **Introduction**
The domain of mathematics has a number of
unique features from the perspective of computational linguistics research. Like most specialized domains, mathematics has its own vocabulary and quirks of language usage that differentiate it from other areas. However, mathematical language also frequently contains inline formulas (where mathematical expressions are embedded
in natural language), rigorously defined concepts,
and formal language for describing proofs, theorems, and other mathematically rigorous statements. Mathematics is also an extremely multidisciplinary domain. Different forms of mathematics are used in a wide variety of scientific domains.
Often, new branches of mathematics are applied to
different domains, resulting in new areas of applied
mathematics. As a result, mathematical language
can change quickly, and new, blended domains can
arise mixing the language use of research mathematics and other scientific domains. Finally, there
is a limited amount of annotated data for mathematical language.
These unique features grant particular importance
to computational linguistic research on mathematics. Any work on mathematical language will be
applicable to a wide variety of scientific domains,
and will improve the experience of researchers and
students attempting to bridge gaps between their
own fields of study and different sub-fields in mathematics. However, natural language processing for
mathematics comes with its own unique set of chal
lenges, as well. Processing formulas (or even language that simply contains formulas) can be especially difficult, as can reconciling the differences
between everyday language and specialized mathematical terms. Unfortunately, while there has been
some recent work on natural language processing
for mathematics, there is still a lack of benchmarks and comprehensive studies describing what
is needed to fully take advantage of this fundamental domain.
Several tasks common to computational linguistics
research are of special interest in mathematics:
- Terminology extraction (TE) for identifying
fundamental mathematical concepts themselves;
- Definition extraction (DE) for identifying formal definitions of mathematical concepts;
- Entity linking (EL) for connecting mathematical concepts to databases; and
- Collocation retrieval (CR) for identifying contexts of use for mathematical concepts.
There are other tasks that are also of interest that
play a less significant role in this paper. These
include (but are not limited to) syntactic and semantic parsing of mathematical text, relation extraction, and extracting and linking other specialized environments such as proofs and theorems. It
is also possible to link natural language to formal
-----
systems such as theorem provers (e.g., Lean[1], Isabelle[2], and Coq[3]) or computer algebra systems
(e.g., Sage[4] and GAP [5]). Linking mathematics concepts both to a structured database representation
and to proof assistants snippets of code is a newer
task, Horowitz and de Paiva (2023). These are important tasks, and we hope to provide some insight
into them, but they are not the focus of this paper.
Instead, our hope is to provide a collection of
mathematical language corpora that provide insight and benchmarking for computational linguistics research on mathematics. This collection currently consists of three corpora, each of which
covers a different context in the field of category
theory. For these corpora, we provide benchmarks and discussions of several high-end models
for terminology extraction and definition extraction, provide a simple model of entity linking, and
show how an interface for collocation retrieval can
aid in the use of these corpora as a resource for
students and researchers. The benchmarks and
[corpora are available at https://github.com/](https://github.com/ToposInstitute/parmesan_benchmarks)
```
ToposInstitute/parmesan_benchmarks and the
```
[learning assistant is available at https://github.](https://github.com/ToposInstitute/parmesan)
```
com/ToposInstitute/parmesan.
```
**2.** **Previous Work**
**2.1.** **NLP and Mathematics**
There has been some scientific work on natural
language processing for mathematical texts, sometimes referred to as mathematical language processing (MathLP). Most of these works focus on
the representation and processing of mathematical formulas. For example, Kristianto et al. (2017)
and Dadure et al. (2022) provide methods for representing mathematical formula for information retrieval.
**2.2.** **Terminology Extraction**
Terminology extraction is the task of identifying
the set of phrases in a text which represent key
concepts in a domain. Terminology extraction is
closely related to named entity recognition, and
often uses the same techniques; the difference is
that terminology extraction is interested in a different set of entities, which usually are not people,
places, or organizations. This is an important task
for mathematics, since it can be used to identify
key vocabulary for downstream tasks such as definition extraction and entity linking, and for the
creation of indices or glossaries.
There has been a wide variety of research on terminology extraction. Early methods made use of
[1https://leanprover-community.github.io/](https://leanprover-community.github.io/)
2https://isabelle.in.tum.de/
[3https://coq.inria.fr/](https://coq.inria.fr/)
[4https://www.sagemath.org/](https://www.sagemath.org/)
[5https://www.gap-system.org/](https://www.gap-system.org/)
regular expressions or other rule-based operations
applied to words or part-of-speech tags. The mwetoolkit3 (Ramisch, 2012), for example, provides
a framework for developing and searching regular expressions at different layers of representation
to extract multi-word expressions, which may be
candidates for terminology extraction. TextRank
(Mihalcea and Tarau, 2004) is another early terminology extraction model that incorporates additional statistical information into a graph-based
algorithm.
More recent terminology extraction methods use
deep learning. DyGIE++ (Wadden et al., 2019)
combines entity extraction with relation and event
extraction to achieve strong results on several
benchmarks. It was followed by models such as
SpERT.PL (Sai et al., 2021) and PL-Marker (Ye
et al., 2022), which introduced additional linguistic
information and a novel packing strategy, respectively.
Notably, many of these methods combine terminology extraction with relation extraction. Though
relation extraction is not a primary goal of this
paper, it is of interest to mathematics, since the
relationships between mathematical concepts can
be quite complex, but should be relatively welldefined.
Most previous work on terminology extraction has
not been applied specifically to math. However,
many of the models mentioned above have been
applied to other scientific domains and evaluated
against the SciERC dataset (Luan et al., 2018),
which consists of 500 scientific abstracts. Since the
sciences often make use of mathematical notation
and concepts, it seems reasonable to expect that
some transfer between the domains is possible.
**2.3.** **Definition Extraction**
Definition extraction is the task of identifying the
parts of a text that define a particular word or
phrase. There are often two components or strategies involved in definition extraction: identifying
sentences that contain definitions, and identifying
the terms and definitions precisely in text. Some
models, such as (Veyseh et al., 2019), use a joint
model which simultaneously identifies definitional
sentences and precise terms and definitions. This
model is evaluated against three datasets: WCL,
WC00, and DEFT. Word Class Lattices (WCL)
is a definition extraction benchmark consisting of
sentences from Wikipedia which distinguishes between definitional and non-definitional sentences
(Navigli and Velardi, 2010). WC00 also distinguishes between definitional and non-definitional
sentences, and contains over 2000 sentences from
the ACL anthology from the scientific domain (Jin
et al., 2013). DEFT consists of two subcorpora:
one covering textbooks from domains including bi
-----
resource for Category Theory nLab[7]. This corpus has undergone similar preprocessing to TAC. It
was selected as an exemplar for fundamental concepts in category theory and as a comprehensive
reference. In addition to mapping concepts directly
to nLab articles, it is also possible to see concepts
used in the context of other articles. For example,
in addition to the article on categories itself, the
word “category” appears in many other contexts
within nLab that can help to elucidate its meaning. To prepare this corpus for use, we remove the
Markdown markup, leaving only plain text. We
have also filtered out documents describing books
as well as meta-articles such as lists and categories.
The third corpus consists of the entire text (4058
sentences) of Basic Category Theory (BCT) by
Tom Leinster (Leinster, 2014). This is an introductory textbook, intended for students without
advanced degrees in mathematics. As such, it is
an exemplar of introductory concepts in category
theory, similar to nLab, though more foundational.
The text of the book is freely available for edit[ing at https://arxiv.org/abs/1612.09375. We](https://arxiv.org/abs/1612.09375)
process the entire LATE[X code of the textbook with]
LaTeXML[8] to convert it into plain text for processing.
To handle LATE[X markup in all three corpora, we]
use the LaTeXML converter to identify mathematical expressions. Completely removing mathematical expressions could introduce problems; since we
later apply parsing to the corpora, the gaps caused
by removing inline math will produce ungrammatical sentences and thus invalid dependency trees.
However, in their raw form, mathematical expressions are represented using LATE[X, which can be]
difficult to read, especially when used to represent
complex formulas. Therefore, we convert mathematical expressions into plain text phrases using
LaTeXML. These approximate the original mathematical formulas, providing the parser with linguistic material free of markup. For example, the expression \mathbb{Z}^n can be represented as simply Z^n.
Though these corpora do not include large amounts
of annotation, they are associated with some useful metadata. The TAC corpus includes titles, authors, dates, and keywords selected by the authors
to describe their abstracts. These keywords will
be used as part of the evaluation in the next section. The nLab corpus includes titles and dates, of
which the titles are used as part of the evaluation
in the next section. The BCT corpus contains LaTeX markup describing theorems and definitions,
which provide additional context and can be used
to evaluate definition extraction.
For all corpora, we also provide annotations of de
[7http://ncatlab.org](http://ncatlab.org)
[8https://dlmf.nist.gov/LaTeXML](https://dlmf.nist.gov/LaTeXML)
ology, history, and physics; and one covering contracts (Spala et al., 2019).
The joint model described in Veyseh et al. (2019)
reports an F1 score of up to 85.3 on WCL, 66.9 on
W00, 54.0 on DEFT Textbooks, and 71.7 on DEFT
Contracts. Another model, (Vanetik et al., 2020)
is designed specifically for mathematics and combines dependency tree and word vector representations to construct a neural classification model.
Their model scores above 0.9 on WCL and above
0.82 on WC00, and above 0.8 on WFM, a benchmark drawn from Wolfram MathWorld specifically for mathematics (Vanetik et al., 2019). This
shows significant variation between different domains. Ideally, we expect to have similar results
for the category theory domain, but due to differences in the data that may not be reflected in
training, some differences are possible.
**2.4.** **Entity Linking**
Entity linking is the task of connecting an entity
(often one extracted by a terminology extraction
system) to a representation of that entity in a
knowledge base such as WikiData[6]. Linking is similar in some ways to word sense disambiguation, in
that the correct knowledge base record must be
identified in the case that a word or phrase cannot
be unambiguously attached to a single record. By
linking entities to a knowledge base, a system can
provide information about entities in a corpus from
a manually curated repository such as WikiData.
As with terminology extraction, most entity linking methods have not been specifically evaluated
on mathematical corpora. However, there have
been a variety of entity linking models that have
achieved good results in other areas. Raiman and
Raiman (2018) uses a type system combined with
a neural classifier to constrain and classify the entities associated with a candidate term. It is also
common for entity linking models to make use of a
knowledge graph, as Mulang’ et al. (2020) do.
**3.** **Corpus Development**
We prepare three initial mathematical corpora for
use in the study of mathematical language processing. The first corpus consists of 755 abstracts (3188
sentences) from Theory and Applications of Cate_gories (TAC), a journal of category theory. This_
corpus is very similar to the one presented in Collard et al. (2022), but has undergone additional
processing and cleaning, which we describe here.
This corpus was selected as an exemplar for stateof-the-art mathematical research. The abstracts
contain many novel concepts as well as advanced
contexts of use for fundamental concepts.
The second corpus consists of 11653 articles
(175151 sentences) from the online encyclopedic
6wikidata.org
-----
root
nsubj
cop
amod amod
Reflexive coequalizers are sifted colimits
reflexive coequalizer be sift colimit
ADJ NOUN AUX VERB NOUN
JJ NNS VBP VBN NNS
Figure 1: An annotated sentence from the TAC
corpus with the original sentence, lemmas, coarsegrained and fine-grained POS tags, and labeled dependencies.
pendency trees, part of speech tags (using the universal dependencies tagset for coarse-grained annotations and spaCy’s English tagset for fine-grained
annotations), and lemmas in CONLL-U format[9].
These annotations were generated automatically
using the open NLP framework spaCy[10]. The annotations are not fundamental to the rest of the
work presented in this paper, and have not been
rigorously evaluated. However, we do hope to provide manual corrections and other improvements
to the data so that these annotations can be used
for evaluation. An example annotated sentence is
given in Figure 1.
**4.** **Experiments**
We evaluate several high-performing models for
each of three tasks: terminology extraction, definition extraction, and entity linking. Though we
do not train these models, we hope to show how
well current state-of-the-art models perform when
generalizing to the domain of mathematics, and to
highlight how our corpora can be used to evaluate
these models.
**4.1.** **Terminology Extraction**
We evaluate terminology extraction models against
four benchmarks provided by our corpora: the set
of author-selected keywords in TAC, the set of
nLab titles, the set of glossary terms in BCT, and a
set of automatically extracted multi-word expressions using mwetoolkit3. Alone, each of these evaluations is imperfect. There are many terms identified by each benchmark which are not identified
by the others. However, most of these benchmarks
should exclusively contain valid mathematical concepts, with the exception of the automaticallyextracted MWEs, which serve to identify novel
concepts which none of the other benchmarks can.
To evaluate each model, we present it with the text
of all three corpora and retrieve the set of extracted
[9https://universaldependencies.org/format.](https://universaldependencies.org/format.html)
```
html
```
[10http://spacy.io](http://spacy.io)
**Benchmark** **Examples**
Author Keywords Abelian categorification, normal epimorphism, open map
nLab Titles Balanced monoidal
category, Fiber,
Triple category
Glossary Homotopy, Manifold, Partially
ordered set
MWEs Free double category, state sum
construction, representable definition
Table 1: Examples of each benchmark type
entities. This list is compared to the four benchmarks described above. The models are not penalized for failing to extract specific instances of
an entity; the only target is to extract the set of
entities appearing in the corpus.
Table 2 shows the results of each model on each
of the four benchmarks, as well as the combined
score for all four benchmarks. The results are also
broken down by corpus, since each corpus provides
a different context of use which may be of interest
to evaluation.
**4.2.** **Definition Extraction**
To provide a benchmark for definition extraction,
we use the definition environments found in BCT.
These definitions were explicitly identified by the
author in the LaTeX code, though they may not be
the only definitional statements in the book. Each
of the evaluated models is given the entire text of
the book, with the task of identifying definitional
content as well as the headword for each definition. We record precision, recall, and F1 scores for
the number of words matched between the benchmark definition and the predicted definition. Table
3 shows the results for two advanced definition extraction systems.
**4.3.** **Entity Linking**
To evaluate entity linking, we have constructed a
set of 126 distinct mathematical concepts and identified WikiData entries that correspond to them
within the field of category theory. This correspondence was determined manually by a mathematician. In some cases, there are still multiple WikiData entries that could correspond to the entity.
In these cases, all possible entries are included.
We evaluate two entity linking models: a simple
query-based model using the Wikidata query service[11] and a simple neural model[12].
[11https://query.wikidata.org/](https://query.wikidata.org/)
[12https://github.com/egerber/](https://github.com/egerber/spaCy-entity-linker/tree/master/spacy_entity_linker)
-----
|Col1|BCT Glossary|Keywords|Titles|MWEs|Combined|
|---|---|---|---|---|---|
||P R F1|P R F1|P R F1|P R F1|P R F1|
|TAC Corpus||||||
|Textrank DyGIE++ SpERT.PL PL-Marker|0.13 0.60 0.21 0.18 0.38 0.24 0.10 0.66 0.17 0.22 0.40 0.28|0.15 0.55 0.23 0.22 0.35 0.27 0.14 0.77 0.23 0.23 0.38 0.28|0.09 0.46 0.14 0.12 0.27 0.16 0.08 0.63 0.14 0.11 0.27 0.16|0.25 0.78 0.38 0.28 0.70 0.40 0.33 0.68 0.44 0.30 0.60 0.4|0.15 0.59 0.24 0.2 0.43 0.27 0.16 0.69 0.26 0.21 0.41 0.28|
|nLab Corpus||||||
|Textrank DyGIE++ SpERT.PL PL-Marker|0.08 0.68 0.14 0.14 0.58 0.23 0.05 0.67 0.09 0.34 0.66 0.45|0.12 0.65 0.23 0.20 0.46 0.28 0.09 0.65 0.16 0.25 0.44 0.32|0.08 0.55 0.14 0.04 0.60 0.08 0.03 0.68 0.06 0.15 0.34 0.21|0.21 0.69 0.32 0.25 0.72 0.37 0.22 0.72 0.34 0.35 0.65 0.45|0.12 0.64 0.20 0.16 0.50 0.24 0.12 0.65 0.20 0.28 0.50 0.36|
|BCT Corpus||||||
|Textrank DyGIE++ SpERT.PL PL-Marker|0.36 0.81 0.50 0.23 0.44 0.30 0.20 0.68 0.3 0.35 0.61 0.44|0.29 0.70 0.41 0.34 0.66 0.44 0.18 0.81 0.30 0.31 0.52 0.39|0.32 0.82 0.46 0.22 0.55 0.31 0.15 0.68 0.25 0.29 0.52 0.37|0.48 0.88 0.62 0.34 0.73 0.46 0.47 0.77 0.58 0.43 0.81 0.56|0.40 0.64 0.49 0.28 0.60 0.38 0.25 0.73 0.37 0.36 0.63 0.46|
|Combined Corpus||||||
|Textrank DyGIE++ SpERT.PL PL-Marker|0.19 0.70 0.30 0.18 0.47 0.26 0.12 0.67 0.20 0.30 0.56 0.39|0.19 0.63 0.29 0.13 0.49 0.21 0.14 0.74 0.24 0.26 0.45 0.33|0.16 0.61 0.25 0.13 0.47 0.20 0.09 0.66 0.16 0.18 0.38 0.24|0.22 0.31 0.26 0.21 0.29 0.24 0.18 0.34 0.24 0.28 0.36 0.32|0.19 0.56 0.28 0.16 0.43 0.23 0.13 0.60 0.21 0.26 0.44 0.33|
[Table 2: Results of terminology extraction on mathematical corpora. Each column represents an evalu-](https://github.com/egerber/spaCy-entity-linker/tree/master/spacy_entity_linker)
ation benchmark, while each row represents a model applied to a particular corpus.
(P@1), recall, and F1 score. P@1 is used since the
entity linking model may provide many potential
candidates in a ranked list, the latter of which are
much less likely to be valid.
|Col1|P@1 Recall F1|
|---|---|
|Query spaCy|0.60 0.82 0.68 0.59 0.52 0.55|
Table 4: Results of entity linking for category theory.
**5.** **Learning Assistant**
In addition to the corpora and benchmarks described above, we have developed a simple learning
assistant called Parmesan (PARsing Mathematical
Entities Search And Navigation) that provides text
search over data in our corpora. The input to
the interface is a term, which may be any word
or phrase that appears in the corpora, and the
output is a set of sentences in which the term occurs, as well as links to Wikidata and nLab entries.
This search is intended to allow users to identify
the contexts in which an unfamiliar term appears.
This complements the definitions provided by entity linking by showing how the terms are actually
used in different mathematical contexts.
There are two types of context that the learning
assistant provides. At a high level, the three corpora (TAC, nLab, and BCT) represent contexts of
|Col1|Precision Recall F1|
|---|---|
|Vanetik et al. Veyseh et al.|0.12 0.44 0.19 0.03 0.33 0.05|
Table 3: Results of definition extraction on BCT
[We developed the query-based model as a simple](https://github.com/egerber/spaCy-entity-linker/tree/master/spacy_entity_linker)
[way to retrieve Wikidata entries that are likely to](https://github.com/egerber/spaCy-entity-linker/tree/master/spacy_entity_linker)
[be in the field of category theory, as opposed to](https://github.com/egerber/spaCy-entity-linker/tree/master/spacy_entity_linker)
other domains. [The complete query we used is](https://github.com/egerber/spaCy-entity-linker/tree/master/spacy_entity_linker)
[provided in the supplementary code. This query](https://github.com/egerber/spaCy-entity-linker/tree/master/spacy_entity_linker)
[finds entries whose label or alias matches the given](https://github.com/egerber/spaCy-entity-linker/tree/master/spacy_entity_linker)
[phrase, but filters out any entries which belong to](https://github.com/egerber/spaCy-entity-linker/tree/master/spacy_entity_linker)
[the following classes, which are unlikely to con-](https://github.com/egerber/spaCy-entity-linker/tree/master/spacy_entity_linker)
[tain mathematical concepts: physical objects, con-](https://github.com/egerber/spaCy-entity-linker/tree/master/spacy_entity_linker)
[crete objects, physical locations, Wikimedia cat-](https://github.com/egerber/spaCy-entity-linker/tree/master/spacy_entity_linker)
[egories, activities, human behaviors, artistic con-](https://github.com/egerber/spaCy-entity-linker/tree/master/spacy_entity_linker)
[cepts, points in time, time intervals, and curren-](https://github.com/egerber/spaCy-entity-linker/tree/master/spacy_entity_linker)
cies.
[The query may seem somewhat arbitrary; it was](https://github.com/egerber/spaCy-entity-linker/tree/master/spacy_entity_linker)
[developed over time during the course of this](https://github.com/egerber/spaCy-entity-linker/tree/master/spacy_entity_linker)
[project to remove specific errors that we found. We](https://github.com/egerber/spaCy-entity-linker/tree/master/spacy_entity_linker)
[expect to continue developing this query to better](https://github.com/egerber/spaCy-entity-linker/tree/master/spacy_entity_linker)
[match the needs of users and improve the results](https://github.com/egerber/spaCy-entity-linker/tree/master/spacy_entity_linker)
of the evaluation.
[Table 4 shows the results of two entity linking mod-](https://github.com/egerber/spaCy-entity-linker/tree/master/spacy_entity_linker)
[els on our benchmark. We provide precision at 1](https://github.com/egerber/spaCy-entity-linker/tree/master/spacy_entity_linker)
```
spaCy-entity-linker/tree/master/spacy_entity_
linker
```
-----
Figure 2: Search results using our user interface. The given results are from nLab and TAC, respectively.
language use. The TAC corpus, being drawn from
journal articles, provides a view into the stateof-the-art, advanced concepts, and newly-coined
phrases. The nLab and BCT corpora, on the other
hand, are primarily dedicated to descriptions of
common, high-level concepts in category theory.
Each corpus also provide precise contexts in the
form of exemplars of sentences and phrases where
the target term is used. We can easily identify
matches of the phrase that the user has input
by finding corresponding lemmas in the annotated
corpus. We use lemmas (generated with spaCy)
-----
to show the term used with different inflections.
We then display all sentences that contain the corresponding lemmas. However, the distinction between the three corpora is kept clear: results from
TAC are returned separately from nLab, which
are returned separately from BCT. This allows the
user to clearly distinguish between different contexts in which terms appear, both at the sentential level, at the document level, and at the corpus level. Links are provided to individual nLab
articles and to specific TAC abstracts if the user
requires more information or additional context.
Figure 2 shows an example of search results found
by the search engine for the search term “double
category”. At the top is the list of knowledge base
entries found for that concept; in this case, there
is the concept of a double category from category
theory in WikiData (Wikidata entry Q99613675)
and the nLab entry on double categories.
Next are shown results from BCT, nLab, and TAC
sentences. Each document is displayed as a separate card with a link to the original document
(TAC abstract, nLab article, or BCT paragraph).
Notably, there are no results for double category in
BCT, indicating that it is a slightly more advanced
term not found in a typical introductory course.
A list of sentences containing the search term are
then shown within the card. The search term is
highlighted where it appears in the text of each
sentence. As can be seen by this example, variant
forms of the word (such as the plural “categories”)
are shown as well as the exact terms searched by
the user. However, there is currently no additional
semantic or vector-based search to identify similar concepts to “double” or “category”. Since the
system is aimed primarily at learners, we hope to
keep the analysis straightforward and easily interpretable to the user.
The example sentences in Figure 2 reveal certain
facts about double categories that are useful to a
newcomer in the field: they are formally more general than 2-categories; there is a kind of double
category called a free double category; there are
certain mathematical problems of interest for double categories.
The user is also able to hide and display the TAC
and/or nLab and BCT corpora individually if they
are only searching for information from a certain
set of contexts.
**6.** **Discussion**
**6.1.** **Mathematical Language**
**Processing**
The experiments in Section 4 show that additional
work is necessary to achieve strong performance
in mathematical language processing. Though
SpERT.PL achieves high recall, this is coupled
with low precision, suggesting that many of the
terms this model predicts are not actually valid
mathematical terms. This can be confirmed qualitatively by examining the set of false positives
found by each model, as shown in Table 5. Many
of these predicted terms, though valid phrases, are
not mathematical in nature and do not refer to
specific concepts.
|DyGIE++|all, also to obtain, be necessar- ily, e / m, if|
|---|---|
|Textrank|1965, all, any diagram, at least one, several approaches|
|SpERT.PL|it, them, basis, R(S), one|
|PL-Marker|and both, these, ideas, to, a choice of a|
Table 5: Example false positives for terminology
extraction.
These models were not given the opportunity to
adapt to the category theory domain through
training. Providing some training examples from
category theory has the potential to improve these
results. However, it should also be noted that Textrank is an unsupervised model of terminology extraction, and still underperforms on mathematical language relative to its performance in other
domains (Mihalcea and Tarau, 2004). This may
suggest that there is inherent difficulty identifying
mathematical concepts.
There are similar challenges for definition extraction, and we can likewise confirm through false positives that many of the results are unexpected for
mathematics, as shown in Tables 6 and 7.
The entity linking models we present, including the
simple query-based model, perform relatively well,
however, possibly due to a relatively low incidence
of ambiguity in category theory. Though some specific terms are highly ambiguous, other mathematical concepts are complex phrases, which do not
have everyday meanings. Additional stress-testing
with sets of shorter phrases may reveal additional
challenges in the area.
**6.2.** **Mathematical Information**
**Retrieval**
The current implementation of the user interface
provides a tool for learners and researchers in the
field of category theory to search for concepts to
find their contexts of use and information about
them in Wikidata. This provides the user with
different points of view about a concept: concise but highly structured, interconnected data in
Wikidata; the expert, but general and pedagogical, view of nLab; and the cutting-edge research
point-of-view in TAC.
Each of these points of view may be useful to different users, and separating them in the display
-----
|Term|Definition|
|---|---|
|composition|describes the process of attaching the outputs of one circuit to the inputs of another|
|a decomposition|the composition-representative subsets of the hom- set T([m], [0|
|isotropy group|a presheaf of groups on C|
|distributive|finite|
|distributive|both a tensor and a par|
|idempotent relations|the|
|cartesian|every comonad has an Eilenberg-Moore object and every left adjoint arrow is|
|isotropy rank|the isotropy rank of a small category is the ordinal at which the sequence of quotients stabilizes|
|FILL|an intriguing version of|
|Cat|Lax (B N|
Table 6: Definitions extracted by Veyseh et al. (2019)
The equivalence is FOLDS equivalence of the FOLDSSpecifications of the two concepts.
The concept of algebra is given as an adjunction with invertible
counit.
Thus we maintain that the notion of linear-distributive category (which has both a tensor and a par, but is nevertheless
more general than the notion of monoidal category) provides
the correct framework in which to interpret the concept of
Frobenius algebra.
The goal of this article is to emphasize the role of cubical sets
in enriched category theory and infinity-category theory.
A model for an EA sketch in a lextensive category is a ‘snapshot’ of a database with values in that category.
Table 7: Definitional sentences identified by Vanetik et al. (2020)
allows the user to compare and contrast different
contexts of use of the words they are looking for,
providing real-world examples and practical information about novel concepts.
This style of interactive learning can be further
improved as we incorporate resources from other
sources and new natural language processing methods. For example, new corpora can be incorporated to provide new contexts of use for concepts.
Adding a repository of articles from a category theory subsection of arXiv would add contexts from
new preprints and a broader class of mathematical journal articles. Similarly, we can incorporate
entity linking to other databases such as Planet
Math[13] or the Encyclopedia of Mathematics[14].
We can also incorporate new advances in natural
language processing and technology. As shown in
Section 4, terminology extraction suffers from challenges in specific domains such as category theory.
Since the relation extraction algorithms we study
[13https://planetmath.org](https://planetmath.org)
[14https://encyclopediaofmath.org/wiki/Main_](https://encyclopediaofmath.org/wiki/Main_Page)
```
Page
```
are unable to accurately extract mathematical concepts, the relations that build on these concepts are
generally lacking as well. With additional training or other advances in relation extraction, the
addition of relations to the interface would introduce a new type of context to users. By understanding how concepts are related to one another,
a learner can understand the meaning of that concept in terms of more familiar ideas. Adding definition extraction, semantic similarity search, and
other natural language processing methods to the
system can grant it similar improvements.
Other future work for the system includes improving the order of search results, better filters on
Wikidata links, and various performance improvements. The addition of automatic definition extraction is also considered to be of particular importance, since definitions as they appear in context will be especially useful to learners.
The principles of this research are by no means
limited to category theory, though category theory does pose some unique challenges and provides
some unique opportunities due to its growing pres
-----
ence in interdisciplinary research. Similar interfaces could, however, be applied to any field.
Overall, our work provides a new approach to
search for learners new to the field of category theory. This approach is centered around providing
context and domain-specific knowledge about user
concepts. Because the user provides the concepts,
there is less need for error-prone concept extraction, and we can instead rely on entity linking, taking advantage of known properties of the domain.
The system provides several different contexts, allowing the user to compare and contrast disparate
sources of knowledge to find the information they
need about novel concepts.
We have shown that state-of-the-art computational
linguistic tools largely do not apply, without adaptation, to mathematical texts. Precision and recall scores are much lower than originally expected.
However, it may be possible to adapt these models
more effectively with additional training provided
by annotated corpora, vocabularies, and knowledge graphs. We have provided some initial linguistically annotated mathematical corpora and
online tools to build up the toolbox for processing
mathematical texts. Much more additional work is
needed. We hope to continue work in category theory, improving our corpora, building up a knowledge graph for category theory using definitions
and relation identification, as well as representing mathematical results (theorems, lemmas, and
propositions). We have also begun extending this
work into a corpus of linear algebra, showing that
the results are not specific to category theory. Our
work complements previous work concentrating on
proofs by targeting mathematical statements, definitions, and concepts.
**7.** **Disclaimer**
Certain commercial entities, equipment, or materials may be identified in this document in order
to describe an experimental procedure or concept
adequately. Such identification is not intended to
imply recommendation or endorsement by the National Institute of Standards and Technology, nor
is it intended to imply that the entities, materials,
or equipment are necessarily the best available for
the purpose.
**8.** **Bibliographical References**
Jacob Collard, Valeria de Paiva, Brendan Fong,
[and Eswaran Subrahmanian. 2022. Extracting](https://aclanthology.org/2022.wnut-1.2)
[mathematical concepts from text. In Proceedings](https://aclanthology.org/2022.wnut-1.2)
_of the Eighth Workshop on Noisy User-generated_
_Text (W-NUT 2022), pages 15–23, Gyeongju,_
Republic of Korea. Association for Computational Linguistics.
Pankaj Dadure, Partha Pakray, and Sivaji Bandy[opadhyay. 2022. Embedding and generalization](https://doi.org/https://doi.org/10.1016/j.jksuci.2021.05.014)
[of formula with context in the retrieval of mathe-](https://doi.org/https://doi.org/10.1016/j.jksuci.2021.05.014)
[matical information. Journal of King Saud Uni-](https://doi.org/https://doi.org/10.1016/j.jksuci.2021.05.014)
_versity - Computer and Information Sciences,_
34(9):6624–6634.
Yiping Jin, Min-Yen Kan, Jun-Ping Ng, and Xiangnan He. 2013. [Mining scientific terms and](https://aclanthology.org/D13-1073)
[their definitions: A study of the ACL Anthol-](https://aclanthology.org/D13-1073)
[ogy. In Proceedings of the 2013 Conference on](https://aclanthology.org/D13-1073)
_Empirical Methods in Natural Language Process-_
_ing, pages 780–790, Seattle, Washington, USA._
Association for Computational Linguistics.
Giovanni Yoko Kristianto, Goran Topi´c, and Akiko
Aizawa. 2017. [Utilizing dependency relation-](https://doi.org/https://doi.org/10.1007/s10791-017-9296-8)
[ships between math expressions in math IR. In-](https://doi.org/https://doi.org/10.1007/s10791-017-9296-8)
_formation Retrieval Journal, 20(2):132–167._
Tom Leinster. 2014. Basic Category Theory. Cambridge University Press.
Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreferencefor scientific knowledge graph construction. In Proc.
_Conf. Empirical Methods Natural Language Pro-_
_cess. (EMNLP)._
Rada Mihalcea and Paul Tarau. 2004. [Tex-](https://aclanthology.org/W04-3252)
[tRank: Bringing order into text.](https://aclanthology.org/W04-3252) In Proceed_ings of the 2004 Conference on Empirical Meth-_
_ods in Natural Language Processing, pages 404–_
411, Barcelona, Spain. Association for Computational Linguistics.
Isaiah Onando Mulang’, Kuldeep Singh, Chaitali
Prabhu, Abhishek Nadgeri, Johannes Hoffart,
and Jens Lehmann. 2020. [Evaluating the im-](https://doi.org/10.1145/3340531.3412159)
[pact of knowledge graph context on entity dis-](https://doi.org/10.1145/3340531.3412159)
[ambiguation models. In Proceedings of the 29th](https://doi.org/10.1145/3340531.3412159)
_ACM International Conference on Information_
_& Knowledge Management, CIKM ’20, page_
2157–2160, New York, NY, USA. Association for
Computing Machinery.
[Roberto Navigli and Paola Velardi. 2010. Learning](https://aclanthology.org/P10-1134)
[word-class lattices for definition and hypernym](https://aclanthology.org/P10-1134)
[extraction.](https://aclanthology.org/P10-1134) In Proceedings of the 48th Annual
_Meeting of the Association for Computational_
_Linguistics, pages 1318–1327, Uppsala, Sweden._
Association for Computational Linguistics.
Johnathan Raiman and Oliver Raiman. 2018.
[Deeptype: Multilingual entity linking by neu-](https://doi.org/10.1609/aaai.v32i1.12008)
[ral type system evolution.](https://doi.org/10.1609/aaai.v32i1.12008) In Proceedings of
_the Thirty-Second AAAI Conference on Artifi-_
_cial Intelligence (AAAI-18), pages 5406–5413._
-----
Santosh Tokala Yaswanth Sri Sai, Prantika
Chakraborty, Sudakshina Dutta, Debarshi Kumar Sanyal, and Partha Pratim Das. 2021.
Joint entity and relation extraction from scientific documents: Role of linguistic information and entity types. In 2nd Workshop on
_Extraction and Evaluation of Knowledge Enti-_
_ties from Scientific Documents (EEKE2021) at_
_the ACM/IEEE Joint Conference on Digital Li-_
_braries 2021 (JCDL2021), Online._
Natalia Vanetik, Marina Litvak, Sergey Shevchuk,
[and Lior Reznik. 2020. Automated discovery of](https://aclanthology.org/2020.lrec-1.256)
[mathematical definitions in text. In Proceedings](https://aclanthology.org/2020.lrec-1.256)
_of the Twelfth Language Resources and Evalu-_
_ation Conference, pages 2086–2094, Marseille,_
France. European Language Resources Association.
Amir Pouran Ben Veyseh, Franck Dernoncourt,
[Dejing Dou, and Thien Huu Nguyen. 2019. A](https://doi.org/10.48550/arXiv.1911.01678)
[Joint Model for Definition Extraction with Syn-](https://doi.org/10.48550/arXiv.1911.01678)
[tactic Connection and Semantic Consistency.](https://doi.org/10.48550/arXiv.1911.01678)
David Wadden, Ulme Wennberg, Yi Luan, and
Hannaneh Hajishirzi. 2019. Entity, relation, and
event extraction with contextualized span representations. ArXiv, abs/1909.03546.
Deming Ye, Yankai Lin, Peng Li, and Maosong
Sun. 2022. [Packed levitated marker for en-](https://aclanthology.org/2022.acl-long.337)
[tity and relation extraction. In Proceedings of](https://aclanthology.org/2022.acl-long.337)
_the 60th Annual Meeting of the Association for_
_Computational Linguistics (Volume 1: Long Pa-_
_pers), ACL 2022, Dublin, Ireland, May 22-27,_
_2022, pages 4904–4917. Association for Compu-_
tational Linguistics.
**9.** **Language Resource References**
[Lucy Horowitz and Valeria de Paiva. 2023. Math-](https://europroofnet.github.io/cambridge-2023/#horowitz)
[gloss: Linked undergraduate math concepts. Eu-](https://europroofnet.github.io/cambridge-2023/#horowitz)
roProofNet Workshop on Natural Formal Mathematics and Libraries of Formal Proofs and Natural Mathematical Language.
Carlos Ramisch. 2012. A generic framework for
multiword expressions treatment: From acquisition to applications. In Proceedings of the ACL
_2012 Student Research Workshop, Jeju, Repub-_
lic of Korea.
Sasha Spala, Nicholas A. Miller, Yiming Yang,
Franck Dernoncourt, and Carl Dockhorn. 2019.
[DEFT: A corpus for definition extraction in](https://doi.org/10.18653/v1/W19-4015)
[free- and semi-structured text. In Proceedings of](https://doi.org/10.18653/v1/W19-4015)
_the 13th Linguistic Annotation Workshop, pages_
124–131, Florence, Italy. Association for Computational Linguistics.
Natalia Vanetik, Marina Litvak, Sergey Shevchuk,
[and Lior Reznik. 2019. WFM dataset of mathe-](https://github.com/uplink007/FinalProject/tree/master/data/wolfram)
[matical definitions.](https://github.com/uplink007/FinalProject/tree/master/data/wolfram)
**10.** **Code Availability**
All of the code necessary to reproduce the
experiments, build the corpus, and run the in[terface are freely available at https://github.](https://github.com/ToposInstitute/parmesan_benchmarks)
```
com/ToposInstitute/parmesan_benchmarks
```
and `https://github.com/ToposInstitute/`
```
parmesan.
```
-----
| [
"Jacob, Collard",
"Valeria, de Paiva",
"Eswaran, Subrahmanian"
] | 2024-06-17T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2406.11577 | https://arxiv.org/abs/2406.11577 | https://www.semanticscholar.org/paper/638ef6dba04e86debb6a3327e4c3b2e9bf7e9b93 |
Measuring and Improving BERT's Mathematical Abilities by Predicting the Order of Reasoning | Imagine you are in a supermarket. You have two bananas in your basket and want to buy four apples. How many fruits do you have in total? This seemingly straightforward question can be challenging for data-driven language models, even if trained at scale. However, we would expect such generic language models to possess some mathematical abilities in addition to typical linguistic competence. Towards this goal, we investigate if a commonly used language model, BERT, possesses such mathematical abilities and, if so, to what degree. For that, we fine-tune BERT on a popular dataset for word math problems, AQuA-RAT, and conduct several tests to understand learned representations better. Since we teach models trained on natural language to do formal mathematics, we hypothesize that such models would benefit from training on semi-formal steps that explain how math results are derived. To better accommodate such training, we also propose new pretext tasks for learning mathematical rules. We call them (Neighbor) Reasoning Order Prediction (ROP or NROP). With this new model, we achieve significantly better outcomes than data-driven baselines and even on-par with more tailored models. We also show how to reduce positional bias in such models. | null | ## Measuring and Improving BERT’s Mathematical Abilities by Predicting the Order of Reasoning
**Piotr Piekos** **Henryk Michalewski** **Mateusz Malinowski**
University of Warsaw University of Warsaw, Google DeepMind
**Abstract**
Imagine you are in a supermarket. You have
two bananas in your basket and want to buy
four apples. How many fruits do you have
in total? This seemingly straightforward question can be challenging for data-driven language models, even if trained at scale. However, we would expect such generic language
models to possess some mathematical abilities
in addition to typical linguistic competence.
Towards this goal, we investigate if a commonly used language model, BERT, possesses
such mathematical abilities and, if so, to what
degree. For that, we fine-tune BERT on a popular dataset for word math problems, AQuARAT, and conduct several tests to understand
learned representations better.
Since we teach models trained on natural language to do formal mathematics, we hypothesize that such models would benefit from
training on semi-formal steps that explain how
math results are derived. To better accommodate such training, we also propose new pretext tasks for learning mathematical rules. We
call them (Neighbor) Reasoning Order Prediction (ROP or NROP). With this new model,
we achieve significantly better outcomes than
data-driven baselines and even on-par with
more tailored models. We also show how to
reduce positional bias in such models.
the aspects needed to solve these problems. On the
other hand, deep learning (LeCun et al., 2015) aims
to develop artificial general intelligence that scales
better to various problems.
However, despite many successes in computer vision and natural language processing (Devlin et al.,
2018; He et al., 2016; Krizhevsky et al., 2012; Lan
et al., 2019; Mikolov et al., 2013), data-driven methods evade our dream of building a system with
basic, every-day, mathematical skills. As largescale natural language models become more common (Devlin et al., 2018; Brown et al., 2020), we
would expect them to also reason mathematically.
Since natural language understanding also involves symbolic manipulation (Liang, 2016), we
treat mathematical reasoning as a language understanding and revisit the data-driven paradigm.
For that, we rely on a recent language model,
BERT (Devlin et al., 2019), and challenge it with
math word problems (Ling et al., 2017). Even
though such language models have initially shown
promising results, more recent investigation shows
they may rely on various biases in their predictions (Hendricks et al., 2018; Brown et al., 2020;
Bhardwaj et al., 2020; Kurita et al., 2019). Here,
we also follow that line of investigation and show
these models can answer correctly without an understanding of the rationale behind it.
Furthermore, as directly predicting answers to
math problems often requires multiple steps of
reasoning, we show that we can improve BERT’s
generalization by exposing it to rationales (Ling
et al., 2017; Hendricks et al., 2016; Lei et al., 2016).
These are, however, only used during training similarly to a teacher that shows a student a justification
for each answer. But then, the student is evaluated
only on the ability to answer these questions during the college exam correctly with no access to
rationales. Finally, to learn a better representation
from rationales and to improve the generalization
**1** **Introduction**
Automatically solving math word problems has a
long history dating back to the middle sixties (Bobrow, 1964). Early approaches were rule-based
matching systems that solve the problem symbolically. Even though there are some impressive symbolic systems that operate in a relatively narrow
domain, the inability to successfully scale them up
is sometimes presented as a critique of the goodold-fashioned AI, or GOFAI (Dreyfus et al., 1992).
One issue is to create a formalism that covers all
-----
_Figure 1: BERT (right) and our novel extension (left). We use_
_shared architecture but we separate question tokens (green_
_blocks) from rationales (blue blocks) using different segment_
_and positional embeddings. We show all three losses. MLM_
_predicts masked tokens (depicted here as PrQ,k). We use ROP_
_or NROP to predict if the ordering of rationale steps is correct._
_For question-answering, we fine-tune the whole model with a_
_classification layer using softmax. We use the embedding that_
_corresponds to the [CLS] token as the input representation._
even further, we introduce novel pretext tasks and
corresponding losses, which we name (Neighbor)
Reasoning Order Prediction (ROP or NROP). We
also show that permutation invariant losses can lead
to less biased representations. With that, we outperform other data-driven baselines, and are even
on-par with methods that are more tailored to mathworld problems and the AQuA-RAT dataset.
**2** **Methods**
We use the following methods, each initialized with
BERT-base pre-trained on Wikipedia and Books
Corpus (Devlin et al., 2018; Zhu et al., 2015). Note
that, in fine-tuning they all have the same number
of parameters.
1) BERT-base. We fine-tune BERT to predict the
correct answer and show its transfer to math word
problems.
2) BERT-AQuA. We use the MLM loss on the
AQuA-RAT questions before training to predict
correct answer.
3) BERT-AQuA-RAT. We use the MLM loss on
the AQuA-RAT questions and rationales and show
if we can inject knowledge from rationales into
BERT.
4) BERT-(N)ROP. We use the MLM loss and the
novel (N)ROP loss for coherence prediction (defined later) and show if we can improve the results
by focusing the model on rationales.
Later in this paper, we propose permutation invariant losses that additionally reduce positional
biases of the BERT-base model, and can work with
all the pretext tasks described above.
_Figure 2: ROP or NROP with positive (left) and negative_
_(right) labels. We randomly swap two rationales and classify_
_if that change has happened._
**2.1** **Architectures, pretext tasks and losses**
We base our architecture on BERT (Devlin et al.,
2019) that has 12 transformer blocks (Vaswani
et al., 2017). As the core, we use the standard configuration described in (Devlin et al., 2019). We use
three self-supervised losses. One is the standard
Masked Language Modelling (MLM) but extended
to work on rationales. Other two are our new losses,
(Neighbour) Reasoning Order Prediction (ROP or
NROP). Figure 1 shows two variants of our models.
Note that, during fine-tuning, rationales and all the
self-supervised losses are discarded.
**MLM is the Masked Language Modelling (Devlin**
et al., 2019). We randomly mask 15% of the input
tokens by a special token [MASK]. The objective
of this loss is to predict the masked token using
its context casted as a classification problem over
the tokenizer vocabulary. Loss is calculated only
on masked tokens. We extend this loss to rationales. First, we randomly choose whether we mask
a question or rationale. Next, we follow the procedure above applied to either a question or rationale.
However, to encourage binding between questions
and rationales, we use the whole context for the
predictions. Interestingly, there are parallels between masking numbers and solving mathematical
equations, where it can be seen as solving the equation with unknown. For example, 2+[MASK] = 4
becomes 2 + x = 4. As a consequence, models
during training organically deal with mathematical calculations without defining a specific loss
for mathematics allowing soft transitions between
natural and more formal languages.
**ROP is our novel coherence loss.** Since rationales are sequences of consecutive reasoning steps,
the order of the execution is critical as shown in
Figure 2. Following this intuition, we introduce
-----
Reasoning Order Prediction (ROP) that predicts
whether the order of the rationale steps is preserved.
Hence it encourages the network to pay more attention to rationales. The loss is similar to Sentence
Order Prediction (SOP) (Lan et al., 2019), but ours
is focused on learning reasoning steps. NROP is an
extension of ROP where only consecutive rationale
steps are swapped making the prediction (swap
or no swap) task more challenging and, hence, it
can arguably lead to a better representation as understanding the correct ordering is more nuanced.
Indeed, we observe that our models trained with
NROP correctly predict if swap has occurred in
about 75% cases, while with ROP in about 78%
cases (both on the validation set). This indeed,
confirms our hypothesis that NROP task is more
challenging than ROP.
**3** **Results**
**Dataset. We use AQuA-RAT (Ling et al., 2017). It**
has about 100k crowd-sourced math questions with
five candidate answers (one is correct). Each question has a rationale – a step-by-step explanation of
how the answer is computed – that is only available
during training. At test time answer predictions
are based on questions. The train set has roughly
100k question-answer-rationale triples, while dev
and test about 250 question-answer pairs each.
**Main results. Table 1 shows our main results. We**
see that our method is the state-of-the-art among
the models with minimal inductive biases and is
very competitive to the other two models that
are more specific to handle word math problems
(e.g., requires programs). Moreover, even though
BERT is already a stronger model than LSTM, it
is better to use its MLM pretext task and loss on
the AQuA-RAT questions (BERT-AQuA) or even
better on questions and rationales (BERT-AQuARAT). However, models with our novel coherence
prediction losses can better learn from rationales
(BERT-ROP and BERT-NROP).
Moreover, we observe a highly sensitive relationship between dev and test sets (Figure 3, left),
where small changes in the accuracies in the former set can lead to more dramatic changes at test
time. Indeed, the correlation of results between
both sets is only 0.082. As the validation set is
quite small, we propose an extended dev consisting
of 5000 randomly chosen samples from the training
set extended by the whole dev set. Although not
ideal, and the sensitive relationship is still present
|Model|Accuracy|
|---|---|
|Random chance LSTM (Ling et al., 2017) BERT-base (ours) BERT-AQUA (ours) BERT-AQuA-RAT (ours) BERT-ROP (ours) BERT-NROP (ours)|20.0% 20.8% 28.3(±2.0)% 29.1(±1.7)% 32.3(±1.8)% 35.4(±1.0)% 37.0(±1.1)%|
|AQuA-RAT (Ling et al., 2017) MathQA (Amini et al., 2019)|36.4% 37.9%|
_Table 1: Comparison of data-driven (first six rows) with two_
_hybrid approaches that use stronger and hence more specific_
_inductive biases (last two rows). Standard deviation estimates_
_(over random initializations) is given in parentheses, where_
_we see our losses can reduce the variability slightly._
_Figure 3: Accuracies for dev and test sets. Green lines show_
_the iteration that maximizes validation accuracy. The image_
_also shows the sensitivity of relationship between test and the_
_original (left) or our extended (right) validation set._
(Figure 3, right), we have increased the correlation
to 0.401. With such a new validation set, we report
37% test accuracy but we can also see that 40% is
within the reach (Figure 3, right).
**Rationales. We hypothesize that rationales con-**
tain information that is either missing or hard to
extract from questions. For instance, their structure
is different; they are more formal with emphasis
on the logical steps. However, testing that hypothesis is non-trivial as there is a confounding factor – adding more rationales results in more data.
Therefore, we artificially modify the dataset so that
both models (one trained only on questions, and
another one on questions and rationales) are trained
on roughly the same number of data points. For
that, we have estimated that rationales have 1.7
times more tokens than questions. This means that
a question combined with rationale has around 3
times more tokens than just a question. If our hypothesis is valid, training on 20% questions and
rationales should give better results than training
on 60% questions (counting the number of tokens).
We therefore created samples of respective sizes of
just questions and questions combined with rationales. We show our results in Figure 4. The results
suggest that adding more questions is insufficient
and only slightly improves the overall performance.
-----
|Original question|Col2|
|---|---|
|How much is 27 / 3|A)13 B)9 C)3 D)12 E)17|
|Generated questions||
|How much is 27 / 3 How much is 27 / 3 How much is 27 / 3 How much is 27 / 3 How much is 27 / 3|A)9 B)13 C)3 D)12 E)17 A)13 B)9 C)3 D)12 E)17 A)13 B)3 C)9 D)12 E)17 A)13 B)12 C)3 D)9 E)17 A)13 B)17 C)3 D)12 E)9|
_Table 2: Our generation method for the permutation consis-_
_tency test. Models get a point only if they solve all them._
**Correct Option: D**
**Example of a problem solved by BERT A ship went on a**
_voyage. After it had traveled 180 miles a plane started with_
_10 times the speed of the ship. Find the distance when they_
_meet from starting point.?_
**Answers: A)238, B)289, C)200, D)287, E)187**
**Correct Option: C**
|Model|Score|
|---|---|
|Random chance|0.032%|
|BERT|4.33%|
|BERT+NROP|11.02%|
|BERT AUG|13.4%|
|BERT+NROP AUG|19.7%|
|BERT SEP-NC|15.0%|
|BERT+NROP SEP-NC|22.7%|
|BERT SEP-C|16.1%|
|BERT+NROP SEP-C|23.9%|
_Table 3: Our results for the permutation consistency test._
Drop from 37.0% to 11.02% (Table 3) suggests
that models rely strongly on the order of answers.
To reduce such a bias, we test several permutation
invariant losses.
1) AUG. We sample randomly 25 permutations of
all the possible answers and use them during training. Original ordering is not used, so there is no
order bias. This is a data augmentation technique.
2) SEP-NC. The original models are trained on
a 5-class classification task, where we build the
representation by using questions and all the candidate answers, i.e., BERT(Q||P ). Here, || denotes
concatenation, Q is the question and P represents
the sequence of all answers. In SEP-NC, we block
the path between all the candidate answers and the
BERT-base. Next, we use a late-fusion to predict
if the given candidate answer matches with the
question. That is, we use the following formulation f (BERT(Q)||BERT(C)), where C ∈ _P is a_
single candidate answer and f is a multi-layer perception (with two layers). At test time, the model
_Figure 4: Accuracy scores conditioned on the number of_
_tokens available for training. To support our argument that_
_training on rationales is qualitatively different than questions,_
_we align both together so that we have comparable number of_
_tokens in both cases. Plot shows the progression of the dataset_
_size. Starting with 650K of tokens - 20% dataset BERT-AQuA_
_and 6.66% for BERT-NROP and ending with 3.25M - 100% of_
_dataset for BERT-AQuA and 33.3% dataset for BERT-NROP._
_This shows that training with rationales leads to a better_
_representation. Even better than training with more questions._
On the other hand, using rationales is more helpful.
**Embeddings. To better understand the difference**
between BERT and BERT+NROP, we analyze
theirs embeddings. For our analysis, we sample
2500 questions with a single operator in rationales,
and next we visualise them with T-SNE (Van der
Maaten and Hinton, 2008). We show both in Figure
5. We observe that BERT+NROP embeddings preserve more information about different operators.
**Permutation consistency. Random guessing on**
AQuA-RAT yields 20%. With that in mind to separate questions that were solved by chance, we have
constructed a new evaluation task – permutation
consistency test – where each question gets 5 answers at different positions. Table 2 shows our
procedure. Here, models only score a single point
if they solve all 5 questions correctly. Hence, a
random chance is 0.032% in such experiments.
Table 3 shows our results. BERT+NROP solves
almost three times as many questions as BERT.
Additionally, further inspection shows that BERT
relies on choosing the answers that most stand out,
e.g., numbers ending with zeros or floats while every other option is an integer. We didn’t observe
that simple patterns with BERT+NROP. Questions
solved by BERT+NROP usually contain one or two
operations and show that BERT+NROP better understands the problem. Below, we exemplify two
math problems solved by both models.
**Example of a problem solved by BERT+NROP: 8 man**
_work for 6 days to complete a work. How many men are_
_required to complete same work in 1/2 day?_
**Answers: A)93, B)94, C)95, D)96, E)97**
-----
_Figure 5: BERT and BERT+NROP embeddings. Colours represent different operators in rationales (T-SNE). BERT+NROP_
_embeddings better separate operators._
is prompted to score all five candidate answers and
select the one with the highest score. Appendix has
more information about this method.
3) SEP-C. As models trained with SEP-NC do not
have access to all the possible answers, their biases
to answer positions are significantly reduced. However, these models cannot compare each answer to
all other candidate answers. Here, we use the following formulation f (BERT(Q||P )||BERT(C))
to measure the compatibility of the input (question
_Q and all the candidate answers P_ ) with the given
candidate answer C ∈ _P_ . We also reset the positional encoding between every possible answer
in P . In such a way, we hypothesise the network
can learn a less biased representation, and on the
other hand, use relationship between the candidate
answers. Table 3 shows SEP-NC and SEP-C vastly
outperform the original model on the permutation
consistency test. Details are in the appendix.
SEP-NC and SEP-C improve permutation consistency tests. Yet, they give similar results to original
methods in accuracy measuring task. They achieve
respectively 33.5% (SEP-NC) and 35.4% (SEP-C).
**Questions difficulty.** To better understand the
models’ performance, we check which questions
are difficult for the model. We categorize questions
by their difficulty for BERT-NROP and BERT. To
estimate a question’s difficulty, we have ranked the
candidate answers according to the model’s uncertainties. For instance, if the correct answer has the
2nd largest probability, we assign to that question
difficulty two. With that, we group questions into
5 difficulty categories, from the easiest: D1, .., D5.
Manual inspection shows that for BERT+NROP:
_D5 requires additional knowledge or implicitly de-_
fined numbers (e.g., adding first 100 numbers), D4
requires geometry or non-linear equations and systems, D3 requires solving linear systems with a
few basic operations, D2 requires solving simple
equations, and D1 has one or two basic operations
with clearly written numbers. We show an example
from each group in the supplementary material. We
didn’t observe a similar pattern for BERT with the
exception of the easiest group D1 where the model
chooses the answer that is somewhat different from
other candidates. We provide an example of each
group in the supplementary materials.
Finally, we also compare the difficulty of questions with the difficulty perceived by humans. For
that, we have conducted a small-group human
study, where we have asked participants to solve
some AQuA-RAT questions and rate their difficulty.
We find a positive correlation between the difficulty
measured by our models (as described above) to
the difficulty judged by humans. We give more
details in the appendix.
**Conclusions.** We have investigated if BERT (Devlin et al., 2019) – a pre-trained, large language
model – can deal with mathematical reasoning. We
find that its representation is biased (Brown et al.,
2020; Bhardwaj et al., 2020; Kurita et al., 2019)
also in mathematics. We investigate and describe
that bias. Our novel pretext tasks and losses reduce that bias, but the network still finds shortcuts.
We hope our work will spark interest of the community in developing language models capable of
mathematical reasoning.
**Acknowledgements.** We thank Wang Ling (DeepMind) for his comments and suggestions on our draft. Also,
we thank Piotr Bilinski and all participants of the 2020´
Machine Learning Project course at the University of Warsaw for the conversations about the project. All experiments were performed using the Entropy cluster funded by
NVIDIA, Intel, the Polish National Science Center grant
UMO-2017/26/E/ST6/00622 and ERC Starting Grant TOTAL.
The work of Henryk Michalewski was supported by the Polish
National Science Center grant UMO-2018/29/B/ST6/02959.
-----
**Impact Statement**
Our research follows the data-driven paradigm for
creating general-purpose language models with
some mathematical skills. We expect that mathematically aware language models will broaden the
spectrum of topics they can understand, increasing
their reliability and making them more useful.
Improving mathematical abilities and coherence
in language models is likely to affect questionanswering or dialogue systems, search engines or
text summarization systems.
One considerable risk in developing language
models at scale is that they could use various
workarounds and biases to achieve their results.
We have shown that issues in the context of mathematical reasoning. Such problems can become
hazardous when wrong numbers could lead to bad
decisions. Additionally, a person could easily fall
into the fallacy that the order of magnitude is correct even if the answer is incorrect. As we showed,
the model can favour round numbers over the ones
close to the right answer. To mitigate the risk, we
encourage considering additional tests and investigating the models more rigorously.
**A** **AQuA-RAT example**
**Question: A starts a business with Rs.40,000. After 2 months,**
_B joined him with Rs.60,000. C joined them after some more_
_time with Rs.120,000. At the end of the year, out of a total_
_profit of Rs.375,000, C gets Rs.150,000 as his share. How_
_many months after B joined the business, did C join?_
**Options: A) 30, B) 32, C) 35, D) 36, E) 40**
**Rationale:**
_Assume that C was there in the business for x months_
_A : B : C = 40000 ∗_ 12 : 60000 ∗ 10 : 120000 ∗ _x_
= 40 ∗ 12 : 60 ∗ 10 : 120x = 40 : 5 ∗ 10 : 10x
= 8 : 10 : 2x
= 4 : 5 : x
_C’s share = 375000 ∗_ _x/(9 + x) = 150000_
=> 375x/(9 + x) = 150
=> 15x = 6(9 + x)
=> 5x = 18 + 2x
=> 3x = 18
=> x = 18/3 = 6
_It means C was there in the business for 6 months. Given that_
_B joined the business after 2 months. Hence C joined after 4_
_months after B joined_
_Answer is B_
**B** **Input representation**
All BERT variants use the representation that corresponds to a special token [CLS] that we put at the
beginning of the whole input sequence consisting
of question tokens followed by rationale tokens,
and in the downstream, question-answering task,
rationale tokens are replaced by the answer options.
With that, the classification uses the contextual embedding of [CLS] that captures the entire input.
MLM classifies over the entire vocabulary of possible words while the other two losses use a binary
cross-entropy loss for the predictions.
**C** **Training protocol**
We train all our architectures on AQuA-RAT using the following training phases. In all cases, we
choose our best model based on the performance
on the validation set (dev set), and report the final
performance on the test set.
**Pre-training.** Each model is pre-trained on a
large corpus of texts written in natural language
sampled from English Wikipedia and BooksCorpus (Devlin et al., 2018; Zhu et al., 2015). We
use this as the base (BERT-base) model that is also
used in all other variants of BERT. In practice, we
initialize all the models with the weights using the
HuggingFace library (Wolf et al., 2019) and don’t
keep final layer for fine-tuning. Our model therefore has the same number of weights as BERT-base.
**Self-supervision.** Here, we use our newly introduced losses, ROP and NROP, where our models use questions and possibly rationales from the
AQuA-RAT dataset. Both questions and rationales
use the same word embeddings. However, to distinguish between both modalities we use two segment
embeddings. The first one for all the question tokens, and the second one for all the rationale tokens.
That is, the segment embedding is shared among
all the question tokens, and separately among all
the rationale tokens. We use dynamic masking (Liu
et al., 2019). Here, tokens are randomly masked
for each batch. We naturally extend this approach
to other losses that we use in this phase. That is,
ROP and NROP negative examples are randomly
recreated every k epochs, where k = 2 in our case.
**Fine-tuning** is the last training phase. Here, once
our models have learnt the representation during
the self-supervised phase, we tune such a representation to the question-answering downstream task.
In this task, our input consists of question tokens
and possible answer options. There are five such
options that comes with the dataset. Like other
methods, we tread this as a five-class classification
task where the classification head is added on top
of the final embedding of the input. We consider
the embedding corresponding to the first (from the
left) [CLS] token as such the final representation.
-----
**D** **Implementation details**
In our experiments, we use four TITAN V GPUs.
We use a multi-gpu setup. In the pre-training phase,
we use batch size equals to four for each GPU
device. Therefore the effective batch size equals
to sixteen. We use the learning rate 5 · 10[−][5] and
trained the models for 24 epochs. In the fine-tuning
phase, we use early stopping criteria, based on the
accuracy score on the validation set. We use the
following criteria. If the model does not improve
the performance in 15 consecutive epochs, we stop
training, and evaluate a model that yields the highest validation performance. We use ADAM optimizer with learning rate 10[−][5] and gradient clipping
that sets the maximal gradient’s norm to one. All
our settings use the same hyper-parameters but they
differ due to the random initialization of our selfsupervised networks (during the self-supervised
training phase) and the classification networks (during the fine-tuning phase). Self-supervision phase
takes around 4 days on 4 GPUs, whereas finetuning takes 8 hours on a single GPU.
**E** **Question difficulty**
At this section we present an example from each
difficulty group for BERT+NROP and BERT. We
have described the grouping procedure in the main
paper.
**E.1** **BERT+NROP**
_D5: How many ways A boy can reach the top of stairs which_
_contain 10 steps, when he can take either one or two steps_
_every time?_
**Answers: A)88, B)89, C)90, D)91, E)92**
**Correct Answer: B**
**Model Answer: D**
_D4: A square piece of cloth is trimmed by 4 feet on one edge_
_to form a rectangular piece, which is then cut diagonally in_
_half to create two triangles. If the area of each of triangle is_
_70 square feet, what was the perimeter (in feet) of the original_
_piece of square cloth?_
**Options: A)56, B)58, C)60, D)62, E)64**
**Correct Answer: A**
**Model Answer: B**
_D3: Train A leaves a station every 16 minutes and Train B_
_leaves every 17 minutes. If both trains just left the station_
_simultaneously, how long until they do so again?_
**Options: A)272 minutes, B)304 minutes, C)190 minutes, D)70**
_minutes, E)35 minutes_
**Correct Answer: A**
**Model Answer: B**
_D2: 10kg of a mixture contains 30% sand and 70% clay. In_
_order to make the mixture contain equal quantities of clay and_
_sand how much of the mixture is to be removed and replaced_
_with pure sand?_
**Options: A)10/7, B)20/7, C)30/7, D)40/7, E)50/7**
**Correct Answer: B**
**Model Answer: C**
_D1: If one third of 3/4 of a number is 21. Then, find the_
_number?_
**Options: A)84, B)66, C)28, D)19, E)11**
**Correct Answer: D**
**Model Answer: D**
**E.2** **BERT**
_D5: The length of the ribbon was originally 30 cm. It was_
_reduced in the ratio 5 : 3. What is its length now?_
**Answers: A)18, B)30, C)6, D)15, E)12**
**Correct Answer: A**
**Model Answer: B**
_D4: An electric pole, 14 metres high, casts a shadow of 10_
_metres. Find the height of a tree that casts a shadow of 15_
_metres under similar conditions._
**Options: A)21, B)22, C)20, D)23, E)24**
**Correct Answer: A**
**Model Answer: C**
_D3: A rope 20 meters long is cut into two pieces. If the length_
_of one piece of rope is 3 meters shorter than the length of_
_the other, what is the length, in meters, of the longer piece of_
_rope?_
**Options: A)7.5, B)8.9, C)9.9, D)11.5, E)11.7**
**Correct Answer: D**
**Model Answer: B**
_D2: Jerry purchased a 1-year $5,000 bond that paid an an-_
_nual interest rate of 12% compounded every six months. How_
_much interest had this bond accrued at maturity?_
**Options: A)$5102, B)$618, C)$216, D)$202, E)$200**
**Correct Answer: B**
**Model Answer: A**
_D1: I have a money pouch containing Rs. 700. There are_
_equal number of 25 paise coins, 50 paise coins and one rupee_
_coins. How many of each are there?_
**Options: A)453, B)651, C)400, D)487, E)286**
**Correct Answer: C**
**Model Answer: C**
**F** **Permutation invariant methods**
In the main paper, we have shown that typical models can use positional biases in achieving answers.
This results in a low permutation consistency score
-----
(Table 3 in the main paper). To handle that issue,
we have defined extra variants that do not use positional encodings for the answer options and instead
they rely on the retrieval mechanics where input
representations are matched against the candidate
answers. Here, we describe two such variants.
**F.1** **Original methods**
Original models create an embedding of a sentence
extended by possible questions. This embedding
is then transformed by a linear layer to predict the
correct answer. That is,
_o1 = f1(BERT(Q||P_ ))
where o1 is a 5-dimensional vector with probabilities for each possible answer, Q is a question, P
are all possible answers, || represents concatenation, f1 is a single fully connected layer from 768dimensional space to 5-dimensional space with the
softmax activation. BERT is a BERT-base sentence embedding. The same approach is used for
BERT+(N)ROP.
**F.2** **SEP-NC**
In SEP-NC and SEP-C, we use separate embeddings for a question and SEParate embedding for a
candidate answer. They differ, however, in the fact
that SEP-C has access to all five possible answers,
while SEP-NC has access only to one prompted
candidate answer. Therefore NC stands for ”no
candidates”, while C stands for ”candidates”.
We train the SEP-NC model on a binary classification task to predict whether each candidate
answer C is correct. The method produces two
embeddings, one for question and another one for
a candidate answer C ∈ _P_, and next concatenates
them. That is,
_o2 = f2(BERT(Q)||BERT(C))_
where o2 is an estimated probability that C is a
correct answer, P is the sequence of all possible
answers, f2 is a single fully connected layer from
1536 (768 * 2) dimensional space to 1-dimensional
space with the sigmoid activation. Note that, all
candidate answers are independent of the question.
That is, BERT cannot use positional biases in deriving an answer. At test time, the model is prompted
to score all five candidate answers and select the
one with the highest score. We naturally extended
that approach to BERT+ROP and BERT+NROP.
Table 3 (the main paper) shows a significant improvement over the baseline method.
**F.3** **SEP-C**
SEP-NC method could be too restrictive as it does
not allow the model to compare against different
answers. Therefore, we propose another approach
that 1) alleviate the issue with positional biases, but
2) can compare between different answer options.
We call that approach SEP-C.
Originally for each token, a positional encoding
is assigned based on its position. In SEP-C, before assigning positional encoding, we artificially
reset the position at the beginning of each possible answer. For example, if possible answers are:
_a)10, b)20, c)30, d)40, e)50 they are changed into_
10; 20; 30; 40; 50 and after the tokenization, we get
the following list of tokens: [’1’,’0’, ’;’, ’2’, ’0’,
’;’, ’3’, ’0’, ’;’,’4’, ’0’, ’;’, ’5’, ’0’]. Modified positional encoding will assign value based only on
the relative position to the beginning of the current
possible answer. Therefore, in the example above,
each ’0’ will receive the same positional encoding,
and ’1’ will get the same positional encoding as
’2’, ’3’, and so on.
Formally, we have
_o3 = f3(BERT(Q||Pm)||BERT(C))_
where Pm is the sequence of all the possible answers but modified as explained above. Note that,
in this formulation, the model can use the information for all the possible answer options, but their
order is not taken into account. Table 3 (the main
paper) shows a significant improvement over the
baseline method.
**F.4** **Human study**
We carried an initial human study on the group of
16 volunteers from University of Warsaw. Volunteers were Mathematics and Informatics students
from the Faculty of Mathematics, Informatics and
Mechanics. We asked the participants to solve
questions sampled from the AQuA-RAT dataset.
We are interested in the relation between BERTs
difficulty, BERT+NROP difficulty and human difficulty. Therefore to have a full image we would
like to have 2 questions for each question difficulty
pair, for example (D1 BERT, D2: BERT+NROP).
However, that would give 25 combinations and 50
questions if we wanted to have 2 questions per combination. That would be too much to ask from a
volunteer participant. In order to reduce the number of questions, we group our 5 difficulty groups
into 3 categories as follows.
-----
|dataset|A|B|C|D|E|
|---|---|---|---|---|---|
|train|21.03%|22%|22.87%|19.95%|14.15%|
|dev|27.17%|25.98%|16.93%|19.69%|10.24$|
|test|24.80%|22.83%|20.87%|18.11%|13.38%|
_Table 4: Answer distribution in each dataset._
choose the first answer (A) gets about 24% test
accuracy.
**H** **Negative results**
While developing our self-supervised losses, we
have developed another loss that turned out to be
unhelpful. Here, we describe that loss as some
its parts could be insightful for others. (N)ROP is
a local loss focusing on rationales but not on the
connections between questions and rationales. For
that, we have developed Question Rationale Alignment (QRA). QRA changes a rationale with 50%
probability to a randomly chosen rationale from
the current batch. However, simply changing rationales would result in trivially solvable task in most
cases. All the model would have to do is check
whether numbers in the rationale and the question
match. Hence, we mask number tokens with a
special token QRA alone or QRA combined with
NROP does not improve the results, it gives it gives
33.9% accuracy on the test in the best combination,
so we didn’t include it in the main results.
**I** **Related work**
We are inspired by the following research.
**BERTology. We use BERT (Devlin et al., 2019)**
as our core. It uses Transformers (Vaswani et al.,
2017); powerful neural architectures that applies a
trainable function to all the pairs of input embeddings. It also uses masking that covers a fraction
of the input words and requires the network to predict the hidden words based on the context. With
both ingredients, the meaning (representation) of a
word emerges from the “company it keeps” (Firth,
1961). In practice, often, such representations are
pre-trained on large textual corpora with no need
for annotations, and next fine-tuned on the downstream tasks. BERT’s strong performance has resulted in the Cambrian explosion of studies of the
inner working mechanisms and various modifications (Clark et al., 2019; de Vries et al., 2019; Lan
et al., 2019; Liu et al., 2019; Sanh et al., 2019; Radford et al.; Raffel et al., 2019; Yang et al., 2019).
Finally, our Reasoning Order Prediction (ROP) is
inspired by Sentence Order Prediction (SOP) (Lan
et al., 2019). However, ROP works with multiple
_Figure 6: The average human-judged difficulty for questions_
_from each model difficulty group._
- Easy: D1
- Medium: D2 and D3 combined
- Hard: D4 and D5 combined
Because of that we have only 9 possible combinations and by sampling 2 questions from each
combination we still have a feasible number of
questions (18).
Apart from solving the question, we asked to rate
question difficulty on a scale from 1 (the simplest)
to 10 (the most challenging). In general, our participants were knowledgeable in math and solved all
the questions correctly. With that grouping we now
The average human-rated difficulty for each of 9
combinations is presented in Figure 6. The results
show that the progression of human difficulty is
correlated with the difficulty judged by the models. Additionally, the human difficulty seems to
be more sensitive to BERT+NROP difficulty than
to BERTs. In other words, increasing the difficulty of BERT+NROP will increase the human
difficulty more than the increasing difficulty of
BERT. This observation fits our previous observations that BERT+NROP solves the most straightforward questions while BERT is looking for some
leaks, like looking for the roundest answer.
**G** **Distribution of answers**
Table 4 shows the distribution of the answers in
the AQuA-RAT (Ling et al., 2017) dataset in all
the folds. Imbalance in distributions could potentially be used by models to find easy, shortcut solutions. For instance, a constant classifier that always
-----
rationale sentences, where by changing the order
we force the network to understand the consecutive
“reasoning” steps. We have also further extended
ROP to a more difficult Neighbor Reasoning Order
Prediction (NROP).
**Language and math.** Development psychologists (Cocking et al., 1988; Mestre, 2013) often
argue for the necessity of learning languages and
point out that those with limited language skills
are in danger of under-performing at school. Moreover, it is also believed that language studies involve discipline in learning and manipulating formal structures, and thus may promote the development of the organization of thoughts also required in mathematical reasoning. The similarity
between linguistic competence and mathematics
is especially pronounced when solving math word
problems (Fuchs et al., 2006, 2008; Wang et al.,
2016). Interestingly, attention appears to be crucial
in problem solving (Fuchs et al., 2006; Pasolunghi
et al., 1999). (Crossley et al., 2017) show that language skills are correlated with the performance in
mathematical tests also among the university students. In particular, they pointed out that ability
to use complex syntactic structures and cohesion
devices are linked to better scores in a blended discrete mathematics course. We take inspiration from
all such studies and decide to build our mathematical model based on language models.
**Math word problems. Solving math word prob-**
lems is a significant component of the mathematics
curriculum and is taught very early, thoroughly, and
universally. Such the emphasize is often motivated
by that solving them is among the best predictors
of employability, and is considered as a distinct
area of mathematical competence (Murnane et al.,
2001; Wang et al., 2016). Since solving such problems is unique to human intelligence, math word
problems are also interesting for the AI community. This results in various approaches, more traditional symbolic methods, neural networks, and
neuro-symbolic methods. (Bobrow, 1964; Charniak, 1969; Shi et al., 2015; Ling et al., 2017; Amini
et al., 2019; Parisotto et al., 2016; Wang et al.,
2018; Zou and Lu, 2019) as well as datasets (Ling
et al., 2017; Amini et al., 2019; Huang et al., 2016;
Saxton et al., 2019) An interesting approach is proposed in (Rabe et al., 2020), in which authors
use self-supervised tasks on parsing trees of formal expressions. This approach requires syntax
trees, and hence we would have to use an external
parser. As our goal was to make an end to end
model, we did not experiment with it, but there are
no obstacles against using it in symbiosis with our
methods. (Geva et al., 2020) also proposes selfsupervised training for improving mathematical
abilities in language models. We, however, focused
on a data-driven approach to exclude choice biases
and therefore restricted ourselves from using generated data.
**Rationales. In human communication, we always**
expect there is some rationale behind each decision.
Hence, we set the same expectations to our artificial
agents. Symbolic or semi-symbolic architectures
naturally produce justifications as a sequence of formulas in some formal language (Lane et al., 2005;
Core et al., 2006; Lomas et al., 2012; Johnson;
Liang, 2016; Malinowski and Fritz, 2014). Ideally,
such rationales would also be shared and communicated to us through some language. The latter
approach is especially appealing when applied to
black-box neural networks. For instance, (Hendricks et al., 2016) propose a system that classifies
the input image as well as it produces a textual
explanation on “why this class is suitable for the
given image”.
Systems that produce explanations either in the
form of the language (Ling et al., 2017; Hendricks
et al., 2016), attention (Bahdanau et al., 2014; Mnih
et al., 2014; Gulcehre et al., 2016; Malinowski
et al., 2018; Xu and Saenko, 2016; Yang et al.,
2016), phrase selection (Lei et al., 2016), distillation into programs (Hajipour et al., 2020), or decision trees (Alaniz and Akata, 2019) can potentially
increase the transparency of the black-box neural
networks. However, most of these approaches create rationales posthoc where the justification is conditioned on answers or by querying the network. In
our work, we use rationales to learn a finer representation that can potentially lead to better decisions.
In this sense, our technique is conceptually closer
to methods that derive answers based on the program and use rationales paired with questions to
guide the program induction process (Ling et al.,
2017).
-----
**References**
Stephan Alaniz and Zeynep Akata. 2019. Explainable
observer-classifier for explainable binary decisions.
_arXiv preprint arXiv:1902.01780._
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik
Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. [MathQA: Towards interpretable](https://doi.org/10.18653/v1/N19-1245)
[math word problem solving with operation-based](https://doi.org/10.18653/v1/N19-1245)
[formalisms. In Proceedings of the 2019 Conference](https://doi.org/10.18653/v1/N19-1245)
_of the North American Chapter of the Association_
_for Computational Linguistics: Human Language_
_Technologies, Volume 1 (Long and Short Papers),_
pages 2357–2367, Minneapolis, Minnesota. Association for Computational Linguistics.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly
learning to align and translate. _arXiv preprint_
_arXiv:1409.0473._
Rishabh Bhardwaj, Navonil Majumder, and Soujanya
Poria. 2020. Investigating gender bias in bert. arXiv
_preprint arXiv:2009.05021._
Daniel G Bobrow. 1964. Natural language input for a
computer problem solving system.
Tom B Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, et al. 2020. Language models are few-shot
learners. In Advances in neural information process_ing systems._
Eugene Charniak. 1969. Computer solution of calculus word problems. In Proceedings of the 1st inter_national joint conference on Artificial intelligence,_
pages 303–316.
Kevin Clark, Urvashi Khandelwal, Omer Levy, and
[Christopher D. Manning. 2019. What does BERT](https://doi.org/10.18653/v1/W19-4828)
[look at? an analysis of BERT’s attention. In Pro-](https://doi.org/10.18653/v1/W19-4828)
_ceedings of the 2019 ACL Workshop BlackboxNLP:_
_Analyzing and Interpreting Neural Networks for_
_NLP, pages 276–286, Florence, Italy. Association_
for Computational Linguistics.
Rodney R Cocking, Rodney T Cocking, and Jose P
Mestre. 1988. Linguistic and cultural influences on
_learning mathematics. Psychology Press._
Mark G Core, H Chad Lane, Michael Van Lent, Dave
Gomboc, Steve Solomon, and Milton Rosenberg.
2006. Building explainable artificial intelligence
systems.
Scott Crossley, Tiffany Barnes, Collin Lynch, and
Danielle S McNamara. 2017. Linking language to
math success in an on-line course. International Ed_ucational Data Mining Society._
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2018. Bert: Pre-training of deep
bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2019. [BERT: Pre-training of](https://doi.org/10.18653/v1/N19-1423)
[deep bidirectional transformers for language under-](https://doi.org/10.18653/v1/N19-1423)
[standing.](https://doi.org/10.18653/v1/N19-1423) In Proceedings of the 2019 Conference
_of the North American Chapter of the Association_
_for Computational Linguistics: Human Language_
_Technologies, Volume 1 (Long and Short Papers),_
pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Hubert L Dreyfus, L Hubert, et al. 1992. What com_puters still can’t do: A critique of artificial reason._
MIT press.
John Rupert Firth. 1961. Papers in Linguistics 1934_1951: Repr. Oxford University Press._
Lynn S Fuchs, Douglas Fuchs, Donald L Compton,
Sarah R Powell, Pamela M Seethaler, Andrea M
Capizzi, Christopher Schatschneider, and Jack M
Fletcher. 2006. The cognitive correlates of thirdgrade skill in arithmetic, algorithmic computation,
and arithmetic word problems. Journal of Educa_tional Psychology, 98(1):29._
Lynn S Fuchs, Douglas Fuchs, Karla Stuebing, Jack M
Fletcher, Carol L Hamlett, and Warren Lambert.
2008. Problem solving and computational skill:
Are they shared or distinct aspects of mathematical cognition? _Journal of educational psychology,_
100(1):30.
Mor Geva, Ankit Gupta, and Jonathan Berant. 2020.
[Injecting numerical reasoning skills into language](https://doi.org/10.18653/v1/2020.acl-main.89)
[models. In Proceedings of the 58th Annual Meet-](https://doi.org/10.18653/v1/2020.acl-main.89)
_ing of the Association for Computational Linguis-_
_tics, pages 946–958, Online. Association for Com-_
putational Linguistics.
Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho,
and Yoshua Bengio. 2016. Dynamic neural turing machine with soft and hard addressing schemes.
_arXiv preprint arXiv:1607.00036._
Hossein Hajipour, Mateusz Malinowski, and Mario
Fritz. 2020. Ireen: Iterative reverse-engineering of
black-box functions via neural program synthesis.
_arXiv preprint arXiv:2006.10720._
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian
Sun. 2016. Identity mappings in deep residual networks. In European conference on computer vision
_(ECCV), pages 630–645. Springer._
Lisa Anne Hendricks, Zeynep Akata, Marcus
Rohrbach, Jeff Donahue, Bernt Schiele, and Trevor
Darrell. 2016. Generating visual explanations. In
_European Conference on Computer Vision, pages_
3–19. Springer.
Lisa Anne Hendricks, Kaylee Burns, Kate Saenko,
Trevor Darrell, and Anna Rohrbach. 2018. Women
also snowboard: Overcoming bias in captioning
models. In Proceedings of the European Conference
_on Computer Vision (ECCV), pages 771–787._
-----
Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin,
and Wei-Ying Ma. 2016. [How well do comput-](https://doi.org/10.18653/v1/P16-1084)
[ers solve math word problems? large-scale dataset](https://doi.org/10.18653/v1/P16-1084)
[construction and evaluation. In Proceedings of the](https://doi.org/10.18653/v1/P16-1084)
_54th Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), pages_
887–896, Berlin, Germany. Association for Computational Linguistics.
W Lewis Johnson. Agents that learn to explain themselves.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural
_information processing systems, pages 1097–1105._
Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black,
and Yulia Tsvetkov. 2019. Measuring bias in contextualized word representations. _arXiv preprint_
_arXiv:1906.07337._
Zhenzhong Lan, Mingda Chen, Sebastian Goodman,
Kevin Gimpel, Piyush Sharma, and Radu Soricut.
2019. ALBERT: A lite BERT for self-supervised
learning of language representations. arXiv preprint
_arXiv:1909.11942._
H Chad Lane, Mark G Core, Michael Van Lent, Steve
Solomon, and Dave Gomboc. 2005. Explainable artificial intelligence for training and tutoring. Technical report, UNIVERSITY OF SOUTHERN CALIFORNIA MARINA DEL REY CA INST FOR CREATIVE . . . .
Yann LeCun, Yoshua Bengio, and Geoffrey Hinton.
2015. Deep learning. Nature, 521(7553):436–444.
Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016.
Rationalizing neural predictions. _arXiv preprint_
_arXiv:1606.04155._
Percy Liang. 2016. Learning executable semantic
parsers for natural language understanding. Commu_nications of the ACM, 59(9):68–76._
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word
problems. arXiv preprint arXiv:1705.04146.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis,
Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
Meghann Lomas, Robert Chevalier, Ernest Vincent
Cross, Robert Christopher Garrett, John Hoare, and
Michael Kopack. 2012. Explaining robot actions.
In Proceedings of the seventh annual ACM/IEEE in_ternational conference on Human-Robot Interaction,_
pages 187–188.
Laurens Van der Maaten and Geoffrey Hinton. 2008.
Visualizing data using t-sne. _Journal of machine_
_learning research, 9(11)._
Mateusz Malinowski, Carl Doersch, Adam Santoro,
and Peter Battaglia. 2018. Learning visual question
answering by bootstrapping hard attention. In Pro_ceedings of the European Conference on Computer_
_Vision (ECCV), pages 3–20._
Mateusz Malinowski and Mario Fritz. 2014. A multiworld approach to question answering about realworld scenes based on uncertain input. In Advances
_in neural information processing systems, pages_
1682–1690.
Jose P Mestre. 2013. The role of language comprehension in mathematics and problem solving. In
_Linguistic and cultural influences on learning math-_
_ematics, pages 201–220. Taylor and Francis._
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing
_systems, pages 3111–3119._
Volodymyr Mnih, Nicolas Heess, Alex Graves, et al.
2014. Recurrent models of visual attention. In
_Advances in neural information processing systems,_
pages 2204–2212.
Richard J Murnane, John B Willett, M Jay Braatz, and
Yves Duhaldeborde. 2001. Do different dimensions
of male high school students’ skills predict labor
market success a decade later? evidence from the
nlsy. Economics of Education Review, 20(4):311–
320.
Emilio Parisotto, Abdel-rahman Mohamed, Rishabh
Singh, Lihong Li, Dengyong Zhou, and Pushmeet
Kohli. 2016. Neuro-symbolic program synthesis.
_arXiv preprint arXiv:1611.01855._
Maria Chiara Pasolunghi, Cesare Cornoldi, and
Stephanie De Liberto. 1999. Working memory and
intrusions of irrelevant information in a group of specific poor problem solvers. Memory & Cognition,
27(5):779–790.
Markus N Rabe, Dennis Lee, Kshitij Bansal, and Christian Szegedy. 2020. Mathematical reasoning via
self-supervised skip-tree training. _arXiv preprint_
_arXiv:2006.04757._
Alec Radford, Karthik Narasimhan, Tim Salimans, and
Ilya Sutskever. Improving language understanding
by generative pre-training.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine
Lee, Sharan Narang, Michael Matena, Yanqi Zhou,
Wei Li, and Peter J Liu. 2019. Exploring the limits
of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683.
Victor Sanh, Lysandre Debut, Julien Chaumond, and
Thomas Wolf. 2019. Distilbert, a distilled version
of bert: smaller, faster, cheaper and lighter. arXiv
_preprint arXiv:1910.01108._
-----
David Saxton, Edward Grefenstette, Felix Hill, and
Pushmeet Kohli. 2019. Analysing mathematical reasoning abilities of neural models. _arXiv preprint_
_arXiv:1904.01557._
Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang
Liu, and Yong Rui. 2015. Automatically solving
number word problems by semantic parsing and reasoning. In Proceedings of the 2015 Conference on
_Empirical Methods in Natural Language Processing,_
pages 1132–1142.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob
Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz
Kaiser, and Illia Polosukhin. 2017. Attention is all
you need. In Advances in neural information pro_cessing systems, pages 5998–6008._
Wietse de Vries, Andreas van Cranenburgh, Arianna
Bisazza, Tommaso Caselli, Gertjan van Noord, and
Malvina Nissim. 2019. Bertje: A dutch bert model.
_arXiv preprint arXiv:1912.09582._
Amber Y Wang, Lynn S Fuchs, and Douglas Fuchs.
2016. Cognitive and linguistic predictors of mathematical word problems with and without irrelevant
information. _Learning and individual differences,_
52:79–87.
Lei Wang, Dongxiang Zhang, Lianli Gao, Jingkuan
Song, Long Guo, and Heng Tao Shen. 2018. Mathdqn: Solving arithmetic word problems via deep reinforcement learning. In Thirty-Second AAAI Con_ference on Artificial Intelligence._
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien
Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.
Huijuan Xu and Kate Saenko. 2016. Ask, attend and
answer: Exploring question-guided spatial attention
for visual question answering. In European Confer_ence on Computer Vision, pages 451–466. Springer._
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019.
Xlnet: Generalized autoregressive pretraining for
language understanding. In Advances in neural in_formation processing systems, pages 5753–5763._
Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng,
and Alex Smola. 2016. Stacked attention networks
for image question answering. In Proceedings of
_the IEEE conference on computer vision and pattern_
_recognition, pages 21–29._
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja
Fidler. 2015. Aligning books and movies: Towards
story-like visual explanations by watching movies
and reading books. In Proceedings of the IEEE inter_national conference on computer vision, pages 19–_
27.
[Yanyan Zou and Wei Lu. 2019. Text2Math: End-to-](https://doi.org/10.18653/v1/D19-1536)
[end parsing text into math expressions. In Proceed-](https://doi.org/10.18653/v1/D19-1536)
_ings of the 2019 Conference on Empirical Methods_
_in Natural Language Processing and the 9th Inter-_
_national Joint Conference on Natural Language Pro-_
_cessing (EMNLP-IJCNLP), pages 5327–5337, Hong_
Kong, China. Association for Computational Linguistics.
-----
| [
"Piotr, Piękos",
"Mateusz, Malinowski",
"Henryk, Michalewski"
] | 2021-01-01T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2106.03921 | https://arxiv.org/abs/2106.03921 | null |
MetaMath: Integrating Natural Language and Code for Enhanced Mathematical Reasoning in Large Language Models | Large Language Models (LLMs) are commonly used to generate solutions for mathematical reasoning problems in the following formats: natural language, code, or a combination of both. In this paper, we explore fundamental questions related to solving mathematical reasoning problems using natural language and code with state-of-the-art LLMs, including GPT-4o-mini and LLama-3.1-8b-Turbo. Our findings show that LLMs are better at reasoning in natural language compared to code. Additionally, although natural language and code serve as complementary forms of reasoning, they can affect each other in a negative way in certain scenarios. These insights motivate our development of a new prompting method, MetaMath, which leverages an LLM to dynamically select the most appropriate reasoning form, resulting in improved performance over comparable baselines with GPT-4o-mini. | Findings show that LLMs are better at reasoning in natural language compared to code, and motivate the development of a new prompting method, MetaMath, which leverages an LLM to dynamically select the most appropriate reasoning form, resulting in improved performance over comparable baselines with GPT-4o-mini. | ## MetaMath: Integrating Natural Language and Code for Enhanced Mathematical Reasoning in Large Language Models
**Xuyuan Xiong[1][∗]** **Simeng Han[2][∗]** **Ziyue Zhou[3]** **Arman Cohan[2]**
1Shanghai Jiao Tong University 2Yale University 3Google
**Abstract**
Large Language Models (LLMs) are commonly used to generate solutions for
mathematical reasoning problems in the following formats: natural language, code,
or a combination of both. In this paper, we explore fundamental questions related
to solving mathematical reasoning problems using natural language and code with
state-of-the-art LLMs, including GPT-4o-mini and LLama-3.1-8b-Turbo. Our
findings show that LLMs are better at reasoning in natural language compared to
code. Additionally, although natural language and code serve as complementary
forms of reasoning, they can affect each other in a negative way in certain scenarios.
These insights motivate our development of a new prompting method, MetaMath,
which leverages an LLM to dynamically select the most appropriate reasoning form,
resulting in improved performance over comparable baselines with GPT-4o-mini.
**1** **Introduction**
Mathematical reasoning with LLMs is typically approached through two paradigms. The first
paradigm focuses on designing various prompting strategies to elicit detailed and natural language
reasoning processes. This line of research continues from Chain-of-Thought prompting [1–4]. The
second paradigm leverages LLMs to generate solutions in the form of code, which can then be
executed with external tools to derive the final answer [5–10].
While these two paradigms offer distinct approaches to mathematical reasoning with LLMs, understanding their interplay is crucial for advancing state-of-the-art methodologies. By examining the
strengths and limitations of both natural language reasoning and code-based problem-solving, we can
better assess how these approaches complement or contrast with each other. In this paper, we aim to
explore and address several fundamental questions in mathematical reasoning with state-of-the-art
LLMs, including GPT-4o, GPT-4o-mini, and LLama-3.1-8b: (1) Is natural language necessary for
mathematical reasoning? (2) Does solving problems using code provide a robust framework for
mathematical reasoning? (3) How do different types of mathematical reasoning problems affect
the effectiveness of natural language versus code? (4) Can natural language enhance code-based
reasoning, and can code augment natural language reasoning in solving mathematical problems?
By exploring these questions, we aim to reveal how natural language and code contribute to mathematical reasoning, providing insights that could influence the future of LLM-based approaches to
solving complex problems. We further propose propose MetaMath, a method that leverages an LLM
to select the most appropriate reasoning strategy from four options: using only natural language,
using only code, using code followed by natural language, or using natural language followed by code.
This approach is inspired by SELF-DISCOVER, which employs an LLM to uncover task-intrinsic
reasoning structures for specific problems [4].
_∗Equal contribution._
Preprint. Under review.
-----
**2** **Method**
Figure 1: Illustration of MetaMath. MetaMath enables the model to analyze the problem and choose
the most suitable approach among CoT, PAL, CodeNL, and NLCode. In this example, MetaMath
selects CodeNL, which leads to the right solution (105) while CoT and PAL generate incorrect results.
To investigate how natural language reasoning and coding influence the mathematical reasoning
abilities of LLMs, we consider four fundamental approaches involving NL and code: Chain-ofThought prompting (CoT)[1], Program-aided Language Models (PAL)[8], CodeNL, and NLCode:
- CoT: The LLM tackles the problem by generating a step-by-step breakdown of the reasoning
process in natural language, guiding toward the solution.
- PAL: The LLM is instructed to develop a Python program with annotated comments, ensuring that
the solution can be derived by executing the code.
- CodeNL: The LLM first writes a solution in Python code, then step-by-step analyzes the problem
based on the code and its executed results in natural language to obtain the final answer.
- NLCode: The LLM starts by constructing a thorough reasoning path in natural language, followed
by translating it into Python code, running the code, and determining the solution from the output.
As a baseline, we include the results obtained by applying majority voting among the four approaches
to determine the final answer, with ties broken by random choice. Additionally, we introduce an
oracle baseline, where the result is recorded as correct if any of the four approaches provides the
correct answer. This serves as an upper bound for the performance of the combined methods.
The CodeNL approach emphasizes the importance of analyzing the code’s execution through natural
language reasoning, allowing for corrections and insights even if the initial code has errors. In
contrast, NLCode starts with a natural language outline to clarify the problem before writing the code,
which enhances the likelihood of successful implementation. Together, these new methods leverage
both reasoning forms to improve problem-solving effectiveness.
Given the varying nature of problems, we propose MetaMath. Before using a specific approach to
derive the final answer, MetaMath allows the model to first analyze the problem and decide which
reasoning approach to apply among CoT, PAL, CodeNL and NLCode, before using the selected
approach to write a solution.
The core intuition behind MetaMath is that not all problems benefit equally from the same reasoning
approach[5]. Some problems may require step-by-step natural language reasoning, while others
are better suited for code-based solutions. MetaMath empowers the model to dynamically adapt its
strategy by selecting the most appropriate method, ensuring flexibility and maximizing performance
across different problem types, as illustrated in Figure 1.
-----
**3** **Evaluation**
We conduct experiments with one of the most advanced LLMs, GPT-4o-mini and a competitive
open-sourced LLM, Llama-3.1-8b-Turbo. To ensure a fair evaluation, we use 8 shots for both CoT
and PAL, while CodeNL and NLCode each have 4 shots at both stages. The complete prompt texts
are in the appendix.
**3.1** **Preliminary experiments**
In our preliminary experiments, we evaluate the performance of these four approaches on GSM8K[11],
Algebra (a subset of MATH [12]), and AIME. [2] For each dataset, we sample 100 examples.
Table 1: Test results for GPT-4o-mini: 100 examples
**Method** **GSM8K** **Algebra** **AIME**
COT 92 88 40 (zero-shot)
PAL 87 70 24 (zero-shot)
CodeNL 95 93 41 (zero-shot)
NLCode 91 76 42 (zero-shot)
The results in Table 1 show that GPT-4o-mini exhibits strong reasoning abilities across various
mathematical datasets. The model achieves over 90% accuracy on GSM8k, showcasing its excellent
problem-solving skills on general math tasks. On the more challenging AIME dataset, the CoT
method performs surprisingly well, achieving 40% accuracy in a zero-shot setting, indicating that
natural language reasoning alone can handle complex problems. Given these promising outcomes,
we extend our evaluation to more advanced Level 5 problems from the MATH dataset.
**3.2** **Experiments on hard problems of MATH**
We test our method on an advanced model, GPT-4o-mini, and an open-source model, LLama 3.1-8B
Turbo Instruct, using all Level 5 problems from the MATH dataset, which has 1,342 problems. The
results are in Table 3.2 and Table 3.
Table 2: Results of GPT-4o-mini on level-5 problems in the MATH dataset.
**Method** **Algebra** **Counting & Probability Geometry** **Number Theory** **Intermediate Algebra** **Precalculus Prealgebra** **Average**
COT 75.57 52.85 31.06 62.99 23.57 25.93 69.43 50.60
PAL 47.56 46.43 18.18 56.49 16.07 10.37 57.51 36.56
CodeNL **76.22** 54.48 28.79 68.18 23.93 21.48 69.43 50.90
NLCode 66.12 **58.54** 30.30 **70.78** 21.07 17.78 63.73 47.58
MetaMath 75.90 56.10 **32.58** 64.29 **24.64** 25.19 **69.43** **51.44**
Majority Vote 83.39 60.16 30.30 77.92 26.07 20.00 75.65 55.59
Oracle Baseline 92.51 77.24 49.24 86.36 42.86 38.52 84.97 68.96
Table 3: Results of Llama3.1-8B-Instruct-Turbo on level-5 problems in the MATH dataset
**Method** **Algebra** **Counting & Probability Geometry** **Number Theory** **Intermediate Algebra** **Precalculus Prealgebra** **Average**
COT **34.53** 10.57 **6.82** 18.18 4.64 **5.19** **34.20** 18.28
PAL 15.61 7.32 1.52 14.29 **6.43** 2.22 18.65 11.71
CodeNL 27.69 7.32 3.79 20.13 4.29 2.96 26.42 14.88
NLCode 31.92 **17.07** 4.55 **22.73** 5.71 2.22 33.16 **18.35**
MetaMath 29.64 11.38 **6.82** 19.48 5.34 2.97 30.05 16.69
MajorityVote 37.46 12.20 4.55 27.27 5.71 2.22 39.90 20.69
Oracle Baseline 64.50 27.64 14.39 48.05 15.71 9.63 58.55 37.39
The performance of PAL appears to be the weakest across both models, likely due to the model’s limited ability in generating code or the characteristics of the problems, which may not lend themselves
well to being solved through code. We provide more insignts on this in Section 3.3. In the GPT-4omini model, MetaMath achieves the highest performance, adeptly selecting the most appropriate
approach among the four methods. However, in the Llama3.1-8B model, MetaMath’s performance
is closer to the average of the four approaches, suggesting that more advanced models are better at
selecting the optimal approach based on the problem. Additionally, while PAL performs poorly in
2This dataset includes a comprehensive collection of American Invitational Mathematics Examination (AIME)
[problems from 1983 to 2024, available at https://huggingface.co/datasets/qq8933/AIME_1983_2024.](https://huggingface.co/datasets/qq8933/AIME_1983_2024)
-----
Llama3.1-8B, NLCode shows significantly better results, indicating that outlining a reasoning path
prior to generating code can improve reasoning with code, particularly in models with limited code
capabilities. Across both models, methods that explicitly incorporate natural language reasoning, such
as CoT, NLCode, and CodeNL, outperform those relying solely on programming, emphasizing the
importance of natural language reasoning in mathematical problem-solving. Finally, while majority
voting performs well on problems with higher accuracy, it struggles with more challenging problems,
such as those in the precalculus category.
**3.3** **Analysis**
We conduct further analysis to better understand the positive and negative effects of integrating code
and natural language (NL). Table 4 shows the frequency of errors either corrected or introduced after
two stages of reasoning with GPT-4o-mini. In addition, we manually examine 50 examples generated
by GPT-4o-mini. Below, we summarize our key findings, offering further insights into reasoning with
both NL and code. We also include case studies in the appendix.
Table 4: Comparison of GPT-4o-mini results in one-stage and two-stage correctness. "One-stage
incorrect, two-stage correct" indicates the number of examples answered incorrectly with only NL or
only code and answered correctly with one-stage of NL and one-stage of code. "One-stage correct,
two-stage incorrect" indicates the number of examples answered correctly with only NL or only code
and answered incorrectly with one-stage of NL and one-stage of code.
PAL COT
CodeNL NLCode CodeNL NLCode
One-stage incorrect, two-stage correct 272 243 167 158
One-stage correct, two-stage incorrect 80 102 147 189
Ratio (first row / second row) 3.4 2.4 1.1 0.8
**NL and code complement each other in reasoning.** As shown in Table 4, PAL combined with NL
(CodeNL, NLCode) results in significantly more corrected errors than introduced mistakes (ratios of
3.4 and 2.4). This highlights the positive effect of add NL into code. From manual inspection we
observe that incorporating natural language before generating code provides clearer problem-solving
guidance, improving the code solution accuracy. In addition, adding NL after code helps further
explain and correct the potential logical and syntax errors in code (e.g., transforming the incorrect
answer "Interval(0, 1)" into the correct form "[0,1)"). For purely NL-based reasoning, integrating
code helps handle complex computations (e.g., multiple steps of addition and multiplication, which
are difficult for NL reasoning but straightforward for programs). However, CoT results indicate more
balanced outcomes, where adding code does not always lead to significant improvements.
**NL and code can affect each other in a negative way in reasoning.** When generating code alone
can solve a problem, adding natural language reasoning beforehand may introduce incorrect designs
or flawed intermediate conclusions, leading to errors in generated code (e.g., the second stage of a
program often uses intermediate steps from NL reasoning, resulting in error propagation). As shown
in Table 4, in CoT with NLCode, there are more instances of correct answers turning incorrect (189)
compared to incorrect answers being corrected (158), indicating negative interference. Similarly,
generating code before using CoT can face similar issues. Additionally, writing a code solution after
natural language reasoning can face formatting issues, where the program’s output is technically
correct but is misjudged by automated evaluation due to inconsistencies in the output format.
**4** **Conclusion and Future Work**
In this paper, we explored the integration of natural language and code-based reasoning in mathematical problem-solving using state-of-the-art LLMs. Through a series of experiments on GSM8k, MATH
nd AIME, we demonstrated the complementary nature of natural language and code-based reasoning. We observed that natural language remains a critical component of successful mathematical
problem-solving, even when combined with code. Additionaly, our proposed approach, MetaMath,
dynamically selects the optimal form of reasoning from natural language, code, or a combination of
both based on the problem type. MetaMath outperformed baseline methods by effectively leveraging
the strengths of both natural language and code reasoning.
-----
**References**
[1] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. Chi, Q. Le, and D. Zhou,
“Chain-of-thought prompting elicits reasoning in large language models,” 2023.
[2] D. Zhou, N. Schärli, L. Hou, J. Wei, N. Scales, X. Wang, D. Schuurmans, C. Cui, O. Bousquet,
Q. V. Le, and E. H. Chi, “Least-to-most prompting enables complex reasoning in large language
models,” in The Eleventh International Conference on Learning Representations, 2023.
[3] H. S. Zheng, S. Mishra, X. Chen, H.-T. Cheng, E. H. Chi, Q. V. Le, and D. Zhou, “Take a step
back: Evoking reasoning via abstraction in large language models,” in The Twelfth International
_Conference on Learning Representations, 2024._
[4] P. Zhou, J. Pujara, X. Ren, X. Chen, H.-T. Cheng, Q. V. Le, E. H. Chi, D. Zhou, S. Mishra, and
H. S. Zheng, “Self-discover: Large language models self-compose reasoning structures,” 2024.
[5] J. P. Zhou, C. E. Staats, W. Li, C. Szegedy, K. Q. Weinberger, and Y. Wu, “Don’t trust: Verify –
grounding LLM quantitative reasoning with autoformalization,” in The Twelfth International
_Conference on Learning Representations, 2024._
[6] A. Gu, B. Rozière, H. Leather, A. Solar-Lezama, G. Synnaeve, and S. I. Wang, “Cruxeval: A
benchmark for code reasoning, understanding and execution,” 2024.
[7] Z. Gou, Z. Shao, Y. Gong, yelong shen, Y. Yang, M. Huang, N. Duan, and W. Chen, “ToRA: A
tool-integrated reasoning agent for mathematical problem solving,” in The Twelfth International
_Conference on Learning Representations, 2024._
[8] L. Gao, A. Madaan, S. Zhou, U. Alon, P. Liu, Y. Yang, J. Callan, and G. Neubig, “Pal: Programaided language models,” 2023.
[9] X. Ye, Q. Chen, I. Dillig, and G. Durrett, “SatLM: Satisfiability-aided language models using
declarative prompting,” in Thirty-seventh Conference on Neural Information Processing Systems,
2023.
[10] W. Chen, X. Ma, X. Wang, and W. W. Cohen, “Program of thoughts prompting: Disentangling
computation from reasoning for numerical reasoning tasks,” Transactions on Machine Learning
_Research, 2023._
[11] K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek,
J. Hilton, R. Nakano, C. Hesse, and J. Schulman, “Training verifiers to solve math word
problems,” 2021.
[12] D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Steinhardt,
“Measuring mathematical problem solving with the math dataset,” 2021.
**A** **Appendix / supplemental material**
**A.1** **Examples of One-Stage Incorrect, Two-Stage Correct**
We present two examples where the one-stage approach yields an incorrect result, but the subsequent
stage corrects the error, as demonstrated in Figure 2 and Figure 3.
**A.2** **Prompts and shots**
Below are the prompts and shots used for the four approaches: CoT, PAL, CodeNL, and NLCode.
-----
Figure 2: Example of ’PAL incorrect, CodeNL correct’ — In this example, the first stage (PAL)
produces an incorrect result (0, 1) due to a logical error in handling the domain condition. However,
in the second stage, natural language reasoning is applied, correcting the error and yielding the
correct domain [0, 1). This demonstrates how adding natural language to code(CodeNL) can resolve
logical issues in the solution proces
Figure 3: Example of ’COT incorrect, NLCode correct’ — In this example, the first stage (COT)
produces an incorrect result of 1977 due to errors in handling the arithmetic computation. However,
in the second stage (NLCode), the code correctly performs the necessary calculations, arriving at the
correct final answer of 1599. This demonstrates how using code (NLCode) can help handle more
complex arithmetic computations that natural language reasoning (COT) struggles with.
-----
**A.2.1** **COT**
```
cot_system_prompt = ’’’
You are a helpful assistant who is good at solving math problems. You should follow
the guidelines below:
- Present the final result in LaTeX using a ‘\boxed{}‘ without any units.
- Utilize the ‘pi‘ symbol, and simplify all fractions and square roots without
converting them to decimal values.
’’’
```
```
cot_instruction_prompt = "Please think step by step. "
cot_math_shots= [
’’’Question: Kevin Kangaroo begins hopping on a number line at 0. He wants to get to
1, but he can hop only $\frac{1}{3}$ of the distance. Each hop tires him out
so that he continues to hop $\frac{1}{3}$ of the remaining distance. How far
has he hopped after five hops? Express your answer as a common fraction.
Answer: Let’s think step by step
Kevin hops $1/3$ of the remaining distance with every hop.
His first hop takes $1/3$ closer.
For his second hop, he has $2/3$ left to travel, so he hops forward $(2/3)(1/3)$.
For his third hop, he has $(2/3)^2$ left to travel, so he hops forward $(2/3)^2(1/3)
$.
In general, Kevin hops forward $(2/3)^{k-1}(1/3)$ on his $k$th hop.
We want to find how far he has hopped after five hops.
This is a finite geometric series with first term $1/3$, common ratio $2/3$, and
five terms.
Thus, Kevin has hopped $\frac{\frac{1}{3}\left(1-\left(\frac{2}{3}\right)^5\right)
}{1-\frac{2}{3}} = \boxed{\frac{211}{243}}$.
The answer is \frac{211}{243}’’’,
```
```
’’’Question: What is the area of the region defined by the equation $x^2+y^2 - 7 = 4
y-14x+3$?
Answer: Let’s think step by step
We rewrite the equation as $x^2 + 14x + y^2 - 4y = 10$ and then complete the square,
resulting in $(x+7)^2-49 + (y-2)^2-4=10$,
or $(x+7)^2+(y-2)^2=63$.
This is the equation of a circle with center $(-7, 2)$ and radius $\sqrt{63},$
so the area of this region is $\pi r^2 = \boxed{63\pi}$.
The answer is 63\pi’’’,
’’’Question: If $x^2+y^2=1$, what is the largest possible value of $|x|+|y|$?
Answer: Let’s think step by step
If $(x,y)$ lies on the circle,
so does $(x,-y),$ $(-x,-y),$ and $(-x,-y),$ (which all give the same value of $|x| +
|y|$),
so we can assume that $x \ge 0$ and $y \ge 0.$
Then $|x| + |y| = x + y.$ Squaring, we get
\[(x + y)^2 = x^2 + 2xy + y^2 = 1 + 2xy.\]
Note that $(x - y)^2 \ge 0.$
Expanding, we get $x^2 - 2xy + y^2 \ge 0,$ so $2xy \le x^2 + y^2 = 1.$
Hence,\[1 + 2xy \le 2,\]which means $x + y \le \sqrt{2}.$
Equality occurs when $x = y = \frac{1}{\sqrt{2}},$
so the maximum value of $|x| + |y|$ is $\boxed{\sqrt{2}}.$
The answer is \sqrt{2}’’’,
```
```
’’’Question: If $f(x)=\frac{ax+b}{cx+d}, abcd\not=0$ and $f(f(x))=x$ for all $x$ in
the domain of $f$, what is the value of $a+d$?
Answer: Let’s think step by step
The condition $f(f(x))$ means that $f$ is the inverse of itself,
so its graph is symmetrical about the line $y = x$.
With a rational function of this form, we will have two asymptotes:
```
-----
```
a vertical one at $x=-d/c$ if $cx+d$ does not divide $ax+b$,
and a horizontal one at $y=a/c$,
if we take the limit of $f(x)$ as $x$ goes to $\pm\infty$.
In order for $f$ to be its own inverse, the intersection of the asymptotes must lie
on the line $y=x$
so that it and its asymptotes reflect onto themselves.
This means that $-d/c=a/c$,
and therefore $-d=a$ and $a+d=\boxed{0}$.
The answer is 0’’’,
’’’Question: A math teacher requires Noelle to do one homework assignment for each
of the first five homework points she wants to earn; for each of the next five
homework points, she needs to do two homework assignments; and so on, so that
to earn the $n^{\text{th}}$ homework point, she has to do $n\div5$ (rounded up)
homework assignments. For example, when she has 11 points, it will take $12\
div5=2.4\rightarrow3$ homework assignments to earn her $12^{\text{th}}$ point.
What is the smallest number of homework assignments necessary to earn a total
of 25 homework points?
Answer: Let’s think step by step
Noelle only has to do 1 homework assignment to earn her first point,
and the same is true for each of her first five points.
She must then do 2 homework assignments to earn her sixth point, seventh point, and
so on, up to her tenth point.
Continuing, we see that Noelle must do a total of \[1+1+1+1+1+2+2+2+2+2+\dots
+5+5+5+5+5\] homework assignments to earn 25 points.
This sum may be rewritten as $5(1+2+3+4+5)=5(15)=\boxed{75}$.
The answer is 75’’’,
```
```
’’’Question: The quadratic equation $x^2+mx+n=0$ has roots that are twice those of
$x^2+px+m=0,$ and none of $m,$ $n,$ and $p$ is zero. What is the value of $n/p?
$
Answer: Let’s think step by step
Let $r_1$ and $r_2$ be the roots of $x^2+px+m=0.$
Since the roots of $x^2+mx+n=0$ are $2r_1$ and $2r_2,$ we have the following
relationships: \[
m=r_1 r_2,\quad n=4r_1 r_2,\quad p=-(r_1+r_2), \quad\text{and}\quad
m=-2(r_1+r_2).
\] So \[
n = 4m, \quad p = \frac{1}{2}m,
\quad\text{and}\quad
\frac{n}{p}=\frac{4m}{\frac{1}{2}m}=\boxed{8}.
\]
Alternatively, the roots of \[
\left(\frac{x}{2}\right)^2 + p\left(\frac{x}{2}\right) + m = 0
\] are twice those of $x^2 + px + m = 0.$
Since the first equation is equivalent to $x^2 + 2px + 4m = 0,$
we have \[m = 2p \quad\text{and}\quad n = 4m, \quad\text{so}\quad \frac{n}{p} = \
boxed{8}.\]
The answer is 8’’’,
```
```
’’’Question: Expand $(2z^2 + 5z - 6)(3z^3 - 2z + 1)$.
Answer: Let’s think step by step
$$\begin{array}{crrrrrrr}
& & & 3z^3 & & -2z & + 1 & \\
\times & & & & 2z^2 & +5z & -6 \\
\cline{1-7}\rule{0pt}{0.17in}
& & & -18z^3 & & +12z & -6 & \\
& & +15z^4 & & -10z^2 & +5z & & \\
+ & 6z^5 & & -4z^3 & +2z^2 & & & \\
\cline{1-7}\rule{0pt}{0.17in}
& 6z^5 & +15z^4 & -22z^3 & - 8z^2 &+17z & -6 &
\end{array}$$
The answer is 6z^5+15z^4-22z^3-8z^2+17z-6’’’,
```
-----
```
’’’Question: Find the mean of all solutions for $x$ when $x^3 + 3x^2 - 10x = 0$.
Answer: Let’s think step by step
First, we factor the equation as $x(x^2 +3x - 10) = 0$.
So, one solution is $x=0$ and the other two solutions are the solutions to $x^2 + 3x
-10=0$.
We could either factor the quadratic, or note that the sum of the solutions to this
quadratic is $-(3/1)=-3$,
so the mean of the three solutions to the original equation is $-3/3=\boxed{-1}$.
The answer is -1
’’’,
]
```
**A.2.2** **PAL**
```
pal_system_prompt = ’’’
You are a helpful assistant who is good at sloving math problems and writing code.
You should should follow the guidelines below:
- Utilize the ‘pi‘ symbol and ‘Rational‘‘ from Sympy for $\pi$ and fractions, and
simplify all fractions and square roots without converting them to decimal
values
- You should only write code blocks and the function name should be ‘solution‘ and
the returned value should be the final answer.
’’’
pal_instruction_prompt = "Let’s use python to solve the math problem. "
```
```
pal_math_shots= [
’’’Question: Find the coefficient of $x^3$ when $3(x^2 - x^3+x) +3(x +2x^3- 3x^2 + 3
x^5+x^3) -5(1+x-4x^3 - x^2)$ is simplifie.
‘‘‘python
from sympy import symbols, simplify
def solution():
"""Find the coefficient of $x^3$ when $3(x^2 - x^3+x) +3(x +2x^3- 3x^2 + 3x^5+x
^3) -5(1+x-4x^3 - x^2)$ is simplified."""
x = symbols(’x’)
expr = 3*(x**2 - x**3 + x) + 3*(x + 2*x**3 - 3*x**2 + 3*x**5 + x**3) - 5*(1 + x 4*x**3 - x**2)
simplified_expr = simplify(expr)
x3_coefficient = simplified_expr.as_coefficients_dict()[x**3]
result = x3_coefficient
return result
‘‘‘’’’,
```
```
’’’Question: The surface area of a sphere with radius $r$ is $4\pi r^2$. Including
the area of its circular base, what is the total surface area of a hemisphere
with radius 6 cm? Express your answer in terms of $\pi$.
‘‘‘python
import math
def solution():
"""The surface area of a sphere with radius $r$ is $4\pi r^2$. Including the
area of its circular base, what is the total surface area of a hemisphere
with radius 6 cm? Express your answer in terms of $\pi$"""
radius = 6
```
-----
```
# Surface area of the hemisphere
hemisphere_area = 2 * math.pi * radius**2
# Area of the circular base
base_area = math.pi * radius**2
# Total surface area
total_surface_area = hemisphere_area + base_area
# Formatting the result in LaTeX
result = r’{}\\pi’.format(total_surface_area / math.pi)
return result
‘‘‘’’’,
```
```
’’’Question: Monica tosses a fair 6-sided die. If the roll is a prime number, then
she wins that amount of dollars (so that, for example, if she rolls 3, then she
wins 3 dollars). If the roll is composite, she wins nothing. Otherwise, she
loses 3 dollars. What is the expected value of her winnings on one die toss?
Express your answer as a dollar value to the nearest cent.
‘‘‘python
def solution():
"""Monica tosses a fair 6-sided die. If the roll is a prime number, then she
wins that amount of dollars (so that, for example, if she rolls 3, then she
wins 3 dollars). If the roll is composite, she wins nothing. Otherwise, she
loses 3 dollars. What is the expected value of her winnings on one die toss?
Express your answer as a dollar value to the nearest cent."""
# Probabilities of each outcome
prime_prob = 1 / 6
composite_prob = 1 / 3
otherwise_prob = 1 / 6
# Expected value of each outcome
prime_expected_value = (2 * prime_prob) + (3 * prime_prob) + (5 * prime_prob)
composite_expected_value = 0 * composite_prob
otherwise_expected_value = -3 * otherwise_prob
```
```
# Total expected value
total_expected_value = prime_expected_value + composite_expected_value +
otherwise_expected_value
# Dollar value to the nearest cent
result = "{:.2f}".format(total_expected_value)
return result
‘‘‘’’’,
```
```
’’’Question: Given $\mathbf{a} = \begin{pmatrix} -7 \\ 0 \\ 1 \end{pmatrix}$ and $\
mathbf{b} = \begin{pmatrix} 4 \\ 2 \\ -1 \end{pmatrix},$ find $\mathbf{a} - 3 \
mathbf{b}.$
Solution:
‘‘‘python
import numpy as np
def solution()
"""Given $\mathbf{a} = \begin{pmatrix} -7 \\ 0 \\ 1 \end{pmatrix}$ and $\mathbf{
b} = \begin{pmatrix} 4 \\ 2 \\ -1 \end{pmatrix},$ find $\mathbf{a} - 3 \
mathbf{b}.$"""
a = np.array([-7, 0, 1])
b = np.array([4, 2, -1])
result = a - 3 * b
result = r’\begin{{pmatrix}} {} \\ {} \\ {} \end{{pmatrix}}’.format(result[0],
result[1], result[2])
```
-----
```
return result
‘‘‘’’’,
’’’Question: The endpoints of a diameter of circle $M$ are $(-1,-4)$ and $(-7,6)$.
What are the coordinates of the center of circle $M$? Express your answer as an
ordered pair.
‘‘‘python
def solution():
"""The endpoints of a diameter of circle $M$ are $(-1,-4)$ and $(-7,6)$. Find
the coordinates of the center of circle $M$."""
x1, y1 = -1, -4
x2, y2 = -7, 6
# Midpoint formula
center_x = (x1 + x2) / 2
center_y = (y1 + y2) / 2
```
```
# Result as an ordered pair
result = (center_x, center_y)
return result
‘‘‘’’’,
’’’Question: Find the remainder when $2x^6-x^4+4x^2-7$ is divided by $x^2+4x+3$.
‘‘‘python
from sympy import symbols, div
def solution():
"""Find the remainder when $2x^6-x^4+4x^2-7$ is divided by $x^2+4x+3$."""
x = symbols(’x’)
numerator = 2*x**6 - x**4 + 4*x**2 - 7
denominator = x**2 + 4*x + 3
quotient, remainder = div(numerator, denominator)
return remainder
’’’,
```
```
’’’Question: Find $x$ so that the vectors $\begin{pmatrix} 2 \\ 5 \end{pmatrix}$ and
$\begin{pmatrix} x \\ -3 \end{pmatrix}$ are orthogonal.
‘‘‘python
from sympy import symbols, Eq, solve
def solution():
"""Find x so that the vectors (2, 5) and (x, -3) are orthogonal."""
x = symbols(’x’)
# Dot product of the vectors should be zero for orthogonality
equation = Eq(2*x + 5*(-3), 0)
result = solve(equation, x)
return result[0]
‘‘‘’’’,
’’’Question: If $x$ is positive and $x^2 = 729$, what is the value of $x$?
‘‘‘python
from sympy import symbols, Eq, solve
```
```
def solution():
"""If $x$ is positive and $x^2 = 729$, what is the value of $x$?"""
x = symbols(’x’)
equation = Eq(x**2, 729)
solutions = solve(equation, x)
# Since x is positive, we select the positive solution
result = [sol for sol in solutions if sol > 0][0]
return result
‘‘‘’’’,
```
-----
**A.2.3** **Second stage of CodeNL**
```
codenl_instructio_prompt = "Please think step by step about the question based on
the provided code(it may be wrong) and the executed output. "
codenl_math_shots = [
’’’Question: How many distinct positive factors does 32 have?
Code: ‘‘‘python
def solution():
"""How many distinct positive factors does 32 have?"""
number = 32
factors = set()
for i in range(1, int(number**0.5) + 1):
if number % i == 0:
factors.add(i)
factors.add(number // i)
```
```
result = len(factors)
return result
‘‘‘
Output: 6
Answer: To determine the number of distinct positive factors of 32, we can follow
these steps:
1. Prime Factorization:
- First, we find the prime factorization of 32.
- \( 32 = 2^5 \)
```
```
2. Number of Factors:
- The formula to find the number of distinct positive factors of a number based
on its prime factorization is to add 1 to each of the exponents in the
factorization and then multiply the results.
- For \( 32 = 2^5 \), the exponent is 5.
- Adding 1 to the exponent: \( 5 + 1 = 6 \).
3. Conclusion:
- Therefore, the number of distinct positive factors of 32 is 6.
```
```
Thus, the final result is:
\[\boxed{6}\]’’’,
’’’Question: In parallelogram $ABCD$, angle $B$ measures $110^\circ$. What is the
number of degrees in the measure of angle $C$?
Code: ‘‘‘python
def solution():
"""In parallelogram $ABCD$, angle $B$ measures $110^\circ$. What is the number
of degrees in the measure of angle $C$?"""
angle_B = 110
# In a parallelogram, opposite angles are equal and adjacent angles are
supplementary
angle_C = 180 - angle_B
```
```
return angle_C
‘‘‘
Output: 70
Answer: In a parallelogram, opposite angles are equal and adjacent angles are
supplementary. Given that angle \( B \) measures \( 110^\circ \), we need to
find the measure of angle \( C \).
```
-----
```
Step-by-step reasoning:
1. In a parallelogram, adjacent angles are supplementary. This means that the sum of
the measures of angle \( B \) and angle \( C \) is \( 180^\circ \).
2. Therefore, we can write the equation: \( \text{angle } B + \text{angle } C =
180^\circ \).
3. Substituting the given measure of angle \( B \) into the equation: \( 110^\circ +
\text{angle } C = 180^\circ \).
4. Solving for angle \( C \): \( \text{angle } C = 180^\circ - 110^\circ = 70^\circ
\).
Thus, the measure of angle \( C \) is \( \boxed{70} \).’’’,
’’’Question: In triangle $ABC$, we have that $E$ and $F$ are midpoints of sides $\
overline{AC}$ and $\overline{AB}$, respectively. The area of $\triangle ABC$ is
24 square units. How many square units are in the area of $\triangle CEF$?
Code: ‘‘‘python
from sympy import Rational
def solution():
"""Calculate the area of triangle CEF given that E and F are midpoints of sides
AC and AB respectively, and the area of triangle ABC is 24 square units."""
# Area of triangle ABC
area_ABC = 24
```
```
# Triangle CEF is a median triangle, which is 1/4 the area of triangle ABC
area_CEF = Rational(1, 4) * area_ABC
return area_CEF
‘‘‘
Output: 6
Answer: The problem involves finding the area of triangle $CEF$ where $E$ and $F$
are midpoints of sides $\overline{AC}$ and $\overline{AB}$ respectively in
triangle $ABC$. Given that the area of triangle $ABC$ is 24 square units, we
need to determine the area of triangle $CEF$.
```
```
Since $E$ and $F$ are midpoints of their respective sides, line segment $\overline{
EF}$ is parallel to side $\overline{BC}$ and is half its length. This makes
triangle $CEF$ a median triangle of triangle $ABC$. A median triangle is one fourth the area of the original triangle because it is formed by drawing lines
from each vertex of the original triangle to the midpoints of the opposite
sides.
Thus, the area of triangle $CEF$ is calculated as:
\[\text{Area of } \triangle CEF = \frac{1}{4} \times \text{Area of } \triangle ABC =
\frac{1}{4} \times 24\]
Simplifying this, we get:
\[\text{Area of } \triangle CEF = 6\]
Therefore, the area of triangle $CEF$ is \(\boxed{6}\) square units.’’’,
```
```
’’’Question: What is the greatest two-digit multiple of 13?
Code: ‘‘‘python
def solution():
"""Find the greatest two-digit multiple of 13."""
# Start from the greatest two-digit number and go downwards
for number in range(99, 12, -1):
if number % 13 == 0:
return number
‘‘‘
Output: 91
Answer: To find the greatest two-digit multiple of 13, we start by considering the
largest two-digit number, which is 99. We need to find the largest number less
than or equal to 99 that is divisible by 13.
```
-----
```
We perform the division of 99 by 13 to find the largest integer quotient:
\[ \frac{99}{13} \approx 7.615 \]
The integer part of this division is 7, indicating that the largest multiple of 13
within the two-digit range is \( 13 \times 7 \).
Calculating this product:
\[ 13 \times 7 = 91 \]
Thus, the greatest two-digit multiple of 13 is 91. The Python code provided confirms
this by iterating from 99 downwards and checking divisibility by 13, stopping
at the first number that satisfies this condition, which is 91.
Therefore, the final answer is:
\[ \boxed{91} \]’’’,
]
```
**A.2.4** **Second stage of NLCode**
```
nlcode_instruction_prompt = "Write a Python code that translates a natural language
(NL) reasoning path into executable code to answer a given question. The output
of the generated code should be the final answer to the question. "
nlcode_math_shots = [
’’’Question: The area of the floor in a square room is 225 square feet. The
homeowners plan to cover the floor with rows of 6-inch by 6-inch tiles. How
many tiles will be in each row?
Reasoning Path: Let’s think step by step:
```
```
The area of the floor is given as 225 square feet. Since the room is square, each
side of the room is the square root of the area. Thus, each side of the room is
\(\sqrt{225} = 15\) feet.
Each tile is 6 inches by 6 inches, which is equivalent to \(0.5\) feet by \(0.5\)
feet since there are 12 inches in a foot.
```
```
To find out how many tiles fit along one side of the room, we divide the length of
the room by the length of one tile:
\[
\frac{15 \text{ feet}}{0.5 \text{ feet/tile}} = 30 \text{ tiles}
\]
Therefore, there will be \(\boxed{30}\) tiles in each row.
Code: ‘‘‘python
from sympy import sqrt
```
```
def solution():
# Area of the floor in square feet
area = 225
# Since the room is square, calculate the side length of the room
side_length = sqrt(area)
# Each tile’s side length in feet (6 inches = 0.5 feet)
tile_length = 0.5
# Calculate the number of tiles in each row
tiles_per_row = side_length / tile_length
return tiles_per_row
‘‘‘’’’,
```
-----
```
’’’Question: When the set of natural numbers is listed in ascending order, what is
the smallest prime number that occurs after a sequence of five consecutive
positive integers all of which are nonprime?
Reasoning Path: Let’s think step by step:
To find the smallest prime number that occurs after a sequence of five consecutive
nonprime positive integers, we need to identify such a sequence first.
1. Start by checking small numbers. The first few nonprime numbers are 4, 6, 8, 9,
and 10. However, these do not form a sequence of five consecutive nonprime
numbers.
2. Continue checking further:
- 8, 9, 10, 12, 14 are nonprime, but they are not consecutive.
- 24, 25, 26, 27, 28 are nonprime and consecutive.
```
```
3. Check the number following 28:
- 29 is the next number, and it is a prime number.
Thus, the smallest prime number that occurs after a sequence of five consecutive
nonprime positive integers (24, 25, 26, 27, 28) is 29.
The answer is \(\boxed{29}\).
Code: ‘‘‘python
from sympy import isprime
```
```
def solution():
# Start checking from a reasonable number
n = 1
while True:
# Check if the next five numbers are all nonprime
if all(not isprime(n + i) for i in range(5)):
# Check the number right after the sequence of five nonprimes
if isprime(n + 5):
return n + 5
n += 1
‘‘‘’’’,
’’’Question: What is the slope of the line that is tangent to a circle at point
(5,5) if the center of the circle is (3,2)? Express your answer as a common
fraction.
Reasoning Path: Let’s think step by step:
```
```
The slope of the line connecting the center of the circle at point \((3,2)\) to the
point on the circle \((5,5)\) is given by the formula for the slope between two
points:
\[m = \frac{y_2 - y_1}{x_2 - x_1} = \frac{5 - 2}{5 - 3} = \frac{3}{2}\]
The line that is tangent to the circle at the point \((5,5)\) will be perpendicular
to the line connecting the center of the circle to this point. The slope of a
line perpendicular to another line with slope \(m\) is the negative reciprocal
of \(m\). Therefore, the slope of the tangent line is:
\[m_{\text{tangent}} = -\frac{1}{\frac{3}{2}} = -\frac{2}{3}\]
Thus, the slope of the line that is tangent to the circle at point \((5,5)\) is \(\
boxed{-\frac{2}{3}}\).
Code: ‘‘‘python
from sympy import Rational
def solution():
# Slope of the line connecting the center of the circle to the point (5,5)
slope_radius = Rational(5 - 2, 5 - 3)
# Slope of the tangent line, which is the negative reciprocal of the slope of
the radius
```
-----
```
slope_tangent = -1 / slope_radius
return slope_tangent
‘‘‘’’’,
’’’Question: What is the greatest integer less than 100 for which the greatest
common factor of that integer and 18 is 3?
Reasoning Path: Let’s think step by step
We are looking for the greatest integer less than 100 for which the greatest common
factor (GCF) with 18 is 3.
First, note that 18 can be factored into prime factors as \(18 = 2 \times 3^2\).
For the GCF of a number \(n\) and 18 to be 3, \(n\) must:
1. Be divisible by 3 (to include the factor of 3).
2. Not be divisible by 2 or 9 (to avoid increasing the GCF beyond 3).
```
```
We need to find the largest integer less than 100 that meets these criteria. We look
for numbers that are multiples of 3 but not multiples of 2 or 9.
The largest multiple of 3 under 100 is 99. We check if it is divisible by 2 or 9:
- 99 is not divisible by 2 (since it is odd).
- 99 is divisible by 9 (since \(9 + 9 = 18\), and 18 is divisible by 9).
Since 99 does not work (as it is divisible by 9), we check the next largest multiple
of 3, which is 96.
- 96 is divisible by 2 (even number), so it does not work.
Next, we check 93:
- 93 is not divisible by 2 (odd number).
- 93 is not divisible by 9 (since \(9 + 3 = 12\), and 12 is not divisible by 9).
Thus, 93 meets the criteria of being divisible by 3 but not by 2 or 9. Therefore,
the greatest integer less than 100 for which the GCF with 18 is 3 is \(\boxed
{93}\).
Code ‘‘‘python
def solution():
from math import gcd
# Start from the largest number less than 100 and check downwards
for n in range(99, 0, -1):
if gcd(n, 18) == 3:
return n
```
```
# The function will return the greatest integer less than 100 for which the GCD with
18 is 3
‘‘‘’’’,
```
-----
| [
"Xuyuan, Xiong",
"Simeng, Han",
"Ziyue, Zhou",
"Arman, Cohan"
] | 2024-09-28T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.19381 | https://arxiv.org/abs/2409.19381 | https://www.semanticscholar.org/paper/ff319491cb5a0bd96f2bcbb1aa58c472d728ed91 |
Mirror: Multiple-perspective Self-Reflection Method for Knowledge-rich Reasoning | While Large language models (LLMs) have the capability to iteratively reflect on their own outputs, recent studies have observed their struggles with knowledge-rich problems without access to external resources. In addition to the inefficiency of LLMs in self-assessment, we also observe that LLMs struggle to revisit their predictions despite receiving explicit negative feedback. Therefore, We propose Mirror, a Multiple-perspective self-reflection method for knowledge-rich reasoning, to avoid getting stuck at a particular reflection iteration. Mirror enables LLMs to reflect from multiple-perspective clues, achieved through a heuristic interaction between a Navigator and a Reasoner. It guides agents toward diverse yet plausibly reliable reasoning trajectory without access to ground truth by encouraging (1) diversity of directions generated by Navigator and (2) agreement among strategically induced perturbations in responses generated by the Reasoner. The experiments on five reasoning datasets demonstrate that Mirror’s superiority over several contemporary self-reflection approaches. Additionally, the ablation study studies clearly indicate that our strategies alleviate the aforementioned challenges. | null | # Mirror: A Multiple-perspective Self-Reflection Method for Knowledge-rich Reasoning
**Hanqi Yan[1][∗]** **Qinglin Zhu[1][∗]** **Xinyu Wang[1][,][2]** **Lin Gui[1]** **Yulan He[1][,][2][,][3]**
1King’s College London 2University of Warwick 3The Alan Turing Institute
_{hanqi.yan, qinglin.1.zhu, lin.1.gui, yulan.he}@kcl.ac.uk_
[email protected]
**Abstract**
While Large language models (LLMs) have the
capability to iteratively reflect on their own outputs, recent studies have observed their struggles with knowledge-rich problems without
access to external resources. In addition to
the inefficiency of LLMs in self-assessment,
we also observe that LLMs struggle to revisit
their predictions despite receiving explicit negative feedback. Therefore, We propose Mirror,
a Multiple-perspective self-reflection method
for knowledge-rich reasoning, to avoid getting stuck at a particular reflection iteration.
Mirror enables LLMs to reflect from multipleperspective clues, achieved through a heuristic
interaction between a Navigator and a Reasoner.
It guides agents toward diverse yet plausibly
reliable reasoning trajectory without access to
ground truth by encouraging (1) diversity of directions generated by Navigator and (2) agreement among strategically induced perturbations
in responses generated by the Reasoner. The
experiments on five reasoning datasets demonstrate that Mirror’s superiority over several
contemporary self-reflection approaches. Additionally, the ablation study studies clearly indicate that our strategies alleviate the aforementioned challenges. The code is released at
[https://github.com/hanqi-qi/Mirror.git.](https://github.com/hanqi-qi/Mirror.git)
**1** **Introduction**
Large Language Models (LLMs) have become an
important and flexible building block in a variety
of tasks. They can be further improved by iterative
correction in many tasks (Madaan et al., 2023; Gou
et al., 2023a; Shinn et al., 2023; Pan et al., 2023),
such as code generation, arithmetic problem solving and reasoning. During iterative refinement, the
critic module, which assesses the current response
and generates valuable feedback, is crucial to drive
performance improvement.
_∗_ Equal Contribution.
Some research shows that LLMs have selfassessment abilities (Manakul et al., 2023; Madaan
et al., 2023). For example, LLMs can reject its
own prediction and generate a response ‘I don’t
_know’ when they are not confident about their pre-_
dictions (Kadavath et al., 2022). Empirical observations demonstrate LLMs’ competence in various
reasoning tasks, leading to the utilization of advanced LLMs to evaluate the predictions made by
other models (Hao et al., 2023; Zhou et al., 2023;
Liu et al., 2023b). However, recent studies suggest
that relying directly on LLMs’ judgements is not
trustworthy and can lead to failures in knowledgerich iterative reasoning (Huang et al., 2023). To
guide LLMs through a reasoning loop, existing
solutions either incorporate external resources to
verify LLMs’ outputs (Peng et al., 2023; Yao et al.,
2023b), or train a critic module on labelled assessment datasets (Gou et al., 2023a; Zelikman et al.,
2022). Furthermore, self-consistency is considered
a robust unsupervised method to identify confident
and reliable LLM outputs.
In self-refinement, the quality of generated feedback also plays a pivotal role. The Self-Refine
method (Madaan et al., 2023) introduced taskspecific metrics for multifaceted feedback generation, requiring LLMs to evaluate their outputs
across various aspects, such as fluency, engage_ment, and relevance for the dialogue generation_
task. This process often heavily relies on human
expertise, and generating effective feedback for reasoning tasks can be even more difficult as it is obscure to define the essential attributes for different
problems. Providing overly general feedback fails
to guide LLMs toward generating better outputs in
subsequent iterations.
The inefficiency of self-assessment and feedback
generation capabilities largely hinders the performance of iterative refinements. On one hand, as
depicted in Figure 1, it is evident that in the absence of a ground truth reference, LLMs fail to
7086
-----
|Col1|Col2|Col3|Col4|Col5|Col6|autostop ground-truth neverstop|
|---|---|---|---|---|---|---|
||||||||
||||||||
||||||||
||||||||
||||||||
||||||||
Figure 1: Without ground truth for validating LLM-generated outputs, LLMs struggle to consistently improve their own outputs
GPT3.5-stem GPT3.5-social GPT3.5-humanity GPT3.5-other
80
80 75 autostop
75 70 ground-truth
70 75 70 neverstop
65 70 65 65
Acc 60 60
65
55 55 60
50 60
50
45 55 45 55
1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5
iteration iteration iteration iteration
due to their incapability of self-assessment. Autostop and Neverstop provide different generic feedback without leaking the
correctness of the current response.
consistently improve their predictions, indicating
their limitations in self-assessment[1]. On the other
hand, even when ground truth labels are available,
LLMs often fail to adhere to instructions for revising their incorrect predictions, as shown in Figure
2. Each bar represents the number (averaged over 5
iterations) of revised (blue) and unchanged samples
(grey) among the incorrectly predicted samples. It
is undesirable to see that a large number of incorrect predictions stay unchanged, suggesting that
LLMs can become trapped in a reasoning loop.
To address the aforementioned limitations and
generate high-quality feedback without relying
on human experts, we propose a novel framework, refer to as Mirror (Multiple-perspective self**reflection method for knowledge-rich reasoning).**
Mirror enables LLMs to reflect from multipleperspective clues and this is achieved in a heuristic
manner between a Navigator and a Reasoner, resembling a typical human tutoring process. For
example, when tackling a complex scientific problem, the Navigator generates clues of key elements
and rationales behind posing the question, which
are crucial in focusing the response on the essential
aspects. This information, tailored to the question,
serve as instructions for prompting the Reasoner
to adjust their predictions accordingly and avoid
getting stuck at a particular stage.
To initiate the unsupervised self-reflection properly and avoid being trapped in one reasoning loop,
Mirror integrates an intrinsically motivated planning algorithm to search for the optimal reasoning trajectory. Inspired by the findings in §3.1
and §3.2, we propose to reward both the diversity
of generated directions and the agreement among
strategically induced perturbations in responses.
Notably differing from existing tree-based planning methods for reasoning (Hao et al., 2023; Zhou
1Details of Autostop and Neverstop are in Appendix A.1.
et al., 2023), Mirror avoids deteriorated searching space by encouraging diverse generative outcomes from LLMs at each reflection step, and enhances the self-assessment ability by considering
the agreements among multiple-perspective perturbations strategically induced in responses. We
evaluate the performance of Mirror on two categories of reasoning tasks: MMLU (Hendrycks
et al., 2021), a knowledge-rich question-answering
dataset, and FEVER (Thorne et al., 2018), a factchecking dataset. Mirror achieves a significant
average improvement of over 15% compared to
recent popular unsupervised self-refinement methods. The empirical observations demonstrate that
the proposed diversity-based reward and answer
assessment strategy serve as reliable sources for
performance enhancement.
**2** **Related Work**
**Self-Reflection LLMs.** Extensive research (Honovich et al., 2022; Xie et al., 2023b) has been conducted to enhance LLMs through the concept of
self-reflection, where LLMs learn from automatically generated feedback to understand and reflect
on their own outputs. This feedback can stem
from various sources: the LLM itself (Madaan
et al., 2023; Shinn et al., 2023), a separately trained
critic module (Gou et al., 2023b; Peng et al., 2023)
or external sources (Yao et al., 2023b), such as
Wikipedia or an Internet Browser. Gou et al.
(2023b); Peng et al. (2023) argued that evaluators
trained on task-oriented feedback offer superior performance. For example, Refiner (Paul et al., 2023)
took context and hypotheses as input to generate
templates-based feedback for various error types.
Recent studies (Peng et al., 2023; Shinn et al., 2023;
Hao et al., 2023) have fully utilized the in-context
learning capability of LLMs, prompting them to
generate high-quality feedback based on their pre
7087
-----
vious generation or potential templates. Madaan
et al. (2023) proposed multiple task-oriented metrics and prompted LLMs to evaluate their own
outputs based on these criteria. Similarly, Peng
et al. (2023); Glaese et al. (2022) adopted external tools to predict multi-facet human preference
scores. Our solution aligns with this trend by aiming to provide informative and customized instructions tailored to the specific task and query. Moreover, it seeks to achieve this without relying on
human intervention or external tools, thereby rendering self-refinement more feasible in practice.
**Reasoning models augmented with tree search.**
Recently, tree-based reasoning has attracted significant attention, such as Tree-of-Thought (ToT) (Yao
et al., 2023a), Grace (Khalifa et al., 2023), and
SelfEval-Decoding (Xie et al., 2023b). At each
reasoning step, ToT adopts breadth-first search
and depth-first search, while the latter two methods select the top-k scoring candidates during the
decoding process. Moreover, Monte-Carlo Tree
Search (MCTS) is one of the popular search algorithms (Swiechowski et al., 2023), which strikes
a balance between exploitation and exploration.
Some existing approaches establish a reinforcement learning framework to maximize reward
through learning optimal actions/states (Du et al.,
2023a; Parthasarathy et al., 2023; Zhu et al., 2023).
Other studies fully utilize the capability of LLMs
for interaction and feedback generation. For instance, RAP (Hao et al., 2023) leveraged step-wise
rewards from interactions with the world model to
decompose and solve the problem step-by -step,
rather than a iterative manner. LATS (Zhou et al.,
2023) was the first work in leveraging MCTS for
self-reflection. However, their feedback contains
information from comparisons with ground truth,
which is not applicable in our case. Instead, our
approach, Mirror has no access to gold labels, and
we incorporate a novel diversity reward to avoid
the inefficient search in the reflection iteration.
**3** **Lost in the Reasoning Loop**
Given the observed challenges in enhancing LLMs’
self-improvement without ground truth labels, particularly in knowledge-rich reasoning tasks, our
initial experiment aims to address these challenges
by breaking them down into two sub-questions.
_Q1: To what extent can LLMs assess the correct-_
_ness of a statement? This investigation involves en-_
hancing their capabilities through supervised train
ing. The primary goal is to discern if there are
viable solutions to enhance the verification ability
of LLMs on knowledge-rich statements.
_Q2: How well can LLMs generate high-quality_
_feedback to guide their own subsequent response_
_update? It is especially challenging when the feed-_
back generation models are not trained on highquality data, relying solely on the in-context learning capability of LLMs.
**3.1** **LLMs in Knowledge Grounding**
We experiment with the multiple-choice dataset,
MMLU (Hendrycks et al., 2021), covering 57 subjects across STEM, Humanity, Social and other
domains. To evaluate the ability of LLMs in assessing the knowledge-rich statements, we construct
the positive and negative statements by substituting the question with the correct choice and a randomly selected choice from the other three incorrect choices, respectively. Table 1 presents the
assessment accuracy of assessing. There are three
categories of methods: in-context learning, finetuned on statements, and classification based on
intermediate activations from LLMs.
As illustrated in the first group results in Table A1, an increase in accuracy is observed as the
size of Llama-2-13B-chat increases. Notably, GPT3.5 with 175B parameters consistently achieves
the best results across the three domains, although
the improvement is not directly proportional to the
parameter size. We then apply advanced prompting techniques, i.e., UniLangCheck (Zhang et al.,
2023) on the best-performing method, GPT-3.5.
Our analysis reveals that the improvements are
predominantly driven by self-consistency, while
UniLangCheck does not consistently contribute to
improvement in grounding. For UniLangCheck,
we firstly prompt LLMs to generate a fact about the
key elements in a question before making the final
assessment. It can be partially explained by the
accumulation error, i.e., the inaccurate facts generated by LLMs before reaching the final conclusion
can affect the outcome. We also calculate the correlation between accuracy and self-consistency, represented by the probability of generating a single
answer through multiple repeated prompting. The
average correlation R[2] for questions in the MMLU
datasets across three LLMs is about 0.85, indicating that self-consistency can be relied upon as a
proxy for assessment [2].
2Experiment details are shown in Appendix A.2, self
7088
-----
settings in (Shinn et al., 2023) to incorporate the assessment results in the feedback: "Observation:
The answer is incorrect." is inserted after
presenting the question and previous attempt, and
the LLMs are required to generate refection and
response to this question again. From the results
in Figure 2, it is consistently observed across different model scales that LLMs struggle to update
their predictions despite receiving explicit negative
feedback. The average percentage of successfully
updated examples for GPT-3.5, Llama, and Vicuna
are 65.6%, 51.79% and 74.09%, respectively, indicating an ample room for improvement.
Motivated by the following two observations: (1)
LLMs are particularly susceptible to context influence at the beginning or near the end (Liu et al.,
2023a), (2) In-Context Learning is highly sensitive to stylistic and emotional words in demonstrations (Min et al., 2022; Li et al., 2023), we develop three prompting strategies for feedback generation. An incorrectly predicted example with different prompting strategies is shown in Figure A2.
The results in Table A2 and Table A3 suggest that
based on correct question assessment, enhancing
the exploration capability within a diverse answer
space could lead to higher accuracy in answering
knowledge-rich questions.
The above empirical findings regarding the two
research questions provide valuable insights for
our proposed model, named Mirror. Distinguishing itself from existing self-improvement methods,
Mirror makes two significant contributions: (1) it
features a Navigator module for generating multiple question-adaptive directions, with diversity
constraints implemented to prevent invalid reflections. (2) it relies on the consistency of the inherent
multiple perspectives for boosted self-assessment.
**4** **The Framework of Mirror**
In this section, we introduce our unsupervised selfreflection framework, Mirror, depicted in Figure 3.
The reward R consists of Diversity and Consistency terms. Diversity is applied to prevent reflection from becoming stuck and to facilitate intraconsistency involved in the stop criteria for selfassessment. The Consistency reward also influences direction generation.
**4.1** **Problem Setup**
Given a question, the Reasoner is to arrive at the
final answer through interacting with a Navigator.
We consider a Markov Decision Process (MDP)
Model STEM Social Humanity
Llama-2-13B-chat 0.541 0.540 0.525
Llama2-70B-chat 0.569 0.593 0.587
Vicuna-v1.5-13B 0.539 0.580 0.558
GPT-3.5(175B) 0.666 0.725 0.733
:+UniLangCheck 0.621 0.729 0.713
:+Self-Consistency 0.712 0.730 0.752
TRUE[⋆] 0.545 0.532 0.559
ActivationRegress[⋆] 0.531 0.529 0.553
ContrastSearch 0.606 0.645 0.617
Table 1: The (binary classification) accuracy in evaluating
the factual correctness of statements in the MNLU dataset.
Methods denoted with ⋆ can access to fact labels.
We also evaluate the performance of some su_pervised methods (denoted with ⋆_ in Table 1).
TRUE (Honovich et al., 2022) involves fine-tuning
a T5 (Raffel et al., 2020) model on a collection
of natural language inference (NLI) datasets for
fact-checking. We further fine-tune its classifier
head on our training set. ActivationRegress (Marks
and Tegmark, 2023) trains classifiers using activations extracted from Llama2-13B 12-layer encodings as inputs. ContrastSearch (Burns et al., 2023)
is trained using contrastive and consistency loss
while having no access to the factual labels. This
is achieved by constructing data pairs that include
both a positive-labeled and negative-labeled statements, irrespective of the true factual labels. It is
surprising that both TRUE and ActivationRegress
are inferior than the unsupervised ContrastSearch.
**3.2** **LLMs in Feedback Generation**
Evaluating the quality of generated feedback poses
a significant challenge, particularly when such feedback is utilized across diverse tasks (Madaan et al.,
2023). Drawing inspiration from the pivotal role of
feedback in the self-improvement, we propose to
leverage the performance of LLMs in subsequent
iterations for evaluation. Specifically, LLMs can
access to ground truth, enabling them to evaluate
the correctness of their current responses. This
information is then integrated into feedback generation. Consequently, we assess the quality of
feedback by examining the percentage of examples
that are incorrectly answered, along with the percentage of instances where responses in the next
round are revised for the same incorrectly answered
examples. This comparison sheds light on the effectiveness of instructions in guiding LLMs to rectify
their erroneous responses. Firstly, we follow the
consistency evaluation results are shown in Table A1.
7089
-----
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Changed UnChanged|
|---|---|---|---|---|---|---|---|
|||||||||
|||||||||
|||||||||
Figure 2: The average number (across all iterations) of changed and unchanged samples among those predicted incorrectly.
GPT3.5-(Un)Changed samples Llama-(Un)Changed samples Vicuna-(Un)Changed samples
30 45 45 ChangedUnChanged
20 30 30
10 15 15
Number of samples Number of samples Number of samples
0 0 0
STEM SocialHumanity Other STEM SocialHumanity Other STEM SocialHumanity Other
Large percentage of unchanged samples indicate the limited capability for efficient reflection.
on several complete generated trajectories, our
framework utilizes multiple-perspective consistency as stop criteria at each step t. (2) Novel
_Reward Mechanism: a novel diversity mechanism_
is designed to avoid the null space encountered in
traditional random search settings. Our method is
detailed in Algorithm 1 in the Appendix.
**4.2** **Multiple-perspective Assessment**
Motivated by the empirical results in § 3.1 regarding knowledge-grounding, we propose to employ an advanced consistency-based method as
a surrogate for factual correctness when external
resources are unavailable. This method considers both intra- and inter-consistency of the generated responses. Specially, we employ the Navigator for K question-oriented direction generation, a ∼ _π(at|st−1, at−1, q, p0, R)._ These K
directions are intended to provide diverse perspectives for problem-solving, with the agreement among guided responses representing interconsistency. Meanwhile, the confidence in selfconsistency (Wang et al., 2023) serves as the measure of intra-consistency.
To integrate consistency considerations into the
assessment per reflection iteration, we use intra**consistency to determine whether the Reasoner**
should accept its initial response. If the intraconsistency for our initial answer surpasses a
threshold T0, we consider it as the final result;
otherwise, we integrate the inter-consistency as
an indicator for stopping criteria in subsequent
reflection iterations. We derive the final answer
when the inter-consistency exceeds T0 or when
reach the predefined maximum iterations, selecting the final answer with the highest consistency
score [3]. This inter-consistency also becomes part
of reward Rconsistency for the current state and contribute to the direction generation. Besides, the
3The threshold T0 for different models are datasets are set
according to the validation performance, details in C.1.
previous attempt,question, Navigator direction 1direction 2
LLMs direction 3
reward ...
No
ReasonerLLMs Criteria?Stop Yes
Action space State space
Figure 3: An overview of Mirror. It facilitates diverse
question-specific directions (represented by different colored
dots in the action space) to encourage extensive reflection by
the Reasoner. The stopping criterion is based on the consistency among states from multiple perspectives, which also
contributes to the direction generation.
defined by a tuple (S, A, P, π, γ, R), where the
_st_ and at denote the state and action, re_∈S_ _∈A_
spectively in the t-th reflection iteration. In the context of multiple-choice question, at is the direction
generated by the Navigator, and st is the response
generated by the Reasoner, including the answer
to the question and the rationale behind. R(s, a)
is the reward function. Therefore, we have state
transition distribution P(st|st−1, at−1) and action
generation distribution π(at|st−1, at−1, q, p0, R),
where p0 is the prompt for the Navigator to generate direction at. It is nontrivial to obtain frequent
rewards that incentivize self-refinement progress
without access to the ground truth. Therefore, we
turn to an intrinsically motivated planning algorithm, i.e., Monte-Carlo Tree Search (MCTS) (Kocsis and Szepesvári, 2006; Browne et al., 2012;
Swiechowski et al., 2023) to efficiently explore
the environment augmenting rewards with auxiliary objectives (Mu et al., 2022; Du et al., 2023b).
Comparing to existing work search-based reasoning methods based on frozen LLMs (Hao et al.,
2023; Zhou et al., 2023), we highlight two notable contributions addressing the vulnerabilities of
LLMs as discussed in §3: (1) Step-wise Multiple_perspective self-assessment: unlike approaches_
that rely on ground truth or majority-voting based
7090
-----
intra-consistency value is transformed into verbal
form, becoming part of the prompt p0 given for
Navigator to generate direction. This is inspired
by our observation that higher intra-consistency
implies a higher likelihood of correctness, so we
offer this additional information to assist in feedback generation. This is similar to (Xu et al., 2023),
where ICL performance benefits from accessing to
the prediction of a supervised fine-tuned smaller
model. Our assessment method is different from
majority vote, which treats every node with the
same weight when aggregating the final result. The
comparison results are shown in Table 4.
**4.3** **Diverse and Valid Search Space**
Obtaining a meaningful and diverse action space
is challenging due to the absence of a dense and
well-defined reward function in the planning algorithm. One of the predominant reasons is that
different action sequences can lead to similar outcomes (Baranes and Oudeyer, 2013). In our context, considering the limitation of LLMs in following instructions, the Reasoner may ignore the
differences among multiple directions and generate identical responses merely based on the question. Therefore, some intrinsically motivated reinforcement learning algorithms choose to explore
outcomes rather than actions (Oudeyer and Kaplan, 2007; Ladosz et al., 2022). MCTS addresses the limitation of sparse rewards by visiting
novel states or transitions through random exploration (Du et al., 2023b). The most popular algorithm in the MCTS family, Upper Confidence
Bound for Trees (UCT) (Kocsis and Szepesvári,
2006) is treated as the choice of child node, UCT =
2 In N (n)
_Rj + 2Cp_ _N_ (nj ) [, where][ R][j][ is the average re-]
ward for child nodeq _j, while the second term en-_
courages sampling from nodes whose children are
less visited. N (n) is the number of times current
node (parent) has been visited in previous iterations,
and N (nj) is times of the child node has been visited. The Cp > 0 is a constant to control balance
between exploitation (first term) and exploration
(second term). In our case, we specifically promote diversity between the parent and child node,
i.e., the response in previous attempt st 1 and the
_−_
current attempt st. For multiple-choice questions
in MMLU, we assess if the predicted choices are
the same across two reflection iterations. The discrepancy in responses indicates the alleviation of
null direction space and the avoidance of being
stuck, especially given the relatively low consistency with the response from the previous iteration.
The relationship between task performance and the
diversity of responses in the generated tree, as illustrated in Figure 5, confirms our motivation for
diversity enhancement.
However, maximizing diversity of outcomes may
not always be enough, as less relevant states might
be collected (Du et al., 2023b). Therefore, to ensure valid search space, we filter out states whose
associated responses are not in the correct form,
such as failing to provide a final choice, or refusing
to answer questions for moral considerations. For
search efficiency, our proposed stopping criteria
is to terminate the search process once its interconsistency surpasses a threshold, thereby avoiding
unnecessary search and expansion costs.
**5** **Can Mirror Steer LLMs in Iterative**
**Improvements?**
We evaluate our proposed Mirror on MMLU
and FEVER (Thorne et al., 2018). FEVER
is a fact-checking dataset featuring three labels
for knowledge-rich statements, i.e., supports,
refutes and not enough info.[4]
**5.1** **Experimental Setup and Results**
**Comparison methods.** The evaluation models
are GPT-3.5, Llama2-13B-Chat (Touvron et al.,
2023), and Vicuna-v1.5-7B (Zheng et al., 2023) [5].
We equip the LLMs with different reasoning mechanisms, including Chain-of-Thought (CoT) (Wei
et al., 2022), Self-consistency (Wang et al., 2023),
Self-Correction (Huang et al., 2023) and Reflexion(w.GT) (Shinn et al., 2023). We implement CoT
by prompting LLMs to first generate step-by-step
thoughts and then generate answers based on those
thoughts. We repeat this process for five times, resulting in Self-Consistency[(5)]. The remaining two
methods are self-improvement techniques where
LLMs are first prompted to generate reflections,
followed by updating their current response accordingly if applicable. Self-Correction relies on
LLM’s internal knowledge for answer assessment,
while Reflexion compares the current answer with
the ground truth for evaluation.
4We also include the evaluation results on GSM8K (Cobbe
et al., 2021) in Table A4.
5We denote them as Llama13B and Vicuna13B for simplicity. Experiment details can be found in Appendix C.
7091
-----
**Question** **Initial Answer** **Self-improvement** Self-Correction
**Question: A score of 50 on the** **Thought: A score of 50 on the Beck Depression Inventory-** Please check your answer and find possible errors. If everything
Beck Depression Inventory-IIsuggests:? II suggests: A. no or minimal depression. Scores on theBDI-II range from 0 to 63, with higher scores indicating is good, stick to your answer, otherwise, update to a new answer.
more severe depressive symptoms. A score of 50 falls **Thought: Upon reviewing the previous answer, I realized that I made an**
A. no or minimal depression. within the range considered to indicate no or minimal error in my initial response. I apologize for any confusion. Let me correct
B. mild depression. depression. So, the correct choice is: A. the answer: A score of 50 on the Beck Depression Inventory-II suggests: B.
C. moderate depression. mild depression.
D. severe depression. **Answer: A** **Answer: B**
**Initial Answer** Mirror
Encourage the student to consider the specific indicators that would lead to each level of
depression. Encourage them to think about the impact of a score of 50 on the Beck Depression
Inventory-II and how it matches up with the levels of depression described in the choices given.
**Thought: I think a score of 50 on the Beck Depression Inventory-II suggests moderate depression.** ... ... ...
It's important to note that this score indicates a significant level of depression that may require Answer: Answer: Answer:
professional attention. So, the correct choice is C. moderate depression. D C C
**Answer: D**
**Self-improvement** **Self-improvement**
It seems there may be a discrepancy between your rationale and your selection. Please consider It seems there may be a discrepancy between your
revising your answer to align with the correct interpretation of the score. Consider revising your rationale and your selection. Please consider revising
your answer to align with the correct interpretation of
answer accordingly. the score.
**Thought: Upon reviewing the scoring criteria for the Beck Depression Inventory-II, I have revised my**
understanding. A score of 50 on the inventory actually suggests severe depression, indicating a highlevel of depression that may require immediate professional attention. Therefore, the correct answeris D. severe depression. Thank you for directing me to ensure a precise understanding of the scoringcriteria. Answer:...D Answer:...D **Thought:score of 50 on the Beck Depression Inventory-II doessuggest a significant level of depression. Therefore, thecorrect answer is D. My previous response was partly correct. A**
**Answer: D** **Answer: D**
Figure 4: Reasoning process of self-correction and Mirror. Text in red are generated directions. Our diversity is characterised
in (i) generating directions tailored to questions (ii) encouraging exploration in multiple plausible reasoning paths. The final
answer is derived through an agreement among multiple trajectories.
oriented direction, the Reasoner better identify errors in the initial response, such as, the error in
_score direction and inconsistency between ratio-_
_nales and selection. The consistency-based criteria_
built in the tree further improves the fact assessment. During backpropagation, node s[(1)]1 receives
a higher reward, leading to the leftmost reasoning
path (details of direction a[(1)]1 _[, a]2[(1)][, a]2[(2)]_ and corresponding responses are shown in the text frame).
By contrast, Self-correction seems to engage in
groundless inference by switching answers without
explicit clues. Even comparing Mirror with Relexion(w.GT), we find comparable results for GPT-3.5
on the STEM dataset, for Llama on all datasets
except for STEM and for Vicuna on STEM and
Humanity. From the perspective of the model, the
average improvements over baselines for GPT-3.5
are particularly prominent, partly explained by its
better ability to adhere to provided directions. This
can also explain the marginal improvements even
ground truth are accessible to the smaller models.
**5.2** **Analysis**
We discuss the effects of key strategies in Mirror.
**Question-Oriented Direction.** Motivated by the
findings in § 3.2 that LLMs struggle to effectively reflect on themselves with generic feedback,
Mirror is equipped with a Navigator for generating question-oriented directions. To study the
effects of these directions (results in Table 3),
we adopt our Navigator for direction generation
|Methods|STEM Social Hum Others FEVE|
|---|---|
|Relexion(w.GT)(5) GPT-3.5 (CoT) Self-Consistency(5) Self-Correct(2) Mirror|0.79 0.84 0.78 0.73 0.7 0.63 0.65 0.53 0.60 0.5 0.67 0.68 0.58 0.64 0.6 0.63 0.62 0.55 0.54 0.5 0.76 0.77 0.71 0.67 0.6|
|Relexion(w.GT)(5) Llama13B(CoT) Self-Consistency(5) Self-Correct(2) Mirror|0.64 0.63 0.60 0.64 0.5 0.42 0.58 0.42 0.53 0.4 0.45 0.60 0.49 0.57 0.4 0.42 0.52 0.53 0.45 0.3 0.57 0.62 0.58 0.62 0.5|
|Relexion(w.GT)(5) Vicuna13B (CoT) Self-Consistency(5) Self-Correct(2) Mirror|0.62 0.68 0.59 0.69 0.5 0.46 0.57 0.43 0.57 0.3 0.50 0.62 0.53 0.60 0.4 0.43 0.49 0.42 0.49 0.3 0.59 0.64 0.56 0.65 0.4|
Table 2: Performances of different reasoning methods, with
an upper-bound represented by results obtained when ground
truth is provided, denoted as Relexion(w.GT). The superscripts
denote the number of reasoning iterations.
**Results.** The results are shown in Table 2. By
comparing CoT with Self-Correction, we observe
the performance degradation after two rounds of
self-Correction across almost all datasets and models. This observation aligns with our findings in
§3.1 and in (Huang et al., 2023). Equipped with
self-consistency[(5)], significant performance improvements are evident across all settings. Mirror
considers additional inter-consistency, achieves the
most notable improvements, with a relative increase of more than 15% across the three models. Figure 4 illustrates the reasoning process of
Self-correction and Mirror. Both methods fail to
answer correctly in the first trial. With question
7092
-----
for CoT settings, in which the direction (GenerativeDirect) is introduced before the LLM generates its thought on the previous trial. We
then replace all adaptive directions with a single generic direction (FixedDirect) which reads:
Read the question and choices carefully
and diagnose the previous response by
locating the incorrect clues and update
the response if applicable. Comparing with
CoT, the inclusion of GenerativeDirect boosts the
performance across all settings with significant improvements. Conversely, FixedDirect sometimes
results in performance degradation for Llama13B.
The impact of FixedDirect is similar to advanced
instruction intended to provide general direction for
the task, whereas GenerativeDirect offers questionspecific advice to accurately summarize clues for
solution. Referencing to the example in Figure A3,
Mirror (bottom) firstly prompts the Navigator for
direction generation (highlighted in red), which
captures the key elements, such as “the character_istics of a connected and undirected graph”. The_
Reasoner then follows this direction to explain the
key concepts of this graph, laying a solid foundation for reaching the correct conclusion. Without
such direction, the Reasoner may overlook or misinterpret knowledge about this graph, leading to
errors in the conclusion.
|Models Methods|MMLU|FEVER|
|---|---|---|
|GPT-3.5: CoT + FixedDirect + GenerativeDirect|0.68 0.73 0.78|0.58 0.60 0.64|
|Llama13B: CoT + FixedDirect + GenerativeDirect|0.46 0.43 0.49|0.40 0.39 0.45|
|Vicuna13B: CoT + FixedDirect + GenerativeDirect|0.48 0.51 0.55|0.42 0.43 0.45|
Table 3: Performances of using generic fixed direction
and generative direction on top of CoT.
0.9
0.8
0.7
0.6
0.5
0.4
0.60
0.55
0.50
0.45
|vicuna-MMLU|Col2|0.550|vicuna-FEVER|
|---|---|---|---|
|||0.550 0.8 0.525 0.7 0.500 ans_presence 0.475 acc 0.6 0.450 0.425 0.5 0.400||
|4 6 8 Num|acc ans_presence||4 6 8 Num|
Figure 5: The Accuracy (acc) and the percentage of samples
where the ground truth is included in the tree (ans-presence),
with different sizes of search space (Num). Results for GPT3.5 and Llama13B are in Figure A4a and A4b.
We provide a case to show the effects of diversity
in directions (highlighted difference are in bold).
The quantitative results of semantic similarity between multiple directions are shown in Table A5.
**Case study**
**Question: The price elasticity of demand for a prod-**
uct is greater if
**- Answer without Direction:**
_C. the number of substitute products is limited._
**- Answer with Direction:**
**Direction 1: Pay close attention to nuances among**
options and prioritize selections that resonate with
**economic principles and the factors influencing con-**
sumer behavior.
**Answer 1: Given the advice, the most suitable**
choice would be: A. the proportion of the good of
_the consumer’s budget is high. This aligns with_
**economic principles of demand elasticity.**
**Direction 2: Prioritize options that align with factors**
impacting consumer responsiveness to price changes,
such as budget allocation and substitute availabil**ity, while considering the time frame for consumer**
reactions.
**Answer 2: Given the advice, the most suitable**
choice would be: A. the proportion of the good
_of the consumer’s budget is high. This choice_
suggests that when a significant portion of a con**sumer’s budget is allocated to a particular prod-**
uct, they are likely to be more sensitive to price
changes.
**Performance of Answer Assessment Criteria.**
As discussed in Section 3.1, LLMs struggle to assess the correctness of knowledge-rich statements,
a capability that can be consistently enhanced
through self-consistency. We further reform the
majority-voting assessment process by considering the inter-consistency built in the hierarchical
decision-making tree. To study the effects of our
answer assessment criteria described in §4.2, we
compare them with two other voting methods, i.e.,
self-consistency and majority vote within our generated tree-trajectories. We average the results from
**Diversity of the Search Space.** We demonstrate
the impact of multiple-perspective directions, aiming at guiding the Reasoner out of reflection
traps. To this end, we compute the percentage
of generated trajectories containing the correct answers (ans_presence) and the according task performances (acc) across various action space sizes,
i.e., the number of generated directions. The results in Figure 5 indicate that lager search space
enhanced by the Rdiversity can increase the probability of reaching the correct answer.
7093
-----
Table 2 for CoT and Self-consistency[(5)] across four
domains in MMLU and denote them as CoT[(1)]
and CoT[(5)], respectively. We also compare with
CoT[(15)] because our generated trees have at most 3
layers and 5 branches in each layer, so there are 15
candidate nodes. For Majority[(][tree][)], we select the
final answer through majority-voting among all intermediate nodes in our generated tree-trajectories.
The results of different final answer assessments
are presented in Table 4. The performance improvements are observed on self-consistency[(15)]
over self-consistency[(5)], although the improvements percentage isn’t pronounced as that seen
in the comparison between self-consistency[(5)] over
CoT. We observe a performance increase after applying majority-voting in the CoT settings, while
this simple strategy doesn’t yield improvements
in the generated tree. This is because undesirable
responses may be generated during the node expanding phase, and majority voting treats all nodes
equally. In contrast, our reward-based search tends
to focus on reliable nodes with higher confidence
in each reflection step, thereby avoiding search cost
on less desirable nodes.
**Models** **Ans. Assessment** **MMLU** **FEVER**
GPT-3.5: CoT[(1)] 0.60 0.58
CoT[(5)] 0.64 0.61
CoT[(15)] 0.67 0.62
+Majority[(][tree][)] 0.69 0.59
+Reward Search[(][tree][)] 0.73 0.64
Llama13B: CoT[(1)] 0.49 0.40
CoT[(5)] 0.53 0.46
CoT[(15)] 0.55 0.48
+Majority[(][tree][)] 0.58 0.50
+Reward Search[(][tree][)] 0.60 0.54
Vicuna13B: CoT[(1)] 0.51 0.39
CoT[(5)] 0.56 0.43
CoT[(15)] 0.58 0.43
+Majority[(][tree][)] 0.59 0.43
+Reward Search[(][tree][)] 0.60 0.46
Table 4: Results of different answer assessment methods.
**Search Efficiency Analysis.** Mirror is a tree-like
searching algorithm that benefits from iterative reasoning process, which enhances the reasoning ability at the computation cost. To mitigate the search
cost, we (i) incorporate the Monte-Carlo tree search
for its selective search and expansion. (ii) introduce
early-stop criteria to encourage a shallow and avoid
multiple playouts. We summarise the tree-depth
in Table 5. The results below show that our resulting tree, with a maximum depth of 3, is heavily
unbalanced and shallow.
**STEM** **Social** **Hum** **Others** **FEVER**
Depth=2 0.17 0.12 0.28 0.13 0.04
Depth=3 0.00 0.09 0.20 0.04 0.00
Table 5: Depth of the search tree based on GPT-3.5 across
different datasets.
**6** **Conclusion**
In this paper, we present a multiple-perspective
reflection method, called Mirror, for knowledgeenriched reasoning. To tackle the limitations of
LLMs in fact assessment and the generation of
high-quality feedback, Mirror is equipped with
a directional Navigator, enabling the Reasoner to
identify multiple key clues in problem-solving. Furthermore, the consistency among responses generated under different directions enhances the validity of answer assessment, particularly when ground
truth is not accessible. Experiments conducted
demonstrate Mirror’s superiority over several contemporary CoT-based and self-consistency-based
reasoning approaches without access to ground
truth. Moreover, the ablation study results clearly
show that our strategies effectively alleviate the
aforementioned challenges.
**Limitations**
In this study, our primary focus is to identify optimal reasoning trajectories based on generated outputs and frozen states. However, the ability to
assess facts and generate reflections may be limited
by the unaltered decoding process and pre-training.
To fully leverage the potential of LLMs in complex
reasoning, it is beneficial to explore two directions:
(1) Strategically guiding fine-grained generation,
such as token-level generation during the decoding
phase within the expansive generation space. (2)
Fine-tuning LLMs through access to limited taskoriented data to enhance their responses to more
complex problems.
**Acknowledgements**
This work was supported in part by the UK Engineering and Physical Sciences Research Council
(EPSRC) through a Turing AI Fellowship (grant
no. EP/V020579/1, EP/V020579/2) and a New
Horizons grant (grant no. EP/X019063/1).
7094
-----
**References**
[Adrien Baranes and Pierre-Yves Oudeyer. 2013. Active](https://doi.org/10.1016/J.ROBOT.2012.05.008)
[learning of inverse models with intrinsically moti-](https://doi.org/10.1016/J.ROBOT.2012.05.008)
[vated goal exploration in robots. Robotics Auton.](https://doi.org/10.1016/J.ROBOT.2012.05.008)
_Syst., 61(1):49–73._
Cameron B. Browne, Edward Powley, Daniel Whitehouse, Simon M. Lucas, Peter I. Cowling, Philipp
Rohlfshagen, Stephen Tavener, Diego Perez, Spyri[don Samothrakis, and Simon Colton. 2012. A survey](https://doi.org/10.1109/TCIAIG.2012.2186810)
[of monte carlo tree search methods. IEEE Transac-](https://doi.org/10.1109/TCIAIG.2012.2186810)
_tions on Computational Intelligence and AI in Games,_
4(1):1–43.
Collin Burns, Haotian Ye, Dan Klein, and Jacob Stein[hardt. 2023. Discovering latent knowledge in lan-](https://openreview.net/pdf?id=ETKGuby0hcs)
[guage models without supervision. In The Eleventh](https://openreview.net/pdf?id=ETKGuby0hcs)
_International Conference on Learning Representa-_
_tions, ICLR 2023, Kigali, Rwanda, May 1-5, 2023._
OpenReview.net.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
[2021. Training verifiers to solve math word prob-](http://arxiv.org/abs/2110.14168)
[lems. CoRR, abs/2110.14168.](http://arxiv.org/abs/2110.14168)
Yilun Du, Mengjiao Yang, Bo Dai, Hanjun Dai, Ofir
Nachum, Joshua B. Tenenbaum, Dale Schuurmans,
and Pieter Abbeel. 2023a. [Learning universal](https://doi.org/10.48550/ARXIV.2302.00111)
[policies via text-guided video generation.](https://doi.org/10.48550/ARXIV.2302.00111) _CoRR,_
abs/2302.00111.
Yuqing Du, Olivia Watkins, Zihan Wang, Cédric Colas,
Trevor Darrell, Pieter Abbeel, Abhishek Gupta, and
[Jacob Andreas. 2023b. Guiding pretraining in rein-](https://proceedings.mlr.press/v202/du23f.html)
[forcement learning with large language models. In In-](https://proceedings.mlr.press/v202/du23f.html)
_ternational Conference on Machine Learning, ICML_
_2023, 23-29 July 2023, Honolulu, Hawaii, USA, vol-_
ume 202 of Proceedings of Machine Learning Re_search, pages 8657–8677. PMLR._
Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony
Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent
Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, and
[Kelvin Guu. 2023a. RARR: Researching and revis-](https://doi.org/10.18653/v1/2023.acl-long.910)
[ing what language models say, using language mod-](https://doi.org/10.18653/v1/2023.acl-long.910)
[els. In Proceedings of the 61st Annual Meeting of the](https://doi.org/10.18653/v1/2023.acl-long.910)
_Association for Computational Linguistics (Volume 1:_
_Long Papers), pages 16477–16508, Toronto, Canada._
Association for Computational Linguistics.
Tianyu Gao, Howard Yen, Jiatong Yu, and Danqi Chen.
2023b. Enabling large language models to generate
text with citations. In Empirical Methods in Natural
_Language Processing (EMNLP)._
Amelia Glaese, Nathan McAleese, Maja Trkebacz,
John Aslanides, Vlad Firoiu, Timo Ewalds, Maribeth
Rauh, Laura Weidinger, Martin Chadwick, Phoebe
Thacker, Lucy Campbell-Gillingham, Jonathan Uesato, Po-Sen Huang, Ramona Comanescu, Fan Yang,
A. See, Sumanth Dathathri, Rory Greig, Charlie
Chen, Doug Fritz, Jaume Sanchez Elias, Richard
Green, Sovna Mokr’a, Nicholas Fernando, Boxi
Wu, Rachel Foley, Susannah Young, Iason Gabriel,
William S. Isaac, John F. J. Mellor, Demis Hassabis, Koray Kavukcuoglu, Lisa Anne Hendricks, and
[Geoffrey Irving. 2022. Improving alignment of dia-](https://api.semanticscholar.org/CorpusID:252596089)
[logue agents via targeted human judgements. ArXiv,](https://api.semanticscholar.org/CorpusID:252596089)
abs/2209.14375.
Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen,
Yujiu Yang, Nan Duan, and Weizhu Chen. 2023a.
[CRITIC: large language models can self-correct with](https://doi.org/10.48550/ARXIV.2305.11738)
[tool-interactive critiquing. CoRR, abs/2305.11738.](https://doi.org/10.48550/ARXIV.2305.11738)
Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen,
Yujiu Yang, Nan Duan, and Weizhu Chen. 2023b.
[Critic: Large language models can self-correct with](https://api.semanticscholar.org/CorpusID:258823123)
[tool-interactive critiquing. ArXiv, abs/2305.11738.](https://api.semanticscholar.org/CorpusID:258823123)
Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong,
Zhen Wang, Daisy Zhe Wang, and Zhiting Hu. 2023.
[Reasoning with language model is planning with](https://doi.org/10.48550/ARXIV.2305.14992)
[world model. CoRR, abs/2305.14992.](https://doi.org/10.48550/ARXIV.2305.14992)
Dan Hendrycks, Collin Burns, Steven Basart, Andy
Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language
understanding. Proceedings of the International Con_ference on Learning Representations (ICLR)._
Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai
Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas
Scialom, Idan Szpektor, Avinatan Hassidim, and
[Yossi Matias. 2022. TRUE: Re-evaluating factual](https://doi.org/10.18653/v1/2022.naacl-main.287)
[consistency evaluation. In Proceedings of the 2022](https://doi.org/10.18653/v1/2022.naacl-main.287)
_Conference of the North American Chapter of the_
_Association for Computational Linguistics: Human_
_Language Technologies, pages 3905–3920, Seattle,_
United States. Association for Computational Linguistics.
Jie Huang, Xinyun Chen, Swaroop Mishra,
Huaixiu Steven Zheng, Adams Wei Yu, Xiny[ing Song, and Denny Zhou. 2023. Large language](https://doi.org/10.48550/ARXIV.2310.01798)
[models cannot self-correct reasoning yet.](https://doi.org/10.48550/ARXIV.2310.01798) _CoRR,_
abs/2310.01798.
Saurav Kadavath, Tom Conerly, Amanda Askell, Tom
Henighan, Dawn Drain, Ethan Perez, Nicholas
Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli
Tran-Johnson, Scott Johnston, Sheer El Showk, Andy
Jones, Nelson Elhage, Tristan Hume, Anna Chen,
Yuntao Bai, Sam Bowman, Stanislav Fort, Deep
Ganguli, Danny Hernandez, Josh Jacobson, Jackson Kernion, Shauna Kravec, Liane Lovitt, Kamal Ndousse, Catherine Olsson, Sam Ringer, Dario
Amodei, Tom Brown, Jack Clark, Nicholas Joseph,
Ben Mann, Sam McCandlish, Chris Olah, and Jared
[Kaplan. 2022. Language models (mostly) know what](https://doi.org/10.48550/ARXIV.2207.05221)
[they know. CoRR, abs/2207.05221.](https://doi.org/10.48550/ARXIV.2207.05221)
Muhammad Khalifa, Lajanugen Logeswaran, Moon[tae Lee, Ho Hin Lee, and Lu Wang. 2023. Grace:](https://api.semanticscholar.org/CorpusID:258865395)
[Discriminator-guided chain-of-thought reasoning. In](https://api.semanticscholar.org/CorpusID:258865395)
_Conference on Empirical Methods in Natural Lan-_
_guage Processing._
7095
-----
Levente Kocsis and Csaba Szepesvári. 2006. Bandit
based monte-carlo planning. In European conference
_on machine learning, pages 282–293. Springer._
Pawel Ladosz, Lilian Weng, Minwoo Kim, and Hyondong Oh. 2022. Exploration in deep reinforcement
learning: A survey. Information Fusion, 85:1–22.
Cheng Li, Jindong Wang, Kaijie Zhu, Yixuan Zhang,
Wenxin Hou, Jianxun Lian, and Xing Xie. 2023.
Emotionprompt: Leveraging psychology for large
language models enhancement via emotional stimulus. arXiv preprint arXiv:2307.11760.
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy
[Liang. 2023a. Lost in the middle: How language](https://doi.org/10.48550/ARXIV.2307.03172)
[models use long contexts. CoRR, abs/2307.03172.](https://doi.org/10.48550/ARXIV.2307.03172)
Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge Zhang, Denny
Zhou, Andrew M Dai, Diyi Yang, and Soroush
Vosoughi. 2023b. Training socially aligned language
models in simulated human society. arXiv preprint
_arXiv:2305.16960._
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler
Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon,
Nouha Dziri, Shrimai Prabhumoye, Yiming Yang,
Sean Welleck, Bodhisattwa Prasad Majumder,
Shashank Gupta, Amir Yazdanbakhsh, and Peter
[Clark. 2023. Self-refine: Iterative refinement with](https://doi.org/10.48550/ARXIV.2303.17651)
[self-feedback. CoRR, abs/2303.17651.](https://doi.org/10.48550/ARXIV.2303.17651)
Potsawee Manakul, Adian Liusie, and Mark J. F. Gales.
[2023. Selfcheckgpt: Zero-resource black-box hal-](https://doi.org/10.48550/ARXIV.2303.08896)
[lucination detection for generative large language](https://doi.org/10.48550/ARXIV.2303.08896)
[models. CoRR, abs/2303.08896.](https://doi.org/10.48550/ARXIV.2303.08896)
[Samuel Marks and Max Tegmark. 2023. The geometry](https://api.semanticscholar.org/CorpusID:263831277)
[of truth: Emergent linear structure in large language](https://api.semanticscholar.org/CorpusID:263831277)
[model representations of true/false datasets. ArXiv,](https://api.semanticscholar.org/CorpusID:263831277)
abs/2310.06824.
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe,
Mike Lewis, Hannaneh Hajishirzi, and Luke Zettle[moyer. 2022. Rethinking the role of demonstrations:](https://doi.org/10.18653/V1/2022.EMNLP-MAIN.759)
[What makes in-context learning work? In Proceed-](https://doi.org/10.18653/V1/2022.EMNLP-MAIN.759)
_ings of the 2022 Conference on Empirical Methods_
_in Natural Language Processing, EMNLP 2022, Abu_
_Dhabi, United Arab Emirates, December 7-11, 2022,_
pages 11048–11064. Association for Computational
Linguistics.
Jesse Mu, Victor Zhong, Roberta Raileanu, Minqi
Jiang, Noah Goodman, Tim Rocktäschel, and Edward
Grefenstette. 2022. Improving intrinsic exploration
with language abstractions. Advances in Neural In_formation Processing Systems, 35:33947–33960._
Pierre-Yves Oudeyer and Frederic Kaplan. 2007. What
is intrinsic motivation? a typology of computational
approaches. Frontiers in neurorobotics, 1:6.
Liangming Pan, Michael Stephen Saxon, Wenda Xu,
Deepak Nathani, Xinyi Wang, and William Yang
Wang. 2023. [Automatically correcting large lan-](https://api.semanticscholar.org/CorpusID:260682695)
[guage models: Surveying the landscape of diverse](https://api.semanticscholar.org/CorpusID:260682695)
[self-correction strategies. ArXiv, abs/2308.03188.](https://api.semanticscholar.org/CorpusID:260682695)
Dinesh Parthasarathy, Georgios D. Kontes, Axel Plinge,
[and Christopher Mutschler. 2023. C-MCTS: safe](https://doi.org/10.48550/ARXIV.2305.16209)
[planning with monte carlo tree search.](https://doi.org/10.48550/ARXIV.2305.16209) _CoRR,_
abs/2305.16209.
Debjit Paul, Mete Ismayilzada, Maxime Peyrard, Beatriz Borges, Antoine Bosselut, Robert West, and Boi
[Faltings. 2023. Refiner: Reasoning feedback on in-](https://api.semanticscholar.org/CorpusID:257921623)
[termediate representations. ArXiv, abs/2304.01904.](https://api.semanticscholar.org/CorpusID:257921623)
Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng,
Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou
[Yu, Weizhu Chen, and Jianfeng Gao. 2023. Check](https://doi.org/10.48550/ARXIV.2302.12813)
[your facts and try again: Improving large language](https://doi.org/10.48550/ARXIV.2302.12813)
[models with external knowledge and automated feed-](https://doi.org/10.48550/ARXIV.2302.12813)
[back. CoRR, abs/2302.12813.](https://doi.org/10.48550/ARXIV.2302.12813)
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi
[Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the](http://jmlr.org/papers/v21/20-074.html)
[limits of transfer learning with a unified text-to-text](http://jmlr.org/papers/v21/20-074.html)
[transformer. Journal of Machine Learning Research,](http://jmlr.org/papers/v21/20-074.html)
21(140):1–67.
Noah Shinn, Federico Cassano, Beck Labash, Ashwin
Gopinath, Karthik Narasimhan, and Shunyu Yao.
[2023. Reflexion: Language agents with verbal rein-](https://api.semanticscholar.org/CorpusID:258833055)
[forcement learning.](https://api.semanticscholar.org/CorpusID:258833055)
Maciej Swiechowski, Konrad Godlewski, Bartosz Saw[icki, and Jacek Mandziuk. 2023. Monte carlo tree](https://doi.org/10.1007/S10462-022-10228-Y)
[search: a review of recent modifications and applica-](https://doi.org/10.1007/S10462-022-10228-Y)
[tions. Artif. Intell. Rev., 56(3):2497–2562.](https://doi.org/10.1007/S10462-022-10228-Y)
James Thorne, Andreas Vlachos, Christos
Christodoulopoulos, and Arpit Mittal. 2018.
FEVER: a large-scale dataset for fact extraction and
VERification. In NAACL-HLT.
Hugo Touvron, Louis Martin, Kevin R. Stone, Peter
Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava,
Shruti Bhosale, Daniel M. Bikel, Lukas Blecher, Cristian Cantón Ferrer, Moya Chen, Guillem Cucurull,
David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin
Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami,
Naman Goyal, Anthony S. Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor
Kerkez, Madian Khabsa, Isabel M. Kloumann, A. V.
Korenev, Punit Singh Koura, Marie-Anne Lachaux,
Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai
Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov,
Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew
Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan
Saladi, Alan Schelten, Ruan Silva, Eric Michael
Smith, R. Subramanian, Xia Tan, Binh Tang, Ross
Taylor, Adina Williams, Jian Xiang Kuan, Puxin
Xu, Zhengxu Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and
[Thomas Scialom. 2023. Llama 2: Open foundation](https://api.semanticscholar.org/CorpusID:259950998)
[and fine-tuned chat models. ArXiv, abs/2307.09288.](https://api.semanticscholar.org/CorpusID:259950998)
7096
-----
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V.
Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023. [Self-consistency](https://openreview.net/pdf?id=1PL1NIMMrw)
[improves chain of thought reasoning in language](https://openreview.net/pdf?id=1PL1NIMMrw)
[models. In The Eleventh International Conference](https://openreview.net/pdf?id=1PL1NIMMrw)
_on Learning Representations, ICLR 2023, Kigali,_
_Rwanda, May 1-5, 2023. OpenReview.net._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le,
[and Denny Zhou. 2022. Chain of thought prompt-](https://openreview.net/forum?id=_VjQlMeSB_J)
[ing elicits reasoning in large language models. In](https://openreview.net/forum?id=_VjQlMeSB_J)
_Advances in Neural Information Processing Systems._
Jian Xie, Kai Zhang, Jiangjie Chen, Renze Lou, and
[Yu Su. 2023a. Adaptive chameleon or stubborn sloth:](https://api.semanticscholar.org/CorpusID:263610324)
[Revealing the behavior of large language models in](https://api.semanticscholar.org/CorpusID:263610324)
[knowledge conflicts.](https://api.semanticscholar.org/CorpusID:263610324)
Yuxi Xie, Kenji Kawaguchi, Yiran Zhao, Xu Zhao,
MingSung Kan, Junxian He, and Qizhe Xie. 2023b.
[Self-evaluation guided beam search for reasoning.](https://api.semanticscholar.org/CorpusID:258426922)
Canwen Xu, Yichong Xu, Shuohang Wang, Yang Liu,
[Chenguang Zhu, and Julian J. McAuley. 2023. Small](https://doi.org/10.48550/ARXIV.2305.08848)
[models are valuable plug-ins for large language mod-](https://doi.org/10.48550/ARXIV.2305.08848)
[els. CoRR, abs/2305.08848.](https://doi.org/10.48550/ARXIV.2305.08848)
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran,
Thomas L. Griffiths, Yuan Cao, and Karthik
[Narasimhan. 2023a. Tree of thoughts: Deliberate](https://api.semanticscholar.org/CorpusID:258762525)
[problem solving with large language models. ArXiv,](https://api.semanticscholar.org/CorpusID:258762525)
abs/2305.10601.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak
Shafran, Karthik R. Narasimhan, and Yuan Cao.
[2023b. React: Synergizing reasoning and acting](https://openreview.net/pdf?id=WE_vluYUL-X)
[in language models. In The Eleventh International](https://openreview.net/pdf?id=WE_vluYUL-X)
_Conference on Learning Representations, ICLR 2023,_
_Kigali, Rwanda, May 1-5, 2023. OpenReview.net._
Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Good[man. 2022. STar: Bootstrapping reasoning with rea-](https://openreview.net/forum?id=_3ELRdg2sgI)
[soning. In Advances in Neural Information Process-](https://openreview.net/forum?id=_3ELRdg2sgI)
_ing Systems._
Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang,
Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin
Wu, Danny Fox, Helen M. Meng, and James R.
[Glass. 2023. Interpretable unified language checking.](https://api.semanticscholar.org/CorpusID:258041307)
_ArXiv, abs/2304.03728._
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan
Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin,
Zhuohan Li, Dacheng Li, Eric P. Xing, Haotong
[Zhang, Joseph Gonzalez, and Ion Stoica. 2023. Judg-](https://api.semanticscholar.org/CorpusID:259129398)
[ing llm-as-a-judge with mt-bench and chatbot arena.](https://api.semanticscholar.org/CorpusID:259129398)
_ArXiv, abs/2306.05685._
Andy Zhou, Kai Yan, Michal Shlapentokh-Rothman,
[Haohan Wang, and Yu-Xiong Wang. 2023. Language](https://doi.org/10.48550/ARXIV.2310.04406)
[agent tree search unifies reasoning acting and plan-](https://doi.org/10.48550/ARXIV.2310.04406)
[ning in language models. CoRR, abs/2310.04406.](https://doi.org/10.48550/ARXIV.2310.04406)
Xinyu Zhu, Junjie Wang, Lin Zhang, Yuxiang Zhang,
Yongfeng Huang, Ruyi Gan, Jiaxing Zhang, and Yu[jiu Yang. 2023. Solving math word problems via](https://doi.org/10.18653/v1/2023.acl-long.245)
[cooperative reasoning induced language models. In](https://doi.org/10.18653/v1/2023.acl-long.245)
_Proceedings of the 61st Annual Meeting of the As-_
_sociation for Computational Linguistics (Volume 1:_
_Long Papers), pages 4471–4485, Toronto, Canada._
Association for Computational Linguistics.
7097
-----
**A** **More Experimental Details for Initial**
**Study**
**A.1** **Experiment for Figure 1**
The prompt used in Autostop is “You were
either successful or unsuccessful in your
previous trial. Stick to your previous
answer if it is correct, otherwise
consider a new answer”. The prompt used for
_NeverStop is “You failed in your previous_
trial and reconsider a new answer”. The
motivation behind Autostop is that we totally rely
on the LLM’s internal knowledge to check the correctness of its own outputs. However, LLM fails
in this setting as the performance is even worse
than initial stage. For NeverStop, we hope to identify that some correctly answered samples will be
kept unchanged even the negative feedback provided. However, we didn’t find a pattern between
the changed and unchanged predicted samples.
**A.2** **Implementation for Knowledge**
**Grounding and Results**
**Dataset** We evaluate LLMs’ knowledge grounding ability on knowledge-rich multiple-choice
dataset, MMLU. It consists of four domains:
STEM, Social, Humanity and Other, totaling 56
subjects. All methods are evaluated on 50 randomly
selected samples for each subject (excluding those
in the Other domain), and the remaining samples
are used as the training set where applicable.
**Models and Baselines** In addition to Llama213B, Llama2-70B, and GPT-3.5 for prompting,
we also leverage unified language checking, Uni_LangCheck (Zhang et al., 2023), for statement_
assessment. _UniLangCheck aims to check if_
language input is factual and fair via prompting
LLMs to generate groundings for fact-checking.
Therefore, we firstly prompt LLMs to generate
a fact about the key element in the question before proceeding to the final assessment. We repeatedly prompt the LLMs for 5 times and use
the majority-voted answer as the result for Self_Consistency (Wang et al., 2023). TRUE (Hon-_
ovich et al., 2022) is the T5-11B (Raffel et al.,
2020) model fine-tuned on a collection of natural
language inference (NLI) datasets to check factual
correctness, and has been used by previous works
within similar contexts (Gao et al., 2023a,b). We
further fine-tune its classifier head on our training set, which is annotated as factually correct or
not, before evaluation. Both Contrastive Consistent Search (ContrastSearch ) (Burns et al., 2023)
and ActivationRegress (Marks and Tegmark, 2023)
train classifiers whose inputs are activations extracted from Llama2-13B 12-layer encodings [6].
_ActivationRegress trains a logistic classifier on the_
activations with factual labels as supervision. Con_trastSearch, instead, operates without factual la-_
bels. For a statement si, we firstly construct a datapair x[+] and x[−] by annotating True and False to
this statement,regardless of its factual correctness.
Then, we derive the probabilities by mapping x to a
number between 0 and 1, i.e., p[+] = pθ(ϕ(x[+]i [))][ and]
_p[−]_ = pθ(ϕ(x[−]i [))][. The mapping function][ p][θ][ is up-]
dated such that the probabilities are both confident
(p[+]i _[≈]_ [1][ −] _[p]i[−][) and consistent (][p]i[+]_ _[̸≈]_ _[p]i[−][).]_
**Prompt Settings** The basic prompt for knowledge grounding is shown in Figure A1a. This is
used for Llama2, GPT-3.5 and Self-Consistency.
The advanced prompt inspired by UniLangCheck
is illustrated in Figure A1b. For each subject, we
randomly select 50 samples and extract their question and choice to build a statement for knowledge checking. The correctness of this statement is
deemed True if the selected choice is exactly the
correct one, otherwise it is labeled False.
**Correlation between Self-consistency Confi-**
**dence and Accuracy** For the self-consistency(5)
baseline, we calculate the R[2] for confidence (the
frequency of the current answer among all generated answers, totaling 5) and the accuracy. The
results are shown in Table A1. We observe a high
correlation between the two variables, which inspires our design of multiple-consistency for answer assessment.
STEM Social Humanity Others
GPT-3.5 0.80 0.89 0.84 0.88
Llama 0.86 0.85 0.91 0.86
Vicuna 0.92 0.90 0.74 0.92
Table A1: The correlation between accuracy and
self-consistency confidence over three domains in the
MMLU datasets.
**A.3** **Implementation for Direction Generation**
Based on the observation that existing feedback
has limited effects to guide LLMs to update their
6The original dimensions of Llama2-30B is 5024. We
apply PCA to reduce this dimensionality to obtain 50dimensional activations as classifier input.
7098
-----
As an expert in knowledge grounding, you'll be assessing statements that consist of a question followed by a proposed answer.
The question forms the initial part of the statement, and the answer follows it. Utilize your thoughtful analysis to determine the
correctness of each statement. Conclude the assessment with a "Finish[answer]" that returns either True or False, marking the
completion of the task.
Here are some examples:
{examples}
(END OF EXAMPLES)
Statement: {question:q, answer: a}
Thought: thought
Action: Finish[answer]
(a) Basic prompt for knowledge grounding. Text in gray is extracted from datasets, in red shadow is generated by LLMs.
In your capacity as a specialist in knowledge grounding, your task is streamlined into a comprehensible two-step process. Firstly, assume the role of
a question architect, delineating the essential "key elements/knowledge" integral to formulating a sound question. Subsequently, based on these
identified key knowledge elements, proffer your response. The ensuing step involves a meticulous comparison of your proposed answer with the
provided solution to ascertain accuracy. Conclude this evaluative process with a succinct 'Finish[answer]' statement, conclusively designating either
True or False, thereby encapsulating the successful execution of the task."
Here are some examples{examples}
(END OF EXAMPLES)
Statement: {question:q, answer: a}
Key Element/fact: fact
Thought: thought
Comparison: comparison
Action: Finish[answer]
(b) Fact-extract prompt applied to UniLangCheck for knowledge grounding. Text in gray shadow is extracted from
datasets, in red shadow is generated by LLMs. Comparing to the basic prompt, it includes additional fact generation.
Figure A1: Prompts for knowledge grounding.
current incorrect response, we propose several simple strategies to enhance the effectiveness of generated feedback in the self-improvement process.
These strategies are mainly inspired by the following two observations: (1) LLMs are more susceptible to context influence at the beginning or near the
end (Liu et al., 2023a) (2) ICL is highly sensitive
to the stylish and emotional words in demonstrations (Min et al., 2022; Li et al., 2023). We summarize the different strategies in the diagram shown
in Figure A2.
We show relative percentage of changed samples those incorrectly predicted ones before and
after applying the NegReflect in Table A2. The
percentages have been greatly improved with the
instruction which has been inserted closer to the
end of prompt. To verify whether this change could
lead to task performance, we display the detailed
performances over three LLMs after applying different instructions in Table A3. It is clear that
_NegPrefix demonstrates the most significant im-_
provements across all the datasets and models. In
contrast, NewAnswer has the same sentences Neg_Prefix as but its position is far away from the gen-_
erating point for LLMs. This can be explained that
_position of instruction is important in ICL. And_
the performance of NewAnswer is slightly better
than baseline, it can be partly explained that the
_NewAnswer explicitly show the negative attitude_
towards and guide the model to generate a different
answer. Among the three models, the average promotion on GPT3.5 is the most negligible. This can
_be explained that larger model are more confident_
_with its internal knowledge and less vulnerable to_
_given noisy text._
**Model** **Prompts** **Change**
Oracle 0.56
GPT35
NegReflect 0.72
Oracle 0.54
Llama
NegReflect 0.72
Oracle 0.64
Vicuna
NegReflect 0.74
Table A2: The relative percentage of changed samples
among those incorrectly predicted ones. We use the
average results for different domains in MMLU.
**B** **Mirror algorithm**
We introduce the pipeline of the proposed Mirror in
Algorithm 1 involves iteratively conducting a UCTSEARCH until predefined iteration constraint is
reached, and the best action a(BESTCHILD(v0, 0))
leading to the best child of the root node v0 returns.
7099
-----
**Baseline**
**You have attempted to answer the following question before and**
**failed.** **Below is the last unsuccessful trial you attempted to answer.**
**.**
**Observation**
**The answer is incorrect .**
**NewAnswer**
**The Thought in last trial is not factually correct and**
**I will reconsider and propose a different answer.**
**NegPrefix**
**The Thought in last trial is not factually correct and**
**I will reconsider and propose a different answer.**
**Previous Trial**
You're an advanced reasoning agent capable of self-reflection and continuous improvement. You have attempted to answer the following
question before and failed. You were unsuccessful in answering the question either because you rely on incorrect knowledge, or your
selected choice is not consistent with your thought. Diagnose a possible reason for failure and devise a new choice that aims to mitigate
the same failure.
Question: One suggestion that Lukianoff and Haidt make to challenge vindictive protectiveness is
Choices:
A. have colleges and universities officially and strongly discourage trigger warnings.
B. to defund the Department of Education.
C. to promote greater understanding of historical and contemporary oppression.
D. none of the above.
**Baseline** **NewAnswer**
Thought: Lukianoff and Haidt suggest challenging vindictive protectiveness by promoting discourse and intellectual freedom through the
use of open-minded dialogue, civility, and reasoned disagreement. This would include promoting an understanding of of historical and
contemporary oppression (Choice C). So, the answer is C. to promote greater understanding of historical and contemporary oppression.
Action: Finish[C. to promote greater understanding of historical and contemporary oppression]
(END PREVIOUS TRIAL)
**Observation**
**Feedback**
Reflections: **NegPrefix**
Figure A2: Given the question and the LLM’s previous trial, it is asked to generated feedback under different
prompts to facilitate reflection and potentially update its previous response. The four candidate instructions, Baseline,
Observation, NewAnswer and NegPrefix, are enclosed in dashed frames, and they will be positioned differently to
exert their respective effects.
we apply the UTC algorithm to select the best child
node. (2) SIMULATION the reward r for vl through
SIMULATIONPOLICY. This phrase is to simulate
the future rewards of the current node through multiple interactions. For simplicity, we follow the
similar process as expansion and return the reward
_r for selected action-state pair. (3) BACKPROPA-_
GATE the simulation results to the selected nodes
to accelerate SEARCHPOLICY in next iteration. Algorithm is described in Algorithm 1.
Model Stem Social Humanity Other
GPT35 0.80 0.82 0.78 0.73
+Observation 0.76 0.82 0.75 0.70
+NewAnswer 0.80 0.84 0.80 0.75
+NegReflect 0.84 0.86 0.84 0.76
Llama 0.64 0.63 0.60 0.64
+Observation 0.63 0.62 0.61 0.62
+NewAnswer 0.64 0.67 0.65 0.64
+NegReflect 0.70 0.72 0.76 0.69
Vicuna 0.62 0.68 0.59 0.69
+Observation 0.64 0.52 0.45 0.67
+NewAnswer 0.66 0.58 0.47 0.65
+NegReflect 0.69 0.63 0.52 0.72
Table A3: Self-improvments results with different
prompt constraints for answer correction. By comparing with the ground truth, this evaluation is to show
the capability of LLMs in obeying the instructions of
changing their incorrect predictions.
Node in the tree is v and its associated state is s(v),
representing the response generated by Reasoner.
The action is a(v), reward is R and N (·) is the
times of the node having been visited. r(v) is the
reward for the terminate state at each iteration.
The overall process consists of three steps: (1)
SEARCHPOLICY to obtain the terminal node vl.
through which expands the tree until fully expanded. Specially, we randomly add one or more
nodes to the root node according to the possible actions. In our case, we generate multiple responses
to the given question and previous attempts/response. When the current node is fully expanded,
7100
-----
**Algorithm 1 Mirror-UCT**
**Require: state transition function f : S × A →** _S, weight Cp_
Rewardfunction R UCT, Stop Criteria-SEARCH(s g0 →{) 0, 1}
create root node v0 with state s0
**while within computational iteration do**
_vrB ←lACK ←SPSIMULATIONROPAGATEEARCHPOLICY(PvOLICYl, r(v)_ 0)(svl )
**return a(BESTCHILD(v0, 0))**
**function SEARCHPOLICY(v)**
**while g(v) == 0 do**
**if v not fully expanded then**
**return EXPAND(v)**
**else**
_v ←_ BESTCHILD(v, Cp)
**return v**
LLMs in direction generation and response generation process. (a) p0 in direction generation in
_π(at_ _st, p0,_ ). The guidance in the upper is for
_|_ _R_
initial response, the bottom one is for reflection in
the subsequent iterations. (b) Prompt for response
generation given previous response and direction.
_P(st|st−1, at−1; q)._
**function EXPAND(v)**
choose a ∈ untried actions from A(s(v))
add a new child v[′] to v
with s(v)[′] = f (s(v), a) and a(v[′]) = a
**return v[′]**
**function BESTCHILD(v, Cp)**
**return** argmax _NR(vv′[′]_ ) [+ 2][C][p]
_v[′]_ _∈children of v_
2 ln N (v)
_N_ (v[′] )
**function SIMULATIONPOLICY(s)**
**While s is non-terminal do**
_a = argmaxa∈A_ (R(a, s))
_s ←_ _f_ (s, a)
**return reward for s**
**function BACKPROPAGATE(v, r)**
**while v is not null do**
_N_ (v) ← _N_ (v) + 1
_R(v) ←R(v) + r(v)_
_v ←_ parent of v
**C** **Experiments for Mirror**
We will introduce the implementation details and
Prompt for Direction Generation (MMLU)
As a tutor, your focus is on guiding the student to navigate multiple-choice question-answering problems
strategically. Encourage them to dissect the question,
identifying key elements and nuances within each
choice. Emphasize the importance of understanding
subtle differences that could distinguish correct from
incorrect options.
As a tutor, your are supposed to meticulously evaluate
the student’s approach to multiple-choice problems.
Question, Choices and the student’s previous thought
and answer are given, check if the facts mentioned
in the thought is correct and if there might be a more
appropriate option than the one chosen. If the student’s reasoning thought is accurate and the proposed
answer is the most appropriate, encourage them to adhere to their initial trial. Otherwise, guide the student
to revisit specific details, explore alternative choice.
provide complementary results experimented on
Mirror in this section.
**C.1** **Implementation Details**
**Hyper-parameter settings.** In order to encourage diverse direction generation, we set the generation temperature as 0.8 for all the models, and
we set do_Sample = True for llama and vicuna to
avoid greedy search. For the threshold T0 in selfassessment to deriving the final answer, we set 0.8
for GPT35, and 0.5 for llama and Vicuna according to the results on limited validation data. These
results reveal that larger language models are more
consistent in their multiple outputs, which is more
difficult for the smaller models. Hence, we adopt a
relatively lower threshold for smaller models. This
observation can be partially explained by the tendency of larger LMs to rely on their parametric
memory (Xie et al., 2023a).
**Prompt Settings.** We provide 5 demonstrations
along with instruction when prompting LLMs.
We show the prompts/instructions provided to
Prompt for Direction Generation (FEVER)
As a tutor, your focus is on guiding the student to navigate fact-checking problems strategically. Encourage
them to dissect the claim, identifying key elements
and associate facts. Emphasize the correct relation
between important elements that could distinguish
SUPPORTS from REFUTES options. Also, lacking
of enough information will lead to NOT ENOUGH
INFO.
As a tutor, your are supposed to meticulously evaluate the student’s approach to fact verification task.
Claim and the student’s previous thought and answer
are given, check if the relations mentioned in the
Thought is correct and if there might be a more appropriate answer. If the student’s reasoning thought
is accurate and the proposed answer is the most appropriate, encourage them to adhere to their initial
trial. Otherwise, guide the student to revisit specific
details, explore alternative answer.
7101
-----
|Col1|Stem Social Humanity Other Fever|
|---|---|
|GPT-3.5 LLama Vicuna|0.665 0.673 0.674 0.648 0.632 0.689 0.691 0.688 0.669 0.661 0.694 0.690 0.692 0.679 0.673|
Table A5: Diversity among different directions generated by Mirror.
**Effects of question-oriented direction.** To save
computational resources, we randomly select 20
samples from each of the four domain datasets in
MMLU and from FEVER. We show an example of
generated direction in the CoT settings.
Prompt for Response Generation (MMLU)
You are an expert in multiple-choice question answering. Each problem will provide you with a question
and answer choices. Read the question and all the
choices carefully, along with the provided advice, and
solve the problem by having a thought. Thought can
reason about the current situation. Finish[answer]
returns the answer and finishes the task.
You’re an advanced reasoning agent capable of selfreflection and continuous improvement. Your objective is to tackle multiple-choice question answering
problems. Each problem will provide you with a question, answer choices, your previous line of reasoning,
and the detailed analyses from an experienced tutor.
In a succinct review, assess the accuracy of your earlier answer based on your expertise and the advice,
subsequently arrive at the definitive response.
Input
**Question: For a connected, undirected graph**, which of the following must be true?
**I.** degree is even. II. . III. G has at least one vertex with degree 1.
A. I only B. II only C. III only D. I and II
Model Output
**Thought: I.** degree is even. This statement is not necessarily true. In an undirected
graph, the sum of the degrees of all vertices can be odd. For example, consider a graph with three
vertices, each with degree 1. The sum of the degrees would be 3, which is odd. ...
**Action: Finish[A. I only]**
Prompt for Response Generation (FEVER)
You are a knowledgeable and accurate fact verifier.
Please verify the correctness of the following claim
based on your expertise and provided advice. Return
SUPPORTS or REFUTES a Claim, or if there is NOT
ENOUGH INFO.
You’re an advanced reasoning agent capable of selfreflection in fact verification task. Claim and the your
previous response and answer are given, along with
the advice. In a succinct review, assess the accuracy
of your earlier answer based on your expertise and the
advice, subsequently arrive at the definitive response.
**Computational budget.** The total running costs
for using GPT-3.5 in our experiments are approximately $500. In addition, the running time for
Llama2 and Vicuna in our experiments totalled 320
hours, utilising one 40G A100 graphics cards.
**C.2** **Additional Results**
We provide additional results as complementary to
our main results.
**Results on GSM8K dataset.** To further verify the effectiveness of Mirror in various reasoning tasks, we include the math problem, i.e.,
GSM8k (Cobbe et al., 2021). The performance
superiority is evident when comparing with the
best-performing unsupervised baseline, i.e., selfconsistency .
COT w. self-cons[(5)] w. self-cons[(15)] Mirror
0.72 0.75 0.77 0.80
Table A4: Based on GPT-3.5, performances of different
reasoning methods.
Model Output
**Oversight: Consider first identifying any possible misconceptions about the concepts of**
connected, undirected graphs and their properties. Next, analyze each statement in the
question and evaluate its relevance to the definition and characteristics of connected, undirected
graphs.
**Thought: I. A connected, undirected graph G = (V, E) is a type of graph in which there is a path**
between every pair of vertices. For such a graph, the sum of the degrees of all vertices is always
even. This is because the sum of the degrees of all vertices is equal to twice the number of edges
. Therefore, statement I must be true.
**Action: Finish[D. I and II only]**
Figure A3: With question-oriented direction, the Reasoner answers questions with explicit clues.
**The diversity of Search Space** One of our motivations is to broaden the diversity of actions available for more effective exploration. Consequently,
we compute the upper bound results for our generated tree, indicating the presence of the correct
answer in the tree signifies a correctly answered
sample. Results are shown in Figure A4.
To analyse the effects of different LLMs quantitatively, we calculate the average pairwise semantic similarity between multiple directions for one
question, then 1- similarity to obtain the diversity
measurement shown below. The pretrained model,
all-MiniLM-L6-v2 [7] is used for sentence-pair similarity calculation. The results is consistent with
the intuition that sophisticated LLMs incline to less
diverse instruction although such diverse directions
are already capable to improve task performances.
[7https://huggingface.co/sentence-transformers/](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
[all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
7102
-----
gpt35-MMLU
gpt35-FEVER
0.85
0.80
0.75
0.70
0.65
0.60
0.55
|Col1|Col2|0.62 0.85 0.80 0.60 ans_presence 0.75 0.58 acc 0.70 0.56 0.65 0.54|Col4|Col5|
|---|---|---|---|---|
|4 6 8 Num|acc ans_presence|||4 6 8 Num|
0.9
0.8
0.7
0.6
0.5
0.66
0.65
0.64
0.63
0.62
0.62
0.60
0.58
0.56
0.62
(a)
|llama-MMLU|Col2|0.85|llama-FEVER|
|---|---|---|---|
|||0.85 0.575 0.80 0.550 0.75 0.525 ans_presence 0.70 0.500 acc 0.65 0.475 0.60 0.450 0.55 0.425||
|4 6 8 Num|acc ans_presence||4 6 8 Num|
(b)
Figure A4: The task performance, Accuracy (acc) and
the percentage of samples where the ground truth is
included in the tree (ans-presence), with different size
of search space (Num).
**Ethics Statement**
We utilized two publicly available datasets: Massive Multitask Language Understanding (MMLU)
and FEVER (Fact Extraction and Verification).
MMLU is a multiple-choice question-answering
dataset covering 57 subjects across STEM, social
sciences, humanities, and more. Notably, some
subjects, such as moral disputes and moral sce_narios, contain statements that may raise ethical_
concerns. Here, LLMs could be misused or misinterpret the information. We strongly recommend
thorough consideration of safety implications before applying such techniques in real-world scenarios. For the FEVER dataset, positive claims (facts)
are extracted from Wikipedia, and negative claims
are generated by contrasting these facts and subsequently verified without knowledge of their original
source sentences. However, due to Wikipedia’s editable nature, the extracted facts may not always be
entirely accurate. Consequently, we advise against
solely relying on our work as the truth source for
any fact-checking task to avoid potential confusion
and bias.
7103
-----
| [
"Hanqi, Yan",
"Lin, Gui",
"Vivek, Srikumar",
"Qinglin, Zhu",
"Xinyu, Wang",
"Yulan, He",
"Lun-Wei, Ku",
"Andre, Martins"
] | 2024-08-01T00:00:00 | ACL 2024 Long Papers | true | 0 | 0 | null | https://aclanthology.org/2024.acl-long.382 | null | https://www.semanticscholar.org/paper/bf7884f7fd10558afd6f8a8d5b98575febd2ad9b |
Models Can and Should Embrace the Communicative Nature of Human-Generated Math | Math is constructed by people for people: just as natural language corpora reflect not just propositions but the communicative goals of language users, the math data that models are trained on reflects not just idealized mathematical entities but rich communicative intentions. While there are important advantages to treating math in a purely symbolic manner, we here hypothesize that there are benefits to treating math as situated linguistic communication and that language models are well suited for this goal, in ways that are not fully appreciated. We illustrate these points with two case studies. First, we ran an experiment in which we found that language models interpret the equals sign in a humanlike way -- generating systematically different word problems for the same underlying equation arranged in different ways. Second, we found that language models prefer proofs to be ordered in naturalistic ways, even though other orders would be logically equivalent. We advocate for AI systems that learn from and represent the communicative intentions latent in human-generated math. | null | ## Models Can and Should Embrace the Communicative Nature of Human-Generated Math
**Sasha Boguraev[†],** **Ben Lipkin[‡],** **Leonie Weissweiler[†],** **Kyle Mahowald[†]**
_†Department of Linguistics, The University of Texas at Austin_
_‡Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology_
```
{sasha.boguraev, weissweiler, kyle}@utexas.edu
[email protected]
```
**Abstract**
Math is constructed by people for people: just as natural language corpora reflect
not just propositions but the communicative goals of language users, the math data
that models are trained on reflects not just idealized mathematical entities but rich
communicative intentions. While there are important advantages to treating math
in a purely symbolic manner, we here hypothesize that there are benefits to treating
math as situated linguistic communication and that language models are well suited
for this goal, in ways that are not fully appreciated. We illustrate these points with
two case studies. First, we ran an experiment in which we found that language
models interpret the equals sign in a humanlike way—generating systematically
different word problems for the same underlying equation arranged in different
ways. Second, we found that language models prefer proofs to be ordered in
naturalistic ways, even though other orders would be logically equivalent. We
advocate for AI systems that learn from and represent the communicative intentions
latent in human-generated math.
Mathematical propositions are first of all English sentences; not only English sentences,
but each mathematical proposition has a resemblance to certain non-mathematical propositions.
_—Ludwig Wittgenstein, Lectures on the Foundations of Mathematics, 1939_
**1** **Introduction**
Language Models sometimes rely on heuristics and statistics rather than being perfectly compositional
idealized reasoners, especially in domains like math and logic [23, 28, 30, 35, 36, 38]. Whereas
language production and comprehension involve some idealized composition using abstract rules
[6, 18] but also a lot of memorization and pragmatic inference [7, 14], math and logic are domains
where it seems like an idealized compositional system is required for obtaining precise solutions.
Indeed, whether an expression is written 5+ _x = 7 or 7_ _−_ 5 = x or “What is 5 less than 7?” or “Seven
frogs were sitting on a log. Five left. How many are there now?”, there is an underlying computation
that can be extracted and performed (namely, the expression 7 − 5). To properly solve these problems,
the thinking goes, systems should abstract away from their situated format into symbolic space.
There is an intuitive, and well-justified, idea that competent human mathematical reasoners employ
exactly this kind of abstraction. By contrast, less competent mathematical reasoners (e.g., children
struggling to learn math) are often shown to rely on heuristics, schemas, and keywords [4, 8, 22, 39,
32]. For instance, kids might learn that every time they see the phrase “in total” in a word problem,
they should add up all the numbers [33]. While the “heuristic” keyword-based direct translation
approach may be less cognitively taxing, it is also prone to translation errors [40]. Students who
report adopting the more involved strategy of parsing a math word problem into a structured mental
Preprint. Under review.
-----
Forward Equation ( 𝑒!" ) Alex has nine packs of trading cardsForward Word Problem ( 𝑤, each with the!" ) Recovered Equation ( 9𝑥−2 = 4 + 3𝑥 𝑒[!]"# )
same number of cards. He gives away 2 cards and now
9𝑥−2 = 4 + 3𝑥 hasHow many cards are in each pack? 4 cards more than three packs of trading cards. Recovered Equation ( 𝑒[!]"# )
4 + 3𝑥= 9𝑥−2
Reverse Word Problem ( 𝑤!" ) Recovered Equation ( 𝑒[!]#" )
Reverse Equation ( 𝑒!" ) Sally hassticker she has. Jimmy has 4 candies and receives 9 candies for every sticker 3 candies for every 9𝑥−2 = 4 + 3𝑥
4 + 3𝑥= 9𝑥−2 he has but losessame number of candies, how many stickers do they 2 candies. If they end up with the Recovered Equation ( 𝑒[!]#" )
each have?
4 + 3𝑥= 9𝑥−2
Figure 1: For each pair of equations, we generate corresponding word problems and then try to
recover the equations from those problems. The model often recovered the original ordering.
model, and then planning computation and evaluating the solution in that space are more successful
problem solvers [17].
Taken together, these ideas might make it seem like the goal of AI math models should be to leave
the messy domain of language behind and translate expressions into symbolic representations. And,
indeed, combining language models with symbolic provers has proven successful in a variety of math
and reasoning domains [3, 12, 16, 27, 37, 43].
Here, we argue that there can be something lost by entirely disregarding the context. We introduce
the Communicative Math Hypothesis: Math is constructed by people, for people, and as such,
_there are conventions and pragmatics that people bring to the production and comprehension of_
_mathematical expressions—communicative interpretations that go beyond the purely symbolic and_
_that can be well studied using the tools of linguistics and cognitive science. The choice to write 3x_ +9
instead of 3(n + 3) conveys something to the reader, even though they are equivalent. Similarly, the
proof of a theorem is not only a formalization that could be computationally verified, but is itself a
communicative act, with intention of being internalized and understood by others.
Drawing on research in math education that we believe is underappreciated in machine learning, we
make the case for AI researchers to take the Communicative Math Hypothesis seriously. We present
some initial proof-of-concept experiments showing that LLMs pick up on these communicative
regularities. We argue that this information should not always be ignored or explained away, but is a
crucial component of human mathematics.
**2** **Case Study One: Equations are Asymmetric**
Asymmetry in human mathematics interpretations has long been studied in math education. In
particular, there is a wealth of literature on the perils of grade-school-aged children’s asymmetrical
understanding of math – that is, a difficulty in reasoning with a problem such as □ = 2 + 4, despite
relative comfort with the complementary equation of 2 + 4 = □[2, 31]. Even expert mathematicians,
though, understand math asymmetrically [25], giving different interpretations of expressions based on
what is on the left or right of the equals sign. Here, we present results from a case study demonstrating
that LLMs are sensitive to asymmetry in equations and, like humans, do not learn a purely symmetrical
interpretation of the equals sign.
**Methods** To test LLMs’ sensitivity to symmetry, we conduct an experiment assessing their ability to
reconstruct the equations they used to create a specific word problem, as shown in Figure 1. Formally,
we perform a three-step experiment. We first generate a set of n paired forward and reverse equations,
denoted as E = {e1, e2, . . ., en}, where each paired equation ei consists of the forward equation ei[f]
and the reverse equation ei[r]. Thus, we can express each ei as ei = {ei[f] _, ei[r]}. Next, for each of our_
_n pairs, we pass both equations in ei to GPT-4o, and prompt it to generate a corresponding pair of_
word problems,ask the LLM to extract the equations wi = {wi[f], wi[r]}, that could be solved by e[′]i [=][ {][e]i[′] _f_, e′ir} for each ei w, withi ∈ _W W, with = {w E1′ . . . w =f_ _{en′1}[. . . e]. We finallyn[′]_ _[}][. Our]r_
hypothesis is that across all n equations, the LLMs will more often recover e[′]i from eif and e′i
-----
from ei[r]. For details on the equations used in this experiment, their generation, and model prompting
methods, see Appendix A.
**Results and Discussion** We measure the average proportion of the time the original order and
the reversed order were respectively recovered from GPT-4o across 5 different sets of 200 pairs of
randomly generated starting equations. We found that the original equation was recovered on average
52% of the time with a 95% CI of [51%, 54%] across 5 runs. The reverse equation was nearly never
recovered: 0.1% of the time, with a 95% CI of [0%, 0.3%].
These results suggest a difference between the word problems generated from a “forward” equation
and word problems generated from a logically equivalent “reverse” equation—and that this difference
is itself recoverable by GPT-4o. We posit that this information, which a purely symbolic solver would
be agnostic to, is crucial information for systems that aim to use math in collaboration with humans or
in human-like ways. These findings are consistent with work showing that order of premises matters
in LLMs’ ability to reason [5], although they frame this order sensitivity as primarily revealing
LLMs’ brittleness. We interpret these findings (and theirs) as revealing sensitivity to important
communicative factors inherent in the data.
**3** **Case Study Two: Mathematical Rules and Proofs Have Orders**
Our second case study focuses on mathematical communication of the sort more likely to take place
among professional mathematicians: mathematical rules and proofs. Proofs, especially, are widely
used in academic math, as well as related fields, and are duly an area of major focus for AI for math.
Proofs are written to communicate truths that are, in some sense, tautological. Nonetheless, mathematicians have strong expectations and interpretations about the directionality of equation. For
instance, there are generalized principles associated with equal signs, like that the right side of the
equation expounds upon or explains the left side [25]. Thus, while a = b and b = a are equivalent
statements by our agreed-upon set of axioms and inference rules, the choice of one or the other might
communicate a different message when used in a proof.
To explore the preferred orderings used by mathematicians in proofs and rules, Mirin and Dawkins
[25] utilize a set of breaching experiments. Breaching experiments are a class of experiments which
try to break rules in an attempt to confirm their existence [34]. In particular, the authors first provided
expert mathematicians with a host of formal mathematical equations, such as the distributive rule
or an inductive proof. However, these equations were ordered in an unnatural manner – that is in
the case of rules, orders which are not commonly encountered in formal mathematical texts, or in
the case of proofs, orders in which steps do not logically follow from one to the next. The authors
measured whether these mathematicians reported any perceived breaches, with any such breaches
providing evidence for the existence of the mathematicians’ ordering preferences. Our case study
into LLM ordering preferences in formal mathematics follows in this vein, measuring LLM surprisals
for various natural (extant) and unnatural (unobserved) equation orderings.
**Methods** Our set of mathematical equations consists of all examples used in the breaching experiments of Mirin and Dawkins [25]. This totals ten different examples, six of which are one line
equivalences, expressing common mathematical rules, and the other four of which are a series of
equivalences comprising a longer proofs. Each example further contains a brief textual introduction
before the series of equivalences. All examples are reported in Appendix B.
We first split each equation into its individual expressions. We then generate every possible ordering
of a given equation by permuting the order of these individual expressions. Finally, for each model
we calculate the average per-token surprisal for every ordering of expressions in a given equation,
conditioned on that equation’s textual introductions. Our calculations are performed using the
```
minicons package [26], a wrapper around Huggingface’s transformers package [41].
```
In this case study, we use the instruction-tuned variants of four models: LLaMa 3.1 8B [10], Mistral
7B v0.3 [21], Mathstral 7B [1], and Qwen2-Math 7B[42]. Two of these models were trained on
general corpora (LLaMa and Mistral), the other two fine-tuned on math (Mathstral and Qwen2-Math).
**Results and Discussion** As seen in Figure 2 the evaluated models display clear and consistent
preferences for the natural ordering in nine of ten equations. In seven of these, all models display uni
-----
2.0
1.5
1.0
0.5
per token surprisal
|difference quotient|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
||||||
||||||
||||||
||||||
||||||
||||||
||||||
||||||
||||||
|distributive|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
||||||
||||||
||||||
||||||
||||||
||||||
||||||
||||||
||||||
|exponents diff rule|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
||||||
||||||
||||||
||||||
||||||
||||||
||||||
||||||
||||||
|exponents power rule|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
||||||
||||||
||||||
||||||
||||||
||||||
||||||
||||||
||||||
|exponents prod rule|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
||||||
||||||
||||||
||||||
||||||
||||||
||||||
||||||
||||||
|homomorphism|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
||||||
||||||
||||||
||||||
||||||
||||||
||||||
||||||
||||||
|induction|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
||||||
||||||
||||||
||||||
||||||
||||||
||||||
||||||
||||||
|product rule|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
||||||
||||||
||||||
||||||
||||||
||||||
||||||
||||||
||||||
|proof|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
||||||
||||||
||||||
||||||
||||||
||||||
||||||
||||||
||||||
|set theory|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
||||||
||||||
||||||
||||||
||||||
||||||
||||||
||||||
||||||
LlamaMistralMathstralQwenLlamaMistralMathstralQwenLlamaMistralMathstralQwenLlamaMistralMathstralQwenLlamaMistralMathstralQwenLlamaMistralMathstralQwenLlamaMistralMathstralQwenLlamaMistralMathstralQwenLlamaMistralMathstralQwenLlamaMistralMathstralQwen
Figure 2: We compare average per-token surprisal for different orderings of expressions in proofs of
phenomena from Mirin and Dawkins [25]. We find that the original order (red diamonds) have lower
per-token surprisals on average (more probable) than the counterfactual possible orders.
form preference for each equation’s natural ordering. Of the remaining two equations (DIFFERENCE
QUOTIENT and PROOF), only a few nearby orders had a lower surprisal than the natural orders (99.6[th]
and 98.3[rd] percentiles, respectively). The only equation for which there is no clear model preference
for the natural form is PRODUCT RULE, but this was also a rule noted as unusual by participants in
Mirin and Dawkins [25]: mathematicians expressed surprise at seeing f and g instead of f (x) and
_g(x). When we instead use the latter notation, we see consistent preferences for the natural order. We_
do not find significant differences between the performances of math-fine-tuned models and more
generalized language models across all equations (paired t = 0.606, p = 0.548).
These results suggest that LLMs are aligned with expert mathematicians in their preferences for
ordering of proofs and rules, that is in a manner which expresses clear communicative intent. This
alignment leads to AI systems able to produce math interpretable by those using them, which in
comparison to much of the uninterpretable math produced by symbolic solvers and logic programming
systems, is a highly desirable, and long sought-after quality. As such, while the proofs LLMs produce
in their current iteration may not always be correct, any remedies attempting to improve on that
correctness should not do so to the detriment of this alignment, if the goal is human use.
**4** **Conclusion**
We focused our experiments on equation asymmetry and proof ordering, showing that LLMs learn
extra-symbolic communicative information in both domains. But these principles encompass a much
wider set of phenomena. For instance, several phenomena identified as reflecting LLMs’ brittleness
can be fruitfully seen as contributing to the communicative interpretation of math.
- Even though they don’t matter logically, variable names matter for communicating math (e.g.,
functions are often f and g). This pattern extends to programming as well. [19, 24]
- Logically extraneous or pragmatically anomalous information can matter for inferences about
how expressions are interpreted [29, 36].
- The choice of notation and the phrasing of the instructions/prompt can matter for how problems
are solved [15, 20].
Seeing these aspects of LLMs as possible features, and not bugs, could be an important step in
developing AI systems that can work with humans. For instance, working mathematicians were long
limited to purely symbolic theorem provers. Such systems in isolation neglect the more human aspects
of math, ignoring differences in style and comprehensibility. Perhaps we should be developing proof
assistants that are sensitive to these regularities in human proof-writing and other communicative
cues. LLM-based proof systems offer the promise of mathematical assistants that can work with
people [9], alongside them and not just for them as blackbox tools.
While necessarily fuzzier than purely symbolic representations, these communicative principles
are not lawless or illogical but can be studied, systematized, and modeled as rational behavior—as
they are in linguistics and cognitive science [7, 11, 13]. We join Zhang et al. [44] in their call for a
cognitive science perspective on AI and mathematics, centering the role of math as a group activity
and communicative endeavor. The math of the people, by the people, for the people, shall not perish
from our models.
-----
**Acknowledgments**
We would like to thank Paul Dawkins for valuable discussions and insights on mathematical asymmetry and, more generally, the math education literature. We would also like to thank Kanishka
Misra for assistance with the minicons package, and comments on the manuscript. We acknowledge
funding from NSF CAREER grant 2339729 (to Kyle Mahowald).
**References**
[1] AI, M. (2024). Mathstral.
[2] Behr, M., Erlwanger, S., and Nichols, E. (1980). How children view the equals sign. Mathematics
_teaching, 92(1):13–15._
[3] Borazjanizadeh, N. and Piantadosi, S. T. (2024). Reliable reasoning beyond natural language.
_arXiv preprint arXiv:2407.11373._
[4] Briars, D. and Larkin, J. (1984). An integrated model of skill in solving elementary word
problems. Cognition and Instruction, 1(3):245–296.
[5] Chen, X., Chi, R. A., Wang, X., and Zhou, D. (2024). Premise order matters in reasoning with
large language models. In International Conference on Machine Learning. PMLR.
[6] Chomsky, N. (1957). Syntactic Structures. The Hague: Mouton.
[7] Clark, H. H. (1996). Using Language. Cambridge university press.
[8] Clement, L. and Bernhard, J. (2005). A problem-solving alternative to using key words. Mathe_matics Teaching in the Middle School, 10(7):360–365._
[9] Collins, K. M., Sucholutsky, I., Bhatt, U., Chandra, K., Wong, L., Lee, M., Zhang, C. E., ZhiXuan, T., Ho, M., Mansinghka, V., et al. (2024). Building machines that learn and think with
people. arXiv preprint arXiv:2408.03943.
[10] Dubey, A., Jauhri, A., Pandey, A., Kadian, A., Al-Dahle, A., Letman, A., Mathur, A., Schelten,
A., Yang, A., Fan, A., Goyal, A., Hartshorn, A., Yang, A., Mitra, A., Sravankumar, A., Korenev, A.,
Hinsvark, A., Rao, A., Zhang, A., Rodriguez, A., Gregerson, A., Spataru, A., Roziere, B., Biron,
B., Tang, B., Chern, B., Caucheteux, C., Nayak, C., Bi, C., Marra, C., McConnell, C., Keller, C.,
Touret, C., Wu, C., Wong, C., Ferrer, C. C., Nikolaidis, C., Allonsius, D., Song, D., Pintz, D.,
Livshits, D., Esiobu, D., Choudhary, D., Mahajan, D., Garcia-Olano, D., Perino, D., Hupkes, D.,
Lakomkin, E., AlBadawy, E., Lobanova, E., Dinan, E., Smith, E. M., Radenovic, F., Zhang, F.,
Synnaeve, G., Lee, G., Anderson, G. L., Nail, G., Mialon, G., Pang, G., Cucurell, G., Nguyen,
H., Korevaar, H., Xu, H., Touvron, H., Zarov, I., Ibarra, I. A., Kloumann, I., Misra, I., Evtimov,
I., Copet, J., Lee, J., Geffert, J., Vranes, J., Park, J., Mahadeokar, J., Shah, J., van der Linde, J.,
Billock, J., Hong, J., Lee, J., Fu, J., Chi, J., Huang, J., Liu, J., Wang, J., Yu, J., Bitton, J., Spisak,
J., Park, J., Rocca, J., Johnstun, J., Saxe, J., Jia, J., Alwala, K. V., Upasani, K., Plawiak, K., Li,
K., Heafield, K., Stone, K., El-Arini, K., Iyer, K., Malik, K., Chiu, K., Bhalla, K., Rantala-Yeary,
L., van der Maaten, L., Chen, L., Tan, L., Jenkins, L., Martin, L., Madaan, L., Malo, L., Blecher,
L., Landzaat, L., de Oliveira, L., Muzzi, M., Pasupuleti, M., Singh, M., Paluri, M., Kardas, M.,
Oldham, M., Rita, M., Pavlova, M., Kambadur, M., Lewis, M., Si, M., Singh, M. K., Hassan, M.,
Goyal, N., Torabi, N., Bashlykov, N., Bogoychev, N., Chatterji, N., Duchenne, O., Çelebi, O.,
Alrassy, P., Zhang, P., Li, P., Vasic, P., Weng, P., Bhargava, P., Dubal, P., Krishnan, P., Koura, P. S.,
Xu, P., He, Q., Dong, Q., Srinivasan, R., Ganapathy, R., Calderer, R., Cabral, R. S., Stojnic, R.,
Raileanu, R., Girdhar, R., Patel, R., Sauvestre, R., Polidoro, R., Sumbaly, R., Taylor, R., Silva,
R., Hou, R., Wang, R., Hosseini, S., Chennabasappa, S., Singh, S., Bell, S., Kim, S. S., Edunov,
S., Nie, S., Narang, S., Raparthy, S., Shen, S., Wan, S., Bhosale, S., Zhang, S., Vandenhende, S.,
Batra, S., Whitman, S., Sootla, S., Collot, S., Gururangan, S., Borodinsky, S., Herman, T., Fowler,
T., Sheasha, T., Georgiou, T., Scialom, T., Speckbacher, T., Mihaylov, T., Xiao, T., Karn, U.,
Goswami, V., Gupta, V., Ramanathan, V., Kerkez, V., Gonguet, V., Do, V., Vogeti, V., Petrovic, V.,
Chu, W., Xiong, W., Fu, W., Meers, W., Martinet, X., Wang, X., Tan, X. E., Xie, X., Jia, X., Wang,
X., Goldschlag, Y., Gaur, Y., Babaei, Y., Wen, Y., Song, Y., Zhang, Y., Li, Y., Mao, Y., Coudert,
-----
Z. D., Yan, Z., Chen, Z., Papakipos, Z., Singh, A., Grattafiori, A., Jain, A., Kelsey, A., Shajnfeld,
A., Gangidi, A., Victoria, A., Goldstand, A., Menon, A., Sharma, A., Boesenberg, A., Vaughan,
A., Baevski, A., Feinstein, A., Kallet, A., Sangani, A., Yunus, A., Lupu, A., Alvarado, A., Caples,
A., Gu, A., Ho, A., Poulton, A., Ryan, A., Ramchandani, A., Franco, A., Saraf, A., Chowdhury,
A., Gabriel, A., Bharambe, A., Eisenman, A., Yazdan, A., James, B., Maurer, B., Leonhardi, B.,
Huang, B., Loyd, B., Paola, B. D., Paranjape, B., Liu, B., Wu, B., Ni, B., Hancock, B., Wasti, B.,
Spence, B., Stojkovic, B., Gamido, B., Montalvo, B., Parker, C., Burton, C., Mejia, C., Wang, C.,
Kim, C., Zhou, C., Hu, C., Chu, C.-H., Cai, C., Tindal, C., Feichtenhofer, C., Civin, D., Beaty,
D., Kreymer, D., Li, D., Wyatt, D., Adkins, D., Xu, D., Testuggine, D., David, D., Parikh, D.,
Liskovich, D., Foss, D., Wang, D., Le, D., Holland, D., Dowling, E., Jamil, E., Montgomery, E.,
Presani, E., Hahn, E., Wood, E., Brinkman, E., Arcaute, E., Dunbar, E., Smothers, E., Sun, F.,
Kreuk, F., Tian, F., Ozgenel, F., Caggioni, F., Guzmán, F., Kanayet, F., Seide, F., Florez, G. M.,
Schwarz, G., Badeer, G., Swee, G., Halpern, G., Thattai, G., Herman, G., Sizov, G., Guangyi,
Zhang, Lakshminarayanan, G., Shojanazeri, H., Zou, H., Wang, H., Zha, H., Habeeb, H., Rudolph,
H., Suk, H., Aspegren, H., Goldman, H., Damlaj, I., Molybog, I., Tufanov, I., Veliche, I.-E.,
Gat, I., Weissman, J., Geboski, J., Kohli, J., Asher, J., Gaya, J.-B., Marcus, J., Tang, J., Chan,
J., Zhen, J., Reizenstein, J., Teboul, J., Zhong, J., Jin, J., Yang, J., Cummings, J., Carvill, J.,
Shepard, J., McPhie, J., Torres, J., Ginsburg, J., Wang, J., Wu, K., U, K. H., Saxena, K., Prasad,
K., Khandelwal, K., Zand, K., Matosich, K., Veeraraghavan, K., Michelena, K., Li, K., Huang, K.,
Chawla, K., Lakhotia, K., Huang, K., Chen, L., Garg, L., A, L., Silva, L., Bell, L., Zhang, L., Guo,
L., Yu, L., Moshkovich, L., Wehrstedt, L., Khabsa, M., Avalani, M., Bhatt, M., Tsimpoukelli, M.,
Mankus, M., Hasson, M., Lennie, M., Reso, M., Groshev, M., Naumov, M., Lathi, M., Keneally,
M., Seltzer, M. L., Valko, M., Restrepo, M., Patel, M., Vyatskov, M., Samvelyan, M., Clark, M.,
Macey, M., Wang, M., Hermoso, M. J., Metanat, M., Rastegari, M., Bansal, M., Santhanam, N.,
Parks, N., White, N., Bawa, N., Singhal, N., Egebo, N., Usunier, N., Laptev, N. P., Dong, N.,
Zhang, N., Cheng, N., Chernoguz, O., Hart, O., Salpekar, O., Kalinli, O., Kent, P., Parekh, P., Saab,
P., Balaji, P., Rittner, P., Bontrager, P., Roux, P., Dollar, P., Zvyagina, P., Ratanchandani, P., Yuvraj,
P., Liang, Q., Alao, R., Rodriguez, R., Ayub, R., Murthy, R., Nayani, R., Mitra, R., Li, R., Hogan,
R., Battey, R., Wang, R., Maheswari, R., Howes, R., Rinott, R., Bondu, S. J., Datta, S., Chugh, S.,
Hunt, S., Dhillon, S., Sidorov, S., Pan, S., Verma, S., Yamamoto, S., Ramaswamy, S., Lindsay, S.,
Lindsay, S., Feng, S., Lin, S., Zha, S. C., Shankar, S., Zhang, S., Zhang, S., Wang, S., Agarwal, S.,
Sajuyigbe, S., Chintala, S., Max, S., Chen, S., Kehoe, S., Satterfield, S., Govindaprasad, S., Gupta,
S., Cho, S., Virk, S., Subramanian, S., Choudhury, S., Goldman, S., Remez, T., Glaser, T., Best,
T., Kohler, T., Robinson, T., Li, T., Zhang, T., Matthews, T., Chou, T., Shaked, T., Vontimitta, V.,
Ajayi, V., Montanez, V., Mohan, V., Kumar, V. S., Mangla, V., Albiero, V., Ionescu, V., Poenaru,
V., Mihailescu, V. T., Ivanov, V., Li, W., Wang, W., Jiang, W., Bouaziz, W., Constable, W., Tang,
X., Wang, X., Wu, X., Wang, X., Xia, X., Wu, X., Gao, X., Chen, Y., Hu, Y., Jia, Y., Qi, Y., Li, Y.,
Zhang, Y., Zhang, Y., Adi, Y., Nam, Y., Yu, Wang, Hao, Y., Qian, Y., He, Y., Rait, Z., DeVito, Z.,
Rosnbrick, Z., Wen, Z., Yang, Z., and Zhao, Z. (2024). The llama 3 herd of models.
[11] Frank, M. C. and Goodman, N. D. (2012). Predicting pragmatic reasoning in language games.
_Science, 336(6084):998–998. Publisher: American Association for the Advancement of Science._
[12] Gao, L., Madaan, A., Zhou, S., Alon, U., Liu, P., Yang, Y., Callan, J., and Neubig, G. (2023).
Pal: Program-aided language models. In International Conference on Machine Learning, pages
10764–10799. PMLR.
[13] Gibson, E., Futrell, R., Piandadosi, S. T., Dautriche, I., Mahowald, K., Bergen, L., and Levy, R.
(2019). How efficiency shapes human language. Trends in Cognitive Sciences.
[14] Goldberg, Y. (2019). Assessing BERT’s Syntactic Abilities. arXiv preprint arXiv:1901.05287.
[15] Güçler, B. (2014). The role of symbols in mathematical communication: the case of the limit
notation. Research in Mathematics Education, 16(3):251–268.
[16] He-Yueya, J., Poesia, G., Wang, R., and Goodman, N. (2023). Solving math word problems
by combining language models with symbolic solvers. In The 3rd Workshop on Mathematical
_Reasoning and AI at NeurIPS’23._
[17] Hegarty, M., Mayer, R. E., and Monk, C. A. (1995). Comprehension of arithmetic word
problems: A comparison of successful and unsuccessful problem solvers. Journal of educational
_psychology, 87(1):18._
-----
[18] Heim, I. and Kratzer, A. (1998). Semantics in Generative Grammar. Wiley-Blackwell, Malden,
MA.
[19] Hersh, R. (1998). What is mathematics, really? Mitteilungen der Deutschen Mathematiker_Vereinigung, 6(2):13–14._
[20] Iverson, K. E. (1979). Notation as a tool of thought. Communications of the ACM, 23(8):444–
465. ACM Turing Award Lecture, Delivered at ACM ’79, Detroit, Oct. 29, 1979.
[21] Jiang, A. Q., Sablayrolles, A., Mensch, A., Bamford, C., Chaplot, D. S., de las Casas, D.,
Bressand, F., Lengyel, G., Lample, G., Saulnier, L., Lavaud, L. R., Lachaux, M.-A., Stock, P.,
Scao, T. L., Lavril, T., Wang, T., Lacroix, T., and Sayed, W. E. (2023). Mistral 7b.
[22] Karp, K. S., Bush, S. B., and Dougherty, B. J. (2019). Avoiding the ineffective keyword strategy.
_Teaching Children Mathematics, 25(7):428–435._
[23] McCoy, R. T., Yao, S., Friedman, D., Hardy, M., and Griffiths, T. L. (2023). Embers of
autoregression: Understanding large language models through the problem they are trained to
solve. arXiv preprint arXiv:2309.13638.
[24] Miceli-Barone, A. V., Barez, F., Cohen, S. B., and Konstas, I. (2023). The larger they are, the
harder they fail: Language models do not recognize identifier swaps in python. In Findings of the
_Association for Computational Linguistics: ACL 2023, pages 272–292._
[25] Mirin, A. and Dawkins, P. C. (2022). Do mathematicians interpret equations asymmetrically?
_The Journal of Mathematical Behavior, 66:100959._
[26] Misra, K. (2022). minicons: Enabling flexible behavioral and representational analyses of
transformer language models. arXiv preprint arXiv:2203.13112.
[27] Olausson, T. X., Gu, A., Lipkin, B., Zhang, C. E., Solar-Lezama, A., Tenenbaum, J. B., and
Levy, R. (2023). Linc: A neurosymbolic approach for logical reasoning by combining language
models with first-order logic provers. arXiv preprint arXiv:2310.15164.
[28] Opedal, A., Stolfo, A., Shirakami, H., Jiao, Y., Cotterell, R., Schölkopf, B., Saparov, A., and
Sachan, M. (2024). Do language models exhibit the same cognitive biases in problem solving as
human learners? In Forty-first International Conference on Machine Learning.
[29] Pasolunghi, M. C., Cornoldi, C., and De Liberto, S. (1999). Working memory and intrusions
of irrelevant information in a group of specific poor problem solvers. Memory & Cognition,
27:779–790.
[30] Patel, A., Bhattamishra, S., and Goyal, N. (2021). Are NLP models really able to solve simple
math word problems? In Toutanova, K., Rumshisky, A., Zettlemoyer, L., Hakkani-Tur, D., Beltagy,
I., Bethard, S., Cotterell, R., Chakraborty, T., and Zhou, Y., editors, Proceedings of the 2021 Con_ference of the North American Chapter of the Association for Computational Linguistics: Human_
_Language Technologies, pages 2080–2094, Online. Association for Computational Linguistics._
[31] Powell, S. R. (2012). Equations and the equal sign in elementary mathematics textbooks. The
_Elementary school journal, 112(4):627–648._
[32] Powell, S. R. and Fuchs, L. S. (2018). Effective word-problem instruction: Using schemas to
facilitate mathematical reasoning. Teaching exceptional children, 51(1):31–42.
[33] Powell, S. R., Namkung, J. M., and Lin, X. (2022). An investigation of using keywords to solve
word problems. The Elementary School Journal, 122(3):452–473.
[34] Rafalovich, A. (2006). Making sociology relevant: The assignment and application of breaching
experiments. Teaching Sociology, 34(2):156–163.
[35] Razeghi, Y., Logan IV, R. L., Gardner, M., and Singh, S. (2022). Impact of pretraining term
frequencies on few-shot numerical reasoning. In Goldberg, Y., Kozareva, Z., and Zhang, Y.,
editors, Findings of the Association for Computational Linguistics: EMNLP 2022, pages 840–854,
Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
-----
[36] Shi, F., Chen, X., Misra, K., Scales, N., Dohan, D., Chi, E. H., Schärli, N., and Zhou, D. (2023).
Large language models can be easily distracted by irrelevant context. In International Conference
_on Machine Learning, pages 31210–31227. PMLR._
[37] Sprague, Z., Yin, F., Rodriguez, J. D., Jiang, D., Wadhwa, M., Singhal, P., Zhao, X., Ye, X.,
Mahowald, K., and Durrett, G. (2024). To CoT or not to CoT? chain-of-thought helps mainly on
math and symbolic reasoning.
[38] Stolfo, A., Jin, Z., Shridhar, K., Schoelkopf, B., and Sachan, M. (2023). A causal framework to
quantify the robustness of mathematical reasoning with language models. In Proceedings of the
_61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),_
pages 545–561.
[39] Verschaffel, L., Greer, B., and De Corte, E. (2000). Making sense of word problems. Lisse, The
_Netherlands, 224:224._
[40] Verschaffel, L., Schukajlow, S., Star, J., and Van Dooren, W. (2020). Word problems in
mathematics education: A survey. Zdm, 52:1–16.
[41] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf,
R., Funtowicz, M., Davison, J., Shleifer, S., von Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C.,
Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., and Rush, A. (2020). Transformers: State-ofthe-art natural language processing. In Liu, Q. and Schlangen, D., editors, Proceedings of the
_2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations,_
pages 38–45, Online. Association for Computational Linguistics.
[42] Yang, A., Yang, B., Hui, B., Zheng, B., Yu, B., Zhou, C., Li, C., Li, C., Liu, D., Huang, F., et al.
(2024). Qwen2 technical report. arXiv preprint arXiv:2407.10671.
[43] Ye, X., Chen, Q., Dillig, I., and Durrett, G. (2024). SatLM: Satisfiability-aided language models
using declarative prompting. Advances in Neural Information Processing Systems, 36.
[44] Zhang, C., Collins, K., Weller, A., and Tenenbaum, J. (2023). Ai for mathematics: A cognitive
science perspective. In The 3rd Workshop on Mathematical Reasoning and AI at NeurIPS’23.
-----
**A** **Equation Generation and Prompting in Case Study One**
**A.1** **Equation Generation**
For step one of this experiment, we create our equation sets as follows. We first create two independent
expressions, each of which consists of two operands, either added or subtracted to each other. One of
these operands is a single digit number, with the other being a variable quantity in x with a single
digit coefficient. All operands, operations, and choice of which operands are the variable quantity
are selected at random. We then form our pair of complimentary equations by placing an equals
sign between these two expressions, in both orders. That is, given the two expressions a and b, our
pair of complimentary equations would be a = b and b = a. To illustrate further, a generated set
of expressions may include, for example, 2x + 3 = 4 − 5x or 8 − 5x = 2 + 3x, but not 2x = 3,
4x + 5y = 8x − 2, or 9x ∗ 2 = x − 4.
**A.2** **Prompting Methods**
Our experimental methodology necessitates prompting GPT-4o twice for each equation in our
evaluation set: once to create a word problem from a given equation, and once to try and recover an
equation given a math word problem. Below, we describe the prompts used for each of these steps.
**A.2.1** **Prompting for Word Problem Creation**
For a given equation, EQUATION, we first prime GPT-4o with the following command:
"You are a helpful middle school math teacher."
We then prompt the model to generate a word problem using the following prompt:
"Create a grade-school math problem representing the following equation:
{EQUATION}. Make sure your problem is clear, concise, represents every term of
the equation, and ends in a question mark. Generate just the problem and nothing
else."
**A.2.2** **Prompting for Equation Recovery**
For a given math word problem, PROBLEM, we first prime GPT-4o with the following command:
You are a helpful assistant.
We then prompt the model to recover the equation that is represented by PROBLEM with the following
prompt:
"What is the underlying math equation represented by the following situation:
{PROBLEM}. Use the letter ’x’ for the unknown quantity. Please do not explain, or
write any accompanying text, give just a single equation and nothing else."
**B** **Equation Set for Case Study Two**
We use the following set of equations for evaluating model order preferences in formal mathematics.
We name each following subsection as they are labeled in Figure 2. Each equation is presented below
in its "natural" form. All TE[Xformatting used to render the following sections, up-to the end of the]
equations, is included in our experiment.
**B.1** **DIFFERENCE QUOTIENT**
The difference quotient of a function g is defined to be
_g(x + h) −_ _g(x)_
(x + h) − _x_
-----
where h is nonzero. Let f : R → R be the function defined by f (x) = x[2]. The following shows the
difference quotient:
_f_ (x + h) _f_ (x)
_−_ = _[f]_ [(][x][ +][ h][)][ −] _[f]_ [(][x][)]
(x + h) − _x_ _h_
= [(][x][ +][ h][)][2][ −] _[x][2]_
= _[x][2][ + 2][xh][ +][ h][2][ −]_ _[x][2]_
= [2][xh]
_h[2]_
= 2x + h
**B.2** **DISTRIBUTIVE**
The distributive law tells us that for all numbers x, y, and z,
_x(y + z) = xy + xz_
**B.3** **EXPONENTS DIFF RULE**
Recall the Properties of Exponents:
_b[x]_
_b[y][ =][ b][x][−][y]_
**B.4** **EXPONENTS POWER RULE**
Recall the Properties of Exponents:
(b[x])[y] = b[xy]
**B.5** **EXPONENTS PROD RULE**
Recall the Properties of Exponents:
_b[x]_ _∗_ _b[y]_ = b[x][+][y]
**B.6** **HOMOMORPHISM**
Let ⟨S, ⋆⟩ and ⟨S[′], ⋆[′]⟩ be binary algebraic structures. A homomorphism from ⟨S, ⋆⟩ **to ⟨S[′], ⋆[′]⟩** is a
function ϕ : S → _S[′]_ such that for all x, y ∈ _S,_
_ϕ(x ⋆y) = ϕ(x) ⋆[′]_ _ϕ(y)_
**B.7** **INDUCTION**
The following is a portion of a proof by induction that for all natural numbers k, k[3] _−_ _k is divisible_
by 6. At this point in the proof, it has been assumed that n[3] _−_ _n is divisible by 6, and it is being_
shown that (n + 1)[3] _−_ (n + 1) is therefore also divisible by 6.
(n + 1)[3] _−_ (n + 1) = (n[3] + 3n[2] + 3n + 1) − (n + 1)
= (n[3] + 3n[2] + 3n + 1) − (n + 1)
= (n[3] _−_ _n) + (3n[2]_ + 3n)
= (n[3] _−_ _n) + 3n(n + 1)_
**B.8** **PRODUCT RULE**
The product rule for derivatives says that if f and g are differentiable functions, then
_fg[′]_ + f _[′]g = (fg)[′]_
-----
**B.9** **PROOF**
**Theorem 1. Suppose ⟨S, ⋆⟩** _and ⟨S[′], ⋆[′]⟩_ _be binary algebraic structures, and ϕ is an isomorphism_
_from ⟨S, ⋆⟩_ _onto ⟨S[′], ⋆[′]⟩. Further suppose that e is a left identity element in ⟨S, ⋆⟩. Then ϕ(e) is a_
_left identity element in ⟨S[′], ⋆[′]⟩._
_Proof. Let s[′]_ be an element of S[′]. Since ϕ is onto, there exists some s ∈ _S such that ϕ(s) = s[′]._
Hence
_s[′]_ = ϕ(s) = ϕ(e ⋆s) = ϕ(e) ⋆[′] _ϕ(s) = ϕ(e) ⋆[′]_ _s[′]_
**B.10** **SET THEORY**
The following is a proof in a set theory textbook that if a is a transitive set, then (a[+]) = a. Note
that a transitive set is defined to be a set a such that all members of a are subsets of a, and a[+] is
defined to be a _a_
_∪{_ _}_ [S]
_Proof._
( _a[+]) =_ (a ∪{a})
[ = ([ _a) ∪_ ( _{a})_
= ([ _a) ∪_ _a[_
= a[
-----
| [
"Sasha, Boguraev",
"Ben, Lipkin",
"Leonie, Weissweiler",
"Kyle, Mahowald"
] | 2024-09-25T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.17005 | https://arxiv.org/abs/2409.17005 | https://www.semanticscholar.org/paper/3ac973b724609018dc2911a33dc65fe1b1267051 |
More Details, Please: Improving Autoformalization with More Detailed Proofs | The formalization of mathematical theorems and their proofs is a time-consuming and tedious process which, despite recent advances in the reasoning capabilities of AI systems, remains a challenging task for computers. Existing attempts to automate the process with language models struggle with the difference in level of detail between formal and informal proofs. Successful autoformalization requires models to understand and be able to explain the nuances of logical arguments, a critical aspect of reasoning that is often overlooked in existing research. In this work, we introduce Sketch, Prove, Add Detail & Repeat (SPADeR), an approach that enhances proof autoformalizers by using language models to infer and explicitly incorporate implicit details from informal proofs. With the same number of autoformalization attempts, our method increases the percentage of successfully formalized problems in the miniF2F test dataset from 34.8% to 38.1%. | null | ## More Details, Please: Improving Autoformalization with More Detailed Proofs
**Guillem Tarrach** [1] **Albert Q. Jiang** [1] **Daniel Raggi** [1] **Wenda Li** [2] **Mateja Jamnik** [1]
**Abstract**
The formalization of mathematical theorems and
their proofs is a time-consuming and tedious process which, despite recent advances in the reasoning capabilities of AI systems, remains a challenging task for computers. Existing attempts
to automate the process with language models
struggle with the difference in level of detail between formal and informal proofs. Successful
autoformalization requires models to understand
and be able to explain the nuances of logical arguments, a critical aspect of reasoning that is often
overlooked in existing research. In this work, we
introduce Sketch, Prove, Add Details & Repeat
(SPADER), an approach that enhances proof autoformalizers by using language models to infer
and explicitly incorporate implicit details from
informal proofs. With the same number of autoformalization attempts, our method increases the
percentage of successfully formalized problems
in the miniF2F test dataset from 34.8% to 38.1%.
**1. Introduction**
A significant body of recent work has investigated the reasoning capabilities of Large Language Models (LLMs), particularly in the context of solving mathematical problems.
One frequently studied task is Automated Theorem Proving (ATP), which involves automatically generating formal
proofs of mathematical theorems. However, few studies
have investigated the ability of LLMs to understand and
explain mathematical arguments. In this work, we introduce
an approach that leverages this capability to construct more
detailed informal mathematical proofs, thereby improving
the process of autoformalization – the translation of informal proofs into formally verifiable formal proofs. Informal
proofs lack many details that are necessary to verify their
correctness. While formal proofs do not suffer from this
1University of Cambridge 2University of Edinburgh. Correspondence to: Guillem Tarrach <[email protected]>.
_AI for MATH Workshop at ICML 2024, Vienna, Austria. Copyright_
2024 by the author(s).
issue, in practice the focus on low-level details makes formal automated theorem provers less successful at high-level
planning. As a result, autoformalization systems struggle
with the discrepancy in the level of detail in formal and informal proofs (Jiang et al., 2023, Section 5.2 and Appendix
C). Our approach uses LLMs to explain informal proofs by
inferring and incorporating implicit details, thereby bridging
the gap between informal and formal proofs.
To plan ahead and focus on the overall proof strategy, mathematicians usually write proofs in a non-linear, hierarchical
manner: They start by writing a high-level proof draft and
iteratively add more detail until the proof is considered complete. Previous work on language model-based ATP has
studied such hierarchical set-ups (Li et al., 2021; Jiang et al.,
2023; Mikuła et al., 2023), but has not explored adding detail to informal proofs. For example, in Draft, Sketch and
_Prove (DSP) (Jiang et al., 2023), a high-level informal proof_
draft is used to inform a more detailed formal proof sketch,
which is later completed by an automated theorem prover.
A common error case occurs in the process of translating
informal drafts into formal sketches. This process happens
in a single step: the model must decide which steps in the
draft need further argumentation, add the missing details,
and translate the informal draft to the formal language all
at once. Therefore, the approach could benefit from using
specialized models for each of the three stages, particularly
adding the missing details as it is especially complex.
Our main contribution is SPADER, a method that enhances
autoformalizers through the use of LLMs to construct more
detailed informal mathematical proofs by incorporating
the implicit reasoning steps into them. The approach is
illustrated in Figure 1. Starting with a theorem statement
and an informal proof, we use an autoformalizer to generate
a formal proof sketch. We attempt to complete the sketch
using an automated prover. If some steps cannot be proved,
we use an LLM to provide more details about them and
re-attempt the process with a new, more detailed, informal
proof. The process succeeds if the additional detail is correct
and offers a good explanation for the problematic step.
Our experiments show that SPADER increases the success
rate of autoformalization systems. With the same number
of autoformalization attempts, adding detail to informal
-----
**More Details, Please: Improving Autoformalization with More Detailed Proofs**
|Autofo + Attempt|rmalise to prove|
|---|---|
|Autofo + Attempt|rmalise to prove|
|---|---|
**Problem. Expand the product (x + 1)[2]** _· x. Show that it is x[3]_ + 2x[2] + x.
**Solution. We have (x + 1)[2]** = (x + 1)(x + 1)
**Problem. Expand the product (x+1)[2]·x. Show that**
it is x[3] + 2x[2] + x. = x · x + x · 1 + 1 · x + 1 · 1
**Solution. We have (x + 1)[2]** = x[2] + 2x + 1. = x[2] + 2x + 1.
Multiplying this by x gives x[3] + 2x[2] + x. □ Multiplying this by x gives x[3] + 2x[2] + x. □
Autoformalise Autoformalise
Add details to
+ +
problematic steps
Attempt to prove Attempt to prove
```
theorem: theorem:
fixes x ::real fixes x ::real
shows "(x + 1)^2 * x = x^3 + 2 * x^2 + x" shows "(x + 1)^2 * x = x^3 + 2 * x^2 + x"
proof - proof have f1: "(x + 1)^2 = x^2 + 2 * x + 1" have c0: "(x + 1)^2 = (x + 1) * (x + 1)"
sorry by (simp add: power2_eq_square)
have "(x^2 + 2 * x + 1) * x = x^3 + 2 * x^2 + x" then have c1: "(x + 1) * (x + 1) = x * x + x * 1 + 1 * x + 1 * 1"
by (simp add: mult.left_commute by (simp add: algebra_simps)
numeral_3_eq_3 power2_eq_square)
then have c2: "x^2 + x + x + 1 = x^2 + 2 * x + 1"
then show ?thesis by (simp add: power2_eq_square)
using f1 by simp finally have f1: "(x + 1)^2 = x^2 + 2 * x + 1"
using c0 c2 by (simp add: c1 mult_1 power2_eq_square)
qed
have "(x^2 + 2 * x + 1) * x = x^3 + 2 * x^2 + x"
by (simp add: mult.left_commute
numeral_3_eq_3 power2_eq_square)
# ✗
then show ?thesis
using f1 by simp
qed
```
# ✓
_Figure 1. Overview of SPADER (Sketch, Prove, Add Details & Repeat). Given formal and informal statements of a theorem and an_
informal proof, we attempt to autoformalize the proof and then formally verify it. Whenever a particular step cannot be proved, we add
more details to it using a language model. We re-attempt the formal verification with the more detailed informal proof. The inclusion of
more details into informal proofs improves the performance of the autoformalizer.
**2. Background and Related Work**
**2.1. Mathematical Reasoning with Language Models**
With recent advances in language models, particularly the
introduction of LLMs, there has been an increase in research
into their reasoning capabilities, particularly in the context
of mathematical problem-solving (Hendrycks et al., 2021;
Drori et al., 2022; Welleck et al., 2021). While alternative prompting methods (Wei et al., 2022; Yao et al., 2023;
Zheng et al., 2023a) help improve the accuracy of reasoning
arguments, language models still frequently make mistakes.
These challenges highlight the need for robust verification
methods to complement informal reasoning.
Furthermore, the ability of LLMs to understand and explain
existing arguments remains largely unexplored. In this
work, we investigate these abilities and their evaluation
proofs with GPT-4o increases the number of successfully
verified problems in the miniF2F test from 85 (34.8%) to 93
(38.1%).
In conclusion, we make the following contributions:
- We propose a method for using LLMs to construct
more detailed proofs by inferring implicit details in
informal mathematical proofs.
- We demonstrate the usefulness of the presented method
for autoformalization.
Our work shows that LLMs can provide detailed mathematical proofs by inferring and explaining implicit reasoning
steps. This ability helps bridge the gap between informal
and formal mathematical proofs and enables LLM-based
autoformalization systems to verify more theorems.
-----
**More Details, Please: Improving Autoformalization with More Detailed Proofs**
through autoformalization and formal verification.
**2.2. Autoformalization**
To address the limitations of reasoning with language models, recent work has explored the combination of informal
reasoning with formal verification through autoformalization. While early approaches to autoformalization with deep
learning took inspiration from Neural Machine Translation
(Wang et al., 2018), it has been observed (Wu et al., 2022)
that LLMs are better suited for this task because of their incontext few-shot learning capabilities (Brown et al., 2020)
and the scarcity of parallel informal-formal data. Draft,
_Sketch and Prove (DSP) (Jiang et al., 2023) approaches_
automated theorem proving by autoformalizing computergenerated informal proofs. Autoformalization proceeds in
two stages. The first stage uses an LLM to generate a formal
sketch that follows the informal proof. The formal sketch is
not a complete formal proof; instead, it outlines the overall
proof strategy by describing intermediate conjectures. In
the second stage, an off-the-shelf formal theorem prover
is employed to prove the intermediate conjectures in the
sketch, thus completing the proof. However, in many cases,
the informal proof does not contain enough detail for the
automated prover to fill in the gaps. Don’t Trust: Verify
(Zhou et al., 2024) applies a similar method to open-ended
mathematical problems. The method consists of generating
an informal chain-of-thought reasoning argument to find
the answer to a problem, which is considered valid only if
it can be autoformalized and formally verified by an automated prover. This approach suffers from the same problem,
where informal solutions sometimes fail to be verified despite being correct. Lyra (Zheng et al., 2023b) addresses
this issue by prompting LLMs to repair errors in the formal
proofs. In our approach, we instead prompt LLMs to add
more detail to the informal proofs. We note that informal
solutions to the test problems used to evaluate the methods
in Lyra, as well as ours, may be part of the training data for
the LLMs used (GPT-4 and GPT-4o. respectively). Therefore, these methods are better understood as methods for
autoformalization rather than theorem-proving.
**2.3. Formal Theorem Proving**
The construction of mathematical proofs in non-linear ways,
where detail is dynamically added to problematic steps, has
been more widely studied in the context of formal theorem
proving. IsarStep (Li et al., 2021) introduces a benchmark
for the task of generating intermediate steps in a formal
proof. Magnushammer (Mikuła et al., 2023) combines a
premise selection model with formal proof generators (Jiang
et al., 2022). The premise selection model is employed
to find premises that imply the intermediate conjectures
generated by the proof generator, which together constitute a
proof of the theorem. Baldur (First et al., 2023) approaches
ATP by generating a full formal proof with a pre-trained
language model and using a specialized model to repair any
errors with it. However, unlike the methods in Section 2.2,
none of these methods make use of informal mathematical
data, which is significantly more abundant than formal data.
**3. SPADER: Enhancing Autoformalization**
**with More Detailed Informal Proofs**
We now describe SPADER (Sketch, Prove, Add Details
_& Repeat), our approach to autoformalization and formal_
verification. This approach enhances the performance of
autoformalizers by using LLMs to construct more detailed
proofs that guide the autoformalization and formal verification process. The approach is illustrated in Figure 1 and
summarized in Algorithm 1. We assume the user has access
to an autoformalizer and an automated theorem prover or
proof assistant. Given an informal proof – a proof draft in
the terminology of DSP (Jiang et al., 2023) – the approach
consists of the following stages:
**Stage 1 (Sketch). The informal proof draft is translated**
into a formal sketch using an autoformalizer. The formal
sketch need not be a complete formal proof; it may contain
open conjectures that will be handled in the next stage.
The formal sketch should follow the high-level structure of
the informal draft. For example, in Figure 1, the different
intermediate conjectures in the formal proof can be mapped
to steps with the same color in the informal draft. This may
be achieved through the use of comments in the sketch.
**Stage 2 (Prove). An automated theorem prover attempts**
to fill in the missing details in the formal sketch, thus completing the proof. If a proof is found, the process has been
successful. If the theorem prover is unable to prove an intermediate step, the step is flagged as not proven, and the
theorem prover proceeds with the rest of the proof, assuming
that the step is true.
**Stage 3. (Add Detail). The steps in the formal sketch that**
could not be proved are mapped to the corresponding steps
in the informal draft. A separate model, typically an LLM,
is then prompted to provide more details on the steps in
question and to generate a more detailed informal draft.
**Repeat. The process is repeated starting from Stage 1 with**
the new draft.
In Figure 1, the vertical downward arrows represent Stages 1
and 2, and the middle arrow represents Stage 3. In the rest
of this paper, we refer to the number of times that Stage 3 is
run as the number of detailing passes M .
-----
**More Details, Please: Improving Autoformalization with More Detailed Proofs**
**Algorithm 1 SPADER (Sketch, Prove, Add Details &**
Repeat). The algorithm assumes that the user has access to an autoformalizer autoformalize, an automated
theorem prover attempt formal proof, and a model
add detail that can add detail to proofs.
**Parameters: Number of detailing passes M** .
**Input: Theorem t, informal proof p.**
_draft ←_ **p**
**for j ∈{0, . . ., M** _} do_
_sketch ←_ autoformalize(t, draft)
_proof ←_ attempt formal proof(sketch)
_failedSteps ←{s ∈_ _proof : s.proven = false}_
**if failedSteps = ∅** **then**
**return proof**
**else if j < M then**
_draft ←_ add detail(t, draft, failedSteps)
**end if**
**end for**
**return FAIL**
3-shot prompting. We prompt the model to include the original proof as comments before the corresponding steps in
Isabelle. We also prompt the model to include a comment
concluding the informal proof (with “The result follows”)
so that the end of the proof can be marked as needing more
detail in the next stages. We include 3 in-context examples,
randomly sampled from a list of 17 hand-labeled samples,
which are modified versions of those from (Jiang et al.,
2023). They have been modified to break down the informal proofs (included as comments in the formal proof) into
smaller steps. We hope that segmenting informal proofs into
smaller steps makes it easier to pinpoint the problematic
steps later. To generate a diverse set of sketches across multiple runs, we generate them with temperature sampling with
a temperature parameter of 0.6 as in DSP (Jiang et al., 2023).
We attempt to complete the sketch using the Isabelle theorem prover. Whenever Isabelle fails to prove an intermediate
conjecture, we attempt to prove it with several heuristics,
as in DSP (Jiang et al., 2023), and Sledgehammer (Paulson
& Blanchette), a collection of automated theorem provers.
If these fail to prove a conjecture, we add a sorry statement, which tells Isabelle to assume that the step has been
proven and allows it to continue verifying the rest of the
proof. If the verification process encounters an error (e.g. if
the formal sketch contains syntax errors), we abort and stop
the run. If a proof is parsed correctly but contains sorry
statements, we add an ‘unproven’ flag to the last comment
before each such statement. Since the comments contain
the original informal proof, the flagged steps are those that
require more detail. We concatenate all the steps to recover
the original proof and surround the flagged steps with the
strings <MORE_DETAIL> and </MORE_DETAIL>.
To add detail to the proof, we prompt GPT-4o to rewrite
the proof with more detail in the marked steps. We use
temperature sampling with a temperature parameter of 0.4.
We have used a lower temperature parameter than before
to prioritize accuracy over diversity since only one detailed
draft is generated per sketch. We generate either one (if
_M = 2) or two (if M = 1) formal sketches for each of the_
new drafts and re-attempt the formal verification. If M = 2,
we repeat the process of adding detail to unproven steps,
generating a formal sketch, and attempting to complete it.
We ensure that the number of autoformalization attempts is
consistent across the different experiments (M = 1, M = 2
and the baseline) by implementing a modified version of
Algorithm 1, which is described in the appendix.
We have run each experiment N = 100 times and consider a problem solution to be successfully autoformalized
whenever one of these succeeds. To avoid confusion in
our discussion, we will distinguish between individual runs
(each run of an experiment) and autoformalization attempts
(each time the autoformalizer is called). Each individual run
**4. Experiments**
Next, we describe the experiments we conducted to evaluate
whether adding detail to informal proofs with SPADER
improves the performance of autoformalizers.
**4.1. Dataset and Metrics**
We evaluate SPADER on the miniF2F dataset (Zheng et al.,
2021). The miniF2F dataset comprises 488 mathematical
problems (244 validation problems and 244 test problems).
Each problem consists of a collection of formal statements in
different formal languages: Isabelle (Paulson, 1988), Lean
(de Moura et al., 2015), MetaMath (Yu et al., 2024) and
HOL Light (Bansal et al., 2019). This dataset was expanded
in DSP (Jiang et al., 2023) to include an informal statement and a human-written informal solution for each formal
statement. Our goal is to correctly autoformalize and formally verify the informal solution in Isabelle. We evaluate
the performance of our method according to the number of
test problem solutions that can be correctly formalized and
verified.
**4.2. Implementation**
We performed our experiments with the Isabelle proof assistant (Paulson, 1988). We have considered M = 1 and
_M = 2 detailing passes. This allows us to compare the_
effect of multiple detailing passes.
As our autoformalization model, we use GPT-4o[1]. We
prompt the model through the OpenAI API to translate
the informal proof into a formal Isabelle/HOL sketch with
[1https://openai.com/index/hello-gpt-4o/](https://openai.com/index/hello-gpt-4o/)
-----
**More Details, Please: Improving Autoformalization with More Detailed Proofs**
_Table 1. Results of the experiments on autoformalization. The_
table shows the number of problems correctly formalized in the
miniF2F test dataset for different autoformalization methods. With
the same number of autoformalization attempts, SPADER is able
to write complete formal solutions for 8 more problems than the
Sketch and Prove baseline, where additional details are not added
to informal drafts.
METHOD PROBLEMS FORMALIZED
SKETCH AND PROVE 85 (34.8%)
SPADER, M = 1 (OURS) **93 (38.1%)**
SPADER, M = 2 (OURS) **93 (38.1%)**
may generate up to three sketches with different levels of
detail, so the process involves at most 300 autoformalization
attempts. Since we stop a run whenever a formal proof is
found or the verification encounters an error, this bound
is rarely reached in practice (with the average number of
attempts being 108).
**4.3. Baselines**
As a baseline, we compare our approach against our implementation of the autoformalization method presented in
DSP (Jiang et al., 2023): we first generate a formal sketch,
which we try to formally verify in Isabelle using the same
procedure described above. We refer to this baseline as
_Sketch and Prove since it does not include the Draft stage in_
DSP: we conduct our experiments exclusively with humangenerated proofs. These proofs, unlike computer-generated
ones, are known to be correct and contain no errors. This
allows us to isolate the effect of additional detail on autoformalization more accurately. As discussed above, for each
individual run, we use the same number of autoformalization attempts in the baseline and the other approaches.
**4.4. Results**
The results of our experiments on autoformalization are
displayed in Table 1. With the same number of autoformalization attempts, SPADER achieves a higher success rate
than the Sketch and Prove baseline. We note that SPADER
is able to solve the same number of problems with M = 1
and M = 2, which suggests that adding even more detail to
detailed sketches does not improve performance. Figure 2
shows how the number of successfully solved problems
changes with the number of individual runs N .
**5. Discussion**
**LLMs Can Understand and Explain Informal Proofs.**
In our experiments, we have asked LLMs to provide more
details on specific steps in mathematical proofs. For 8 prob
_Figure 2. Number of problems solved in the miniF2F test set_
**for different numbers of runs for SPADER with M = 2 (green),**
_M = 1 (orange), and the Sketch and Prove baseline (blue). We_
use the same number of autoformalization attempts in all methods.
After the easier problems are solved, SPADER can autoformalize
more problems.
lems in the miniF2F dataset, the new, more detailed proofs
could be formalized, while the original proofs could not.
Therefore, the details added by the LLM must be correct
(since they have been formally verified) and must provide
a good explanation of the arguments in the original proof
(since they help verify the rest of the proof). We have included a few successful examples in the appendix.
**Additional Details do not Improve Autoformalization on**
**Easy Problems. We observe from Figure 2 that for small**
numbers of runs (N < 20), the more detailed proofs do not
improve the success rate of autoformalizers. All the problems that are proved by the baseline, but not by SPADER
in this range, are solved in subsequent runs by SPADER on
initial autoformalization attempts (i.e., by autoformalizing
the original drafts, which do not contain additional details).
This suggests that the process of adding detail is helpful
only for difficult problems. Future research may explore
distinguishing which proofs benefit most from additional
detail so that resources can be allocated more effectively.
**Multiple Detailing Passes are not Necessary. Table 1 and**
Figure 2 indicate that adding detail twice (M = 2) does not
yield better performance than adding detail once (M = 1).
As discussed above, the performance of the baseline (which
corresponds to M = 0) is similar to that of M = 1, 2 for
small numbers of runs, suggesting that the similar trend for
_M = 1 and M = 2 might not hold for a very large number_
of runs (N ≫ 100). However, verifying or making use of
this would require an impractical amount of computational
resources. It is also possible that more detailing passes are
beneficial for more complicated informal proofs, since they
-----
**More Details, Please: Improving Autoformalization with More Detailed Proofs**
are usually less detailed.
**Use of Specialized Models for Adding Detail. We have**
experimented with using specialized models that retrieve
relevant references or insert intermediate steps in proofs to
provide more detail. However, these models have not proved
successful at improving autoformalization. We believe this
is due to the difference in distribution between the problems
in the miniF2F dataset, consisting of high-school level mathematics and employing rich language in their solutions, and
our training data, which is collected from ProofWiki[2] and
consists of university-level mathematics using more rigid
language.
**6. Conclusion**
To create successful autoformalization systems, it is essential to reconcile the lack of detail of informal proofs with
the high detail requirements of formal verification systems.
In this paper, we introduced SPADER, an approach that
enhances autoformalizers by using Large Language Models
(LLMs) to construct more detailed informal mathematical
proofs. By inferring and incorporating implicit details in
proofs, this approach improves the accuracy of language
model-based autoformalizers. This shows that LLMs possess the ability to understand and explain existing mathematical arguments.
**Impact Statement**
This paper presents work whose goal is to advance the field
of Machine Learning. There are many potential societal
consequences of our work, none of which we feel must be
specifically highlighted here.
**References**
Bansal, K., Loos, S., Rabe, M., Szegedy, C., and Wilcox,
S. HOList: An environment for machine learning
of higher order logic theorem proving. In Chaudhuri, K. and Salakhutdinov, R. (eds.), Proceedings of
_the 36th International Conference on Machine Learn-_
_ing, volume 97 of Proceedings of Machine Learn-_
_ing Research, pp. 454–463. PMLR, 09–15 Jun 2019._
[URL https://proceedings.mlr.press/v97/](https://proceedings.mlr.press/v97/bansal19a.html)
[bansal19a.html.](https://proceedings.mlr.press/v97/bansal19a.html)
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D.,
Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G.,
Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G.,
Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J.,
Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M.,
Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S.,
[2https://proofwiki.org](https://proofwiki.org)
Radford, A., Sutskever, I., and Amodei, D. Language
models are few-shot learners. In Larochelle, H.,
Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.),
_Advances in Neural Information Processing Systems,_
volume 33, pp. 1877–1901. Curran Associates, Inc.,
2020. [URL https://proceedings.neurips.](https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf)
[cc/paper_files/paper/2020/file/](https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf)
[1457c0d6bfcb4967418bfb8ac142f64a-Paper.](https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf)
[pdf.](https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf)
de Moura, L., Kong, S., Avigad, J., van Doorn, F., and
von Raumer, J. The Lean Theorem Prover (System Description). In Felty, A. P. and Middeldorp, A. (eds.),
_Automated Deduction - CADE-25, pp. 378–388, Cham,_
2015. Springer International Publishing. ISBN 978-3319-21401-6.
Drori, I., Zhang, S., Shuttleworth, R., Tang, L., Lu, A., Ke,
E., Liu, K., Chen, L., Tran, S., Cheng, N., Wang, R.,
Singh, N., Patti, T. L., Lynch, J., Shporer, A., Verma, N.,
Wu, E., and Strang, G. A neural network solves, explains,
and generates university math problems by program synthesis and few-shot learning at human level. Proceedings
_of the National Academy of Sciences (PNAS), 119(32),_
2022.
First, E., Rabe, M., Ringer, T., and Brun, Y. Baldur: Wholeproof generation and repair with large language models.
In Proceedings of the 31st ACM Joint European Soft_ware Engineering Conference and Symposium on the_
_Foundations of Software Engineering, ESEC/FSE 2023,_
pp. 1229–1241, New York, NY, USA, 2023. Association for Computing Machinery. ISBN 9798400703270.
[doi: 10.1145/3611643.3616243. URL https://doi.](https://doi.org/10.1145/3611643.3616243)
[org/10.1145/3611643.3616243.](https://doi.org/10.1145/3611643.3616243)
Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart,
S., Tang, E., Song, D., and Steinhardt, J. Measuring mathematical problem solving with the math dataset. NeurIPS,
2021.
Jiang, A., Czechowski, K., Jamnik, M., Milos, P.,
Tworkowski, S., Li, W., and Wu, Y. T. Thor: Wielding hammers to integrate language models and automated
theorem provers. In NeurIPS, 2022.
Jiang, A. Q., Li, W., Han, J. M., and Wu, Y. Lisa: Language
models of isabelle proofs. 6th Conference on Artificial
_Intelligence and Theorem Proving, 2021._
Jiang, A. Q., Welleck, S., Zhou, J. P., Lacroix, T., Liu,
J., Li, W., Jamnik, M., Lample, G., and Wu, Y. Draft,
sketch, and prove: Guiding formal theorem provers with
informal proofs. In The Eleventh International Confer_[ence on Learning Representations, 2023. URL https:](https://openreview.net/forum?id=SMa9EAovKMC)_
[//openreview.net/forum?id=SMa9EAovKMC.](https://openreview.net/forum?id=SMa9EAovKMC)
-----
**More Details, Please: Improving Autoformalization with More Detailed Proofs**
Li, W., Yu, L., Wu, Y., and Paulson, L. C. Isarstep: a
benchmark for high-level mathematical reasoning. In
_International Conference on Learning Representations,_
[2021. URL https://openreview.net/forum?](https://openreview.net/forum?id=Pzj6fzU6wkj)
[id=Pzj6fzU6wkj.](https://openreview.net/forum?id=Pzj6fzU6wkj)
Mikuła, M., Antoniak, S., Tworkowski, S., Jiang, A. Q.,
Zhou, J. P., Szegedy, C., Łukasz Kucinski, Miło´ s, P., and´
Wu, Y. Magnushammer: A transformer-based approach
to premise selection, 2023.
Paulson, L. and Blanchette, J. Three Years of Experience
with Sledgehammer, a Practical Link between Automatic
and Interactive Theorem Provers. In IWIL 2010. The 8th
_International Workshop on the Implementation of Logics._
Paulson, L. C. The foundation of a generic theorem prover. Technical Report UCAM-CL-TR130, University of Cambridge, Computer Laboratory,
[March 1988. URL https://www.cl.cam.ac.uk/](https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-130.pdf)
[techreports/UCAM-CL-TR-130.pdf.](https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-130.pdf)
Wang, Q., Kaliszyk, C., and Urban, J. First experiments with
neural translation of informal to formal mathematics. In
Rabe, F., Farmer, W. M., Passmore, G. O., and Youssef, A.
(eds.), Intelligent Computer Mathematics, pp. 255–270,
Cham, 2018. Springer International Publishing. ISBN
978-3-319-96812-4.
Wei, J., Wang, X., Schuurmans, D., Bosma, M., brian ichter,
Xia, F., Chi, E. H., Le, Q. V., and Zhou, D. Chain of
thought prompting elicits reasoning in large language
models. In Oh, A. H., Agarwal, A., Belgrave, D., and Cho,
K. (eds.), Advances in Neural Information Processing
_[Systems, 2022. URL https://openreview.net/](https://openreview.net/forum?id=_VjQlMeSB_J)_
[forum?id=_VjQlMeSB_J.](https://openreview.net/forum?id=_VjQlMeSB_J)
Welleck, S., Liu, J., Bras, R. L., Hajishirzi, H., Choi, Y.,
and Cho, K. Naturalproofs: Mathematical theorem proving in natural language. In Thirty-fifth Conference on
_Neural Information Processing Systems Datasets and_
_Benchmarks Track (Round 1), 2021._ [URL https:](https://openreview.net/forum?id=Jvxa8adr3iY)
[//openreview.net/forum?id=Jvxa8adr3iY.](https://openreview.net/forum?id=Jvxa8adr3iY)
Wu, Y., Jiang, A. Q., Li, W., Rabe, M., Staats, C., Jamnik,
M., and Szegedy, C. Autoformalization with large language models. In Koyejo, S., Mohamed, S., Agarwal, A.,
Belgrave, D., Cho, K., and Oh, A. (eds.), Advances in
_Neural Information Processing Systems, volume 35, pp._
32353–32368. Curran Associates, Inc., 2022.
Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao,
Y., and Narasimhan, K. Tree of thoughts: Deliberate
problem solving with large language models, 2023.
Yu, L., Jiang, W., Shi, H., YU, J., Liu, Z., Zhang, Y.,
Kwok, J., Li, Z., Weller, A., and Liu, W. Metamath:
Bootstrap your own mathematical questions for large
language models. In The Twelfth International Confer_[ence on Learning Representations, 2024. URL https:](https://openreview.net/forum?id=N8N0hgNDRt)_
[//openreview.net/forum?id=N8N0hgNDRt.](https://openreview.net/forum?id=N8N0hgNDRt)
Zheng, C., Liu, Z., Xie, E., Li, Z., and Li, Y. Progressivehint prompting improves reasoning in large language
models. arXiv preprint arXiv:2304.09797, 2023a.
Zheng, C., Wang, H., Xie, E., Liu, Z., Sun, J., Xin, H., Shen,
J., Li, Z., and Li, Y. Lyra: Orchestrating dual correction
in automated theorem proving, 2023b.
Zheng, K., Han, J. M., and Polu, S. Minif2f: a cross-system
benchmark for formal olympiad-level mathematics. arXiv
_preprint arXiv:2109.00110, 2021._
Zhou, J. P., Staats, C., Li, W., Szegedy, C., Weinberger,
K. Q., and Wu, Y. Don’t trust: Verify – grounding llm
quantitative reasoning with autoformalization, 2024.
-----
**More Details, Please: Improving Autoformalization with More Detailed Proofs**
**A. Implementation Details**
**A.1. Achieving a Consistent Number of Autoformalization Attempts**
To ensure that the baseline is on a level playing field with our method, we have modified Algorithm 1 to use the same number
of autoformalization attempts for the baseline and our method. The resulting modification can be found in Algorithm 2. We
use M _[∗]_ = 2, so that M = 1, 2 correspond to our implementation of SPADER and M = 0 corresponds to the baseline with
the same number of attempts.
**Algorithm 2 SPADER with a consistent number of attempts for different detailing passes M** . The algorithm assumes that
the user has access to an autoformalizer autoformalize, an automated theorem prover attempt formal proof,
and a model add detail that can add detail to proofs.
**Parameters: Number of maximum detailing passes M** _[∗]._
**Input: Theorem t, informal proof p.**
**for M ∈{0, . . ., M** _[∗]} do_
_successfulProofs[M_ ] ←∅
**end for**
_drafts[1] ←_ **p**
/* Autoformalize with M = M _[∗]_ detailing passes */
**for j ∈{0, . . ., M** _[∗]} do_
_sketch ←_ autoformalize(t, drafts[j])
_proof ←_ attempt formal proof(sketch)
**if proof = ERROR then**
**break**
**end if**
_failedSteps ←{s ∈_ _proof : s.proven = false}_
**if failedSteps = ∅** **then**
_successfulProofs[M_ _[∗]].add(proof_ )
**break**
**else if j < M** _[∗]_ **then**
_drafts[j + 1] ←_ add detail(t, drafts[j], failedSteps)
**end if**
**end for**
/* Autoformalize with M < M _[∗]_ detailing passes with the same number of autoformalization attempts */
**for M ∈{0, . . ., M** _[∗]_ _−_ 1} do
**for j ∈{0, . . ., length(drafts)} do**
_sketch ←_ autoformalize(t, drafts[min(M, j)])
_proof ←_ attempt formal proof(sketch)
_failedSteps ←{s ∈_ _proof : s.proven = false}_
**if failedSteps = ∅** **then**
_successfulProofs[M_ ].add(proof )
**break**
**end if**
**end for**
**end for**
**return FAIL**
**A.2. Autoformalization with LLMs**
We describe in detail our process for translating informal proofs into formal sketches with GPT-4o. In the initial sketching
stage (j = 0 in Algorithm 2), where we work with the original informal proofs (as opposed to more detailed ones), we have
used two different prompts that ask the model to write a formal sketch in Isabelle that follows the informal draft, where
each informal step is included as a comment before the corresponding formal steps. The first has very detailed instructions,
-----
**More Details, Please: Improving Autoformalization with More Detailed Proofs**
prompting the model to follow the informal proof very closely and make the informal steps (included in comments) as close
as possible. The motivation behind this is that, with smaller steps, our approach will be able to more accurately zone in on
the parts of the proof that are difficult for the automated prover. In contrast to (Jiang et al., 2023), instead of prompting the
model to invoke the Sledgehammer automated prover to prove intermediate conjectures whenever possible, we allow the
model to predict premises that will prove it. The second prompt contains less detailed instructions.
We sample 3 random examples from a list of 17 hand-labeled samples and include them as in-context examples. The
examples contain the original informal proof as comments: for each step in the informal proof, a comment containing it is
followed by the corresponding formal statement. The examples are based on the examples from (Jiang et al., 2023); however,
we segment the informal proofs into smaller steps. For each set of examples and prompts, we have generated two outputs
with temperature sampling with a temperature parameter of 0.6 and a maximum output context length of 1024 tokens. We
did not observe any significant difference in the performance of the two prompts in our validation runs. For the following
sketching stages (j > 0), we use only the first prompt and generate a single output per set of in-context examples, also with
a temperature parameter of 0.6 and a maximum output context length of 1024 tokens.
**A.3. ATP heuristics**
Whenever an intermediate conjecture in the formal sketch fails to be proved, we attempt to prove it with the following heuristics: auto, simp, blast, fastforce, force, eval, presburger, sos, arith, linarith,
auto simp: field simps algebra simps. Note that, differently to DSP (Jiang et al., 2023), we include
algebra simps in the last one. If the heuristics fail, we attempt to prove the step with the Sledgehammer automated prover (Paulson & Blanchette). We interact with Isabelle from Python scripts via Portal to ISAbelle (Jiang et al.,
2021).
**B. Examples of Successfully Autoformalized Proofs**
Below, we include four examples of problems whose original human-written solution could not be autoformalized with
the Sketch and Prove baseline, but which could be autoformalized with the additional detail provided by GPT-4o with
SPADeR. We include the original informal statement and original human-written informal solution, as well as the more
detailed informal solution and complete formal solutions provided by SPADeR. We show two number theory problems
(Figure 3 and Figure 4) and two algebra problems (Figure 5 and Figure 6). The first problem (Figure 3) was discussed in
Appendix C of DSP (Jiang et al., 2023) as an example of a problem whose human-written solution could not be correctly
autoformalized due to not containing enough detail, but for which a computer-written solution could.
-----
**More Details, Please: Improving Autoformalization with More Detailed Proofs**
**Problem. Prove that the fraction** [21]14[n]n[+4]+3 [is irreducible for every natural number][ n][.]
**Original informal solution. Denoting the greatest common divisor of a, b as (a, b), we use the Euclidean algorithm:**
(21n + 4, 14n + 3) = (7n + 1, 14n + 3) = (7n + 1, 1) = 1
It follows that [21]14[n]n[+4]+3 [is irreducible. Q.E.D.]
**Detailed informal solution. Denoting the greatest common divisor of a, b as (a, b), we use the Euclidean algorithm: (21n +**
4, 14n + 3).
First, we calculate the remainder when 21n + 4 is divided by 14n + 3:
21n + 4 = (14n + 3) · 1 + (7n + 1), so (21n + 4, 14n + 3) = (14n + 3, 7n + 1).
Next, we calculate the remainder when 14n+3 is divided by 7n+1: 14n+3 = (7n+1)·2+1, so (14n+3, 7n+1) = (7n+1, 1).
Finally, since the greatest common divisor of any number and 1 is 1, we have (7n + 1, 1) = 1.
Therefore, (21n + 4, 14n + 3) = 1.
It follows that 14[21][n]n[+4]+3 [is irreducible. Q.E.D.]
**Formal solution.**
**theorem:**
**fixes n :: nat**
**shows "gcd (21*n + 4) (14*n + 3) = 1"**
**proof -**
**(* Denoting the greatest common divisor of a, b as (a,b), we use the Euclidean algorithm: *)**
**(* (21n+4, 14n+3). *)**
**(* First, we calculate the remainder when 21n+4 is divided by 14n+3: *)**
**have step1: "21*n + 4 = (14*n + 3) * 1 + (7*n + 1)"**
by simp
**(* 21n+4 = (14n+3) \cdot 1 + (7n+1), so (21n+4, 14n+3) = (14n+3, 7n+1). *)**
**have step2: "gcd (21*n + 4) (14*n + 3) = gcd (14*n + 3) (7*n + 1)"**
**using gcd_non_0_nat[of "14*n + 3" "7*n + 1"] step1**
by (metis gcd.commute gcd_add2 nat_mult_1_right)
**(* Next, we calculate the remainder when 14n+3 is divided by 7n+1: *)**
**have step3: "14*n + 3 = (7*n + 1) * 2 + 1"**
by simp
**(* 14n+3 = (7n+1) \cdot 2 + 1, so (14n+3, 7n+1) = (7n+1, 1). *)**
**have step4: "gcd (14*n + 3) (7*n + 1) = gcd (7*n + 1) 1"**
**using gcd_non_0_nat[of "7*n + 1" 1] step3**
by (metis gcd.bottom_right_bottom gcd.commute gcd_add_mult mult.commute nat_1_add_1)
**(* Finally, since the greatest common divisor of any number and 1 is 1, we have (7n+1, 1) = 1. *)**
**have step5: "gcd (7*n + 1) 1 = 1"**
by simp
**(* Therefore, (21n+4, 14n+3) = 1. *)**
**have "gcd (21*n + 4) (14*n + 3) = 1"**
**using step2 step4 step5**
by simp
**(* It follows that \frac{21n+4}{14n+3} is irreducible. The result follows. *)**
then show ?thesis
by simp
**qed**
_Figure 3. First example of a number theory problem solved by SPADER._
10
-----
**More Details, Please: Improving Autoformalization with More Detailed Proofs**
**Problem. What is the units digit of the sum of the squares of the first nine positive integers? Show that it is 5.**
**Original informal solution. We can reduce the amount of work we have to do in this problem by realizing that the units**
digit of the sum of the squares is the units digit of the sum of the units digits of the squares. In other words, the units digit of
1[2] + 2[2] + . . . + 9[2] is the units digit of 1 + 4 + 9 + 6 + 5 + 6 + 9 + 4 + 1 = 45, which is 5.
**Detailed informal solution. We can reduce the amount of work we have to do in this problem by realizing that the units digit of**
the sum of the squares is the units digit of the sum of the units digits of the squares.
To find the units digit of 1[2] + 2[2] + . . . + 9[2], we first compute the units digit of each square:
- The units digit of 1[2] is 1.
- The units digit of 2[2] is 4.
- The units digit of 3[2] is 9.
- The units digit of 4[2] is 6.
- The units digit of 5[2] is 5.
- The units digit of 6[2] is 6.
- The units digit of 7[2] is 9.
- The units digit of 8[2] is 4.
- The units digit of 9[2] is 1.
Summing these units digits, we get 1 + 4 + 9 + 6 + 5 + 6 + 9 + 4 + 1 = 45.
Thus, the units digit of the sum 1[2] + 2[2] + . . . + 9[2] is the same as the units digit of 45, which is 5.
**Formal solution.**
**theorem mathd_numbertheory_3 :**
"(\<Sum> x < 10. ((x + 1)ˆ2)) mod 10 = (5::nat)"
**theorem mathd_numbertheory_3 :**
"(\<Sum> x < 10. ((x + 1)ˆ2)) mod 10 = (5::nat)"
**proof -**
**(* We can reduce the amount of work we have to do in this problem by realizing that the units digit of the sum of the squares is the units digit of**
**the sum of the units digits of the squares. *)**
**(* To find the units digit of 1ˆ2+2ˆ2+\ldots+9ˆ2, we first compute the units digit of each square: *)**
**(* The units digit of 1ˆ2 is 1. *)**
**have c1: "(1ˆ2 :: nat) mod 10 = 1"**
by simp
**(* The units digit of 2ˆ2 is 4. *)**
**have c2: "(2ˆ2 :: nat) mod 10 = 4"**
by simp
**(* The units digit of 3ˆ2 is 9. *)**
**have c3: "(3ˆ2 :: nat) mod 10 = 9"**
by simp
**(* The units digit of 4ˆ2 is 6. *)**
**have c4: "(4ˆ2 :: nat) mod 10 = 6"**
by simp
**(* The units digit of 5ˆ2 is 5. *)**
**have c5: "(5ˆ2 :: nat) mod 10 = 5"**
by simp
**(* The units digit of 6ˆ2 is 6. *)**
**have c6: "(6ˆ2 :: nat) mod 10 = 6"**
by simp
**(* The units digit of 7ˆ2 is 9. *)**
**have c7: "(7ˆ2 :: nat) mod 10 = 9"**
by simp
**(* The units digit of 8ˆ2 is 4. *)**
**have c8: "(8ˆ2 :: nat) mod 10 = 4"**
by simp
**(* The units digit of 9ˆ2 is 1. *)**
**have c9: "(9ˆ2 :: nat) mod 10 = 1"**
by simp
**(* Summing these units digits, we get 1 + 4 + 9 + 6 + 5 + 6 + 9 + 4 + 1 = 45. *)**
**have "1 + 4 + 9 + 6 + 5 + 6 + 9 + 4 + 1 = (45::nat)"**
by simp
**(* Thus, the units digit of the sum 1ˆ2 + 2ˆ2 + \ldots + 9ˆ2 is the same as the units digit of 45, which is 5. *)**
**hence "(1 + 4 + 9 + 6 + 5 + 6 + 9 + 4 + 1) mod 10 = (45 mod 10)"**
by simp
**also have "... = (5::nat)"**
by simp
**(* The result follows. *)**
then show ?thesis
by eval
**qed**
_Figure 4. Second example of a number theory problem solved by SPADER._
11
-----
**More Details, Please: Improving Autoformalization with More Detailed Proofs**
**Problem. If a, b, and c are positive real numbers such that a(b + c) = 152, b(c + a) = 162, and c(a + b) = 170, then abc is**
(A) 672 (B) 688 (C) 704 (D) 720 (E) 750
Show that it is 720.
**Original informal solution. Adding up the three equations gives 2(ab+bc+ca) = 152+162+170 = 484 =⇒** _ab+bc+ca =_
242.
Subtracting each of the above equations from this yields, respectively, bc = 90, ca = 80, ab = 72.
Taking their product, ab · bc · ca = a[2]b[2]c[2] = 90 · 80 · 72 = 720[2] =⇒ _abc = 720 ⇒_ (D).
**Detailed informal solution. Adding up the three equations gives 2(ab+bc+ca) = 152+162+170 = 484 =⇒** _ab+bc+ca =_
242.
Subtracting each of the above equations from this yields, respectively, ab + bc + ca − _a(b + c) = 242 −_ 152 =⇒ _bc = 90,_
_ab + bc + ca −_ _b(c + a) = 242 −_ 162 =⇒ _ca = 80, and ab + bc + ca −_ _c(a + b) = 242 −_ 170 =⇒ _ab = 72._
Taking their product, (ab) · (bc) · (ca) = (abc)[2] = 72 · 90 · 80.
Calculating the product, 72 · 90 = 6480 and 6480 · 80 = 518400 =⇒ (abc)[2] = 518400 =⇒ _abc =_ _√518400 = 720 ⇒_ (D).
The result follows.
**Formal solution.**
**theorem:**
**fixes a b c::real**
**assumes h0: "0 < a \<and> 0 < b \<and> 0 < c"**
**and h1: "a * (b + c) = 152"**
**and h2: "b * (c + a) = 162"**
**and h3: "c * (a + b) = 170"**
**shows "a * b * c = 720"**
**proof -**
**(* Adding up the three equations gives 2(ab + bc + ca) = 152 + 162 + 170 = 484 *)**
**have c0: "2 * (a * b + b * c + c * a) = 152 + 162 + 170"**
**using h1 h2 h3**
by (simp add: algebra_simps)
**(* \Longrightarrow ab + bc + ca = 242 *)**
**hence c1: "a * b + b * c + c * a = 242"**
by simp
**(* Subtracting each of the above equations from this yields, respectively, *)**
**(* ab + bc + ca - a(b+c) = 242 - 152 \Longrightarrow bc = 90 *)**
**have c2: "a * b + b * c + c * a - a * (b + c) = 242 - 152"**
**using c1 h1**
by simp
**hence c3: "b * c = 90"**
by (auto simp: field_simps algebra_simps)
**(* ab + bc + ca - b(c+a) = 242 - 162 \Longrightarrow ca = 80 *)**
**have c4: "a * b + b * c + c * a - b * (c + a) = 242 - 162"**
**using c1 h2**
by simp
**hence c5: "c * a = 80"**
by (auto simp: field_simps algebra_simps)
**(* ab + bc + ca - c(a+b) = 242 - 170 \Longrightarrow ab = 72 *)**
**have c6: "a * b + b * c + c * a - c * (a + b) = 242 - 170"**
**using c1 h3**
by simp
**hence c7: "a * b = 72"**
by (auto simp: field_simps algebra_simps)
**(* Taking their product, (ab) \cdot (bc) \cdot (ca) = (abc)ˆ2 = 72 \cdot 90 \cdot 80. *)**
**have c8: "(a * b) * (b * c) * (c * a) = (a * b * c) ˆ 2"**
by (simp add: power2_eq_square)
**hence c9: "(a * b * c) ˆ 2 = 72 * 90 * 80"**
**using c3 c5 c7**
by simp
**(* Calculating the product, 72 \cdot 90 = 6480 and 6480 \cdot 80 = 518400 \Longrightarrow (abc)ˆ2 = 518400 *)**
**hence c10: "(a * b * c) ˆ 2 = 518400"**
by (simp add: mult.commute mult.left_commute)
**(* \Longrightarrow abc = \sqrt{518400} = 720 *)**
**hence c11: "a * b * c = sqrt 518400"**
by (simp add: h0 less_eq_real_def real_sqrt_unique)
**hence "a * b * c = 720"**
by simp
**(* The result follows. *)**
then show ?thesis
by simp
**qed**
_Figure 5. First of an algebra problem solved by SPADER._
12
-----
**More Details, Please: Improving Autoformalization with More Detailed Proofs**
**Problem. Expand the product (x + 1)[2]** _· x. Show that it is x[3]_ + 2x[2] + x.
**Original informal solution. We have (x + 1)[2]** = (x + 1)(x + 1) = x(x) + 1(x) + 1(x) + 1 = x[2] + 2x + 1. Multiplying this
by x gives x[3] + 2x[2] + x.
**Detailed informal solution. We have (x** +1)[2] = (x +1)(x +1) = x _·_ _x_ + _x_ _·_ 1+1 _·_ _x_ +1 _·_ 1 = x[2] + _x_ + _x_ +1 = x[2] +2x +1.
Multiplying this by x gives x · (x[2] + 2x + 1) = x · x[2] + x · 2x + x · 1 = x[3] + 2x[2] + x.
The result follows.
**Formal solution.**
**theorem:**
**fixes x ::real**
**shows "(x + 1)ˆ2 * x = xˆ3 + 2 * xˆ2 + x"**
**proof -**
**(* We have (x+1)ˆ2 = (x+1)(x+1) *)**
**have c0: "(x + 1)ˆ2 = (x + 1) * (x + 1)"**
by (simp add: power2_eq_square)
**(* = x \cdot x + x \cdot 1 + 1 \cdot x + 1 \cdot 1 *)**
**also have "... = x * x + x * 1 + 1 * x + 1 * 1"**
by (simp add: algebra_simps)
**(* = xˆ2 + x + x + 1 *)**
**also have "... = xˆ2 + x + x + 1"**
by (simp add: power2_eq_square)
**(* = xˆ2 + 2x + 1 *)**
**also have "... = xˆ2 + 2 * x + 1"**
by (simp add: algebra_simps)
**finally have c1: "(x + 1)ˆ2 = xˆ2 + 2 * x + 1"**
.
**(* Multiplying this by x gives x \cdot (xˆ2 + 2x + 1) *)**
**have c2: "(xˆ2 + 2 * x + 1) * x = x * (xˆ2 + 2 * x + 1)"**
by (simp add: algebra_simps)
**(* = x \cdot xˆ2 + x \cdot 2x + x \cdot 1 *)**
**also have "... = x * xˆ2 + x * (2 * x) + x * 1"**
by (simp add: algebra_simps)
**(* = xˆ3 + 2xˆ2 + x *)**
**also have "... = xˆ3 + 2 * xˆ2 + x"**
by (smt (verit, ccfv_SIG) One_nat_def Suc_1 \<open>x * x + x * 1 + 1 * x + 1 * 1 = x\<ˆsup>2 + x
+ x + 1\<close> mult.commute numeral_3_eq_3 power.simps(2) ring_class.ring_distribs(2))
**finally have "(xˆ2 + 2 * x + 1) * x = xˆ3 + 2 * xˆ2 + x"**
.
**(* The result follows. *)**
then show ?thesis
**using c1 c2**
by simp
**qed**
_Figure 6. Second example of an algebra problem solved by SPADER._
13
-----
| [
"Guillem, Tarrach",
"Albert Q., Jiang",
"Wenda, Li",
"Daniel, Raggi",
"Mateja, Jamnik"
] | 2024-06-13T00:00:00 | null | false | 0 | 0 | null | https://openreview.net/forum?id=AkJvzpYMvK&name=pdf | null | null |
MuMath: Multi-perspective Data Augmentation for Mathematical Reasoning in Large Language Models | Recently, the tool-use Large Language Models (LLMs) that integrate with external Python interpreters have significantly enhanced mathematical reasoning capabilities for open-source LLMs. However, these models fall short in demonstrating the calculation process, which compromises user-friendliness and understanding of problem-solving steps. Conversely, while tool-free methods offer a clear display of the problem-solving process, their accuracy leaves room for improvement.These tool-free methods typically employ a somewhat narrow range of augmentation techniques such as rephrasing and difficulty enhancement to boost performance. In response to this issue, we have amalgamated and further refined these strengths while broadening the scope of augmentation methods to construct a **mu**lti-perspective augmentation dataset for **math**ematics—termed **MuMath** (𝜇-Math) Dataset.Subsequently, we finetune LLaMA-2 on the MuMath dataset to derive the MuMath model. Our experiments indicate that our MuMath-70B model achieves new state-of-the-art performance among tool-free methods—achieving 88.3% on GSM8K and 34.5% on MATH .We release the MuMath dataset along with its corresponding models and code for public use. | The amalgamated MuMath-70B model achieves new state-of-024 the-art performance among tool-free methods and the MuMath dataset is released along with its corresponding models and code for public use. | #### MuMath: Multi-perspective Data Augmentation for Mathematical Reasoning in Large Language Models
**Weihao You[1][†], Shuo Yin[12][†§], Xudong Zhao[13][§], Zhilong Ji[1][*], Guoqiang Zhong[2], Jinfeng Bai[1]**
1Tomorrow Advancing Life
2College of Computer Science and Technology, Ocean University of China
3School of Economics and Management, East China Jiaotong University
[email protected], [email protected], [email protected],
[email protected], [email protected], [email protected]
**Abstract** 100 **GSM8K**
Recently, the tool-use Large Language Models (LLMs) that integrate with external Python
interpreters have significantly enhanced mathematical reasoning capabilities for open-source
LLMs. However, these models fall short in
demonstrating the calculation process, which
compromises user-friendliness and understanding of problem-solving steps. Conversely,
while tool-free methods offer a clear display
of the problem-solving process, their accuracy leaves room for improvement. These
tool-free methods typically employ a somewhat narrow range of augmentation techniques
such as rephrasing and difficulty enhancement to boost performance. In response to
this issue, we have amalgamated and further refined these strengths while broadening
the scope of augmentation methods to construct a multi-perspective augmentation dataset
for mathematics—termed MuMath (µ-Math)
Dataset. Subsequently, we finetune LLaMA2 on the MuMath dataset to derive the MuMath model. Our experiments indicate that
our MuMath-70B model achieves new state-ofthe-art performance among tool-free methods—
achieving 88.3% on GSM8K and 34.5% on
MATH . We release the MuMath dataset along
with its corresponding models and code for
public use.
**GSM8K**
100
SFT RFT WizardMath
MetaMath MuggleMath Ours 88.3
85
78.3
76.2
70
**Test Accuracy(%)** 55
40
7b 13b 70b
**MATH**
40
SFT WizardMath MetaMath Ours
34.5
30
26.9
23.3
20
**Test Accuracy(%)** 10
0
7b 13b 70b
Figure 1: Comparing MuMath with baselines on
LLaMA-2 base models from 7B to 70B, it’s observed
that MuMath demonstrate significant enhancement over
previous state-of-the-art mathematical reasoning LLMs.
Luo et al., 2023b), instruction following (Longpre et al., 2023), and mathematical reasoning (Li
et al., 2023; Yu et al., 2023; Gou et al., 2023).
Among these, mathematical ability is an important
and typical aspect for evaluating different LLMs,
and there still remains a considerable gap between
open-source LLMs, e.g., LLaMA (Touvron et al.,
2023), and the proprietary LLMs in the realm of
mathematical problem solving (Yue et al., 2023).
Recently, a multitude of studies dedicated to
enhancing the mathematical capabilities of opensource LLMs, which can be generally divided
into two different research trajectories: tool-use
**1** **Introduction**
Large Language Models (LLMs) (Devlin et al.,
2019; Radford et al., 2019; Liu et al., 2019; Brown
et al., 2020; Raffel et al., 2023), especially proprietary LLMs like GPT-4 (OpenAI, 2023b), have
been proven to be predominant across almost all the
tasks in Natural Language Processing (NLP), including text classification (Jiang et al., 2023b; Min
et al., 2022), code generation (Chen et al., 2021;
†Equal contribution.
§Work done while the author was interning at TAL.
*Corresponding author.
2932
-----
and tool-free. As for the tool-use LLMs, they
are typically integrated with external Python interpreters, making full use of the latter’s impeccable abilities in numerical calculation and logical inference which can substantially assist LLMs
in solving complex mathematical problems, e.g.,
PAL (Gao et al., 2023), PoT (Chen et al., 2023),
MAmmoTH (Yue et al., 2023), TORA (Gou et al.,
2023) and MathCoder (Wang et al., 2023).
Although the tool-use method can solve computational errors through code, it lacks a demonstration of the calculation process, making it less userfriendly in terms of understanding the problemsolving steps. On the other hand, while the
tool-free method provides a good display of the
problem-solving process, its accuracy still needs to
be improved. Therefore, our work follows along
the tool-free trajectory, focusing on improving the
math reasoning ability of LLMs.
Representative tool-free methods adopt supervised finetuning (SFT) on the augmented datasets
to enhance the LLMs’ mathematical reasoning capability, including RFT (Yuan et al., 2023), MetaMath (Yu et al., 2023), WizardMath (Luo et al.,
2023a), and MuggleMath (Li et al., 2023), etc. RFT
only augments the answer via rejection sampling
to produce diverse reasoning paths with correct answers, but the generated data is similar to training
dataset. MetaMath utilizes two simple augmentation methods, that one uses rephrasing to enhance
the narrative diversity of the questions and answers,
and the other adopts the SV (Weng et al., 2023) and
FOBAR (Jiang et al., 2023a) to generate new mathematical problems and problem-solving strategies
for equations. Instead of rephrasing, WizardMath
and MuggleMath create new questions via rephrasing and difficulty enhancement, thus apparently
improving the diversity of the dataset. However,the
augmenting perspectives of these two methods are
not sufficiently comprehensive, and the accuracy
rate of the answers to new questions is suboptimal.
While their constructed augmented dataset enhances the capability of the model, different works
adopt different methods and employ a rather limited variety of augmentation methods. So we integrate and further enhance their strengths and expand the perspective of augmentation methods to
construct a multi-perspective augmentation dataset
for math, called MuMath (µ-Math) Dataset, including four categories. (1) In Data Reformulation, besides the question rephrasing, we propose
the solution reorganization to provide a compre
hensive roadmap for the process and detailed answers. (2) In Backward Creation, We have retained the FOBAR method and introduced the
Backward-Forward Transformation (BF-Trans) approach, which transforms equation-solving into
arithmetic problem-solving, generating new problems and solution methods that are distinctly different from the FOBAR style. (3) We’ve further
refined the existing question alteration from a fresh
perspective: expression replacement. It offers a
controllable and innovative way, compared to simply changing numbers or arbitrarily increasing difficulty. Also, we utilize majority sampling finetuning to boost answer accuracy and data quality. (4)
Additionally, beyond data augmentation for mathematical problem solving, we propose a Nested
Multi-task Construction Augmentation, where we
nest plan programming or question summarizing
texts into the solution, combining data of auxiliary tasks into the main task as solving the math
problem.
Through the process of supervised fine-tuning on
open-source language models, such as LLaMA-2,
and applying it to the MuMath dataset, we have successfully developed MuMath models in a variety
of sizes. This demonstrates that the dataset has the
potential to significantly enhance the mathematical
capabilities of open-source models.
Our contributions are as follows:
- We propose new data augmenting methods for
math reasoning: Reorganization, BF-Trans,
Expression Replacement and Nested Multitask Construction.
- We construct a multi-perspective dataset for
math, called MuMath Dataset, including data
reformulation, backward creation, question
alteration and nested multi-task.
- We conducted extensive experiments to
demonstrate the effectiveness of different augmentations, as well as give some insights on
mathematical reasoning for LLMs.
- By supervised fine-tuning on the open-source
LLMs on the MuMath dataset, we obtain the
MuMath model, which achieves new state-ofthe-art performances among tool-free methods. MuMath-70B has achieved 88.3% on
GSM8K (Cobbe et al., 2021) and 34.5% on
MATH (Hendrycks et al., 2021a) .
2933
-----
Rephrasing BF-Trans
Reorganization FOBAR
**Question: Weng earns $12 an hour for babysitting…** **Original Question: James writes a 3-page letter to 2 different**
**Answer: Weng earns 12/60 = 0.2 per minute . Working 50** friends twice a week. How many pages does he write a year?
minutes, she earned 0.2 * 50 = 10 **FOBAR Question: James writes a 3-page letter to X different**
**Reorganize the Answers:** friends twice a week. How many pages does he write a year?
1. Understand the Problem: Weng is a nanny who earns $12... If we know the answer to the above question is 624, what is
2. Calculate the Earning per Minute: To calculate Weng‘s the value of the unknown variable X?
earning … **BF-Trans Question: James writes a 3-page letter twice a**
3. Calculate Weng’s Total Earnings: We multiply... week. In a year, he writes 624 pages. Then, to how many
4. The answer is: 10. different friends is he writing these letters?
# 𝜇-Math
**Question: Randy has 60 mango trees on his farm. He also has** **Question:A package of candy has 3 servings with 120 calories**
5 less than half as many coconut trees as mango trees. How each. John eats half the package…
many trees does Randy have in all on his farm? **Original Solution:There were 3*120=360 calories in the**
**Expression Replacement Question: Randy has 60 mango** package. So he ate 360/2=180 calories.
trees on his farm. He also has 5 more than twice as many **Nested Multi-task Solution:**
coconut trees as mango trees. How many trees does Randy **[Outline] 1. A package… 2. Every … 3. John eats …**
have in all on his farm? **[Plan] 1. We must know the number … 2. We need calculate…**
**Answer:…** **[Execution] 1. Calculate the number of calories in the package...**
Expression Replacement Problem Outline
Difficulty Enhancement Solution Plan
Figure 2: Overview of the augmentation methods our MuMath employs, which can be divided into four categories: (1) Data Reformulation includes solution reorganization and question rephrasing; (2) Backward Creation
includes Backward-Forward Transformation (BF-Trans) and FOBAR; (3) Question Alteration includes expression
replacement and difficulty enhancement; (4) Nested Multi-task construction includes data of the auxiliary tasks, i.e.,
Problem Outline and Solution Plan. Please zoom in the image for a better view.
**2** **Related Work**
**Mathematical Reasoning** Currently, there are
two main research trajectories to enhance the mathematical ability of open-source models. (1) The
first trajectory focuses on LLMs purely, without
tool use. Yuan et al. (2023) propose a representative
tool-free methods, leveraging rejection sampling
finetuning (RFT) to enhance Llama’s mathematical ability, while WizardMath (Luo et al., 2023a)
chooses a reinforcement learning (RL) framework
and evolves its math capability through proximal policy optimization (PPO, Schulman et al.,
2017). The most recent tool-free methods are MuggleMath (Li et al., 2023) and MetaMath (Yu et al.,
2023), both of which manage to augment math
problem-solution data followed by finetuning the
open LLMs on these newly acquired data. (2) The
second trajectory underscores the integration of
LLMs with tool use, with Program-aided Language
model (PAL, Gao et al., 2023) and Program of
Thought (PoT, Chen et al., 2023) being two pioneering works. Besides, MAmmoTH (Yue et al., 2023)
employs both CoT and PoT that are combined in
a coarse-grained manner, with different samples
utilizing different approaches. ToRA (Gou et al.,
2023) performs imitation learning with problem solutions organized by interleaving multiple steps of
program-based tool use and natural language reasoning. Similarly, MathCoder (Wang et al., 2023),
contemporaneous with TORA, also curates the reasoning text in an interleaving manner, while it introduces interpolation problems to bridge the gap
in difficulty between GSM8K (Cobbe et al., 2021)
problems and MATH (Hendrycks et al., 2021b)
problems. Our work concentrates on fully exploiting LLM’s pretrained capability to conduct mathematical reasoning, thus progressing along the toolfree research trajectory.
**Data Augmentation** The process of data augmentation for mathematical reasoning can be divided into two categories. The first one involves
enhancing the content of both the problem and
its solution without altering their original meaning. CoT approaches (Wei et al., 2022; Fu et al.,
2022) present the solution in a step-by-step format,
making it easier for the model to learn. Rephrasing (Yu et al., 2023; Li et al., 2023) refers to restating the problem and its solution in a different
manner, thereby generating a new sequence of tokens. Rejection sampling (Yuan et al., 2023) utilizes a fully trained model to yield new reasoning
paths, and the quality of these paths is intrinsically
linked to the performance of the trained model.
The other one modifies the computational values
2934
-----
Example 3.1: Reorganization
**Question: Weng earns $12 an hour for babysitting. Yesterday, she just did 50 minutes of babysitting. How much**
did she earn?
**Answer: Weng earns 12 / 60 = 0.2 per minute . Working 50 minutes, she earned 0.2 * 50 = 10.**
**Reorganization Answers:**
1. Understand the Problem: - Weng is a babysitter who earns $12 per hour. Yesterday, she worked for 50 minutes.
Our task is to calculate how much she earned.
2. Calculate the Earning per Minute: - To calculate Weng’s earning per minute, we divide her hourly earning by
60 (the total number of minutes in an hour): 12 / 60 = 0.2 per minute.
3. Calculate Weng’s Total Earnings: - We multiply Weng’s earning per minute by the number of minutes she
worked: 0.2 * 50 = 10.
4. Answer: - Weng earned 10 for her 50 minutes of babysitting. The answer is: 10.
or logic of a problem, thus generating new problems. Then, through the method of knowledge
distillation (Huang et al., 2022; Li et al., 2022;
Magister et al., 2023; Ho et al., 2023; Fu et al.,
2023b; Shridhar et al., 2023), it generates new solutions and transfers reasoning abilities from the
teacher model (for instance, GPT4). The Evolinstruct method (Xu et al., 2023; Luo et al., 2023a)
and difficulty enhancement (Li et al., 2023) incorporate modifications such as adding constraints,
adjusting the context, and more to the original data.
FOBAR (Jiang et al., 2023a) generates a series of
questions for backward reasoning by masking numbers. It then samples a set of backward reasoning
chains to predict the masked number. Our proposed
method not only enriches these two types of augmentation, but also adds a multi-task augmentation
category. This can be nested into the existing data
to bolster the mathematical reasoning capabilities
of the model.
**3** **Methods**
The overview of our method is illustrated in Figure 2. The implementation of our proposed data
augmentation methods is to request GPT4 to obtain
the desired data through specific prompts.
**3.1** **Data reformulation**
Our data reformulation can be divided into two
primary categories: rephrasing and reorganization.
**Rephrasing** Rephrasing refers to rewriting a text
while keeping the original meaning unchanged,
which is also used in MetaMath (Yu et al., 2023).
The prompt we use for rephrasing is shown
in Prompt B.1. After requesting answers to
the rephrasing questions, we can get Dreph =
(Qreph, Sreph) by filtering out questions with
_{_ _}_
incorrect answers.
Figure 3: The relationship between token length and
accuracy on GSM8K test set.
**Reorganization** While rephrasing augments
questions without altering the original meaning,
reorganization merely amplifies a solution which
also holds the same meaning as the original solution. We believe that solutions which are both
standardized and detailed tend to be more easily
comprehended. So we have made solving steps
more understandable for learning by reorganization. After the reorganization through the LLM, the
solving steps will be more logically organized and
clearer. Phrases such as "understand the problem",
"define variables", and "calculate the number" act
as explicit instructions, leading us toward the final
result by "The answer is". See Example 3.1 for
details. The prompt we use for reorganization is
shown in Prompt B.2. We use Sreorg to denote the
reconstructed solution, and thus the new dataset we
get can be formalized as Dreorg = {(Q, Sreorg)}.
For the reorganization solutions, we manipulated
response length by adding a minimum word count
restriction in the prompt. Upon examining the generated response, it was discerned that longer token
lengths corresponded to lower complexity in overall responses. However, the parsing steps become
redundant when the token length becomes exces
2935
-----
sively long. The result could potentially lead to
models assimilating irrelevant information while
overlooking correct answers. See the example in
Appendix A. Consequently, this underscores the
importance of optimal response length for ensuring
model efficacy during reorganization augmentation. So we fine-tune LLaMA-2 7B utilizing data
of varying token lengths and subsequently depict
the correlation between token length and accuracy.
Figure 3 shows a linear accuracy increase for token lengths between 200 and 420, but the accuracy
begins to decline when the token length exceeds
420. So we have chosen to utilize a token length of
approximately 420 for the reorganization data.
Combining the rephrasing and reorganization
datasets, we have the reformulation dataset D1 =
_reorg_ _reph._
_D_ _∪D_
**3.2** **Backward-Forward Transformation**
FOBAR (Jiang et al., 2023a) masks some specific
value in the original forward question using “X”,
convert the final answer to a new condition, and
thus construct a backward question by asking to
find the unknown variable X. However, this method
tends to list equations concerning X and then solve
them, as still a forward reasoning process. Here our
purpose is to introduce backward questions with
directly arithmetic solutions instead of equation
solving, i.e., engage in as much reverse reasoning
as possible.
To this end, we propose a new method called
**Backward-Forward Transformation (BF-Trans).**
For a certain question-answer pair, we firstly utilize
FOBAR to transform the original question Q into
a backward one Qb; secondly, we rephrase the FOBAR question into a new form where the masked
value is requested directly instead of employing
an unknown variable X, resulting in a “secondary
forward” question which we called BF-Trans question, marked as Qbf . Example 3.2 shows the
differences among the original question, FORAR
and BF-Trans. Finally, we generate the solution
_Sbf for this BF-Trans question._ Collecting all
these BF-Trans augmented samples, we can have
_Dbf = {(Qbf_ _, Sbf_ )}. Note that the final answer of
the BF-Trans solution is correct after the filtering
procedure, corresponding to a certain masked number of the FOBAR question is corresponding to a
certain number. See Prompt B.3 and B.4 for more
details.
Combined with the FOBAR dataset _fobar,_
_D_
hence the backward reasoning part of our final train
ing set is D2 = Dbf ∪Dfobar.
**3.3** **Question Alteration**
Our observations have highlighted that diversity
and complexity inherent within training data play
an instrumental role in enhancing mathematical
reasoning capabilities. So we also strive to enhance our model’s ability to generalize by generating brand new problems. We have employed a
more diversified perspective in generation and significantly enhanced the quality of our data.
**Difficulty Enhancement** Drawing inspiration
from WizardMath (Luo et al., 2023a) and MuggleMath (Li et al., 2023), we increase the problem difficulty to create new questions Qcomplex.
Our methods include but are not limited to adding
constraints and modifying context. The prompt
we use for getting more difficult questions are in
Prompt B.5.
**Expression Replacement** We assert that changing numerals doesn’t alter the logic of the calculation, representing a singular enhancement. Conversely, arbitrarily increasing difficulty is excessively unrestricted. Thus, to broaden our perspective on question alteration, we introduce expression
replacement as a novel and controllable alteration
method that has a different calculation logic intrinsically. This method offers an interpolated perspective between changing numerical and increasing
difficulty arbitrarily. The comparison of these three
methods can be found in the Appendix A. Our approach operates in this manner: we first extract all
mathematical expressions from the solution. Subsequently, an arithmetic expression is altered to form
a novel equation. With the original problem statement and new equations as guides, a new question
can be generated denoted as Qreplace. Example 3.3
compares the original questions and the expression
replaced one. Refer to Prompt B.6 and B.7 for
more details about how we get the new questions.
The newly generated questions via expression
replacement may indeed be unreasonalbe and unsolvable. Therefore, when requesting answers, we
utilize GPT-4 to act as a judge model, assessing the
validity of the new questions. Only those questions
deemed reasonable are requested for answers and
added to the training set.
**Majority Sampling Finetuning** After generating new questions, we utilized GPT4 for solutions.
A challenge emerges as these new questions lack
2936
-----
Example 3.2: FOBAR vs BF-Trans
**FOBAR Question: Joe’s mother gave him $56 to go to the store. Joe bought X notebooks and 2 books. Each notebook costs**
$4 and each book costs $7. How much money does Joe have left? If we know the answer to the above question is 14, what is
the value of the unknown variable X ?
**FOBAR Response:We know Joe have $14 left after he bought X notebooks and 2 books. Each notebook costs $4 and he**
bought X of them, so he spent 4X dollars on notebooks. Each book costs $7 and he bought 2 of them, so he spent 2*7=14
dollars on books. Hence the total money he spent is 4X+14. We know he originally had $56, so we have 56-(4X+14)=42-4X
left. Since we know the answer is 14, we have 42-4X=14. Solving for X, we get 4X=28 and X=7.
**BF-Trans Question: Joe’s mother gave him $56 to go to the store. Joe bought 2 books and some notebooks. Each notebook**
costs $4 and each book costs $7. If Joe has $14 left after shopping, how many notebooks did he buy?
**BF-Trans Response : Joe initially had $56, and now has $14 left after shopping, so he spent 56 - 14 = 42 dollars.\nEach book**
costs $7 and he bought 2 books, so he spent 7 * 2=14 dollars on books. He spent 42 dollars in total and 14 dollars on books,
so he spent 42 - 14 = 28 dollars on notebooks. Each notebook costs $4, so he bought 28 / 4 = 7 notebooks.
Example 3.3: Expression Replacement
**Question: Randy has 60 mango trees on his farm. He also has 5 less than half as many coconut trees as mango**
trees. How many trees does Randy have in all on his farm?
**Response: Half of the number of Randy’s mango trees is 60 / 2 = 30 trees. So Randy has 30 - 5 = 25 coconut trees.**
Therefore, Randy has 60 + 25 = 85 treeson his farm. The answer is: 85
**New Question: Randy has 60 mango trees on his farm. He also has 5 more than twice as many coconut trees as**
mango trees. How many trees does Randy have in all on his farm?
**New Response: Twice the number of mango trees on Randy‘s farm is 60 * 2 = 120 trees. The total number of**
coconut trees on Randy’s farm is 5 more than twice the number of mango trees, a total of 120 + 5 = 125 trees.
Altogether, Randy has 125 + 60 = 185 trees on his farm. The answer is: 185
standard reference answers, possibly introducing
errors into the training data. Despite this, our experiments showed satisfactory performance from models trained with this data. We hypothesize that correct steps within incorrect final answers might assist LLMs in understanding math problems, aligning with theories proposed in (Fu et al., 2023a) and
(Yu et al., 2023). To maximize answer accuracy
for new questions, we implemented Majority Solution Sampling to achieve a higher-accuracy dataset
for these queries. We utilize majority voting with
_k = 30 to request solutions and only select one re-_
sponse with the majority answer for finetuning. We
name the above procedure as Majority Sampling
Finetuning (MSF).
We use Sreplace and Scomplex to stand for
the generated solutions to the newly introduced questions Qreplace and Qcomplex respectively, resulting in our recreation dataset D3 =
(Qreplace, Sreplace) (Qcomplex, Scomplex) .
_{_ _} ∪{_ _}_
**3.4** **Nested Multi-task Learning**
Multitask learning (Raffel et al., 2023; Sun et al.,
2019) equips a single model with the capability
to handle diverse tasks, and it can also enhance
the main task processing ability of the model, by
introducing strongly correlated auxiliary tasks. Different from continual learning (Parisi et al., 2019)
where different tasks are separated in stage level
(thus coarse-grained), multitask learning is a finegrained procedure, and it integrates the data from
different tasks into a single training batch for simultaneously learning (different tasks are distinguished
in batch level). We propose a more fine-grained
multi-task learning strategy called Nested Multi**Task learning (NestedMT), where we nest the data**
of auxiliary tasks into the data of the main task in a
sample level.
Specifically, for the main task of solving mathematical problems Q, we select two auxiliary tasks:
summarizing the question and listing the solving
plan. Different from the stage-level and batch-level
counterparts, we prepend the text of question outline O, solving plan P, or both to the solution
text S, assembling into an individual final solution
_Smt = O ⊕_ _P ⊕_ _S, where ⊕_ represents concatenation, for each original question. More details
are shown in Example A.3 and Prompt B.8. Then
we have D4 = {(Q, Smt)} as the nested multitask dataset. In nested multi-task learning, our
model can learn to solve the math problems and
meanwhile learn to manage various auxiliary tasks
strongly related to the math problem solving task
itself. All these tasks are concentrated into one
single sample and thus the auxiliary tasks can contribute in a more detailed and precise manner to
2937
-----
improve the model’s performance on its principal
task as math problem solving.
**4** **Experiments**
**4.1** **Experimental Setup**
**Datasets** We employ two widely recognized
mathematical reasoning benchmarks. The first
one, GSM8K (Cobbe et al., 2021), is a collection
of high-quality elementary school math problems,
comprising 7,473 training instances and 1,319 test
instances. The second benchmark is the MATH
dataset (Hendrycks et al., 2021a), which encompasses seven subjects, i.e., Prealgebra, Algebra,
Number Theory, Counting and Probability, Geometry, Intermediate Algebra and Precalculus. This
dataset includes math competition problems from
high school level with a total of 7,500 training samples and 5,000 testing samples.
We employ a series of augmentation methods
mentioned in Section 3, to create different subsets
based on the original GSM8K and MATH training
data. Note that there are significant differences in
difficulty levels and numbers of conditions between
questions of these two datasets. Therefore, after requesting new solutions and the subsequent filtering,
the amounts of data we obtained from GSM8K and
MATH are slightly different.
For question augmentation, we firstly employ
rephrasing, alteration, FOBAR and BF-Trans to
get about 7k questions for each method on each
original dataset. Then we make multiple requests
for solutions to all these questions (15 times on
GSM8K, and 30 times on MATH). We use majority voting to select samples for data augmented via
alteration, which have no ground truth answer; for
the other parts (rephrasing, FOBAR and BF-Trans),
we filter out samples with wrong answers. After
that, we vary the maximum number of samples for
one unique question (denoted as n), and plot the accuracy curves of the 7B models tested on GSM8K
and MATH (see Appendix C). We select a point
(n = 2) with an appropriate amount of data and
relatively strong performance, then proceed to sampling, and finally obtain the subsets of the resulting
MuMath dataset (about 277K). Reorganization and
Nested Multi-task Construction are merely solution
augmentation conducted on the original data, about
7K for each method on each dataset (totally 27K).
These above subsets add up to our final MuMath
dataset (304K).
**Implementation Details** Our study utilizes the
state-of-the-art open-source LLMs for fine-tuning,
comprising LLaMA-2 7B, LLaMA-2 13B, and
LLaMA-2 70B (Touvron et al., 2023). All these
models undergo full fine-tuning. We incorporate
system prompts from (Taori et al., 2023) during the
fine-tuning, and employ AdamW for optimization.
We set the global batch size to 128 and used a cosine learning rate scheduler with a 0.03 warm-up
period for 3 epochs. The computational hardware
are NVIDIA A800 GPUs.
Model GSM8K MATH
_colsed-source LLMs_
GPT-4 (OpenAI, 2023b) 92.0 42.5
GPT-3.5-Turbo (OpenAI, 2023a) 80.8 34.1
PaLM (540B)(Chowdhery et al., 2022) 56.5 8.8
PaLM-2 (540B) (Anil et al., 2023) 80.7 34.3
Minerva (540B) (Lewkowycz et al., 2022) 58.8 33.6
_tool-use LLMs_
_7B_
CodeLLaMa (PAL) (Rozière et al., 2023) 34.0 16.6
MAmmoTH (Yue et al., 2023) 53.6 31.5
MathCoder-L (Wang et al., 2023) 64.2 23.3
ToRA (Gou et al., 2023) 68.8 40.1
_13B_
CodeLLaMa (PAL) (Rozière et al., 2023) 39.9 19.9
MAmmoTH (Yue et al., 2023) 62.0 34.2
MathCoder-L (Wang et al., 2023) 72.6 29.9
ToRA (Gou et al., 2023) 72.7 43.0
_70B_
MAmmoTH (Yue et al., 2023) 76.9 41.8
MathCoder-L (Wang et al., 2023) 83.9 45.1
ToRA (Gou et al., 2023) 84.3 49.7
_tool-free LLMs_
_7B_
LLaMA-2 (Touvron et al., 2023) 14.6 2.5
LLaMA-2 SFT (Touvron et al., 2023) 41.6 -
LLaMA-2 RFT (Yuan et al., 2023) 50.3 -
WizardMath (Luo et al., 2023a) 54.9 10.7
MetaMath† (Yu et al., 2023) 66.3 19.7
MuggleMath (Li et al., 2023) 68.4 -
**MuMath** **76.2** **23.3**
_13B_
LLaMA-2 (Touvron et al., 2023) 24.3 6.3
LLaMA-2 SFT (Touvron et al., 2023) 51.1 9.2
LLaMA-2 RFT (Yuan et al., 2023) 55.3 -
WizardMath (Luo et al., 2023a) 63.9 14
MetaMath (Yu et al., 2023) 72.3 22.4
MuggleMath (Li et al., 2023) 74 -
**MuMath** **78.3** **26.9**
_70B_
LLaMA-2 (Touvron et al., 2023) 57.8 14.4
LLaMA-2 SFT (Touvron et al., 2023) 69.3 14.9
LLaMA-2 RFT (Yuan et al., 2023) 64.8 -
WizardMath (Luo et al., 2023a) 81.6 22.7
MetaMath(Yu et al., 2023) 82.3 26.6
MuggleMath (Li et al., 2023) 82.3 -
**MuMath** **88.3** **34.5**
Table 1: Comparison of testing accuracy to existing
LLMs on GSM8K and MATH. † denotes the results
are our own reproduction of MetaMath 7B (finetuned
on MetaMathQA), which are close to the ones in the
origianl paper.
2938
-----
**4.2** **Comparison Results**
In Table 1, we contrast the performance of current colsed-source LLMs, tool-use LLMs, and toolfree LLMs on GSM8K and MATH. It’s evident
that MuMath set a new standard in the 7B LLMs.
Compared to the baseline LLaMA-2 SFT, MuMath
shows significant accuracy increases on GSM8K
and MATH by 34.6% and 18.9%, respectively. In
contrast to MetaMath, MuMath improves by 9.9%
and 3.6% on GSM8K and MATH respectively. In
LLMs with 13B parameters, MuMath surpasses
MetaMath by 6% and 4.5% on GSM8K and MATH
datasets respectively. For LLMs with 70B parameters, MuMath surpasses MetaMath by 6% on the
GSM8K dataset. Significantly, against MetaMath
on the MATH dataset, MuMath improves impressively by a margin of 7.9%. Note that our MuMath
dataset contains approximately 304K samples, apparently less than that of MetaMathQA (395K).
This highlights our proposed data augmentation
methods’ effectiveness in enhancing mathematical
reasoning capabilities.
**4.3** **Ablation of Different Augmentation**
In this section, we conduct experiments to study the
effect of augmentations in MuMath. Table 2 showcases the fine-tuning results of each sub-component
within our proposed augmentation methods, tested
on both GSM8K and Math datasets. The data size
of each subset is consistent with the original data
(7K). Each dataset shows substantial improvement
compared to the original data. Remarkably, the
nested multi-task augmentation records a 9.4% increase under equal quantities on GSM8K. To sum
up, all of our augmentation methods effectively
boost the mathematical reasoning abilities of opensource LLMs.
GSM8K MATH
Method Datasize Acc Datasize Acc
SFT 7K 41.6 7K 4.4
Reorganization 7K 50.6 7K 6.0
Rephrasing 7K 46.2 7K 5.9
Reorganization + Rephrasing 7K+7K 52.1 7K+7K 7.3
FOBAR 7K 40.6 7K 4.9
BF-trans 7K 42.8 7K 5.8
FOBAR + BF-Trans 7K+7K 46.2 7K+7K 7.4
Expression Replacement (ER) 7K 47.7 7K 6.4
Complexity Enhancement (CE) 7K 45.1 7K 4.6
ER + CE 7K+7K 48.5 7K+7K 7.0
Nested Multi-task 7K 51.0 7K 6.8
Separate Multi-task 7K+7K 42.5 7K+7K 6.6
Table 2: Different data augmentation strategies on
GSM8K and MATH performances.
GSM8K MATH
_D1_ _D2_ _D3_ _D4 Acc._ _D1_ _D2_ _D3_ _D4 Acc._
21K 40K 74K 7K 21K 40K 94K 7K
✓ ✗ ✗ ✗ 59.6 ✓ ✗ ✗ ✗ 10.5
✗ ✓ ✗ ✗ 53.3 ✗ ✓ ✗ ✗ 10.7
✗ ✗ ✓ ✗ 57.7 ✗ ✗ ✓ ✗ 17.9
✗ ✗ ✗ ✓ 51.0 ✗ ✗ ✗ ✓ 6.8
✓ ✓ ✗ ✗ 64.0 ✓ ✓ ✗ ✗ 14.5
✓ ✗ ✓ ✗ 64.5 ✓ ✗ ✓ ✗ 19.1
✓ ✗ ✗ ✓ 60.8 ✓ ✗ ✗ ✓ 10.8
✗ ✓ ✓ ✗ 62.2 ✗ ✓ ✓ ✗ 20.2
✗ ✓ ✗ ✓ 55.6 ✗ ✓ ✗ ✓ 12.6
✗ ✗ ✓ ✓ 60.1 ✗ ✗ ✓ ✓ 18.6
✓ ✓ ✓ ✗ 67.9 ✓ ✓ ✓ ✗ 21.1
✓ ✓ ✗ ✓ 65.1 ✓ ✓ ✗ ✓ 14.8
✓ ✗ ✓ ✓ 64.0 ✓ ✗ ✓ ✓ 20.1
✗ ✓ ✓ ✓ 63.2 ✗ ✓ ✓ ✓ 20.6
✓ ✓ ✓ ✓ **69.2** ✓ ✓ ✓ ✓ **21.6**
MetaMath 64.4 17.7
MuggleMath 68.4 -
Table 3: Effect of different data subsets on the accuracy
of GSM8K and MATH. D1,D2, D3 and D4 are data
reformulation, backward creation, question alteration,
and nested multi-task learning. We also compare our
MuMath model with two baselines, all of which are
trained on datasets augmented from only one source.
Moreover, from the results obtained by the
stacked data, we discovered that the sub-methods
within each of the four data augmentation methods
are complementary to each other.
Table 3 enumerates the data volumes of four augmentation datasets, and it mainly presents the test
accuracy of various augmentation combinations.
As observed, the models trained on any kind of
augmentations outperform the SFT method significantly. In the GSM8K, employing a single data
augmentation method enables data reformulation
to attain an accuracy rate of 59.6%. In the MATH,
using only question alteration data yields a 17.9%
accuracy rate. Surprisingly, when combining multiple data augmentation methods in any manner, each
additional data increment contributes to further enhancement. This phenomenon persists even at high
accuracy levels. This highlights the versatility and
effectiveness of each augmentation method.
**4.4** **MSF vs. SFT**
We extract 7K new created questions from MATH
to validate our proposed Majority Sampling Fine
2939
-----
tuning (MSF). Specifically, for each question we
randomly select n solutions with the majority answer to construct MSF dataset (for those questions
with less than n majority solutions, we compromise
to use all the < n solutions), and directly request n
solutions with different answers to construct SFT
dataset. Figure 4 illustrates that as the amount of
training data increases (with n varying from 1 to
8), models trained using MSF and SFT both see
a progressive improvement in their performance.
However, the latter saturates earlier than the former, and across all data sizes, the MSF models
consistently outperform the SFT ones.
Figure 4: Comparison of performance between models
trained with MSF and with SFT on MATH dataset.
**4.5** **Out-of-Domain Math Reasoning**
We have evaluated our MuMath 7B and 70B models on out-of-domain datasets, including SVAMP,
MAWPS and ASDiv. The results are shown in
Table 4. On 2 out of 3 above datasets, the performances of our MuMath can even surpass the
state-of-the-art tool-use open LLM, ToRA.
We conduct another ablation study to test the outof-domain math reasoning capability of MuMath.
We firstly split our MuMath dataset into 2 subsets,
GSM8K augmented subset (142k) and MATH augmented part (162k). The in-domain test and out-ofdomain test are shown in Table 5. Apparently the
out-of-domain reasoning results are bad, consistent
with the observation in MuggleMath. According
to the results in Table 5 and those in Table 1, we
can conclude that GSM8K augmented data do not
help much in improving the accuracy on MATH
and vice versa, which matches the ablation results
in MetaMath.
Models SVAMP MAWPS ASDiv
_7B_
WizardMath-7B 57.3 73.3 59.1
ToRA-7B 70.4 **91.3** 78.7
MuMath-7B **76.8** 87.3 **93.6**
_70B_
WizardMath-70B 80.0 86.2 76.2
ToRA-70B 82.7 **93.8** 86.8
MuMath-70B **87.6** 92.0 **96.6**
Table 4: The out-of-domain reasoning capability comparison between MuMath and the other methods.
GSM8K (test) MATH (test)
GSM8K (train) 69.2 6.7
MATH (train) 42.4 21.6
Table 5: The in-domain and out-of-domain reasoning
capability of MuMath 7B on GSM8K and MATH.
**5** **Conclusion**
In this work, we propose four novel methods to
broaden the scope of augmentation for mathematical reasoning data: solution reorganization, BFTrans, expression replacement and nested multitask construction. Through a variety of augmenting
strategies, we create a multi-perspective mathematical problem-solving dataset based on GSM8K and
MATH, called MuMath. After finetuning LLaMA2 on the novel dataset, we get a series of models
(7B, 13B and 70B) equipped with excellent math
capability, which are also termed MuMath. Extensive empirical results demonstrate the effectiveness
of our proposed augmentation methods. Compared
to the open-source methods, our MuMath achieves
the best performance in tool-free LLMs across all
model scales, and even surpasses some tool-use
counterparts. We will explore other augmentation
methods for further improving mathematical reasoning performance of tool-free LLMs, as well as
more auxiliary tasks for nested multi-task learning.
**6** **Acknowledgments**
This work was supported by National Key
R&D Program of China under Grant No.
2020AAA0104500, HY Project under Grant No.
LZY2022033004, the Natural Science Foundation of Shandong Province under Grants No.
ZR2020MF131 and No. ZR2021ZD19, the Science and Technology Program of Qingdao under
Grant No. 21-1-4-ny-19-nsh, and Project of Associative Training of Ocean University of China
under Grant No. 202265007.
2940
-----
**References**
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak
Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng
Chen, et al. 2023. Palm 2 technical report. arXiv
_preprint arXiv:2305.10403._
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss,
Gretchen Krueger, Tom Henighan, Rewon Child,
Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu,
Clemens Winter, Christopher Hesse, Mark Chen, Eric
Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess,
Jack Clark, Christopher Berner, Sam McCandlish,
Alec Radford, Ilya Sutskever, and Dario Amodei.
[2020. Language models are few-shot learners.](http://arxiv.org/abs/2005.14165)
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming
Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph,
Greg Brockman, Alex Ray, Raul Puri, Gretchen
Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray,
Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz
Kaiser, Mohammad Bavarian, Clemens Winter,
Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen
Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie
Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain,
William Saunders, Christopher Hesse, Andrew N.
Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan
Morikawa, Alec Radford, Matthew Knight, Miles
Brundage, Mira Murati, Katie Mayer, Peter Welinder,
Bob McGrew, Dario Amodei, Sam McCandlish, Ilya
[Sutskever, and Wojciech Zaremba. 2021. Evaluating](http://arxiv.org/abs/2107.03374)
[large language models trained on code.](http://arxiv.org/abs/2107.03374)
Wenhu Chen, Xueguang Ma, Xinyi Wang, and
William W. Cohen. 2023. [Program of thoughts](http://arxiv.org/abs/2211.12588)
[prompting: Disentangling computation from reason-](http://arxiv.org/abs/2211.12588)
[ing for numerical reasoning tasks.](http://arxiv.org/abs/2211.12588)
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts,
Paul Barham, Hyung Won Chung, Charles Sutton,
Sebastian Gehrmann, Parker Schuh, Kensen Shi,
Sasha Tsvyashchenko, Joshua Maynez, Abhishek
Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben
Hutchinson, Reiner Pope, James Bradbury, Jacob
Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin,
Toju Duke, Anselm Levskaya, Sanjay Ghemawat,
Sunipa Dev, Henryk Michalewski, Xavier Garcia,
Vedant Misra, Kevin Robinson, Liam Fedus, Denny
Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim,
Barret Zoph, Alexander Spiridonov, Ryan Sepassi,
David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira,
Rewon Child, Oleksandr Polozov, Katherine Lee,
Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark
Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy
Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov,
[and Noah Fiedel. 2022. Palm: Scaling language mod-](http://arxiv.org/abs/2204.02311)
[eling with pathways.](http://arxiv.org/abs/2204.02311)
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, et al. 2021. Training verifiers to solve math
word problems. arXiv preprint arXiv:2110.14168.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
[Kristina Toutanova. 2019. BERT: Pre-training of](https://doi.org/10.18653/v1/N19-1423)
[deep bidirectional transformers for language under-](https://doi.org/10.18653/v1/N19-1423)
[standing. In Proceedings of the 2019 Conference of](https://doi.org/10.18653/v1/N19-1423)
_the North American Chapter of the Association for_
_Computational Linguistics: Human Language Tech-_
_nologies, Volume 1 (Long and Short Papers), pages_
4171–4186, Minneapolis, Minnesota. Association for
Computational Linguistics.
Yao Fu, Litu Ou, Mingyu Chen, Yuhao Wan, Hao Peng,
[and Tushar Khot. 2023a. Chain-of-thought hub: A](http://arxiv.org/abs/2305.17306)
[continuous effort to measure large language models’](http://arxiv.org/abs/2305.17306)
[reasoning performance.](http://arxiv.org/abs/2305.17306)
Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and
Tushar Khot. 2023b. Specializing smaller language
models towards multi-step reasoning. arXiv preprint
_arXiv:2301.12726._
Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark,
and Tushar Khot. 2022. Complexity-based prompting for multi-step reasoning. _arXiv preprint_
_arXiv:2210.00720._
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon,
Pengfei Liu, Yiming Yang, Jamie Callan, and Gra[ham Neubig. 2023. Pal: Program-aided language](http://arxiv.org/abs/2211.10435)
[models.](http://arxiv.org/abs/2211.10435)
Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen,
Yujiu Yang, Minlie Huang, Nan Duan, and Weizhu
[Chen. 2023. Tora: A tool-integrated reasoning agent](http://arxiv.org/abs/2309.17452)
[for mathematical problem solving.](http://arxiv.org/abs/2309.17452)
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
Jacob Steinhardt. 2021a. Measuring mathematical
problem solving with the MATH dataset.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
Jacob Steinhardt. 2021b. Measuring mathematical problem solving with the math dataset. arXiv
_preprint arXiv:2103.03874._
Namgyu Ho, Laura Schmid, and Se-Young Yun. 2023.
[Large language models are reasoning teachers.](http://arxiv.org/abs/2212.10071)
Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu,
Xuezhi Wang, Hongkun Yu, and Jiawei Han. 2022.
[Large language models can self-improve.](http://arxiv.org/abs/2210.11610)
2941
-----
Weisen Jiang, Han Shi, Longhui Yu, Zhengying Liu,
Yu Zhang, Zhenguo Li, and James T. Kwok. 2023a.
[Forward-backward reasoning in large language mod-](http://arxiv.org/abs/2308.07758)
[els for mathematical verification.](http://arxiv.org/abs/2308.07758)
[Weisen Jiang, Yu Zhang, and James Kwok. 2023b. Ef-](https://proceedings.mlr.press/v202/jiang23k.html)
[fective structured prompting by meta-learning and](https://proceedings.mlr.press/v202/jiang23k.html)
[representative verbalizer. In Proceedings of the 40th](https://proceedings.mlr.press/v202/jiang23k.html)
_International Conference on Machine Learning, vol-_
ume 202 of Proceedings of Machine Learning Re_search, pages 15186–15199. PMLR._
Aitor Lewkowycz, Anders Andreassen, David Dohan,
Ethan Dyer, Henryk Michalewski, Vinay Ramasesh,
Ambrose Slone, Cem Anil, Imanol Schlag, Theo
Gutman-Solo, et al. 2022. Solving quantitative reasoning problems with language models. Advances
_in Neural Information Processing Systems, 35:3843–_
3857.
Chengpeng Li, Zheng Yuan, Hongyi Yuan, Guanting
Dong, Keming Lu, Jiancan Wu, Chuanqi Tan, Xiang
[Wang, and Chang Zhou. 2023. Query and response](http://arxiv.org/abs/2310.05506)
[augmentation cannot help out-of-domain math rea-](http://arxiv.org/abs/2310.05506)
[soning generalization.](http://arxiv.org/abs/2310.05506)
Shiyang Li, Jianshu Chen, Yelong Shen, Zhiyu Chen,
Xinlu Zhang, Zekun Li, Hong Wang, Jing Qian,
Baolin Peng, Yi Mao, Wenhu Chen, and Xifeng
[Yan. 2022. Explanations from large language models](http://arxiv.org/abs/2210.06726)
[make small reasoners better.](http://arxiv.org/abs/2210.06726)
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis,
Luke Zettlemoyer, and Veselin Stoyanov. 2019.
[Roberta: A robustly optimized bert pretraining ap-](http://arxiv.org/abs/1907.11692)
[proach.](http://arxiv.org/abs/1907.11692)
Shayne Longpre, Le Hou, Tu Vu, Albert Webson,
Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V
Le, Barret Zoph, Jason Wei, et al. 2023. The flan
collection: Designing data and methods for effective
instruction tuning. arXiv preprint arXiv:2301.13688.
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei
[Lin, Shifeng Chen, and Dongmei Zhang. 2023a. Wiz-](http://arxiv.org/abs/2308.09583)
[ardmath: Empowering mathematical reasoning for](http://arxiv.org/abs/2308.09583)
[large language models via reinforced evol-instruct.](http://arxiv.org/abs/2308.09583)
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo
Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. 2023b. Wizardcoder:
Empowering code large language models with evolinstruct. arXiv preprint arXiv:2306.08568.
Lucie Charlotte Magister, Jonathan Mallinson, Jakub
Adamek, Eric Malmi, and Aliaksei Severyn. 2023.
[Teaching small language models to reason.](http://arxiv.org/abs/2212.08410)
Sewon Min, Mike Lewis, Luke Zettlemoyer, and Han[naneh Hajishirzi. 2022. MetaICL: Learning to learn](https://doi.org/10.18653/v1/2022.naacl-main.201)
[in context. In Proceedings of the 2022 Conference of](https://doi.org/10.18653/v1/2022.naacl-main.201)
_the North American Chapter of the Association for_
_Computational Linguistics: Human Language Tech-_
_nologies, pages 2791–2809, Seattle, United States._
Association for Computational Linguistics.
OpenAI. 2023a. Chatgpt: Optimizing language
[models for dialogue. https://openai.com/blog/](https://openai.com/blog/chatgpt)
[chatgpt.](https://openai.com/blog/chatgpt)
[OpenAI. 2023b. Gpt-4 technical report.](http://arxiv.org/abs/2303.08774)
German I. Parisi, Ronald Kemker, Jose L. Part, Christo[pher Kanan, and Stefan Wermter. 2019. Continual](https://doi.org/https://doi.org/10.1016/j.neunet.2019.01.012)
[lifelong learning with neural networks: A review.](https://doi.org/https://doi.org/10.1016/j.neunet.2019.01.012)
_Neural Networks, 113:54–71._
A. Radford, J. Wu, R. Child, D. Luan, D. Amodei,, and
I. Sutskever. 2019. Language models are unsupervised multitask learners. Technical Report.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine
Lee, Sharan Narang, Michael Matena, Yanqi Zhou,
[Wei Li, and Peter J. Liu. 2023. Exploring the limits](http://arxiv.org/abs/1910.10683)
[of transfer learning with a unified text-to-text trans-](http://arxiv.org/abs/1910.10683)
[former.](http://arxiv.org/abs/1910.10683)
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle,
Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi
Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom
Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish
Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal
Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier,
[Thomas Scialom, and Gabriel Synnaeve. 2023. Code](http://arxiv.org/abs/2308.12950)
[llama: Open foundation models for code.](http://arxiv.org/abs/2308.12950)
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec
[Radford, and Oleg Klimov. 2017. Proximal policy](http://arxiv.org/abs/1707.06347)
[optimization algorithms.](http://arxiv.org/abs/1707.06347)
Kumar Shridhar, Alessandro Stolfo, and Mrinmaya
[Sachan. 2023. Distilling reasoning capabilities into](https://aclanthology.org/2023.findings-acl.441)
[smaller language models. In Findings of the Asso-](https://aclanthology.org/2023.findings-acl.441)
_ciation for Computational Linguistics: ACL 2023,_
pages 7059–7073, Toronto, Canada. Association for
Computational Linguistics.
Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao
[Tian, Hua Wu, and Haifeng Wang. 2019. Ernie 2.0:](http://arxiv.org/abs/1907.12412)
[A continual pre-training framework for language un-](http://arxiv.org/abs/1907.12412)
[derstanding.](http://arxiv.org/abs/1907.12412)
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann
Dubois, Xuechen Li, Carlos Guestrin, Percy Liang,
and Tatsunori B. Hashimoto. 2023. Stanford alpaca:
An instruction-following llama model. [https://](https://github.com/tatsu-lab/stanford_alpaca)
[github.com/tatsu-lab/stanford_alpaca.](https://github.com/tatsu-lab/stanford_alpaca)
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton
Ferrer, Moya Chen, Guillem Cucurull, David Esiobu,
Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan
Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa,
Isabel Kloumann, Artem Korenev, Punit Singh Koura,
2942
-----
Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten,
Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu,
Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas
[Scialom. 2023. Llama 2: Open foundation and fine-](http://arxiv.org/abs/2307.09288)
[tuned chat models.](http://arxiv.org/abs/2307.09288)
Ke Wang, Houxing Ren, Aojun Zhou, Zimu Lu, Sichun
Luo, Weikang Shi, Renrui Zhang, Linqi Song,
[Mingjie Zhan, and Hongsheng Li. 2023. Mathcoder:](http://arxiv.org/abs/2310.03731)
[Seamless code integration in llms for enhanced math-](http://arxiv.org/abs/2310.03731)
[ematical reasoning.](http://arxiv.org/abs/2310.03731)
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou,
et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural
_Information Processing Systems, 35:24824–24837._
Yixuan Weng, Minjun Zhu, Fei Xia, Bin Li, Shizhu He,
Shengping Liu, Bin Sun, Kang Liu, and Jun Zhao.
[2023. Large language models are better reasoners](https://aclanthology.org/2023.findings-emnlp.167)
[with self-verification. In Findings of the Associa-](https://aclanthology.org/2023.findings-emnlp.167)
_tion for Computational Linguistics: EMNLP 2023,_
pages 2550–2575, Singapore. Association for Computational Linguistics.
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng,
Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin
[Jiang. 2023. Wizardlm: Empowering large language](http://arxiv.org/abs/2304.12244)
[models to follow complex instructions.](http://arxiv.org/abs/2304.12244)
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu,
Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo
[Li, Adrian Weller, and Weiyang Liu. 2023. Meta-](http://arxiv.org/abs/2309.12284)
[math: Bootstrap your own mathematical questions](http://arxiv.org/abs/2309.12284)
[for large language models.](http://arxiv.org/abs/2309.12284)
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting
Dong, Keming Lu, Chuanqi Tan, Chang Zhou, and
[Jingren Zhou. 2023. Scaling relationship on learning](http://arxiv.org/abs/2308.01825)
[mathematical reasoning with large language models.](http://arxiv.org/abs/2308.01825)
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao
Huang, Huan Sun, Yu Su, and Wenhu Chen. 2023.
[Mammoth: Building math generalist models through](http://arxiv.org/abs/2309.05653)
[hybrid instruction tuning.](http://arxiv.org/abs/2309.05653)
2943
-----
**A** **Examples**
**Example A.1: Knowledge**
**Question: Weng earns $12 an hour for babysitting. Yesterday, she just did 50 minutes of babysitting.**
How much did she earn?
**Answer:(200 token length)**
1. Understand the Problem: - Weng is a babysitter who earns $12 per hour. - Yesterday, she worked for
50 minutes. - Our task is to calculate how much she earned.
2. Calculate the Earning per Minute: - To calculate Weng’s earning per minute, we divide her hourly
earning by 60 (the total number of minutes in an hour): $12 / 60 = $0.2 per minute.
3. Calculate Weng’s Total Earnings: - We multiply Weng’s earning per minute by the number of
minutes she worked: $0.2 x 50 = $10.
4. Answer: - Weng earned $10 for her 50 minutes of babysitting.
**Answer:(520 token length)**
1. Analyze and Breakdown the Problem: - In this problem, Weng is earning a rate of $12 per hour for
babysitting. - However, she hasn’t worked a full hour; she’s only worked for 50 minutes. - Therefore,
the challenge here is to figure out just how much she’s earned for that 50 minutes, given that her hourly
rate is $12.
2. Convert Hours to Minutes: - Since the rate Weng is given is in hours but the time she worked is in
minutes, we need to convert her hourly rate to a per minute rate to simplify the calculations. - There are
60 minutes in an hour, so we’ll divide her hourly earnings by 60 to find out how much she earns per
minute. - The calculation is as follows: 12/60 = $0.2 per minute.
3. Connect to real-life situations: - Picture Weng jotting down her babysitting time and then calculating
this on her calculator, or maybe she’s keeping track in an app that monitors her babysitting earnings. Knowing that she earns $0.2 per minute, she would simply need to multiply this rate by the number of
minutes she worked.
4. Calculate Earnings: - Now that we have Weng’s per minute earning rate, we can multiply this by the
number of minutes she actually worked to find out how much she earned. - Here’s how that calculation
looks: she worked for 50 minutes, so her earnings are computed as 0.2 x 50 = $10.
5. Insert human element: - Now, Weng can feel confident that she’s accurately calculated her earnings.
She can spend or save this money as she chooses. Perhaps she’s saving her earnings for a special
purchase or maybe she counts on this income for regular expenses. Regardless, her diligence in
accurately calculating her earnings assures her of what she can expect in her paycheck.
6. Answer and Conclusion: - After faithfully jotting down her babysitting time, carefully calculating
her earnings, and then multiplying by the number of minutes she spent babysitting, Weng has found
that she has earned $10 for her 50 minutes of babysitting. The answer is: 10.
**Example A.2: The comparison of different alteration methods**
**Question: Randy has 60 mango trees on his farm. He also has 5 less than half as many coconut trees as**
mango trees. How many trees does Randy have in all on his farm?
**Response: Half of the number of Randy’s mango trees is 60 / 2 = 30 trees. So Randy has 30 - 5 = 25**
coconut trees. Therefore, Randy has 60 + 25 = 85 treeson his farm. The answer is: 85
**Change numbers: Randy has 60 mango trees on his farm. He also has 10 more than twice as many**
coconut trees as mango trees. How many trees does Randy have in all on his farm?
**Response: Half of the number of Randy’s mango trees is 60 / 2 = 30 trees. So Randy has 30 - 10 = 20**
coconut trees. Therefore, Randy has 60 + 20 = 80 treeson his farm. The answer is: 80
**Expression replacement: Randy has 60 mango trees on his farm. He also has 5 more than twice as**
many coconut trees as mango trees. How many trees does Randy have in all on his farm?
**Response: Twice the number of mango trees on Randy‘s farm is 60 * 2 = 120 trees. The total number**
2944
-----
of coconut trees on Randy’s farm is 5 more than twice the number of mango trees, a total of 120 + 5 =
125 trees. Altogether, Randy has 125 + 60 = 185 trees on his farm. The answer is: 185
**Increase complexity: Randy has 60 mango trees on his farm. If he plants 10 more mango trees and 8**
more coconut trees, the ratio of the number of mango trees to coconut trees would be 21. How many
trees does Randy have in all on his farm if he decides not to plant any additional trees?"
**Response: If Randy plants 10 more mango trees and 8 more coconut trees, then the ratio of the number**
of mango trees to coconut trees would be 21. This gives: (m + 10)/ (c + 8) = 21/1 Simplify the
Equation: - Distribute the left-hand side of the equation: m/c + (10/c) = 21 - Substituting m=60, we get
60/c + (10/c) = 21 Solve for c: - Multiply both sides by c to eliminate the denominator: 60 + 10 = 21c Combine like terms 70 = 21c - Solve for c: c = 70/21 c = 3.33 (rounded off) Solve for m: - Substituting
the value of c which is 3.33 and m=60 into the equation (m + 10)/ (c + 8) = 21/1 to see if the ratio of the
number of mango trees to coconut trees would be 21: (60+10) / (3.33+8) = 6.3636 but 21/1 is not equal
to 6.3636 -We conclude that planting 10 more mango trees and 8 more coconut trees are impossible,
and Randy has 60 mango trees and no more. Answer: - Therefore, Randy has 60 trees altogether on his
farm if he decides not to plant any additional trees.The answer is: 60
**Example A.3: Nested Multi-task Construction**
**Question:**
A package of candy has 3 servings with 120 calories each. John eats half the package. How many
calories did he eat?
**Original Solution:**
There were 3*120=360 calories in the package. So he ate 360/2=180 calories.
The answer is: 180.
**Nested Multi-task Solution:**
**[Outline]**
1. A package of candy has 3 servings.
2. Every serving contains 120 calories.
3. John eats half the package.
4. Our task is to calculate how many calories John did eat.
**[Plan]**
1. Calculate the total number of calories in the package.
2. Calculate the number of calories John did eat.
**[Execution]**
1. One serving of the candy contains 120 calories, and there are 3 servings, so the total number of
calories in the package is 3 * 120=360 calories.
2. John eats half of the package, which contains 360 / 2 = 180 calories.
The answer is: 180.
2945
-----
**The Prompts We Use**
**Prompt B.1: Rephrasing**
Follow the given examples:
Question: Angelo and Melanie want to plan how many hours over the next week they should study
together for their test next week. They have 2 chapters of their textbook to study and 4 worksheets
to memorize. They figure out that they should dedicate 3 hours to each chapter of their textbook
and 1.5 hours for each worksheet. If they plan to study no more than 4 hours each day, how many
days should they plan to study total over the next week if they take a 10-minute break every hour,
include 3 10-minute snack breaks each day, and 30 minutes for lunch each day?
Rephrase the above question: Angelo and Melanie need to study 2 chapters in their textbook and 4
worksheets for their upcoming test. They have planned to dedicate 3 hours for each chapter and
1.5 hours for each worksheet. They can study for a maximum of 4 hours each day, taking into
account 10-minute breaks every hour, 3 10-minute snack breaks per day, and 30 minutes for lunch.
How many days do they need to study in total over the next week to complete their study plan?
Question: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they
have left in total?
Rephrase the above question: If Leah had 32 chocolates and her sister had 42, and they both
consumed 35 chocolates, what is the total number of chocolates that they have left?
Question: There were nine computers in the server room. Five more computers were installed each
day, from monday to thursday. How many computers are now in the server room?
Rephrase the above question: If there were initially nine computers in the server room and five
more computers were added each day from Monday to Thursday, what is the current total number
of computers in the server room?
Question: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops.
How many lollipops did Jason give to Denny?
Rephrase the above question: If Jason initially had 20 lollipops and now has 12 after giving some
to Denny, how many lollipops did he give to Denny?
Question: Sam bought a dozen boxes, each with 30 highlighter pens inside, for $10 each box. He
rearranged five of these boxes into packages of six highlighters each and sold them for $3 per
package. He sold the rest of the highlighters separately at the rate of three pens for $2. How much
profit did he make in total, in dollars?
Rephrase the above question: Sam purchased 12 boxes, each containing 30 highlighter pens, at
$10 per box. He repackaged five of these boxes into sets of six highlighters and sold them for $3
per set. He sold the remaining highlighters individually at a rate of three pens for $2. What is the
total profit he made in dollars?
Question: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After
they are done, there will be 21 trees. How many trees did the grove workers plant today?
Rephrase the above question: If there were initially 15 trees in the grove and the grove workers are
planning to plant more trees today, resulting in a total of 21 trees, how many trees did the workers
plant today?
Question: {}
Rephrase the above question:
2946
-----
**Prompt B.2: Reorganization**
You are a mathematics expert. Based on the provided questions and answer process, you reorganize
the Solved Process to add the token length. Let’s reorganize the Solved Process. During this
process, You must expand the problem-solving process to approximately 420 tokens. The methods
can include: 1. increasing the complexity of the problem-solving process; 2. adding extra concepts;
3. inserting a background story; 4. increasing the steps of solving the problem. The end of response
needs to be: The answer is: [answer]. Here are two examples how to do it,
==== Example 1 ====
[Problem]:
Ralph is going to practice playing tennis with a tennis ball machine that shoots out tennis balls for
Ralph to hit. He loads up the machine with 175 tennis balls to start with. Out of the first 100 balls,
he manages to hit 2/5 of them. Of the next 75 tennis balls, he manages to hit 1/3 of them. Out of
all the tennis balls, how many did Ralph not hit?
[Answers]:
Out of the first 100 balls, Ralph was able to hit 2/5 of them and not able to hit 3/5 of them, 3/5 x
100 = 60 tennis balls Ralph didn’t hit. Out of the next 75 balls, Ralph was able to hit 1/3 of them
and not able to hit 2/3 of them, 2/3 x 75 = 50 tennis balls that Ralph didn’t hit. Combined, Ralph
was not able to hit 60 + 50 = 110 tennis balls Ralph didn’t hit. The answer is: 110
[Result]:
1. Understand the Problem:
- Here, Ralph is practicing with a tennis ball machine with 175 tennis balls.
- Ralph is hitting the balls in two batches, 100 and 75.
- From each batch, we know the fraction of balls Ralph was able to hit.
- Our task is to calculate how many tennis balls Ralph did not hit.
2. Identify the Fractions that Ralph doesn’t hit:
- From the first 100 balls, Ralph manages to hit 2/5 of them, meaning he was not able to hit 1-2/5
or 3/5 of these balls.
- Similarly, from the next 75 balls, Ralph manages to hit 1/3 of these, leaving 1-1/3 or 2/3 of these
balls that he doesn’t hit.
3. Calculate the number of Missed Balls in the First Batch:
- We multiply the total number of balls in the first batch by the fraction that Ralph doesn’t hit:
3/5 x 100 = 60 tennis balls Ralph didn’t hit from the first batch.
4. Calculate the number of Missed Balls in the Second Batch:
- We multiply the total number of balls in the second batch by the fraction that Ralph doesn’t hit:
2/3 x 75 = 50 tennis balls Ralph didn’t hit from the second batch.
5. Calculate Total Missed Balls:
- Now that we have calculated the number of missed balls from both the first and second batch,
- Adding these two numbers together gives us the total missed balls:
60 missed balls + 50 missed balls = 110 missed balls
6. Answer:
- So, out of all the tennis balls, Ralph didn’t hit 110 tennis balls during his practice session.
The answer is: 110
==== Example 2 ====
[Problem]:
Mary is making ice cubes with fruit frozen in them for a cocktail party. She makes some strawberry
cubes, and a number of blueberry cubes equal to 5 times the number of strawberry cubes minus 4.
If she makes 116 ice cubes total, how many blueberry cubes does she make?
[Answers]: Let s be the number of strawberry cubes and b be the number of blueberry cubes. We
know that b = 5s - 4 and s + b = 116. Substituting the first equation into the second, we get s +
5s - 4 = 116. Combining like terms, we get 6s - 4 = 116 Adding 4 to both sides, we get 6s = 120
2947
-----
Dividing both sides by 6, we get s = 20 Substituting this value back into the equation b = 5s - 4,
we get b = 96 The answer is: 96
[Result]:
1. Define the Variables:
- Let s represent the number of strawberry cubes.
- Let b represent the number of blueberry cubes.
2. Translate the Problem into Equations:
a. From the problem, we know that the number of blueberry cubes is 5 times the strawberry cubes
minus 4. This gives:
b = 5s - 4
b. We also know the total number of cubes is 116. This gives:
s + b = 116
3. Substitute and Solve:
- Substitute the expression from the first equation into the second equation to eliminate b. - This
gives:
s + 5s - 4 = 116
- Combine like terms:
6s - 4 = 116
- Solve for s:
Add 4 to both sides:
6s = 120
Divide both sides by 6:
s = 20
4. Find the Number of Blueberry Cubes:
- Using the value of s we found (20), plug it into the first equation to find b:
b = 5(20) - 4
- This gives:
b = 96
5. Answer: Mary makes 96 blueberry cubes for her cocktail party.
The answer is: 96
How about this question?
[Problem]: {}
[Answers]: {}
You must expand the problem-solving process to approximately 700 tokens. The end of response
needs to be: The answer is: [answer].
[Result]:
2948
-----
**Prompt B.3: Prompt for BF-Trans GSM8K Questions**
You are an experienced mathematics teacher in a grade school, and you are good at rephrase math
problems.
Now you are given a math problem (marked as [Problem]) with one and only one X as the unknown
variable. Your task is to rewrite or rephrase the original problem into an equivalent problem. The
equivalent problem you rephrased should not contain any Xs. Instead, you should ask for the
correlated unknown value using a questioning tone in the last sentence of your rephrased problem.
You can use more words to keep your rephrased problem expressed clearly and thoroughly, and
also can add more concepts to avoid ambiguity. Here are some examples:
==== Example 1 ====
[Problem]:
Ralph is going to practice playing tennis with a tennis ball machine that shoots out tennis balls for
Ralph to hit. He loads up the machine with 175 tennis balls to start with. Out of the first 100 balls,
he manages to hit X of them. Of the next 75 tennis balls, he manages to hit 1/3 of them. Out of all
the tennis balls, how many did Ralph not hit? If we know the answer to the above question is 110,
what is the value of the unknown variable X?
[Rephrase]:
Ralph is going to practice playing tennis with a tennis ball machine that shoots out tennis balls for
Ralph to hit. He loads up the machine with 175 tennis balls to start with, which are divided into 2
groups. In the first group there are 100 balls and the second group contains 75 ones. Of the second
group of balls, Ralph manages to hit 1/3. And out of all the tennis balls, Ralph did not hit 110.
Then out of the first 100 balls, what is the proportion of the balls Ralph hit?
==== Example 2 ====
[Problem]:
In one day, 200 people visit The Metropolitan Museum of Art in New York City. Half of the
visitors are residents of New York City. Of the NYC residents, X% are college students. If the cost
of a college student ticket is $4, how much money does the museum get from college students that
are residents of NYC?
If we know the answer to the above question is 120, what is the value of the unknown variable X?
[Rephrase]:
In one day, 200 people visit The Metropolitan Museum of Art in New York City. Half of the
visitors are residents of New York City. If the cost of a college student ticket is $4, and the museum
gets $120 from college students that are residents of NYC. Then of the NYC residents, what
percentage is the college students?
==== Example 3 ====
[Problem]:
X years from now, John will be 3 times as old as he was 11 years ago. How old is he now?If we
know the answer to the above question is 21, what is the value of the unknown variable X?
[Rephrase]:
If we know John is 21 years old, then how many years from now will John be 3 times as old as he
was 11 years ago?
==== Example 4 ====
[Problem]:
Taipei 101 in Taiwan is X feet tall with 101 floors. Suppose the first to 100th floors have height
each equal to 16.5 feet, how high is the 101st floor?If we know the answer to the above question is
23, what is the value of the unknown variable X?
[Rephrase]:
Taipei 101 in Taiwan has 101 floors. Suppose the first to 100th floors have height each equal to
16.5 feet, and the 101st floor is 23 feet. How high is the whole building?
==== Example 5 ====
2949
-----
[Problem]:
A fox can run at the maximum speed of X kilometers per hour. Considering the fox would run at a
constant speed, what distance would he make during 120 minutes? If we know the answer to the
above question is 100, what is the value of the unknown variable X?
[Rephrase]:
Considering a fox would run at a constant speed, and he will make 100 kilometers during 120
minutes. How many kilometers per hour the fox can run?
==== Example 6 ====
[Problem]:
Ruiz receives a monthly salary of $500. If he received a X% raise, how much will be Ruiz’s new
salary? If we know the answer to the above question is 530, what is the value of the unknown
variable X?
[Rephrase]:
Ruiz receives a monthly salary of $500. If his new salary will be $530 monthly, what percentage is
the raise?
==== Example 7 ====
[Problem]:
Tom decided to send his wife X dozen roses every day for the week. How many total roses did
he send?If we know the answer to the above question is 168, what is the value of the unknown
variable X?
[Rephrase]:
Tom sent his wife 168 roses totally for the week. How many dozen roses did he sent every day for
the week?
==== Example 8 ====
[Problem]:
Facebook decided to award a productivity bonus to all its female employees who are mothers. This
productivity bonus will total 25% of Facebook’s annual earnings, which was X for the year 2020.
It is known that Facebook employs 3300 employees; one-third are men, and of the women, 1200
are not mothers. How much was the bonus that each female mother employee received, assuming
each one received an equal amount? If we know the answer to the above question is 1250, what is
the value of the unknown variable X?
[Rephrase]:
Facebook decided to award a productivity bonus to all its female employees who are mothers.
This productivity bonus will total 25% of Facebook’s annual earnings. It is known that Facebook
employs 3300 employees; one-third are men, and of the women, 1200 are not mothers. Assuming
each one received an equal amount, the bonus that each female mother employee received was
$1250. Then how much was the Facebook’s annual earnings for the year?
==== Example 9 ====
[Problem]: {}
[Rephrase]:
2950
-----
**Prompt B.4: Request the solutions to BF-Trans**
You are an experienced mathematician. Now you are given a grade school math problem (marked
as [Problem]). The task you should accomplish is to solve this problem.
You should solve the problem step by step, as thoroughly and clearly as you can by using any
number of words or solution steps. Remember in the end of your solution you should output a
"The answer is:" and then output the result answer number you get. Here are some examples:
==== Example 1 ====
[Problem]:
Ralph is going to practice playing tennis with a tennis ball machine that shoots out tennis balls for
Ralph to hit. He loads up the machine with 175 tennis balls to start with, which are divided into 2
groups. In the first group there are 100 balls and the second group contains 75 ones. Of the second
group of balls, Ralph manages to hit 1/3. And out of all the tennis balls, Ralph did not hit 110.
Then out of the first 100 balls, what is the proportion of the balls Ralph hit?
[Solution]:
Out of all the 175 tennis balls, Ralph did not hit 110, so he hit 175 - 110 = 65 of them. Of the
second group of balls containing 75 balls, Ralph manages to hit 1/3, so in this group he was able
to hit 75 x 1/3 = 25 Of the first 100 balls, Ralph manages hit 65 - 25 = 40 tennis balls, so in this
group the proportion of the balls Ralph hit is 40 / 100 = 2/5 The answer is: 2/5
==== Example 2 ====
[Problem]:
In one day, 200 people visit The Metropolitan Museum of Art in New York City. Half of the
visitors are residents of New York City. If the cost of a college student ticket is $4, and the museum
gets $120 from college students that are residents of NYC. Then of the NYC residents, what
percentage is the college students?
[Solution]:
200 people visit The Metropolitan Museum of Art in New York City, half of the visitors are
residents of New York City, so the number of visitors that are residents of NYC is 200 / 2 = 100
The cost of a college student ticket is $4, and the museum gets $120 from college students that are
residents of NYC, so there are 120 / 4 = 30 college students that are residents of NYC.
We know the number of visitors that are residents of NYC is 100, and there are 30 college students
that are residents of NYC, so of the NYC residents, there is 30 / 100 = 30% college students,
meaning the percentage is 30
The answer is: 30
==== Example 3 ====
[Problem]:
Taipei 101 in Taiwan has 101 floors. Suppose the first to 100th floors have height each equal to
16.5 feet, and the 101st floor is 23 feet. How high is the whole building?
[Solution]:
The first to 100th floors have height each equal to 16.5 feet, so the total height of the first to 100th
floors is 16.5 x 100 = 1650 feet. We know the total height of the first to 100th floors is 1650 feet,
and the 101st floor is 23 feet, so the whole building is 1650 + 23 = 1673 feet. The answer is: 1673
==== Example 4 ====
[Problem]:
Considering a fox would run at a constant speed, and he will make 100 kilometers during 120
minutes. How many kilometers per hour the fox can run?
[Solution]:
The fox will make 100 kilometers during 120 minutes, and 120 minutes are 120 / 60 = 2 hours, so
he can run 100 / 2 = 50 kilometers per hour. The answer is: 50
==== Example 5 ====
[Problem]:
2951
-----
Facebook decided to award a productivity bonus to all its female employees who are mothers.
This productivity bonus was total 25% of Facebook’s annual earnings. It is known that Facebook
employs 3300 employees; one-third are men, and of the women, 1200 are not mothers. Assuming
each one received an equal amount, the bonus that each female mother employee received was
$1250. Then how much was the Facebook’s annual earnings for the year?
[Solution]:
It is known that Facebook employs 3300 employees and 1/3 are men, so 1 - 1/3 = 2/3 are women
and the number of women is 3300 x 2/3 = 2200
Of the women, 1200 are not mothers, so there are 2200 - 1200 = 1000 mothers. Assuming each
one received an equal amount, the productivity bonus that each female mother employee received
was $1250, and we know Of the women, there are 1000 mothers, so the total productivity bonus of
the mother employees received was $1250 x 1000 = $1,250,000
We know the total productivity bonus of the mother employees received was $1250,000, and it’s
25% of Facebook’s annual earnings for the year, so Facebook’s annual earnings for the year is
$1,250,000 / 25% = $1,250,000 /(1/4) = $ 1,250,000 x 4 = $5,000,000 The answer is: 5,000,000
==== Example 6 ====
[Problem]: {}
[Solution]:
**Prompt B.5: Difficulty Enhancement**
I want you to act as a math teacher. I will provide a grade school math question and you will help
to to create more challenging math questions by given ways. Given the question:
“James writes a 3-page letter to 2 different friends twice a week. How many pages does he write a
year?”, you will modify it by following ideas:
1. Change specific numbers: James writes a 2-page letter to 2 different friends 3 times a week.
How many pages does he write in 4 years?
2. Introduce fractions or percentages: James writes a 3-page letter to 2 different friends twice a
week. Each week, he adds 50% more pages to each letter. How many pages does he write in a
month?
3. Combine multiple concepts: James writes a 3-page letter to 2 different friends twice a week.
He uses both sides of the paper and each side can hold 250 words. If James writes 100 words per
minute, how long does it take for him to write all the letters in a week?
4. Include a conditional statement: James writes a 3-page letter to 2 different friends twice a week.
If it’s a holiday, he writes an additional 5-page letter to each friend. Considering there are 10
holidays in a year, how many pages does he write in a year?
5. Increase the complexity of the problem: James writes a 3-page letter to 2 different friends twice
a week. In addition, he writes a 5-page letter to 3 other friends once a week. How many pages
does he write in a month, assuming there are 4 weeks in a month?
Now you are given the question: {}
2952
-----
**Prompt B.6: Expression Replacement**
You are a mathematics expert, and you need to help me rewrite a math problem. This math problem
includes the question and an explanatory answer. First, you need to understand the question and
explanation, then extract the arithmetic expression from the explanation in the question. Next,
Then, randomly replace the arithmetic expressions, replace addition with subtraction, subtraction
with addition, multiplication with division, and division with multiplication. You can randomly
replace one or two operations. The key is to regenerate a corresponding question based on the
replaced arithmetic expression while ensuring that it makes sense logically. Follow the given
examples:
==== Example 1 ====
[Question]:
Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How
many clips did Natalia sell altogether in April and May?
[Response]:
Natalia sold 48/2 = 24 clips in May. Natalia sold 48+24 = 72 clips altogether in April and May.The
answer is: 72
[Mathematical expression]:
48/2 = 24, 48+24 = 72
[Changed mathematical expression]:
48*2 = 96, 48+96 = 144
[Changed Question]:
Natalia sold clips to 48 of her friends in April, and then she sold double as many clips in May.
How many clips did Natalia sell altogether in April and May?
==== Example 2 ====
[Question]:
Bella bought stamps at the post office. Some of the stamps had a snowflake design, some had
a truck design, and some had a rose design. Bella bought 15 snowflake stamps. She bought 9
more truck stamps than snowflake stamps, and 3 fewer rose stamps than truck stamps. How many
stamps did Bella buy in all?
[Response]:
The number of truck stamps is 15 + 9 = 24. The number of rose stamps is 24-13 = 21. Bella bought
15 + 24 + 21 = 60 stamps in all.The answer is: 60
[Mathematical expression]:
15 + 9 = 24, 24-13 = 21, 15 + 24 + 21 = 60
[Changed mathematical expression]:
15 - 9 = 6, 6-3 = 3, 15 + 6 + 3 = 24
[Changed Question]:
Bella bought stamps at the post office. Some of the stamps had a snowflake design, some had
a truck design, and some had a rose design. Bella bought 15 snowflake stamps. She bought 9
less truck stamps than snowflake stamps, and 3 fewer rose stamps than truck stamps. How many
stamps did Bella buy in all?
==== Example 3 ====
[Question]:
Randy has 60 mango trees on his farm. He also has 5 less than half as many coconut trees as
mango trees. How many trees does Randy have in all on his farm?
[Response]:
Half of the number of Randy’s mango trees is 60/2 = 30 trees. So Randy has 30 - 5 = 25 coconut
trees. Therefore, Randy has 60 + 25 = 85 treeson his farm.The answer is: 85
[Mathematical expression]:
60/2 = 30, 30 - 5 = 25, 60 + 25 =85
2953
-----
[Changed mathematical expression]:
60/2 = 30, 30 + 5 = 35, 60 + 35 = 95
[Changed Question]:
Randy has 60 mango trees on his farm. He also has 5 more than half as many coconut trees as
mango trees. How many trees does Randy have in all on his farm?
How about this question?
[Question]: {}
[Response]:
**Prompt B.7: Request the solutions to expression replacement questions**
I want you to act as an excellent math solver. You will solve the given math question step
by step.Retain decimals to three decimal places. The formulas in the process need to use the
format:48/2 = 24 clips. The end of response needs to be: The answer is: [answer]. Most importantly,
if something doesn’t make sense in the question, just write out: Sorry, this question is wrong.
Follow the given examples:
==== Example 1 ====
[Question]:
Studying for her test, Mitchell had read ten chapters of a book before 4 o’clock. When it clocked 4,
Mitchell had read 20 pages of the 11th chapter of the book she was studying from. After 4 o’clock,
she didn’t read the remaining pages of chapter eleven but proceeded and read 2 more chapters
of the book. If each chapter in the book had 40 pages, calculate the total number of pages that
Mitchell had read altogether?
[Result]:
Since each chapter of the book has 40 pages, Mitchell had read 10*40 = 400 pages from the
first ten chapters. After reading 20 pages of the eleventh chapter, the total number of pages that
Mitchell had read is 400+20 = 420 The next two chapters that she read had 2*40 = 80 pages. In
total, Mitchell read 420+80 = 500 pages of the book that day. The answer is: 500
==== Example 2 ====
[Question]:
Fern is checking IDs to get into an R-rated movie. She denied 20% of the 120 kids from Riverside
High, 70% of the 90 kids from West Side High, and half the 50 kids from Mountaintop High. How
many kids got into the movie?
[Result]:
First find how many kids from Riverside High are rejected: 20% * 120 kids = 24 kids. Then find
how many kids from West Side High are rejected: 70% * 90 kids = 63 kids Then find how many
kids from Mountaintop High are rejected: 50 kids / 2 = 25 kids Then add the number of kids from
each school to find the total number of kids: 120 kids + 90 kids + 50 kids = 260 kids Then subtract
all the kids who were rejected from the total number of kids to find the number who got in: 260
kids - 24 kids - 63 kids - 25 kids = 148 kids. The answer is: 148
==== Example 3 ====
[Question]:
After tests in California, the total number of Coronavirus cases was recorded as 2000 positive cases
on a particular day. The number of cases increased by 500 on the second day, with 50 recoveries.
On the third day, the total number of new cases spiked to 1500 with 200 recoveries. What’s the
total number of positive cases after the third day?
[Result]:
When 500 new cases were recorded after the tests, the total number of positive cases increased to
2000 cases + 500 cases = 2500 cases. With 50 recoveries, the total number of cases reduced to
2954
-----
2500 cases - 50 cases = 2450 cases. On the third day, with 1500 new cases, the total number of
cases became 2450 cases + 1500 cases = 3950 cases. If 200 people recovered from the virus, the
total number of people with Coronavirus became 3950 cases - 200 cases = 3750 cases. The answer
is: 3750"
==== Example 4 ====
[Question]:
Lisa and Carly go shopping together. Lisa spends $40 on t-shirts then spends half of this amount
on jeans and twice this amount on coats. Carly spends only a quarter as much as Lisa on t-shirts
but spends 3 times as much on jeans and a quarter of the amount Lisa spent on coats. In dollars,
how much did Lisa and Carly spend in total?
[Result]:
Lisa spends $40 on t-shirts / 2 = $20 on jeans. She also spends $40 on t-shirts * 2 = $80 on coats.
So Lisa has spent a total of 40 + 20 + 80 = $140. Carly spends $40 / 4 = $10 on t-shirts. She also
spends $20 per pair of jeans * 3 = $60 on jeans. She then also spends $80 Lisa2019s cost for coats[˘]
/ 4 = $20 on coats. So Carly has spent a total of 10 + 60 + 20 = $90. Lisa and Carly have therefore
spent a total of 140 + 90 = $230. The answer is: 230"
==== Example 5 ====
[Question]:
In a section of the forest, there are 100 weasels and 50 rabbits. Three foxes invade this region and
hunt the rodents. Each fox catches an average of 4 weasels and 2 rabbits per week. How many
rabbits and weasels will be left after 3 weeks?
[Result]:
3 foxes catch 4 weasels each every week for a total of 3*4 =12 weasels 12 weasels are caught
every week for 3 weeks for a total of 12*3 = 36 weasels 3 foxes catch 2 rabbits each every week
for a total of 3*2 = 6 rabbits 6 rabbits are caught every week for 3 weeks for a total of 6*3 =18
rabbits There were originally 100 weasels so now there are 100-36 = 64 weasels left There were
originally 50 rabbits so now there are 50-18 = 32 rabbits left There are 64+32 = 96 weasels and
rabbits left, The answer is: 96"
[Question]: {}
[Result]:
2955
-----
**Prompt B.8: Nested Multi-task Learning**
You are an experienced mathematics teacher in a grade school. Now you are given a grade school
problem marked as [Problem] and its correlated solution marked as [Solution]. In the end of the
[Solution], there is always a certain number after a "The answer is: " as the result answer. Based on
the [Problem] and the corresponding [Solution], You are asked to generate a new solution, which
is much clearer than the original one and much easier to understand even for the worst student.
The new solution you generate must by order contains [Outline], [Plan] and [Execution]. The
[Outline] is an outline or summary of the [Problem]; the [Plan] is a plan as an ordered list of steps
solving the problem; the [Execution] is an ordered list of your specific and detailed solving steps,
each of which should be as thorough and clear as possible. There is a one-to-one correspondence
between [Plan] list and [Execution] list. To make your new solution helpful and easy to understand,
you may: 1, increase the number of solving steps in [Plan] and [Execution] lists; 2, explain with
more words in each step of [Execution] list; 3, use several substeps in one step and even use
subsubsteps in one substep for [Execution] list; 4, calcuate each formula or equation step by step.
In the end of the [Execution], you should also output a "The answer is: " and then output a number
as your result answer. To guarantee the correctness of the result answer in the new solution you
will generate, you are recommended to refer to the original [Solution] or even follow it. Now every
time when you see "[Generate]:", you should start to generate your new solution. Here are some
examples:
==== Example 1 ====
[Problem]:
Ralph is going to practice playing tennis with a tennis ball machine that shoots out tennis balls for
Ralph to hit. He loads up the machine with 175 tennis balls to start with. Out of the first 100 balls,
he manages to hit 2/5 of them. Of the next 75 tennis balls, he manages to hit 1/3 of them. Out of
all the tennis balls, how many did Ralph not hit?
[Solution]:
Out of the first 100 balls, Ralph was able to hit 2/5 of them and not able to hit 3/5 of them, 3/5 x
100 = 60 tennis balls Ralph didn’t hit. Out of the next 75 balls, Ralph was able to hit 1/3 of them
and not able to hit 2/3 of them, 2/3 x 75 = 50 tennis balls that Ralph didn’t hit. Combined, Ralph
was not able to hit 60 + 50 = 110 tennis balls Ralph didn’t hit. The answer is: 110
[Generate]:
[Outline]
Ralph loads up the machine with 175 balls totally.
Ralph practices with 2 group of balls, the first group contains 100 balls and the second group
contains 75 balls.
Ralph manages to hit 2/5 balls of the first group, and manages to hit 1/3 balls of the second group.
Our task is to calculate how many tennis balls Ralph did not hit.
[Plan]
1. Calculate the proportion of balls that Ralph did not hit for the first group.
2. Calculate the number of balls that Ralph did not hit for the first group.
3. Calculate the proportion of balls that Ralph did not hit for the second group.
4. Calculate the number of balls that Ralph did not hit for the second group.
5. Calculate the total number of balls that Ralph did not hit for the first and the second group.
[Execution]
1. Calculate the proportion of balls that Ralph did not hit for the first group:
From the first group of 100 balls, Ralph manages to hit 2/5 of them, so he was not able to hit 1 2/5 = 3/5 of these balls.
2. Calculate the number of balls that Ralph did not hit for the first group:
The first group contains 100 balls, the proportion of balls that Ralph did not hit for the first group
is 3/5, so the number of balls that Ralph did not hit for the first group is 3/5 x 100 = 60 balls.
2956
-----
3. Calculate the proportion of balls that Ralph did not hit for the second group:
From the second group of 75 balls, Ralph manages to hit 1/3 of them, so he was not able to hit 1 1/3 = 2/3 of these balls.
4. Calculate the number of balls that Ralph did not hit for the second group:
The second group contains 75 balls, the proportion of balls that Ralph did not hit for the second
group is 2/3, so the number of balls that Ralph did not hit for the second group is 2/3 x 75 = 50
balls.
5. Calculate the total number of balls that Ralph did not hit for the first and the second group:
The number of balls that Ralph did not hit for the first group is 60, the number of balls that Ralph
did not hit for the second group is 50, so the total number of balls that Ralph did not hit for the
first and the second group is 60 + 50 = 110.
The answer is: 110
==== Example 2 ====
[Problem]:
In one day, 200 people visit The Metropolitan Museum of Art in New York City. Half of the
visitors are residents of New York City. Of the NYC residents, 30% are college students. If the
cost of a college student ticket is $4, how much money does the museum get from college students
that are residents of NYC?
[Solution]:
The number of visitors that are residents of NYC is 200 / 2 = 100 visitors The number of NYC
resident visitors that are college students is 100 * 0.30 = 30 visitors The money from the college
students that are residents of NYC is 30 * 4 = 120 The answer is: 120
[Generate]:
[Outline]
200 people visit the museum.
Half of the visitors are residents of NYC.
Of the NYC residents, 30% are college students.
The cost of a college student ticket is $4.
Our task is to calculate how much money the museum gets from college students that are residents
of NYC.
[Plan]
1. Calculate the number of visitors that are residents of NYC.
2. Calculate the number of NYC resident visitors that are college students.
3. Calculate the money from the college students that are residents of NYC.
[Execution]
1. Calculate the number of visitors that are residents of NYC:
200 people visit the museum totally, half of the visitors are residents of NYC, so the number of
visitors that are residents of NYC is 200 / 2 = 100 visitors.
2. Calculate the number of NYC resident visitors that are college students:
The number of visitors that are residents of NYC is 100, and of them 30% are college students, so
the number of NYC resident visitors that are college students is 100 * 0.30 = 30 visitors.
3. Calculate the money from the college students that are residents of NYC:
The number of NYC resident visitors that are college students is 30, and the cost of a college
student ticket is $4, so the money from the college students that are residents of NYC is 30 * $4 =
$120
The answer is: 120
==== Example 3 ====
[Problem]: {}
[Solution]: {}
[Generate]:
2957
-----
**C** **Continue scaling the data**
We continue scaling up the amount of data and set
larger n. As a result, the performance of models
with various parameter sizes are further improved.
For Llama 7B, the performance trends on GSM8K
and MATH are shown in Figure 5.
By setting different n, the performances with
respect to data sizes of our 7B and 13B models
are listed in Table 6, respectively. Note that when
_n = 5, the test accuracy of 7B model on GSM8K_
can achive 81.1. When n = 5 (corresponding to
643K data size), our 70B model achive 88.0 on
GSM8K and 40.0 on MATH. Due to the cost of
training, we did not try all the data sizes on 13B or
70B models.
Figure 5: The test accuracy with respect to the sizes of
the scaling data on GSM8K (top) and MATH (bottom)
_n_ Data Size GSM8K MATH
_7B_
1 141K 70.1 18.1
2 277K 75.0 23.1
3 406K 77.2 25.6
4 527K 78.5 28.1
5 643K **81.1** 29.0
6 751K 79.1 30.0
_13B_
4 527K 81.6 31.2
5 643K 82.1 32.8
6 751K 83.6 33.3
Table 6: By enlarging the values of n, the merged
datasets are range from 141K to 751K, while the performances of the finetuned 7B and 13B models are getting
better.
2958
-----
| [
"Shuo, Yin",
"Weihao, You",
"Zhilong, Ji",
"Xudong, Zhao",
"Guoqiang, Zhong",
"Kevin, Duh",
"Jinfeng, Bai",
"Helena, Gomez",
"Steven, Bethard"
] | 2024-06-01T00:00:00 | NAACL 2024 Findings | false | 0 | 0 | null | https://aclanthology.org/2024.findings-naacl.185 | null | https://www.semanticscholar.org/paper/8c479389f30749d94c99be3c0ef0d764721a81dd |
Multi-language Diversity Benefits Autoformalization | Autoformalization is the task of translating natural language materials into machine-verifiable formalisations. Progress in autoformalization research is hindered by the lack of a sizeable dataset consisting of informal-formal pairs expressing the same essence. Existing methods tend to circumvent this challenge by manually curating small corpora or using few-shot learning with large language models. But these methods suffer from data scarcity and formal language acquisition difficulty. In this work, we create mma, a large, flexible, multi-language, and multi-domain dataset of informal-formal pairs, by using a language model to translate in the reverse direction, that is, from formal mathematical statements into corresponding informal ones. Experiments show that language models fine-tuned on mma can produce up to $29-31$\% of statements acceptable with minimal corrections on the miniF2F and ProofNet benchmarks, up from $0$\% with the base model. We demonstrate that fine-tuning on multi-language formal data results in more capable autoformalization models even on single-language tasks. | null | null | null | null | NeurIPS 2024 | true | 0 | 0 | null | https://neurips.cc/virtual/2024/poster/96799 | null | null |
Multi-tool Integration Application for Math Reasoning Using Large Language Model | Mathematical reasoning is an important research direction in the field of artificial intelligence. This article proposes a novel multi tool application framework for mathematical reasoning, aiming to achieve more comprehensive and accurate mathematical reasoning by utilizing the collaborative effect of large language models (LLMs) and multiple external tools. Firstly, use a Math Tool to perform basic mathematical calculations during the inference process through interaction with LLM. Secondly, Code Tool can generate code fragments that comply with syntax rules and execute them, providing support for complex mathematical problems. Then, through the iterative reasoning of the CoT Tool, the logical coherence and accuracy of mathematical reasoning are enhanced. Ultimately, by using self consistency tools to select the final answer based on different parameters, the consistency and reliability of reasoning are improved. Through the synergistic effect of these tools, the framework has achieved significant performance improvement in mathematical reasoning tasks. We conducted experiments on the NumGLUE Task 4 test set, which includes 220 mathematical reasoning fill in the blank questions. The experimental results showed that, based on Math Tool, Code Tool, and CoT Tool, in Task 4 task,our method achieved an accuracy of 89.09,compared with the GPT3+FewShot baseline, Few Shot+ERNIE-4.0+self consistency improved by 49.09%, and compared with fine-tuning the Fine tuning baseline, Few Shot+ERNIE-4.0+self consistency improved by 52.29% | A novel multi tool application framework for mathematical reasoning is proposed, aiming to achieve more comprehensive and accurate mathematical reasoning by utilizing the collaborative effect of large language models (LLMs) and multiple external tools. | [
"Zhihua, Duan",
"Jialin, Wang"
] | 2024-08-22T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2408.12148v1 | https://arxiv.org/abs/2408.12148 | https://www.semanticscholar.org/paper/8202aa484288acb6b3ecf203e893573c81153bdf |
|
MultiMath: Bridging Visual and Mathematical Reasoning for Large Language Models | The rapid development of large language models (LLMs) has spurred extensive research into their domain-specific capabilities, particularly mathematical reasoning. However, most open-source LLMs focus solely on mathematical reasoning, neglecting the integration with visual injection, despite the fact that many mathematical tasks rely on visual inputs such as geometric diagrams, charts, and function plots. To fill this gap, we introduce \textbf{MultiMath-7B}, a multimodal large language model that bridges the gap between math and vision. \textbf{MultiMath-7B} is trained through a four-stage process, focusing on vision-language alignment, visual and math instruction-tuning, and process-supervised reinforcement learning. We also construct a novel, diverse and comprehensive multimodal mathematical dataset, \textbf{MultiMath-300K}, which spans K-12 levels with image captions and step-wise solutions. MultiMath-7B achieves state-of-the-art (SOTA) performance among open-source models on existing multimodal mathematical benchmarks and also excels on text-only mathematical benchmarks. Our model and dataset are available at {\textcolor{blue}{\url{https://github.com/pengshuai-rin/MultiMath}}}. | A multimodal large language model that bridges the gap between math and vision, MultiMath-7B achieves state-of-the-art (SOTA) performance among open-source models on existing multimodal mathematical benchmarks and also excels on text-only mathematical benchmarks. | [
"Liangcai, Gao",
"Shuai, Peng",
"Di, Fu",
"Xiuqin, Zhong",
"Hongguang, Fu",
"Zhi, Tang"
] | 2024-08-30T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.00147 | https://arxiv.org/abs/2409.00147 | https://www.semanticscholar.org/paper/ddde5c4e6452a4c1cd6ba19fb585f0ac646a9e23 |
|
NUMCoT: Numerals and Units of Measurement in Chain-of-Thought Reasoning using Large Language Models | Numeral systems and units of measurement are two conjoined topics in activities of human beings and have mutual effects with the languages expressing them. Currently, the evaluation of Large Language Models (LLMs) often involves mathematical reasoning, yet little attention is given to how minor changes in numbers or units can drastically alter the complexity of problems and the performance of LLMs. In this paper, we scrutinize existing LLMs on processing of numerals and units of measurement by constructing datasets with perturbations. We first anatomize the reasoning of math word problems to different sub-procedures like numeral conversions from language to numbers and measurement conversions based on units. Then we further annotate math word problems from ancient Chinese arithmetic works which are challenging in numerals and units of measurement. Experiments on perturbed datasets demonstrate that LLMs still encounter difficulties in handling numeral and measurement conversions. | This paper anatomizes the reasoning of math word problems to different sub-procedures like numeral conversions from language to numbers and measurement conversions based on units and demonstrates that LLMs still encounter difficulties in handling numeral and measurement conversions. | ## NUMCoT: Numerals and Units of Measurement in Chain-of-Thought Reasoning using Large Language Models
**Ancheng Xu[1][,][2]** **Minghuan Tan[1][∗]** **Lei Wang[3]** **Min Yang[1][∗]** **Ruifeng Xu[4]**
1Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences
2
University of Chinese Academy of Sciences
3
School of Computing and Information Systems, Singapore Management University
4
Harbin Institute of Technology (Shenzhen)
{ac.xu,mh.tan,min.yang}@siat.ac.cn, [email protected], [email protected]
**Abstract**
Numeral systems and units of measurement
are two conjoined topics in activities of human beings and have mutual effects with the
languages expressing them. Currently, the evaluation of Large Language Models (LLMs) often involves mathematical reasoning, yet little attention is given to how minor changes
in numbers or units can drastically alter the
complexity of problems and the performance
of LLMs. In this paper, we scrutinize existing LLMs on processing of numerals and units
of measurement by constructing datasets with
perturbations. We first anatomize the reasoning of math word problems to different subprocedures like numeral conversions from language to numbers and measurement conversions based on units. Then we further annotate math word problems from ancient Chinese
arithmetic works which are challenging in numerals and units of measurement. Experiments
on perturbed datasets demonstrate that LLMs
still encounter difficulties in handling numeral
and measurement conversions. The code and
[data are available at: https://github.com/CAS-](https://github.com/CAS-SIAT-ConsistencyAI/NUMCoT)
[SIAT-ConsistencyAI/NUMCoT.](https://github.com/CAS-SIAT-ConsistencyAI/NUMCoT)
**1** **Introduction**
Numbers and counting are the basic concepts in
human experience. Numbers are a set of conceptual tools made from words and other symbols for
specific quantities and a key set of linguistically
based innovations that distinguish human species
from others (Everett, 2017). The development of
numeral systems allows humans to express numbers in a consistent manner.[1] Counting is usually
not a monotone process of manipulating numbers
from a numeral system but to quantify objects with
a units of measurement[2] to compare the magnitude.
_∗Corresponding author._
[1https://en.wikipedia.org/w/index.php?](https://en.wikipedia.org/w/index.php?title=Numeral_system)
[title=Numeral_system](https://en.wikipedia.org/w/index.php?title=Numeral_system)
[2https://en.wikipedia.org/w/index.php?](https://en.wikipedia.org/w/index.php?title=Unit_of_measurement)
[title=Unit_of_measurement](https://en.wikipedia.org/w/index.php?title=Unit_of_measurement)
In the literature, Thawani et al. (2021) adopt the
taxonomy discipline called Core Systems of Number (Feigenson et al., 2004) from cognitive science.
The tasks in numeracy are then categorized by the
granularity and units attached to the quantities in
the task, where granularity means whether the encoding of the number is exact or approximate, and
units represent whether the numerals are in their
numerical forms or grounded with units of measurement. Based on the taxonomy, existing numeracyoriented tasks are identified as simple arithmetic
tasks (Wang et al., 2021), numeration tasks (Naik
et al., 2019; Wallace et al., 2019; Johnson et al.,
2020), magnitude comparison tasks (Naik et al.,
2019; Wallace et al., 2019), Math Word Problems
(MWPs) (Roy and Roth, 2015; Wang et al., 2017;
Amini et al., 2019), exact facts in the context of numeracy (Lin et al., 2020; Mishra et al., 2020), measurement estimation tasks (Forbes and Choi, 2017;
Elazar et al., 2019; Zhou et al., 2020) and numerical language modeling tasks. There are still tasks
which fall out the taxonomy, such as numeric paraphrasing (one-to-one correspondences between different surface forms of the same number), quantity
entailment tasks (Mishra et al., 2020), numeral understanding tasks, Fused-Head Resolution, counting tasks (Suzgun et al., 2019; Bhattamishra et al.,
2020) and other domain-specific tasks. As far as
we are concerned, the tasks discussed above cover
a wide range of topics in numeracy and address a
lot of challenges faced by numerals and units of
measurement.
However, we still need to address the issue of numeracy when discussing arithmetic by using pure
numerals and making an extra effort to take units
of measurement into consideration. The inadequacy in accurately converting numerals with units
of measurement may lead to unpredictable consequences in real-life scenarios, especially in the era
of Large Language Models (LLMs) where decoderonly generation methods are being employed.
14268
-----
|今|分之一步再加八分之一步再加九 分之一步再加十分之一步。如果 田面积为一亩,那么长度是多少? 答:八十一又七千三百八十一分 之六千九百三十九步。 今|
|---|---|
|EN|Now we have a field, the width of the field is 1 +1+1+1+1+1+ 2 3 4 5 6 1+1+1+ 1 Bu(step). What is 7 8 9 10 the length of the field if the area is|
||1 Mu? Answer: 816939Bu. 7381|
|||
**Numerals** **Units of Measurement** **SUANJING**
二百四十 240 一 亩 二百四十 积步 今有田广一步半、三分步之一、
四分步之一、五分步之一、六
**Two Hundred And Forty** 1 Mu 240 Bu[2] 古 分步之一、七分步之一、八分
步之一、九分步之一、十分步
二分之一 1 一 步 半 一又二分之一 步 之一。求田一亩,问︰从几何?
答:八十一步、七千三百八十
2 1 Bu half 1.5 Bu ZH 一分步之六千九百三十九。
**One over Two**
4895米=?厘米 现在有一块田,宽一步半外加三
4895 meters = ? centimeters 分之一步再加四分之一步再加五
七千三百八十一 6939 分之一步再加六分之一步再加七
分之六千九百三 7381 776周 - 972天 = ?天 今 分之一步再加八分之一步再加九
十九 776 weeks - 972 days = ? days 分之一步再加十分之一步。如果
田面积为一亩,那么长度是多少?
**Six Thousand Nine Hundred** 8吨815千克 + 37吨 = ?吨?千克 答:八十一又七千三百八十一分
**And Thirty Nine over Seven** 8 tons 815 kilograms + 37 tons = ? 之六千九百三十九步。
**Thousand Three Hundred** tons ? kilograms
Now we have a field, the width of
**And Eighty One** 1 1 1 1 1
the field is 1 +
**Cross Lingual MWP** EN 1 1 1 1 2 [+] 3 [+] 4 [+] 5 [+] 6 [+]
三十六点八零 36.80 一支铅笔重28.3克。5支铅笔有多重? 7 [+] 8 [+] 9 [+] 10 [Bu(step). What is ]
One pencil weighs 28.3 grams. How the length of the field if the area is
**Thirty-six Point Eight Zero** 1 Mu?
much do 5 pencils weigh? Answer: 81 6939
7381 [Bu. ]
Figure 1: On the left of the image are numeral conversions tasks. In the middle are challenges related to unit
conversion and mathematical problems. On the far right is an example from SUANJING, featuring its original
problem in ancient Chinese.
Conventional LLMs (Workshop, 2023; OpenAI,
2023; Zeng et al., 2023; Touvron et al., 2023a) implicitly assume that numeral systems and units of
measurement are innate, and they conduct analysis
at the reasoning level to demonstrate their ability in solving math word problems. For example,
Wei et al. (2023) uses Chain-of-Thought method to
prompt LLMs to generate coherent series of intermediate reasoning steps to solve problems. We argue that this assumption needs further verification,
and better prompting methods are also needed to
explore the extent to which the assumption actually
works. In a math word problem, if the conversion
of numerals or scale of units fails, it’s not guaranteed to be correct, even if each further reasoning
step is on the right track. We justify our claim from
the following aspects: (1) In LLMs, the extrapolation of numerals is more difficult to define, as
the numbers in the training set have a wider range
compared to traditional models. (2) Although most
math word problems adopt the Hindu-Arabic writing style for numerals, it’s still common to use
pronunciations with written style to mark a number
for the advantage of being irrevocable, especially
in Chinese. When writing Arabic numerals, we
often overlook the magnitude and only focus on
the length of the numerals. However, the rules for
reading numbers are very different. For example, in
English, every three digits are divided into a scale,
while in Chinese, it is every four digits. When
pronouncing, we first focus on the length of the
numbers, then on the magnitude, and finally group
and read them one by one. (3) To our best knowledge, the investigation on units of measurement has
been conducted through measuring skill tests (unit
conversion, reference range detection, and measure
comparison) (Park et al., 2022) with pretrained language models and has identified their lack of such
abilities. It is still unknown to what extent LLMs
can overcome this challenge, especially in uniting
numeral conversions with units of measurement.
To achieve these goals, we construct four
datasets to synthesize the procedure of how humans process numerals and units of measurement.
The procedure is anatomized into sub-procedures
like converting words into numbers, dealing with
units of measurement with different scales and
solving the problem using reasoning and rationale.
For each sub-procedure, we employ random numbers and addition operations to perturb the dataset,
thereby reducing the generation of memorization
issues.
In this paper, we focus on ChatGPT (OpenAI,
2022), ChatGLM series models (Zhipu.AI, 2023)
14269
-----
, ERNIE-Bot (Baidu, 2023) and LLaMA-2 family
models (Touvron et al., 2023b). We construct different prompts to elicit LLMs to generate responses
for the datasets above. Our experiments reveal that
LLMs exhibit robustness in converting between
numbers and English text, but less effectiveness in
converting between numbers and Chinese text.
Furthermore, LLMs consistently struggle to
memorize conversion ratios between different units,
posing challenges for automatic numeral conversions based on unit changes. In MWPs involving
numeral conversions and units of measurement,
LLMs perform well. However, LLMs often struggle to provide correct answers to SUANJING problems that require specialized long-tail knowledge.
In summary, our work makes the following contributions:
1. We construct four datasets to explore the performance of LLMs in tasks that involve numeral conversions and unit conversion, which
are crucial research questions for LLMs.
2. We discover and verify that introducing CoT
in certain subtasks significantly deteriorates
the reasoning performance of LLMs. In the
experimental section, we provide the corresponding analysis.
3. We conduct prompt-based experiments on
LLMs to assess their ability in numeral conversions and units of measurement, thereby
highlighting a new direction for training and
benchmarking LLMs.
**2** **Related Work**
**2.1** **Units of Measurement in Numeracy**
Units of measurement in numeracy have been attracting attention from the community because
of their relationship with common sense in life
and domain knowledge in applications. Despite recent success of pre-trained language models (PLMs) (Devlin et al., 2019; Liu et al., 2020),
their reasoning abilities using numerical commonsense is surprisingly poor (Lin et al., 2020) and
PLMs lack the capability required for reasoning
over measurements (Park et al., 2022). The knowledge on scaling of measurement, such as 1000 me_ters make a km, can add extra challenge to numeri-_
cal reasoning tasks (Mishra et al., 2022).
While traditional explorations over measurements address more on quantity identification with
measurements (Harper et al., 2021; Göpfert et al.,
2022) and their comparable properties (Forbes and
Choi, 2017; Lin et al., 2020; Park et al., 2022), we
focus more on the accuracy of their usage from
arithmetic perspective. With the development of
CoT-based approaches in LLMs, we are also curious how they perform on dealing with different
system of units in either base forms and derived
forms.
**2.2** **Numeracy in Large Language Models**
Besides the survey conducted by Thawani et al.
(2021) that is mentioned in Section 1, we also review how numeracy is discussed in the era of LLMs.
The evaluation of GPT-3 (Brown et al., 2020) over
NumGLUE (Mishra et al., 2022) indicates that it is
a better few-shot learner but not necessarily a better
many-shot learner. In arithmetic, MathGLM (Yang
et al., 2023) breaks the misconception that LLMs
are unable to accurately perform arithmetic operations and trains a model which can accurately perform multi-digit arithmetic operations with almost
100% accuracy without data leakage, significantly
surpassing GPT-4 (OpenAI, 2023).
**3** **Datasets and Perturbations**
**3.1** **Datasets**
For math word problems using different numeral
systems and units of measurement, we are curious about how LLMs process such information in
their reasoning steps. We choose to anatomize the
reasoning of math word problems into different
sub-procedures, like conversions between numbers
and words, conversions with units of measurement.
We first build the Numeral Conversions dataset and
the Conversions with Units of Measurement dataset.
Then we construct the Cross Lingual MWPs dataset
that involves math word problems with Chinese and
English, and the SUANJING dataset abundant with
these challenges. The datasets are illustrated in
Figure 1.
**Numeral Conversions** The conversion of numerals to words (Num2Words) and its inverse process
_Words2Num are two basic abilities for humans to_
manipulate numbers. Pronunciation of numerals is
critical for humans to express quantities precisely.
For example, an integer 21,600,900 should be pronounced as “twenty one million six hundred thousand nine hundred only" in English and “二千一百
```
六十万零九百" in Chinese. The task is also called
```
14270
-----
as (Numeric) Paraphrasing (Thawani et al., 2021).
The practice using text conversion from numerical to standard spelled-out numbers in numeracy
probing has been conducted earlier in other multilingual numerical understanding works (Johnson
et al., 2020).
Different from them, where numbers are generated from a smaller range of 0 to 999, we generate numbers from 0 to trillions and consider the
complexity of each number from both scale and
pronunciation forms. The Numeral Conversions
dataset is separated into the following splits:
1. The Numeral Conversions Medium split consists of 400 randomly generated integers
falling into the ranges of zero to a thousand (01K), a thousand to a million (1K-1M), a million to a billion (1M-1B), and a billion to a
trillion (1B-1T), with each range containing
100 integers.
2. The Numeral Conversions Easy split comprises 400 Arabic numerals with lengths identical to those in the Numeral Conversions
_Medium split, but the corresponding pronunci-_
ation forms in Chinese and English are significantly shorter.
3. The Numeral Conversions Hard split consists
of 200 fractions and 200 decimals. For fractions, the numerators and denominators of the
fractions are randomly sampled from the same
four numerical ranges mentioned earlier, ensuring they are of similar scales. Two random
integers, A and B, are generated within their
respective numerical range, forming a fraction
in the format A/B . Similarly, two random
integers, C and D, are selected within their
corresponding numerical range, composing a
decimal in the format C.D .
**Conversions with Units of Measurement** In
most human experiences, numbers are used in joint
with units of measurement to express real-world
quantities. In specific scenarios, units of measurement with different scales are also ubiquitous. For
example, 1.5 litre is equivalent to 1 litre plus 500
milliliters. However, it’s still questionable whether
LLMs process such information similarly as humans.
To emphasize this sub-procedure, we create parallel datasets in both Chinese and English based on
18 units commonly used by humans, such as length,
time, weight, and money, including centimeters,
seconds, kilograms, yuan, and other units. These
datasets are generated using random numbers and
are identical in all aspects except for the language.
Additionally, we categorize the questions into three
levels of difficulty.
1. The Units of Measurement Easy split involves
the conversion of numerical values from one
unit to another. For example, 856 grams = ?
milligrams.
2. The Units of Measurement Medium split requires performing addition or subtraction between two units before converting to another
unit. For example, 738 seconds - 5 milliseconds = ? milliseconds.
3. The Units of Measurement Hard split involves
a more complex process: combining two units
into one and then performing addition or subtraction operations before converting to another unit. For example, 4 days 387 hours +
81 days = ? days ? hours.
LLMs require common sense and reasoning abilities to complete conversions at all three levels.
**MWPs and SUANJING** To compare the challenges introduced by numeral conversions and units
of measurement, we utilize a bilingual MWPs
dataset redacted by Tan et al. (2022) and a Chinese dataset SUANJING translated from ancient
Chinese MWPs. The bilingual MWPs dataset is
compiled from AddSub (Hosseini et al., 2014), SingleOp (Roy et al., 2015) and MultiArith (Roy and
Roth, 2015), containing 1557 elementary school
math word problems.
SUANJING problems are constructed by translating ancient Chinese to modern Chinese while
preserving character-level numeral representations. We select SUANJING because it comprehensively tests LLMs on tasks like Num2Words,
Words2Num, and Conversations with Units of Measurement. This setup allows us to examine LLMs’
performance under various conditions: without
CoT, with CoT but lacking rare knowledge, and
with CoT plus rare knowledge. The translation is
performed by ChatGLM-6B (Du et al., 2022; Zeng
et al., 2022) and further refined by human experts.
We list details about SUANJING in Appendix A.
14271
-----
**Num2Words** **Words2Num**
**Zero-shot** **Zero-shot CoT** **Few-shot** **Few-shot CoT** **Zero-shot** **Zero-shot CoT** **Few-shot** **Few-shot CoT**
**ZH** **EN** **ZH** **EN** **ZH** **EN** **ZH** **EN** **ZH** **EN** **ZH** **EN** **ZH** **EN** **ZH** **EN**
**ChatGLM-6B** 28.75 6.50 22.75 1.50 20.00 7.75 7.00 4.50 75.50 50.25 59.75 40.25 70.25 45.75 55.50 32.50
**ERNIE-Bot-turbo** 39.00 12.25 28.50 8.75 48.25 44.75 33.25 36.75 74.75 57.00 **66.25** 29.50 87.00 67.75 **76.00** 51.00
**ChatGLM-Turbo** 39.25 41.75 38.75 35.25 45.75 42.00 32.25 30.00 **80.50** 61.25 53.00 40.00 **88.50** 73.50 62.50 38.50
**Llama2-7B** 12.50 18.50 9.50 12.75 21.50 44.75 7.50 23.75 27.00 30.50 14.00 28.25 39.00 68.50 17.00 29.75
**Llama2-13B** 19.50 37.00 7.75 15.00 33.00 52.75 4.25 17.50 27.00 20.50 13.25 10.50 62.50 76.50 33.75 31.50
**Llama2-70B** 32.75 45.50 8.25 17.25 33.00 54.75 6.50 48.50 38.00 62.75 27.00 21.00 38.75 67.75 19.75 23.75
**ChatGPT** **68.00** **98.25** **54.25** **90.25** **72.50** **99.25** **57.25** **96.50** 61.25 **100.00** 45.75 **68.00** 63.75 **99.75** 58.50 **89.00**
Table 1: Overview of conversion accuracy for Num2words and Words2Num on the Numeral Conversions Medium
split using the four prompt methods: Zero-shot, Zero-shot with CoT, Few-shot and Few-shot with CoT.
**3.2** **Perturbations**
To avoid the generation from memorization issues
that might occur with LLMs, we decide to perturb
the datasets created above. For example, to design a
dataset with Arabic numeral lengths equal to those
in the Numeral Conversions Medium dataset, and
with Chinese and English representations shorter
than those in the Numeral Conversions Medium
dataset, the numerical format of the Numeral Con_versions Easy dataset should ideally follow that of_
_M × 10[N]_ .
However, considering the likelihood of LLMs
encountering Numeral Conversions Easy dataset
numbers frequently during pretraining, we introduce perturbations by adding one to each number
in the Numeral Conversions Easy dataset, with the
format being M × 10[N] + 1.
**4** **Experiments**
We conduct the experiments using open-sourced
LLMs as well as API-based LLMs supporting both
English and Chinese languages. For publicly available LLMs, we chose ChatGLM2-6B[3] and three
models from the LLaMA-2[4] family: 7B, 13B, and
70B, which were deployed to A6000 GPU server
locally. For API-based LLMs, we use ChatGPT[5],
ERNIE-Bot-turbo[6], ChatGLM-Turbo[7].
We consider the following prompt: (1) Zero**shot:We simply present the questions to the**
[3https://github.com/THUDM/ChatGLM2-6B](https://github.com/THUDM/ChatGLM2-6B)
[4https://llama.meta.com/llama2](https://llama.meta.com/llama2)
[5https://platform.openai.com/docs/](https://platform.openai.com/docs/models/gpt-3-5)
[models/gpt-3-5](https://platform.openai.com/docs/models/gpt-3-5)
[6https://cloud.baidu.com/doc/](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/4lilb2lpf)
[WENXINWORKSHOP/s/4lilb2lpf](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/4lilb2lpf)
[7https://open.bigmodel.cn/dev/howuse/](https://open.bigmodel.cn/dev/howuse/model)
[model](https://open.bigmodel.cn/dev/howuse/model)
LLMs without introducing any examples, reasoning steps, or CoT. (2) Zero-shot CoT: We simply
present the questions to the LLMs, employing the
CoT framework without introducing any examples
or deductive steps. Our approach involve the simple addition of the phrase Let’s think step by step.
(3) Few-shot: We present four analogous questions
accompanied by concise responses in the prompt
before presenting the questions to the LLMs, without introducing deductive steps. (4) Few-shot CoT:
We present four analogous questions, each accompanied by concise responses, within the prompt
prior to presenting them to the LLMs. Additionally, deductive steps are introduced alongside the
questions.
**4.1** **The Accuracy of Numeral Conversions**
We list the experimental results for the Numeral
_Conversions Medium splits in Table 1. For more_
information about the prompt design for the current
experiment, please refer to Table 5 to 10 in the
Appendix.
We have the following findings: (1) ChatGPT
has significant advantages over other models in
conversions using English and is almost perfect
at Num2Words task. (2) Introducing CoT and deductive steps in the Num2Words and Words2Num
tasks results in a significant decrease in accuracy
compared to prompts without the incorporation of
CoT and deductive steps.
**Accuracy against Different Scales** From a numerical scale perspective, different models exhibit
significant variance in performance, with ChatGPT
outperforming all other models. When the number is less than 1000, all models achieve their best
performance, and the gap is smallest compared to
14272
-----
**0-1K**
**1B-1T** **1K-1M**
**ChatGPT** **1M-1B**
**ChatGLM-Turbo**
**ERNIE-Bot-turbo**
**Llama 70B**
**Llama 13B**
**Llama 7B**
**ChatGLM-6B**
(a) Overall performance of
different models.
**0-1K**
**1B-1T** **1K-1M**
**1M-1B**
**Few-shot**
**Few-shot CoT**
**Zero-shot**
**Zero-shot CoT**
(c) Accuracy of ChatGPT
with different prompts.
**0-1K**
**1B-1T** **1K-1M**
**1M-1B**
**Easy**
**Medium**
**Hard**
(b) Accuracy of ChatGPT on
splits with different complexity.
**0-1K**
**1B-1T** **1K-1M**
**1M-1B**
**English**
**Chinese**
(d) Accuracy of ChatGPT
with different languages.
Figure 2: Accuracy against different scales with respect to different dimensions.
**Zero-shot** **Zero-shot CoT** **Few-shot** **Few-shot CoT** **Few-shot CoT with knowledge**
**ZH** **EN** **ZH** **EN** **ZH** **EN** **ZH** **EN** **ZH** **EN**
**ChatGLM-6B** 22.83 5.67 44.33 18.33 18.17 6.50 49.50 30.17 46.33 24.33
**ERNIE-Bot-turbo** 28.00 29.33 42.00 31.83 15.50 22.50 37.83 41.83 37.67 39.50
**ChatGLM-turbo** 39.33 34.33 58.83 55.67 33.83 27.67 56.50 57.50 55.00 50.67
**Llama2-7B** 7.83 18.50 6.50 20.00 5.83 16.17 9.83 25.00 12.00 20.67
**Llama2-13B** 13.67 28.50 7.33 27.50 11.67 16.83 23.00 37.17 18.83 30.50
**Llama2-70B** 18.67 44.33 23.67 44.33 16.50 43.67 24.33 47.83 27.50 44.67
**ChatGPT** **45.50** **48.00** **68.83** **77.33** **46.17** **49.67** **72.67** **79.67** **73.67** **76.00**
Table 2: Overview of reasoning accuracy for Units of Measurement on the Numeral Conversions Medium split using
the five prompt methods: Zero-shot, Zero-shot CoT, Few-shot, Few-shot CoT and Few-shot CoT with knowledge.
that of ChatGPT. However, as the scale of numbers
increases, there is a consistent decrease in accuracy for all models. The comparison is shown in
Figure 2a.
**ChatGPT over Different Scales** Given that
ChatGPT performs exceptionally well among other
models, we further analyze ChatGPT as the representative model. For data related to other models,
please refer to Table 14 to 15 in the Appendix.We
illustrate how ChatGPT performs across different
scales from the following aspects: (1) Complexity:
As the decoding length for ChatGPT increases from
Easy to Hard difficulty, the accuracy decreases
consistently across all scales, see Figure 2b. (2)
**Prompt Method: Figure 2c shows that the in-**
clusion of CoT in Zero-shot harms performance
across all scales while Few-shot works better for
large scales. (3) Language: As both Chinese and
English have relative high number system transparency (Johnson et al., 2020), the gaps between
two languages is surprising, see Figure 2d. This
partially shows that either training corpus is skewed
or numeral conversions knowledge is less transferable across languages.
**4.2** **Evaluation of Numerals with Units of**
**Measurement**
In the experiment concerning units of measurement,
we adopt the same prompt design as in the previous experiment. To further investigate the impact
of unit conversion knowledge on the reasoning capabilities of LLMs in this experiment, we define
**Few-shot CoT with knowledge that involves the**
addition of necessary unit conversion knowledge
to the Few-shot CoT framework. For all prompt designs regarding units of measurement, please refer
to Table 11 in the appendix.
The Table 2 is experimental results of the seven
models across datasets of three different difficulty
levels. The results clearly demonstrate that (1)
ChatGPT, compared to other models, consistently
exhibits superior performance and reasoning capabilities across all levels of dataset difficulty
and in both languages. (2) Unlike the previous
14273
-----
**Add CoT and Reasoning steps**
**Non CoT and Reasoning steps**
**LL7** **LL13 GLM6 EBt** **LL70 GLMt GPT**
**Hard** **Medium** **Easy**
**LL7** **LL13 GLM6 EBt** **LL70 GLMt GPT**
**Chinese** **English**
**LL7** **LL13 GLM6 EBt** **LL70 GLMt GPT**
(a) The difference in accuracy of the
model with and without CoT.
(b) The difference in accuracy of the
model on datasets of different difficulty
levels.
(c) The difference in accuracy of the
model on datasets of different language.
Figure 3: Variations in accuracy among LLMs are observed after distinguishing between CoT, difficulty, and
language in Units of Measurement problems. Due to space constraints, we use abbreviations here. LL7 represents
Llama2-7B, LL13 represents Llama2-13B, LL70 represents Llama2-70B, GLM6 represents ChatGLM2-6B, EBt
represents ERNIE-Bot-turbo, GLMt represents ChatGLM-turbo, and GPT represents ChatGPT.
**MWPs** **SUANJING**
|Zero-shot Zero-shot CoT Few-shot Few-shot CoT ZH EN ZH EN ZH EN ZH EN ChatGLM-Turbo 82.98 87.48 87.80 93.32 59.86 70.13 88.18 93.9 ChatGPT 83.75 92.68 86.90 93.26 77.01 85.74 88.38 95.31 Llama2-70B 82.08 91.91 81.95 90.75 81.63 91.78 81.18 90.11|Zero-shot Few-shot Few-shot CoT Zero-shot Few-shot CoT CoT with knowledge ZH ZH ZH ZH ZH 8.00 6.50 2.50 2.00 5.00 5.50 9.00 2.00 5.50 8.00 0.00 0.00 0.00 0.00 0.00|
|---|---|
Table 3: Overview of the impact of four prompts and three models on the accuracy of answers in the bilingual
MWPs set and the SUANJING set.
Num2words and Words2Num experiments, the introduction of CoT and reasoning steps in this experiment significantly enhances the success rate of
LLMs in accurately generating answers.
To delve into more specific information, we categorize the experimental data and create three bar
graphs as depicted in the Figure 3. Figure 3a illustrates that the introduction of CoT and reasoning
steps led to a noticeable improvement in the accuracy of each model when handling units of measurement tasks, Figure 3b shows that as the difficulty
of the questions increased, the accuracy of each
model in dealing with units of measurement tasks
decreased correspondingly. Figure 3c indicates that
the models exhibit roughly the same accuracy in
handling tasks in both Chinese and English, even
in the case of ChatGPT.
**4.3** **Comparisons over MWPs and SUANJING**
In this section, we employ three state-of-the-art
models, ChatGPT, ChatGLM-Turbo and Llama270B, to evaluate the performance of LLMs on
MWPs and SUANJING. We select 100 questions
from SUANJING that share the same operators and
complexity level as MWPs. Additionally, SUAN
JING poses extra challenges due to the textual format of numbers and the requirement for unit conversions during the reasoning process when necessary.
The experimental results for MWPs are presented in the left part of Table 3, which clearly
indicate the following: (1) LLMs can handle both
English and Chinese questions very effectively. (2)
The incorporation of CoT significantly enhances
the accuracy of LLMs on MWPs.
Contrary to this, the experimental results for
SUANJING, as shown in the right part of Table 3,
demonstrate a substantial decline in accuracy. Furthermore, the addition of CoT does not yield any
improvement and, in fact, seems to deteriorate the
performance.
**4.4** **Analysis of Errors from LLMs**
In the numeral conversions experiments, the majority of errors are primarily concentrated on LLMs’
inability to correctly identify the magnitudes of individual numbers. As illustrated in the figure 4a,
the number 6, originally representing hundred million in Arabic numerals, is incorrectly identified
as billion. Additionally, errors arise from LLMs’
mishandling of the digit zero, including failure to
14274
-----
|EN|1 centimeters 729 millimeters - 13 millimeters = ? centimeters ? millimeters The units on the right side of the question mark are centimeters and millimeters. 1 centimeter is equal to 10 millimeters. So, 729 millimeters can be written as 72.9 centimeters (729/10 = 72.9).Subtracting 13 millimeters from this, we get 59.9 centimeters (72.9 - 0.13 = 59.9). Since 1 centimeter is equal to 10 millimeters, 0.9 centimeters is equal to 9 millimeters. Therefore, the answer is 59.9 centimeters 9 millimeters.|
|---|---|
521厘米 + 449毫米 = ?毫米
将521厘米转化为毫米,1厘米等于10毫米,所以
## ZH 521厘米等于521 * 10 = 5210毫米。然后将449毫
米直接加上5210毫米,得到的结果是5661毫米。
所以,521厘米 + 449毫米 = 5661毫米。
ZH EN
六亿零三十万二千 Six Hundred Million Three
零五十 Hundred Two Thousand Fifty
# √ 600302050 √
六十亿三千零二万零五十 Six hundred billion, three
hundred two million, fifty
thousand and fifty.
七亿八千零七十九万五千九 Seven hundred eighty million
百九十五 seven hundred ninety five
thousand nine hundred
# √ ninety five √
780795995
七千八百零七万九千五百九 Seven hundred eighty billion
十九 seven hundred ninety-five
million nine hundred ninetyfive thousand
(a) Errors in numeral conversion experiments.
(b) Errors in units of measurement experiments.
Figure 4: Common errors in Numeral and Units of Measurement experiments from LLMs.
recognize its significance and inability to accurately
restore the quantity and position of zero in the numerical context. To enhance the accuracy of LLMs
in such tasks, future improvements could focus on
refining LLMs’ ability to recognize the length and
magnitude of numbers.
Our experiments also demonstrated that CoT did
not work in the numeral conversion experiments.
LLMs achieved significantly higher accuracy rates
on the easy dataset, which was of comparable scale
to the medium dataset but required shorter answer
lengths. This discrepancy highlights two main
challenges that LLMs face in numerical reasoning. First, the linguistic nature of input text makes
it difficult for LLMs to understand numerical data.
Second, the flexibility and complexity of the answers increase the likelihood of errors in longer
outputs. Given that CoT primarily enhances performance on complex inference tasks rather than
simple ones, its application to simpler tasks such as
Num2Words and Words2Num increases the length
of the generated text, thereby diminishing LLMs
accuracy.
In the units of measurement experiment, the majority of errors primarily stem from LLMs’ failure
to correctly recognize the conversion magnitude
relationship when multiple units are involved. As
depicted in the figure 4b, there exists a tenfold
progressive relationship between decimeters and
centimeters, yet LLMs overlook the magnitude relationship inherent in textual units. Introducing
CoT significantly mitigates the occurrence of such
errors but still requires further refinement. Additionally, even when LLMs have correctly grasped
the magnitude relationship inherent in textual units,
errors may still occur during the calculation process. To enhance the accuracy of LLMs in such
tasks, efforts could be directed towards improving
LLMs’ recognition of textual units and the magnitude relationships between units.
In the SUANJING experiment, LLMs face more
comprehensive problem-solving tasks. As depicted
in the figure 5, LLMs encounter errors in handling
units in SUANJING problems, as some ancient units
are extremely rare in contemporary society, making it difficult for LLMs to correctly understand
SUANJING problems. This long tail problem can
be addressed by introducing external knowledge
in the prompt, thereby enabling LLMs to have a
chance of correctly handling SUANJING problems.
However, LLMs frequently make errors in recognizing numbers and performing numerical calculations, especially in the recognition and computation of more challenging fractions and decimals.
14275
-----
1. There is a noticeable performance gap between Chinese LLMs and top-tier models like
ChatGPT.
2. The same large language model exhibits varying levels of performance facing problems in
different languages.
3. Despite the introduction of external knowledge and CoT, LLMs still struggle to effectively handle comprehensive problems involving numeral conversions and units of measurement.
**Limitations**
In this paper, we introduce datasets to investigate
whether LLMs can process numeral conversions
and units of measurement like humans, despite
certain limitations.
Firstly, LLMs’ responses exhibit randomness,
and we do not conduct repeated trials of the same
question with the same model. Conducting repeated trials could reduce randomness and yield
more precise accuracy estimates. Secondly, Our
experiments involve only seven types of models.
Incorporating commercial models such as GPT-4
and Claude might provide a more representative
performance curve.
Future work could diversify experimental data
by incorporating datasets from professional backgrounds like financial accounting, aiding in exploring the performance boundaries of LLMs.
**Acknowledgements**
This work was partially supported by the National Key Research and Development Program
of China (2022YFF0902100), China Postdoctoral Science Foundation (2023M733654), National Natural Science Foundation of China
(62376262), Guangdong Basic and Applied Basic Research Foundation (2023A1515110496),
the Natural Science Foundation of Guangdong
Province of China (2024A1515030166), Shenzhen Science and Technology Innovation Program
(KQTD20190929172835662), Shenzhen Basic Research Foundation (JCYJ20210324115614039).
**References**
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik
Koncel-Kedziorski, Yejin Choi, and Hannaneh Ha[jishirzi. 2019. MathQA: Towards interpretable math](https://doi.org/10.18653/v1/N19-1245)
还有一块田,它的宽是五分之四步,长是九分之五步,问
这田的面积是多少?
要计算这块田的面积,我们可以使用矩形的面积公式:
面积 = 宽度 × 长度。根据题目,田的宽度是五分之四
步,长度是九分之五步。
所以,田的面积可以计算如下:面积 = 五分之四步 ×
九分之五步。首先我们需要将五分之四和九分之五化为
小数形式。五分之四 = 5/4 = 1.25,九分之五 = 9/5
= 1.8。
然后我们可以计算面积:面积 = 1.25步 × 1.8步 =
2.25步。所以,这块田的面积是2.25步。
又有一块田,它的宽是十二步,长是十四步。问:这田的
面积是多少?
要计算田的面积,我们可以使用公式:面积=长 × 宽。
根据题目,田的宽是十二步,长是十四步。
所以,面积=10 × 12 = 140步,由于1亩=240步,所以
140步又等于0.583亩,即0.583亩。
(需要注意到,1亩=240步)
Figure 5: Errors in SUANJING experiments.
Consequently, even if LLMs can correctly utilize
the external prompt-introduced knowledge of ancient units, their accuracy remains relatively low.
Due to the extensive use of fractions described in
Classical Chinese in SUANJING, LLMs need to
undergo multiple Hard-level Words2Num tasks before answering questions, significantly reducing
the accuracy of SUANJING experiments.
In the MWPs experiment, the majority of errors
are similar to those in the units of measurement
experiment, as the MWPs experiment can be considered a natural language version of the units of
measurement experiment to some extent. Furthermore, SUANJING can be seen as a more challenging version of MWPs, hence many errors observed
in the preceding experiments are also frequent in
SUANJING. To improve the accuracy of LLMs in
such tasks, besides focusing on the improvement
directions of units of measurement experiments,
attention should also be given to the performance
of LLMs on long tail problems.
**5** **Conclusion**
We investigate the performance of various LLMs
on tasks involving numeral conversions and units
of measurement in both Chinese and English languages. Additionally, we explore the capability
boundaries of LLMs by introducing CoT and external knowledge. Based on a series of experiments,
the conclusions are as follows:
14276
-----
[word problem solving with operation-based for-](https://doi.org/10.18653/v1/N19-1245)
[malisms. In Proceedings of the 2019 Conference](https://doi.org/10.18653/v1/N19-1245)
_of the North American Chapter of the Association for_
_Computational Linguistics: Human Language Tech-_
_nologies, Volume 1 (Long and Short Papers), pages_
2357–2367, Minneapolis, Minnesota. Association for
Computational Linguistics.
Baidu. 2023. Introducing ernie-bot. [http:](http://research.baidu.com/Blog/index-view?id=185)
[//research.baidu.com/Blog/](http://research.baidu.com/Blog/index-view?id=185)
[index-view?id=185.](http://research.baidu.com/Blog/index-view?id=185)
Satwik Bhattamishra, Arkil Patel, and Navin Goyal.
[2020. On the computational power of transformers](https://doi.org/10.18653/v1/2020.conll-1.37)
[and its implications in sequence modeling. In Pro-](https://doi.org/10.18653/v1/2020.conll-1.37)
_ceedings of the 24th Conference on Computational_
_Natural Language Learning, pages 455–475, Online._
Association for Computational Linguistics.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss,
Gretchen Krueger, Tom Henighan, Rewon Child,
Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu,
Clemens Winter, Christopher Hesse, Mark Chen, Eric
Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess,
Jack Clark, Christopher Berner, Sam McCandlish,
Alec Radford, Ilya Sutskever, and Dario Amodei.
[2020. Language models are few-shot learners.](http://arxiv.org/abs/2005.14165)
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
[Kristina Toutanova. 2019. BERT: Pre-training of](https://doi.org/10.18653/v1/N19-1423)
[deep bidirectional transformers for language under-](https://doi.org/10.18653/v1/N19-1423)
[standing. In Proceedings of the 2019 Conference of](https://doi.org/10.18653/v1/N19-1423)
_the North American Chapter of the Association for_
_Computational Linguistics: Human Language Tech-_
_nologies, Volume 1 (Long and Short Papers), pages_
4171–4186, Minneapolis, Minnesota. Association for
Computational Linguistics.
Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding,
[Jiezhong Qiu, Zhilin Yang, and Jie Tang. 2022. GLM:](https://doi.org/10.18653/v1/2022.acl-long.26)
[General language model pretraining with autoregres-](https://doi.org/10.18653/v1/2022.acl-long.26)
[sive blank infilling. In Proceedings of the 60th An-](https://doi.org/10.18653/v1/2022.acl-long.26)
_nual Meeting of the Association for Computational_
_Linguistics (Volume 1: Long Papers), pages 320–335,_
Dublin, Ireland. Association for Computational Linguistics.
Yanai Elazar, Abhijit Mahabal, Deepak Ramachandran,
[Tania Bedrax-Weiss, and Dan Roth. 2019. How large](https://doi.org/10.18653/v1/P19-1388)
[are lions? inducing distributions over quantitative](https://doi.org/10.18653/v1/P19-1388)
[attributes. In Proceedings of the 57th Annual Meet-](https://doi.org/10.18653/v1/P19-1388)
_ing of the Association for Computational Linguistics,_
pages 3973–3983, Florence, Italy. Association for
Computational Linguistics.
C. Everett. 2017. _[Numbers and the Making of Us:](https://books.google.com.sg/books?id=gFh7DgAAQBAJ)_
_[Counting and the Course of Human Cultures. Har-](https://books.google.com.sg/books?id=gFh7DgAAQBAJ)_
vard University Press.
Lisa Feigenson, Stanislas Dehaene, and Elizabeth
[Spelke. 2004. Core systems of number. Trends in](https://doi.org/https://doi.org/10.1016/j.tics.2004.05.002)
_Cognitive Sciences, 8(7):307–314._
[Maxwell Forbes and Yejin Choi. 2017. Verb physics:](https://doi.org/10.18653/v1/P17-1025)
[Relative physical knowledge of actions and objects.](https://doi.org/10.18653/v1/P17-1025)
In Proceedings of the 55th Annual Meeting of the
_Association for Computational Linguistics (Volume_
_1: Long Papers), pages 266–276, Vancouver, Canada._
Association for Computational Linguistics.
Jan Göpfert, Patrick Kuckertz, Jann Weinand, Leander
[Kotzur, and Detlef Stolten. 2022. Measurement ex-](https://doi.org/10.18653/v1/2022.findings-emnlp.161)
[traction with natural language processing: A review.](https://doi.org/10.18653/v1/2022.findings-emnlp.161)
In Findings of the Association for Computational
_Linguistics: EMNLP 2022, pages 2191–2215, Abu_
Dhabi, United Arab Emirates. Association for Computational Linguistics.
Corey Harper, Jessica Cox, Curt Kohler, Antony Scerri,
[Ron Daniel Jr., and Paul Groth. 2021. SemEval-2021](https://doi.org/10.18653/v1/2021.semeval-1.38)
[task 8: MeasEval – extracting counts and measure-](https://doi.org/10.18653/v1/2021.semeval-1.38)
[ments and their related contexts. In Proceedings of](https://doi.org/10.18653/v1/2021.semeval-1.38)
_the 15th International Workshop on Semantic Evalu-_
_ation (SemEval-2021), pages 306–316, Online. Asso-_
ciation for Computational Linguistics.
Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren
[Etzioni, and Nate Kushman. 2014. Learning to solve](https://doi.org/10.3115/v1/D14-1058)
[arithmetic word problems with verb categorization.](https://doi.org/10.3115/v1/D14-1058)
In Proceedings of the 2014 Conference on Empirical
_Methods in Natural Language Processing (EMNLP),_
pages 523–533, Doha, Qatar. Association for Computational Linguistics.
Devin Johnson, Denise Mak, Andrew Barker, and Lexi
[Loessberg-Zahl. 2020. Probing for multilingual nu-](https://doi.org/10.18653/v1/2020.blackboxnlp-1.18)
[merical understanding in transformer-based language](https://doi.org/10.18653/v1/2020.blackboxnlp-1.18)
[models. In Proceedings of the Third BlackboxNLP](https://doi.org/10.18653/v1/2020.blackboxnlp-1.18)
_Workshop on Analyzing and Interpreting Neural Net-_
_works for NLP, pages 184–192, Online. Association_
for Computational Linguistics.
Bill Yuchen Lin, Seyeon Lee, Rahul Khanna, and Xiang
[Ren. 2020. Birds have four legs?! NumerSense:](https://doi.org/10.18653/v1/2020.emnlp-main.557)
[Probing Numerical Commonsense Knowledge of Pre-](https://doi.org/10.18653/v1/2020.emnlp-main.557)
[Trained Language Models. In Proceedings of the](https://doi.org/10.18653/v1/2020.emnlp-main.557)
_2020 Conference on Empirical Methods in Natural_
_Language Processing (EMNLP), pages 6862–6868,_
Online. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis,
Luke Zettlemoyer, and Veselin Stoyanov. 2020.
[Ro{bert}a: A robustly optimized {bert} pretraining](https://openreview.net/forum?id=SyxS0T4tvS)
[approach.](https://openreview.net/forum?id=SyxS0T4tvS)
Swaroop Mishra, Arindam Mitra, Neeraj Varshney,
[Bhavdeep Sachdeva, and Chitta Baral. 2020. To-](http://arxiv.org/abs/2005.08516)
[wards question format independent numerical reason-](http://arxiv.org/abs/2005.08516)
[ing: A set of prerequisite tasks.](http://arxiv.org/abs/2005.08516)
Swaroop Mishra, Arindam Mitra, Neeraj Varshney,
Bhavdeep Sachdeva, Peter Clark, Chitta Baral, and
[Ashwin Kalyan. 2022. NumGLUE: A suite of funda-](https://doi.org/10.18653/v1/2022.acl-long.246)
[mental yet challenging mathematical reasoning tasks.](https://doi.org/10.18653/v1/2022.acl-long.246)
In Proceedings of the 60th Annual Meeting of the
_Association for Computational Linguistics (Volume_
_1: Long Papers), pages 3505–3523, Dublin, Ireland._
Association for Computational Linguistics.
14277
-----
Aakanksha Naik, Abhilasha Ravichander, Carolyn Rose,
and Eduard Hovy. 2019. [Exploring numeracy in](https://doi.org/10.18653/v1/P19-1329)
[word embeddings. In Proceedings of the 57th An-](https://doi.org/10.18653/v1/P19-1329)
_nual Meeting of the Association for Computational_
_Linguistics, pages 3374–3380, Florence, Italy. Asso-_
ciation for Computational Linguistics.
OpenAI. 2022. Introducing chatgpt. [https://](https://openai.com/blog/chatgpt)
[openai.com/blog/chatgpt.](https://openai.com/blog/chatgpt)
[OpenAI. 2023. Gpt-4 technical report.](http://arxiv.org/abs/2303.08774)
Sungjin Park, Seungwoo Ryu, and Edward Choi. 2022.
[Do language models understand measurements? In](https://doi.org/10.18653/v1/2022.findings-emnlp.128)
_Findings of the Association for Computational Lin-_
_guistics: EMNLP 2022, pages 1782–1792, Abu_
Dhabi, United Arab Emirates. Association for Computational Linguistics.
[Subhro Roy and Dan Roth. 2015. Solving general arith-](https://doi.org/10.18653/v1/D15-1202)
[metic word problems. In Proceedings of the 2015](https://doi.org/10.18653/v1/D15-1202)
_Conference on Empirical Methods in Natural Lan-_
_guage Processing, pages 1743–1752, Lisbon, Portu-_
gal. Association for Computational Linguistics.
[Subhro Roy, Tim Vieira, and Dan Roth. 2015. Reason-](https://doi.org/10.1162/tacl_a_00118)
[ing about quantities in natural language. Transac-](https://doi.org/10.1162/tacl_a_00118)
_tions of the Association for Computational Linguis-_
_tics, 3:1–13._
Mirac Suzgun, Yonatan Belinkov, Stuart Shieber, and
[Sebastian Gehrmann. 2019. LSTM networks can](https://doi.org/10.18653/v1/W19-3905)
[perform dynamic counting. In Proceedings of the](https://doi.org/10.18653/v1/W19-3905)
_Workshop on Deep Learning and Formal Languages:_
_Building Bridges, pages 44–54, Florence. Associa-_
tion for Computational Linguistics.
Minghuan Tan, Lei Wang, Lingxiao Jiang, and Jing
[Jiang. 2022. Investigating math word problems using](https://doi.org/10.18653/v1/2022.mathnlp-1.2)
[pretrained multilingual language models. In Proceed-](https://doi.org/10.18653/v1/2022.mathnlp-1.2)
_ings of the 1st Workshop on Mathematical Natural_
_Language Processing (MathNLP), pages 7–16, Abu_
Dhabi, United Arab Emirates (Hybrid). Association
for Computational Linguistics.
Avijit Thawani, Jay Pujara, Filip Ilievski, and Pedro
[Szekely. 2021. Representing numbers in NLP: a](https://doi.org/10.18653/v1/2021.naacl-main.53)
[survey and a vision. In Proceedings of the 2021](https://doi.org/10.18653/v1/2021.naacl-main.53)
_Conference of the North American Chapter of the_
_Association for Computational Linguistics: Human_
_Language Technologies, pages 644–656, Online. As-_
sociation for Computational Linguistics.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier
Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal
Azhar, Aurelien Rodriguez, Armand Joulin, Edouard
Grave, and Guillaume Lample. 2023a. [Llama 2:](http://arxiv.org/abs/2302.13971)
[Open foundation and fine-tuned chat models.](http://arxiv.org/abs/2302.13971)
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton
Ferrer, Moya Chen, Guillem Cucurull, David Esiobu,
Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan
Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa,
Isabel Kloumann, Artem Korenev, Punit Singh Koura,
Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten,
Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu,
Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas
[Scialom. 2023b. Llama: Open and efficient founda-](http://arxiv.org/abs/2307.09288)
[tion language models.](http://arxiv.org/abs/2307.09288)
Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh,
[and Matt Gardner. 2019. Do NLP models know num-](https://doi.org/10.18653/v1/D19-1534)
[bers? probing numeracy in embeddings. In Proceed-](https://doi.org/10.18653/v1/D19-1534)
_ings of the 2019 Conference on Empirical Methods_
_in Natural Language Processing and the 9th Inter-_
_national Joint Conference on Natural Language Pro-_
_cessing (EMNLP-IJCNLP), pages 5307–5315, Hong_
Kong, China. Association for Computational Linguistics.
Cunxiang Wang, Boyuan Zheng, Yuchen Niu, and Yue
Zhang. 2021. Exploring generalization ability of pretrained language models on arithmetic and logical
reasoning. In Natural Language Processing and Chi_nese Computing, pages 758–769, Cham. Springer_
International Publishing.
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017.
[Deep neural solver for math word problems. In Pro-](https://doi.org/10.18653/v1/D17-1088)
_ceedings of the 2017 Conference on Empirical Meth-_
_ods in Natural Language Processing, pages 845–854,_
Copenhagen, Denmark. Association for Computational Linguistics.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and
[Denny Zhou. 2023. Chain-of-thought prompting elic-](http://arxiv.org/abs/2201.11903)
[its reasoning in large language models.](http://arxiv.org/abs/2201.11903)
BigScience Workshop. 2023. Bloom: A 176b[parameter open-access multilingual language model.](http://arxiv.org/abs/2211.05100)
Zhen Yang, Ming Ding, Qingsong Lv, Zhihuan Jiang,
Zehai He, Yuyi Guo, Jinfeng Bai, and Jie Tang. 2023.
[Gpt can solve mathematical problems without a cal-](http://arxiv.org/abs/2309.03241)
[culator.](http://arxiv.org/abs/2309.03241)
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang,
Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu,
Wendi Zheng, Xiao Xia, Weng Lam Tam, Zixuan Ma,
Yufei Xue, Jidong Zhai, Wenguang Chen, Zhiyuan
Liu, Peng Zhang, Yuxiao Dong, and Jie Tang. 2023.
[GLM-130b: An open bilingual pre-trained model. In](https://openreview.net/forum?id=-Aw0rrrPUF)
_The Eleventh International Conference on Learning_
_Representations._
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang,
Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu,
14278
-----
Hindu–Arabic style in order to avoid subsequent
manipulations. This presents us a great opportunity to reuse ancient math word problems and look
closely at how numeral systems and units of measurement affect reasoning steps of LLMs.
Problems in SUANJING are collected from ancient Chinese mathematical classics. Since Tang
Dynasty (唐 `朝), Mingsuan (明` `算, comprehend`
of arithmetic) has been an important subject in
Keju (科举, imperial examinations) for bureaucrats
selection. Mathematician Li Chunfeng[9] edited The
Ten Computational Canons[10], which was a collection of ten Chinese mathematical works. We
additionally add Old Mathematics in Expanded
_Sections) and The Mathematical Treatise in Nine_
_Sections to SUANJING. The full list of classics and_
extracted problem counts are shown in Table 4.
[9https://en.wikipedia.org/w/index.php?](https://en.wikipedia.org/w/index.php?title=Li_Chunfeng)
[title=Li_Chunfeng](https://en.wikipedia.org/w/index.php?title=Li_Chunfeng)
[10https://en.wikipedia.org/w/index.php?](https://en.wikipedia.org/w/index.php?title=Ten_Computational_Canons)
[title=Ten_Computational_Canons](https://en.wikipedia.org/w/index.php?title=Ten_Computational_Canons)
Wendi Zheng, Xiao Xia, et al. 2022. Glm-130b:
An open bilingual pre-trained model. arXiv preprint
_arXiv:2210.02414._
Zhipu.AI. 2023. Introducing chatglm. [https://](https://open.bigmodel.cn/dev/howuse/model)
[open.bigmodel.cn/dev/howuse/model.](https://open.bigmodel.cn/dev/howuse/model)
Ben Zhou, Qiang Ning, Daniel Khashabi, and Dan Roth.
[2020. Temporal common sense acquisition with min-](https://doi.org/10.18653/v1/2020.acl-main.678)
[imal supervision. In Proceedings of the 58th Annual](https://doi.org/10.18653/v1/2020.acl-main.678)
_Meeting of the Association for Computational Lin-_
_guistics, pages 7579–7589, Online. Association for_
Computational Linguistics.
**A** **SUANJING Dataset**
Title Count
`《周髀算经》` _Zhou Shadow Mathematical_ -
_Classic_
`《九章算术》` _The Nine Chapters on the_ 246
_Mathematical Art_
`《海岛算经》` _The Sea Island Mathemati-_ 9
_cal Classic_
`《孙子算经》` _The Mathematical Classic_ 65
_of Sun Zi_
`《张邱建算经》` _The Mathematical Classic_ 92
_of Zhang Qiujian_
`《五曹算经》` _Computational Canon of the_ 68
_Five Administrations_
`《夏侯阳算经》` _The Mathematical Classic_ 82
_of Xiahou Yang_
`《五经算术》` _Computational_ _Prescrip-_
_tions of the Five Classics_
`《缉古算经》` _Continuation_ _of_ _Ancient_ 20
_Mathematical Classic_
`《缀术》` _Method of Interpolation_ -
`《益古演段》` _Old Mathematics in Ex-_ 64
_panded Sections_
`《数学九章》` _The Mathematical Treatise_ 80
_in Nine Sections_
Total 726
Table 4: Statistics for math word problems extracted
from ancient Chinese mathematics classics.
To facilitate the evaluation of reasoning integrating all sub-procedures, we need a dataset with
challenges discussed above. We construct SUANJING (算经) by extracting and annotating math
word problems from a collection of ancient Chinese algorithmic books.
Although grammars and lexicons of the Chinese
language endure great changes in history, the numeral systems and units of measurement are reserved and still used in daily life. Especially in
formal documents and statements of financial institutions, the representation of numerals are required to be written in both traditional style[8] and
[8https://en.wikipedia.org/w/index.php?](https://en.wikipedia.org/w/index.php?title=Chinese_numerals)
[title=Chinese_numerals](https://en.wikipedia.org/w/index.php?title=Chinese_numerals)
14279
-----
**Prompt that convert integer into English in Num2Words**
**The following is a question about how to convert integers into English pronunciation. Please provide the correct answer based on the numbers in the question.**
Zero-shot
**Question:XXX Answer:**
**The following is a question about how to convert integers into English pronunciation. Please provide the correct answer based on the numbers in the question.**
Zero-shot CoT
**Question:XXX Answer: Let's think step by step,**
The following are questions about converting integers into their English reading forms. Please give the correct answers based on the numbers in the questions. Question: 123
Answer: One hundred twenty-three
The following are questions about converting integers into their English reading forms. Please give the correct answers based on the numbers in the questions. Question: 123456
Answer: One hundred twenty-three thousand four hundred fifty-six
The following are questions about converting integers into their English reading forms. Please give the correct answers based on the numbers in the questions. Question:
Few-shot
123456789 Answer: One hundred twenty-three million four hundred fifty-six thousand seven hundred eighty-nine
The following are questions about converting integers into their English reading forms. Please give the correct answers based on the numbers in the questions. Question:
123456789012 Answer: One hundred twenty-three billion four hundred fifty-six million seven hundred eighty-nine thousand twelve
**The following are questions about converting integers into their English reading forms. Please give the correct answers based on the numbers in the questions. Question:**
**XXX Answer:**
The following are questions about converting integers into their English reading forms. Please give the correct answers based on the numbers in the questions. Question: 123
Answer: Let's think step by step, 1. The length of this integer is 3 digits, which in English grammar is a number at the hundred level; 2. From high to low, each level of this number
reads as: one hundred, twenty, three; 3. From left to right, it can be written as one hundred twenty-three. So, the answer is one hundred twenty-three.
The following are questions about converting integers into their English reading forms. Please give the correct answers based on the numbers in the questions. Question: 123456
Answer: Let's think step by step, 1. The length of this integer is 6 digits, which in English grammar is a number at the hundred thousand level; 2. From high to low, each level of
this number reads as: one hundred thousand, twenty thousand, three thousand, four hundred, fifty, six; 3. From left to right, it can be written as one hundred twenty-three thousand
four hundred fifty-six. So, the answer is one hundred twenty-three thousand four hundred fifty-six.
The following are questions about converting integers into their English reading forms. Please give the correct answers based on the numbers in the questions. Question:
123456789 Answer: Let's think step by step, 1. The length of this integer is 9 digits, which in English grammar is a number at the hundred million level; 2. From high to low, each
Few-shot CoT level of this number reads as: one hundred million, twenty million, three million, four hundred thousand, fifty thousand, six thousand, seven hundred, eighty, nine; 3. From left to
right, it can be written as one hundred twenty-three million four hundred fifty-six thousand seven hundred eighty-nine. So, the answer is one hundred twenty-three million four
hundred fifty-six thousand seven hundred eighty-nine.
The following are questions about converting integers into their English reading forms. Please give the correct answers based on the numbers in the questions. Question:
123456789002 Answer: Let's think step by step, 1. The length of this integer is 12 digits, which in English grammar is a number at the hundred billion level; 2. From high to low,
each level of this number reads as: one hundred billion, twenty billion, three billion, four hundred million, fifty million, six million, seven hundred thousand, eighty thousand, nine
thousand, zero, zero, two; 3. From left to right, it can be written as one hundred and twenty-three billion four hundred and fifty-six million seven hundred and eighty-nine thousand
and two. So, the answer is one hundred and twenty-three billion four hundred and fifty-six million seven hundred and eighty-nine thousand and two.
**The following are questions about converting integers into their English reading forms. Please give the correct answers based on the numbers in the questions. Question:**
**XXX Answer: Let's think step by step,**
**Prompt that convert integer into Chinese in Num2Words**
Zero-shot 以下是关于整数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:XXX 答案:
Zero-shot CoT 以下是关于整数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:XXX 答案:让我们一步一步思考,
以下是关于整数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:123 答案:一百二十三。
以下是关于整数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:123456 答案:十二万三千四百五十六。
Few-shot 以下是关于整数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:123456789 答案:一亿二千三百四十五万六千七百八十九。
以下是关于整数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:123456789012 答案:一千二百三十四亿五千六百七十八万九千零一十二。
以下是关于整数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:XXX 答案:
以下是关于整数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:123 答案:让我们一步一步思考,1、这个整数的长度一共是3位,按照中文的
语法,3位数字是百级别的数字;2、这个数字从高到低,各级别分别读作:一百、二十、三;3、从左到右依次可以写出一百二十三。所以答案是一百二十三。
以下是关于整数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:123456 答案:让我们一步一步思考,1、这个整数的长度一共是6位,按照中
文的语法,6位数字是十万级别的数字;2、这个数字从高到低,各级别分别读作:十万、二万、三千、四百、五十、六;3、从左到右依次可以写出十二万三千四百
五十六。所以答案是十二万三千四百五十六。
以下是关于整数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:123456789 答案:让我们一步一步思考,1、这个整数的长度一共是9位,按照
Few-shot CoT 中文的语法,9位数字是亿级别的数字;2、这个数字从高到低,各级别分别读作:一亿、二千万、三百万、四十万、五万、六千、七百、八十、九;3、从左到右依
次可以写出一亿二千三百四十五万六千七百八十九。所以答案是一亿二千三百四十五万六千七百八十九。
以下是关于整数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:123456789012 答案:让我们一步一步思考,1、这个整数的长度一共是12位,
按照中文的语法,12位数字是千亿级别的数字;2、这个数字从高到低,各级别分别读作:一千亿、二百亿、三十亿、四亿、五千万、六百万、七十万、八万、九千、
零、零、二;3、出现了连续两个零,那么只写一个零;4、从左到右依次可以写出一千二百三十四亿五千六百七十八万九千零二。所以答案是一千二百三十四亿五
千六百七十八万九千零二。
以下是关于整数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:XXX 答案:让我们一步一步思考,
Table 5: Prompt that convert integers to English and Chinese in Num2Words task, ‘XXX’ is a word or numerical
question in the dataset.
14280
-----
**Prompt that convert decimal into English in Num2Words**
Zero-shot **The following is a question about how to convert decimals into English pronunciation. Please provide the correct answer based on the numbers in the question.**
**Question:XXX Answer:**
Zero-shot CoT **The following is a question about how to convert decimals into English pronunciation. Please provide the correct answer based on the numbers in the question.**
**Question:XXX Answer: Let's think step by step,**
The following are questions about how to convert decimals into English pronunciation. Please provide the correct answer based on the numbers in the question. Question:123.456
Answer: one hundred twenty three point four five six.
The following are questions about how to convert decimals into English pronunciation. Please provide the correct answer based on the numbers in the question. Question:
123456.789012 Answer: one hundred twenty three thousand four hundred fifty six point seven eight nine zero one two.
The following are questions about how to convert decimals into English pronunciation. Please provide the correct answer based on the numbers in the question. Question:
Few-shot 123456789.987654321 Answer: one hundred twenty three million four hundred fifty six thousand seven hundred eighty nine point nine eight seven six five four three two one.
The following are questions about how to convert decimals into English pronunciation. Please provide the correct answer based on the numbers in the question. Question:
123456789012.210987654321 Answer: one hundred twenty three billion four hundred fifty six million seven hundred eighty nine thousand twelve point two one zero nine eight
seven six five four three two one.
**The following are questions about how to convert decimals into English pronunciation. Please provide the correct answer based on the numbers in the question. Question:**
**XXX Answer:**
The following are questions about how to convert decimals into English pronunciation. Please provide the correct answer based on the numbers in the question. Question:123.456
Answer: Let's think step by step, 1. This number has a decimal point, so it's a decimal.2. The integer part to the left of the decimal point is 123, which is a 3-digit integer. According
to English grammar, a 3-digit number is at the hundred level.3. So, the integer part from left to right can be written as one hundred twenty three.4. The decimal part to the right of
the decimal point is 456, which according to English grammar, is written out one digit at a time.5. So the decimal part from left to right can be written as four five six.6. The
decimal point is read as 'point'.So the answer is one hundred twenty three point four five six.
The following are questions about how to convert decimals into English pronunciation. Please provide the correct answer based on the numbers in the question. Question:
123456.789012 Answer: Let's think step by step, 1. This number has a decimal point, so it's a decimal.2. The integer part to the left of the decimal point is 123456, which is a 6digit integer. According to English grammar, a 6-digit number is at the hundred-thousand level.3. So, the integer part from left to right can be written as one hundred twenty three
thousand four hundred fifty six.4. The decimal part to the right of the decimal point is 789012, which according to English grammar, is written out one digit at a time.5. So the
decimal part from left to right can be written as seven eight nine zero one two.6. The decimal point is read as 'point'.So the answer is one hundred twenty three thousand four
hundred fifty six point seven eight nine zero one two.
The following are questions about how to convert decimals into English pronunciation. Please provide the correct answer based on the numbers in the question. Question:
123456789.987654321 Answer: Let's think step by step, 1. This number has a decimal point, so it's a decimal.2. The integer part to the left of the decimal point is 123456789,
Few-shot CoT
which is a 9-digit integer. According to English grammar, a 9-digit number is at the hundred-million level.3. So, the integer part from left to right can be written as one hundred
twenty-three million four hundred fifty-six thousand seven hundred eighty-nine.4. The decimal part to the right of the decimal point is 987654321, which according to English
grammar, is written out one digit at a time.5. So the decimal part from left to right can be written as nine eight seven six five four three two one.6. The decimal point is read as
'point'.So the answer is one hundred twenty-three million four hundred fifty-six thousand seven hundred eighty-nine point nine eight seven six five four three two one.
The following are questions about how to convert decimals into English pronunciation. Please provide the correct answer based on the numbers in the question. Question:
123456789012.210987654321 Answer: Let's think step by step, 1. This number has a decimal point, so it's a decimal.2. The integer part to the left of the decimal point is
123456789012, which is a 12-digit integer. According to English grammar, a 12-digit number is at the hundred-billion level.3. So, the integer part from left to right can be written
as one hundred twenty-three billion four hundred fifty-six million seven hundred eighty-nine thousand twelve.4. The decimal part to the right of the decimal point is 210987654321,
which according to English grammar, is written out one digit at a time.5. So the decimal part from left to right can be written as two one zero nine eight seven six five four three two
one.6. The decimal point is read as 'point'.So the answer is one hundred twenty-three billion four hundred fifty-six million seven hundred eighty-nine thousand twelve point two one
zero nine eight seven six five four three two one.
**The following are questions about how to convert decimals into English pronunciation. Please provide the correct answer based on the numbers in the question. Question:**
**XXX Answer: Let's think step by step,**
**Prompt that convert decimal into Chinese in Num2Words**
Zero-shot 以下是关于小数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:XXX 答案:
Zero-shot CoT 以下是关于小数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:XXX 答案:让我们一步一步思考,
以下是关于小数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:123.456 答案:一百二十三点四五六
以下是关于小数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:123456.789012 答案:十二万三千四百五十六点七八九零一二
以下是关于小数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:123456789.987654321 答案:一亿二千三百四十五万六千七百八十九点九八七
Few-shot 六五四三二一
以下是关于小数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:123456789012.210987654321 答案:一千二百三十四亿五千六百七十八万九千
零一十二点二一零九八七六五四三二一
以下是关于小数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:XXX 答案:
以下是关于小数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:123.456 答案:让我们一步一步思考,1、这个数字有小数点,所以这是一个小
数;2、小数点左侧的整数部分是123,这是一个长度为3的整数,按照中文的语法,3位数字是百级别的数字;3、所以整数部分从左到右依次可以写出一百二十三;
4、小数右侧的小数部分是456,按照中文的语法,小数部分一个一个数字写出来即可;5、所以小数部分从左到右依次可以写出四五六;6、小数点读作点;所以答
案是一百二十三点四五六。
以下是关于小数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:123456.789012 答案:让我们一步一步思考,1、这个数字有小数点,所以这是
一个小数;2、小数点左侧的整数部分是123456,这是一个长度为6的整数,按照中文的语法,6位数字是十万级别的数字;3、所以整数部分从左到右依次可以写出
十二万三千四百五十六;4、小数右侧的小数部分是789012,按照中文的语法,小数部分一个一个数字写出来即可;5、所以小数部分从左到右依次可以写出七八九
零一二;6、小数点读作点;所以答案是十二万三千四百五十六点七八九零一二。
Few-shot CoT 以下是关于小数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:123456789.987654321 以这是一个小数;2、小数点左侧的整数部分是123456789,这是一个长度为9的整数,按照中文的语法,9位数字是亿级别的数字;3、所以整数部分从左到右依次可答案:让我们一步一步思考,1、这个数字有小数点,所
以写出一亿二千三百四十五万六千七百八十九;4、小数右侧的小数部分是987654321,按照中文的语法,小数部分一个一个数字写出来即可;5、所以小数部分从左
到右依次可以写出九八七六五四三二一;6、小数点读作点;所以答案是一亿二千三百四十五万六千七百八十九点九八七六五四三二一。
以下是关于小数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:123456789012.210987654321 答案:让我们一步一步思考,1、这个数字有小数
点,所以这是一个小数;2、小数点左侧的整数部分是123456789012,这是一个长度为12的整数,按照中文的语法,12位数字是千亿级别的数字;3、所以整数部分
从左到右依次可以写出一千二百三十四亿五千六百七十八万九千零一十二;4、小数右侧的小数部分是210987654321,按照中文的语法,小数部分一个一个数字写出
来即可;5、所以小数部分从左到右依次可以写出二一零九八七六五四三二一;6、小数点读作点;所以答案是一千二百三十四亿五千六百七十八万九千零一十二点
二一零九八七六五四三二一。
以下是关于小数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:XXX 答案:让我们一步一步思考,
Table 6: Prompt that convert decimals to English and Chinese in Num2Words task, ‘XXX’ is a word or numerical
question in the dataset.
14281
-----
**Prompt that convert fraction into English in Num2Words**
Zero-shot **The following is a question about how to convert fractions into English pronunciation. Please provide the correct answer based on the numbers in the**
**question.Question:XXX Answer:**
Zero-shot CoT **The following is a question about how to convert fractions into English pronunciation. Please provide the correct answer based on the numbers in the**
**question.Question:XXX Answer: Let's think step by step,**
The following are questions about how to convert fractions into English pronunciation. Please provide the correct answer based on the numbers in the question.Question:123/456
Answer:one hundred twenty three over four hundred fifty six.
The following are questions about how to convert fractions into English pronunciation. Please provide the correct answer based on the numbers in the
question.Question:123456/789012 Answer: one hundred twenty three thousand four hundred fifty six over seven hundred eighty nine thousand twelve.
The following are questions about how to convert fractions into English pronunciation. Please provide the correct answer based on the numbers in the
question.Question:123456789/987654321 Answer: one hundred twenty three million four hundred fifty six thousand seven hundred eighty nine over nine hundred eighty seven
Few-shot
million six hundred fifty four thousand three hundred twenty one.
The following are questions about how to convert fractions into English pronunciation. Please provide the correct answer based on the numbers in the
question.Question:123456789012/210987654321 Answer: one hundred twenty three billion four hundred fifty six million seven hundred eighty nine thousand twelve over two
hundred ten billion nine hundred eighty seven million six hundred fifty four thousand three hundred twenty one.
**The following are questions about how to convert fractions into English pronunciation. Please provide the correct answer based on the numbers in the**
**question.Question:XXX Answer:**
The following are questions about how to convert fractions into English pronunciation. Please provide the correct answer based on the numbers in the question.Question:123/456
Answer: Let's think step by step, 1. This number has a slash, so it's a fraction.2. The numerator to the left of the slash is 123, which is a 3-digit integer. According to English
grammar, a 3-digit number is at the hundred level.3. So the numerator from left to right can be written as one hundred twenty-three.4. The denominator to the right of the slash is
456, which is also a 3-digit integer. According to English grammar, a 3-digit number is at the hundred level.5. So the denominator from left to right can be written as four hundred
fifty-six.6. The slash is read as 'over'.So the answer is one hundred twenty-three over four hundred fifty-six.
The following are questions about how to convert fractions into English pronunciation. Please provide the correct answer based on the numbers in the
question.Question:123456/789012 Answer: Let's think step by step, 1. This number has a slash, so it's a fraction.2. The numerator to the left of the slash is 123456, which is a 6digit integer. According to English grammar, a 6-digit number is at the hundred-thousand level.3. So the numerator from left to right can be written as one hundred twenty-three
thousand four hundred fifty-six.4. The denominator to the right of the slash is 789012, which is also a 6-digit integer. According to English grammar, a 6-digit number is at the
hundred-thousand level.5. So the denominator from left to right can be written as seven hundred eighty-nine thousand twelve.6. The slash is read as 'over'.So the answer is one
hundred twenty-three thousand four hundred fifty-six over seven hundred eighty-nine thousand twelve.
The following are questions about how to convert fractions into English pronunciation. Please provide the correct answer based on the numbers in the
question.Question:123456789/987654321 Answer: Let's think step by step, 1. This number has a slash, so it's a fraction.2. The numerator to the left of the slash is 123456789,
which is a 9-digit integer. According to English grammar, a 9-digit number is at the hundred-million level.3. So the numerator from left to right can be written as one hundred
Few-shot CoT
twenty-three million four hundred fifty-six thousand seven hundred eighty-nine.4. The denominator to the right of the slash is 987654321, which is also a 9-digit integer. According
to English grammar, a 9-digit number is at the hundred-million level.5. So the denominator from left to right can be written as nine hundred eighty-seven million six hundred fiftyfour thousand three hundred twenty-one.6. The slash is read as 'over'.So the answer is one hundred twenty-three million four hundred fifty-six thousand seven hundred eighty-nine
over nine hundred eighty-seven million six hundred fifty-four thousand three hundred twenty-one.
The following are questions about how to convert fractions into English pronunciation. Please provide the correct answer based on the numbers in the
question.Question:123456789012/210987654321 Answer: Let's think step by step, 1. This number has a slash, so it's a fraction.2. The numerator to the left of the slash is
123456789012, which is a 12-digit integer. According to English grammar, a 12-digit number is at the hundred-billion level.3. So the numerator from left to right can be written as
one hundred twenty-three billion four hundred fifty-six million seven hundred eighty-nine thousand twelve.4. The denominator to the right of the slash is 210987654321, which is
also a 12-digit integer. According to English grammar, a 12-digit number is at the hundred-billion level.5. So the denominator from left to right can be written as two hundred ten
billion nine hundred eighty-seven million six hundred fifty-four thousand three hundred twenty-one.6. The slash is read as 'over'.So the answer is one hundred twenty-three billion
four hundred fifty-six million seven hundred eighty-nine thousand twelve over two hundred ten billion nine hundred eighty-seven million six hundred fifty-four thousand three
hundred twenty-one.
**The following are questions about how to convert fractions into English pronunciation. Please provide the correct answer based on the numbers in the**
**question.Question:XXX Answer: Let's think step by step,**
**Prompt that convert fraction into Chinese in Num2Words**
Zero-shot 以下是关于分数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:XXX 答案:
Zero-shot CoT 以下是关于分数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:XXX 答案:让我们一步一步思考,
以下是关于分数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:123/456 答案:四百五十六分之一百二十三
以下是关于分数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:123456/789012 答案:七十八万九千零一十二分之十二万三千四百五十六
以下是关于分数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:123456789/987654321 答案:九亿八千七百六十五万四千三百二十一分之一亿二
Few-shot 千三百四十五万六千七百八十九
以下是关于分数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:123456789012/210987654321 答案:二千一百零九亿八千七百六十五万四千三百
二十一分之一千二百三十四亿五千六百七十八万九千零一十二
以下是关于分数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:XXX 答案:
以下是关于分数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:123/456 答案:让我们一步一步思考,1、这个数字有斜杠,所以这是一个分数;
2、斜杠左侧的分子是123,这个整数的长度一共是3位,按照中文的语法,3位数字是百级别的数字;3、所以分子从左到右依次可以写出一百二十三;4、斜杠右侧的
分母是456,这个整数的长度一共是3位,按照中文的语法,3位数字是百级别的数字;5、所以分子从左到右依次可以写出四百五十六;6、斜杠读作分之;7、中文的
语法里,要先读分母,再读斜杠,再读分子;所以答案是四百五十六分之一百二十三。
以下是关于分数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:123456/789012 答案:让我们一步一步思考,1、这个数字有斜杠,所以这是一个
分数;2、斜杠左侧的分子是123456,这个整数的长度一共是6位,按照中文的语法,6位数字是十万级别的数字;3、所以分子从左到右依次可以写出十二万三千四百
五十六;4、斜杠右侧的分母是789012,这个整数的长度一共是6位,按照中文的语法,6位数字是十万级别的数字;5、所以分子从左到右依次可以写出七十八万九千
零一十二;6、斜杠读作分之;7、中文的语法里,要先读分母,再读斜杠,再读分子;所以答案是七十八万九千零一十二分之十二万三千四百五十六。
以下是关于分数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:123456789/987654321 答案:让我们一步一步思考,1、这个数字有斜杠,所以这
Few-shot CoT 是一个分数;2、斜杠左侧的分子是123456789,这个整数的长度一共是9位,按照中文的语法,9位数字是亿级别的数字;3、所以分子从左到右依次可以写出一亿二千
三百四十五万六千七百八十九;4、斜杠右侧的分母是987654321,这个整数的长度一共是9位,按照中文的语法,9位数字是亿级别的数字;5、所以分子从左到右依次
可以写出九亿八千七百六十五万四千三百二十一;6、斜杠读作分之;7、中文的语法里,要先读分母,再读斜杠,再读分子;所以答案是九亿八千七百六十五万四千
三百二十一分之一亿二千三百四十五万六千七百八十九。
以下是关于分数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:123456789012/210987654321 答案:让我们一步一步思考,1、这个数字有斜杠,
所以这是一个分数;2、斜杠左侧的分子是123456789012,这个整数的长度一共是12位,按照中文的语法,12位数字是千亿级别的数字;3、所以分子从左到右依次可
以写出一千二百三十四亿五千六百七十八万九千零一十二;4、斜杠右侧的分母是210987654321,这个整数的长度一共是12位,按照中文的语法,12位数字是千亿级别
的数字;5、所以分子从左到右依次可以写出二千一百零九亿八千七百六十五万四千三百二十一;6、斜杠读作分之;7、中文的语法里,要先读分母,再读斜杠,再读
分子;所以答案是二千一百零九亿八千七百六十五万四千三百二十一分之一千二百三十四亿五千六百七十八万九千零一十二。
以下是关于分数转为中文读法的题目,请根据题目的数字,给出正确的答案。题目:XXX 答案:让我们一步一步思考,
Table 7: Prompt that convert fractions to English and Chinese in Num2Words task, ‘XXX’ is a word or numerical
question in the dataset.
14282
-----
**Prompt that convert English into integer in Words2Num**
**The following is a question about how to convert English pronunciation into integers. Please provide the correct answer based on the numbers in the question. Question:**
Zero-shot
**XXX Answer:**
**The following is a question about how to convert English pronunciation into integers. Please provide the correct answer based on the numbers in the question. Question:**
Zero-shot CoT
**XXX Answer: Let's think step by step,**
The following are questions about how to convert English pronunciation into integers. Please provide the correct answer based on the numbers in the question. Question: One
hundred twenty-three Answer: 123
The following are questions about how to convert English pronunciation into integers. Please provide the correct answer based on the numbers in the question. Question: One
hundred twenty-three thousand four hundred fifty-six Answer: 123456
The following are questions about how to convert English pronunciation into integers. Please provide the correct answer based on the numbers in the question. Question: One
Few-shot
hundred twenty-three million four hundred fifty-six thousand seven hundred eighty-nine Answer: 123456789
The following are questions about how to convert English pronunciation into integers. Please provide the correct answer based on the numbers in the question. Question: One
hundred twenty-three billion four hundred fifty-six million seven hundred eighty-nine thousand twelve Answer: 123456789012
**The following are questions about how to convert English pronunciation into integers. Please provide the correct answer based on the numbers in the question. Question:**
**XXX Answer:**
The following are questions about how to convert English pronunciation into integers. Please give the correct answers based on the English words in the questions. Question: One
hundred twenty-three Answer: Let's think step by step, 1. Write it down in order from left to right, one hundred is written as 100, twenty is written as 20, three is written as 3; 2.
Add all the numbers above, 100 + 20 + 3 = 123; So, the answer is 123.
The following are questions about how to convert English pronunciation into integers. Please give the correct answers based on the English words in the questions. Question: One
hundred twenty-three thousand four hundred fifty-six Answer: Let's think step by step, 1. Write it down in order from left to right, one hundred twenty-three thousand is written as
123000, four hundred fifty-six is written as 456; 2. Add all the numbers above, 123000 + 456 = 123456; So, the answer is 123456.
The following are questions about how to convert English pronunciation into integers. Please give the correct answers based on the English words in the questions. Question: One
hundred twenty-three million four hundred fifty-six thousand seven hundred eighty-nine Answer: Let's think step by step, 1. Write it down in order from left to right, one hundred
Few-shot CoT
twenty-three million is written as 123000000, four hundred fifty-six thousand is written as 456000, seven hundred eighty-nine is written as 789; 2. Add all the numbers above,
123000000 + 456000 + 789 = 123456789; So, the answer is 123456789.
The following are questions about how to convert English pronunciation into integers. Please give the correct answers based on the English words in the questions. Question: One
hundred twenty-three billion four hundred fifty-six million seven hundred eighty-nine thousand and twelve Answer: Let's think step by step, 1. Write it down in order from left to
right, one hundred twenty-three billion is written as 123000000000, four hundred fifty-six million is written as 456000000, seven hundred eighty-nine thousand is written as 789000,
twelve is written as 12; 2. Add all the numbers above, 123000000000 + 456000000 + 789000 + 12 = 123456789012; So, the answer is 123456789012.
**The following are questions about how to convert English pronunciation into integers. Please give the correct answers based on the English words in the questions.**
**Question: XXX Answer: Let's think step by step,**
**Prompt that convert Chinese into integer in Words2Num**
Zero-shot 以下是关于中文转为整数的题目,请根据题目的中文,给出正确的答案。题目:XXX 答案:
Zero-shot CoT 以下是关于中文转为整数的题目,请根据题目的中文,给出正确的答案。题目:XXX 答案:让我们一步一步思考,
以下是关于中文转为整数的题目,请根据题目的中文,给出正确的答案。题目:一百二十三 答案:123
以下是关于中文转为整数的题目,请根据题目的中文,给出正确的答案。题目:十二万三千四百五十六 答案:123456
Few-shot 以下是关于中文转为整数的题目,请根据题目的中文,给出正确的答案。题目:一亿二千三百四十五万六千七百八十九 答案:123456789
以下是关于中文转为整数的题目,请根据题目的中文,给出正确的答案。题目:一千二百三十四亿五千六百七十八万九千零一十二 答案:123456789012
以下是关于中文转为整数的题目,请根据题目的中文,给出正确的答案。题目:XXX 答案:
以下是关于中文转为整数的题目,请根据题目的中文,给出正确的答案。题目:一百二十三
答案:让我们一步一步思考,1、按照顺序从左到右写下来,一百写作100,二十写作20,三写作3;2、上面所有的数字相加,100+20+3=123;所以答案是123。
以下是关于中文转为整数的题目,请根据题目的中文,给出正确的答案。题目:十二万三千四百五十六 答案:让我们一步一步思考,1、按照顺序从左到右写下
来,十二万写作120000,三千四百五十六写作3456;2、上面所有的数字相加,120000+3456=123456所以答案是123456。
以下是关于中文转为整数的题目,请根据题目的中文,给出正确的答案。题目:一亿二千三百四十五万六千七百八十九 答案:让我们一步一步思考,1、按照顺
Few-shot CoT 序从左到右写下来,一亿写作100000000 ,二千三百四十五万写作23450000 ,六千七百八十九写作6789 ;2 、上面所有的数字相加,
100000000+23450000+6789=123456789;所以答案是123456789。
以下是关于中文转为整数的题目,请根据题目的中文,给出正确的答案。题目:一千二百三十四亿五千六百七十八万九千零一十二 答案:让我们一步一步思考,
1、按照顺序从左到右写下来,一千二百三十四亿写作123400000000,五千六百七十八万写作56780000,九千零一十二写作9012;2、上面所有的数字相加,
123400000000+56780000+9012=123456789012;所以答案是123456789012。
以下是关于中文转为整数的题目,请根据题目的中文,给出正确的答案。题目:XXX 答案:让我们一步一步思考,
Table 8: Prompt that convert English and Chinese to integer in Words2Num task, ‘XXX’ is a word or numerical
question in the dataset.
14283
-----
**Prompt that convert English into decimal in Words2Num**
**The following is a question about how to convert English pronunciation into decimals. Please provide the correct answer based on the numbers in the question. Question:**
Zero-shot
**XXX Answer:**
**The following is a question about how to convert English pronunciation into decimals. Please provide the correct answer based on the numbers in the question. Question:**
Zero-shot CoT
**XXX Answer: Let's think step by step,**
The following are questions about how to convert English pronunciation into decimals. Please provide the correct answer based on the numbers in the question. Question: one
hundred twenty three point four five six Answer:123.456
The following are questions about how to convert English pronunciation into decimals. Please provide the correct answer based on the numbers in the question. Question: one
hundred twenty three thousand four hundred fifty six point seven eight nine zero one two Answer:123456.789012
The following are questions about how to convert English pronunciation into decimals. Please provide the correct answer based on the numbers in the question. Question: one
hundred and twenty-three million four hundred and fifty-six thousand seven hundred and eighty-nine point nine eight seven six five four three two one
Few-shot
Answer:123456789.987654321
The following are questions about how to convert English pronunciation into decimals. Please provide the correct answer based on the numbers in the question. Question: one
hundred twenty three billion four hundred fifty six million seven hundred eighty nine thousand twelve point two one zero nine eight seven six five four three two one
Answer:123456789012.210987654321
**The following are questions about how to convert English pronunciation into decimals. Please provide the correct answer based on the numbers in the question.**
**Question: XXX Answer:**
The following are questions about how to convert English pronunciation into decimals. Please provide the correct answer based on the numbers in the question. Question: one
hundred twenty three point four five six Answer: Let's think step by step, 1. The presence of the character 'point' in the problem indicates that this is a decimal number;2. To the left
of 'point', 'one hundred and twenty-three' is the integer part, written as 123;3. To the right of 'point', 'four five six' is the decimal part, written as 456;4. The 'point' is written as '.';So
the answer is 123.456.
The following are questions about how to convert English pronunciation into decimals. Please provide the correct answer based on the numbers in the question. Question: one
hundred twenty three thousand four hundred fifty six point seven eight nine zero one two Answer: Let's think step by step, 1. The presence of the character 'point' in the problem
indicates that this is a decimal number;2. To the left of 'point', 'one hundred twenty-three thousand four hundred fifty-six' is the integer part, written as 123456;3. To the right of
'point', 'seven eight nine zero one two' is the decimal part, written as 789012;4. The 'point' is written as '.';So the answer is 123456.789012.
The following are questions about how to convert English pronunciation into decimals. Please provide the correct answer based on the numbers in the question. Question: one
hundred and twenty-three million four hundred and fifty-six thousand seven hundred and eighty-nine point nine eight seven six five four three two one Answer: Let's think step by
Few-shot CoT
step, 1. The presence of the character 'point' in the problem indicates that this is a decimal number;2. To the left of 'point', 'one hundred twenty-three million four hundred fifty-six
thousand seven hundred eighty-nine' is the integer part, written as 123456789;3. To the right of 'point', 'nine eight seven six five four three two one' is the decimal part, written as
987654321;4. The 'point' is written as '.';So the answer is 123456789.987654321.
The following are questions about how to convert English pronunciation into decimals. Please provide the correct answer based on the numbers in the question. Question: one
hundred twenty three billion four hundred fifty six million seven hundred eighty nine thousand twelve point two one zero nine eight seven six five four three two one Answer: Let's
think step by step, 1. The presence of the character 'point' in the problem indicates that this is a decimal number;2. To the left of 'point', 'one hundred twenty-three billion four
hundred fifty-six million seven hundred eighty-nine thousand twelve' is the integer part, written as 123456789012;3. To the right of 'point', 'two one zero nine eight seven six five
four three two one' is the decimal part, written as 210987654321;4. The 'point' is written as '.';So the answer is 123456789012.210987654321.
**The following are questions about how to convert English pronunciation into decimals. Please provide the correct answer based on the numbers in the question.**
**Question: XXX Answer: Let's think step by step,**
**Prompt that convert Chinese into decimal in Words2Num**
Zero-shot 以下是关于中文转为小数的题目,请根据题目的中文,给出正确的答案。题目:XXX 答案:
Zero-shot CoT 以下是关于中文转为小数的题目,请根据题目的中文,给出正确的答案。题目:XXX 答案:让我们一步一步思考,
以下是关于中文转为小数的题目,请根据题目的中文,给出正确的答案。题目:一百二十三点四五六 答案:123.456
以下是关于中文转为小数的题目,请根据题目的中文,给出正确的答案。题目:十二万三千四百五十六点七八九零一二 答案:123456.789012
以下是关于中文转为小数的题目,请根据题目的中文,给出正确的答案。题目:一亿二千三百四十五万六千七百八十九点九八七六五四三二一 答案:
Few-shot 123456789.987654321
以下是关于中文转为小数的题目,请根据题目的中文,给出正确的答案。题目:一千二百三十四亿五千六百七十八万九千零一十二点二一零九八七六五四三二一
答案:123456789012.210987654321
以下是关于中文转为小数的题目,请根据题目的中文,给出正确的答案。题目:XXX 答案:
以下是关于中文转为小数的题目,请根据题目的中文,给出正确的答案。题目:一百二十三点四五六 答案:让我们一步一步思考,1、题目里出现了'点'这个汉字,
说明这是一个小数;2、'点'字左边'一百二十三'的是整数部分,写作123;3、'点'字右边'四五六'的是小数部分,写作456;4、'点'字写作. ;所以答案是
123.456。
以下是关于中文转为小数的题目,请根据题目的中文,给出正确的答案。题目:十二万三千四百五十六点七八九零一二 答案:让我们一步一步思考,1、题目里出
现了'点'这个汉字,说明这是一个小数;2、'点'字左边'十二万三千四百五十六'的是整数部分,写作123456;3、'点'字右边'七八九零一二'的是小数部分,写作
789012;4、'点'字写作. ;所以答案是123456.789012。
Few-shot CoT 以下是关于中文转为小数的题目,请根据题目的中文,给出正确的答案。题目:一亿二千三百四十五万六千七百八十九点九八七六五四三二一一步思考,1、题目里出现了'点'这个汉字,说明这是一个小数;2、'点'字左边'一亿二千三百四十五万六千七百八十九'的是整数部分,写作123456789;3、'点'答案:让我们一步
字右边'九八七六五四三二一'的是小数部分,写作987654321;4、'点'字写作. ;所以答案是123456789.987654321。
以下是关于中文转为小数的题目,请根据题目的中文,给出正确的答案。题目:一千二百三十四亿五千六百七十八万九千零一十二点二一零九八七六五四三二一
答案:让我们一步一步思考,1、题目里出现了'点'这个汉字,说明这是一个小数;2、'点'字左边'一千二百三十四亿五千六百七十八万九千零一十二'的是整数部
分,写作123456789012;3、'点'字右边'二一零九八七六五四三二一'的是小数部分,写作210987654321;4、'点'字写作. ;所以答案是
123456789012.210987654321。
以下是关于中文转为小数的题目,请根据题目的中文,给出正确的答案。题目:XXX 答案:让我们一步一步思考,
Table 9: Prompt that convert English and Chinese to decimal in Words2Num task, ‘XXX’ is a word or numerical
question in the dataset.
14284
-----
**Prompt that convert English into fraction in Words2Num**
Zero-shot **The following is a question about how to convert English pronunciation into fractions. Please provide the correct answer based on the numbers in the question. Question:**
**XXX Answer:**
Zero-shot CoT **The following is a question about how to convert English pronunciation into fractions. Please provide the correct answer based on the numbers in the question. Question:**
**XXX Answer: Let's think step by step,**
The following is a question about how to convert English pronunciation into fractions. Please provide the correct answer based on the numbers in the question. Question: one
hundred twenty three over four hundred fifty six Answer:123/456
The following is a question about how to convert English pronunciation into fractions. Please provide the correct answer based on the numbers in the question. Question: one
hundred twenty three thousand four hundred fifty six over seven hundred eighty nine thousand twelve Answer:123456/789012
The following is a question about how to convert English pronunciation into fractions. Please provide the correct answer based on the numbers in the question. Question: one
hundred twenty three million four hundred fifty six thousand seven hundred eighty nine over nine hundred eighty seven million six hundred fifty four thousand three hundred
Few-shot
twenty one Answer:123456789/987654321
The following is a question about how to convert English pronunciation into fractions. Please provide the correct answer based on the numbers in the question. Question: one
hundred twenty three billion four hundred fifty six million seven hundred eighty nine thousand twelve over two hundred ten billion nine hundred eighty seven million six hundred
fifty four thousand three hundred twenty one Answer:123456789012/210987654321
**The following is a question about how to convert English pronunciation into fractions. Please provide the correct answer based on the numbers in the question. Question:**
**XXX Answer:**
The following is a question about how to convert English pronunciation into fractions. Please provide the correct answer based on the numbers in the question. Question: one
hundred twenty three over four hundred fifty six Answer: Let's think step by step, 1. The appearance of the term 'over' in the problem indicates that this is a fraction;2. To the left
of 'over', 'four hundred fifty-six' is the denominator, written as 456;3. To the right of 'over', 'one hundred twenty-three' is the numerator, written as 123;4. 'over' is written as '/';5.
When written as a fraction, the numerator is written first, followed by '/', and then the denominator;So the answer is 123/456.
The following is a question about how to convert English pronunciation into fractions. Please provide the correct answer based on the numbers in the question. Question: one
hundred twenty three thousand four hundred fifty six over seven hundred eighty nine thousand twelve Answer: Let's think step by step, 1. The appearance of the term 'over' in the
problem indicates that this is a fraction;2. To the left of 'over', 'seven hundred eighty-nine thousand twelve' is the denominator, written as 789012;3. To the right of 'over', 'one
hundred twenty-three thousand four hundred fifty-six' is the numerator, written as 123456;4. 'over' is written as '/';5. When written as a fraction, the numerator is written first,
followed by '/', and then the denominator;So the answer is 123456/789012.
The following is a question about how to convert English pronunciation into fractions. Please provide the correct answer based on the numbers in the question. Question: one
hundred twenty three million four hundred fifty six thousand seven hundred eighty nine over nine hundred eighty seven million six hundred fifty four thousand three hundred
Few-shot CoT twenty one Answer: Let's think step by step, 1. The appearance of the term 'over' in the problem indicates that this is a fraction;2. To the left of 'over', 'nine hundred eighty-seven
million six hundred fifty four thousand three hundred twenty-one' is the denominator, written as 987654321;3. To the right of 'over', 'one hundred twenty-three million four hundred
fifty-six thousand seven hundred eighty-nine' is the numerator, written as 123456789;4. 'over' is written as '/';5. When written as a fraction, the numerator is written first, followed
by '/', and then the denominator;So the answer is 123456789/987654321.
The following is a question about how to convert English pronunciation into fractions. Please provide the correct answer based on the numbers in the question. Question: one
hundred twenty three billion four hundred fifty six million seven hundred eighty nine thousand twelve over two hundred ten billion nine hundred eighty seven million six hundred
fifty four thousand three hundred twenty one Answer: Let's think step by step, 1. The appearance of the term 'over' in the problem indicates that this is a fraction;2. To the left of
'over', 'two hundred ten billion nine hundred eighty-seven million six hundred fifty-four thousand three hundred twenty-one' is the denominator, written as 210987654321;3. To the
right of 'over', 'one hundred twenty-three billion four hundred fifty-six million seven hundred eighty-nine thousand twelve' is the numerator, written as 123456789012;4. 'over' is
written as '/';5. When written as a fraction, the numerator is written first, followed by '/', and then the denominator;So the answer is 123456789012/210987654321.
**The following is a question about how to convert English pronunciation into fractions. Please provide the correct answer based on the numbers in the question. Question:**
**XXX Answer: Let's think step by step,**
**Prompt that convert Chinese into fraction in Words2Num**
Zero-shot 以下是关于中文转为分数的题目,请根据题目的中文,给出正确的答案。题目:XXX 答案:
Zero-shot CoT 以下是关于中文转为分数的题目,请根据题目的中文,给出正确的答案。题目:XXX 答案:让我们一步一步思考,
以下是关于中文转为分数的题目,请根据题目的中文,给出正确的答案。题目:四百五十六分之一百二十三 答案:123/456
以下是关于中文转为分数的题目,请根据题目的中文,给出正确的答案。题目:七十八万九千零一十二分之十二万三千四百五十六 答案:123456/789012
以下是关于中文转为分数的题目,请根据题目的中文,给出正确的答案。题目:九亿八千七百六十五万四千三百二十一分之一亿二千三百四十五万六千七百八十
Few-shot 九 答案:123456789/987654321
以下是关于中文转为分数的题目,请根据题目的中文,给出正确的答案。题目:二千一百零九亿八千七百六十五万四千三百二十一分之一千二百三十四亿五千六
百七十八万九千零一十二 答案:123456789012/210987654321
以下是关于中文转为分数的题目,请根据题目的中文,给出正确的答案。题目:XXX 答案:
以下是关于中文转为分数的题目,请根据题目的中文,给出正确的答案。题目:四百五十六分之一百二十三 答案:让我们一步一步思考,1、题目里出现了'分之
'这个词,说明这是一个分数;2、'分之'左边的'四百五十六'是分母,写作456;3、'分之'右边的'一百二十三'是分子,写作123;4、'分之'写作/ ;5、写为分
数的时候,从左到右要先写分子,再写/,最后写分母;所以答案是123/456。
以下是关于中文转为分数的题目,请根据题目的中文,给出正确的答案。题目:七十八万九千零一十二分之十二万三千四百五十六 答案:让我们一步一步思考,
1、题目里出现了'分之'这个词,说明这是一个分数;2、'分之'左边的'七十八万九千零一十二'是分母,写作789012;3、'分之'右边的'十二万三千四百五十六'
是分子,写作123456;4、'分之'写作/ ;5、写为分数的时候,从左到右要先写分子,再写/,最后写分母;所以答案是123456/789012。
以下是关于中文转为分数的题目,请根据题目的中文,给出正确的答案。题目:九亿八千七百六十五万四千三百二十一分之一亿二千三百四十五万六千七百八十
Few-shot CoT 九 答案:让我们一步一步思考,1、题目里出现了'分之'这个词,说明这是一个分数;2、'分之'左边的'九亿八千七百六十五万四千三百二十一'是分母,写作
987654321;3、'分之'右边的'一亿二千三百四十五万六千七百八十九'是分子,写作123456789;4、'分之'写作/ ;5、写为分数的时候,从左到右要先写分子,
再写/,最后写分母;所以答案是123456789/987654321。
以下是关于中文转为分数的题目,请根据题目的中文,给出正确的答案。题目:二千一百零九亿八千七百六十五万四千三百二十一分之一千二百三十四亿五千六
百七十八万九千零一十二 答案:让我们一步一步思考,1、题目里出现了'分之'这个词,说明这是一个分数;2、'分之'左边的'二千一百零九亿八千七百六十五万
四千三百二十一'是分母,写作210987654321;3、'分之'右边的'一千二百三十四亿五千六百七十八万九千零一十二'是分子,写作123456789012;4、'分之'写作
/ ;5、写为分数的时候,从左到右要先写分子,再写/,最后写分母;所以答案是123456789012/210987654321。
以下是关于中文转为分数的题目,请根据题目的中文,给出正确的答案。题目:XXX 答案:让我们一步一步思考,
Table 10: Prompt that convert English and Chinese to fraction in Words2Num task, ‘XXX’ is a word or numerical
question in the dataset.
14285
-----
**Prompt of English unit conversion in Unit of Measurement**
Zero-shot **The following is a question about unit conversion. Please provide the correct answer at the question mark according to the question. question: XXX answer:**
**The following is a question about unit conversion. Please provide the correct answer at the question mark according to the question. question:XXX answer: Let's think**
Zero-shot CoT **step by step,**
The following is a question about unit conversion. Please provide the correct answer at the question mark according to the question. question:5900 meters= ?centimeters answer:
590000 centimeters.
The following is a question about unit conversion. Please provide the correct answer at the question mark according to the question. question:479 minutes - 630
seconds= ?seconds answer: 28110 seconds.
Few-shot
The following is a question about unit conversion. Please provide the correct answer at the question mark according to the question. question:7 tons 54 kilogram + 68
kilogram= ?tons ?kilograms answer: 7 tons 122 kilograms.
**The following is a question about unit conversion. Please provide the correct answer at the question mark according to the question. question:XXX** **answer:**
The following is a question about unit conversion. Please provide the correct answer at the question mark according to the question. question:5900 meters= ?centimeter answer:
Let's think step by step, The unit on the right side of the question mark is centimeters. Since 1 meter=100 centimeters, 5900 meters=5900 * 100 centimeters, so the answer is
590000 centimeters.
The following is a question about unit conversion. Please provide the correct answer at the question mark according to the question. question:479 minutes - 630
seconds= ?seconds answer: Let's think step by step, The unit on the right side of the question mark is seconds. Since 1 minute equals 60 seconds, 479 minutes=479 * 60
seconds=28740 seconds. Because 28740 seconds - 630 seconds=28110 seconds, the answer is 28110 seconds.
Few-shot CoT
The following is a question about unit conversion. Please provide the correct answer at the question mark according to the question. question:7 tons 54 kilogram + 68
kilogram= ?tons ?kilograms answer: Let's think step by step, The units on the right side of the question mark are tons and kilograms. Since 1 ton=1000 kilograms, 7 tons and 54
kilograms=7 * 1000+54=7054 kilograms. Because 7054 kg+68 kg=7122 kg, and 1 kg=1/1000 tons, 7122 kg=7 tons 122 kg.
**The following is a question about unit conversion. Please provide the correct answer at the question mark according to the question. question:XXX** **answer: Let's**
**think step by step,**
The following is a question about unit conversion. Please provide the correct answer at the question mark according to the question. question:5900 meters= ?centimeter answer:
Let's think step by step, Firstly, it is necessary to understand the following common knowledge about unit conversion: 1 ton=1000 kilograms; 1 kilogram=1000 grams; 1
gram=1000 milligrams; 1 week=7 days; 1 day=24 hours; 1 hour=60 minutes; 1 minute=60 seconds; 1 second=1000 milliseconds; 1 kilometer=1000 meters; 1 meter=10 decimeters;
1 decimeter=10 centimeters; 1 centimeter=10 millimeters; 1 yuan=10 jiao; 1 jiao=10 cents. The unit on the right side of the question mark is centimeters. Since 1 meter=100
centimeters, 5900 meters=5900 * 100 centimeters, so the answer is 590000 centimeters.
The following is a question about unit conversion. Please provide the correct answer at the question mark according to the question. question:479 minutes - 630
seconds= ?seconds answer: Let's think step by step, Firstly, it is necessary to understand the following common knowledge about unit conversion: 1 ton=1000 kilograms; 1
kilogram=1000 grams; 1 gram=1000 milligrams; 1 week=7 days; 1 day=24 hours; 1 hour=60 minutes; 1 minute=60 seconds; 1 second=1000 milliseconds; 1 kilometer=1000 meters;
1 meter=10 decimeters; 1 decimeter=10 centimeters; 1 centimeter=10 millimeters; 1 yuan=10 jiao; 1 jiao=10 cents. The unit on the right side of the question mark is seconds. Since
Few-shot CoT 1 minute equals 60 seconds, 479 minutes=479 * 60 seconds=28740 seconds. Because 28740 seconds - 630 seconds=28110 seconds, the answer is 28110 seconds.
with knowledge
The following is a question about unit conversion. Please provide the correct answer at the question mark according to the question. question:7 tons 54 kilogram + 68
kilogram= ?tons ?kilograms answer: Let's think step by step, Firstly, it is necessary to understand the following common knowledge about unit conversion: 1 ton=1000 kilograms;
1 kilogram=1000 grams; 1 gram=1000 milligrams; 1 week=7 days; 1 day=24 hours; 1 hour=60 minutes; 1 minute=60 seconds; 1 second=1000 milliseconds; 1 kilometer=1000
meters; 1 meter=10 decimeters; 1 decimeter=10 centimeters; 1 centimeter=10 millimeters; 1 yuan=10 jiao; 1 jiao=10 cents. The units on the right side of the question mark are tons
and kilograms. Since 1 ton=1000 kilograms, 7 tons and 54 kilograms=7 * 1000+54=7054 kilograms. Because 7054 kg+68 kg=7122 kg, and 1 kg=1/1000 tons, 7122 kg=7 tons 122 kg.
**The following is a question about unit conversion. Please provide the correct answer at the question mark according to the question. question:XXX** **answer: Let's**
**think step by step, Firstly, it is necessary to understand the following common knowledge about unit conversion: 1 ton=1000 kilograms; 1 kilogram=1000 grams; 1**
**gram=1000 milligrams; 1 week=7 days; 1 day=24 hours; 1 hour=60 minutes; 1 minute=60 seconds; 1 second=1000 milliseconds; 1 kilometer=1000 meters; 1 meter=10**
**decimeters; 1 decimeter=10 centimeters; 1 centimeter=10 millimeters; 1 yuan=10 jiao; 1 jiao=10 cents.**
**Prompt of Chinese unit conversion in Unit of Measurement**
Zero-shot 以下是关于单位转化的题目,请根据题目,给出问号处的正确答案。题目:XXX 答案:
Zero-shot CoT 以下是关于单位转化的题目,请根据题目,给出问号处的正确答案。题目:XXX 答案:让我们一步一步思考,
以下是关于单位转化的题目,请根据题目,给出问号处的正确答案。题目:5900米=?厘米 答案:590000厘米
以下是关于单位转化的题目,请根据题目,给出问号处的正确答案。题目:479分钟- 630秒钟= ?秒钟 答案:28110秒钟
Few-shot
以下是关于单位转化的题目,请根据题目,给出问号处的正确答案。题目:7吨54千克+ 68千克= ?吨?千克 答案:7吨122千克
以下是关于单位转化的题目,请根据题目,给出问号处的正确答案。题目:XXX 答案:
以下是关于单位转化的题目,请根据题目,给出问号处的正确答案。题目:5900米=?厘米 答案:让我们一步一步思考,问号右侧的单位是厘米,由于1米=100厘
米,所以5900米=5900*100厘米,所以答案是590000厘米。
以下是关于单位转化的题目,请根据题目,给出问号处的正确答案。题目:479分钟- 630秒钟= ?秒钟 答案:让我们一步一步思考,问号右侧的单位是秒钟,
Few-shot CoT 由于1分钟=60秒钟,因此479分钟=479*60秒钟=28740秒钟。又因为28740秒钟-630秒钟=28110秒钟,所以答案是28110秒钟。
以下是关于单位转化的题目,请根据题目,给出问号处的正确答案。题目:7吨54千克+ 68千克= ?吨?千克 答案:让我们一步一步思考,问号右侧的单位是吨
和千克,由于1吨=1000千克,因此7吨54千克=7*1000+54=7054千克。因为7054千克+68千克=7122千克,且1千克=1/1000吨,所以7122千克=7吨122千克。
以下是关于单位转化的题目,请根据题目,给出问号处的正确答案。题目:XXX 答案:让我们一步一步思考,
以下是关于单位转化的题目,请根据题目,给出问号处的正确答案。题目:5900米=?厘米 答案:让我们一步一步思考,首先需要了解如下的单位转化常识:1吨
=1000千克;1千克=1000克;1克=1000毫克;1周=7天;1天=24小时;1小时=60分钟;1分钟=60秒钟;1秒钟=1000毫秒;1千米=1000米;1米=10分米;1分米=10厘米;
1厘米=10毫米;1元=10角;1角=10分钱。问号右侧的单位是厘米,由于1米=100厘米,所以5900米=5900*100厘米,所以答案是590000厘米。
以下是关于单位转化的题目,请根据题目,给出问号处的正确答案。题目:479分钟- 630秒钟= ?秒钟 答案:让我们一步一步思考,首先需要了解如下的单位
转化常识:1吨=1000千克;1千克=1000克;1克=1000毫克;1周=7天;1天=24小时;1小时=60分钟;1分钟=60秒钟;1秒钟=1000毫秒;1千米=1000米;1米=10分米;
1分米=10厘米;1厘米=10毫米;1元=10角;1角=10分钱。问号右侧的单位是秒钟,由于1分钟=60秒钟,因此479分钟=479*60秒钟=28740秒钟。又因为28740秒钟-630
Few-shot CoT 秒钟=28110秒钟,所以答案是28110秒钟。
with knowledge 以下是关于单位转化的题目,请根据题目,给出问号处的正确答案。题目:7吨54千克+ 68千克= ?吨?千克 答案:让我们一步一步思考,首先需要了解如下的
单位转化常识:1吨=1000千克;1千克=1000克;1克=1000毫克;1周=7天;1天=24小时;1小时=60分钟;1分钟=60秒钟;1秒钟=1000毫秒;1千米=1000米;1米=10分
米;1分米=10厘米;1厘米=10毫米;1元=10角;1角=10分钱。问号右侧的单位是吨和千克,由于1吨=1000千克,因此7吨54千克=7*1000+54=7054千克。因为7054千
克+68千克=7122千克,且1千克=1/1000吨,所以7122千克=7吨122千克。
以下是关于单位转化的题目,请根据题目,给出问号处的正确答案。题目:{question} 答案:让我们一步一步思考,首先需要了解如下的单位转化常识:1吨
=1000千克;1千克=1000克;1克=1000毫克;1周=7天;1天=24小时;1小时=60分钟;1分钟=60秒钟;1秒钟=1000毫秒;1千米=1000米;1米=10分米;1分米=10厘米;
1厘米=10毫米;1元=10角;1角=10分钱。
Table 11: Prompt of English and Chinese unit conversion in Units of Measurement task, ‘XXX’ is a word or
numerical question in the dataset.
14286
-----
**Prompt of calculate English Math Problems of MWP**
**The following are questions containing unit and numerical calculations. Please provide the correct answers based on the questions.**
Zero-shot
**Question:XXX Answer:**
Zero-shot **The following are questions containing unit and numerical calculations. Please provide the correct answers based on the questions.**
CoT **Question:XXX Answer:Let's think step by step,**
The following are questions containing unit and numerical calculations. Please provide the correct answers based on the questions. Question:
Mike picked 7 apples, Nancy picked 3 apples, and Keith picked 6 apples and 4 pears, at the farm . How many apples were picked in total ?
Answer:16.
The following are questions containing unit and numerical calculations. Please provide the correct answers based on the questions. Question:
Tammy drove 55 miles in one hour. At that rate, how far can she drive in 36 hours? Answer:1980.
Few-shot
The following are questions containing unit and numerical calculations. Please provide the correct answers based on the questions. Question:A
trivia team had 15 members total, but during a game 6 members didn't show up. If each member that did show up scored 3 points, how many
points were scored total? Answer:27.
**The following are questions containing unit and numerical calculations. Please provide the correct answers based on the questions.**
**Question:XXX Answer:**
The following are questions containing unit and numerical calculations. Please provide the correct answers based on the questions. Question:
Mike picked 7 apples, Nancy picked 3 apples, and Keith picked 6 apples and 4 pears, at the farm . How many apples were picked in total ?
Answer:Let's think step by step, Due to Mike having 7 apples, Nancy having 3 apples, and Keith having 6 apples, there are a total of 7+3+6=16
apples picked.
The following are questions containing unit and numerical calculations. Please provide the correct answers based on the questions. Question:
Tammy drove 55 miles in one hour. At that rate, how far can she drive in 36 hours? Answer:Let's think step by step, Tammy can drive 55
miles per hour, with a total distance of 36 hours, totaling 55 * 36=1980 miles. The answer is that it can drive 1980 miles.
Few-shot
CoT
The following are questions containing unit and numerical calculations. Please provide the correct answers based on the questions. Question:A
trivia team had 15 members total, but during a game 6 members didn't show up. If each member that did show up scored 3 points, how many
points were scored total? Answer:Let's think step by step, There are a total of 15 members in the team, and 6 members did not appear,
indicating the presence of 9 members from 15-6. Each member who appears has 3 points, so a total of 9 members scored 9 * (15-9)=27 points,
and the answer is 27 points.
**The following are questions containing unit and numerical calculations. Please provide the correct answers based on the questions.**
**Question:XXX Answer:Let's think step by step,**
**Prompt of calculate Chinese Math Problems of MWP**
Zero-shot 以下是含有单位和数字计算的题目,请根据题目,给出正确的答案。 题目:XXX 答案:
Zero-shot
以下是含有单位和数字计算的题目,请根据题目,给出正确的答案。 题目:XXX 答案:让我们一步一步思考,
CoT
以下是含有单位和数字计算的题目,请根据题目,给出正确的答案。 题目:在农场,迈克摘了7个苹果,南希摘了3个,基思摘了6
个苹果和4个梨。一共摘了多少个苹果? 答案:16。
以下是含有单位和数字计算的题目,请根据题目,给出正确的答案。 题目:塔米一小时开了55英里。照这样下去,她36小时能开多
远? 答案:1980。
Few-shot
以下是含有单位和数字计算的题目,请根据题目,给出正确的答案。 题目:一个琐事小组总共有15名成员,但在一场比赛中有6名
成员没有出现。如果每个出现的成员都得了3分,总共得了多少分? 答案:27。
以下是含有单位和数字计算的题目,请根据题目,给出正确的答案。 题目:XXX 答案:
以下是含有单位和数字计算的题目,请根据题目,给出正确的答案。 题目:在农场,迈克摘了7个苹果,南希摘了3个,基思摘了6
个苹果和4个梨。一共摘了多少个苹果? 答案:让我们一步一步思考,由于迈克有7个苹果,南希有3个苹果,基思有6个苹果,所
以一共有7+3+6=16,答案是一共摘了16个苹果。
以下是含有单位和数字计算的题目,请根据题目,给出正确的答案。 题目:塔米一小时开了55英里。照这样下去,她36小时能开多
Few-shot 远? 答案:让我们一步一步思考,塔米一小时能开55英里,开了36小时的总距离,一共有55*36=1980英里,答案是能开1980英里。
CoT
以下是含有单位和数字计算的题目,请根据题目,给出正确的答案。 题目:一个琐事小组总共有15名成员,但在一场比赛中有6名
成员没有出现。如果每个出现的成员都得了3分,总共得了多少分? 答案:让我们一步一步思考,小组一共有15名成员,6名成员
没有出现,说明出现了15-6=9名成员。每名出现的成员有3分,所以9名成员一共得了9*(15-9)=27分,答案是一共得了27分。
以下是含有单位和数字计算的题目,请根据题目,给出正确的答案。 题目:XXX 答案:让我们一步一步思考,
Table 12: Prompt of calculate English and Chinese math problems of MWPs, ‘XXX’ is a question in the dataset.
14287
-----
**Prompt of calculate Chinese Math Problems of SUANJING**
Zero-shot 以下是含有单位和关于汉字数字计算的题目,请根据题目,给出正确的答案。 题目:XXX 答案:
Zero-shot CoT 以下是含有单位和关于汉字数字计算的题目,请根据题目,给出正确的答案。 题目:XXX 答案:让我们一步一步思考,
以下是含有单位和关于汉字数字计算的题目,请根据题目,给出正确的答案。 题目:现在有七个人平分八又三分之一钱。问:每个人各得到多少钱? 答案:
人得一钱、二十一分钱之四。
以下是含有单位和关于汉字数字计算的题目,请根据题目,给出正确的答案。 题目:现在有一个直角梯形田,其中一面宽三十步,另外一面宽四十二步,高六十
Few-shot 四步。问这个田的面积是多少? 答案:九亩一百四十四步。
以下是含有单位和关于汉字数字计算的题目,请根据题目,给出正确的答案。 题目:现在有九块黄金和十一块白银,它们的质量相同。如果将其中一块黄金和白
银进行交换,那么黄金的重量就会轻十三两。问:每块黄金和每块白银的重量各是多少? 答案:金重二斤三两一十八铢,银重一斤十三两六铢。
以下是含有单位和关于汉字数字计算的题目,请根据题目,给出正确的答案。 题目:XXX 答案:
以下是含有单位和关于汉字数字计算的题目,请根据题目,给出正确的答案。 题目:现在有七个人平分八又三分之一钱。问:每个人各得到多少钱? 答案:
让我们一步一步思考,我们首先要明确“八又三分之一”这个数字的确切意思。在这里,“八又三分之一”意味着8 加上1/3,等于25/3。这个问题是关于平均分
配的,所以我们需要将总金额除以人数。这里的总金额是25/3,平均每个人可以分到25/3除以7,这个结果用小数表示约等于1.19钱,用分数表示是25/21钱,或表
示为一又二十一分之四钱,或表示为一钱二十一分钱之四。
以下是含有单位和关于汉字数字计算的题目,请根据题目,给出正确的答案。 题目:现在有一个直角梯形田,其中一面宽三十步,另外一面宽四十二步,高六十
四步。问这个田的面积是多少? 答案:让我们一步一步思考,要计算直角梯形的面积,我们可以使用公式:梯形面积=(上底+下底)* 高/ 2。在这个问题中,
梯形的上底和下底是两个不同的边。由于梯形是直角梯形,我们可以直接使用这些边作为上底和下底。根据题目,这两边的长度分别是30 步和42 步,高是64
步。所以,梯形的面积可以计算如下:梯形面积=(30+42)* 64 / 2 = 2304步,由于1亩=240步,所以2304步又等于9亩144步,即九亩一百四十四步。
Few-shot CoT 以下是含有单位和关于汉字数字计算的题目,请根据题目,给出正确的答案。 题目:现在有九块黄金和十一块白银,它们的质量相同。如果将其中一块黄金和白
银进行交换,那么黄金的重量就会轻十三两。问:每块黄金和每块白银的重量各是多少? 答案:让我们一步一步思考,这是一个典型的数学问题,我们可以用方
程来解决。设每块黄金的重量为X两,每块白银的重量为Y两。根据题意,我们有9块黄金和11块白银,所以黄金部分的重量为9X,白银部分的重量为11Y。首先9块黄
金和11块白银的质量相同,所以9X = 11Y。当我们将一块黄金和一块白银进行交换后,黄金部分的重量变为9X - X + Y = 8X + Y,白银部分的重量变为11Y - Y +
X = 10Y + X。根据题意,新的黄金部分重量比新的白银重量轻十三两,即10Y + X - (8X + Y) = 13。现在我们可以通过9X = 11Y和10Y + X - (8X + Y) = 13这两
个方程,来解出每块黄金和每块白银的重量,让我们计算一下。首先,让我们处理第二个方程10Y + X - (8X + Y) = 13。我们可以简化它,然后使用第一个方程来
找到解。这个简化过程如下:先化简为10Y + X - 8X - Y = 13,再化简为9Y - 7X =13。接下来,我们可以利用第一个方程9X = 11Y来解9Y - 7X =13。求解之后,
我们可以得到以下结果:X(黄金的重量)是143/4两,即35.75两。Y(白银的重量)是117/4两,即29.25两。又因为1斤=16两,一两=24铢,所以金重35.75两等于
二斤三两一十八铢,银重29.25两等于一斤十三两六铢。
以下是含有单位和关于汉字数字计算的题目,请根据题目,给出正确的答案。 题目:XXX 答案:让我们一步一步思考,
以下是含有单位和关于汉字数字计算的题目,请根据题目,给出正确的答案。 题目:现在有七个人平分八又三分之一钱。问:每个人各得到多少钱? 答案:
让我们一步一步思考,这里我们要先了解一些知识,1秒=10忽;1毫=10秒;1厘=10毫;1分=10厘;一寸=10分;1尺=10存;1丈=10尺;1引=10丈;1端=50尺;1疋=40
尺;1匹=1疋;1步=6尺;1顷=100亩;1亩=240步;1亩=240积步;1里=300步;1里=375亩;1两=24铢;1斤=16两;1钧=30斤;1石=4钧;1圭=6粟;1抄=10圭;1撮=10
抄;1勺=10撮;1合=10勺;1升=10合;1斗=10升;1斛=10斗;1秉=16斛;50栗米=30粝米;50栗米=27粺米;50栗米=24凿米;50栗米=21御米;50栗米=13.5小䵂;50
栗米=54大䵂;50栗米=75粝饭;50栗米=54粺饭;50栗米=48凿饭;50栗米=42御饭;50栗米=45菽;50栗米=45答;50栗米=45麻;50栗米=45麦;50栗米=60稻;50栗
米=63豉;50栗米=90豉;50栗米=115熟菽;50栗米=175蘖;1岁=354日;4穿地=5壤;4穿地=5坚;4穿地=5墟。我们首先要明确“八又三分之一”这个数字的确切意
思。在这里,“八又三分之一”意味着8 加上1/3,等于25/3。这个问题是关于平均分配的,所以我们需要将总金额除以人数。这里的总金额是25/3,平均每个人
可以分到25/3除以7,这个结果用小数表示约等于1.19钱,用分数表示是25/21钱,或表示为一又二十一分之四钱,或表示为一钱二十一分钱之四。
以下是含有单位和关于汉字数字计算的题目,请根据题目,给出正确的答案。 题目:现在有一个直角梯形田,其中一面宽三十步,另外一面宽四十二步,高六十
四步。问这个田的面积是多少? 答案:让我们一步一步思考,这里我们要先了解一些知识,1秒=10忽;1毫=10秒;1厘=10毫;1分=10厘;一寸=10分;1尺=10存;
1丈=10尺;1引=10丈;1端=50尺;1疋=40尺;1匹=1疋;1步=6尺;1顷=100亩;1亩=240步;1亩=240积步;1里=300步;1里=375亩;1两=24铢;1斤=16两;1钧=30斤;
1石=4钧;1圭=6粟;1抄=10圭;1撮=10抄;1勺=10撮;1合=10勺;1升=10合;1斗=10升;1斛=10斗;1秉=16斛;50栗米=30粝米;50栗米=27粺米;50栗米=24凿米;
50栗米=21御米;50栗米=13.5小䵂;50栗米=54大䵂;50栗米=75粝饭;50栗米=54粺饭;50栗米=48凿饭;50栗米=42御饭;50栗米=45菽;50栗米=45答;50栗米=45
麻;50栗米=45麦;50栗米=60稻;50栗米=63豉;50栗米=90豉;50栗米=115熟菽;50栗米=175蘖;1岁=354日;4穿地=5壤;4穿地=5坚;4穿地=5墟。要计算直角梯
形的面积,我们可以使用公式:梯形面积=(上底+下底)* 高/ 2。在这个问题中,梯形的上底和下底是两个不同的边。由于梯形是直角梯形,我们可以直接使用
这些边作为上底和下底。根据题目,这两边的长度分别是30 步和42 步,高是64 步。所以,梯形的面积可以计算如下:梯形面积=(30+42)* 64 / 2 = 2304步,
由于1亩=240步,所以2304步又等于9亩144步,即九亩一百四十四步。
Few-shot CoT
with knowledge 以下是含有单位和关于汉字数字计算的题目,请根据题目,给出正确的答案。 题目:现在有九块黄金和十一块白银,它们的质量相同。如果将其中一块黄金和白
银进行交换,那么黄金的重量就会轻十三两。问:每块黄金和每块白银的重量各是多少? 答案:让我们一步一步思考,这里我们要先了解一些知识,1秒=10忽;
1毫=10秒;1厘=10毫;1分=10厘;一寸=10分;1尺=10存;1丈=10尺;1引=10丈;1端=50尺;1疋=40尺;1匹=1疋;1步=6尺;1顷=100亩;1亩=240步;1亩=240积步;
1里=300步;1里=375亩;1两=24铢;1斤=16两;1钧=30斤;1石=4钧;1圭=6粟;1抄=10圭;1撮=10抄;1勺=10撮;1合=10勺;1升=10合;1斗=10升;1斛=10斗;1秉
=16斛;50栗米=30粝米;50栗米=27粺米;50栗米=24凿米;50栗米=21御米;50栗米=13.5小䵂;50栗米=54大䵂;50栗米=75粝饭;50栗米=54粺饭;50栗米=48凿饭;
50栗米=42御饭;50栗米=45菽;50栗米=45答;50栗米=45麻;50栗米=45麦;50栗米=60稻;50栗米=63豉;50栗米=90豉;50栗米=115熟菽;50栗米=175蘖;1岁=354
日;4穿地=5壤;4穿地=5坚;4穿地=5墟。这是一个典型的数学问题,我们可以用方程来解决。设每块黄金的重量为X两,每块白银的重量为Y两。根据题意,我们有
9块黄金和11块白银,所以黄金部分的重量为9X,白银部分的重量为11Y。首先9块黄金和11块白银的质量相同,所以9X = 11Y。当我们将一块黄金和一块白银进行交
换后,黄金部分的重量变为9X - X + Y = 8X + Y,白银部分的重量变为11Y - Y + X = 10Y + X。根据题意,新的黄金部分重量比新的白银重量轻十三两,即10Y +
X - (8X + Y) = 13。现在我们可以通过9X = 11Y和10Y + X - (8X + Y) = 13这两个方程,来解出每块黄金和每块白银的重量,让我们计算一下。首先,让我们处
理第二个方程10Y + X - (8X + Y) = 13。我们可以简化它,然后使用第一个方程来找到解。这个简化过程如下:先化简为10Y + X - 8X - Y = 13,再化简为9Y 7X =13。接下来,我们可以利用第一个方程9X = 11Y来解9Y - 7X =13。求解之后,我们可以得到以下结果:X(黄金的重量)是143/4两,即35.75两。Y(白银的重
量)是117/4两,即29.25两。又因为1斤=16两,1两=24铢,所以金重35.75两等于二斤三两一十八铢,银重29.25两等于一斤十三两六铢。
以下是含有单位和关于汉字数字计算的题目,请根据题目,给出正确的答案。 题目:XXX 答案:让我们一步一步思考,这里我们要先了解一些知识,1秒=10忽;
1毫=10秒;1厘=10毫;1分=10厘;一寸=10分;1尺=10存;1丈=10尺;1引=10丈;1端=50尺;1疋=40尺;1匹=1疋;1步=6尺;1顷=100亩;1亩=240步;1亩=240积步;
1里=300步;1里=375亩;1两=24铢;1斤=16两;1钧=30斤;1石=4钧;1圭=6粟;1抄=10圭;1撮=10抄;1勺=10撮;1合=10勺;1升=10合;1斗=10升;1斛=10斗;1秉
=16斛;50栗米=30粝米;50栗米=27粺米;50栗米=24凿米;50栗米=21御米;50栗米=13.5小䵂;50栗米=54大䵂;50栗米=75粝饭;50栗米=54粺饭;50栗米=48凿饭;
50栗米=42御饭;50栗米=45菽;50栗米=45答;50栗米=45麻;50栗米=45麦;50栗米=60稻;50栗米=63豉;50栗米=90豉;50栗米=115熟菽;50栗米=175蘖;1岁=354
日;4穿地=5壤;4穿地=5坚;4穿地=5墟。
Table 13: Prompt of calculate the modern version of mathematical problems of SUANJING, ‘XXX’ is a question in
the dataset.
14288
-----
**Zero-shot** **Zero-shot CoT** **Few-shot** **Few-shot CoT**
**ZH** **EN** **ZH** **EN** **ZH** **EN** **ZH** **EN**
**ChatGLM-6B Num2Words**
**Easy** **37.75** **8.25** **39.75** 1.00 **25.25** 6.25 **16.00** 4.25
**Medium** 28.75 6.50 22.75 **1.50** 20.00 **7.75** 7.00 4.50
**Hard** 7.50 2.00 3.75 0.00 5.00 1.25 1.25 1.00
**ChatGLM-6B Words2Num**
**Easy** 64.00 35.25 59.00 36.25 47.75 20.50 54.50 **38.50**
**Medium** **75.50** **50.25** **59.75** **40.25** **70.25** **45.75** **55.50** 32.50
**Hard** 17.75 15.50 4.25 2.75 20.75 21.75 14.75 9.50
**ERNIE-Bot-turbo Num2Words**
**Easy** **46.75** **20.00** **41.75** **11.50** 47.25 **54.50** 31.00 **45.50**
**Medium** 39.00 12.25 28.50 8.75 **48.25** 44.75 **33.25** 36.75
**Hard** 13.75 15.50 9.00 11.00 18.50 23.75 11.00 26.50
**ERNIE-Bot-turbo Words2Num**
**Easy** **76.75** 45.25 **72.50** 28.25 80.50 31.50 72.00 **63.50**
**Medium** 74.75 **57.00** 66.25 **29.50** **87.00** **67.75** **76.00** 51.00
**Hard** 23.00 24.25 21.75 13.50 42.25 42.00 36.75 36.50
**ChatGLM-Turbo Num2Words**
**Easy** **43.00** **50.00** **46.00** 43.75 **59.25** **47.50** **43.00** **43.75**
**Medium** 39.25 41.75 38.75 35.25 45.75 42.00 32.25 30.00
**Hard** 13.00 22.25 14.00 15.50 22.25 42.50 25.25 22.00
**ChatGLM-Turbo Words2Num**
**Easy** **95.75** **85.50** **69.25** **56.75** **96.25** 71.50 **84.25** **82.00**
**Medium** 80.50 61.25 53.00 40.00 88.50 **73.50** 62.50 38.50
**Hard** 24.50 32.25 18.50 21.50 49.25 51.25 56.75 42.75
Table 14: The accuracy performance of models ChatGLM-6B, ERNIE-Bot-turbo, and ChatGLM-Turbo in different
difficulty levels of Num2words and Words2Num tasks.
14289
-----
**Zero-shot** **Zero-shot CoT** **Few-shot** **Few-shot CoT**
**ZH** **EN** **ZH** **EN** **ZH** **EN** **ZH** **EN**
**Llama2-7B Num2Words**
**Easy** **13.50** **20.00** 8.75 **13.25** **25.00** **57.25** **10.00** **34.50**
**Medium** 12.50 18.50 **9.50** 12.75 21.50 44.75 7.50 23.75
**Hard** 0.50 6.25 1.00 4.00 5.00 28.25 1.25 17.00
**Llama2-7B Words2Num**
**Easy** **32.00** 17.25 **20.50** 24.00 21.25 27.00 **21.50** **39.25**
**Medium** 27.00 30.50 14.00 **28.25** **39.00** **68.50** 17.00 29.75
**Hard** 4.75 8.50 2.50 12.25 10.75 43.25 15.75 27.75
**Llama2-13B Num2Words**
**Easy** 18.50 **43.75** 6.75 **17.00** **45.75** 40.75 2.75 20.25
**Medium** **19.50** 37.00 **7.75** 15.00 33.00 **52.75** **4.25** 17.50
**Hard** 5.50 14.75 1.00 4.50 12.25 43.75 1.50 14.75
**Llama2-13B Words2Num**
**Easy** 25.00 **22.25** 12.75 **14.75** 35.00 44.75 **48.50** **33.75**
**Medium** **27.00** 20.50 **13.25** 10.50 **62.50** **76.50** 33.75 31.50
**Hard** 9.75 13.00 2.25 3.25 20.75 47.50 20.50 27.75
**Llama2-70B Num2Words**
**Easy** **37.25** 36.75 **10.50** **26.25** **36.75** 45.00 **9.75** 45.25
**Medium** 32.75 **45.50** 8.25 17.25 33.00 **54.75** 6.50 **48.50**
**Hard** 15.75 32.00 6.75 10.75 16.50 35.75 4.25 12.00
**Llama2-70B Words2Num**
**Easy** **41.00** 60.75 **32.25** 11.50 **45.00** 57.75 **28.75** 8.50
**Medium** 38.00 **62.75** 27.00 **21.00** 38.75 **67.75** 19.75 **23.75**
**Hard** 15.75 37.25 3.50 16.25 16.75 35.25 3.00 20.50
Table 15: The accuracy performance of models Llama2-7B, Llama2-13B and Llama2-70B in different difficulty
levels of Num2words and Words2Num tasks.
14290
-----
| [
"Ancheng, Xu",
"Vivek, Srikumar",
"Minghuan, Tan",
"Lei, Wang",
"Min, Yang",
"Lun-Wei, Ku",
"Andre, Martins",
"Ruifeng, Xu"
] | 2024-08-01T00:00:00 | ACL 2024 Findings | false | 0 | 0 | null | https://aclanthology.org/2024.findings-acl.848 | https://arxiv.org/abs/2406.02864 | https://www.semanticscholar.org/paper/5fda196ca0039089181e7d8e22cc500cbea4c877 |
Naproche-ZF: Lessons learned from implementing a new natural-language-oriented theorem prover | N/A | null | [
"Adrian De, Lon"
] | 2024-09-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
|
Neural Architectures for Tactic-Based Automated Theorem Proving | N/A | null | # Neural Architectures for Tactic-Based Automated Theorem Proving
Christian Szegedy, Sarah M. Loos, Aditya Paliwal, Markus Rabe, and Kshitij
Bansal
Google Research
## 1 Introduction
In this talk, we compare various neural network architectures for tactic-based neurally guided
proof search for higher order logic in HOL-Light [4] interactive theorem prover. It was first
demonstrated in the TacticToe [3] prover that learned guidence for tactic based interactive
proof search could yield superior results for automated higher order theorem proving compared
to hammers based on first order logic based ATPs [5]. Here we focus on a deep learning based
solution. We will be addressing two kinds of tasks: the selection of a tactic out of 41 possible
tactics and the ranking of tactic arguments from all the usable tactic arguments from a theorem
database.
Our experiments are conducted on the HOList [2] benchmark, which comprises a standardized set of theorems sorted such that later theorems can be proved solely by earlier theorems
and definitions in the database. Our main metric is the number of proofs successfully closed
on a held out set of theorems. In our imitation learning setup, we train models using our
database of human proofs, logged from the HOL-Light libraries. We also experiment with a
reinforcement learning setup, allowing the model to control the proof search with tactic and
tactic argument selection, while simultaneously training on human proofs. Finally, we perform
reinforcement learning without imitation learning (i.e. “from zero” human proofs); in this setting we additionally measure the cumulative number of proofs closed over a fixed number of
proof attempts.
## 2 Architectures Tested
Our theorem prover is based on a simple breadth first search based backward prover augmented by a neural network for premise selection and tactic prediction. The neural network is
a two-tower architecture without weight sharing. The two towers produce a fixed dimensional
embedding, one for the goal and one for the premise. The two embeddings are combined by a
cheap three-layer network to produce a ranking score for the premise. This architectural choice
is essential for fast ranking of a large number of premises in relatively short time, since the
embeddings for the potential premises can be shared. However, we have a lot of freedom for
choosing the architecture for the individual embedding towers that incur the most computational cost. Here we worked with two types of networks: those that consider the input as a
sequence of tokens and those that take a graph representation of the formulas. In the latter
case we also employ subexpression sharing.
Our experiments focus on various base neural network architectures which all share the
common feature that they produce a feature vector for each input token. In order to use the
produced features efficiently for ranking the premises, this set (or sequence) of output feature
vectors needs to be reduced to a single, relatively short, fixed dimensional feature vector that
-----
Neural Architectures for Tactic-Based ATPs Szegedy, Loos, Paliwal, Rabe and Bansal
can be used in a nearest neighbor look up. The choice of this reduction method is also explored
in detail here. For the base architectures, we have evaluated the following variants:
_• simple convolutional networks,_
_• dilated convolutional networks (a.k.a WaveNets [7]),_
_• transformer network architectures [8],_
_• graph neural networks (GNNs [6]),_
_• graph attention networks [9]._
We additionally evaluate a variety of pooling mechanisms:
_• maximum pooling,_
_• average pooling,_
_• expanding the dimension of output features before pooling,_
_• attention based pooling_
_• and transformer layers (with self-attention)._
## 3 Evaluation Methodologies and Metrics
These architectures were trained with imitation learning (learning from human proof-logs) and
the best models were tested in the context of the reinforcement learning from scratch without
utilizing any of the human proofs (in the context of DeepHOL-Zero [1]). We also report several
proxy metrics for tactic selection and premise ranking and their evolution during the training
process. We study which metrics are most indicative of the final end-to-end prover performance.
## References
[1] Kshitij Bansal, Sarah M Loos, Markus N Rabe, and Christian Szegedy. Learning to reason in large
theories without imitation. arXiv preprint arXiv:1905.10501, 2019.
[2] Kshitij Bansal, Sarah M Loos, Markus N Rabe, Christian Szegedy, and Stewart Wilcox. Holist:
An environment for machine learning of higher-order theorem proving. ICML 2019. International
_Conference on Machine Learning, 2019._
[3] Thibault Gauthier, Cezary Kaliszyk, and Josef Urban. Tactictoe: Learning to reason with hol4 tactics. In LPAR-21. 21st International Conference on Logic for Programming, Artificial Intelligence
_and Reasoning, volume 46, pages 125–143, 2017._
[4] John Harrison. HOL Light: A tutorial introduction. In FMCAD, pages 265–269, 1996.
[5] Cezary Kaliszyk and Josef Urban. Learning-assisted automated reasoning with flyspeck. Journal
_of Automated Reasoning, 53(2):173–213, 2014._
[6] Aditya Paliwal, Sarah Loos, Markus Rabe, Kshitij Bansal, and Christian Szegedy. Graph representations for higher-order logic and theorem proving. arXiv preprint arXiv:1905.10006, 2019.
[7] A¨aron Van Den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves,
Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw
audio. CoRR abs/1609.03499, 2016.
-----
Neural Architectures for Tactic-Based ATPs Szegedy, Loos, Paliwal, Rabe and Bansal
[8] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,
Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information
_processing systems, pages 5998–6008, 2017._
[9] Petar Veliˇckovi´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua
Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
-----
| [
"Aditya, Paliwal",
"Sarah M, Loos",
"Markus, Rabe",
"Christian, Szegedy"
] | 2020-09-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
Neural-Symbolic Collaborative Distillation: Advancing Small Language Models for Complex Reasoning Tasks | In this paper, we propose $\textbf{Ne}$ural-$\textbf{Sy}$mbolic $\textbf{C}$ollaborative $\textbf{D}$istillation ($\textbf{NesyCD}$), a novel knowledge distillation method for learning the complex reasoning abilities of Large Language Models (LLMs, e.g., \textgreater 13B). We argue that complex reasoning tasks are difficult for Small Language Models (SLMs, e.g., $\leq$ 7B), as these tasks demand not only general cognitive abilities but also specialized knowledge, which is often sparse and difficult for these neural-based SLMs to effectively capture. Therefore, NesyCD distills the general capabilities and specialized knowledge in LLMs using different manners. On the one hand, we distill only general abilities from teacher LLMs into the student SLMs of parameterized neural networks. On the other hand, for the specialized abilities and uncommon knowledge of a complex reasoning task, we employ a symbolic knowledge distillation approach to obtain and store the specialized knowledge within a symbolic knowledge base (KB). By decoupling general and specialized capabilities, the proposed NesyCD can achieve superior performance cost-effectively, utilizing smaller models and blending parameterized neural networks with symbolic KB. Moreover, the specialized KB generalizes well and is comprehended and manipulated by humans. Our experiments show that NesyCD significantly boosts SLMs' complex reasoning performance on in-domain (BBH, GSM8K) and out-of-domain (AGIEval, ARC) datasets. Notably, our approach enabled the LLaMA3-8B and Qwen2-7B to surpass GPT-3.5-turbo in performance and come close to matching LLaMA3-70B, despite the latter having nine times more parameters. Our code will be available at https://github.com/Xnhyacinth/NesyCD. | The proposed NesyCD can achieve superior performance cost-effectively, utilizing smaller models and blending parameterized neural networks with symbolic KB, which generalizes well and is comprehended and manipulated by humans. | [
"Jun, Zhao",
"Huanxuan, Liao",
"Yao, Xu",
"Yuanzhe, Zhang",
"Shizhu, He",
"Kang, Liu"
] | 2024-09-20T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.13203 | https://arxiv.org/abs/2409.13203 | https://www.semanticscholar.org/paper/d2ea21e36ea31d9bd880b161b91986c94a46af3f |
|
Neuro-Symbolic Data Generation for Math Reasoning | A critical question about Large Language Models (LLMs) is whether their apparent deficiency in mathematical reasoning is inherent, or merely a result of insufficient exposure to high-quality mathematical data. To explore this, we developed an automated method for generating high-quality, supervised mathematical datasets. The method carefully mutates existing math problems, ensuring both diversity and validity of the newly generated problems. This is achieved by a neuro-symbolic data generation framework combining the intuitive informalization strengths of LLMs, and the precise symbolic reasoning of math solvers along with projected Markov chain Monte Carlo sampling in the highly-irregular symbolic space.Empirical experiments demonstrate the high quality of data generated by the proposed method, and that the LLMs, specifically LLaMA-2 and Mistral, when realigned with the generated data, surpass their state-of-the-art counterparts. | null | null | null | null | NeurIPS 2024 | true | 0 | 0 | null | https://neurips.cc/virtual/2024/poster/96151 | null | null |
Not All LLM Reasoners Are Created Equal | We study the depth of grade-school math (GSM) problem-solving capabilities of LLMs. To this end, we evaluate their performance on pairs of existing math word problems together so that the answer to the second problem depends on correctly answering the first problem. Our findings reveal a significant reasoning gap in most LLMs, that is performance difference between solving the compositional pairs and solving each question independently. This gap is more pronounced in smaller, more cost-efficient, and math-specialized models. Moreover, instruction-tuning recipes and code generation have varying effects across LLM sizes, while finetuning on GSM can lead to task overfitting. Our analysis indicates that large reasoning gaps are not because of test-set leakage, but due to distraction from additional context and poor second-hop reasoning. Overall, LLMs exhibit systematic differences in their reasoning abilities, despite what their performance on standard benchmarks indicates. | Overall, LLMs exhibit systematic differences in their reasoning abilities, despite what their performance on standard benchmarks indicates, with a significant reasoning gap in smaller, more cost-efficient, and math-specialized models. | [
"Rishabh, Agarwal",
"Alessandro, Sordoni",
"Aaron, Courville",
"Arian, Hosseini",
"Daniel, Toyama"
] | 2024-10-02T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2410.01748 | https://arxiv.org/abs/2410.01748 | https://www.semanticscholar.org/paper/cbb8e228326306085b0a940976a83292ec967bdd |
|
Not All Tokens Are What You Need for Pretraining | Previous language model pre-training methods have uniformly applied a next-token prediction loss to all training tokens. Challenging this norm, we posit that ''Not all tokens in a corpus are equally important for language model training''. Our initial analysis examines token-level training dynamics of language model, revealing distinct loss patterns for different tokens. Leveraging these insights, we introduce a new language model called Rho-1. Unlike traditional LMs that learn to predict every next token in a corpus, Rho-1 employs Selective Language Modeling (SLM), which selectively trains on useful tokens that aligned with the desired distribution. This approach involves scoring pretraining tokens using a reference model, and then training the language model with a focused loss on tokens with higher scores. When continual pretraining on 15B OpenWebMath corpus, Rho-1 yields an absolute improvement in few-shot accuracy of up to 30% in 9 math tasks. After fine-tuning, Rho-1-1B and 7B achieved state-of-the-art results of 40.6% and 51.8% on MATH dataset, respectively - matching DeepSeekMath with only 3% of the pretraining tokens. Furthermore, when pretraining on 80B general tokens, Rho-1 achieves 6.8% average enhancement across 15 diverse tasks, increasing both efficiency and performance of the language model pre-training. | null | null | null | null | NeurIPS 2024 | true | 0 | 0 | null | https://neurips.cc/virtual/2024/poster/96931 | null | null |
OccamLLM: Fast and Exact Language Model Arithmetic in a Single Step | Despite significant advancements in text generation and reasoning, Large Language Models (LLMs) still face challenges in accurately performing complex arithmetic operations. To achieve accurate calculations, language model systems often enable LLMs to generate code for arithmetic operations. However, this approach compromises speed and security and, if finetuning is involved, risks the language model losing prior capabilities. We propose a framework that enables exact arithmetic in a single autoregressive step, providing faster, more secure, and more interpretable LLM systems with arithmetic capabilities. We use the hidden states of an LLM to control a symbolic architecture which performs arithmetic. Our implementation using Llama 3 8B Instruct with OccamNet as a symbolic model (OccamLlama) achieves 100% accuracy on single arithmetic operations (+, -, *, /, sin, cos, log, exp, sqrt), outperforming GPT 4o and on par with GPT 4o using a code interpreter. OccamLlama also outperforms both Llama 3 8B Instruct and GPT 3.5 Turbo on multistep reasoning problems involving challenging arithmetic, thus enabling small LLMs to match the arithmetic performance of even much larger models. Our code is available at https://anonymous.4open.science/r/OccamLlama. | This work proposes a framework that enables exact arithmetic in a single autoregressive step, providing faster, more secure, and more interpretable LLM systems with arithmetic capabilities. | ## OccamLLM: Fast and Exact Language Model Arithmetic in a Single Step
**Owen Dugan[∗†]**
Department of Physics
Massachusetts Institute of Technology
Cambridge, MA
```
[email protected]
```
**Charlotte Loh**
Department of EECS
Massachusetts Institute of Technology
Cambridge, MA
```
[email protected]
```
**Donato M. Jiménez-Benetó[∗]**
Department of Physics
Massachusetts Institute of Technology
Cambridge, MA
```
[email protected]
```
**Zhuo Chen**
Department of Physics
Massachusetts Institute of Technology
Cambridge, MA
```
[email protected]
```
**Rumen Dangovski**
Department of EECS
Massachusetts Institute of Technology
Cambridge, MA
```
[email protected]
```
**Marin Soljaˇci´c**
Department of Physics
Massachusetts Institute of Technology
Cambridge, MA
```
[email protected]
```
**Abstract**
Despite significant advancements in text generation and reasoning, Large Language Models (LLMs) still face challenges in accurately performing complex
arithmetic operations. To achieve accurate calculations, language model systems
often enable LLMs to generate code for arithmetic operations. However, this
approach compromises speed and security and, if finetuning is involved, risks the
language model losing prior capabilities. We propose a framework that enables
exact arithmetic in a single autoregressive step, providing faster, more secure, and
more interpretable LLM systems with arithmetic capabilities. We use the hidden
states of an LLM to control a symbolic architecture which performs arithmetic.
Our implementation using Llama 3 8B Instruct with OccamNet as a symbolic
model (OccamLlama) achieves 100% accuracy on single arithmetic operations
(+, −, ×, ÷, sin, cos, log, exp, _[√]), outperforming GPT 4o and on par with GPT_
4o using a code interpreter. OccamLlama also outperforms GPT 4o both with
and without a code interpreter on mathematical problem solving benchmarks involving challenging arithmetic, thus enabling small LLMs to match the arithmetic
performance of even much larger models. We will make our code public shortly.
**1** **Introduction**
Since the release of GPT 3, Large Language Models (LLMs) have dramatically improved in their
text generation and reasoning capabilities. This has enabled success in downstream applications
including machine translation [1, 2], sentiment analysis [3, 4, 5], and interactive dialogue generation
_∗Equal contribution_
_†Corresponding author_
Preprint. Under review.
-----
Table 1: OccamLLM is the only approach to improving the arithmetic capabilities of a pretrained
LLM which 1) enables single-pass arithmetic, 2) does not risk catastrophic forgetting from finetuning,
3) does not require arbitrary code execution, and 4) provides an interpretable process.
No Catastrophic No Arbitrary
Single Pass Forgetting Code Execution Interpretable
Fine Tuning ✓ ✗ ✓ ✗
Tool Use ✗ ✗ ✗ ✓
**OccamLLM** ✓ ✓ ✓ ✓
[6], with language models even surpassing human experts on some academic benchmarks that require
reading comprehension, reasoning and coding [7]. However even industry-leading LLMs such as
GPT 4 cannot reach 100% accuracy on simple arithmetic [8], limiting their ability to perform basic
mathematical tasks. This hinders potential applications of LLMs ranging from chat-bot physics tutors
to LLM-powered automated research that could accelerate scientific discovery and technological
innovation. The poor arithmetic performance of LLMs is particularly acute for small LLM agents,
limiting their usage in smartphone or in multi-agent applications.
To enable accurate calculations, language model systems often resort to running code written by a
LLM. However, this comes at the cost of speed; the model must perform multiple autoregressive steps
to generate code that performs the appropriate arithmetic operations. This increased decoding time
may negatively impact applications such as multi-agent workflows [9, 10] where speed is essential.
At the same time, code-based LLM arithmetic mechanisms may increase system vulnerability by
providing a mechanism for arbitrary LLM-generated code execution.
We propose an alternative, a framework which enables exact and interpretable LLM arithmetic in a
_single autoregressive step, providing faster and more secure arithmetic capabilities in LLM systems._
Our framework uses the hidden states of a LLM to control a symbolic architecture that performs
arithmetic. Although our method can in principle work with any symbolic architecture, in this paper
we use an interpretable neurosymbolic architecture known as OccamNet [11, 12] because of its
interpretability and scalability. Therefore, we term our method OccamLLM, or OccamLlama when
using a Llama model as the LLM.
Our core contributions are as follows:
1. We develop a framework for exact and interpretable LLM arithmetic in a single autoregressive step without catastrophic forgetting [13] or vulnerability from code generation. We
explore how to train OccamLlama, including data generation, decoder architecture, and loss
function.
2. We benchmark OccamLlama on arithmetic tasks, demonstrating that OccamLlama achieves
100% accuracy on arbitrary single arithmetic operations (+, −, ×, ÷, sin, cos, log, exp, _[√]),_
more than double the accuracy of GPT 4o. OccamLlama performs slightly better than GPT
4o with Code Interpreter while answering in on average more than 50x fewer generation
tokens.
3. We benchmark on mathematical problem solving tasks, showing that OccamLlama can
sustain long generations. OccamLlama outperforms both GPT 4o and GPT 4o with code
interpreter on benchmarks involving challenging arithmetic.
**2** **Related Work**
**Arithmetic Performance in LLMs.** Prior research has trained models on synthetic data, finding that
such models can achieve near-perfect accuracy on addition [14, 15], subtraction [15], multiplication
[14, 15], division [15], and raising to powers [15]. These prior models have been tested only on
arithmetic datasets, so their generality has not been assessed. Other work focuses on finetuning
LLMs which are already trained on large amounts of general-purpose data on math datasets. Both
full-parameter [16, 17] and parameter-efficient (PEFT) [18] finetuning strategies have been applied.
However, finetuning on a single dataset carries the risk of catastrophic forgetting of an LLM’s
previously acquired linguistic skills [19]. While PEFT techniques have been shown to partially
mitigate this effect, this area is still one of active research [20, 21].
-----
Figure 1: The OccamLLM system. For each autoregressive step, the language model hidden states
for that token are fed into a decoder block which assigns weights to OccamNet. The system feeds the
most recent numbers from the text into OccamNet, which then evaluates the sparse function specified
by its weights. The decoder then determines whether to use the LLM output or the OccamNet output.
**LLMs with Tool Use.** Another thrust of prior research has focused on LLM tool use, which
we believe is most directly related to our methods. Calc-X [22] introduces a technique to offload
arithmetic computations to an external tool like a calculator. The authors curated a large dataset of
arithmetic problems and trained a language model that learns to interact with a calculator through the
use of tags to signify the calling of the external tool. Several other works [23, 24, 25] follow a similar
idea, using crowd workers to annotate tool calls and using this data to train language models to
interact with external tools such as a web searching tool, a calculator, or a translation system. These
approaches can be prohibitively expensive in annotation costs; Toolformer [26] overcomes this cost
by using in-context learning and a language model to generate datasets containing the necessary ‘API’
tool calls via a self-supervised loss. Further, the above methods all require finetuning of the LLM,
placing the LLM at risk of losing generality and its original language modelling abilities through
catastrophic forgetting. In contrast, our approach does not involve training the language model. Our
‘external tool’ is a symbolic model which can be trained to correctly use the hidden states of the
language model to perform the required arithmetic computations. The language model is kept frozen
throughout this process. Unlike other tool-calling approaches, where the cost of data annotation
to train for tool-calling interaction can be prohibitively expensive, in our method, each task only
requires manually annotating tens of prompts, a high annotation efficiency. Other prior methods
leverage prompt engineering to improve arithmetic performance of LLMs; this is done either through
chain-of-thought [27], or to encourage LLMs to use a code interpreter [28, 29, 30]. Contrary to these
methods, our approach does not use code kernels; this provides several advantages: 1) it enables tool
use without expending compute on autoregressive steps for token generation, and 2) it avoids running
potentially incorrect or malicious code generated by language models.
**3** **Methods**
**3.1** **OccamLLM: Combining a Language Model with a Symbolic Model**
In short, the OccamLLM system combines a language model with a symbolic model, namely
OccamNet, that can perform arithmetic operations like addition and subtraction. For each token, the
corresponding internal hidden states of the language model are fed into a decoder module which
initializes the symbolic model such that it executes the operation required by the task described in the
input text. A string parser feeds the necessary numbers from the text into OccamNet, which evaluates
the desired expression. Finally, a decoder determines whether to use the language model output or
the OccamNet output for generating the next token.
-----
Figure 2: a) A schematic of the OccamNet architecture, with softmax layers in grey and their outputs in
red. b) A Directed Acyclic Graph (DAG) (with edges not connected to the output removed for clarity)
formed by sampling from OccamNet. This DAG corresponds to the function sin(sin(x1) exp(x0)).
_·_
Modified from [11].
In the example shown in Figure 1, a decoder determines how to initialize OccamNet from the language
model hidden states, choosing to have OccamNet perform addition. The text parser then feeds the
numbers 6 and 7 into OccamNet, which adds the numbers, returning 13. Finally, a decoder decides
to use the OccamNet output instead of the language model output, so the system outputs 13. The
new sentence, including the 13, is tokenized and fed back to the LLM to continue autoregressive
generation. The language model might later generate “Since she ate two apples, she now has,” at
which point the switch will again trigger OccamNet, this time implementing 13 − 2 and returning 11.
In the subsections below, we describe the OccamLLM system which from our experiments we find to
be most performant, even oupterforming GPT 4o in several benchmarks. For an analysis of alternate
architectures and losses, see Appendix C.
**3.1.1** **OccamNet**
OccamNet is a symbolic architecture that provides an interpretable way of parametrizing probability
distributions over a space of functions [11]. We leave a more thorough explanation of OccamNet to
[11] and Appendix D, describing only the relevant components here.
An l-layer OccamNet with primitives P and n inputs is an architecture that defines a probability
distribution over the space of functions representable as compositions of the primitives in P up to
depth l. For example, a two-layer OccamNet with primitives P = {sin, cos} and one input represents
a probability distribution over the set
_F = {x, sin(x), cos(x), sin(sin(x)), sin(cos(x)), cos(sin(x)), sin(sin(x))}._
OccamNet has the structure of an n-input, l-internal-activation-layer multilayer perceptron with the
biases removed and the activations in each layer replaced by the primitives P, as shown in Figure 2a.
Activation functions may have multiple inputs. We rename the linear layers softmax layers, denote
the weights of the ith softmax layer as W[(][i][)], and denote the combined weights of OccamNet as W.
We define the probability distribution which OccamNet parametrizes by specifying how to sample
from it. For each softmax layer output node (shown in red in Figure 2), we select a single connection
to that node from a softmax layer input node by sampling from the distribution given by the softmax
of the weights of the connections to the different inputs. This process produces a directed acyclic
graph (DAG) defining a computational path through the OccamNet activations, such as the one shown
in Figure 2b. In this way, each DAG represents a function on the inputs of OccamNet.
To ensure that OccamNet can represent all possible compositions of functions in P up to depth l,
we include the following modifications to the OccamNet architecture: 1) for each softmax layer,
we concatenate its inputs with the previous softmax layer’s inputs to enable the representation of
functions with fewer than l compositions, and 2) we repeat primitives in the ith activation layer A[l][−][i]
times, where A is the maximum number of inputs of any of the primitives, to ensure that a sufficient
number of each primitive is available at each layer. We refer to this modified architecture as complete
_OccamNet as it can represent the complete set of desired functions. The resulting architecture is_
shown in Figure 6 in the appendix.
In principle, OccamLLM can work with any symbolic model, i.e., any model that can parameterize a
set of symbolic functions or a distribution over such functions. We choose OccamNet as opposed
to, for example, a transformer [31] or recurrent neural network [32], for two reasons: 1) OccamNet
-----
is interpretable, which we hypothesize makes controlling OccamNet an easier task for a decoder to
learn, and 2) OccamNet is parallelizable over multiple samples, allowing for scalable training.
**3.1.2** **OccamLLM Decoder**
The OccamLLM decoder takes the hidden states of a language model and outputs an initialization for
OccamNet. This gives the LLM control over which function to apply on the inputs. The decoder acts
on each input token separately, producing a different OccamNet initialization for each. Therefore,
the arithmetic operations predicted may change along an input sequence, allowing OccamNet’s use
for different computations in a single multi-token generation. This is crucial in multi-step reasoning
scenarios where OccamNet is employed several times for different purposes.
Many decoder architectures are possible. We choose to parameterize the weights of each softmax
layer of OccamNet independently, as (W[(1)], . . ., W[(][l][)]) = (Decoder1(h), . . ., Decoderl(h)), where
**h are the hidden states of the language model. We choose**
Decoderi(h) = MLPi _wi,jhj_ + W[∗][(][i][)] (1)
_j=1_
X
where hj are the hidden states of the jth layer of the language model, wi,j are trainable weights, MLPi
are two-layer multilayer perceptrons (MLPs), and W[∗][(][i][)] are untrained weights which initialize all
functions to have approximately equal probabilities according to the initialization scheme described
in [11] and explained in Appendix D.4.
**3.1.3** **OccamLLM Switch**
We similarly train a decoder for a switch that, for each input token, is fed the hidden states of the
language model and selects whether to use the output of OccamNet or the output of the language
model. The decoder outputs a single number from 0 to 1, where all numbers less than or equal to 0.5
correspond to using the output of the language model and all numbers greater than 0.5 correspond to
using the output of OccamNet. We choose the following architecture for the switch decoder:
MLPswitch
_._ (2)
Decoderswitch(h) = sigmoid
**3.2** **Data Generation**
_wswitch,jhj_
_j=1_
X
We create synthetic datasets to train the OccamLLM decoders, which contain instruction prompts for
diverse arithmetic tasks. To generate datasets of arbitrary size, we create prompts with placeholders
for numbers. Each prompt includes a question with number placeholders, the sampling value range
for each number, and a function that computes the answer to the query given the sampled input
numbers. The prompts fall into two main categories: purely arithmetic tasks and reasoning problems.
Purely arithmetic prompts are formed by expressions including only symbols, without any natural
language added, such as “3 + 85 =.” We create prompts using the following operations: +(·, ·),
( _,_ ), ( _,_ ), ( _,_ ), sqrt( ), power( _,_ ), loge( ), exp( ), sin( ), and cos( ).
_−_ _·_ _·_ _×_ _·_ _·_ _÷_ _·_ _·_ _·_ _·_ _·_ _·_ _·_ _·_ _·_
We also include word problems that require one or two reasoning steps. We generated 150 single
step word problems and 40 multi-step reasoning problems which we modified from examples in the
MultiArith training dataset [33].
**3.2.1** **OccamNet Decoder Training Data**
For training the decoder that controls the weights of OccamNet, we created two types of examples,
single queries and concatenated queries. For single queries, we select a single prompt from the
problems generated as discussed in Section 3.2. We use the Llama 3 8B Instruct chat template and
fill in the query as the user input and the result as the assistant response, prepending “Answer = ” to
the later in randomly selected samples (see Appendix A.1 for further details). For the concatenated
queries of examples, we select a random number of prompts and concatenate the query-response pairs
without using the Llama 3 8B Instruct chat template. The OccamNet decoder is trained to predict only
-----
the results of the last query in the sequence. This strategy helps OccamLLM to learn which operation
to perform without becoming confused by earlier text, which is useful for continuous generation. To
create the training dataset, each example is sampled by first randomly selecting whether to create
a single or concatenated query, then randomly selecting the type(s) of prompt(s) used, and finally
randomly sampling the input values from the range corresponding to each selected prompt.
**3.2.2** **OccamLLM Switch Training Data**
To train the switch, we generate examples of possible LLM outputs for given input expressions, and
label the outputs with sequences of 0s or 1s corresponding to whether the language model or the
OccamNet output should be used for the next token. Some examples correspond to the prompts
described in Section 3.2. For such examples, the LLM output is set to “The answer is” or “Answer = ”
and the label sequence is all 0s with a 1 at the last token to indicate the system should use OccamNet
only to compute the answer. We also manually created and labeled several other examples for diverse
scenarios to explicitly teach the system in which cases it should or should not use OccamNet.
To create the training dataset, we concatenate a random number of the above user input - assistant
output pairs in a conversational fashion, using the Llama 3 8B Instruct chat template.
**3.3** **OccamLLM Training**
We train the OccamLLM decoder and the switch separately, as they do not share weights. In all cases,
the weights of the LLM are kept frozen. In the first step, we train the system to predict the answer to
examples generated by the method explained in Section 3.2.1. The OccamNet decoder processes the
hidden states corresponding to the last token of the response and sets the weights of OccamNet such
that the correct arithmetic expression is sampled. In this step, we use a rescaled REINFORCE [34]
loss, which can also be interpreted as a Monte-Carlo estimate of the cross-entropy loss (see Appendix
C.2):
_f_ _pW_ _[R][(][f]_ [(][x][)][, y][) log][ p][W][ [][f] []]
_L(x, y; W_ ) = − P _∼_ _f_ _∼pW_ _[R][(][f]_ [(][x][)][, y][)] _,_ (3)
where pW [f ] ON(f ; DecoderW (h(x))) is the probability distribution represented by the decoder-P
_≡_
initialized OccamNet.
Minimizing this loss steers the decoder towards assigning higher probabilities to the functions that
maximize the reward R(f (x), y), which measures the similarity between the correct answer y and
the prediction of OccamNet f (x). We find setting R(f (x), y) = 1 if f (x) = y, and 0 otherwise,
most effective. We discuss the OccamNet loss in more detail in Appendix C.
The second step involves training the decoder to route the outputs to OccamNet when needed. We train
the switch decoder alone, freezing the weights of the OccamNet decoder of the previous step and minimizing the binary cross-entropy loss between the switch output and the desired output for each token.
The OccamLLM switch decoder learns when to route the output to OccamNet in diverse contexts.
**4** **Experiments**
For all OccamLLM results, we use Llama 3 8B Instruct [35] as the underlying language model. As
such, we call our model OccamLlama. We use a 1 layer Complete OccamNet with primitives
= +( _,_ ), ( _,_ ), ( _,_ ), ( _,_ ), sqrt( ), power( _,_ ), loge( ), exp( ), sin( ), cos( ) _._
_P_ _{_ _·_ _·_ _−_ _·_ _·_ _×_ _·_ _·_ _÷_ _·_ _·_ _·_ _·_ _·_ _·_ _·_ _·_ _·_ _}_
This single layer OccamNet can be invoked by the LLM several times during generation to perform
complex arithmetic operations accurately. To use the trained OccamLlama for inference, we sample
the highest probability function from OccamNet as described in Appendix D.3.
We benchmark our methods against unmodified Llama 2 7B Chat (Llama 2 7B) [36], unmodified
Llama 3 8B Instruct (Llama 3 8B) [35], gpt-3.5-turbo-0125 (GPT 3.5 Turbo), gpt-4o-2024-05-13
(GPT 4o) [37], and gpt-4o-2024-05-13 with Code Interpreter (GPT 4o + Code). To reduce costs, for
GPT 4o with Code Interpreter, we test a random subset of 200 datapoints for each dataset.
To determine if a model output is correct, we parse all numbers in the model output and if one of
them “matches” the correct answer, we determine that the result is correct. We mark each correct
-----
Table 2: Accuracy on arithmetic tasks, in percentages. OccamLlama uses Llama 3 8B Instruct as the
underlying language model. Higher is better. Bold indicates best performance for each row.
OccamLlama Llama 2 Llama 3 GPT 3.5 GPT 4o GPT 4o
7B Chat 8b Instruct Turbo Code
Addition **100.0** **0.0** 19.2 1.2 44.9 1.6 65.2 1.5 95.7 0.6 **100.0** **0.0**
_±_ _±_ _±_ _±_ _±_ _±_
Subtraction **100.0** **0.0** 8.7 0.9 34.4 1.5 59.8 1.6 85.6 1.1 99.5 0.5
_±_ _±_ _±_ _±_ _±_ _±_
Multiplication **100.0** **0.0** 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 99.0 0.7
_±_ _±_ _±_ _±_ _±_ _±_
Division **100.0** **0.0** 2.8 0.5 35.3 1.5 10.7 1.0 38.6 1.5 **100.0** **0.0**
_±_ _±_ _±_ _±_ _±_ _±_
Square Root **100.0** **0.0** 0.0 0.0 0.0 0.0 0.9 0.3 18.6 1.2 **100.0** **0.0**
_±_ _±_ _±_ _±_ _±_ _±_
Exponential **100.0** **0.0** 0.3 0.2 3.1 0.5 12.5 1.0 23.2 1.3 **100.0** **0.0**
_±_ _±_ _±_ _±_ _±_ _±_
Logarithm **100.0** **0.0** 0.1 0.1 0.0 0.0 17.1 1.2 21.3 1.3 **100.0** **0.0**
_±_ _±_ _±_ _±_ _±_ _±_
Sine **100.0** **0.0** 7.6 0.8 7.0 0.8 13.4 1.1 39.3 1.5 **100.0** **0.0**
_±_ _±_ _±_ _±_ _±_ _±_
Cosine **100.0** **0.0** 0.8 0.3 1.5 0.4 6.7 0.8 32.8 1.5 **100.0** **0.0**
_±_ _±_ _±_ _±_ _±_ _±_
AVERAGE **100.0** **0.0** 4.4 0.2 14.0 0.4 20.7 0.4 39.5 0.5 99.8 0.1
_±_ _±_ _±_ _±_ _±_ _±_
result as 100% accuracy and each incorrect result as 0% accuracy. For each model on each dataset,
we report the mean accuracy and the standard error of the mean. To determine if a number matches
the result, we first determine how many places after the decimal d the number should be accurate
to. If the number is an integer, we set d to 2. Otherwise, we set d to the number of places after the
decimal in the model output, clipped between 2 and 5. Finally we state that a number “matches” the
result if the number and the result differ by less than 10[−][d]. We present further experiment details,
including hyperparameters, prompts, etc. in Appendix A.
**4.1** **Simple Arithmetic Problems**
To evaluate OccamLlama and the baselines on purely arithmetic expressions, we create several
synthetic datasets. For each of the operations in {+, −, ×, ÷}, the inputs are random 7-digit positive
or negative integers. For, the inputs are random 7-digit positive integers. For the logarithms, the
_[√]_
examples are log-uniformly sampled in the interval (10[−][10], 10[10]); for the exponentials, they are
uniformly sampled in the interval (−10, 10), and for sines and cosines they are uniformly sampled in
the interval (−2π, 2π).
The results of these evaluations are shown in Table 2. More detailed results, including relative error
and results for 3 and 5 digit arithmetic, are shown in Appendix A. OccamLlama has 100.0 ± 0.0%
accuracy on all tasks, missing 0 out of 9000 problems. On the other hand, we tested GPT 4o with
Code Interpreter on fewer problems to save cost, and it missed 3 out of the 1800 problems it faced,
achieving an accuracy of 99.8 ± 0.1%.
Furthermore, GPT 4o with Code Interpreter generates on average more than 54 tokens to answer
these problems, whereas our model uses OccamNet on the first forward pass. This means that, barring
advanced decoding techniques such as speculative decoding [38], GPT 4o would need to be more
than 50x faster than OccamLlama per forward pass to be comparable in answer generation speed on
these tasks.
Table 2 demonstrates that arithmetic with LLMs is still challenging; state-of-the-art proprietary
language models like GPT 4o achieve less than 40% accuracy on 7-digit division and fail to perform
any 7-digit multiplications correctly. Open source LLMs fall farther behind, with Llama 3 8B
achieving below 50% on relatively simple tasks such as 7-digit addition.
**4.2** **Mathematical Problem Solving**
To test the performance of OccamLlama on more general mathematical problem solving tasks, we
evaluate our method and baselines on the following six benchmarks: AddSub [39], GSM8K [40],
MultiArith [33], MATH401 [8], Single Eq [41], and SVAMP [42]. All but MATH401 are word
problems requiring longer generation and a mix of reasoning and arithmetic capabilities. MATH401
also includes multistep arithmetic problems which require more than one call to OccamLlama.
-----
Figure 3: Accuracy of OccamLlama and baselines on mathematical problem solving tasks. Higher
OccamLlama Llama 2 7B Llama 3 8B GPT 3.5 Turbo GPT 4o GPT 4o +Code
100
80
60
40
Accuracy (%)
20
0
AddSub GSM8K MultiArith MultiArith Float MATH401 Single Eq SVAMP Average
is better. OccamLlama achieves accuracy comparable to Llama 3 8B on benchmarks with simple
arithmetic, higher accuracy than GPT 4o and GPT 4o + Code on on tasks with challenging arithmetic,
and accuracy above Llama 3 8B and similar to GPT 3.5 Turbo on average.
Because many of the arithmetic operations required in these datasets are relatively simple, we
also create MultiArith Float, a modification of MultiArith in which we select problems which are
arithmetically more challenging, while requiring similar levels of reasoning. To this end, we select
prompts having input numbers that can be replaced with floats. For instance, 3.5 feet or $39.95
are reasonable but 3.5 people is not. Furthermore, we sample input values from ranges larger than
those appearing in the MultiArith dataset, in cases where it is reasonable. Float operations and larger
additions and multiplications are more difficult for the baseline LLMs but do not make a difference for
OccamLLM, so this dataset is particularly useful to show the advantages of the system we propose.
Figure 3 shows the results of these evaluations. More detailed results are shown in Appendix A.
On MultiArith Float and MATH401, two datasets requiring challenging arithmetic, OccamLlama
outperforms not only Llama 3 8B but also GPT 4o and GPT 4o + Code. At the same time, most
other datasets in this benchmark do not involve challenging arithmetic, meaning that Llama 3 8B
is well suited to solve these tasks without assistance; most of the difficulty of these tasks lies in the
reasoning rather than in the arithmetic computations. This is further supported by the fact that GPT
4o with Code Interpreter never substantially outperforms and sometimes underperforms GPT 4o. As
such, it is remarkable that OccamLlama can achieve comparable accuracy to Llama 3 8B even when
it is trained on very different data and evaluated on tasks without challenging arithmetic. The only
datasets for which OccamLlama performs substantially worse than Llama 3 8B are GSM8K and
Single Eq, but we believe this results from an imperfect OccamLlama switch, likely stemming from
text which is outside of the switch training distribution (see Section 4.3).
In Figure 4, we show example generations from OccamLlama for both arithmetic and reasoning tasks.
These generations demonstrate how the OccamLlama switch learns to balance OccamNet outputs
with LLM outputs, effectively distributing the work between a reasoner (Llama) and a calculator
(OccamNet). Because the language model is unaware of the OccamLlama system, its generations
behave as if it possesses an interior calculator even though it is actually using a tool. In this way, we
combine the benefits of a language model finetuned on arithmetic with the benefits of a language
model finetuned to use code for arithmetic, all without any finetuning.
**4.3** **Limitations**
In our experiments, we use a single-layer OccamNet as the symbolic network, enabling evaluation
of single-operation arithmetic problems. This sometimes poses a challenge on reasoning problems
when the base language model generates compound expressions requiring more than one operation to
evaluate, such as 3+5+7 =. A single-layer OccamNet cannot evaluate these expressions. We attempt
to overcome this by prompting Llama to break down compound expressions into multiple steps, but
we find it difficult to coerce Llama to follow these instructions. Another challenge is that Llama often
generates expressions in fractions or percentages, which also constitute compound expressions that
are not properly handled by the OccamLLM system. Fortunately, we observed that these compound
-----
**Llama3 8b Instruct** **OccamLlama**
Q: 4536895 + 234217
A: The result of the addition is:
4536895 + 234217 = 4771112
Q: Mike had 29 video games but 8 of them weren't
working. If he wanted to sell the working games for
73.57 each, how much money could he earn?
A: Let's break it down step by step:
1. Mike had 29 video games initially.
2. 8 of them weren't working, so he needs to subtract
those from the total:
29 - 8 = 21
3. Mike has 21 working video games.
4. He wants to sell each working game for $73.57.
5. To fnd out how much money he can earn, multiply
the number of working games by the price per game:
21 (working games) x $73.57 (price per game) = $1549.17
Q: 4536895 + 234217
A: Let me calculate that for you!
4536895 + 234217 = 4772112
Q: Mike had 29 video games but 8 of them weren't
working. If he wanted to sell the working games for
73.57 each, how much money could he earn?
A: Let's break it down step by step!
First, let's fnd out how many working video games Mike has:
29 (total games) - 8 (non-working games) = 21
So, Mike has 21 working video games.
Next, let's calculate the total amount of money Mike can earn
by selling these games:
21 (working games) × 73.57 (price per game) = 1544.97
Input to OccamNet Output by OccamNet
Figure 4: Examples from Llama 3 8B Instruct and OccamLlama on (top) an arithmetic problem and
(bottom) a mathematical reasoning problem from the MultiArith Float dataset. In OccamLlama, the
LLM performs reasoning, the switch predicts when to use OccamNet, and OccamNet performs arithmetic operations. OccamNet’s inputs and outputs are highlighted in purple and green, respectively.
expressions were typically simple enough for the LLM to evaluate without OccamNet. Therefore,
in our experiments, we trained the OccamLLM switch to avoid using OccamNet for compound
operations. Future work could explore other solutions such as integrating a two-layer OccamNet as
the symbolic network. We found that these issues are particularly acute in the GSM8K and Single
Eq datasets, where the expressions generated by Llama are not prevalent in the switch training data,
causing it to sometimes incorrectly trigger OccamNet and degrade performance, as discussed more in
Appendix A.5.
Furthermore, we found that the language model sometimes appends further digits to OccamLlama
outputs, defeating the purpose of OccamLlama generations. To address this issue, we append “\n\n.”
to every number computed with OccamNet, emulating the usual behavior of Llama.
These techniques demonstrate a design paradigm of OccamLlama: by tuning the behaviors of
OccamNet and the switch, we can often avoid finetuning the LLM.
**5** **Discussion**
We presented OccamLLM, a system enabling exact and interpretable language model arithmetic in a
single autoregressive step. Our method does not require modifying the weights of the underlying
language model, thereby avoiding risks of catastrophic forgetting. Furthermore, our method avoids
security risks arising from running code generated by a language model.
We benchmarked our method on challenging arithmetic tasks, achieving 100% accuracy while GPT 4o
achieves only 40% performance on average. We also
benchmarked our method on mathematical problem
solving tasks, demonstrating that the OccamLlama
switch can accurately balance the LLM for reasoning
and OccamNet for arithmetic, outperforming even
GPT 4o and GPT 4o with Code Interpreter on tasks
involving challenging arithmetic. Our work could
enable smaller LLMs to be as performant as much
larger LLMs in arithmetic. Moreover, integrating
Table 3: Accuracy on multistep arithmetic.
OccamLlama Llama 3
8b Instruct
One-Step **99.9** **0.1** 78.1 1.3
_±_ _±_
Two-Step **98.2** **0.4** 57.8 1.6
_±_ _±_
Three-Step **96.1** **0.6** 40.2 1.6
_±_ _±_
AVERAGE **98.1** **0.3** 58.7 0.9
_±_ _±_
-----
OccamLLM with larger LLMs like GPT 4o could
further improve their arithmetic abilities without requiring a code interpreter, but we leave this for
future work. Furthermore, at present, OccamLLM may not integrate with more advanced decoding
techniques such as speculative decoding [38, 43]. We hope to explore these avenues in future work.
**6** **Broader Impact**
We believe that, in addition to enabling fast, safe, and interpretable arithmetic, OccamLLM demonstrates a new paradigm for tool use. As a proof of concept for more complex tool use, we further
train OccamLlama with a two layer Complete OccamNet with the primitives
_P = {Addition(·, ·), Subtraction(·, ·), Multiplication(·, ·), Division(·, ·)},_
which enables OccamLlama to perform up to three arithmetic operations (e.g., 2 · 7 + 3/2) in a single
autoregressive step. We find that this two-layer OccamLlama can reach near 100% accuracy, even
when performing three arithmetic operations in a single autoregressive step, as shown in Table 3.
This demonstrates that OccamLLM can be used to perform more complex operations, including
composing multiple different tools.
For future work, we plan to explore integrating other tools beyond calculators through a similar
technique. This is facilitated by the fact that there are no restrictions on OccamNet’s activations;
in principle, tools could be placed inside activations of OccamNet, enabling OccamNet to serve as
a sort of a mixture of experts for tools. While some tools, like querying a search engine, may still
be most effective when integrated into language model systems through language, we believe this
work demonstrates that some tools are more effective when they can be more tightly integrated into
the language model.
**Acknowledgements**
We would like to thank Andrew Ma and Di Luo for their thoughtful discussions.
The authors acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center for
providing HPC resources that have contributed to the research results reported within this paper.
Research was sponsored by the Department of the Air Force Artificial Intelligence Accelerator
and was accomplished under Cooperative Agreement Number FA8750-19-2-1000. The views and
conclusions contained in this document are those of the authors and should not be interpreted as
representing the official policies, either expressed or implied, of the Department of the Air Force or
the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for
Government purposes notwithstanding any copyright notation herein.
This work is also supported in part by the National Science Foundation under Cooperative Agreement
[PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, http:](http://iaifi.org/)
```
//iaifi.org/).
```
**References**
[1] Biao Zhang, Barry Haddow, and Alexandra Birch. Prompting large language model for machine
translation: A case study, 2023.
[2] Wenhao Zhu, Hongyi Liu, Qingxiu Dong, Jingjing Xu, Shujian Huang, Lingpeng Kong, Jiajun
Chen, and Lei Li. Multilingual machine translation with large language models: Empirical
results and analysis, 2023.
[3] Xiang Deng, Vasilisa Bashlovkina, Feng Han, Simon Baumgartner, and Michael Bendersky.
Llms to the moon? reddit market sentiment analysis with large language models. In Companion
_Proceedings of the ACM Web Conference 2023, WWW ’23 Companion, page 1014–1019, New_
York, NY, USA, 2023. Association for Computing Machinery.
[4] Zengzhi Wang, Qiming Xie, Yi Feng, Zixiang Ding, Zinong Yang, and Rui Xia. Is ChatGPT a
Good Sentiment Analyzer? A Preliminary Study. arXiv e-prints, page arXiv:2304.04339, April
2023.
-----
[5] Wenxuan Zhang, Yue Deng, Bing Liu, Sinno Jialin Pan, and Lidong Bing. Sentiment analysis
in the era of large language models: A reality check, 2023.
[6] OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat,
Red Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao,
Mohammad Bavarian, Jeff Belgum, Irwan Bello, ..., and Barret Zoph. Gpt-4 technical report,
2024.
[7] Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut,
Johan Schalkwyk, Andrew M. Dai, Anja Hauth, Katie Millican, David Silver, Melvin Johnson,
Ioannis Antonoglou, Julian Schrittwieser, Amelia Glaese, Jilin Chen, Emily Pitler, Timothy
Lillicrap, Angeliki Lazaridou, Orhan Firat, ..., and Oriol Vinyals. Gemini: A family of highly
capable multimodal models, 2024.
[8] Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, and Songfang Huang. How well do large
language models perform in arithmetic tasks?, 2023.
[9] Joon Sung Park, Joseph C. O’Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and
Michael S. Bernstein. Generative agents: Interactive simulacra of human behavior, 2023.
[10] Taicheng Guo, Xiuying Chen, Yaqi Wang, Ruidi Chang, Shichao Pei, Nitesh V. Chawla, Olaf
Wiest, and Xiangliang Zhang. Large language model based multi-agents: A survey of progress
and challenges, 2024.
[11] Owen Dugan, Rumen Dangovski, Allan Costa, Samuel Kim, Pawan Goyal, Joseph Jacobson,
and Marin Soljaˇci´c. OccamNet: A Fast Neural Model for Symbolic Regression at Scale. arXiv
_e-prints, page arXiv:2007.10784, July 2020._
[12] Julia Balla, Sihao Huang, Owen Dugan, Rumen Dangovski, and Marin Soljacic. AI-Assisted
Discovery of Quantitative and Formal Models in Social Science. _arXiv e-prints, page_
arXiv:2210.00563, October 2022.
[13] Michael McCloskey and Neal J. Cohen. Catastrophic interference in connectionist networks:
The sequential learning problem. volume 24 of Psychology of Learning and Motivation, pages
109–165. Academic Press, 1989.
[14] Davide Maltoni and Matteo Ferrara. Arithmetic with Language Models: from Memorization to
Computation. arXiv e-prints, page arXiv:2308.01154, August 2023.
[15] Zhen Yang, Ming Ding, Qingsong Lv, Zhihuan Jiang, Zehai He, Yuyi Guo, Jinfeng Bai, and
Jie Tang. GPT Can Solve Mathematical Problems Without a Calculator. arXiv e-prints, page
arXiv:2309.03241, September 2023.
[16] Yixin Liu, Avi Singh, C. Daniel Freeman, John D. Co-Reyes, and Peter J. Liu. Improving large
language model fine-tuning for solving math problems, 2023.
[17] Ke Wang, Houxing Ren, Aojun Zhou, Zimu Lu, Sichun Luo, Weikang Shi, Renrui Zhang, Linqi
Song, Mingjie Zhan, and Hongsheng Li. Mathcoder: Seamless code integration in llms for
enhanced mathematical reasoning, 2023.
[18] Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing, Xing Xu, Soujanya
Poria, and Roy Ka-Wei Lee. Llm-adapters: An adapter family for parameter-efficient fine-tuning
of large language models, 2023.
[19] Robert M French. Catastrophic forgetting in connectionist networks. Trends in cognitive
_sciences, 3(4):128–135, 1999._
[20] Haolin Chen and Philip N. Garner. Bayesian parameter-efficient fine-tuning for overcoming
catastrophic forgetting, 2024.
[21] Shuo Liu, Jacky Keung, Zhen Yang, Fang Liu, Qilin Zhou, and Yihan Liao. Delving into
parameter-efficient fine-tuning in code change learning: An empirical study, 2024.
[22] Marek Kadlˇcík, Michal Štefánik, Ondˇrej Sotoláˇr, and Vlastimil Martinek. Calc-x and calcformers: Empowering arithmetical chain-of-thought through interaction with symbolic systems,
2023.
[23] Mojtaba Komeili, Kurt Shuster, and Jason Weston. Internet-augmented dialogue generation. In
Smaranda Muresan, Preslav Nakov, and Aline Villavicencio, editors, Proceedings of the 60th
_Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),_
pages 8460–8478, Dublin, Ireland, May 2022. Association for Computational Linguistics.
-----
[24] Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna
Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, and John
Schulman. Webgpt: Browser-assisted question-answering with human feedback, 2022.
[25] Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha,
Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee,
Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun,
Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts,
Maarten Bosma, Vincent Zhao, Yanqi Zhou, Chung-Ching Chang, Igor Krivokon, Will Rusch,
Marc Pickett, Pranesh Srinivasan, Laichee Man, Kathleen Meier-Hellstern, Meredith Ringel
Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen,
Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina,
Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm,
Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise AgueraArcas, Claire Cui, Marian Croak, Ed Chi, and Quoc Le. Lamda: Language models for dialog
applications, 2022.
[26] Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach
themselves to use tools, 2023.
[27] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi,
Quoc Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language
models, 2023.
[28] Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan,
and Graham Neubig. Pal: Program-aided language models, 2023.
[29] Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. Program of thoughts
prompting: Disentangling computation from reasoning for numerical reasoning tasks, 2023.
[30] Aojun Zhou, Ke Wang, Zimu Lu, Weikang Shi, Sichun Luo, Zipeng Qin, Shaoqing Lu, Anya
Jia, Linqi Song, Mingjie Zhan, and Hongsheng Li. Solving challenging math word problems
using gpt-4 code interpreter with code-based self-verification, 2023.
[31] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez,
Lukasz Kaiser, and Illia Polosukhin. Attention Is All You Need. _arXiv e-prints, page_
arXiv:1706.03762, June 2017.
[32] Robin M. Schmidt. Recurrent Neural Networks (RNNs): A gentle Introduction and Overview.
_arXiv e-prints, page arXiv:1912.05911, November 2019._
[33] Subhro Roy and Dan Roth. Solving general arithmetic word problems, 2016.
[34] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Mach. Learn., 8(3–4):229–256, May 1992.
[35] AI@Meta. Llama 3 model card. 2024.
[36] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei,
Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas
Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes,
Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony
Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian
Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut
Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov,
Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta,
Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng
Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien
Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open Foundation
and Fine-Tuned Chat Models. arXiv e-prints, page arXiv:2307.09288, July 2023.
[37] OpenAI. Hello gpt-4o. 2024.
[38] Yaniv Leviathan, Matan Kalman, and Yossi Matias. Fast Inference from Transformers via
Speculative Decoding. arXiv e-prints, page arXiv:2211.17192, November 2022.
-----
[39] Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. Learning to solve arithmetic word problems with verb categorization. In Alessandro Moschitti,
Bo Pang, and Walter Daelemans, editors, Proceedings of the 2014 Conference on Empirical
_Methods in Natural Language Processing (EMNLP), pages 523–533, Doha, Qatar, October_
2014. Association for Computational Linguistics.
[40] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John
Schulman. Training verifiers to solve math word problems. ArXiv, abs/2110.14168, 2021.
[41] Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. Parsing algebraic word problems into equations. Transactions of the Association for
_Computational Linguistics, 3:585–597, 2015._
[42] Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are NLP models really able to solve
simple math word problems? In Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer,
Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, and
Yichao Zhou, editors, Proceedings of the 2021 Conference of the North American Chapter of the
_Association for Computational Linguistics: Human Language Technologies, pages 2080–2094,_
Online, June 2021. Association for Computational Linguistics.
[43] Benjamin Spector and Chris Re. Accelerating LLM Inference with Staged Speculative Decoding.
_arXiv e-prints, page arXiv:2308.04623, August 2023._
[44] Georg Martius and Christoph H. Lampert. Extrapolation and learning equations. arXiv e-prints,
page arXiv:1610.02995, October 2016.
[45] Subham Sahoo, Christoph Lampert, and Georg Martius. Learning equations for extrapolation
and control. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International
_Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research,_
pages 4442–4450, Stockholmsmässan, Stockholm Sweden, 10–15 Jul 2018. PMLR.
[46] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal
Policy Optimization Algorithms. arXiv e-prints, page arXiv:1707.06347, July 2017.
-----
**Appendix**
**A** **Further Experiment Details and Results**
For all training and evaluation runs, we used a single 32 GB NVIDIA Tesla V100 GPU. The LLM
used for the OccamLLM system is Llama 3 8B Instruct.
In the experiments presented in Section 4, for each of the weight decoders and the switch, we used
two-layer MLPs of input size 4096 (Llama 3 8B Instruct hidden size), intermediate size 64 and final
size equal to the number of weights in the corresponding OccamNet layer or switch.
In the two-layer experiments presented in Section 6, for each of the weight decoders, we used
two-layer MLPs of input size 4096 (Llama 3 8B Instruct hidden size), intermediate size 512, and
final size equal to the number of weights in the corresponding OccamNet layer. We did not train a
switch for this experiment as we did not test long-form generations.
**A.1** **Training Dataset**
**A.1.1** **OccamNet Decoder**
To train the OccamNet decoder, we created a training dataset consisting of a 80,000 examples split in
40,000 single queries and 40,000 sequences of concatenated queries. In the first case, we sampled a
single prompt of those described in 3.2 and formatted it using the Llama 3 8B Instruct chat template.
In the second case, we concatenated multiple prompts described in 3.2 without the chat template.
40% of the sampled prompts correspond to simple arithmetic, concretely +, −, ×, and ÷. We sampled
from various input value ranges, chosen at random: integers in [−10, 10], integers in [−100, 100],
integers in [−1000, 1000], integers in [−20000, 20000], floating numbers in [−1, 1], and floating
point numbers in [−1000, 1000].
Another 40% corresponds to complex arithmetic involving square roots, logarithms, exponentials,
trigonometric functions and computing one number to the power of another. For the square root and
the logarithm, we sampled integers uniformly in either [1, 100] or [1, 20000] and floats uniformly in
either [0.01, 100] or [0.01, 20000]. For the exponential, we sampled integers and floats in [−10, 10].
For the powers, we sampled the base as either an integer in [1, 25] or a float in [0.1, 25] and the
exponent as an integer in [−6, 6].
The remaining 20% corresponds to single or multi step problems reasoning prompts. The inputs
were sampled with various ranges, sometimes as floats and sometimes as integers, depending on the
context of the problem. Because a single-OccamNet-layer OccamLlama cannot solve a multi-step
reasoning problem in a single step, we never end the multiple-query examples with a multi-step
reasoning problem.
We first iterated the 80,000 examples, prepending “Answer = ” to the assistant response, thus training
OccamNet to predict the result after the “=”. Next, we validated the model on out-of-distribution
examples where “Answer = ” was not appended. We noticed that the accuracy on this task was
improving during training, but after the full dataset was iterated it still didn’t perform as well as when
evaluated in-distribution. Therefore, we continued to train the model using examples of the same
dataset but with no “Answer = ” at the beginning of the assistant response. The model rapidly learned
the new task. We stopped at 28,000 iterations of this second stage.
For the two-layer OccamNet run, we generated a large set of programmatically generated prompts of
the form 3 + 97 · −4 =, with the Llama 3 8B chat template applied.
**A.1.2** **Switch Decoder**
To train the switch decoder, we created a dataset of 50,000 examples. For each example, the tokens
previous to the numbers that should be computed using OccamNet, which are the ones that the switch
should not route to the LLM, are labeled with a 1, and all the rest are labeled with a 0.
Half of the examples consist of a single prompt corresponding to a simple arithmetic expression as
the ones described in Section 3.2. The token immediately at the beginning of the assistant response is
-----
labeled with a 1. Therefore, the trained system will answer directly to simple arithmetic queries that
OccamNet can compute.
The remaining 25,000 examples consist each of a series of prompts which are formatted in the Llama
3 8B Instruct chat template in a conversational style. The input-output pairs used to create each
sequence of prompts are distributed in the following way:
- 25% of these pairs are created by taking one of the simple arithmetic expressions as input.
The output is selected randomly between answering directly at the beginning of the assistant
response, adding "Answer = " before the answer, or repeating the input expression before
the answer. These examples train the switch to trigger OccamNet in different scenarios
where the LLM needs to compute an answer.
- 70% of the pairs come from a collection of manually created and labeled examples, which
illustrate in which cases the switch should route to OccamNet and, importantly, in which
cases it shouldn’t. This collection was designed to cover a wide variety of situations where
the LLM might need to use OccamNet for computations. Furthermore, it includes cases
where the LLM should avoid calling OccamNet because doing so would produce a bad
prediction. This is the case, for example, of instances where the LLM attempts to add three
numbers simultaneously. If it were to use the 1-layer OccamNet, which can take 2 inputs at
most, the result would be incorrect.
- The remaining 5% of the prompts come from multi-step reasoning problems. We set the
output for these not to a full response, but only “The answer is ”. In such cases, a single-layer
OccamNet cannot compute the answer, so the output tokens are labeled with a 0. This trains
the system to avoid routing to OccamNet when the later cannot compute the answer.
**A.2** **Training Hyperparameters**
For all 1-layer OccamNet training runs, we used a batch size of 1, a learning rate of 6e − 4 and a
weight decay parameter of 0.01. We use gradient accumulation to achieve an effective batch size of 8.
We used a constant learning rate scheduler. We take 1000 samples from OccamNet per token.
For the 2-layer OccamNet run, we used a batch size of 1, a learning rate of 1e − 4 and a weight decay
parameter of 0.01. We use the gradient accumulation technique to achieve an effective batch size of
8. We used a constant learning rate scheduler. We take 50,000 samples from OccamNet per token.
**A.3** **Prompting**
For the division arithmetic tasks, we found that the language models often did not return decimals. As
such, we appended “Give the answer in decimals.” to these prompts. Similarly, for the trigonometric
functions evaluations, we explicitly ask the language models to take the input as radians, by formatting
the prompts as "cos(X rad) =".
For some models, we provide system prompting to guide the model toward the correct behavior. We
break down prompting by model below:
**Llama 2/3:** We did not provide a system prompt for the arithmetic tasks. For the reasoning tasks,
we used the system prompt “Solve step by step.”
**GPT 3.5 Turbo:** We do not use a system prompt for GPT 3.5 Turbo.
**GPT 4o:** We did not use a system prompt, except for the MATH401 dataset, where we noticed
that GPT 4o was returning fractions instead of decimals. As such, on MATH401 we used the system
prompt “Give your answer in decimals.”
**GPT 4o + Code:** We used the system prompt “Write and run code to answer math questions. Do
not format numbers. Give all answers in decimals.”
**OccamLlama:** We experimented with OccamLlama prompts, but discovered that not including a
system prompt was most effective.
-----
**A.4** **Generation parameters**
For OccamLlama, Llama 2 7B and Llama 3 8B, we use the default values of T = 0.6 and Top-P
= 0.9. For GPT 3.5 Turbo, GPT 4o, and GPT 4o with Code Interpreter, we use the default values of
_T = 1.0 and Top-P = 1.0._
**A.5** **Experimental Results**
Tables 4 and 5 show in more detail the accuracy of OccamLlama and other baselines on arithmetic
and mathematical problem solving tasks. We measure accuracy as described in the main text.
We note here that on datasets with challenging arithmetic, in particular Multiarith Float and MATH401,
OccamLlama outperforms even GPT 4o and GPT 4o Code. In fact, on MultiArith Float, OccamLlama
is nearly 10 percentage points more accurate than GPT 4o + Code and and more than 40 percentage
points more accurate than Llama 3 8B. Similarly, on MATH401, OccamLlama is 7 percentage points
more accurate than GPT 4o + Code and nearly 25 percentage points more accurate than Llama 3 8B.
Although MATH401 does not include word problems, it does include some arithmetic expressions
that require multiple calls to OccamNet to solve, meaning it requires both reasoning (to determine
how to break up the arithmetic expression) and arithmetic capabilities.
The only datasets on which OccamLlama performs substantially worse than Llama 3 8B are GSM8K
[40] and Single Eq [41]. We believe a contributor to this is that these datasets include many problems
that involve either fractions and percentages, which Llama does not convert to decimal format, or
equations with unknown variables. As such, Llama often calls OccamNet with expressions such as
“multiplying by 3/4 gives,” “5% of this gives,” or “adding 5 to both sides of x-5 = 11 gives.” Because
the switch is not trained on many examples like these in which the number is not in decimal format,
it does not realize that OccamNet should not be used in these cases. Therefore, the switch triggers
OccamNet, which is not capable of performing the correct operation (these types of operations are
not acheivable with a 1-layer OccamNet). Future work could address this issue by training the switch
with more data on this type of situation or by training an OccamLlama with a two layer OccamNet.
Relative error is another important metric that complements accuracy. It measures by how much the
answer differs from the true result. For two models with a similar accuracy metric, the relative error
they achieve can be very different. Table 6 shows the relative error for the arithmetic experiments.
An answer marked correct can have a nonzero relative error because of machine precision limits and
because the answer does not report an infinite number of digits.
Interestingly, Llama 2 performs exceptionally poorly on division. By examining outputs, we see that
this is because Llama 2 produces an approximately correct output but with the decimal place in the
wrong position, leading to a result that is off by many orders of magnitude.
**B** **Example OccamLlama Generations**
In this section, we include example OccamLlama generations from the MATH401 and MultiArithFloat dataset. We randomly selected three examples for each dataset. OccamNet outputs are included
in green. We omit prompt formatting to save space. Similarly, although outputs from OccamNet are
always followed by “\n\n,” we omit these newlines to save space, instead adding a period and space
after each OccamNet generation.
By chance, all six responses happen to be correct.
**B.1** **MATH401**
**Example 1**
Prompt:
_7^4=_
Response:
_2401. 7 ^4 = 7 × 7 × 7 × 7 = 2401_
-----
Table 4: Percent accuracy on arithmetic tasks. Higher is Better. Bold indicates best performance.
OccamLlama Llama 2 Llama 3 GPT 3.5 GPT 4o GPT 4o
7B Chat 8b Instruct Turbo Code
Addition (3) **100.0** **0.0** 70.9 1.4 97.1 0.5 98.8 0.3 **100.0** **0.0**
_±_ _±_ _±_ _±_ _±_
Addition (5) **100.0** **0.0** 55.9 1.6 77.1 1.3 92.5 0.8 99.2 0.3
_±_ _±_ _±_ _±_ _±_
Addition (7) **100.0** **0.0** 19.2 1.2 44.9 1.6 65.2 1.5 95.7 0.6 **100.0** **0.0**
_±_ _±_ _±_ _±_ _±_ _±_
Subtraction (3) **100.0** **0.0** 49.7 1.6 95.2 0.7 94.0 0.8 98.7 0.4
_±_ _±_ _±_ _±_ _±_
Subtraction (5) **100.0** **0.0** 22.9 1.3 58.8 1.6 86.3 1.1 92.6 0.8
_±_ _±_ _±_ _±_ _±_
Subtraction (7) **100.0** **0.0** 8.7 0.9 34.4 1.5 59.8 1.6 85.6 1.1 99.5 0.5
_±_ _±_ _±_ _±_ _±_ _±_
Multiplication (3) **100.0** **0.0** 4.6 0.7 16.8 1.2 49.2 1.6 76.9 1.3
_±_ _±_ _±_ _±_ _±_
Multiplication (5) **100.0** **0.0** 0.0 0.0 0.1 0.1 0.4 0.2 4.6 0.7
_±_ _±_ _±_ _±_ _±_
Multiplication (7) **100.0** **0.0** 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 99.0 0.7
_±_ _±_ _±_ _±_ _±_ _±_
Division (3) **100.0** **0.0** 20.8 1.3 71.7 1.4 50.5 1.6 78.2 1.3
_±_ _±_ _±_ _±_ _±_
Division (5) **100.0** **0.0** 7.4 0.8 48.1 1.6 15.7 1.2 51.0 1.6
_±_ _±_ _±_ _±_ _±_
Division (7) **100.0** **0.0** 2.8 0.5 35.3 1.5 10.7 1.0 38.6 1.5 **100.0** **0.0**
_±_ _±_ _±_ _±_ _±_ _±_
Square Root (3) **100.0** **0.0** 1.2 0.3 14.8 1.1 47.1 1.6 69.3 1.5
_±_ _±_ _±_ _±_ _±_
Square Root (5) **100.0** **0.0** 0.2 0.1 1.3 0.4 11.9 1.0 23.6 1.3
_±_ _±_ _±_ _±_ _±_
Square Root (7) **100.0** **0.0** 0.0 0.0 0.0 0.0 0.9 0.3 18.6 1.2 **100.0** **0.0**
_±_ _±_ _±_ _±_ _±_ _±_
Exponential **100.0** **0.0** 0.3 0.2 3.1 0.5 12.5 1.0 23.2 1.3 **100.0** **0.0**
_±_ _±_ _±_ _±_ _±_ _±_
Logarithm **100.0** **0.0** 0.1 0.1 0.0 0.0 17.1 1.2 21.3 1.3 **100.0** **0.0**
_±_ _±_ _±_ _±_ _±_ _±_
Sine **100.0** **0.0** 7.6 0.8 7.0 0.8 13.4 1.1 39.3 1.5 **100.0** **0.0**
_±_ _±_ _±_ _±_ _±_ _±_
Cosine **100.0** **0.0** 0.8 0.3 1.5 0.4 6.7 0.8 32.8 1.5 **100.0** **0.0**
_±_ _±_ _±_ _±_ _±_ _±_
AVERAGE **100.0** **0.0** 14.4 0.3 32.0 0.3 38.6 0.4 55.2 0.4 99.8 0.1
_±_ _±_ _±_ _±_ _±_ _±_
Table 5: Percent accuracy on reasoning tasks. Higher is Better. Bold indicates best performance.
OccamLlama Llama 2 Llama 3 GPT 3.5 GPT 4o GPT 4o
7B Chat 8b Instruct Turbo Code
AddSub 91.6 1.4 78.0 2.1 93.4 1.2 95.4 1.1 97.0 0.9 **97.5** **1.1**
_±_ _±_ _±_ _±_ _±_ _±_
GSM8K 73.5 1.2 36.0 1.3 79.8 1.1 84.8 1.0 **96.1** **0.5** 94.0 1.7
_±_ _±_ _±_ _±_ _±_ _±_
MultiArith 99.2 0.4 76.0 1.7 **99.8** **0.2** 97.2 0.7 99.7 0.2 99.5 0.5
_±_ _±_ _±_ _±_ _±_ _±_
MultiArith Float **98.2** **0.5** 23.3 1.7 57.3 2.0 77.3 1.7 96.2 0.8 89.5 2.2
_±_ _±_ _±_ _±_ _±_ _±_
MATH401 **85.0** **1.8** 43.9 2.5 60.3 2.4 63.1 2.4 76.6 2.1 78.0 2.9
_±_ _±_ _±_ _±_ _±_ _±_
Single Eq 92.9 1.1 79.1 1.8 96.3 0.8 97.8 0.6 98.0 0.6 **99.0** **0.7**
_±_ _±_ _±_ _±_ _±_ _±_
SVAMP 88.6 1.0 61.5 1.5 86.3 1.1 87.8 1.0 96.2 0.6 **96.5** **1.3**
_±_ _±_ _±_ _±_ _±_ _±_
AVERAGE 87.4 0.5 53.3 0.7 82.1 0.6 86.5 0.5 **95.2** **0.3** 93.4 0.7
_±_ _±_ _±_ _±_ _±_ _±_
**Example 2**
Prompt:
_6**3=_
Response:
_216. 6 ^3 = 6 × 6 × 6 = 216_
**Example 3**
Prompt:
_63.0+0.074=_
-----
Table 6: Relative error (%) on arithmetic tasks. Lower is Better. Bold indicates best performance.
OccamLlama Llama 2 Llama 3 GPT 3.5 GPT 4o GPT 4o
7B Chat 8b Instruct Turbo Code
Addition (3) **0.0±0.0** 50.5±10.9 3.2±1.9 0.3±0.1 **0.0±0.0**
Addition (5) **0.0±0.0** 113.0±21.1 23.7±4.0 4.6±1.8 0.0±0.0
Addition (7) **0.0±0.0** 310.3±97.0 78.1±16.2 4.0±1.4 1.0±0.9 **0.0±0.0**
Subtraction (3) **0.0±0.0** 66.2±18.7 4.1±0.8 3.8±0.7 0.4±0.1
Subtraction (5) **0.0±0.0** 173.6±67.1 29.4±4.3 38.3±16.5 3.5±0.6
Subtraction (7) **0.0±0.0** 222.3±54.4 65.6±12.9 44.6±31.3 5.4±0.7 0.3±0.3
Multiplication (3) **0.0±0.0** 7.6±0.7 1.9±0.5 1.8±0.4 0.1±0.1
Multiplication (5) **0.0±0.0** 84.9±0.9 46.7±1.7 19.2±3.7 1.8±0.4
Multiplication (7) **0.0±0.0** 98.9±0.2 74.9±1.8 90.1±24.2 4.4±0.6 1.0±0.7
Division (3) 0.1±0.0 1346.2±275.4 1.3±0.9 1.1±0.3 **0.0±0.0**
Division (5) 0.2±0.1 174156.6±31687.6 9.5±1.8 0.7±0.2 **0.1±0.0**
Division (7) 0.1±0.0 22032920.9±3642549.7 225.3±142.1 0.3±0.1 0.0±0.0 **0.0±0.0**
Square Root (3) **0.0±0.0** 8.9±1.1 1.1±0.3 0.2±0.0 0.0±0.0
Square Root (5) **0.0±0.0** 72.8±4.8 12.6±1.7 0.1±0.0 0.0±0.0
Square Root (7) 0.0±0.0 207.8±21.8 15.4±1.4 8.8±0.9 4.8±2.0 **0.0±0.0**
Exponential 0.3±0.0 422.6±82.6 11.7±0.7 2.3±0.9 0.1±0.0 **0.0±0.0**
Logarithm 0.0±0.0 138.3±11.9 40.2±1.3 6.4±4.0 0.1±0.0 **0.0±0.0**
Sine 0.1±0.0 515.5±68.1 279.2±21.9 125.4±12.3 3.2±0.7 **0.1±0.0**
Cosine 0.1±0.0 581.9±199.7 346.1±80.4 188.5±22.5 2.8±0.5 **0.0±0.0**
AVERAGE **0.1±0.0** 1169210.8±194953.4 66.9±8.8 28.4±2.7 1.5±0.1 0.2±0.1
Response:
_63.074. 63.0 + 0.074 = 63.074. So the answer is 63.074._
**B.2** **MultiArith-Float**
**Example 1**
Prompt:
_Mike had 30 video games but 6 of them weren’t working. If he wanted to sell_
_the working games for 85.53 each, how much money could he earn?_
Response:
_Mike had 30 video games. 6 weren’t working, so he had 30 - 6 = 24. He can_
_sell 24 games for 85.53 each. 24 x 85.53 is 2052.720. So Mike could earn 2052.72_
_dollars._
**Example 2**
Prompt:
_While playing a trivia game, Adam answered 6 questions correct in the first_
_half and 3 questions correct in the second half. If each question was worth 59.08_
_points, what was his final score?_
Response:
_Adam got 6 questions correct in the first half, and 3 questions correct in the_
_second half. So he got 6 + 3 = 9. Each question was worth 59.08 points. So his_
_score is 9 * 59.08 = 531.720. The answer is 531.72._
**Example 3**
Prompt:
-----
_Lana’s favorite band was holding a concert where tickets were 181.92 dollars_
_each. Lana bought 5 tickets for herself and her friends and 4 extra tickets in case_
_anyone else wanted to go. How much did she spend?_
Response:
_Lana bought 5 tickets for herself and her friends. Then 4 extra tickets. So in_
_total she bought 5 + 4 = 9. Each ticket was 181.92 dollars. So 9 tickets will be 9 x_
_181.92 = 1637.280. The answer is 1637.280._
**C** **Alternative Architectures and Losses**
**C.1** **Alternative Architectures**
As discussed in the main text, although OccamLLM works most naturally with OccamNet, it can also
work with other symbolic architectures such as the EQL network [44, 45], or architectures that can
represent probability distributions over symbolic expressions, such as transformers [31] or recurrent
neural networks (RNNs) [32].
However, in practice we believe OccamNet is the most effective architecture for this use case. We
find that because EQL does not represent a probability distribution over functions, it easily gets stuck
in local minima.
Regarding transformers and RNNs, we believe that OccamNet possesses a key advantage of being
interpretable; simply by looking at the weights, it is possible for a human to determine which functions
OccamNet assigns a high probability. We believe that this interpretability will make OccamNet easy
for a decoder to initialize with the desired distribution. On the other hand, an RNN or transformer have
substantially more complex relations between the weights and corresponding probability distribution,
which we hypothesize would make learning a decoder for such models difficult.
This leads us to a key point: transformers and RNNs are effective for modeling complex multimodal
distributions, but for this problem, we want to select a single function for each token, so the extra
expressivity of these models is unneeded and likely detrimental to performance. We believe that
OccamNet, a much simpler architecture, enables better parameter efficiency and performance.
**C.2** **Alternative Losses**
In this section we discuss alternative possible losses and how we arrived at the loss in Equation 3.
We considered two loss functions which are natural when optimizing a probability distribution: 1) a
cross-entropy loss, and 2) a REINFORCE [34] loss. Each of these requires only a slight modification
to reach Equation 3. This discussion thus illustrates how our loss combines benefits from both the
cross-entropy and the reinforcement-learning losses.
**Cross-Entropy Loss** The cross-entropy loss is effective at modeling probability distributions.
Given a ground truth distribution qx[f ] conditioned on the input text x, the cross-entropy loss is given
by
_L(x, y; W_ ) = − _qx[f_ ] log pW [f ]. (4)
_f_
X
Unfortunately, for OccamLLM, the ground-truth distribution qx[f ] is not uniquely specified. In
particular the only constraints on qx[f ] are that it is is normalized and satisfies qx[f ] = 0 if f is not
the desired function (i.e., f (x) ̸= y). Since the same function can be represented in many ways in
the OccamNet network (a property true of many function representations), multiple f may satisfy
_f_ (x) = y, so qx is underparametrized.
The most natural choice for qx is to weight each valid function equally:
_qx[f_ ] = _c0x_ ifotherwise f (x) = y (5)
where cx is a constant chosen such that qx is normalized, given by the inverse of the number of
functions f satisfying f (x) = y. However, determining cx requires testing every possible function f,
-----
which may be infeasible for large OccamNet networks. Further, this qx requires OccamNet to learn a
superposition of functions, which may be challenging given its relatively low parameter count.
Another option is to choose a canonical form f _[∗]_ for each function and to set qx to be a 1-hot
distribution that is nonzero only at f _[∗]. Although this removes the challenge of learning a superposition,_
it still requires sampling nearly all functions in OccamNet due to the sparsity of qx.
Ideally, we would like to find a qx with the following conditions:
- It enables the cross-entropy loss to be calculated by sampling from OccamNet. This allows
us to avoid needing to iterate through and evaluate every f (x) each time we compute the
loss, since we can instead obtain a Monte-Carlo estimate.
- It is minimized when pW is a 1-hot probability distribution. This ensures that OccamNet
can represent the optimal distribution.
- It has qx[f ] = 0 for all f satisfying f (x) = y. This improves sample-efficiency by
_̸_
increasing the probability of sampling an f with qx[f ] > 0.
A solution is to set
_cxpW [f_ ] if f (x) = y
_qx[f_ ] = 0 otherwise (6)
where cx is chosen such that qx is normalized. This gives a loss
_L(x, y; W_ ) = −
_f_
X
= −
_f_
X
_≈−_ _[c]N[x]_
_≈−_
P
_qx[f_ ] · log pW [f ]
_cxpW [f_ ] · δ(f (x) − _y) · log pW [f_ ]
_δ(f_ (x) − _y) · log pW [f_ ]
_fX∼pW_
_f_ _pW_ _[δ][(][f]_ [(][x][)][ −] _[y][)][ ·][ log][ p][W][ [][f]_ []]
_∼_ _,_
_f_ _pW_ _[δ][(][f]_ [(][x][)][ −] _[y][)]_
_∼_
P
where
where
1 if f (x) = y
_δ(f_ (x) _y) =_ (7)
_−_ 0 otherwise
and in the last step we used the fact that cx can be approximated as
1 _N_
_cx =_
_f_ _[p][W][ [][f]_ []][δ][(][f] [(][x][)][ −] _[y][)][ ≈]_ _f_ _pW_ _[δ][(][f]_ [(][x][)][ −] _[y][)]_ _[.]_
_∼_
This loss is easily computed by sampling fromP _pW, it satisfiesP_ _qW > 0 for all f satisfying f_ (x) = y,
and it is minimized when pW is a delta function centered at any f satisfying f (x) = y, as desired.
_cx =_
Note that
_f_ _pW_ _[δ][(][f]_ [(][x][)][ −] _[y][)][ ·][ log][ p][W][ [][f]_ []]
_L(x, y; W_ ) = − P _∼_ _f_ _∼pW_ _[δ][(][f]_ [(][x][)][ −] _[y][)]_ (8)
is exactly the loss given in Equation 3 with R(f (x), y) = δ(f (x) _y). Thus, we have shown_
P
_−_
how Equation 3 can be interpreted as a cross-entropy loss. Equation 3 with general R(f (x), y)
can be seen as a cross-entropy loss with a “smoothed” ground truth distributionpW [f ] · R(f (x), y). _qx given by qx ∝_
**REINFORCE Loss** Reinforcement-learning losses are effective for exploring large search spaces.
We use a modification of the REINFORCE [34] loss because it is relatively simple to implement.
Future work could explore more sophisticated variants of this algorithm, such as Proximal Policy
Optimization [46].
The standard REINFORCE loss applied to OccamLLM gives
(x, y; W ) =
_L_ _−_ _N[1]_
_R(f_ (x), y) · log pW [f ].
_fX∼pW_
-----
Figure 5: a) An example OccamNet, with image layers boxed in green and arguments layers boxed
in blue. We denote the inputs as the 0th image layer and the outputs as the (L + 1)th arguments layer.
Nodes in the arguments layers are represented with a P because of their probabilistic nature. b) A
demonstration of the dropped connections from sampled paths in OccamNet. All light grey paths are
dropped from the final symbolic form of the sampled function because they are not directly connected
to the outputs.
Note that for sparse R, there will be very few nonzero R(f (x), y) sampled, so, since we are dividing
by N, the gradient signal will be small. We modify REINFORCE by dividing by the the sum of the
rewards for all samples instead of by N to ensure that correct functions sampled only a few times
still receive a large training step update. This once again produces Equation 3.
We find that using a delta function for our reward is most effective because it most accurately
represents the sparse reward of the problem. Further, as shown above, this loss provides a MonteCarlo estimate of the the cross entropy loss. Due to the sparse reward, many samples may initially be
required to obtain an accurate estimate of the loss. However, as OccamNet approaches the desired
distribution, the loss’s sample efficiency will improve.
**D** **Background on OccamNet**
This section is heavily modified from [11].
We divide this section into the following subsections:
1. In Section D.1, we describe OccamNet’s architecture in more detail.
2. In Section D.2, we describe OccamNet’s sampling process.
3. In Section D.3, we describe OccamNet’s probability distribution.
4. In Section D.4, we describe OccamNet’s initialization process.
**D.1** **OccamNet Architecture**
As described in the main text, we start from a predefined collection of N primitive functions P.
OccamNet represents a distribution over compositions of functions in P. From now on, we denote
the ith primitive function in the lth layer as ϕ[(]i[l][)][. We begin indexing the primitives from 0 and]
the layers from 1, because we treat the inputs as the 0th layer. So, for example, in Figure 5a,
_ϕ[(1)]2_ = ϕ[(2)]2 = ϕ[(3)]2 = sin.
Each OccamNet layer consists of two sublayers, which we denote the arguments and image sublayers,
shown in Figure 5a. For an L-layer OccamNet, each of these sublayers is reproduced L times. The
_lth softmax layer connects the (l −_ 1)th image layer with the lth arguments layer. For 1 ≤ _l ≤_ _L, we_
denote the lth arguments sublayer hidden state as **h[(][l][)]** and the lth image sublayer hidden state as h[(][l][)].
So, **h[(2)]** would represent the middle layer of nodes labeled P in Figure 5a. We further write
[e]
_⊤_ _⊤_
[e] **h[(][l][)]** = _h[(]1[l][)][, . . .,]_ [e]h[(]M[l][)][(][l][)] _, h[(][l][)]_ = _h[(]1[l][)][, . . ., h][(]N[l][)][(][l][)]_ _,_ (9)
h i h i
where e e
_M_ [(][l][)] = _α_ _ϕ[(]k[l][)]_ _,_
0 _k<N_ [(][l][)]
_≤X_ h i
-----
Figure 6: The progression of enhancements leading to a Complete OccamNet from a standard
OccamNet. a) A standard OccamNet without repeated activations or skip connections. b) The same
OccamNet as in a) with activations repeated in earlier layers. c) The same OccamNet as in b) with
added skip connections. This is a Complete OccamNet.
_N_ [(][l][)] is the number of primitives in layer l, and α[ϕ] is the arity of function ϕ. We also define h[(0)] to
be the input layer (an image sublayer) and **h[(][L][+1)]** to be the output layer (an arguments sublayer).
In a standard OccamNet layer, each primitive is repeated exactly once in each layer. However, in
Complete OccamNet, each primitive in the l[e]th layer is repeated A[L][−][l] times, where A is the maximum
arity of the primitives. This is shown in Figure 6 in the transition from 6a to 6b. Complete OccamNet
also concatenates each image layer to the next image layer, as shown in Figure 6c.
**D.2** **Sampling from OccamNet**
In this section, we more carefully describe OccamNet’s sampling process. We sample a connection
to each arguments layer node from the distribution given by the softmax of the softmax-layer weights
leading to that node. In particular, if wi[(][l][)] are the weights of the lth softmax layer leading to the ith
node of the lth argument’s layer, when we sample we produce a sparse matrix
softmax(w1[(][l][)][)]
.
.
.
softmax(wM[(][l][)][(][l][)] [)]
(10)
SAMPLE
where the SAMPLE function samples a one-hot row vector for each row based on the categorical
probability distribution defined by softmax(w). To evaluate this sample, we simply evaluate a
forward pass through the network, treating the sampled sparse matrices from the softmax layers as
the weights of linear layers:
_h[(]1[l][)]_
.
.
.
e
_h[(]M[l][)][(][l][)]_
softmax(w1[(][l][)][)]
.
.
.
softmax(wM[(][l][)][(][l][)] [)]
**h[(][l][)]** =
. .
**h[(][l][)]** = . SAMPLE . **h[(][l][−][1)],** (11)
e. _≡_ .
e h[(]M[l][)][(][l][)] softmax(wM[(][l][)][(][l][)] [)]
To complete the picture of the forward pass, we formalize how we deal with activations acceptinge
multiple inputs. We define the action of the activation functions as follows:
_h[(]i[l][)]_ = ϕ[(]i[l][)]
_h[(]j[l][)][, . . .,]_ [e]h[(]j[l]+[)] _α[ϕ[(]i[l][)]]_ 1
_−_
e
_α_ _ϕ[(]k[l][)]_ _._ (12)
0 _k<i_
_≤X_ h i
_, j =_
**D.3** **OccamNet’s Probability Distribution**
OccamNet parametrizes a probability distribution over all functions which it can sample. In particular,
when OccamNet samples a function, it is really sampling a directed acyclic graph (DAG) which
-----
defines a computational path to compute a function. The probability of sampling a computational
graph is equal to the product of the probabilities of the connections in the DAG which are connected
to the output node.
Note that multiple computational graphs can correspond to the same function. In this paper, when we
refer to a function sampled from OccamNet or the probability of a function according to OccamNet,
we use function as a shorthand for a particular computational graph corresponding to that function.
Although this underspecifies the computational graph in question, this is never an issue because we
always refer to functions in abstract.
When using OccamLlama for inference, we select the maximum probability function by sampling
100 functions from OccamNet, evaluating their probabilities as described above and selecting the
maximum one.
**D.4** **Initialization**
This section describes how we calculate W[∗] from the main text. We wish to initialize W[∗] such that
_pW ∗_ [f1] = pW ∗ [f2] for all f1 and f2. Below, we assume that skip connections do not exist. However,
the algorithm also works for skip connections, requiring only a small modification to Equation 14.
Unfortunately, such an initialization is impossible for any OccamNet with two or more layers
containing primitives with more than one argument. However, it is possible to initialize OccamNet
such that a lower bound qW _[∗]_ of the true probability pW _[∗]_ is independent of f.
Define the probability of a function f up to a given node as the product of the probabilities of the
edges that lead to that node in the DAG of f . Intuitively, qW [f ] approximates pW [f ] by maintaining
a lower bound on the probability of f up to each node of an OccamNet and propagating that lower
bound through the computational graph given by f .
To define qW more precisely, let qi[(][l][)][[][f] []][ and][ e]qi[(][l][)][[][f] []][ be the probability bounds corresponding to the]
_ith node of the lth image or arguments sublayer. We have suppressed the dependence on W for_
notational convenience. We compute these probabilities starting with the inputs, for which we set
_qi[(0)]_ = 1. We then propagate probabilities to the arguments layers according to
_qi[(][l][+1)]_ = softmax(wi[(][l][+1)])jqj[(][l][)][,] (13)
where j is the node in the lth image layer which f connects to the ith node of (l + 1)th arguments
layer. Similarly, we propagate probabilities to the image layers according toe
_n+α[ϕ[(]i[l][)]]_ 1
_−_
_qk[(][l][)][,]_ _n =_
_k=n_
Y
e
_i−1_
_α_ _ϕ[(]j[l][)]_ _._ (14)
_j=1_
X h i
_qi[(][l][)]_
Finally, we define qW [f ] = q0[(][L][+1)][f ].
In practice qW [f ] ≤ _pW [f_ ], where equality holds for many functions. In fact, qW [f ] < pW [f ] only
when part of the DAG of f is used as input to two different arguments nodes. In cases such as these,
the portion of the DAG that is used twice multiplicatively contributes the probability of its edges to
_qW [f_ ] twice, artificially suppressing its value. However, because qW [f ] is a lower bound, initializing
_W_ _[∗]_ to equalize qW ∗ still has the desired effect of ensuring adequate coverage for each f in the initial
probability distribution of OccamNet.
With this primer, we can now define the algorithm to initialize W _[∗]_ such that qW _[∗]_ [f ] is uniform.
The algorithm traverses through OccamNet layer by layer and establishes as an invariant that, after
assigning the weights up to the lth layer, _qi[(][l][)][[][f]_ []][ are equal for all][ i][ and][ f] [. This implies that, after]
assigning the weights up to the lth layer, qi[(][l][)][[][f] []][ are equal for all][ f] [, but not necessarily for all][ i][. We]
denote the common value of _qi[(][l][)][[][f]_ []][ as][ e]q[(][l][)] eand the common value of qi[(][l][)][[][f] []][ as][ q]i[(][l][)][.]
The algorithm starts with input layer, where qi[(0)] = 1 automatically. Once the invariant above is true
e
for a given l, the algorithm sets
mink _qk[(][l][)]_
_qj[(][l][)]_
**wi[∗][(][l][+1)]**
(15)
_j_ [= log]
-----
for all i, j, where **wi[∗][(][l][+1)]**
_j_ [denotes the weight connecting the][ j][th node in the][ l][th image layer to]
the ith node in the (l + 1)th arguments layer. This establishes the invariant for _l + 1 because_
_qj[(][l][)]_ exp **wi[∗][(][l][+1)]**
_j_
_qj[(][l][)][softmax][(][w]i[∗][(][l][+1)])j =_
_k_ [exp] **wi[∗][(][l][+1)]**
_k_
h i
Pqj[(][l][)] mink _qk[(][l][)]_ _/qj[(][l][)]_
_k_ [min][m] _qm[(][l][)]_ _/qk[(][l][)]_
P 1
= _,_
_k_ [1][/q]k[(][l][)]
which is a constant over both i and j, so _qi[(][l][+1)][f_ ]P is a constant over both i and f. The algorithm
repeats the above procedure until it has traversed the entire network.
In summary, the algorithm involves the following steps: e
1. Set l = 0 and qi[(][l][)] = 1.
2. Increment l by 1.
3. Set W[∗][(][l][)] according to Equation 15.
4. If l < L + 1, Compute _q[(][l][+1)]_ and qi[(][l][+1)]
5. Return to step 2 until l = L + 1.
e
-----
| [
"Owen, Dugan",
"Donato Manuel Jimenez, Beneto",
"Charlotte, Loh",
"Zhuo, Chen",
"Rumen, Dangovski",
"Marin, Soljačić"
] | 2024-06-29T00:00:00 | NeurIPS 2024 | true | 0 | 0 | null | http://arxiv.org/abs/2406.06576 | https://arxiv.org/abs/2406.06576 | https://www.semanticscholar.org/paper/924ea5bed43a9856d5ffae7073ced95bb90ab548 |
On Lemma Conjecturing using Neural, Symbolic and Neuro-symbolic approaches | We present ongoing work in combining Large Language Models (LLMs) and symbolic tools for lemma conjecturing. Our aim is to develop a neuro-symbolic lemma conjecturing tool leveraging the best of both symbolic and neural methods. | null | # On Lemma Conjecturing using Neural, Symbolic and Neuro-symbolic approaches
S´olr´un Halla Einarsd´ottir[1], Yousef Alhessi[2], Emily First[2], and Moa Johansson[1]
1 Chalmers University of Technology, Gothenburg, Sweden.
2 University of California, San Diego, USA.{slrn, moa.johansson}@chalmers.se {emfirst,yalhessi}@ucsd.edu
**Abstract**
We present ongoing work in combining Large Language Models (LLMs) and symbolic
tools for lemma conjecturing. Our aim is to develop a neuro-symbolic lemma conjecturing
tool leveraging the best of both symbolic and neural methods.
## 1 Introduction
Theory exploration is the automatic discovery of interesting conjectures and lemmas. Previously, we have developed symbolic tools for theory exploration [13, 6] which have been used to
successfully discover, for example, lemmas needed in automated (co)-inductive provers [9, 3, 2].
There has also been prior work on using purely neural methods for conjecturing. An early
result of using large language models (LLMs) for the task of lemma generation used a GPT-2
model trained on Mizar theories [14]. Rabe et al. [12] experimented with a self-supervised approach. Our pilot study on automated conjecturing with LLMs was presented in [10], and used
GPT-3.5 and GPT-4 out of the box via ChatGPT. Recent work [11] explores conjecturing using
a constrained-decoding approach, guaranteeing well-formed conjectures. However, a common
issue when using purely neural methods for conjecturing is that many of the generated lemmas
can be duplicates, renamings or simply false. This is not the case with symbolic methods, but
they may, on the other hand, miss large conjectures outside their specified search space [10].
Our aim is to develop a neuro-symbolic lemma conjecturing tool leveraging the best of both
symbolic and neural methods. We are now following up on [10] with additional experiments
on neural conjecturing, but also going further to develop a neuro-symbolic approach through
combining the LLM with our work on data-driven conjecturing as presented at AITP 2022 [5].
For a neuro-symbolic approach, the symbolic component will come from an updated version of our template-based conjecturing tool RoughSpec [6], which restricts its search space to
properties of specific shapes using templates. For example, the template ?F (?F (X, Y ), Z) =
?F (X, ?F (Y, Z)) describes an associative binary function ?F . In the original version of the
tool, the human user decided which templates to use. In [4], we extracted a dataset of lemma
templates from Isabelle’s Archive of Formal Proofs[1] (AFP). We have now updated RoughSpec
to parse templates from files in the format used in [4], so that it now can be run automatically
without user intervention when given a file containing function definitions and a file containing
templates as input. It can also now be run on input functions defined in Isabelle or SMT-LIB
format, not only Haskell functions as previously.
We hypothesize that template-based conjecturing may be suitable as a component of a
neuro-symbolic system, where the neural part suggests suitable templates and the symbolic
part fills in the templates to produce conjectures, discarding any conjecture which is trivial,
trivially false, or already known.
[1https://www.isa-afp.org/index.html](https://www.isa-afp.org/index.html)
-----
On Lemma Conjecturing Einarsd´ottir et al.
## 2 Ongoing Experiments
We are interested in comparing the results achievable using purely neural conjecturing, purely
symbolic conjecturing, and various neuro-symbolic combination approaches.
**2.1** **Neural conjecturing**
We are experimenting with purely neural lemma generation, letting the LLM predict lemmas
directly given function definitions, similar to [10].
As a first step, using the open-source 7B-parameter Llemma model [1], (a variant of LLama2
fine-tuned on e.g. proof assistant data) we prompted the model with an example of QuickSpec
output (conjectured equational properties) for a set of function definitions and asked it to
generate such output for a different set of function definitions. Our preliminary results indicated
that this is not sufficient to generate useful conjectures. Although the output looks syntactically
correct and many of the conjectures seem to hold, we notice a great deal of repetition and
redundancy. This is not surprising, but rather served as a base-line seeing what results are
achievable using available open-source models “out of the box.”
We’re aware that we can most likely achieve much better results if we fine-tune the models
for lemma conjecturing. In order to do this, we have collected fine-tuning data consisting of
function definitions and lemmas about them from Isabelle’s AFP using the Portal-to-Isabelle
API [8]. To create our training example input-target pairs, we have the target be the lemma
statement and the input be a concatentation of the definitions and constants appearing in
that lemma statement. We have fine-tuned the Facebook OPT 1.3B-parameter pre-trained
model [15] on this data, and sampled from the model to predict relevant lemmas.
**2.2** **Neuro-symbolic conjecturing**
Given function definitions, we want to ask the model to predict lemma templates that are
useful for this context. We can then use symbolic methods (RoughSpec) to fill in the templates,
ensuring we do not get repetitions and false conjectures.
We can also extend this approach to iterate in several rounds (i.e. several calls to the
LLM), interleaved with counter-example checking. Our prior work Baldur [7] showed that
LLMs, when generating a proof of a given theorem, benefit from the Isabelle file context, which
includes related theorems and their proofs. Thus, after each round, the contextual information
gathered, such as theorems about functions of interest, could help the LLM generate templates
in subsequent rounds.
**2.3** **Evaluation**
To evaluate the results of our experiments, we first consider the generated conjectures and find
how many of them are 1) syntactically correct 2) true (no counter-example found by checker).
We plan to evaluate whether the lemma is provable by some chosen methods (such as
Sledgehammer). We will compare the generated conjectures to the output of purely symbolic
conjecturing with QuickSpec and consider the benefits and drawbacks of each respective method
such as how much time and computing resources they need to run.
For further evaluation of the quality of generated conjectures, we can consider coverage, i.e.
how many of the lemmas in a library can we generate (although here one must consider training
data leakage), and evaluate the usefulness of the conjectured lemmas in automated proofs.
-----
On Lemma Conjecturing Einarsd´ottir et al.
## References
[1] Z. Azerbayev, H. Schoelkopf, K. Paster, M. Dos Santos, S. McAleer, A. Q. Jiang, J. Deng, S. Biderman, and S. Welleck. Llemma: An open language model for mathematics. arXiv preprint
_arXiv:2310.06786, 2023._
[2] S. H. Einarsd´ottir, M. Hajdu, M. Johansson, N. Smallbone, and M. Suda. Lemma discovery and
strategies for automated induction. In C. Benzm¨uller, M. J. Heule, and R. A. Schmidt, editors,
_Automated Reasoning, pages 214–232, Cham, 2024. Springer Nature Switzerland._
[3] S. H. Einarsd´ottir, M. Johansson, and J. [˚]A. Pohjola. Into the infinite - theory exploration for
coinduction. In Proceedings of AISC 2018, pages 70–86, 01 2018.
[4] S. H. Einarsd´ottir, M. Johansson, and N. Smallbone. Lol: A library of lemma templates for datadriven conjecturing. In Work-in-progress papers presented at the 15th Conference on Intelligent
_Computer Mathematics (CICM 2022) Informal Proceedings, page 22, 2022._
[5] S. H. Einarsd´ottir, M. Johansson, and N. Smallbone. Towards neuro-symbolic conjecturing, 2022.
Extended abstract accepted for presentation at the 7th Conference on Artificial Intelligence and
Theorem Proving, AITP 2022.
[6] S. H. Einarsd´ottir, N. Smallbone, and M. Johansson. Template-based theory exploration: Discovering properties of functional programs by testing. In Proceedings of the 32nd Symposium on
_Implementation and Application of Functional Languages, IFL ’20, page 67–78, New York, NY,_
USA, 2021. Association for Computing Machinery.
[7] E. First, M. Rabe, T. Ringer, and Y. Brun. Baldur: Whole-Proof Generation and Repair with
Large Language Models. In Proceedings of the 31st ACM Joint European Software Engineering
_Conference and Symposium on the Foundations of Software Engineering, ESEC/FSE 2023, page_
1229–1241, New York, NY, USA, Nov. 2023. Association for Computing Machinery.
[8] A. Q. Jiang, W. Li, J. M. Han, and Y. Wu. Lisa: Language models of isabelle proofs. _6th_
_Conference on Artificial Intelligence and Theorem Proving, 2021._
[9] M. Johansson, D. Ros´en, N. Smallbone, and K. Claessen. Hipster: Integrating theory exploration
in a proof assistant. In Proceedings of CICM, pages 108–122. Springer, 2014.
[10] M. Johansson and N. Smallbone. Exploring mathematical conjecturing with large language models.
In 17th International Workshop on Neural-Symbolic Learning and Reasoning, NeSy 2023, 2023.
[11] G. Poesia, D. Broman, N. Haber, and N. D. Goodman. Learning formal mathematics from intrinsic
motivation. arXiv preprint arXiv:2407.00695, 2024.
[12] M. N. Rabe, D. Lee, K. Bansal, and C. Szegedy. Mathematical reasoning via self-supervised
skip-tree training. In Proceedings of ICLR, 2021.
[13] N. Smallbone, M. Johansson, K. Claessen, and M. Algehed. Quick specifications for the busy
programmer. Journal of Functional Programming, 27, 2017.
[14] J. Urban and J. Jakub˚uv. First neural conjecturing datasets and experiments. In Proceedings of
_CICM, 2020._
[15] S. Zhang, S. Roller, N. Goyal, M. Artetxe, M. Chen, S. Chen, C. Dewan, M. Diab, X. Li, X. V.
Lin, T. Mihaylov, M. Ott, S. Shleifer, K. Shuster, D. Simig, P. S. Koura, A. Sridhar, T. Wang,
and L. Zettlemoyer. Opt: Open pre-trained transformer language models, 2022.
-----
| [
"Emily, First",
"Moa, Johansson",
"Solrun Halla, Einarsdottir",
"Yousef, Alhessi"
] | 2024-09-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
On the Inductive Bias of Stacking Towards Improving Reasoning | Given the increasing scale of model sizes, novel training strategies like gradual stacking have garnered interest. Stacking enables efficient training by gradually growing the depth of a model in stages and using layers from a smaller model in an earlier stage to initialize the next stage. Although efficient for training, the model biases induced by such growing approaches is largely unexplored. In this work, we examine this fundamental aspect of gradual stacking, going beyond its efficiency benefits. We propose a variant of gradual stacking called MIDAS and discover an intriguing phenomenon for this approach: MIDAS is not only training efficient, but surprisingly also has an inductive bias towards improving downstream tasks, especially tasks that require reasoning abilities, despite having similar or slightly worse perplexity compared to baseline training. To further analyze this inductive bias, we construct {\em reasoning primitives} – simple synthetic tasks that are building blocks for reasoning – and find that a model pretrained with stacking is significantly better than standard pretraining on these primitives, with and without fine-tuning. This provides stronger and more robust evidence for this inductive bias towards reasoning. Furthermore, we conjecture the underlying reason for this inductive bias by exploring the connection of stacking to looped models and provide strong supporting empirical analysis. | An intriguing phenomenon is discovered: MIDAS is not only training-efficient but surprisingly also has an inductive bias towards improving downstream tasks, especially tasks that require reasoning abilities like reading comprehension and math problems, despite having similar or slightly worse perplexity compared to baseline training. | ## On the Inductive Bias of Stacking Towards Improving Reasoning
**Nikunj Saunshi[∗]**
Google Research
```
[email protected]
```
**Sobhan Miryoosefi**
Google Research
```
[email protected]
```
**Stefani Karp**
Google Research
```
[email protected]
```
**Sashank J. Reddi**
Google Research
```
[email protected]
```
**Shankar Krishnan**
Google Research
```
[email protected]
```
**Sanjiv Kumar**
Google Research
```
[email protected]
```
**Abstract**
Given the increasing scale of model sizes, novel training strategies like gradual
stacking [Gong et al., 2019, Reddi et al., 2023] have garnered interest. Stacking
enables efficient training by gradually growing the depth of a model in stages
and using layers from a smaller model in an earlier stage to initialize the next
stage. Although efficient for training, the model biases induced by such growing
approaches are largely unexplored. In this work, we examine this fundamental
aspect of gradual stacking, going beyond its efficiency benefits. We propose a
variant of gradual stacking called MIDAS that can speed up language model training by up to 40%. Furthermore we discover an intriguing phenomenon: MIDAS
is not only training-efficient but surprisingly also has an inductive bias towards
improving downstream tasks, especially tasks that require reasoning abilities like
reading comprehension and math problems, despite having similar or slightly worse
perplexity compared to baseline training. To further analyze this inductive bias, we
construct reasoning primitives – simple synthetic tasks that are building blocks for
reasoning – and find that a model pretrained with stacking is significantly better
than standard pretraining on these primitives, with and without fine-tuning. This
provides stronger and more robust evidence for this inductive bias towards reasoning. These findings of training efficiency and inductive bias towards reasoning are
verified at 1B, 2B and 8B parameter language models. Finally, we conjecture the
underlying reason for this inductive bias by exploring the connection of stacking to
looped models and provide strong supporting empirical analysis.
**1** **Introduction**
With the advent of very large deep learning models, efficient training to reduce the compute and
time requirements is becoming increasingly important. Along with efficient optimization procedures,
there has been a surge in interest to design efficient training strategies. One practical approach is
to use smaller models to initialize larger models. Usually, this results in much faster convergence
compared to vanilla training [Chen et al., 2022, 2016, Gong et al., 2019, Reddi et al., 2023, Wang
et al., 2023, Li et al., 2023, Kim et al., 2023, Yao et al., 2024, Wang et al., 2024]. Stacking and
growing based approaches have particularly gained traction recently. For instance, gradual stacking
[Reddi et al., 2023] is a prominent approach where in each stage the last few layers of the model
_∗Corresponding author_
Preprint. Under review.
-----
Figure 1: (a) Pictorial depiction of gradual stacking and MIDAS. (b) Accuracy improvements (in
%) for model trained with MIDAS over baseline for various task groups, despite having the same
perplexity. For both 1B, 2B and 8B models, we see that improvements are mostly positive, and are
much larger for tasks that require a lot of reasoning.
are stacked onto itself to initialize the model’s next stage, until the desired depth is reached. This
has been shown to significantly speed up BERT pretraining and also has some theoretical justification
for the efficiency aspect. While these methods can speed up training, such changes can also induce
specific biases into the model. However, the effect of stacking-based approaches on generalization
remains a fundamental open question and is largely unexplored.
Modern deep learning models when trained carefully have been shown to exhibit interesting inductive
biases, and their success is partially attributed to them. Such biases can arise either from model
architecture, optimization techniques, or training strategies, and these biases come in various forms
including simplicity bias, flatness of learned function, and sparsity. The implicit bias of optimizers,
in particular, has been subject to extensive research. For instance, the implicit bias of first-order
methods like stochastic gradient descent has been studied extensively in overparametrized settings
[Gunasekar et al., 2018, Liu et al., 2023]. Similarly, the inductive biases of architecture components
like self-attention and convolution have also been studied [Edelman et al., 2022, Wang and Wu,
2023]. More recently, there has also been interest in constructs like looped models [Lan et al., 2020,
Dehghani et al., 2018] that share weights across layers. They have been shown to be powerful enough
to emulate programmable computers [Giannou et al., 2023] and have the inductive bias to simulate
iterative solutions [Yang et al., 2023], thereby yielding models with algorithmic abilities. However,
in this vein, very little is known about the implicit biases of newer training strategies (e.g., greedy
layerwise training or gradual stacking) that are gaining popularity.
In this work, we investigate the inductive bias of stacking-based approaches beyond training
efficiency. We uncover an intriguing phenomenon — pretraining with a variant of stacking is not
_only efficient, but also has a desirable inductive bias towards improving downstream benchmarks._
First, through comprehensive empirical analysis, we discover a novel variant of gradual stacking
called MIDAS (MIDdle grAdual Stacking) which copies the middle block of layers of a small
network to initialize a larger network (see Figure 1). We demonstrate that MIDAS is more efficient
in training compared to standard training and the previous leading stagewise training approach.
However, remarkably, it also yields significantly better performance on many downstream reasoning
_tasks. For instance, we see in Figure 1 that MIDAS has significantly better performance on math_
word problems and reasoning primitives. This performance boost should come as a surprise, since
**MIDAS uses exactly the same data and fewer training FLOPS compared to standard training. In fact,**
the pretraining perplexity of MIDAS on a validation set matches that of standard baseline training.
This strongly suggests that there is some inductive bias for MIDAS at play.
In this paper, we formalize and provide strong evidence for such an "inductive bias" – MIDAS
achieves better downstream evaluations despite performing similarly in terms of pretraining validation
perplexity. Thus, the improved quality of MIDAS is not because of better generalization in the
pretraining objective, but rather due to its ability to extract more skills and abilities from the pretraining
process. This kind of inductive bias phenomenon was first formalized in Saunshi et al. [2022] for
contrastive learning and later in Liu et al. [2023] for language modeling on synthetic data. However,
this is the first evidence of a strong inductive bias for a training procedure in real language model
-----
training. While our real-world benchmarks already provide strong evidence, in order to better
isolate the contributing factors, we construct simple synthetic tasks that are building blocks for
reasoning, called reasoning primitives. We find that a model pretrained with MIDAS has much better
performance on the reasoning primitives than a model obtained through standard pretraining, as is
evident in Figure 1. In light of the above discussion, we state the main contributions of our paper.
- We propose a novel variant of gradual stacking, called MIDAS, that achieves better training
efficiency than gradual stacking.
- Our investigation of the inductive bias in gradual stacking approaches, particularly with
**MIDAS, reveals a surprising benefit: beyond enabling efficient training, it also enhances**
_performance on downstream tasks. This improvement is especially notable in tasks that rely_
on context and reasoning abilities.
- We provide strong evidence of the aforementioned phenomenon on several datasets that
have previously been used to demonstrate reasoning capabilities.
- We construct simple synthetic tasks that are building blocks for reasoning and demonstrate
that MIDAS performs significantly better than baseline training on these tasks. These
datasets may be of independent interest to the LLM reasoning community.
- Finally, we conjecture the reason behind improved reasoning capabilities of MIDAS by
presenting connections between gradual stacking and looped models and provide strong
empirical evidence to support it.
**2** **Problem Setup**
In this section, we first present the problem setup and background material needed for this paper.
Before we discuss the problem setting, we set up the following notation for the rest of the paper.
**Notation. For a deep network f**, we use fi and #(f ) to denote the i[th] layer and the number of layers
of the network, respectively. With slight abuse of notation, we use fi,b (where i, b ∈ _Z_ [+]) to denote
the layers between (i − 1) · b to i · b of a deep network f . In other words, fi,b denotes the i[th] block of
_b layers in a deep network f_ . a1:k is used to denote a sequence of k scalars {a1, . . ., ak}.
Our goal is to learn a function f : which minimizes the loss E(x,y) _ℓ(f_ (x), y), for some
_X →Y_ _∼D_
loss function ℓ : Y ×Y → R[+] _∪{0} and data distribution D on X ×Y. We are interested in functions_
of the form f = fL _fL_ 1 _f1 where_ and L represent function composition and depth of the
network, respectively. We use ◦ _−_ _◦· · · ◦ FL to denote the function class consisting of functions of this form. ◦_
Given samples from the distribution D, we typically use an iterative stochastic optimizer (e.g., SGD)
to learn a function that minimizes the loss. We note that the optimization procedure is inconsequential
to the arguments in the paper. For standard training, each iteration is of the form:
_f_ _[t]_ = f _[t][−][1]_ + A(f _[t][−][1], Bt, ηt),_ (Standard Training)
where Bt is a mini-batch from distribution D and A(f _[t][−][1], Bt, ηt) represents the iterative optimizer_
update at f _[t][−][1]_ on Bt and learning rate ηt. The computation cost and memory requirement for training
typically increases linearly with the depth, making even simple algorithms, like SGD, slow for very
large models. Throughout this paper, we use T to denote the total number of training iterations.
**2.1** _k-stage training_
Since we primarily focus on stagewise training approaches, it is useful to formally define a stagewise
training procedure. In contrast to standard training, k-stage training involves dividing the training
process into k stages, and at each stage, using the the model from the previous stage to initialize the
model in the current stage. For simplicity, we assume L is divisible by k. The following are the key
ingredients:
1. Function class across stages. At stage i, we use function class Fd(i) where d(i) denotes the depth
of the network at that stage. When d(i) ≪ _L, training is more efficient._
2. Training schedules across stages. As training is divided into k stages, we use T1, · · ·, Tk steps
across stages such that _i=1_ _[T][i][ =][ T]_ [.]
[P][k]
-----
(a) ALBert layer similarity (b) GRADSTACK block similarity (c) MIDAS block similarity
Figure 2: (a) For an ALBert model trained with weight sharing across all layers, we measure the
functional similarity between layers by looking at the top 1% activated neurons in each MLP layer
and measure the intersection-over-union (IoU) metric for each pair of layers. Despite all layers
having the same parameters, a natural functional similarity structure emerges around the middle.
(b) For a UL2 model trained with GRADSTACK, we measure the cosine similarity between every pair
of layer blocks for the first feedforward layer weights. (c) The same similarity measured for MIDAS.
The cosine similarities for stacking based models suggests strong connection to looped models, and
**MIDAS has a closer similarity structure to ALBert style looped models than GRADSTACK.**
3. Stage initialization. This is the key component of stagewise training. Given a network f ∈
_d(i_ 1) trained in the (i 1)[th] stage, let _i(f_ ) denote the network initialization for the next
_F_ _−_ _−_ _M_
stage where _i :_ _d(i_ 1) _d(i) is growth operator._
_M_ _F_ _−_ _→F_
Almost all the recent stagewise training procedures are different instantiations of this framework, using
different training schedules and stage initializations. We will revisit some prominent instantiations of
the framework in the next section.
**2.2** **Progressive & Gradual Stacking**
Progressive and gradual stacking are two special instantiations of the aforementioned framework. We
provide a brief description of these approaches since they are important for our discussion.
**Progressive Stacking [Gong et al., 2019]. This is simple instance of k-stage training setup where**
model in the previous stage is stacked onto itself to initialize the model in the next stage. In particular,
**(1) depth d(i) = 2[i][−][1]d(0) grows exponentially, (2) schedule Ti is typically T/k or proportional to**
_d(i), and (3) the growth function Mi(f_ ) = f ◦ _f_ .
**Gradual Stacking [Reddi et al., 2023]. In contrast to progressive stacking, gradual stacking incre-**
mentally increases the model size where only the last L/k layers of model in the previous stage are
stacked to initialize the model in the next stage, as follows.
1. The depth d(i) = _[L]k[·][i]_ [grows linearly with the stage.]
2. Ti is typically either T/k or allocated proportional or exponential to depth.
3. Mi(fd(i−1) ◦· · · ◦ _f1) = fd(i−1) · · · ◦_ _fd(i−1)−(L/k)−1 ◦_ _fd(i−1) · · · f1. This corresponding_
to stacking the last L/k layers onto the network to initialize the next stage model.
In the next section, we study a novel variant of gradual stacking that enables faster training and
exhibits interesting inductive bias, which we examine carefully.
**3** **Algorithm: MIDAS**
We present the MIDAS algorithm in this section. We first discuss the motivation behind this variant
of gradual stacking and then formally define the algorithm.
-----
**3.1** **Motivation**
The motivation for MIDAS touches upon two crucial aspects: (a) the role of different layers in a
deep network and (b) a connection to looped models. Before delving into more technical details, it is
important to illustrate these points. We present the case for MIDAS based on three observations.
**Observation 1: gradual stacking breaks the natural role of layers. Recall that gradual stacking**
initializes a larger model model by duplicating and stacking the last block of b from the smaller model.
Thus in the newly initialized model, the second-last block of b layers will be the same as the last
_b layers of the smaller model (see Figure 1). Intuitively, this is undesirable since the last few layers_
have been shown to play a different role compared to other layers for Transformer models [Belrose
et al., 2023]. We further validate this in Figure 6. Thus, duplicating the last few layers can break
the natural role of layers at the initialization, making it a suboptimal choice. However, it is plausible
that the similarity structure across layers is broken after continued training and the initialization
is inconsequential. The next observation shows that this is not true, and establishes a connection
to looped models – networks with shared parameters between layers.
**Observation 2: gradual stacking leads to models resembling looped models. To check the**
effect of the initialization, we measure the cosine similarity between weights of layers for a model
pretrained with gradual stacking. In Figure 2b, we observe that indeed the layers continue to have
very high cosine similarity at the end of training, thus establishing a connection between stacking and
looped models like ALBert [Lan et al., 2020] and Universal Transformers [Dehghani et al., 2018].
Unsurprisingly, the similarity structure for gradual stacking is lopsided towards the end of the model,
which raises the question: Is this similarity structure natural for looped models?
**Observation 3: looped models exhibit similarity in the middle. In order to study this, we train a**
prototypical looped model, ALBert, where all layers share the same parameters. Surprisingly, despite
parameters being shared, a natural similarity structure emerges between layers: yet again the first
and last layers tend to be functionally dissimilar to other layers, whereas the functional similarity
between layers is the highest in the middle (see Figure 2a).
The above observations provides a strong motivation for stacking in the middle rather than at the end,
thus inspiring our MIDAS algorithm.
**3.2** **MIDAS algorithm**
First we define the following mapping operator that is useful for stage initialization in MIDAS.
_fn,b,_ (1)
_◦· · · ◦_
_M(f, b) = f1,b ◦· · · ◦_ _f⌈n/2⌉,b ◦_ _f⌈n/2⌉,b_
Replication
| {z }
where n = #(f )/b is the number of blocks of b layers in deep network f . Note that operator M(f, b)
expands the size of the network by size b. Based on this operator, MIDAS can again be described
as a simple instantiation of the k-stage training framework, as seen below. For completeness, the
pseudocode for the MIDAS in listed in Algorithm 1.
-----
**Algorithm 1 MIDAS**
**Require: Schedule T1:k, η1:T, optimizer update**
_A (see Section 2), data distribution D._
**Initialize f** [1][,][0] _L/k._
_∈F_
**for s = 1 →** _k do_
**for t = 1 →** _Ts do_
Sample batch Bt from D.
_f_ _[s,t]_ = f _[s,t][−][1]_ + (f _[s,t][−][1],_ _t, ηt)_
_A_ _B_
**end for**
Initializer for next stage:
Figure 3: Histogram of accuracy improvements
for models trained with MIDAS over baseline.
The data points are MIDAS 1B models listed
in Table 1. The figure shows that MIDAS-based
models have much higher improvement in
contextual version of TyDiQA compared to the
non-contextual version.
_f_ _[s][+1][,][0]_ = M(f _[s,T][s]_ _, L/k)_
(see Equation 1)
**end for**
**return f** _[k,T]_
1. The depth d(i) = _[L]k[·][i]_ [grows linearly with the stage, similar to gradual stacking.]
2. Ti is typically either proportional to i (linear proportional) or i[2] (square proportional) or
exp(i) (exponential). We will revisit this during our empirical analysis.
3. We use growth operator M in equation 1 for initializing the next stage, which corresponds
to replicating the middle L/k layers to initialize the next stage model.
**3.3** **Experiments: UL2 Pretraining**
In this section, we evaluate MIDAS for standard language model pretraining. We train a 24L decoderonly model with 1.5B parameters using the UL2 objective [Tay et al., 2022] on a mixture of C4,
Wikipedia, Arxiv and Github. The observations also hold for GPT-style autoregressive language
modeling. To enable fair comparison, we cached the pretraining dataset and so all methods are
trained for the same number 500B tokens in the same order, using the same batch size (refer to
Appendix A.1 for more details on the training setup). We pretrain models with three methods:
(a) standard training (Baseline), (b) gradual stacking (GRADSTACK) and (c) our proposed method
**MIDAS. The goal is to compare them with respect to validation loss and downstream performance on**
several diverse benchmarks. Motivated by the proportional schedules from prior work, we try the
following generalized proportional schedules for gradual stacking and MIDAS.
**Definition 3.1 (PROP-α schedule). For a total training budget of T steps, the schedule PROP-α**
_i[α]_
_spends time Ti in each stage such that Ti_ _i[α]_ _for all stages i_ [k]. Thus Ti = _k_
_∝_ _∈_ Pj=1 _[j][α][ T]_
PROP-1 schedule has been found to work very well for BERT pretraining [Reddi et al., 2023]. Since
UL2 pretraining is a harder task, we also explore less aggressive schedules like PROP-1 and PROP-2
that spend more time on larger models.
**Efficiency and perplexity findings. We summarize the main results in Table 1, for various stacking**
methods and schedules. Firstly, we note that for all schedules, MIDAS has significantly better
validation log perplexity than GRADSTACK at the same speedup level. This suggests that stacking
in the middle is a lot more effective for optimization than stacking at the end of the model. With the
PROP-2 schedule, MIDAS is 24% faster and nearly matches baseline’s log perplexity. Additionally,
we observe that the findings are robust to the choice of block size for stacking.
**Downstream benchmark evaluations. While perplexity can serve as a decent proxy for model**
quality, there is growing evidence that it is not the best measure [Liang et al., 2023]. Downstream
benchmark evaluations serve as a more holistic measure for quality and are out-of-distribution
evaluations of skills. To this effect, we evaluate MIDAS on many standard benchmarks and these
-----
Table 1: Downstream evaluations for UL2 pretrained models with 1B, 2B and 8B parameters.
Comparisons include standard training (Baseline), gradual stacking (GRADSTACK) from [Reddi
et al., 2023] and our proposed method MIDAS. The downstream evaluations are averaged over
tasks within 3 task groups. See Appendix A for precise tasks included in each task group. For
each cateory and model size, we highlight the top model is bolded and the second best model
is underlined. Firstly, MIDAS is much better than GRADSTACK, thus justifying stacking in the
middle. Secondly, MIDAS can match the log perplexity of baseline training while being roughly 24%
faster. Furthermore, even the schedule with 40% speedup has much better downstream evaluations
compared to baseline, even though it has worse log perplexity. The improvements are particular
larger for task groups that require reasoning (open book QA, math word problems).
|d(i)/i Schedule Speedup (block size)|Loss (↓) (validation)|Closed Open Math Word Book QA (↑) Book QA (↑) Problems (↑) (4 tasks) (5 tasks) (6 tasks)|All Tasks Average (↑) (15 tasks)|
|---|---|---|---|
|1B Parameters|Col2|Col3|Col4|
|---|---|---|---|
|Baseline 24 1x|1.996|13.2 33.3 23.5|24.0|
|GRADSTACK 4 PROP-1 1.39x MIDAS 4 PROP-1 1.39x MIDAS 3 PROP-1 1.41x|2.045 2.028 2.032|10.3 31.4 23.5 11.6 34.5 30.3 10.6 36.1 27.0|22.6 26.7 25.6|
|GRADSTACK 4 PROP-2 1.24x MIDAS 4 PROP-2 1.24x MIDAS 3 PROP-2 1.26x|2.024 2.009 2.012|11.0 31.6 17.3 11.7 36.3 29.0 11.9 37.3 29.8|20.4 26.8 27.5|
|MIDAS 4 PROP-3 1.16x|1.999|12.5 34.8 33.3|28.3|
|MIDAS 4 PROP-3 1.16x 2B Parameters|1.999|12.5 34.8 33.3|28.3|
|---|---|---|---|
|Baseline 48 1x|1.926|15.2 39.1 27.1|28.0|
|MIDAS 8 PROP-1 1.39x|1.947|14.0 38.9 32.0|29.5|
|GRADSTACK 8 PROP-2 1.24x MIDAS 8 PROP-2 1.24x|1.945 1.929|14.2 37.0 24.5 15.7 40.2 38.2|25.9 32.9|
|MIDAS 8 PROP-2 1.24x 8B Parameters|1.929|15.7 40.2 38.2|32.9|
|---|---|---|---|
|Baseline 72 1x|1.841|21.1 39.6 34.9|32.8|
|MIDAS 9 PROP-2 1.26x|1.844|21.8 40.0 43.1|36.4|
are group into task categories in Table 1 (refer to Appendix A.2 for more detailed evaluations on
individual tasks). The accuracy for task category is an average over representative tasks from that
group. For instance, for closed book QA task, we consider an average accuracy on TriviaQA, TydiQA
(no context), NaturalQuestions and WebQuestions.
Surprisingly, we find that downstream improvements for MIDAS are significantly larger than the
improvements in perplexity. In particular, MIDAS with PROP-2 schedule has the very similar
perplexity to baseline at 24% speedup, but the average downstream performance for MIDAS (26.8%)
is much better than baseline (24.0%). In fact, even MIDAS with PROP-1 schedule which has worse
log perplexity is much better on downstream evaluations. Similar trends of better downstream evals
holds for the 2B parameter model. The improvements are particularly large for open book QA and
math word problems, both of which are tasks that require reasoning abilities whereas memorizatino
tasks like closed book QA do not improve. We conjecture that these downstream improvements
are due to an inductive bias induced by stacking and we dive deeper into this in the next section.
**4** **Inductive bias of stacking**
Results in Table 1 demonstrate that MIDAS not only yields training speedups, but also improves
downstream evaluations when trained on the same number of tokens as standard training. This
suggests that stacking can extract more skills out of the same data. Here, we take a closer look at
this improvements in downstream evaluations through the lens of an inductive bias of stacking.
**4.1** **Downstream performance vs log perplexity**
A reasonable expectation from pretraining is that improvements in the pretraining objective would
correlate with improvements in model quality and downstream performance. This notion of transfer
has even been theoretically formalized for language modeling in Saunshi et al. [2020], Arora and
Goyal [2023]. Thus, based on this, a natural explanation for the downstream improvements of
stacking would be that it generalizes better on the pretraining objective. However, as we see in
-----
Figure 4: Downstream evalulation vs validation log perplexity isoplots as training proceeds for
baseline and MIDAS 1B models trained on the same data (stacking is 24% faster here). On the y-axis
we track the performance on various task groups – closed book QA, open book QA, math word
problems and our reasoning primitives from Section 5. On the x-axis the log perplexity is presented
in the reverse order, thus downstream performance for both methods improves as log perplexity gets
lower. For closed book QA (memorization) tasks MIDAS has very similar trends to baseline. For
open book QA tasks and math word problems, MIDAS has much better downstream performance
at an equivalent log perplexity. This showcases the inductive bias of MIDAS towards better overall
quality and better reasoning abilities.
Table 1, downstream performance of MIDAS is better despite having similar or worse validation
perplexity – hence this is not simply the case of better generalization to unseen pretraining data. It is
natural to ask: If not perplexity, what explains this downstream phenomenon?
Since pretraining objective is just a proxy objective for model quality, it is plausible that different
training strategies and model architectures can extract different level of skills from it. This is because
there are multiple ways of doing well on the pretraining tasks, and some training strategies can be
biased to pick one solution over another one. This behavior has been formalized as the inductive
bias in pretraining by recent work [Saunshi et al., 2022, Liu et al., 2023] – at the same level of
validation pretraining loss, different optimization algorithms could have vastly different downstream
performance. We hypothesize that a similar phenomenon is at play when it comes to stacking.
**Isoplots. Inspired by this phenomenon of different downstream performance at the same perplexity,**
we visualize the inductive bias of a method by plotting downstream accuracy vs log perplexity isoplots
as training proceeds. We use the UL2 1B models that are pretrained with standard (baseline) training
and with MIDAS using the PROP-2 schedule (refer to Section 3.3 for more details). In Figure 4,
we visualize the downstream vs log perplexity plots for different task groups – closed-book QA,
open-book QA and math word problems. We observe a very interesting trend – MIDAS and baseline
training can have different isoplot behaviors and the divergence is different for different tasks.
**4.2** **Reasoning vs memorization for QA**
For a clearer display of the inductive bias, we measure the improvements due to MIDAS on closed
book vs open book QA tasks. It is reasonable to assume that closed book QA tasks require strong
memorization abilities whereas open book QA tasks requires some reasoning abilities to infer answers
from the context that is provided. On average, we see much larger improvements on open book QA
tasks compared to closed book QA tasks, as already evident in Figure 1 and Table 1.
**MIDAS is significantly better on Open book QA. To make a direct comparison, we consider**
TydiQA-GoldP and TydiQA-NoContext tasks – the datasets are identical and the only difference is
whether or not additional context is provided (the answer for the contextual version is guaranteed to
be inferred from the given context). In Figure 3, we see that the improvements by various MIDAS
based models on the contextual version of TydiQA are much higher than those on the non-contextual
version. This provides a direct evidence of the bias of MIDAS towards improving tasks that require
reasoning. Furthermore, we find that the memorization performance of stacking improves as the
schedule spends more time on larger model.
-----
Table 2: Evaluation on math tasks, including math word problems from Table 1 and a harder task
GSM8k. For GSM8k we report accuracy with 8-shot prompts and with finetuning. We also report
accuracy on all tasks after using an external calculator to fix arithmetic errors; this correspond to
w/ calc. Overall the use of calculator improves the accuracy for all models on all tasks. The benefit
of MIDAS over baseline is even higher with calculator.
**2B Parameters**
|Model|Pretraining Loss (↓)|Math WPs (5-shot) W/o calc. W calc.|GSM8k (8-shot) W/o calc. W calc.|GSM8k (Finetune) W/o calc. W calc.|
|---|---|---|---|---|
**8B Parameters**
|Baseline MIDAS|1.926 1.929|15.4 22.5|27.1 38.3|3.0 3.0|3.6 4.1|5.3 10.4|8.5 14.5|
|---|---|---|---|---|---|---|---|
|Baseline MIDAS|1.841 1.844|27.3 32.9|34.9 43.1|4.5 5.5|6.6 7.4|12.3 15.2|15.8 18.7|
|---|---|---|---|---|---|---|---|
**4.3** **Reasoning in math tasks**
To test reasoning abilities, we evaluate the language models on various math word problem datasets
like SVAMP [Patel et al., 2021], ASDiv [Miao et al., 2020], AQuA dataset for algebraic word
problems, the MAWPS benchmark [Koncel-Kedziorski et al., 2016]. We report 5-shot evaluation for
the pretrained model on these tasks. Following Wei et al. [2022], we use an external calculator to
do the arithmetic and evaluate the models on their ability to compute the correct expression for the
answer. This is because small models have bad arithmetic accuracy. The choice of using calculator or
not does not significantly affect the trends of the results. For stacking, we use MIDAS PROP-2 model
because it achieves nearly the same perplexity as the baseline model (while being 24% faster), thus,
leading to a fair comparison based on the previous notion of inductive bias.
**MIDAS is significantly better on Math/Reasoning tasks. Detailed results can be found in Table 5.**
For most math tasks, we observe that MIDAS based pretrained model is significantly better than the
baseline model, especially for the MAWPs benchmark. This provides further evidence of better math
and reasoning capabilities of MIDAS.
**GSM8K fine-tuning.** We also evaluate the 2B and 8B models on harder math problems from the
GSM8k dataset [Cobbe et al., 2021] through few-shot prompting and fine-tuning. Full results are
presented in Table 2. For MIDAS we use the PROP-2 model that has very similar perplexity as the
baseline model. We find that MIDAS has much higher accuracy after fine-tuning, thus suggesting
that the benefit of the inductive bias continue after fine-tuning and are not just restricted to few-shot
evaluations. In particular, on the test set, the accuracy metric increased from 5.3% (for baseline
model) to 10.4% (for MIDAS) for the 2B model (these numbers were produced by computing the
average score over three runs with different random seeds). Similarly the GSM8k accuracy of the 8B
model improves from 12.3% to 15.2%. This suggests that MIDAS not only improves the performance
on harder math tasks, but also that the gains remain or improve after fine-tuning.
**Effect of calculator.** For LLMs with less than 20B parameters, Wei et al. [2022] found that the
models often solve the problem correctly but make arithmetic errors. This leads to low accuracy on
math word problems. Wei et al. [2022] remedied this by computing all arithmetic expressions using a
Python program as an external calculator. In Table 2 we find that this improves the accuracy for our
models too. Interestingly, we find that the gap between MIDAS and baseline gets even larger with
the use of calculators in almost all comparisons. We believe this is because arithmetic abilities is
closer to memorization for smaller models [Razeghi et al., 2022] and the use of calculator makes the
problem closer to reasoning, since now the model only has to infer the right expression. We believe
this interplay between reasoning and memorization for math problems deserves further investigation.
**4.4** **Connection to looped models**
Given the nature of the growth operator in each stage, we hypothesize that stacking based models are
close to looped models. The layer duplication that happens at every stage ensures that blocks of layers
start from a common initialization. We measure the similarity between different blocks of layers by
measuring cosine similarities between the parameter vectors (see Figure 2). Since looped models
have been conjectured to solve algorithmic problems [Giannou et al., 2023] by finding iterative
-----
Performance comparison on reasoning primitives
100
80
60
40
20
Copying
86.0 MIDAS Baseline
68.5 69.5
62.0
49.7
43.8 45.0
24.3 14.9 28.3 21.6 19.5 19.0 23.9
(5-shot)
Depth 0
(5-shot)
Depth 1
(5-shot)
Depth 1
(FT)
Task
Depth 2
(5-shot)
Depth 2
(FT)
PSM-calc
(5-shot)
Figure 5: Accuracy improvements for model trained with MIDAS over baseline for representative rea_soning primitives, despite having the same perplexity. We see clear improvements for stacking on al-_
most all the primitives, both with 5-shot evaluation and after fine-tuning (FT) for the depth 2 primitive.
solutions [Yang et al., 2023], we conjecture that the better reasoning abilities of MIDAS are due to
this connection to looped model. We believe exploring this further is a very fruitful direction.
**5** **Deep dive into reasoning improvements**
To further investigate the nature of this inductive bias, we construct various simple synthetic tasks
to help tease apart the model’s capabilities. We conjecture that these simple tasks capture core
basic capabilities needed for contextual reasoning, and we therefore call these tasks “contextual
reasoning primitives”. They are: induction copying, variable assignment, and pre-school math
(PSM), discussed further below. Overall, across various few-shot evaluations and fine-tuning, we
see significant performance gaps between MIDAS and baseline training, suggesting that we have
successfully isolated some of the basic capabilities at which MIDAS excels relative to baseline
training. We refer the reader to Appendix B for more results and the exact input format.
**Primitive 1: Induction copying. The “induction copying” primitive presents a sequence of words,**
followed by a subsequence selected randomly from within this original sequence, and asks the model
to output the next word in the sequence. A simplified example is: “pum nyj gdq ocu rzk jbw
```
mlz eny kyx uni rzk jbw mlz eny kyx”, and the expected output is “uni”. This primitive is
```
inspired by the “induction head” mechanism introduced in Olsson et al. [2022], which is posited to be
the basic mechanism for in-context learning more generally. In Figure 5, task “Copying”, we present
results for 3-letter words of random letters, separated by spaces, with a sequence length of 10 and a
subsequence length of 5.
**Primitive 2: Variable assignment. The “variable assignment” primitive tests the model’s ability to**
associate a value with a variable name and apply this ability compositionally, which we test by varying
the “depth” of the task. We conjecture that this ability is a core function in contextual reasoning,
particularly in math. An example of the depth-0 variant is “u=1; t=0; v=13; y=4; f=22; y=”,
and the expected output is 4. An example of the depth-2 variant is “y=7; f=0; z=3; b=9; x=8;
```
q=y; l=f; m=z; h=x; a=b; n=h; j=m; t=a; i=l; g=q; n=”, and the expected output is 8.
```
Refer to Appendix B for more details.
**Primitive 3: Pre-school math (PSM). This tests the model’s ability to solve a very simple “pre-**
school math” problem by correctly associating multiple values and variables simultaneously and
applying this association to a particular task. An example is “z=6; b=5; i=-z+b; i=”, and the
expected answer (with chain-of-thought) is “-6+5=-1”.
**5-shot evaluation results. Figure 5 presents the results for representative tasks, with more results in**
Appendix B. Overall, we see that MIDAS outperforms baseline training across all tasks. In particular,
we see that MIDAS is significantly stronger than baseline at Depth 0, Copying, PSM-calc, and Depth
1, in decreasing order of magnitude of the performance gap. Depth-2 is much harder and is at random
guessing (20%) for both models.
**Fine-tuning results. Due to the difficulty of the variable assignment task at Depths 1 and 2, we**
investigate fine-tuning on these tasks as well. We fine-tune on a mixture of 32 depth-1 examples and
32 depth-2 examples (i.e., only 64 examples total), using full-batch gradient descent. Figure 5 reports
-----
the validation accuracy on Depth 1 and Depth 2 after fine-tuning on this mixture (tasks “Depth 1 (FT)”
and “Depth 2 (FT)”). Overall, we see that fine-tuning with just 64 examples significantly improves
performance, resulting in MIDAS outperforming baseline by a gap of over 20% validation accuracy
at both depths. See Appendix B for further fine-tuning and evaluation details.
**6** **Conclusions and future work**
In this work we propose a novel stacking method that outperforms previous stacking methods and
speeds up language model pretraining by 25-40%. In the process, we uncover a very intriguing
inductive bias of stacking – its ability to improve downstream reasoning tasks. Through extensive
empirical analysis, the paper makes a strong case for the presence and significance of this inductive
bias. We believe this deserves further attention and exploration since understanding this inductive
bias could unlock new approaches to improving model quality, reasoning in particular. The reasoning
primitives start to provide more insights by isolating the reasoning improvements and we hope that the
dataset is useful for future reasoning on improving reasoning. Finally understanding the dichotomy
between memorization and reasoning, and how this affects the performance on various tasks is an
interesting direction to pursue.
**Acknowledgments.** We thank Srinadh Bhojanapalli and Vaishnavh Nagarajan for discussions on
role of layers and memory vs contextual tasks, respectively, in the early stages of the project. We also
thank Satyen Kale for valuable feedback throughout the project.
**References**
Sanjeev Arora and Anirudh Goyal. A theory for emergence of complex skills in language models.
_arXiv preprint arXiv:2307.15936, 2023._
Nora Belrose, Zach Furman, Logan Smith, Danny Halawi, Igor Ostrovsky, Lev McKinney, Stella
Biderman, and Jacob Steinhardt. Eliciting latent predictions from transformers with the tuned lens.
_arXiv preprint arXiv:2303.08112, 2023._
Cheng Chen, Yichun Yin, Lifeng Shang, Xin Jiang, Yujia Qin, Fengyu Wang, Zhi Wang, Xiao
Chen, Zhiyuan Liu, and Qun Liu. bert2BERT: Towards reusable pretrained language models. In
_Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, 2022._
Tianqi Chen, Ian Goodfellow, and Jonathon Shlens. Net2net: Accelerating learning via knowledge
transfer. International Conference on Learning Representations, 2016.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve
math word problems. arXiv preprint arXiv:2110.14168, 2021.
Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. Universal
transformers. In International Conference on Learning Representations, 2018.
Benjamin L. Edelman, Surbhi Goel, Sham Kakade, and Cyril Zhang. Inductive biases and variable
creation in self-attention mechanisms. In International Conference on Machine Learning, 2022.
Angeliki Giannou, Shashank Rajput, Jy-yong Sohn, Kangwook Lee, Jason D Lee, and Dimitris
Papailiopoulos. Looped transformers as programmable computers. In International Conference on
_Machine Learning, 2023._
Linyuan Gong, Di He, Zhuohan Li, Tao Qin, Liwei Wang, and Tieyan Liu. Efficient training of bert
by progressively stacking. In International conference on machine learning, pages 2337–2346.
PMLR, 2019.
Suriya Gunasekar, Jason Lee, Daniel Soudry, and Nathan Srebro. Characterizing implicit bias in
terms of optimization geometry. In Proceedings of the 35th International Conference on Machine
_Learning, Proceedings of Machine Learning Research. PMLR, 10–15 Jul 2018._
-----
Dahyun Kim, Chanjun Park, Sanghoon Kim, Wonsung Lee, Wonho Song, Yunsu Kim, Hyeonwoo
Kim, Yungi Kim, Hyeonju Lee, Jihoo Kim, et al. Solar 10.7 b: Scaling large language models with
simple yet effective depth up-scaling. arXiv preprint arXiv:2312.15166, 2023.
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. Mawps:
A math word problem repository. In Proceedings of the 2016 conference of the north american
_chapter of the association for computational linguistics: human language technologies, 2016._
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut.
Albert: A lite bert for self-supervised learning of language representations. In International
_Conference on Learning Representations, 2020._
Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du,
Bowen Qin, et al. Flm-101b: An open llm and how to train it with $100 k budget. arXiv preprint
_arXiv:2309.03852, 2023._
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian
Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. Holistic evaluation of language
models. Transactions on Machine Learning Research, 2023.
Hong Liu, Sang Michael Xie, Zhiyuan Li, and Tengyu Ma. Same pre-training loss, better downstream:
Implicit bias matters for language models. In International Conference on Machine Learning.
PMLR, 2023.
Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su. A diverse corpus for evaluating and developing
english math word problem solvers. In Proceedings of the 58th Annual Meeting of the Association
_for Computational Linguistics, pages 975–984, 2020._
Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan,
Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Dawn Drain, Deep Ganguli,
Zac Hatfield-Dodds, Danny Hernandez, Scott Johnston, Andy Jones, Jackson Kernion, Liane
Lovitt, Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish,
and Chris Olah. In-context learning and induction heads. arXiv preprint arXiv:2209.11895, 2022.
Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are nlp models really able to solve simple
math word problems? In Proceedings of the 2021 Conference of the North American Chapter of
_the Association for Computational Linguistics: Human Language Technologies. Association for_
Computational Linguistics, 2021.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi
Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text
transformer. Journal of machine learning research, 2020.
Yasaman Razeghi, Robert L Logan IV, Matt Gardner, and Sameer Singh. Impact of pretraining
term frequencies on few-shot numerical reasoning. In Yoav Goldberg, Zornitsa Kozareva, and
Yue Zhang, editors, Findings of the Association for Computational Linguistics: EMNLP 2022.
Association for Computational Linguistics, 2022.
Sashank Reddi, Sobhan Miryoosefi, Stefani Karp, Shankar Krishnan, Satyen Kale, Seungyeon Kim,
and Sanjiv Kumar. Efficient training of language models using few-shot learning. In Proceedings
_of the 40th International Conference on Machine Learning, 2023._
Nikunj Saunshi, Sadhika Malladi, and Sanjeev Arora. A mathematical exploration of why language
models help solve downstream tasks. In International Conference on Learning Representations,
2020.
Nikunj Saunshi, Jordan Ash, Surbhi Goel, Dipendra Misra, Cyril Zhang, Sanjeev Arora, Sham
Kakade, and Akshay Krishnamurthy. Understanding contrastive learning requires incorporating
inductive biases. In Proceedings of the 39th International Conference on Machine Learning, 2022.
Noam Shazeer and Mitchell Stern. Adafactor: Adaptive learning rates with sublinear memory cost.
In International Conference on Machine Learning, pages 4596–4604. PMLR, 2018.
-----
Yi Tay, Mostafa Dehghani, Vinh Q Tran, Xavier Garcia, Jason Wei, Xuezhi Wang, Hyung Won
Chung, Dara Bahri, Tal Schuster, Steven Zheng, et al. Ul2: Unifying language learning paradigms.
In The Eleventh International Conference on Learning Representations, 2022.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation
and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
Peihao Wang, Rameswar Panda, Lucas Torroba Hennigen, Philip Greengard, Leonid Karlinsky,
Rogerio Feris, David Daniel Cox, Zhangyang Wang, and Yoon Kim. Learning to grow pretrained
models for efficient transformer training. arXiv preprint arXiv:2303.00980, 2023.
Yite Wang, Jiahao Su, Hanlin Lu, Cong Xie, Tianyi Liu, Jianbo Yuan, Haibin Lin, Ruoyu Sun, and
Hongxia Yang. LEMON: Lossless model expansion. In The Twelfth International Conference on
_Learning Representations, 2024._
Zihao Wang and Lei Wu. Theoretical analysis of the inductive biases in deep convolutional networks.
In Thirty-seventh Conference on Neural Information Processing Systems, 2023.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny
Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in
_neural information processing systems, 2022._
Liu Yang, Kangwook Lee, Robert Nowak, and Dimitris Papailiopoulos. Looped transformers are
better at learning learning algorithms. arXiv preprint arXiv:2311.12424, 2023.
Yiqun Yao, Zheng Zhang, Jing Li, and Yequan Wang. Masked structural growth for 2x faster language
model pre-training. In The Twelfth International Conference on Learning Representations, 2024.
-----
**A** **Experimental Details**
**A.1** **Pretraining details**
**Model architecture.** We use a decoder-only model and train it using the UL2 objective [Tay et al.,
2022] with 60% causal LM, 20% prefix LM and 20% span corruption. The 1B model uses 24 layers,
model dimension of 2048, hidden dimension of 5120 and 32 attention heads. The 2B model is very
similar to the 1B model, except it uses 48 layers instead of 24. The 8B model uses 72 layers, model
dimension of 2048, hidden dimension of 16384 and 16 attention heads.
**Dataset.** We use a mixture of C4 (57%) [Raffel et al., 2020], Wikipedia (17%), Github (17%),
Arxiv (9%); the proportions are motivated by the dataset used for Llama pretraining [Touvron et al.,
2023]. All models are trained for 512B tokens that are precached so that all model see exactly the
same data in the same order. This corresponds to 0.86 epochs of C4, 9 epochs of Wikipedia, 0.58
epochs of Arxiv and 0.44 epochs of Github.
**Training details.** For the 1B and 2B models, we use a cosine learning schedule with a peak learning
rate of 0.01 that decays to 0.001 in the end, and use a batch size of 512. For the 8B model we use a
peak learning rate of 0.001 and decay it to 0.0001, and use a batch size of 1024. Peak learning rate
was tuned to be optimal for baseline training. All experiments use the AdaFactor optimizer [Shazeer
and Stern, 2018] and sequence length of 1280.
**A.2** **Additional downstream evaluations**
In this section we share further experimental details related to the results summarized in the Table 1.
Trivia QA TyDi QA Natural Questions Web Questions
(No Context)
Method
Baseline (1B) 28.050 11.968 4.543 8.120
GRADSTACK 4 PROP-1 (1B) 22.395 10.106 3.019 5.807
**MIDAS 4 PROP-1 (1B)** 24.984 11.702 3.712 5.856
**MIDAS 3 PROP-1 (1B)** 22.883 9.574 3.546 6.496
GRADSTACK 4 PROP-2 (1B) 22.870 11.436 3.989 5.856
**MIDAS 4 PROP-2 (1B)** 26.411 10.372 3.712 6.447
**MIDAS 3 PROP-2 (1B)** 25.460 10.904 3.767 7.431
**MIDAS 4 PROP-3 (1B)** 26.911 11.968 4.460 6.841
Baseline (2B) 33.579 12.766 5.928 8.711
**MIDAS 8 PROP-1 (2B)** 31.090 11.702 5.568 7.776
**MIDAS 8 PROP-2 (2B)** 34.580 13.032 6.260 8.907
Table 3: Closed Book QA
-----
Figure 6: Measure of linearity for different layers in pretrained BERT-Base and BERT-Large models.
For each layer i, we fit a linear map Ai between inputs Yi and the output of the Transformer block
(without the residual connection), Yi+1 _Yi. We then measure the r2 score and cosine similarity_
for the learned linear fit. The first and last few layers demonstrate a much higher level of linearity −
compared to the rest of the layers.
TyDi QA SquadV2 DROP QuAC CoQA
(With Context)
Method
Baseline (1B) 31.364 41.102 22.850 18.782 52.615
GRADSTACK 4 PROP-1 (1B) 34.318 36.907 21.529 17.465 46.763
**MIDAS 4 PROP-1 (1B)** 36.136 39.148 24.287 18.727 54.350
**MIDAS 3 PROP-1 (1B)** 37.045 44.892 25.010 18.354 55.085
GRADSTACK 4 PROP-2 (1B) 30.000 40.958 22.106 17.200 47.842
**MIDAS 4 PROP-2 (1B)** 35.455 46.576 24.444 19.654 55.372
**MIDAS 2 PROP-2 (1B)** 38.182 46.256 24.780 19.944 57.269
**MIDAS 4 PROP-3 (1B)** 33.636 40.226 24.717 19.488 55.853
Baseline (2B) 42.500 49.558 25.063 20.588 57.806
**MIDAS 8 PROP-1 (2B)** 37.727 48.892 26.133 20.068 61.822
**MIDAS 8 PROP-2 (2B)** 41.818 47.974 27.884 20.737 62.637
Table 4: Open Book QA
ASDiv MAWPS MAWPS MAWPS MAWPS SVAMP
Add/Sub Multi-Arith Single-Eq Single-Op
Method
Baseline (1B) 21.708 38.987 1.667 30.512 34.164 13.900
GRADSTACK 4 PROP-1 (1B) 19.084 38.734 2.000 31.102 35.231 15.100
**MIDAS 4 PROP-1 (1B)** 27.719 45.063 2.833 40.157 49.110 16.900
**MIDAS 3 PROP-1 (1B)** 25.763 45.063 2.500 33.071 40.747 14.800
GRADSTACK 4 PROP-2 (1B) 15.219 29.114 1.000 24.606 26.335 7.600
**MIDAS 4 PROP-2 (1B)** 26.288 51.899 3.333 39.370 40.036 13.000
**MIDAS 3 PROP-2 (1B)** 28.578 38.987 3.000 41.142 50.356 16.800
**MIDAS 4 PROP-3 (1B)** 28.912 55.696 1.500 41.142 50.890 21.800
Baseline (2B) 27.863 41.519 3.167 37.402 36.477 16.400
**MIDAS 8 PROP-1 (2B)** 28.960 56.203 1.000 41.929 45.907 18.100
**MIDAS 8 PROP-2 (2B)** 34.685 58.228 7.333 50.000 57.473 21.800
Table 5: Math World Problems
**B** **Details for contextual reasoning primitives**
In this section, we provide further details corresponding to Section 5.
-----
All evaluations in Section 5 were performed on the 1B-parameter models. For MIDAS, we use the
variant with block size 4 and the PROP-2 schedule.
**B.1** **Exact input format**
Expanding on Section 5, here we provide the format of the inputs and target outputs. The only caveat
is that, for simplicity of presentation, we present the inputs in 0-shot form here vs. their 5-shot form.
In 5-shot form, which is how we conduct the 5-shot evaluations, each example is separated by two
consecutive newline characters.
For each dataset below, the inputs are separated from the targets by the “|” character (this is not a
token in the input), and the targets are colored in red.
Figure 5 uses the following evaluation datasets, in the following order:
1. Copying (random-letter words)
2. Variable assignment depth 0 (code)
3. Variable assignment depth 1 (code)
4. Variable assignment depth 1 (code)
5. Variable assignment depth 2 (code)
6. Variable assignment depth 2 (code)
7. Pre-school math (PSM)
**Copying (random-letter words):**
```
Fill in blank:
pum nyj gdq ocu rzk jbw mlz eny kyx uni rzk jbw mlz eny kyx ___. ->|uni
```
**Copying (real words):**
```
Fill in blank:
eat fit ban sea vet zit pea cat van tea sea vet zit pea cat ___. ->|van
```
**Variable assignment depth 0 (basic):**
```
Fill in blank:
o=14
s=4
u=8
m=10
q=12
```
-----
```
m=___. ->|10
```
**Variable assignment depth 1 (basic):**
```
Fill in blank:
g=21
b=24
v=3
s=23
h=20
k=b
a=s
n=v
f=g
d=h
a=___. ->|23
```
**Variable assignment depth 2 (basic):**
```
Fill in blank:
w=24
l=12
d=16
e=5
j=9
g=j
y=e
r=l
k=d
h=w
v=g
i=r
c=h
t=k
p=y
c=___. ->|24
```
**Variable assignment depth 0 (math):**
```
The following is a set of simple mathematical equations.
n=22
r=16
w=13
v=6
k=10
What is the numerical value of n?
```
-----
```
Answer:|22
```
**Variable assignment depth 1 (math):**
```
The following is a set of simple mathematical equations.
h=20
w=9
c=22
j=11
v=5
g=c
k=w
a=j
s=h
o=v
What is the numerical value of s?
Answer:|20
```
**Variable assignment depth 2 (math):**
```
The following is a set of simple mathematical equations.
g=9
v=24
k=15
p=6
c=10
t=p
s=g
a=c
y=v
n=k
l=s
w=n
j=t
m=y
i=a
What is the numerical value of j?
Answer:|6
```
**Variable assignment depth 0 (code):**
```
The following is a very short Python program. Use the program to resolve
the value of the variable in the question.
Program:
q=12
k=17
l=1
```
-----
```
y=3
a=6
Question:
What is the value of k?
Answer:
|17
```
**Variable assignment depth 1 (code):**
```
The following is a very short Python program. Use the program to resolve
the value of the variable in the question.
Program:
k=11
f=21
e=10
l=7
c=13
y=f
o=c
r=e
u=k
n=l
Question:
What is the value of o?
Answer:
|13
```
**Variable assignment depth 2 (code):**
```
The following is a very short Python program. Use the program to resolve
the value of the variable in the question.
Program:
t=13
j=14
v=4
s=17
y=21
q=j
l=s
e=y
h=t
x=v
b=x
f=e
n=q
a=h
```
-----
```
i=l
Question:
What is the value of i?
Answer:
|17
```
**Pre-school math (PSM):**
```
Fill in blank:
k=1
j=8
l=-k+j
l=___. ->|-1+8=7
```
**Arithmetic:**
```
-3+2=-1
-6+1=-5
+9-7=2
-6-4=-10
-6-1=-7
+1+9=|10
```
**B.2** **Fine-tuning details**
For fine-tuning, we use the “code” variant of the variable assignment task, depths 1 and 2, in 0-shot
form (i.e., no in-context examples). Due to the randomness of the data generation process and
the rather small size of each dataset (64 examples), we randomly generate 3 different 64-example
fine-tuning datasets (consisting of 32 depth-1 examples and 32 depth-2 examples), fine tune on each,
and report our results as an average across the 3 runs. Table 7 reports the standard deviations as well.
Regarding hyperparameters, we continue to use AdaFactor [Shazeer and Stern, 2018] with the same
hyperparameters as in the pretraining phase, with the exception of learning rate and batch size. We use
a constant learning rate of 0.001, which was chosen to match the final learning rate of the pretraining
phase. We use full-batch training with our 64-example datasets. We then evaluate performance
separately on depth 1 and depth 2.
For every step i ∈{200, . . ., 300}, chosen to be significantly after training has converged to 100%
accuracy (we do not observe overfitting in this range as training continues), we evaluate performance
-----
on a 1000-example holdout set. For smoothing purposes, we average over steps 200 through 300 and
report the final averaged performance.
**B.3** **Full 5-shot and fine-tuning results**
**5-shot.** Table 6 includes 5-shot evaluation results for all contextual reasoning primitives. Rows 1, 9,
10, 11, and 14 are the rows which appear in Figure 5.
When performance is better than random guessing, MIDAS consistently outperforms the baseline in
rows 1-11.
For pre-school math (rows 12-14), the value we report in Figure 5 is “with calculator”. This is because
the pre-school math task actually combines two capabilities: reasoning and arithmetic. Arithmetic
can be thought of as a memorization task. We evaluate arithmetic for MIDAS and baseline training,
and we see that arithmetic is quite poor for both models (7.8% and 9.6%, respectively, in Table 6).
However, by evaluating PSM with chain-of-thought and only assessing the accuracy of the reasoning
chain itself, i.e., “-6+5” vs. “-1”, we can successfully disentangle reasoning and memorization in our
evaluation. This is equivalent to having access to a calculator, so we call it “PSM with calculator” or
“PSM-calc” in Figure 5.
|Task|MIDAS (%)|Baseline (%)|Random guessing(%)|
|---|---|---|---|
|Copying (random-letter words)|24.3|14.9|10|
|Copying (real words)|17.8|10.3|10|
|Variable assignment depth 0 (basic)|35.6|32.1|20|
|Variable assignment depth 1 (basic)|20.6|21.9|20|
|Variable assignment depth 2 (basic)|18.9|17.7|20|
|Variable assignment depth 0 (math)|92.8|50.1|20|
|Variable assignment depth 1 (math)|26.5|19.2|20|
|Variable assignment depth 2 (math)|20.4|18.8|20|
|Variable assignment depth 0 (code)|86.0|49.7|20|
|Variable assignment depth 1 (code)|28.3|21.6|20|
|Variable assignment depth 2 (code)|19.5|19|20|
|Pre-school math (PSM), no calculator|7.8|9.6|n/a|
|Arithmetic-only accuracy|9.7|10.3|n/a|
|Pre-school math (PSM), with calculator|69.5|62|n/a|
Table 6: 5-shot results for all variants of the contextual reasoning primitives. This is an expanded set
compared to Figure 5.
**Fine tuning.** Table 7 presents the fine-tuning results from Figure 5 along with corresponding
standard deviations (across the 3 trials).
|Task|MIDAS (%)|Baseline (%)|Random guessing(%)|
|---|---|---|---|
|Variable assignment depth 1 (code)|68.54 7.69 ±|43.75 5.54 ±|20|
|Variable assignment depth 2 (code)|44.97 7.26 ±|23.88 1.56 ±|20|
Table 7: Fine-tuning results corresponding to Figure 5’s 2 fine-tuning tasks. Additionally, this table
reports the standard deviation across the 3 runs with ± std dev.
-----
| [
"Nikunj, Saunshi",
"Stefani, Karp",
"Shankar, Krishnan",
"Sanjiv, Kumar",
"Sobhan, Miryoosefi",
"Sashank J., Reddi"
] | 2024-09-27T00:00:00 | NeurIPS 2024 | true | 0 | 0 | null | http://arxiv.org/abs/2409.19044 | https://arxiv.org/abs/2409.19044 | https://www.semanticscholar.org/paper/bb0184f8d035677d326447f5122b6c340c450c02 |
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data | Mathematical reasoning continues to be a critical challenge in large language model (LLM) development with significant interest. However, most of the cutting-edge progress in mathematical reasoning with LLMs has become \emph{closed-source} due to lack of access to training data. This lack of data access limits researchers from understanding the impact of different choices for synthesizing and utilizing the data. With the goal of creating a high-quality finetuning (SFT) dataset for math reasoning, we conduct careful ablation experiments on data synthesis using the recently released \texttt{Llama3.1} family of models. Our experiments show that: (a) solution format matters, with excessively verbose solutions proving detrimental to SFT performance, (b) data generated by a strong teacher outperforms equally-sized data generated by a weak student model, (c) SFT is robust to low-quality solutions, allowing for imprecise data filtering, and (d) question diversity is crucial for achieving data scaling gains. Based on these insights, we create the OpenMathInstruct-2 dataset, which consists of 14M question-solution pairs ($\approx$ 600K unique questions), making it nearly eight times larger than the previous largest open-source math reasoning dataset. Finetuning the \texttt{Llama-3.1-8B-Base} using OpenMathInstruct-2 outperforms \texttt{Llama3.1-8B-Instruct} on MATH by an absolute 15.9\% (51.9\% $\rightarrow$ 67.8\%). Finally, to accelerate the open-source efforts, we release the code, the finetuned models, and the OpenMathInstruct-2 dataset under a commercially permissive license. | The OpenMathInstruct-2 dataset is created, which consists of 14M question-solution pairs, making it nearly eight times larger than the previous largest open-source math reasoning dataset, and is released under a commercially permissive license. | _2024-10-3_
## OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
**Shubham Toshniwal, Wei Du, Ivan Moshkov, Branislav Kisacanin**
**Alexan Ayrapetyan, Igor Gitman**
**Abstract:**
Mathematical reasoning continues to be a critical challenge in large language model (LLM) development with
significant interest. However, most of the cutting-edge progress in mathematical reasoning with LLMs has
become closed-source due to lack of access to training data. This lack of data access limits researchers from
understanding the impact of different choices for synthesizing and utilizing the data. With the goal of creating
a high-quality finetuning (SFT) dataset for math reasoning, we conduct careful ablation experiments on data
synthesis using the recently released Llama3.1 family of models. Our experiments show that: (a) solution
format matters, with excessively verbose solutions proving detrimental to SFT performance, (b) data generated
by a strong teacher outperforms on-policy data generated by a weak student model, (c) SFT is robust to
low-quality solutions, allowing for imprecise data filtering, and (d) question diversity is crucial for achieving
data scaling gains. Based on these insights, we create the OpenMathInstruct-2 dataset which consists of 14M
question-solution pairs ( 600K unique questions), making it nearly eight times larger than the previous
_≈_
largest open-source math reasoning dataset. Finetuning the Llama-3.1-8B-Base using OpenMathInstruct-2
outperforms Llama3.1-8B-Instruct on MATH by an absolute 15.9% (51.9% → 67.8%). Finally, to accelerate
the open-source efforts, we release the code, the finetuned models, and the OpenMathInstruct-2 dataset under
a commercially permissive license.
### 1. Introduction
Synthetic data has emerged as a key technique
for building large language models due to its costeffectiveness and scalability [21, 24, 11]. In particular,
synthetic data is well suited for mathematical reasoning where the performance improvements with
synthetic data scaling are yet to saturate [41, 7, 36].
However, access to this progress is limited because
the current largest math datasets remain closed_source [41, 36]. The closed nature of these datasets_
introduces two major issues. First, concerns over data
leakage erode trust in reported benchmark results [2].
E.g., Zhang et al. [43] show a drop of more than 10%
for popular LLMs on an unpublished test set which
is distributionally similar to the popular grade school
math benchmark GSM8K [9]. Second, it prevents
practitioners from fully understanding the impact of
data composition and algorithmic choices [4, 28].
Among open-source alternatives, the recent NuminaMath dataset [19] has the largest collection
of questions collected from diverse sources. However, its restrictive license—likely due to the use of
GPT-4o in data processing and synthesis—limits its
broader use. Similarly, other popular math instruction tuning datasets, such as MetaMathQA [38] and
MathInstruct [39], have also utilized GPT models
for data synthesis, which prohibits their usage in
non-commercial settings. A notable exception is the
OpenMathInstruct-1 [30] dataset, one of the biggest
open-source math reasoning datasets, where solutions
are synthesized using open-weight models. However,
OpenMathInstruct-1 has two key limitations. Firstly,
its question diversity is limited, since all the questions
in the dataset are drawn from the training sets of
MATH [13] and GSM8K [9]. Secondly, at the time of
its release, there was a sizable gap in the math reasoning capabilities of open and closed-source models.
As a result, the dataset underrepresents more challenging problems compared to its GPT-based counterparts [12].
The recent emergence of frontier open-weight models [21, 11] has made it possible to create high-quality,
commercially permissible math reasoning datasets. In
this paper, we use the recently released Llama3.1 family of models to generate synthetic math instruction
tuning (SFT) data, and evaluate the quality of the
math reasoning data by finetuning the smaller 8B and
70B base models.[1] To create OpenMathInstruct-2,
we conduct careful ablation studies using the MATH
dataset to determine design choices that impact the
final SFT performance. The highlights of our findings
include:
1Data and models are available at
```
https://huggingface.co/collections/nvidia/
openmath-2-66fb142317d86400783d2c7b
```
[Code is available at https://github.com/Kipok/NeMo-Skills](https://github.com/Kipok/NeMo-Skills)
© 2024 NVIDIA All rights reserved
-----
|Col1|Col2|OpenMat|h2-Llama3.|1-8B|
|---|---|---|---|---|
||||||
||L|lama3.1-8B|-Instruct|+15.9|
||||||
||||||
|Llama3.1-|8B-Base||||
||||||
14.3
|Col1|Col2|
|---|---|
|||
|Col1|Col2|
|---|---|
|||
|Col1|Col2|
|---|---|
|||
|Col1|Col2|
|---|---|
|||
|Col1|Col2|
|---|---|
|||
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
100
70 OpenMath2-Llama3.1-8B `OpenMath2-Llama3.1-8B`
6.2
```
Llama3.1-8B-Instruct
```
60 **+15.9** 80 15.7
16.0
Llama3.1-8B-Instruct
50 60 18.5
40 40
Accuracy (in %) 17.1
MATH Test Accuracy (in %) 30
20
Llama3.1-8B-Base
20
Level 1 Level 2 Level 3 Level 4 Level 5
Size of SFT Dataset (in million)
Figure 1: Performance of Llama3.1-8B -Base on
MATH after finetuning on increasing proportions of
OpenMathInstruct-2.
MATH Levels
Figure 2: Comparison of OpenMath2-Llama3.1-8B
and Llama3.1-8B -Instruct on accuracy across
MATH difficulty levels.
- Chain-of-Thought (CoT) Solution Format: Excessive verbosity can be detrimental to the SFT
performance. Our proposed CoT format outperforms Llama’s CoT format by 3.9% while being
40% shorter in solution length. Using base model
_template (Figure 8 in Appendix) significantly in-_
creases the ability of instruct models to follow
few-shot examples of our proposed format.
- Choice of Data Generation Model: Controlling
for the size of the SFT data, the performance
on data generated by a strong teacher model
surpasses that of on-policy data produced by a
weaker student model by 7.8%.
- Robustness of SFT : With both removing lowquality solutions and introducing them by design,
we find SFT performance to be robust to the
presence of up-to 20% low-quality data.
- Impact of Question Diversity: Controlling for
SFT data size, we find that question diversity has
a huge positive impact on SFT performance. Increasing the number of unique questions from 1K
to 6.5K leads to 10.5% improvement on MATH
validation set.
Based on the above findings, we create
OpenMathInstruct-2 with data synthesized using Llama-3.1-405B-Instruct. To construct this
dataset we prompt an LLM to (a) synthesize solutions
to the original MATH and GSM8K training set
questions and (b) create new question-solution pairs
similar to the training set questions. To ensure there
is no test set contamination among the synthesized
questions, we perform thorough decontamination
using the lm-sys pipeline [37], followed by manual
inspection (Section 3.1). Figure 3 provides an
overview of the entire dataset construction pipeline.
The final dataset consists of 14M question-solution
pairs with 600K unique questions, including 592K
synthesized questions. Thus, OpenMathInstruct-2
is about 8 times bigger than the previous biggest
standalone open-source dataset [30].
The high-quality of OpenMathInstruct-2 is illustrated by the strong performance of the finetuned models. The `OpenMath2-Llama3.1-8B`
`model,` which is the `Llama3.1-8B-Base` model
finetuned with OpenMathInstruct-2, outperforms
`Llama3.1-8B-Instruct` by an absolute 15.9%
on MATH with just SFT (see Figure 1 and
2). With a performance of 67.8% on MATH,
```
OpenMath2-Llama3.1-8B is one of the strongest sub
```
10B open-source models.[2] Our best-performing
model, `OpenMath2-Llama3.1-70B, has an accu-`
racy of 71.9% on MATH which outperforms
```
Llama3.1-70B-Instruct by 3.9%. To support the
```
open-source efforts, we will release all our fine-tuned
models, code, and the OpenMathInstruct-2 dataset.
### 2. Data: Solution Augmentation
In this section, we focus on the Solution Augmen_tation part of the OpenMathInstruct-2 construction_
pipeline, shown in Figure 3. We first give a brief
overview of how solutions are synthesized for existing
questions, and then present ablation studies designed
to understand the impact of the different dataset
design choices.
2We refer to open-weight base models instruction tuned
with publicly released data as open-source.
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
Solution
Augmentation
2.5M
Question-Solution
MATH Augmentation 9.9M 8.9M
Decontamination **OpenMathInstruct-2**
with Test Sets (14M)
Question-Solution 2.1M
GSM8K Augmentation 2.1M
0.5M
Solution
Augmentation
Figure 3: Overview of the data generation pipeline used for OpenMathInstruct-2.
**2.1. Solution Augmentation Preliminaries**
Let = (𝑞𝑖, 𝑎𝑖) _𝑖=1_ [represent a typical mathemati-]
_𝒳_ _{_ _}[𝑁]_
cal reasoning dataset, where 𝑞𝑖 and 𝑎𝑖 denote the 𝑖[th]
question and answer respectively. To synthesize solutions for this dataset, a teacher LLM is prompted
_ℳ_
as follows:
_ℐ_ (𝑞1, 𝑠1), . . ., (𝑞𝐾, 𝑠𝐾), 𝑞[′]
where represents the instruction to answer the given
_ℐ_
math question, {𝑞1, . . ., 𝑞𝐾} represent 𝐾 questions
representative of the dataset, {𝑠1, . . ., 𝑠𝐾} represent
their respective solutions, and 𝑞[′] represents a question
from the training set. Given this prompt, multiple
candidate solutions are sampled using . The high_ℳ_
quality solutions, usually those that lead to the correct
answer, along with the prompt question 𝑞[′], are added
to the SFT dataset.
**2.2. Ablation Studies**
In the previous section, we gave an abstract overview
of the solution augmentation pipeline. In practice,
several design decisions impact the final SFT dataset,
such as the solution format of the few-shot examples
_{𝑠1, . . ., 𝑠𝐾}, the choice of the teacher model ℳ, and_
the solution filtering mechanism. In this section, we
study the impact of these different design choices on
the SFT performance to guide the dataset construction.
For these ablation experiments, we use the 1K validation split created from MATH [13] training set by
Toshniwal et al. [30]. The remaining 6.5K MATH
training set problems are used to create the SFT
dataset. The solutions are generated using nucleus
sampling [14] with a temperature of 1.0 and top-𝑝 of
0.95. The Llama3.1-8B-Base model is used as the
_student model in all the ablation experiments. For_
SFT, the model is trained for 4 epochs, with a batch
size of 256, using the AdamW optimizer [20] with a
constant learning rate of 5e-6 and a weight decay of 1e2. To account for the variance in performance across
runs, we report the performance averaged across 4
runs.
**Data Downsampling** For efficiency or experiment
design reasons, we sometimes need to downsize an
SFT dataset to a specific size or to match another
SFT dataset in ablation experiments. We introduce
the concept of coverage and the two downsampling
operations used in the paper.
_Coverage of a SFT dataset_ = (𝑞𝑖, 𝑠𝑖) _𝑖=1_ [synthe-]
_𝒟_ _{_ _}[𝑇]_
sized using dataset = (𝑞𝑖, 𝑎𝑖) _𝑖=1_ [is the fraction]
_𝒳_ _{_ _}[𝑁]_
of questions in with at least one solution in :
_𝒳_ _𝒟_
Coverage( _,_ ) =
_𝒟_ _𝒳_ _[|{]𝑞[𝑞]_ : ([: (]𝑞, 𝑎[𝑞, 𝑠])[)][ ∈𝒟}|]
_|{_ _∈𝒳}|_
_Fair Downsampling is a question-dependent down-_
sampling method introduced by Toshniwal et al. [30].
Due to the varying difficulty of questions, the representation of “easier” ones can often dominate an
SFT dataset, as generating high-quality solutions for
them is “easier”. The goal of fair downsampling is
to sample question-solution pairs from the original
SFT dataset in a way that ensures all questions are
as equally represented in the downsampled dataset as
possible.
_Matching Coverage: The different design choices_
explored in the ablation studies result in SFT datasets
of varying sizes. However, to compare the quality of
the datasets, we want to control for the dataset size.
To this end, we introduce the Matching Coverage operation, where SFT datasets are matched at the level
of questions. Put simply, after matching coverage, the
number of unique questions as well as the number of
solutions for each individual question in two dataset
is the same.
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
**Question**
```
Ben throws four identical darts. Each hits one of four
identical dartboards on the wall. After throwing the four
darts, he lists the number of darts that hit each board,
from greatest to least. How many different lists are
possible?
```
**Llama CoT** **OpenMath CoT**
```
## Step 1: Understand the constraints of the problem The number of darts that hit each board
There are four identical darts and four identical must be a non-negative integer, and these
dartboards. The task is to list the number of darts that numbers must add to 4. Since the dartboards
hit each board from greatest to least. are identical, we can list the numbers from
greatest to least.
## Step 2: Enumerate the possibilities
Given the constraints, we can start by listing the The possibilities are:
possibilities systematically, starting with the highest
```
`number of darts on one board and then distributing the` 4 + 0 + 0 + 0 = 4 + 0 + 0 = 4 + 0 = 4
```
remaining darts across the other boards in decreasing
```
3 + 1 + 0 + 0 = 4 + 0 + 0 = 4 + 0 = 4
```
order.
```
2 + 2 + 0 + 0 = 4 + 0 + 0 = 4 + 0 = 4
`## Step 3:` `List all possible distributions` 2 + 1 + 1 + 0 = 3 + 1 + 0 = 4 + 0 = 4
```
1. All four darts hit one board: 4, 0, 0, 0
```
1 + 1 + 1 + 1 = 2 + 1 + 1 = 3 + 1 = 4
```
2. Three darts hit one board, and one dart hits another:
3, 1, 0, 0
```
`3.` `Two darts hit one board, and two darts hit another:` `2,` `Counting these, there are` 5 `possibilities.`
```
2, 0, 0
4. Two darts hit one board, one dart hits another, and one
dart hits yet another: 2, 1, 1, 0
5. One dart hits each board: 1, 1, 1, 1
## Step 4: Count the different possibilities
There are 5 different ways to distribute the darts as
listed above.
```
`The final answer is:` 5
Figure 4: A sample solution in the Llama CoT format vs. the OpenMath CoT format.
Formally, suppose we’re given two SFT datasets
1 and 2. Let 𝑄 ( 1) represent the set of unique
_𝒟_ _𝒟_ _𝒟_
questions in 𝒟1:
_𝑄_ (𝒟1) = {𝑞 _| (𝑞, 𝑠1) ∈𝒟1}_
The set of common questions in 1 and 2 is given
_𝒟_ _𝒟_
by:
_𝑄match = 𝑄_ ( 1) _𝑄_ ( 2)
_𝒟_ _∩_ _𝒟_
Let 𝑁 ( _, 𝑞) represent the number of solutions of_
_𝒟_
question 𝑞 in dataset . In the matching coverage
_𝒟_
version of the datasets:
_𝑁match (𝑞) = min (𝑁_ ( 1, 𝑞), 𝑁 ( 2, 𝑞))
_𝒟_ _𝒟_
for each question 𝑞 _𝑄match, 𝑁match (𝑞) solutions are_
_∈_
sampled from the respective datasets.
This covers the two downsampling methods used in
this paper: Fair Downsampling and Matching Cover_age. Next, we will describe the ablation experiments._
**_2.2.1. Solution Format_**
Finetuning with synthetic chain-of-thought (CoT)
solutions [25, 35, 29] has been the key to strong
performances of small models on math reasoning
tasks [38, 30, 21]. We find the Llama’s CoT format
to be quite verbose,[3] and propose an alternate CoT
format, OpenMath CoT, which is detailed as well but
less verbose. Figure 4 shows a sample solution in the
two CoT formats.
To compare the two CoT formats, we generate SFT
data using the Llama3.1-405B-Instruct model. For
generating solutions in the Llama CoT format we
simply use the zero-shot prompt setup as the model
was trained on those kinds of solutions. However, even
when prompting the model with few-shot OpenMath
CoT solutions, a substantial number of generations –
57% in our experiment – still follow the Llama CoT
format. This tendency of aligned models reverting
to their trained behavior when encountering inputs
seen during training has also been observed in prior
work [22]. We find an interesting workaround to
this issue by dropping the special tokens used by
Llama-Instruct models. Prompting the model with
[3https://huggingface.co/datasets/meta-llama/](https://huggingface.co/datasets/meta-llama/Meta-Llama-3.1-8B-Instruct-evals/viewer/Meta-Llama-3.1-8B-Instruct-evals__math__details)
```
Meta-Llama-3.1-8B-Instruct-evals/viewer/Meta-Llama-3.
1-8B-Instruct-evals__math__details
```
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
Table 1: Comparison of Llama and OpenMath CoT formats on MATH validation accuracy and average
solution length measured in number of tokens.
MATH Validation Accuracy Mean Solution Length
Llama CoT 40.6 0.6 331.3
_±_
OpenMath CoT 44.5 0.8 237.0
_±_
Table 2: Llama3.1-8B-Base vs. Llama3.1-405B-Instruct as data generation models.
MATH Validation Accuracy Mean Solution Length
Llama3.1-8B-Base 30.1 0.6 205.7
_±_
Llama3.1-405B-Instruct 37.9 0.6 180.2
_±_
the “base” template leads to a dramatic increase in
adherence to the OpenMath CoT format and reduces
the Llama CoT format generations to only 0.1%. See
Appendix A.1 for the prompt and more details.
With 64 solutions sampled per question, the zeroshot setup results in about 30% more solutions than
the few-shot prompt setup (350K vs 268K). To control for the confounding factor of SFT data size, we
perform the Matching Coverage operation over the
two datasets which reduces the final SFT dataset to
260K question-solution pairs. Table 1 shows that the
OpenMath CoT format is 40% less verbose than the
Llama CoT format and also results in a better SFT
performance. All experiments presented henceforth
use the OpenMath CoT format.
**_2.2.2. Choice of Teacher Model_**
Prior work has shown that with repeated sampling,
even weak models can match or outperform much
stronger/bigger models [17, 6]. In fact, for a fixed
compute budget, a weaker model can be a better
choice for a teacher model [5]. But data synthesis is
a one-time expense and a small portion of the overall
compute budget of training LLMs [31]. We instead
ask the following question: Can a student model learn
_better from its own generated solutions vs solutions_
_generated by a strong teacher model when matching_
_the SFT data coverage?_
In this ablation, we compare Llama3.1-8B-Base
and Llama3.1-405B-Instruct as teacher models.
We sample solutions using the two models and perform
the Matching Coverage operation to match the final
SFT datasets precisely. The SFT results presented in
Table 2 show that even when controlling for the SFT
data size, Llama3.1-405B-Instruct is a far superior
data generation model. Our preliminary analysis suggests that the reason is weaker models generate more
_noisy solutions that use incorrect reasoning yet end_
up with the right answer and, ultimately, part of the
SFT dataset (Appendix B). We leave a more detailed
analysis regarding this for future work. Next, we
investigate the impact of these noisy solutions among
solutions generated by Llama3.1-405B-Instruct.
**_2.2.3. Impact of Low-Quality Solutions_**
Data quality plays an important role in the accuracy
of LLMs [15]. We explore the impact of data quality
on the final SFT performance in our setup. First, we
employ automated LLM-based methods to filter out
solutions that, despite reaching the correct answer,
use incorrect reasoning. Second, we investigate the effects of intentionally incorporating incorrect solutions
into the SFT dataset.
**Removing Low-Quality Solutions.** Synthetic solutions produced in our pipeline may include examples
where the intermediate steps are incorrect, yet still
lead to the right final answer. For simplicity, we refer to these instances as “low-quality” data. In this
section, we will discuss how we identify and remove
low-quality data, followed by an investigation into its
impact on the SFT performance.
We employ two methods to identify low-quality
data: LLM-as-a-Judge and reward model. In the
LLM-as-a-Judge approach, we design two prompts for
the Llama3.1-405B-Instruct to determine whether
the generated solutions contain incorrect intermediate
steps, providing a binary outcome (see Appendix D.3
for the prompts). For the reward model labeling
method, we use Nemotron-4-340B-Reward [34] to
evaluate the quality of the generated solutions based
on factors like helpfulness (the overall usefulness of the
response to the prompt) and correctness (the inclusion
of all relevant facts without errors). Helpfulness and
correctness are rated on a scale from 0 to 4, where
a higher score indicates better data quality. For the
reward model filtering, we used a threshold of 3 based
on small-scale tuning experiments.
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
Table 3: SFT performance on the MATH validation set with various filtering strategies to remove solutions
with incorrect reasoning.
Filtering Strategy Data Size MATH Validation Accuracy
Unfiltered 128K 43.6 1.7
_±_
LLM-as-a-Judge: Prompt 1 113K 43.6 0.1
_±_
LLM-as-a-Judge: Prompt 2 116K 43.0 0.8
_±_
Nemotron-4-340B-Reward: Helpfulness 3 118K 43.8 0.4
_≥_ _±_
Nemotron-4-340B-Reward: Correctness 3 120K 43.1 0.4
_≥_ _±_
50
40
50
45
40
35
30
25
20
15
30
20
10
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|
|---|---|---|---|---|---|---|
||||||||
||||||||
||||||||
|||||Original|||
|||||Incorrect 10% Incorrect 20% Incorrect 40%|||
|||||Incorrect 80%|||
||||||||
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|
|---|---|---|---|---|---|---|---|
|||||||||
|||||||||
|||||||||
|||||||||
||||||Original|||
||||||Incorrect 10% Incorrect 20%|||
||||||Incorrect 40% Incorrect 80%|||
|||||||||
|||||||||
Original
Incorrect 10%
Incorrect 20%
Incorrect 40%
Incorrect 80%
64K 128K 256K 512K
Size of SFT Dataset (in K)
64K 128K 256K 512K 1024K
Size of SFT Dataset (in K)
(b) Correct solutions mismatched with questions
(a) Adding wrong-answer solutions.
Figure 5: Impact of low-quality solutions on the SFT performance.
To determine the impact of filtering low-quality
data on the SFT performance, we use a 128K-sized
fair downsampled SFT dataset. We call this Unfiltered
data and use a model trained on it as a baseline.
Table 3 presents the statistics of data remaining
with different filtering approaches, and the corresponding SFT performance. The proportion of data filtered
by the different methods ranges from 6% to 12%, a
non-negligible fraction of the overall data. [4] Yet none
of the filtering strategies give any meaningful gain
over the baseline Unfiltered model. This means that
either SFT is robust to the presence of up to 10% of
low-quality solutions or our filtering is not accurate
enough. We investigate this question next.
**Adding Low-Quality Solutions.** In the previous section, we see that filtering low-quality
solutions generated by a strong model such as
```
Llama3.1-405B-Instruct leads to almost the same
```
or worse SFT performance in comparison to no filtering. While our manual analysis suggests that most of
4Our manual analysis of 20 examples identified by the two
approaches suggests that approximately 60% of the solutions
are indeed incorrect.
the filtered out solutions were indeed using incorrect
reasoning, the automatic filtering approaches are far
from perfect and it’s hard to gauge the impact of filtering out correct solutions which have been classified
as incorrect.
To remove the effect of potentially inaccurate filtering, we can instead study the impact of explicitly
adding low-quality/incorrect solutions on the SFT
performance. We consider two strategies of adding
“bad” solutions:
1. Wrong-answer Solutions: By incorporating
solutions generated by the teacher LLM, which
were excluded during the creation of the SFT
dataset due to not arriving at the ground truth
answer.
2. Incorrect Pairing: By shuffling some of the
question-solution pairs in the SFT dataset, such
that the correct solutions are paired with unrelated questions.
For both these strategies, we experiment with varying the proportion of such incorrect solutions from
{10%, 20%, 40%, 80%}. We also vary the SFT data
size from {64K, 128K, 256K, 512K, 1024K} to study
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
construction pipeline, illustrated in Figure 3. This
process consists of two stages: (i) question augmentation, and (ii) solution augmentation.
For question augmentation, we utilize the training splits of MATH and GSM8K as seed datasets
to generate new questions. We use simple few-shot
prompting showing 5 examples of original questions
and the new questions written by us that are similar
in some aspect. We do not add explicit instructions to
increase difficulty or add new conditions, instead relying on the inherent variance of the nucleus sampling
that we use to generate new problems. After filtering out syntactically ill-formed questions, we check
the generated questions for potential contamination
with test sets of evaluation benchmarks, described in
detail in the next section. To generate solutions for
the new synthesized questions, we use the solution
augmentation pipeline from Section 2.1, generating
32 solutions for each question with a temperature of
0.7. Since the newly synthesized questions don’t have
ground-truth answers to filter solutions, we instead
use majority voting among the 32 generations as a
proxy for the ground-truth answer. For more details
on question-solution augmentation, see Appendix C.
**3.1. LLM Decontamination**
48
46
44
42
40
38
36
MATH Validation Accuracy (in %)
34
1K 2K 3K 4K 6K
# of Unique Questions
Figure 6: Impact of question diversity on MATH
validation accuracy.
the impact on SFT performance at different data
scales[5].
Figure 5 presents the impact of incorrect solutions
on the SFT performance at varying data sizes. From
both the plots we see that the model performance
suffers little to no performance degradation with as
much as 20% incorrect solutions at data scales 256K.
_≥_
Among the two strategies, we see that the model is
especially robust to “Incorrect Pairing” with strong
performance even with 40% incorrect solutions.
Based on these results we conclude that models
are indeed robust to the presence of up-to 20% of
low-quality solutions during SFT and extensive data
filtering at this stage has limited gains.
**_2.2.4. Impact of Question Diversity_**
It has been noted that many widely used benchmarks
and datasets suffer from data contamination, where
information from the test set unintentionally leaks
into the training data [37]. This can result in an
overly optimistic assessment of the model’s performance. The most commonly used methods, such as
_𝑛-gram overlap and embedding similarity search, are_
susceptible to simple variations in test data (e.g., paraphrasing, translation), allowing rephrased samples to
bypass these basic detection techniques easily.
We adopt the approach suggested by Yang et al.
[37] to remove all potential paraphrases of evaluation
benchmark questions from the synthesized questions.
In our setup, we use the test sets of four evaluation
benchmarks, namely GSM8K [9], MATH [13], AMC
2023 [3], and AIME 2024 [1].
The LLM-based decontamination process consists
of two main steps. First, for each synthesized question,
use embedding similarity search to identify the top_𝑘_ most similar test examples from all benchmark
datasets. Second, create question pairs by matching
the synthesized question with each of these top-𝑘
test examples. An advanced LLM then evaluates
whether any of these pairs are paraphrases via zeroshot prompting. To mitigate any positional bias,
we generate two pairs for each match: one in which
the synthesized question appears first and another in
To investigate the impact of question diversity on
SFT performance, we construct finetuning datasets
with 256K question-solution pairs with the number
of unique questions varying from {1K, 2K, 4K, 6.5K}.
Figure 6 shows a clear trend that the SFT performance improves with an increase in the number of
unique questions, with a drop of more than 10 points
when the number of unique questions is limited to
1K. This result highlights the potential of generating
new questions, and we describe the Question-Solution
Augmentation pipeline next.
### 3. Data: Question-Solution Augmen- tation
In this section, we describe the Question-Solution
Augmentation component of the OpenMathInstruct-2
5For the “Wrong-answer Solutions” setting, we were not
able to run the experiments for 1024K data size because the
```
Llama3.1-405B-Instruct model makes few mistakes on the
```
MATH training set.
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
Table 4: Comparison of our OpenMath2-Llama models with other open-weight and open-source models
without tool usage. Open-weight base models finetuned with publicly released data are considered as
open-source for the purposes of this table.
**Category Params Model** **GSM8K MATH AMC 2023 AIME 2024 Omni-MATH[7]**
Open
Weight
Qwen2.5-Math-7B-Instruct [36] 95.2 83.6 25/40 5/30 32.3
Mathstral-7B [23] 77.1 56.6 - - -
Llama3.1-8B-Instruct [21] 84.2 51.8 9/40 2/30 12.7
Llama3.1-8B-Instruct [21] 84.2 51.8 9/40 2/30 12.7
_< 10B_
Open NuminaMath-7B-CoT [19] 75.4 55.2 11/40 0/30 -
Source OpenMath2-Llama3.1-8B (ours) 91.7 67.8 16/40 3/30 22.0
+ maj@256 94.1 76.1 23/40 3/30 24.6
Open DS-Coder-V2-Lite-Instruct [10] 86.4 61.8 - 0/30 19.7
Weight Qwen2.5-Math-72B-Instruct [36] 95.9 85.0 28/40 9/30 36.3
10
Llama3.1-70B-Instruct [21] 95.8 67.9 19/40 6/30 19.0
to
Open 100B NuminaMath-72B-CoT [19] 91.4 68.0 21/40 1/30 28.4
Source OpenMath2-Llama3.1-70B (ours) 94.9 71.9 20/40 4/30 23.1
+ maj@256 96.0 79.6 24/40 6/30 27.6
which the test set question is presented first. If any
of the 2𝑘 pair is determined to be a paraphrase, the
synthesized question is removed.
We use a popular Sentence Transformer model for
embedding,[6] and Llama3.1-405B-Instruct for paraphrase detection (details on the prompt are provided
in Appendix D.4). In our experiment, we use 𝑘 = 5,
which results in 10 LLM inference calls for each generated question. To emphasize the importance of using
an LLM in the decontamination pipeline, we provide
multiple examples of questions flagged as contaminated that cannot be found via 𝑛-gram matching (see
Table 10 in the Appendix). Overall, our decontamination pipeline removes about 50K questions out of the
569K new questions synthesized (569K 519K).
_−→_
### 4. Results
**Training Details.** All the models are trained with
a batch size of 512, using the AdamW optimizer [20]
with a constant learning rate of 2e-5 and a weight
decay of 1e-2. For the 8B model, we train the model
on 1M, 2M, and 5M fair downsampled versions of
OpenMathInstruct-2 to understand the impact of the
data scaling. Due to computational constraints, we
train the 70B model only on the 5M subset with a
learning rate of 1e-5. The models are trained for 2
epochs, and we save 6 equally spaced checkpoints
during the training runs, which are averaged to create
the final model (See Appendix A.4 for performance
gains with checkpoint averaging).
**Evaluation Details.** We evaluate our models on a
set of common benchmarks that consists of GSM8K
(1.3K examples), MATH (5K examples), AMC 2023
6https://huggingface.co/sentence-transformers/multi-qaMiniLM-L6-cos-v1
(40 examples), AIME 2024 (30 examples), and OmniMATH (4.4K examples) [26]. These datasets cover a
broad spectrum of difficulty levels, ranging from grade
school mathematics to advanced competition problems. Unless noted otherwise, all fine-tuned models
are assessed in a zero-shot setting with both greedy
decoding and majority voting out of 256 sampled
solutions with temperature of 0.7 [32].
We use GPT-4o [27] as a judge to compare the
ground truth answers with those predicted by our
models (the detailed prompt is provided in Appendix
D.5).
**Impact of Data Scaling.** Figure 1 plots the
performance on the MATH test set with the increase in SFT data size. With even the 1M fairdownsampled version of OpenMathInstruct-2, the final model easily outperforms Llama3.1-8B-Instruct
and NuminaMath-7B-CoT. We observe a consistent
gain with an increase in data size, and even at 14M
dataset size, we see no signs of saturation in performance gains.
**Final** **Results.** Table 4 presents the results for top-performing, open-weight and
open-source models (without tool use). The
```
OpenMath2-Llama3.1-8B model, which is finetuned
```
on the full OpenMathInstruct-2 dataset, outperforms or matches Llama3.1-8B-Instruct on all
the math reasoning benchmarks. Among the
open-source models, we outperform the recently
released NuminaMath-7B-CoT on all benchmarks
as well. Finally, among all the presented models,
7Omni-MATH dataset was released after we finished training
our models, so we didn’t use it during decontamination. After
checking for contamination, we found that about 1.4% of the
test set questions are part of our training data.
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
the OpenMath2-Llama3.1-8B is second only to the
```
Qwen2.5-Math-7B-Instruct, which has been trained
```
on more than a trillion synthetically generated math
reasoning tokens, and starts with a base model,
```
Qwen2.5-Math, which is about 35% better than
```
`Llama3.1-8B-Base.` [8]
The OpenMath2-Llama3.1-70B is our strongest
performing model which is the Llama3.1-70B-Base
model finetuned on the 5M fair downsampled subset
of OpenMathInstruct-2. While our 8B model demonstrates strong accuracy gains compared to other LLMs
of similar size, the 70B model only shows improvements on a subset of benchmarks. We hypothesize
that our data blend or solution format might be more
suited for weaker models, since we made all of the
design decisions based on the 8B model accuracy on
validation subsets.
### 5. Related Work
In recent years, significant progress has been made in
developing datasets to enhance mathematical reasoning abilities of LLMs. NuminaMath [19] contains a collection of 860K pairs of competition-level
math problems and solutions, annotated with chainof-thought traces [33]. Skywork-MathQA [41] collects
2.5M question-solution pairs, incorporating three different augmentation techniques and a diverse seed
problem set. MuggleMath [18] is created by complicating and diversifying queries, as well as sampling multiple reasoning paths from existing datasets.
MetaMathQA [38] introduced a dataset with 395K entries created by bootstrapping questions from MATH
and GSM8K, employing techniques such as semantic
rephrasing, self-verification, and backward reasoning. MAmmoTH2 [40] introduced a paradigm for
efficiently extracting 10 million naturally occurring
instruction data points from pre-training web corpora,
enhancing LLM reasoning and improving benchmark
performance without the need for in-domain training. Li et al. [16] expanded the MATH dataset to
480K and the GSM8K dataset to 960K by generating
both questions and CoT-based solutions, resulting
in significant accuracy improvements for fine-tuned
models.
Tool-integrated methods for math problem-solving
have also become prevalent. Chen et al. [8] pioneered
the Program of Thoughts (PoT) approach, combining text and programming language statements to
arrive at solutions. Building on similar concepts,
other datasets have been developed. For instance,
8
We are unsure of the 𝑛-gram based data contamination
protocol followed by Qwen2.5-Math given its obvious weakness
in detecting paraphrases.
OpenMathInstruct-1 [30] introduced a math instruction tuning dataset of 1.8 million examples, synthesizing code-interpreter solutions for GSM8K and MATH
benchmarks. InfinityMATH [42] developed a scalable
instruction tuning dataset for programmatic mathematical reasoning, consisting of 100K data points.
Similar to prior work, we also leverage CoT-based
solutions and question augmentation to construct a
novel dataset. Yet our approach distinguishes itself in
several important ways: (a) we leverage open-weight
models instead of proprietary closed-source LLMs allowing us to release the dataset under a permissive
license; (b) we offer novel insights into the impact
of low-quality data, effectiveness of on-policy training and the design of solution format; (c) we ensure
our results are accurate by performing a comprehensive decontamination process using an LLM-based
pipeline that can detect rephrased variations of test
set questions.
### 6. Conclusion
Recent advances in LLM mathematical reasoning have
mostly been closed-source since instruction tuning
data is often not shared or has restrictive license. In
this paper we contribute towards open-source progress
by sharing the OpenMathInstruct-2 dataset and all
the code necessary to reproduce our work. Besides
releasing high-performing models and data, we also
conduct detailed ablations that advance our understanding of how to best construct such datasets. In
summary, we show that:
a) Not all chain-of-thought formats are equally effective, and longer solutions are not necessarily
better.
b) On-policy data for SFT is perhaps less effective
than previously suspected.
c) Data filtering has limited utility for math reasoning datasets as models are quite robust to the
presence of incorrect solutions during SFT.
d) Training on a diverse set of questions is crucial,
but proper decontamination has to be performed
to ensure the benchmark evaluations accurately
represent model strengths.
### References
[1] AIME 2024. `https://artofproblemsolving.`
```
com/wiki/index.php/2024_AIME_I, 2024.
```
[2] Rachith Aiyappa, Jisun An, Haewoon Kwak, and
Yong-yeol Ahn. Can we trust the evaluation on
ChatGPT? In 3rd Workshop on Trustworthy
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
_Natural Language Processing (TrustNLP 2023),_
2023.
[3] AMC 2023. `https://github.com/QwenLM/`
```
Qwen2.5-Math/blob/main/evaluation/data/
amc23/test.jsonl, 2023.
```
[4] Zhangir Azerbayev, Hailey Schoelkopf, Keiran
Paster, Marco Dos Santos, Stephen Marcus
McAleer, Albert Q. Jiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An Open
Language Model for Mathematics. In ICLR,
2024.
[5] Hritik Bansal, Arian Hosseini, Rishabh Agarwal,
Vinh Q. Tran, and Mehran Kazemi. Smaller,
Weaker, Yet Better: Training LLM Reasoners
via Compute-Optimal Sampling, 2024.
[6] Bradley Brown, Jordan Juravsky, Ryan Ehrlich,
Ronald Clark, Quoc V. Le, Christopher Ré, and
Azalia Mirhoseini. Large Language Monkeys:
Scaling Inference Compute with Repeated Sampling, 2024.
[7] Xin Chan, Xiaoyang Wang, Dian Yu, Haitao Mi,
and Dong Yu. Scaling Synthetic Data Creation
with 1,000,000,000 Personas, 2024.
[8] Wenhu Chen, Xueguang Ma, Xinyi Wang, and
William W. Cohen. Program of Thoughts
Prompting: Disentangling Computation from
Reasoning for Numerical Reasoning Tasks.
_TMLR, 2023._
[9] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton,
Reiichiro Nakano, Christopher Hesse, and John
Schulman. Training Verifiers to Solve Math Word
Problems. _arXiv preprint arXiv:2110.14168,_
2021.
[10] DeepSeek-AI. DeepSeek-Coder-V2: Breaking
the Barrier of Closed-Source Models in Code
Intelligence, 2024.
[11] DeepSeek-AI. DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language
Model, 2024.
[12] Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong
Shen, Yujiu Yang, Minlie Huang, Nan Duan, and
Weizhu Chen. ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving. In
_ICLR, 2024._
[13] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang,
Dawn Song, and Jacob Steinhardt. Measuring
Mathematical Problem Solving With the MATH
Dataset. In NeurIPS Datasets and Benchmarks,
2021.
[14] Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes,
and Yejin Choi. The Curious Case of Neural
Text Degeneration. In ICLR, 2020.
[15] Naman Jain, Tianjun Zhang, Wei-Lin Chiang,
Joseph E. Gonzalez, Koushik Sen, and Ion Stoica. LLM-Assisted Code Cleaning For Training
Accurate Code Generators. In ICLR, 2024.
[16] Chen Li, Weiqi Wang, Jingcheng Hu, Yixuan
Wei, Nanning Zheng, Han Hu, Zheng Zhang, and
Houwen Peng. Common 7B Language Models
Already Possess Strong Math Capabilities. arXiv
_preprint arXiv:2403.04706, 2024._
[17] Chen Li, Weiqi Wang, Jingcheng Hu, Yixuan
Wei, Nanning Zheng, Han Hu, Zheng Zhang, and
Houwen Peng. Common 7B Language Models
Already Possess Strong Math Capabilities, 2024.
[18] Chengpeng Li, Zheng Yuan, Hongyi Yuan,
Guanting Dong, Keming Lu, Jiancan Wu,
Chuanqi Tan, Xiang Wang, and Chang Zhou.
MuggleMath: Assessing the Impact of Query
and Response Augmentation on Math Reasoning. In ACL, 2024.
[19] Jia Li, Edward Beeching, Lewis Tunstall, Ben
Lipkin, Roman Soletskyi, Shengyi Huang, Kashif
Rasul, Longhui Yu, Albert Q Jiang, Ziju Shen,
et al. NuminaMath: The largest public dataset in
AI4Maths with 860k pairs of competition math
problems and solutions, 2024.
[20] Ilya Loshchilov and Frank Hutter. Decoupled
Weight Decay Regularization. In ICLR, 2019.
[21] Meta-AI. The Llama 3 Herd of Models, 2024.
[22] Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel
Artetxe, Mike Lewis, Hannaneh Hajishirzi, and
Luke Zettlemoyer. Rethinking the Role of Demonstrations: What Makes In-Context Learning
Work? In EMNLP, 2022.
[23] Mistral AI. `https://mistral.ai/news/`
```
mathstral/, 2024.
```
[24] NVIDIA. Nemotron-4 340B Technical Report,
2024.
[25] Maxwell Nye, Anders Johan Andreassen, Guy
Gur-Ari, Henryk Michalewski, Jacob Austin,
David Bieber, David Dohan, Aitor Lewkowycz,
Maarten Bosma, David Luan, Charles Sutton,
and Augustus Odena. Show Your Work: Scratchpads for Intermediate Computation with Language Models, 2021.
10
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
[[26] Omni-Math. https://omni-math.github.io/,](https://omni-math.github.io/)
2024.
[27] OpenAI. Gpt-4 technical report, 2023. URL
```
https://openai.com/research/gpt-4.
```
[28] Luca Soldaini, Rodney Kinney, Akshita Bhagia, Dustin Schwenk, David Atkinson, Russell Authur, Ben Bogin, Khyathi Chandu, Jennifer Dumas, Yanai Elazar, Valentin Hofmann,
Ananya Jha, Sachin Kumar, Li Lucy, Xinxi Lyu,
Nathan Lambert, Ian Magnusson, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew Peters, Abhilasha Ravichander, Kyle Richardson, Zejiang Shen, Emma
Strubell, Nishant Subramani, Oyvind Tafjord,
Evan Walsh, Luke Zettlemoyer, Noah Smith,
Hannaneh Hajishirzi, Iz Beltagy, Dirk Groeneveld, Jesse Dodge, and Kyle Lo. Dolma: an
Open Corpus of Three Trillion Tokens for Language Model Pretraining Research. In ACL,
2024.
[29] Zayne Sprague, Fangcong Yin, Juan Diego
Rodriguez, Dongwei Jiang, Manya Wadhwa,
Prasann Singhal, Xinyu Zhao, Xi Ye, Kyle Mahowald, and Greg Durrett. To CoT or not to
CoT? Chain-of-thought helps mainly on math
and symbolic reasoning, 2024.
[30] Shubham Toshniwal, Ivan Moshkov, Sean Narenthiran, Daria Gitman, Fei Jia, and Igor Gitman.
OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset. In NeurIPS Datasets
_and Benchmarks, 2024._
[35] Jason Wei, Xuezhi Wang, Dale Schuurmans,
Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le,
Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models.
_NeurIPS, 2022._
[36] An Yang, Beichen Zhang, Binyuan Hui, Bofei
Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu,
Jianhong Tu, Jingren Zhou, Junyang Lin, Keming Lu, Mingfeng Xue, Runji Lin, Tianyu Liu,
Xingzhang Ren, and Zhenru Zhang. Qwen2.5Math Technical Report: Toward Mathematical
Expert Model via Self-Improvement, 2024.
[37] Shuo Yang, Wei-Lin Chiang, Lianmin Zheng,
Joseph E. Gonzalez, and Ion Stoica. Rethinking
Benchmark and Contamination for Language
Models with Rephrased Samples, 2023.
[38] Longhui Yu, Weisen Jiang, Han Shi, Jincheng
Yu, Zhengying Liu, Yu Zhang, James T Kwok,
Zhenguo Li, Adrian Weller, and Weiyang Liu.
MetaMath: Bootstrap Your Own Mathematical
Questions for Large Language Models. In ICLR,
2024.
[39] Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen.
MAmmoTH: Building math generalist models
through hybrid instruction tuning. In ICLR,
2024.
[40] Xiang Yue, Tuney Zheng, Ge Zhang, and Wenhu
Chen. MAmmoTH2: Scaling Instructions from
the Web. arXiv preprint arXiv:2405.03548, 2024.
[41] Liang Zeng, Liangjun Zhong, Liang Zhao, Tian
[31] Pablo Villalobos and David Atkinson. Trad- wen Wei, Liu Yang, Jujie He, Cheng Cheng, Rui
ing Off Compute in Training and Inference, Hu, Yang Liu, Shuicheng Yan, Han Fang, and
2023. URL `https://epochai.org/blog/` Yahui Zhou. Skywork-Math: Data Scaling Laws
`trading-off-compute-in-training-and-inference.` for Mathematical Reasoning in Large Language
Models – The Story Goes On, 2024.
[32] Xuezhi Wang, Jason Wei, Dale Schuurmans,
Quoc Le, Ed Chi, Sharan Narang, Aakanksha [42] Bo-Wen Zhang, Yan Yan, Lin Li, and Guang Liu.
Chowdhery, and Denny Zhou. Self-consistency InfinityMATH: A Scalable Instruction Tuning
improves chain of thought reasoning in language Dataset in Programmatic Mathematical Reasonmodels. arXiv preprint arXiv:2203.11171, 2022. ing. arXiv preprint arXiv:2408.07089, 2024.
[33] Xuezhi Wang, Jason Wei, Dale Schuurmans, [43] Hugh Zhang, Jeff Da, Dean Lee, Vaughn RobinQuoc V. Le, Ed H. Chi, Sharan Narang, son, Catherine Wu, Will Song, Tiffany Zhao,
Aakanksha Chowdhery, and Denny Zhou. Self- Pranav Raja, Dylan Slack, Qin Lyu, Sean
Consistency Improves Chain of Thought Reason- Hendryx, Russell Kaplan, Michele Lunati, and
ing in Language Models. In ICLR, 2023. Summer Yue. A Careful Examination of Large
Language Model Performance on Grade School
[34] Zhilin Wang, Yi Dong, Olivier Delalleau, Ji- Arithmetic, 2024.
aqi Zeng, Gerald Shen, Daniel Egert, Jimmy J.
Zhang, Makesh Narsimhan Sreedhar, and Oleksii
Kuchaiev. HelpSteer2: Open-source dataset for
training top-performing reward models, 2024.
11
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
**Instruct Prompt Template**
```
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
FEW-SHOT PROMPTS
Question:
{question}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{generation}
```
Figure 7: Typical instruct prompt template used with Llama-Instruct models.
**Base Prompt Template**
```
<|begin_of_text|>FEW-SHOT PROMPTS
Question:
{question}
My solution:
{generation}
```
Figure 8: Base prompt template where we drop the special tokens for marking roles when using the
```
Llama-Instruct models.
```
### A. Miscellaneous
**A.1. Generating Solution in OpenMath CoT**
**Format**
When we prompt the Llama3.1-405B-Instruct
model with few-shot examples in OpenMath CoT
format from Appendix D.1 in tandem with the in_struct prompt, shown in Figure 7, almost 57% of the_
generated solutions are in the Llama CoT format on
which the model is most likely trained on.[9] We find
that dropping the Llama special tokens for marking
roles in the prompt, as shown in Figure 8, results
in much better adherence to our proposed few-shot
prompt with only 0.1% generations in the Llama CoT
format.
**A.2. Post-Processing**
We remove or modify solutions based on the following
criteria:
- Remove solutions with multiple \boxed entries.
- Remove prefix My Solution: from solutions.
- Truncate the solution till the first sentence with
```
\boxed.
```
- Remove incorrect arithmetic calculations.
[9https://huggingface.co/datasets/meta-llama/](https://huggingface.co/datasets/meta-llama/Meta-Llama-3.1-8B-Instruct-evals/viewer/Meta-Llama-3.1-8B-Instruct-evals__math__details)
```
Meta-Llama-3.1-8B-Instruct-evals/viewer/Meta-Llama-3.
1-8B-Instruct-evals__math__details
```
- Split complex arithmetic calculations to step-bystep calculations to make it easier for the model
to generate.
- Remove solutions longer than 1024 Llama3.1
tokens.
- Remove solutions with less than 200 characters.
**A.3. Composition of OpenMathInstruct-2**
Table 5 represents the composition of
OpenMathInstruct-2. The dataset consists of
about 592K new synthetically-generated questions
which contribute about 11M new question-solution
pairs.
**A.4. Checkpoint Averaging**
We have found consistent gains in our setup with
checkpoint averaging. Figure 9 shows a gain of more
than 2% for one of our ablation runs when the final
checkpoint is created using the average of the last
4 checkpoints in comparison to using only the last
checkpoint.
### B. Performance Comparison be- tween Different Teacher Models
In this section, we explore the impact of lowquality data produced by two distinct teacher models:
12
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
Table 5: Composition of OpenMathInstruct-2
Dataset Approach # of Unique Ques. # of Unique Ques.-Sol. Pairs
GSM8K Solution Augmentation 7.4K 0.46M
GSM8K Question-Solution Augmentation 73.6K 2.11M
MATH Solution Augmentation 7.4K 2.46M
MATH Question-Solution Augmentation 519.1K 8.94M
Total - 607.3K 13.97M
Table 6: Performance of the SFT Llama3.1-8B-Base model on the MATH validation set after applying
different filtering strategies to remove poor-quality data from two-choice teacher models: 8B-Base and
405B-Instruct. Results for the 405B-Instruct model are averaged over 4 runs, while the 8B-Base results are
based on a single run.
Teacher model Filtering Strategy Data Size MATH Validation Accuracy
Unfiltered 128K 43.6 1.7
_±_
LLM-as-a-Judge: Prompt 1 113K 43.4 0.1
_±_
LLM-as-a-Judge: Prompt 2 116K 43.0 0.8
_±_
Nemotron-4-340B-Reward: Helpfulness 3 118K 43.7 0.4
_≥_ _±_
Nemotron-4-340B-Reward: Correctness 3 120K 43.1 0.4
_≥_ _±_
Unfiltered 128K 29.8
LLM-as-a-Judge: Prompt 1 70K 30.3
LLM-as-a-Judge: Prompt 2 72K 29.3
Nemotron-4-340B-Reward: Helpfulness 3 42K 28.1
_≥_
Nemotron-4-340B-Reward: Correctness 3 49K 30.5
_≥_
405B-Inst
8B-Base
dataset serving as the seed. We ensured that all solutions produced led to the correct final answer, and
restricted the maximum token length of generated solutions to 1024. Data statistics and SFT performance
are summarized in Table 6.
The percentage of low-quality data generated by
the Llama3.1-8B-Base teacher model, when applying different filtering strategies, ranged from 45% to
67%. This is notably higher than the percentage
observed with the Llama3.1-405B-Instruct model,
as expected. More advanced teacher models, like
```
Llama3.1-405B-Instruct, generally produce higher
```
quality data.
The SFT performance of the student model
```
Llama3.1-8B-Base remained relatively stable across
```
the various filtering strategies, regardless of
whether the teacher was Llama3.1-8B-Base or
`Llama3.1-405B-Instruct.` However, the overall performance was consistently higher when
```
Llama3.1-405B-Instruct was used as the teacher.
```
This observation aligns with the findings discussed in
Section 2.2.3, which highlight that SFT performance
experiences minimal to no degradation, even when a
significant portion of the training data is noisy.
Finally, Table 7 and Table 8 present low
44.0
43.5
43.0
42.5
42.0
MATH Validation Accuracy (in %) 41.5
1 2 3 4
Averaging Last N Checkpoints
Figure 9: MATH Validation accuracy as a function
of the final checkpoint being an average of the last 𝑁
checkpoints.
```
Llama3.1-8B-Base and Llama3.1-405B-Instruct.
```
To identify low-quality data, we employ the same
methods outlined in Section 2.2.3, specifically, LLMas-a-judge and reward model labeling.
For the teacher model Llama3.1-8B-Base, we generated 128K data samples using the same configuration as Llama3.1-405B-Instruct, with the MATH
13
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
Table 7: Sample solutions generated by Llama3.1-8B-Base identified as having incorrect reasoning steps.
We highlight in red incorrect judgements made by the models.
**Strategy** **Judgement** **Question** **Solution**
**correctness**
LLM-as-a- **Model** **cor-** The faces of an octahedral We need to find the probability of summing 15 with a pair of dice rolls.
Judge **rectly iden-** die are labeled with digits For each die roll, there are 8 possible outcomes, which are {1, 2, 3, 4, 5, 6, 7, 8}.
**tifies** **incor-** 1 through 8. What is the To calculate the total number of possible outcomes, we find 8 × 8 = 64.
**rect reason-** probability, expressed as a (Correct reasoning)
**ing** common fraction, of rolling Now we need to find how many rolls lead to a sum of 15. For each die roll,
a sum of 15 with a pair of we can only pick two from {3, 4, 5, 6, 7, 8}, since both die rolls have to be
such octahedral dice? less than 15. (Incorrect reasoning)
We can pick any combination of two from the set of six, which is 62 = 15.
There are two ways to roll 15: (3, 12) and (4, 11). (Incorrect reasoning)
Now find the probability (Correct final answer): [(︀] )︀
probability = [number of successful rolls] = [2] 1
total number of rolls 64 [=] 32
LLM-as-a- **Model** **la-** In a physical education Let 𝑥 be the number of students in each row that has the same number of
Judge **bels** **as** class, students line up in students. Let 𝑦 be the number of students in the row that has one more
**incorrect,** four rows to do exercises. student than the other rows.
**but** **the** One particular class has If 𝑦 students are in the row that has one more student than the other rows,
**solution** **is** over 30 students, with three then there are 𝑦 _−_ 1 students in each of the other rows. So the total number
**correct** rows having the same num- of students is (Correct reasoning)
ber of students and one row
having one more studentthan the other three rows. (𝑦 _−_ 1)+(𝑦 _−_ 1)+(𝑦 _−_ 1)+𝑦 = 3𝑦 _−_ 3+𝑦 = 4𝑦 _−_ 3
What is the smallest possi
We need the total number of students to be greater than 30. Thus, we have
ble class size for this physi
(Correct reasoning)
cal education class?
4𝑦 _−_ 3 > 30 ⇒ 4𝑦> 33 ⇒ _𝑦> 33 : 4 ⇒_ _𝑦> 8_
With the constraints 𝑦 _≥_ 9 and 𝑦 is an integer, the smallest possible value
of 𝑦 is 𝑦 = 9. (Correct reasoning)
Thus the smallest possible class size is 4𝑦 _−_ 3 = 4(9) − 3 = 33 . (Correct
final answer)
Reward **Model** **cor-** Determine the value of 𝑥 We apply the power rule to 5 _𝑥√𝑥[3]._
Model **rectly iden-** that satisfies 5 _𝑥√𝑥[3]_ = 3. (Incorrect reasoning)
**tifies** **incor-** √︀
**rect reason-**
**ing** √︀ 5 _𝑥√𝑥[3]_ = 5 _𝑥√𝑥[3]_ 51 = (𝑥√𝑥[3]) 51 = (𝑥[(1][/][2)])[(1][/][5)] = 𝑥 12 _[×][ 1]5 = 𝑥_ 101
(Incorrect reasoning√︀ √︀ )
1
Substitute 𝑥 10 back into the original equation:
1 1
3 = 𝑥 10 ⇒ _𝑥_ 10 = 3 ⇒ _𝑥[(1][/][10)]_ = 3[2] _⇒_ _𝑥_ = 3[2] _⇒_ _𝑥_ = 9
Therefore, the value of 𝑥 that satisfies 5 _𝑥√𝑥[3]_ = 3 is 9 . (Correct final
answer)
√︀
14
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
Table 8: Sample solutions generated by Llama3.1-405B-Instruct identified as having incorrect reasoning
steps. We highlight in red incorrect judgements made by the models.
**Strategy** **Judgement** **Question** **Solution**
**correctness**
Using AM-GM inequality, we have (Correct reasoning)
_𝑥[2]+1 ≥_ 2𝑥 and 3𝑥+3 ≥ 6[√]𝑥
(Incorrect reasoning)
**(𝑥[2]+1)+(3𝑥+3) ≥** 2𝑥+6[√]𝑥 _≥_ 2
2𝑥 _· 3[√]𝑥_ = 6𝑥
_⇒_ _𝑥[2]+3𝑥+4 ≥_ 6𝑥 _⇒_ _[𝑥][2][+]𝑥[3][𝑥][+][1]_
_≥_ 5
(Incorrect reasoning)
By similar process we get _[𝑦][2][+]𝑦[3][𝑦][+][1]_ _≥_ 5 and _[𝑧][2][+]𝑧[3][𝑧][+][1]_ _≥_ 5. Thus we
can conclude that
**(𝑥[2]+3𝑥+1)(𝑦[2]+3𝑦+1)(𝑧[2]+3𝑧+1)**
_𝑥𝑦𝑧_ _≥_ 5 × 5 × 5 = 125
(Incorrect reasoning)
Since 𝑥 = 𝑦 = 𝑧 = 1 gives us
**(1[2]+3(1)+1)(1[2]+3(1)+1)(1[2]+3(1)+1)**
1 · 1 · 1 = 5 × 5 × 5 = 125
and the minimum value of
**(𝑥[2]+3𝑥+1)(𝑦[2]+3𝑦+1)(𝑧[2]+3𝑧+1)**
_𝑥𝑦𝑧_
is 125 .
Let 𝑝 be the cost of the pen and 𝑖 be the cost of the ink refill. We
know:
1. The total cost is $1.10 (Correct reasoning):
_𝑝_ + 𝑖 = 1.10
2. The pen costs $1 more than the ink refill (Correct reasoning):
_𝑝_ = 𝑖 + 1
Solve for 𝑝: From the second equation, substitute 𝑖 = 𝑝 _−_ 1 into the
first equation: (Correct reasoning)
_𝑝_ + (𝑝 _−_ 1) = 1.10 ⇒ 2𝑝 _−_ 1 = 1.10
_⇒_ 2𝑝 = 2.10 ⇒ _𝑝_ = 2.10 : 2 = 1.05
This is an extremely difficult problem to be solved by hand. (Incorrect
reasoning)
The number of switches in position A is 650 .
The total number of factors of 𝑁 can be found by multiplying together
one more than each of the exponents of the prime factors of 𝑁 .
Since the prime factorization of 𝑁 is: (Correct reasoning)
_𝑁_ = 2[4] 3[3] 5[2] 7[2]
_·_ _·_ _·_
the total number of factors is: (Correct reasoning)
**(4+1) · (3+1) · (2+1) · (2+1) = 5 · 4 · 3 · 3 = 180**
|So the answer is|180|.|
|---|---|---|
15
LLM-as- **Model** **cor-** Let 𝑥, 𝑦, and 𝑧 be positive real numbers.
a-Judge **rectly iden-** Find the minimum value of
**tifies** **incor-**
**rect reason-** **(𝑥[2]+3𝑥+1)(𝑦[2]+3𝑦+1)(𝑧[2]+3𝑧+1)**
**ing** _._
_𝑥𝑦𝑧_
LLM-as- **Model** **la-** A pen and its ink refill together cost
a-Judge **bels** **as** $1.10. The pen costs $1 more than the
**incorrect,** ink refill. What is the cost of the pen in
**but** **the** dollars
**solution** **is**
**correct**
Reward **Model** **cor-** There is a set of 1000 switches, each of
Model **rectly iden-** which has four positions, called 𝐴, 𝐵, 𝐶,
**tifies** **incor-** and 𝐷. When the position of any switch
**rect reason-** changes, it is only from 𝐴 to 𝐵, from 𝐵
**ing** to 𝐶, from 𝐶 to 𝐷, or from 𝐷 to 𝐴. Ini-
tially each switch is in position 𝐴. The
switches are labeled with the 1000 differ-
ent integers (2[𝑥])(3[𝑦])(5[𝑧]), where 𝑥, 𝑦, and
_𝑧_ take on the values 0, 1, . . . , 9. At step
i of a 1000-step process, the 𝑖-th switch
is advanced one step, and so are all the
other switches whose labels divide the la-
bel on the 𝑖-th switch. After step 1000
has been completed, how many switches
will be in position 𝐴?
Reward **Model** **la-** How many natural-number factors does
Model **bels** **as** _𝑁_ have if 𝑁 = 2[4] _· 3[3]_ _· 5[2]_ _· 7[2]?_
**incorrect,**
**but** **the**
**solution** **is**
**correct**
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
Table 9: Comparison of SFT performance when
selecting synthesized question-solution pairs with
varying majority thresholds for determining whether
to include the question in SFT data.
Min-votes Data size MATH Validation Accuracy
0 381K **50.1**
8 339K 49.2
16 254K 44.4
24 160K 42.0
quality solutions identified by the two methods for
```
Llama3.1-8B-Base and Llama3.1-405B-Instruct
```
respectively.
### C. Question-Solution Augmentation
**C.1. Minimum Majority Vote Ablation**
To determine the answer to synthetically generated
questions, we use majority voting as a proxy for
ground truth answer. We conduct an ablation study
to determine the threshold for a minimum number of
majority votes. The questions for which the number
of majority vote solutions is less than the threshold
are removed. We generate 32 solutions per question
for a small set of initial synthesized questions (after
performing decontamination with MATH validation
subset) and perform a comparison of varying the majority vote threshold from {0, 8, 16, 24}. Based on the
results presented in Table 9, we select the threshold
of 0 in our experiments.
**C.2. Contaminated Examples Detected by**
**LLMs**
The decontamination pipeline described in Section 3.1
identifies questions that will be missed by a simple 𝑛gram baseline. Using it we have effectively filtered out
approximately 50K questions from the 569K newly
synthesized questions, reducing the total from 569K
to 519K.
We show two such examples in Table 10. Our
dataset does have questions that are similar (but not
equivalent) to MATH test set questions with sample
pairs shown in Table 11.
16
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
Table 10: Examples of paraphrases detected by our decontamination pipeline which will be missed by 𝑛-gram
matching.
**MATH Test Set Question** **Synthesized Question**
How many ordered triplets (𝑎, 𝑏, 𝑐) of rational
numbers are there where 𝑎, 𝑏, 𝑐 are the roots of
_𝑥[3]_ + 𝑎𝑥[2] + 𝑏𝑥 + 𝑐 = 0?
In how many ways can we seat 6 people around
a round table if Fred and Gwen insist on sitting
opposite each other? (Two seatings are considered
equivalent if one is a rotation of the other.)
Find the number of ordered triplets (𝑎, 𝑏, 𝑐) of real
numbers such that the cubic equation 𝑥[3] + 𝑎𝑥[2] +
_𝑏𝑥_ + 𝑐 = 0 has roots 𝑎, 𝑏, and 𝑐.
A circular table has 6 identical chairs placed
around it. In how many ways can 6 people, including Alice and Bob, be seated around the table
if Alice and Bob want to sit opposite each other?
Two seating arrangements are considered the same
if one is a rotation of the other.
Table 11: Examples of questions from OpenMathInstruct-2 which are similar (but not equivalent) to
questions from the MATH test set.
**MATH Test Set Question** **Similar question from OpenMathInstruct-2**
Determine the number of ways to arrange the Find the number of ways to arrange the letters of
letters of the word GAMMAS the word DETAIL
Factor 32𝑥[3] 4𝑥[2] + 20𝑥 Factor the expression 𝑥[6] 20𝑥[3] 30
_−_ _−_ _−_
Three points are chosen randomly and indepen- Three points are chosen uniformly at random on
dently on a circle. What is the probability that a circle. What is the probability that no two of
all three pairwise distances between the points these points form an obtuse triangle with the circle’s
are less than the radius of the circle? center?
Compute
Compute
7𝜋
cos
4
(︂
)︂
cos [2]7 [𝜋] [cos 4]7 [𝜋] [cos 8]7[𝜋]
What is the remainder when 5[30] is divided by 7? What is the remainder when 5[2005] is divided by 27?
What is the digit in the hundredths place of the Find the digit in the hundredths place of the decimal
decimal equivalent of 1609 [?] equivalent of _√12_ [.]
17
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
### D. LLM Prompts
**D.1. Solution Augmentation Prompt**
**Few-shot Prompt: Solution Augmentation**
Here are some examples of questions and s o l u t i o n s followed by a new question that you
need to s o l v e .
Make sure to put the answer ( and only answer ) i n s i d e \boxed {{}}.
Question :
A parabola with equation $y=x^2+bx+c$ passes through the points $(−1,−11)$ and $ (3,17)
$ . What i s $c$ ?
My s o l u t i o n :
From the question we know that points $(−1, −11)$ and $ (3, 17) $ l i e on the parabola .
This means that when we s u b s t i t u t e $x$ and $y$ from these points into the equation $y
= x^2 + bx + c$, the equation must hold true . We s u b s t i t u t e these two points into the
given equation to s o l v e f o r $c$ .
For the point $(−1, −11)$ :
Substitute $x = −1$ and $ y = −11 $ into the equation :
\ [ −11 = (−1)^2 + b(−1) + c \ Rightarrow −11 = 1 −b + c \ Rightarrow −b + c = −12 \ ]
For the point $ (3, 17) $ :
Substitute $x = 3$ and $y = 17$ into the equation :
\[ 17 = (3) ^2 + b (3) + c \ Rightarrow 17 = 9 + 3b + c \ Rightarrow 3b + c = 8 \]
In summary, we have the two equations
\ begin { a l i g n ∗}
−b + c &= −12\\
3b + c &= 8
\end{ a l i g n ∗}
To s o l v e f o r $c$ we can eliminate $b$ by multiplying the f i r s t equation by 3 and
adding equations together .
Multiplying the f i r s t equation by 3, we have $3(−b + c ) = 3 (−12) \ Rightarrow −3b + 3c
= −36$ . Adding equations together g i v e s us
\[ (−3b + 3c ) + (3b + c ) = −36 + 8 \ Rightarrow −3b + 3b + 3c + c = −28 \ Rightarrow 4c
= −28 \ Rightarrow c = −28 : 4 \ Rightarrow c = \boxed{−7} \ ]
Question :
Let $f ( x ) $ be an odd function . I s $f ( f ( x ) ) $ even, odd, or n e i t h e r ?\n\nEnter " odd ", "
even ", or " n e i t h e r " .
My s o l u t i o n :
To determine whether $f ( f ( x) ) $ i s even, odd, or neither, we need to use the property
of $f ( x ) $ being an odd function .
An odd function i s defined as :
\[ f (−x ) = −f (x ) \quad \ text { f o r a l l } x \]
Given that $f ( x) $ i s odd, let ’ s find $f ( f (−x ) ) $ and see how i t r e l a t e s to $f ( f ( x ) ) $ .
1 . Substitute $−x$ into the function $f ( x ) $ :
\ [ f (−x ) \]
18
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
2 . Since $f (x ) $ i s odd, apply the d e f i n i t i o n of an odd function :
\ [ f (−x ) = −f (x ) \]
3 . Now s u b s t i t u t e $−f (x ) $ into the function $f$ :
\ [ f ( f (−x ) ) = f (−f (x ) ) \]
4 . Again, using the f a c t that $f ( x) $ i s odd, apply the d e f i n i t i o n :
\ [ f (−f ( x ) ) = −f ( f (x ) ) \]
5 . We have found that :
\ [ f ( f (−x ) ) = −f ( f (x ) ) \]
This matches the d e f i n i t i o n of an odd function .
So, the answer i s :
\ [ \boxed{\ text {odd}} \]
Question :
A rectangular box $P$ i s i n s c r i b e d in a sphere of radius $r$ . The s u r f a c e area of $P$
i s 384, and the sum of the lengths of i t s 12 edges i s 112. What i s $r$ ?
My s o l u t i o n :
Let the dimensions of the rectangular box $P$ be $x$, $y$, and $z$ . We know the
f o l l o w i n g :
1 . The sum of the lengths of the edges of $P$ i s
\ [ 4(x + y + z ) = 112 \ Rightarrow x + y + z = 112 : 4 \ Rightarrow x + y + z = 28 \]
2 . The s u r f a c e area of $P$ i s
\ [ 2xy + 2yz + 2xz = 384 \ Rightarrow xy + yz + xz = 384 : 2 \ Rightarrow xy + yz + xz =
192 \]
Since the box i s i n s c r i b e d in the sphere, the diagonal of the box i s the diameter of
the sphere . The length of the diagonal i s $\ sqrt {x^2 + y^2 + z ^2}$
The diameter of the sphere i s $2r$, so :
\ [ 2 r = \ sqrt {x^2 + y^2 + z ^2} \ Rightarrow (2 r ) ^2 = x^2 + y^2 + z^2 = ( x + y + z ) ^2 −
(2 xy + 2yz + 2xz ) \]
Substitute the known values :
\ [ 4 r ^2 = 28^2 −384 = 784 −384 = 400 \ Rightarrow r ^2 = 100 \ Rightarrow r = \boxed
{10} \]
Question :
Let $\mathbf{a} = \ begin {pmatrix} 2 \\ 1 \\ 5 \end{ pmatrix } . $ Find the vector $\
mathbf{b}$ such that $\mathbf{a} \ cdot \mathbf{b} = 11$ and\n \[\ mathbf{a} \ times \
mathbf{b} = \ begin {pmatrix} −13 \\ −9 \\ 7 \end{ pmatrix } . \ ]
My s o l u t i o n :
Let $\mathbf{b} = \ begin {pmatrix} x \\ y \\ z \end{ pmatrix }$ .
First, use the dot product condition :
\ [ \mathbf{a} \ cdot \mathbf{b} = 11 \ Rightarrow 2x + y + 5z = 11 \]
Next, use the c r o s s product condition :
19
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
\ [ \mathbf{a} \ times \mathbf{b} = \ begin { pmatrix } 2 \\ 1 \\ 5 \end{pmatrix} \ times \
begin { pmatrix} x \\ y \\ z \end{pmatrix } = \ begin {pmatrix } −5y + z \\ 5x −2z \\ −x +
2y \end{ pmatrix} = \ begin {pmatrix} −13 \\ −9 \\ 7 \end{ pmatrix } \ ]
This g i v e s us the system of equations :
\ begin { a l i g n ∗}
2x + y + 5z = 11 \quad &(1) \\
−5y + z = −13 \quad &(2) \\
5x −2z = −9 \quad &(3) \\
−x + 2y = 7 \quad &(4)
\end{ a l i g n ∗}
Solve f o r $x$, $y$, and $z$ step−by−step :
From (2), $z = 5y −13$ .
From (4), $x = 2y −7$ .
Substitute $z = 5y −13$ into (1) :
\ [ 2(2y −7) + y + 5(5y −13) = 11 \ Rightarrow 4y −14 + y + 25y −65 = 11 \ Rightarrow
30y −79 = 11 \ Rightarrow 30y = 90 \ Rightarrow y = 3 \ ]
Now f i n d $x$ and $z$ :
\ [ x = 2y −7 = 2(3) −7 = −1 \]
\ [ z = 5y −13 = 5(3) −13 = 2 \]
Thus, the vector $\mathbf{b}$ i s :
\ [ \mathbf{b} = \boxed{\ begin {pmatrix } −1 \\ 3 \\ 2 \end{ pmatrix }} \ ]
Question :
{ question }
My s o l u t i o n :
20
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
**D.2. Question-Solution Augmentation Prompts**
**Few-shot prompt for GSM8K Question Augmentation**
Help the user to create a new math problem s i m i l a r to a given one . Make the new
problem reasonable and s o l v a b l e .
Here are some examples of how to complete t h i s task .
Problem :
Olivia has $23 . She bought f i v e bagels f o r $3 each . How much money does she have l e f t ?
Write another problem s i m i l a r to t h i s one :
Aiden has $35 . He purchased eight p e n c i l s f o r $2 each and a notebook f o r $5 . How much
money does he have remaining ?
Problem :
Michael had 58 g o l f b a l l s . On tuesday, he l o s t 23 g o l f b a l l s . On wednesday, he l o s t 2
more . How many g o l f b a l l s did he have at the end of wednesday?
Write another problem s i m i l a r to t h i s one :
Sarah c o l l e c t e d 72 s e a s h e l l s during her beach vacation . On Thursday, she gave 15
s e a s h e l l s to her f r i e n d as a souvenir . On Friday, she found 8 more s e a s h e l l s while
exploring the shore . How many s e a s h e l l s did Sarah have at the end of Friday ?
Problem :
Angelo and Melanie want to plan how many hours over the next week they should study
together f o r t h e i r t e s t next week . They have 2 chapters of t h e i r textbook to study and
4 worksheets to memorize . They f i g u r e out that they should dedicate 3 hours to each
chapter of t h e i r textbook and 1.5 hours f o r each worksheet . I f they plan to study no
more than 4 hours each day, how many days should they plan to study t o t a l over the
next week i f they take a 10−minute break every hour, include 3 10−minute snack breaks
each day, and 30 minutes f o r lunch each day?
Write another problem s i m i l a r to t h i s one :
Samantha and David are preparing f o r t h e i r upcoming s c i e n c e f a i r p r o j e c t . They have
four d i f f e r e n t experiments to conduct and a research paper to write . Each experiment
i s estimated to take 2 hours, and the research paper w i l l r e q u i r e 8 hours to complete .
To stay focused and productive, they plan to take a 15−minute break f o r every 1.5
hours of work and have three 20−minute snack breaks each day . Additionally, they
a l l o c a t e 45 minutes f o r lunch each day . I f they want to l i m i t t h e i r d a i l y study time
to 5 hours, how many days should they plan to work on t h e i r p r o j e c t over the next two
weeks ?
Problem :
Leah had 32 chocolates and her s i s t e r had 42. I f they ate 35, how many p i e c e s do they
have l e f t in t o t a l ?
Write another problem s i m i l a r to t h i s one :
Tom has 50 marbles, and h i s f r i e n d Jerry has 65 marbles . I f they decide to play a game
and bet 20 marbles each, how many marbles w i l l they have l e f t in t o t a l a f t e r the game
21
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
?
Problem :
There were nine computers in the s e r v e r room . Five more computers were i n s t a l l e d each
day, from monday to thursday . How many computers are now in the s e r v e r room?
Write another problem s i m i l a r to t h i s one :
In a garden, there were 12 f l o w e r s . Every morning f o r a week ( from Monday to Sunday ),
3 more f l o w e r s were planted . How many f l o w e r s are there in the garden now?
Problem :
Jason had 20 l o l l i p o p s . He gave Denny some l o l l i p o p s . Now Jason has 12 l o l l i p o p s . How
many l o l l i p o p s did Jason give to Denny?
Write another problem s i m i l a r to t h i s one :
Sarah had 35 marbles . She gave some marbles to her f r i e n d Emma. Now Sarah has 18
marbles l e f t . How many marbles did Sarah give to Emma?
Problem :
Sam bought a dozen boxes, each with 30 h i g h l i g h t e r pens inside, f o r $10 each box . He
rearranged f i v e of these boxes into packages of s i x h i g h l i g h t e r s each and sold them
f o r $3 per package . He sold the r e s t of the h i g h l i g h t e r s s e p a r a t e l y at the rate of
three pens f o r $2 . How much p r o f i t did he make in total, in d o l l a r s ?
Write another problem s i m i l a r to t h i s one :
Amy purchased 8 crates, each containing 24 c o l o r f u l markers, f o r $12 per crate . She
decided to create s e t s of 4 markers each and s e l l them f o r $2 per s e t . The remaining
markers she sold i n d i v i d u a l l y at a rate of 5 markers f o r $3 . Calculate the t o t a l
p r o f i t Amy made, in d o l l a r s .
Problem :
There are 15 t r e e s in the grove . Grove workers w i l l plant t r e e s in the grove today .
After they are done, there w i l l be 21 t r e e s . How many t r e e s did the grove workers
plant today ? ’,
Write another problem s i m i l a r to t h i s one :
In a garden, there are 25 rose bushes . The gardener plans to plant some more rose
bushes today . After planting, there w i l l be a t o t a l of 40 rose bushes in the garden .
How many rose bushes w i l l the gardener plant today ?
22
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
Here i s the problem from the user :
{ question }
Write another problem s i m i l a r to t h i s one . Start d i r e c t l y with the problem statement
and DO NOT include any phrases such as " Here i s a new problem s i m i l a r to a given one " .
After the problem i s generated f i n i s h your response r i g h t away .
**Few-shot Prompt 1: MATH Question Augmentation**
Help the user to create a new math problem s i m i l a r to a given one . Make the new
problem reasonable and s o l v a b l e .
Here are some examples of how to complete t h i s task .
Problem :
In the equation $$5x^2−kx+1=0$$ determine $k$ such that the d i f f e r e n c e of the roots be
equal to unity .
Write another problem s i m i l a r to t h i s one :
Consider the quadratic equation : $$3x^2 + mx −2 = 0$$
Find the value of $m$ f o r which the sum of the roots i s equal to 4 .
Problem :
Solve the f o l l o w i n g equation
$\\ ds \\ f {3+x}{3x}=\\ sqrt {\\ ds \\ f {1}{9}+\\ ds \\ f {1}{x}\\ sqrt {\\ ds \\ f {4}{9}+\\ ds \\ f {2}{
x^2}}}$
Write another problem s i m i l a r to t h i s one :
Solve the f o l l o w i n g equation :
$\\ f r a c {2−y}{4y} = \\ sqrt {\\ f r a c {1}{16} + \\ f r a c {1}{y}\\ sqrt {\\ f r a c {9}{16} + \\ f r a c
{3}{y^2}}}$
Problem :
In an i n f i n i t e l y decreasing geometric p r o g r e s s i o n the sum of a l l the terms occupying
odd pl ace s i s equal to 36, and that of a l l the terms at even pla ces equals 12.
Find the p rog res si on .
Write another problem s i m i l a r to t h i s one :
In an i n f i n i t e l y decreasing geometric sequence, the sum of a l l terms in
p o s i t i o n s that are multiples of 3 i s equal to 54, while the sum of a l l remaining terms
i s 126.
Find the f i r s t term and common r a t i o of t h i s geometric sequence .
Problem :
Two railway s t a t i o n s are at a distance of 96 km from each other . One t r a i n covers t h i s
distance 40 minutes f a s t e r than does the other . The speed of the f i r s t t r a i n i s 12 km
/h higher than that of the second .
Determine the speed of both t r a i n s .
23
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
Write another problem s i m i l a r to t h i s one :
Two a i r p o r t s are located 450 miles apart . A commercial a i r l i n e r f l i e s t h i s route 30
minutes f a s t e r than a smaller private j e t . The speed of the commercial a i r l i n e r i s 75
mph g r e a t e r than that of the private j e t . Calculate the speed of both a i r c r a f t .
Here i s the problem from the user :
{ question }
Write another problem s i m i l a r to t h i s one . Start d i r e c t l y with the problem statement
and DO NOT include any phrases such as " Here i s a new problem s i m i l a r to a given one " .
After the problem i s generated f i n i s h your response r i g h t away .
**Few-shot Prompt 2: MATH Question Augmentation**
Help the user to create a new math problem i n s p i r e d by a given one . Make the new
problem reasonable and s o l v a b l e .
Here are some examples of how to complete t h i s task .
Problem :
In the equation $$5x^2−kx+1=0$$ determine $k$ such that the d i f f e r e n c e of the roots be
equal to unity .
Write another problem i n s p i r e d by t h i s one :
The roots $x_1$ and $x_2$ of the equation $$x^2−3ax+a^2=0$$ are such that $x_1^2+x_2
^2=1.75$ . Determine $a$ .
Problem :
Solve the f o l l o w i n g equation $\\ ds \\ f {3+x}{3x}=\\ sqrt {\\ ds \\ f {1}{9}+\\ ds \\ f {1}{x}\\
sqrt {\\ ds \\ f {4}{9}+\\ ds \\ f {2}{x^2}}}$
Write another problem i n s p i r e d by t h i s one :
Solve the f o l l o w i n g equation $\\ sqrt {1+x\\ sqrt {x^2+24}}=x+1$
Problem :
In an i n f i n i t e l y decreasing geometric p r o g r e s s i o n the sum of a l l the terms occupying
odd pl ace s i s equal to 36, and that of a l l the terms at even pla ces equals 12. Find
the p r o g r e ssi on .
Write another problem i n s p i r e d by t h i s one :
The sum of the terms of an i n f i n i t e l y decreasing geometric p r o g r e s s i o n i s equal to 56,
and the sum of the squared terms of the same p r o g r e s s i o n i s 448. Find the f i r s t term
and the common r a t i o .
24
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
Problem :
Two railway s t a t i o n s are at a distance of 96 km from each other .
One t r a i n covers t h i s distance 40 minutes f a s t e r than does the other . The speed of the
f i r s t t r a i n i s 12 km/h higher than that of the second . Determine the speed of both
t r a i n s .
Write another problem i n s p i r e d by t h i s one :
A student was asked to multiply 78 by a two−d i g i t number
in which the tens d i g i t was three times as l a r g e as the units d i g i t ; by mistake, he
interchanged the d i g i t s in the second f a c t o r and thus obtained a product smaller than
the true product by 2808. What was the true product ?
Here i s the problem from the user :
{ question }
Write another problem i n s p i r e d by t h i s one .
Don ’ t j u s t change the numbers and context, but try to c r e at e a problem that r e q u i r e s
another approach to s o l v e .
Start d i r e c t l y with the problem statement and DO NOT include any phrases such as " Here
i s a new problem i n s p i r e d by a given one " .
After the problem i s generated f i n i s h your response r i g h t away .
25
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
**D.3. LLM-as-a-Judge Prompts to Detect Low-Quality Solutions**
**LLM-as-a-Judge: Prompt 1**
Below i s a mathematical question, followed by a s o l u t i o n and the expected answer .
Evaluate whether the s o l u t i o n c o r r e c t l y addresses the question and produces the
expected answer .
The s o l u t i o n might be flawed but s t i l l r e s u l t in the c o r r e c t f i n a l answer .
I f there are s i g n i f i c a n t mistakes during intermediate steps, respond with \boxed{{No}}
even i f the f i n a l answer i s c o r r e c t .
Summarize your reasoning in one sentence, then respond with e i t h e r \boxed{{Yes}} or \
boxed{{No}}.
YOUR TASK
Question : { question }
Solution : { output }
Expected_answer : {expected_answer}
**LLM-as-a-Judge: Prompt 2**
You are given a question, a proposed solution, and a r e f e r e n c e answer .
Your job i s to evaluate the proposed s o l u t i o n by comparing i t with the r e f e r e n c e answer
. Focus on both the f i n a l answer and the reasoning process .
Please remember, even i f the f i n a l answer produced by the s o l u t i o n i s correct, i f the
process i s flawed or i n c o r r e c t, i t should s t i l l be considered a wrong answer .
Follow i n s t r u c t i o n s below :
I n s t r u c t i o n s :
1. Review the question : Start by understanding the question thoroughly . Ensure that you
grasp what i s being asked before evaluating the s o l u t i o n s .
2. Analyze the proposed s o l u t i o n : Break down the proposed s o l u t i o n into i t s component
steps . I d e n t i f y the l o g i c a l reasoning and methodology used to a r r i v e at the f i n a l
answer .
3. Compare with the r e f e r e n c e answer : Look at the r e f e r e n c e answer and i t s reasoning
process . Determine how i t approaches the problem and the c o r r e c t n e s s of i t s steps .
4. I d e n t i f y e r r o r s or i n c o n s i s t e n c i e s : Check i f the proposed s o l u t i o n has any l o g i c a l
flaws, i n c o r r e c t assumptions, or d e v i a t i o n s from standard p r a c t i c e s, even i f the f i n a l
answer appears c o r r e c t .
5. Evaluate the c o r r e c t n e s s of the process : Assess whether the process used in the
proposed s o l u t i o n i s v a l i d and a l i g n s with the l o g i c a l approach of the r e f e r e n c e answer
.
6. Provide a d e t a i l e d assessment : Explain in d e t a i l whether the proposed s o l u t i o n i s
c o r r e c t or i n c o r r e c t . I f the s o l u t i o n i s c o r r e c t but the reasoning i s flawed, explain
why i t should s t i l l be considered wrong . Conversely, i f the f i n a l answer i s i n c o r r e c t
but the process was l o g i c a l, explain what went wrong .
YOUR TASK
Question : { question }
Solution : { output }
Reference answer : {expected_answer}
Summarize your reasoning within 500 words, then respond with e i t h e r \boxed{{Yes}} or \
26
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
boxed{{No}}.
Remember to put only the f i n a l conclusion " Yes " or "No" in \boxed {{}}.
**D.4. LLM-as-a-Judge for Decontamination**
**LLM Prompt for Decontamination**
I w i l l now give you two questions Original question and Candidate question, ple ase help
me determine i f the f o l l o w i n g two questions are the same .
Original question : { question }
Candidate question : { candidate }
Disregard the names and minor changes in word order that appear within .
I f t h e i r question prompts are very s i m i l a r and, without c o n s i d e r i n g the s o l u t i o n
process, they produce the same answer, we consider them to be the same question .
Please respond with only " True " or " False " based on your judgment . Do not respond with
anything e l s e .
**D.5. LLM-as-a-Judge for Evaluation**
**LLM Prompt for Final Evaluation**
You w i l l be asked to look at the two answers ( predicted and expected ) to a math problem
and to judge whether they are equivalent within the context of the problem .
Please f i r s t explain your reasoning in a couple of sentences . Then respond with only
Yes or No as your judgement on whether the two answers are the same .
When comparing answers only perform t r i v i a l s i m p l i f i c a t i o n s .
Here are a few examples .
Example 1 :
Problem : Factor $7x^3 −21x^2 + 14x$ .
Predicted answer : $7x ( x −2) (x −1) $
Expected answer : $7x (x−1)(x−2)$
Reasoning : The order of the f a c t o r s does not matter, so the answers are the same .
Judgement : Yes
Example 2 :
Problem : A r e c t a n g l e has a length of 6 meters and a width of 2 meters . I f the length i s
reduced by 3 meters and the width i s halved, what i s the new area of the r e c t a n g l e in
square meters ?
Predicted answer : 3/2
Expected answer : 1.5
Reasoning : 3/2 i s the same as 1.5
Judgement : Yes
Example 3 :
Problem : Simplify the expression $\ sqrt {{7!}} $, where $n ! $ stands f o r $n\ cdot (n−1)\ cdot
(n−2)\ cdots \ cdot 2\ cdot 1$ .
Predicted answer : 71
Expected answer : 12\ sqrt {{35}}.
Reasoning : This i s non−t r i v i a l to simplify, so the answers are d i f f e r e n t .
Judgement : No
27
-----
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data
Example 4 :
Problem : What i s the s i m p l i f i e d form of the expression $\ sqrt {{98 x^{{3}} y^{{5}} z }} ?
\ begin {{ a l i g n ∗}}
\ text {{A) }} & 2 x y z \ sqrt {{7 x y z }} &
\ text {{B) }} & 7 x^{{2}} y^{{2}} \ sqrt {{2 y z }}
\\
\ text {{C) }} & 7 x y^{{2}} \ sqrt {{2 x y z }} &
\ text {{D) }} &49 x y^{{2}} \ sqrt {{2 x y z }}
\\
\end{{ a l i g n ∗}}
Predicted answer : 7 x y^{{2}} \\ sqrt {{2 x y z }}
Expected answer : C
Reasoning : Predicted answer i s the same as the expected answer choice C.
Judgement : Yes
Example 5 :
Problem : A l i n e segment of length $5$ has one endpoint at $ (1, 2) $ and the other
endpoint at $ (4, b) $ . Find a l l p o s s i b l e values of $b$, separated by commas .
Predicted answer : −2, 6
Expected answer : 6, −2
Reasoning : The order doesn ’ t matter in the context of the problem .
Judgement : Yes
Example 6 :
Problem : Solve $\tan x = \ s i n x$ f o r $0 \ l e x \ l e 2 \ pi . $ Enter a l l the s o l u t i o n s,
separated by commas .
Predicted answer : 0, \ pi
Expected answer : 0,\ pi,2\ pi .
Reasoning : Number of s o l u t i o n s i s d i f f e r e n t .
Judgement : No
YOUR TASK
Problem : {problem}
Predicted answer : { predicted_answer }
Expected answer : {expected_answer}
28
-----
| [
"Shubham, Toshniwal",
"Ivan, Moshkov",
"Wei, Du",
"Branislav, Kisacanin",
"Alexan, Ayrapetyan",
"Igor, Gitman"
] | 2024-10-02T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2410.01560 | https://arxiv.org/abs/2410.01560 | https://www.semanticscholar.org/paper/94c82c25d8943fea9461d804257ee16c9d672548 |
PARAMANU-GANITA: Language Model with Mathematical Capabilities | In this paper, we present Paramanu-Ganita, a 208 million parameter novel Auto Regressive (AR) decoder based language model on mathematics. The model is pretrained from scratch at context size of 4096 on our curated mixed mathematical corpus. We evaluate our model on both perplexity metric and GSM8k mathematical benchmark. Paramanu-Ganita despite being 35 times smaller than 7B LLMs, outperformed generalist LLMs such as LLaMa-1 7B by 28.4% points, LLaMa-2 7B by 27.6% points, Falcon 7B by 32.6% points, PaLM 8B by 35.3% points, and math specialised LLMs such as Minerva 8B by 23.2% points, and LLEMMA-7B by 3.0% points in GSM8k test accuracy metric respectively. Paramanu-Ganita also outperformed giant LLMs like PaLM 62B by 6.4% points, Falcon 40B by 19.8% points, LLaMa-1 33B by 3.8% points and Vicuna 13B by 11.8% points respectively. The large significant margin improvement in performance of our math model over the existing LLMs signifies that reasoning capabilities of language model are just not restricted to LLMs with humongous number of parameters. Paramanu-Ganita took 146 hours of A100 training whereas math specialised LLM, LLEMMA 7B, was trained for 23,000 A100 hours of training equivalent. Thus, our approach of pretraining powerful domain specialised language models from scratch for domain adaptation is much more cost-effective than performing continual training of LLMs for domain adaptation. Hence, we conclude that for strong mathematical reasoning abilities of language model, we do not need giant LLMs and immense computing power to our end. In the end, we want to point out that we have only trained Paramanu-Ganita only on a part of our entire mathematical corpus and yet to explore the full potential of our model. | Paramanu-Ganita despite being 35 times smaller than 7B LLMs, outperformed generalist LLMs and math specialised LLMs such as Minerva 8B, LLEMMA 7B, and LLEMMA-7B in GSM8k test accuracy metric respectively and concludes that for strong mathematical reasoning abilities of language model, the authors do not need giant LLMs and immense computing power to their end. | ## PARAMANU-GANITA: Language Model with Mathematical Capabilities
**Mitodru Niyogi**
Gyan AI Research
Abu Dhabi, UAE
[email protected]
**Arnab Bhattacharya**
Dept. of Computer Science & Engineering,
Indian Institute of Technology Kanpur,
India
& Gyan AI Research
[email protected]
vron et al., 2023b), PaLM (et al., 2022), Falcon
(Almazrouei et al., 2023), Code LlaMa (Rozière
et al., 2024), MPT [1], etc.) have demonstrated multidimensional abilities, such as in open-ended dialogue or instruction following (Ouyang et al., 2022)
capabilities and being typically generalist language
models balancing the performance across the entire
distribution of natural language tasks. However,
these generalist models are humongous in size and
requires million dollars to train aside from high
engineering inference cost involved. Traditionally,
to optimize performance within specific domains
such as finance (Wu et al., 2023), medicine (Singhal et al., 2023), etc., these models have been continued trained on domain specific data. However,
domain specific continual pretraining of LLMs are
also very expensive to our opinion. For employing
a domain-specific LLM, lot of computation and inference costs are involved along with high requirement of GPUs. For example, to improve the mathematical reasoning capabilities of LLMs, LLEMMA
7B (Azerbayev et al., 2024) was trained on 256
A100 40GB GPUs for roughly 23,000 A100 training hours, which is yet very expensive. Instead of
following the domain adaptation method of LLMs
for better mathematical reasoning, we focused on
pretraining from scratch a generative mathematical
language model only on our curated high quality
mathematical corpus. This avoids requiring immense compute power, high engineering maneuver and techniques to load LLMs in memory, and
mostly high cost of training, and non-specialised
tokenizer issue of existing LLMs.
Following our previous work for domain adaptation (Niyogi and Bhattacharya, 2024b), we continued our exploration to see whether we can develop
strong reasoning mathematical language model
from scratch and compares how well it performs
with respect to LLMs in mathematical reasoning
benchmarks. We trained a powerful mathematical
**Abstract**
In this paper, we present PARAMANU-GANITA,
a 208 million parameter novel Auto Regressive
(AR) decoder based language model on mathematics. The model is pretrained from scratch
at context size of 4096 on our curated mixed
mathematical corpus. We evaluate our model
on both perplexity metric and GSM8k mathematical benchmark. Paramanu-Ganita despite
being 35 times smaller than 7B LLMs, outperformed generalist LLMs such as LLaMa-1
7B by 28.4% points, LLaMa-2 7B by 27.6%
points, Falcon 7B by 32.6% points, PaLM
8B by 35.3% points, and math specialised
LLMs such as Minerva 8B by 23.2% points,
and LLEMMA-7B by 3.0% points in GSM8k
test accuracy metric respectively. ParamanuGanita also outperformed giant LLMs like
PaLM 62B by 6.4% points, Falcon 40B by
19.8% points, LLaMa-1 33B by 3.8% points
and Vicuna 13B by 11.8% points respectively.
The large significant margin improvement in
performance of our math model over the existing LLMs signifies that reasoning capabilities of language model are just not restricted to
LLMs with humongous number of parameters.
Paramanu-Ganita took 146 hours of A100 training whereas math specialised LLM, LLEMMA
7B, was trained for 23,000 A100 hours of training equivalent. Thus, our approach of pretraining powerful domain specialised language
models from scratch for domain adaptation is
much more cost-effective than performing continual training of LLMs for domain adaptation.
Hence, we conclude that for strong mathematical reasoning abilities of language model, we
do not need giant LLMs and immense computing power to our end. In the end, we want to
point out that we have only trained ParamanuGanita only on a part of our entire mathematical
corpus and yet to explore the full potential of
our model.
**1** **Introduction**
Pretrained Large Language Models (LLMs)
(LLaMa (Touvron et al., 2023a), LLaMa-2, (Tou
[1https://www.databricks.com/blog/mpt-7b](https://www.databricks.com/blog/mpt-7b)
-----
language model from scratch which only required
146 hours of A100 training. Yet, our mathematical
language model, Paramanu-Ganita outperformed
LLEMMA 7B math specialised model on GSM8K
(Cobbe et al., 2021) benchmark by significant margin of 3 percentage points despite being 35 times
smaller in size. On the memory requirements, the
LLEMMA 7B checkpoint size is 13.5GB whereas
our model, Paramanu-Ganita checkpoint size is less
than 1GB. Comparing with LLEMMA 7B training,
we dropped the requirement of requiring 23,000
A100 hours of continual training to 146 hours of
pretraining our mathematical language model from
scratch.
Our math model is based on Paramanu (Niyogi
and Bhattacharya, 2024a), released earlier by us.
We have trained an auto-regressive model from
scratch at a context size of 4096 on a single NVidia
A100-PCIE-40GB GPU. Our work is an attempt
to make dedicated mathematical specialized model
from scratch rather than performing continual pretraining of existing LLMs for domain adaptation.
Our models are much smaller in size by large order of magnitude of LLMs, having only 208 million parameters. Hence, our models are very fast
in inference without requiring any quantization of
weights, and our math model can be run on CPU
without need of GPU.
Our main contributions are as follows:
- We have curated an exclusive mathematical
pretraining corpus with high quality mathematical text including textbooks, lecture notes,
web crawled mathematical text, mathematical
source code from various programming languages, mathematical ArXiV papers, mathematical question answers pairs from forums
like StackExchange, Reddit. We also developed a math domain specialised tokenizer
from scratch.
- We developed first ever exclusive Auto Regressive decoder mathematical model of
208 million parameters only from scratch,
Paramanu-Ganita at context size of 4096 respectively on a single GPU. We pretrained
only on a part of our curated mathematical
corpus and are yet to explore the full potential
of the capabilities of our model.
- We evaluated our mathematical pretrained
models on validation perplexity, and on model
FLOPs Utilization (MFU) metric for pretraining. Table 1 shows the validation perplexity
and MFU metrics of pretraining.
- We also benchmarked our math model on popular math benchmark, i.e, GSM8k on CoT
prompting and compared with the generalist
LLMs and math domain specialised LLMs.
- Our model, Paramanu-Ganita 208M, outperformed LLaMa-1 (33B, 13B, 7B), LLaMa2 (7B, 13B), Falcon (40B, 7B) (Almazrouei
et al., 2023), PaLM (62B, 8B), MPT (30B,
7B), Vicuna 13B (Chiang et al., 2023), and
math-specialised LLMs like Minerva 8B
(Lewkowycz et al., 2022), LLEMMA-7B on
GSM8k benchmark by large significant margin despite being smaller by multiple orders
of magnitude in size.
**2** **Background**
**2.1** **Language Modeling**
This objective of the language modeling can formally described as maximizing the probability of a
sequence of tokens w1, w2, . . ., wN
_P_ (w1, w2, . . ., wn) = _i=1_ _P_ (wi | w1, w2, . . ., wi−1)
Y
(1)
where p(wt|w0, . . . wt−1) is the probability of
token wt given the sequence of previous tokens
_w0, . . ., wt−1._
The performance of a language model is generally being evaluated using the total cross-entropy
loss, i.e, the negative log-likelihood of the observed
data under the model under consideration, which
for a given dataset is defined as:
Avg Loss = − _N[1]_ _i=1_ log(P (wi | w1, w2, . . ., wi−1))
X
(2)
Lower the loss better is the model but just computing the loss may be not intuitive. Therefore,
perplexity is a metric to evaluate the performance
of a given language model which is the exponent
of the average loss.
**2.2** **Model Flops Utilization (MFU)**
Model Flops Uitilization (MFU) (Korthikanti et al.,
2022) estimate is the ratio of the observed throughput (tokens-per-second), relative to the theoretical
-----
maximum throughput of a system at peak FLOPs.
Model flops utilization (MFU) estimate the number of flops we do per iteration. It quantifies how
efficiently the GPUs are utilized in model training.
**3** **Data**
We have curated high quality mathematical text
from mathematics text books, lecture notes, web
such as OpenWebMath (Paster et al., 2023), blogs,
articles, AlgebraStack (Azerbayev et al., 2024),
mathematical question answers pairs from StackExchange, and math classified ArXiv scientific papers. We templatised the mathematical question
answer tuples as CoT (Wei et al., 2023) prompt.
The following template was used to templatise
the mathematical question answers pairs such as
”Below is an instruction that describes a task. Write
a response that appropriately completes the request.
### Q:{question} ### A: Let’s think step by step.
{answer}”
We also included templatised training set of
GSK8k in the pretraining dataset. Therefore, our
combined mathematical corpus is a mix dataset of
mathematical text, source code of programming
languages like TeX, Python, C, Matlab, etc., and
mathematical question answers tuples in CoT templatised format.
**4** **Related Work**
(Wei et al., 2023) boosts the reasoning capacity of
LLMs by supplementing the output with a series
of intermediate steps leading to the answer. Several approaches have been suggested to enhance
the quality of these reasoning paths. For instance,
Complexity-based CoT (Fu et al., 2023) picks
examples with more steps as in-context demonstrations, demonstrating that prompting with additional reasoning steps improves performance. SelfConsistency (Wang et al., 2023b) generates multiple reasoning paths and selects the final answer
through majority voting. Another set of techniques
involves finetuning-based methods, which adapt
open-source models (like LLaMA) using insights
from advanced closed-source LLMs (GPT-4, and
GPT-3.5-Turbo). (Magister et al., 2023) explore
the transfer of reasoning abilities through knowledge distillation. (Yuan et al., 2023) advocate for
the use of rejection sampling finetuning (RFT)
to enhance mathematical reasoning performance.
WizardMath (Choi et al., 2022) introduces a reinforced evol-instruct method for strengthening rea
soning abilities through supervised fine-tuning and
PPO training (Schulman et al., 2017). MAmmoTH
(Yue et al., 2023) integrates CoT and Program-ofThought (Chen et al., 2023) rationales to teach
LLMs how to utilize external tools (such as a
Python interpreter) for solving mathematical problems. (Wang et al., 2023a) propose a constraint
alignment loss for finetuning LLMs to improve
calibration.
**5** **Training**
We have pretrained our math model, ParamanuGanita, from scratch at a context size of 4096 on a
part of our curated corpus. However, we have excluded training of our math model on ArXiv math
papers as we believe that to learn basic mathematical concepts, and acquire mathematical logical
reasoning, ArXiv math papers are not required as
they generally meant to serve beyond high school
level mathematics. We started with simple strategy
to use a part of our curated corpus which generally covers various mathematical and logical concepts till secondary school education in general.
We performed mix-training combining both mathematical plain text, source code of programming
languages, and templatised mathematical question
answers pairs in the pretraining phase. For pretraining Paramanu-Ganita (4096 context size), we
performed 95%-5% data split for pretraining, as
we wanted to use most of the dataset for pretraining. We reported the validation perplexity of our
pre-trained mathematical language model in table
1. We then fine-tuned the math model on the templatised GSM8k training dataset for 2 epochs.
However, we are also working on training multiple pretrained models from scratch to check
whether different combinations of mathematical
books, web crawled mathematical text, ArXiv math
papers, source code of relevant programming languages, and mathematical question answers pairs
from popular forums such as StackExchange, Reddit improves the reasoning ability of our models
based on popular math benchmark, GSM8K.
**6** **Evaluation**
We evaluate the model’s ability to solve mathematics problems using chain of thought reasoning. Our
evaluations include GSM8k (Cobbe et al., 2021),
the de-facto standard benchmarks for evaluating
quantitative reasoning in language models. We reported Pass@1 accuracy of Paramanu-Ganita as
-----
**Model** **Perplexity** **MFU**
Paramanu-Ganita (4096) 4.34927 40.39193
Table 1: Perplexity and MFU metrics of ParamanuGanita pretrained models.
shown in the table 2.
We used the following evaluation prompt for
GSM8k test set for our math model.
”Below is an instruction that describes a task.
Write a response that appropriately completes the
request. ### Q:{question} ### A: Let’s think step
by step. ”
Table 2 also reports the various LLMs and their
reported scores quoted from the respective publications. Paramanu-Ganita despite being 35 times
smaller than 7B LLMs, outperformed LLaMa1 7B by 28.4% points, LLaMa-2 7B by 27.6%
points, Falcon 7B by 32.6% points, PaLM 8B
by 35.3% points, Minerva 8B by 23.2% points,
and LLEMMA-7B by 3% points respectively.
Paramanu-Ganita also outperformed PaLM 62B
by 6.4% points despite being smaller by 305 times,
Falcon 40B by 19.8% points (smaller by 197 times),
LLaMa-1 33B by 3.8% points (smaller by 162
times) and Vicuna 13B by 11.8 % points respectively despite being smaller by 64 times in model
parameters. LLEMMA 34B, Minerva 62B, Minerva 540B are the giant LLMS that performed better than Paramanu-Ganita on GSM8k benchmark.
However, as we have only trained our math model
on a part of our entire corpus, so we hope its not a
fair comparison to test the full potential of our math
model, we also did not perform DPO or PPO training to improve the performance of our ParamanuGanita 208M compared to other math specialised
LLMs.
**7** **Conclusions**
In this paper, we presented an exclusive mathematical Auto Regressive Decoder based language
model, Paramanu-Ganita 208M, pretrained from
scratch on a part of our entire mathematical corpus for a context size of 4096. We evaluated our
mathematical model on validation perplexity and
benchmarked our model on popular GSM8K math
benchmark. We found that Paramanu-Ganita despite being 35 times smaller than 7B LLMs, outperformed generalist LLMs such as LLaMa-1 7B
by 28.4% points, LLaMa-2 7B by 27.6% points,
Falcon 7B by 32.6% points, PaLM 8B by 35.3%
|Model|Parameters|GSM8k Pass@1|
|---|---|---|
|LLaMa-1|33B|35.6|
|LLaMa-1|7B|11.0|
|LLaMa-2|13B|28.7|
|LLaMa-2|7B|11.8|
|Code LLaMa|7B|10.5|
|Code LLaMa|34B|29.6|
|Falcon|40B|19.6|
|Falcon|7B|6.8|
|MPT|30B|15.2|
|MPT|7B|6.8|
|GPT-J|6B|34.9|
|Vicuna|13B|27.6|
|PaLM|8B|4.1|
|PaLM|62B|33.0|
|Minerva|8B|16.2|
|Minerva|62B|52.4|
|Minerva|540B|58.8|
|LLEMMA|7B|36.4|
|LLEMMA|34B|51.5|
|Paramanu-Ganita|208M|39.4|
Table 2: Evaluation of LLMs on GSM8k test set.
PaLM, LLaMa-1 (Touvron et al., 2023a), LLaMa-2
(Touvron et al., 2023b), Falcon (Almazrouei et al.,
2023), Code Llama (Rozière et al., 2024), MPT, Vicuna (Chiang et al., 2023), Minerva (Lewkowycz et al.,
2022) (Lewkowycz et al., 2022) scores are quoted from
respective author papers.
points, math specialised LLMs such as Minerva 8B
by 23.2% points, and LLEMMA 7B by 3% points
in accuracy metric respectively. Paramanu-Ganita
also outperformed PaLM 62B by 6.4% points, Falcon 40B by 19.8% points, LLaMa-1 33B by 3.8%
points and Vicuna 13B by 11.8 % points in test accuracy metric of GSM8k respectively despite just
having 208 million parameters only. However, we
have not trained our model on the entire mathematical corpus so we have not yet explored the full
potential of our model. We are currently working on extensive study to train multiple pretrained
mathematical language models from scratch and
of various sizes, somewhere in the same range, to
explore the different combinations of mathematical
books, web crawled mathematical text, ArXiv math
papers, source code of relevant programming languages, and mathematical question answers pairs
from popular forums such as StackExchange, Reddit to judge the full potential of our models and
check how far the reasoning ability of our current
-----
model can be improved based on popular math
benchmark, GSM8K and whether it performs better than the state-of-the-art LLM on GSM8k benchmark despite just being of 208 million parameters
in size.
**References**
Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru,
Mérouane Debbah, Étienne Goffinet, Daniel Hesslow,
Julien Launay, Quentin Malartic, Daniele Mazzotta,
Badreddine Noune, Baptiste Pannier, and Guilherme
[Penedo. 2023. The falcon series of open language](http://arxiv.org/abs/2311.16867)
[models.](http://arxiv.org/abs/2311.16867)
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster,
Marco Dos Santos, Stephen McAleer, Albert Q.
Jiang, Jia Deng, Stella Biderman, and Sean Welleck.
[2024. Llemma: An open language model for mathe-](http://arxiv.org/abs/2310.10631)
[matics.](http://arxiv.org/abs/2310.10631)
Wenhu Chen, Xueguang Ma, Xinyi Wang, and
William W. Cohen. 2023. [Program of thoughts](http://arxiv.org/abs/2211.12588)
[prompting: Disentangling computation from reason-](http://arxiv.org/abs/2211.12588)
[ing for numerical reasoning tasks.](http://arxiv.org/abs/2211.12588)
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng,
Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan
Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion
[Stoica, and Eric P. Xing. 2023. Vicuna: An open-](https://lmsys.org/blog/2023-03-30-vicuna/)
[source chatbot impressing gpt-4 with 90%* chatgpt](https://lmsys.org/blog/2023-03-30-vicuna/)
[quality.](https://lmsys.org/blog/2023-03-30-vicuna/)
Jason Ingyu Choi, Saar Kuzi, Nikhita Vedula, Jie Zhao,
Giuseppe Castellucci, Marcus Collins, Shervin Malmasi, Oleg Rokhlenko, and Eugene Agichtein. 2022.
[Wizard of tasks: A novel conversational dataset for](https://aclanthology.org/2022.coling-1.310)
[solving real-world tasks in conversational settings.](https://aclanthology.org/2022.coling-1.310)
In Proceedings of the 29th International Conference
_on Computational Linguistics, pages 3514–3529,_
Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
[2021. Training verifiers to solve math word prob-](http://arxiv.org/abs/2110.14168)
[lems.](http://arxiv.org/abs/2110.14168)
[Aakanksha Chowdhery et al. 2022. PaLM: Scaling](http://arxiv.org/abs/2204.02311)
[language modeling with pathways.](http://arxiv.org/abs/2204.02311)
Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and
[Tushar Khot. 2023. Complexity-based prompting for](http://arxiv.org/abs/2210.00720)
[multi-step reasoning.](http://arxiv.org/abs/2210.00720)
Vijay Korthikanti, Jared Casper, Sangkug Lym,
Lawrence McAfee, Michael Andersch, Mohammad
[Shoeybi, and Bryan Catanzaro. 2022. Reducing acti-](http://arxiv.org/abs/2205.05198)
[vation recomputation in large transformer models.](http://arxiv.org/abs/2205.05198)
Aitor Lewkowycz, Anders Andreassen, David Dohan,
Ethan Dyer, Henryk Michalewski, Vinay Ramasesh,
Ambrose Slone, Cem Anil, Imanol Schlag, Theo
Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy
[Gur-Ari, and Vedant Misra. 2022. Solving quantita-](http://arxiv.org/abs/2206.14858)
[tive reasoning problems with language models.](http://arxiv.org/abs/2206.14858)
Lucie Charlotte Magister, Jonathan Mallinson, Jakub
Adamek, Eric Malmi, and Aliaksei Severyn. 2023.
[Teaching small language models to reason. In Pro-](https://doi.org/10.18653/v1/2023.acl-short.151)
_ceedings of the 61st Annual Meeting of the Associa-_
_tion for Computational Linguistics (Volume 2: Short_
_Papers), pages 1773–1781, Toronto, Canada. Associ-_
ation for Computational Linguistics.
[Mitodru Niyogi and Arnab Bhattacharya. 2024a. Para-](http://arxiv.org/abs/2401.18034)
[manu: A family of novel efficient indic generative](http://arxiv.org/abs/2401.18034)
[foundation language models.](http://arxiv.org/abs/2401.18034)
Mitodru Niyogi and Arnab Bhattacharya. 2024b.
[Paramanu-ayn: An efficient novel generative and](http://arxiv.org/abs/2403.13681)
[instruction-tuned language model for indian legal](http://arxiv.org/abs/2403.13681)
[case documents.](http://arxiv.org/abs/2403.13681)
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang,
Sandhini Agarwal, Katarina Slama, Alex Ray, John
Schulman, Jacob Hilton, Fraser Kelton, Luke Miller,
Maddie Simens, Amanda Askell, Peter Welinder,
Paul Christiano, Jan Leike, and Ryan Lowe. 2022.
[Training language models to follow instructions with](http://arxiv.org/abs/2203.02155)
[human feedback.](http://arxiv.org/abs/2203.02155)
Keiran Paster, Marco Dos Santos, Zhangir Azerbayev,
and Jimmy Ba. 2023. [Openwebmath: An open](http://arxiv.org/abs/2310.06786)
[dataset of high-quality mathematical web text.](http://arxiv.org/abs/2310.06786)
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten
Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi,
Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy
Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna
Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron
Grattafiori, Wenhan Xiong, Alexandre Défossez,
Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, and Gabriel
[Synnaeve. 2024. Code llama: Open foundation mod-](http://arxiv.org/abs/2308.12950)
[els for code.](http://arxiv.org/abs/2308.12950)
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec
[Radford, and Oleg Klimov. 2017. Proximal policy](http://arxiv.org/abs/1707.06347)
[optimization algorithms.](http://arxiv.org/abs/1707.06347)
Karan Singhal, Tao Tu, Juraj Gottweis, Rory Sayres,
Ellery Wulczyn, Le Hou, Kevin Clark, Stephen Pfohl,
Heather Cole-Lewis, Darlene Neal, Mike Schaekermann, Amy Wang, Mohamed Amin, Sami Lachgar,
Philip Mansfield, Sushant Prakash, Bradley Green,
Ewa Dominowska, Blaise Aguera y Arcas, Nenad
Tomasev, Yun Liu, Renee Wong, Christopher Semturs, S. Sara Mahdavi, Joelle Barral, Dale Webster,
Greg S. Corrado, Yossi Matias, Shekoofeh Azizi,
Alan Karthikesalingam, and Vivek Natarajan. 2023.
[Towards expert-level medical question answering](http://arxiv.org/abs/2305.09617)
[with large language models.](http://arxiv.org/abs/2305.09617)
-----
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier
Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal
Azhar, Aurelien Rodriguez, Armand Joulin, Edouard
[Grave, and Guillaume Lample. 2023a. Llama: Open](http://arxiv.org/abs/2302.13971)
[and efficient foundation language models.](http://arxiv.org/abs/2302.13971)
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton
Ferrer, Moya Chen, Guillem Cucurull, David Esiobu,
Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan
Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa,
Isabel Kloumann, Artem Korenev, Punit Singh Koura,
Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten,
Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu,
Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas
[Scialom. 2023b. Llama 2: Open foundation and](http://arxiv.org/abs/2307.09288)
[fine-tuned chat models.](http://arxiv.org/abs/2307.09288)
Peiyi Wang, Lei Li, Liang Chen, Feifan Song, Binghuai
Lin, Yunbo Cao, Tianyu Liu, and Zhifang Sui. 2023a.
[Making large language models better reasoners with](http://arxiv.org/abs/2309.02144)
[alignment.](http://arxiv.org/abs/2309.02144)
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc
Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery,
[and Denny Zhou. 2023b. Self-consistency improves](http://arxiv.org/abs/2203.11171)
[chain of thought reasoning in language models.](http://arxiv.org/abs/2203.11171)
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and
[Denny Zhou. 2023. Chain-of-thought prompting elic-](http://arxiv.org/abs/2201.11903)
[its reasoning in large language models.](http://arxiv.org/abs/2201.11903)
Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski,
Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David Rosenberg, and Gideon Mann. 2023.
[Bloomberggpt: A large language model for finance.](http://arxiv.org/abs/2303.17564)
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting
Dong, Keming Lu, Chuanqi Tan, Chang Zhou, and
[Jingren Zhou. 2023. Scaling relationship on learning](http://arxiv.org/abs/2308.01825)
[mathematical reasoning with large language models.](http://arxiv.org/abs/2308.01825)
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao
Huang, Huan Sun, Yu Su, and Wenhu Chen. 2023.
[Mammoth: Building math generalist models through](http://arxiv.org/abs/2309.05653)
[hybrid instruction tuning.](http://arxiv.org/abs/2309.05653)
**Acknowledgements**
The first author wants to dedicate his work to his
beloved parents, Rita Niyogi and Malay Niyogi for
their outstanding support throughout his journey.
-----
| [
"Mitodru, Niyogi",
"Arnab, Bhattacharya"
] | 2024-04-22T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2404.14395 | https://arxiv.org/abs/2404.14395 | https://www.semanticscholar.org/paper/c96cbdcbb9a45c0646dcbd6bce29cccbeb150f9b |
PATCH! Psychometrics-AssisTed benCHmarking of Large Language Models: A Case Study of Proficiency in 8th Grade Mathematics | Many existing benchmarks of large (multimodal) language models (LLMs) focus on measuring LLMs' academic proficiency, often with also an interest in comparing model performance with human test takers. While these benchmarks have proven key to the development of LLMs, they suffer from several limitations, including questionable measurement quality (e.g., Do they measure what they are supposed to in a reliable way?), lack of quality assessment on the item level (e.g., Are some items more important or difficult than others?) and unclear human population reference (e.g., To whom can the model be compared?). In response to these challenges, we propose leveraging knowledge from psychometrics - a field dedicated to the measurement of latent variables like academic proficiency - into LLM benchmarking. We make three primary contributions. First, we introduce PATCH: a novel framework for {P}sychometrics-{A}ssis{T}ed ben{CH}marking of LLMs. PATCH addresses the aforementioned limitations, presenting a new direction for LLM benchmark research. Second, we implement PATCH by measuring GPT-4 and Gemini-Pro-Vision's proficiency in 8th grade mathematics against 56 human populations. We show that adopting a psychometrics-based approach yields evaluation outcomes that diverge from those based on existing benchmarking practices. Third, we release 4 high-quality datasets to support measuring and comparing LLM proficiency in grade school mathematics and science against human populations. | null | ## PATCH! Psychometrics-AssisTed benCHmarking of Large Language Models: A Case Study of Proficiency in 8th Grade Mathematics
**Qixiang Fang & Daniel L Oberski**
Department of Methodology & Statistics
Utrecht University
Padualaan 14, 3584 CH, Netherlands
_{q.fang,d.l.oberski}@uu.nl_
**Dong Nguyen**
Department of Computing Sciences
University of Utrecht
Princetonplein 5, 3584 CC, Netherlands
_{d.p.nguyen}@uu.nl_
**Abstract**
Many existing benchmarks of large (multimodal) language models (LLMs)
focus on measuring LLMs’ academic proficiency, often with also an interest
in comparing model performance with human test takers. While these
benchmarks have proven key to the development of LLMs, they suffer
from several limitations, including questionable measurement quality (e.g.,
Do they measure what they are supposed to in a reliable way?), lack of
quality assessment on the item level (e.g., Are some items more important
or difficult than others?) and unclear human population reference (e.g., To
whom can the model be compared?). In response to these challenges, we
propose leveraging knowledge from psychometrics - a field dedicated to
the measurement of latent variables like academic proficiency - into LLM
benchmarking. We make three primary contributions. First, we introduce
PATCH: a novel framework for Psychometrics-AssisTed benCHmarking
of LLMs. PATCH addresses the aforementioned limitations, presenting
a new direction for LLM benchmark research. Second, we implement
PATCH by measuring GPT-4 and Gemini-Pro-Vision’s proficiency in 8th
grade mathematics against 56 human populations. We show that adopting
a psychometrics-based approach yields evaluation outcomes that diverge
from those based on existing benchmarking practices. Third, we release
4 high-quality datasets to support measuring and comparing LLM proficiency in grade school mathematics and science against human populations.
**1** **Introduction**
Large language models (LLMs), including their multimodal variants like vision language
models, have witnessed significant advancements in recent years. These models are typically evaluated on established benchmarks that assess their performance across a diverse
set of tasks, including commonsense reasoning (e.g., HellaSwag by Zellers et al. (2019), WinoGrande by Sakaguchi et al. (2021)), coding (HumanEval by Chen et al. (2021), Natural2Code
by Google (2023), and academic proficiency. Academic proficiency, in particular, has become a
crucial part of LLM evaluation, as evidenced by the large number of related benchmarks
(e.g., MMLU by Hendrycks et al. (2021), ARC by Clark et al. (2018), GSM8K by Cobbe et al.
(2021), DROP by Dua et al. (2019), MATH by Hendrycks et al. (2021)), and recent model
technical reports’ focus on them (e.g., OpenAI, 2023; Google, 2023). In these benchmarks,
LLM performance is also often contrasted with human performance.
Despite the success of existing benchmarks in advancing LLM research, they are not without
limitations. The first concern is measurement quality: Do these benchmarks measure what
they are supposed to in a reliable way? Many benchmarks are created via crowd-sourced
knowledge, by asking a convenience group of individuals (e.g., crowd workers, paper
authors) to create new test items (e.g., GSM8K, DROP) or collecting them from (often undocumented) sources (e.g., websites, textbooks, school exams) (e.g., MATH, MMLU, ARC).
Without domain expert input and rigorous testing of item quality, undesirable outcomes can
-----
occur, including a mismatch between a benchmark and its claimed measurement goal, missing information in a question, wrong answer keys, and low data annotation agreement (Nie
et al., 2020).[1]
Second, current benchmarks do not account for differences across test items, such as item
discriminativeness[2] and difficulty (see Section 3.1). For example, consider three items A
(easy), B (hard) and C (hard). While answering correctly to A and B would result in the
same accuracy score as answering correctly to B and C, the latter (i.e., answering correctly
to more difficult items) would imply higher proficiency. Furthermore, items that are too
easy or too difficult (i.e., low discriminativeness) will fail to differentiate models of different
proficiency levels. Thus, without accounting for item differences, benchmarking results,
especially model rankings, can be misleading.
Third, while many benchmarks are used to compare LLMs against humans, the human population to be compared is unclear (Tedeschi et al., 2023). For instance, human performance
in MATH is based on the paper’s authors; in MMLU, crowd workers; in MATH, 6 university
students. Using such convenience samples (with none to little information about sample
characteristics), the resulting human performance is local to that specific sample and cannot
be generalised to other human samples or specific populations.
To address these challenges, we propose integrating insights from psychometrics - a field
dedicated to the measurement of latent variables like cognitive abilities and academic
proficiency - into LLM benchmarking processes. In particular, we draw on two research areas
in psychometrics: item response theory (see Section 3.1) and test development (see Section 3.2
and 3.3). The former can help to estimate academic proficiency more accurately than
common practice in LLM benchmarks (e.g., means, percentages, accuracy scores). It can
also provide diagnostic information about the quality of each test item. The latter, test
development knowledge, can help to build high quality LLM benchmarks where comparison
to specific human populations can be made.
Our paper makes three primary contributions. First, we present PATCH: a novel framework
for Psychometrics-AssisTed benCHmarking of LLMs, which addresses the aforementioned
limitations of existing benchmarks. Second, we demonstrate the implementation of PATCH
by testing GPT-4 and Gemini’s proficiency in 8th grade mathematics using the released test
items and data from Trends in International Mathematics and Science Study[3] (TIMSS) 2011. We
show empirically that a psychometrics-based approach can lead to evaluation outcomes that
diverge from those obtained through conventional benchmarking practices and are more
informative, underscoring the potential of psychometrics to reshape the LLM benchmarking
landscape. Third, we make our benchmark dataset and evaluation code[4] based on TIMSS
2011 available to future researchers, along with three other mathematics and science datasets
based on TIMSS 2011 and 2008[5].
**2** **Related Work**
We are not the first to propose leveraging psychometrics for research on LLMs and other
areas in NLP. For instance, psychometric scales have been used to examine the psychological
profiles of LLMs such as personality traits and motivations (Huang et al., 2024; Pellert et al.,
2023; Dillion et al., 2023). The text in these scales can also be used to improve encoding
and prediction of social science constructs like personality traits (Kreuter et al., 2022; Vu
et al., 2020; Yang et al., 2021; Fang et al., 2023a). Psychometrics-based reliability and validity
tests have also been proposed or/and used to assess the quality of NLP bias measures (Du
et al., 2021; van der Wal et al., 2024), text embeddings (Fang et al., 2022), political stance
1We avoid calling out specific datasets here, but a quick Internet search would reveal many blogs
reporting large percentages of errors in existing LLM benchmarks.
2In psychometrics, the term “item discrimination” is used. However, given the ambiguity and
negative connotation of “discrimination”, we adopt “discriminativeness” instead.
[3http://timssandpirls.bc.edu/timss2015/encyclopedia/](http://timssandpirls.bc.edu/timss2015/encyclopedia/)
[4Available at https://github.com/fqixiang/patch llm benchmarking with psychometrics](https://github.com/fqixiang/patch_llm_benchmarking_with_psychometrics)
[5Available at https://zenodo.org/records/12531906](https://zenodo.org/records/12531906)
-----
detection (Sen et al., 2020), annotations (Amidei et al., 2020), user representations (Fang
et al., 2023b), and general social science constructs (Birkenmaier et al., 2023).
The most closely related work to our paper is the use of IRT models in NLP for constructing
more informative test datasets (Lalor et al., 2016), comparison of existing evaluation datasets
and instances (e.g., difficulty, discriminativeness) (Sedoc & Ungar, 2020; Vania et al., 2021;
Rodriguez et al., 2021; Lalor et al., 2018; Rodriguez et al., 2022), as well as identification
of difficult instances from training dynamics (Lalor & Yu, 2020; Lalor et al., 2019). Our
work distinguishes itself from these papers in two aspects. First, we do not apply IRT to
_existing LLM datasets/benchmarks. Instead, we introduce a framework for benchmarking_
LLMs by leveraging both IRT and test development knowledge from psychometrics. The
goal of this framework is to generate new, high-quality benchmarks for LLMs that warrant
valid comparison with human populations. Second, we demonstrate our framework with
a mathematics proficiency test validated on 56 human populations, and compare LLM
performance with human performance. To the best our knowledge, we are the first to apply
psychometrically validated (mathematics) proficiency tests to LLMs and make valid model
versus human comparison.
**3** **Preliminaries**
**3.1** **Item Response Theory**
Item response theory (IRT) refers to a family of mathematical models that describe the
functional relationship between responses to a test item, the test item’s characteristics (e.g.,
item difficulty and discriminativeness) and test taker’s standing on the latent construct being
measured (e.g., proficiency) (AERA et al., 2014). Unlike classical test theory and current LLM
benchmarks, which focus on the total or mean score of a test, IRT models takes into account
the characteristics of both the items and the individuals being assessed, offering advantages
like more accurate estimation of test takers’ proficiency, and item quality diagnostics. As
such, IRT models have gained widespread adoption in various fields, including education,
psychology, and healthcare, where precise measurement and assessment are crucial.
We describe below three fundamental IRT models suitable for different types of test items:
the 3-parameter logistic (3PL) model for multiple choice items scored as either incorrect or
correct, the 2-parameter logistic (2PL) model for open-ended response items scored as either
incorrect or correct, as well as the generalised partial credit (GPC) model for open-ended
response items scored as either incorrect, partially correct, or correct.
The 3PL model gives the probability that a test taker, whose proficiency is characterised by
the latent variable θ, will respond correctly to item i:
1 _ci_
_P (xi = 1_ _θ, ai, bi, ci) = ci +_ _−_ (1)
_|_ 1 + exp (−1.7 · ai · (θ − _bi))_ _[≡]_ _[P][i][,1][ (][θ][)]_
where xi is the scored response to item i (1 if correct and 0 if incorrect); θ is the proficiency
of the test taker, where higher proficiency has a greater probability of responding correctly;
_ai is the slope parameter of item i, characterising its discriminativeness (i.e., how well_
the item can tell test takers with higher θ from those with lower θ)[6]; bi is the location
parameter of item i, characterising its difficulty; ci is the lower asymptote parameter of
item i, reflecting the chances of test takers with very low proficiency selecting the correct
answer (i.e., guessing). Correspondingly, the probability of an incorrect response to item i is:
_Pi,0 = P (xi = 0 | θk, ai, bi, ci) = 1 −_ _Pi,1 (θk). The 2PL model has the same form as the 3PL_
model (Equation 1), except that the ci parameter is fixed at zero (i.e., no guessing).
The GPC model Muraki (1992) gives the probability that a test taker with proficiency θ will
have, for the i[th] item, a response xi that is scored in the l[th] of mi ordered score categories:
6The number 1.7 is a scaling parameter to preserve historical interpretation of parameter ai on the
normal ogive scale (Camilli, 1994). Also applies to 2PL and GPC models.
-----
_xi = l | θ, ai, bi, di,1, · · ·, di,mi−1 =_ ∑[m]g=[i]exp[−]0[1] [exp]∑[l]v=∑0vg[1.7]=0[ ·][1.7][ a][i][ ·][·][ a] [i]θ[ ·] −θb −i +b di +i,v d[]i,v
_≡_ _Pi,l (θ)_ []
(2)
where mi is the number of response score categories for item i, usually 3; xi is the scored
response to itemcorrect, and correct responses); i, ranging between 0 and θ, ai, bi have the same interpretations as in the 3PL and mi − 1 (i.e., 0, 1 and 2, for incorrect, partially
2PL models; di,1 is the category l threshold parameter. Setting di,0 = 0 and ∑[m]j=[i][−]1 [1] _di,j = 0_
resolves the indeterminacy of the model parameters.
Assuming conditional independence, the joint probability of a particular response pattern x
across a set of n items is given by:
_mi−1_
## ∏ Pi,l (θ)[u][i][,][l] (3)
_l=0_
_P (x_ _θ, item parameters ) =_ ∏
_|_ _i=1_
where Pi,l (θ) is of the form appropriate to the type of item (i.e., 3PL, 2PL or GPC), mi is
equal to 2 for dichotomously scored items and 3 for polytomously scored items, and ui,l is
an indicator variable defined as:
1 if response xi is in category l
_ui,l =_
0 otherwise
This function can be viewed as a likelihood function to be maximised by the item parameters.
With the estimated item parameters, θ can then be estimated (Reise & Revicki, 2014).
**3.2** **Test Development in Psychometrics**
|Psychometrics|LLM Benchmarking|
|---|---|
|1. Construct and test need specification 2. Overall planning. 3. Item development. a. Construct refinement. b. Item generation. c. Item review. d. Piloting of items. e. Psychometric quality analysis. 4. Test construction and specification. 5. Implementation and testing. 6. Psychometric quality analysis. 7. Test scoring and norming. 8. Technical Manual.|1. (Construct and) test need specification 2. Overall planning. 3. Dataset development. a. Existing item collection OR - Quality control. b. Item creation and/or annotation. - Instructions. - (Pilot) study. - Agreement analysis. - Error analysis. 4. Dataset construction. 5. Model selection and evaluation. 6. Benchmark release.|
|---|---|
Table 1: Contrasting test development between psychometrics and LLM benchmarking.
Test development in psychometrics concerns the process of developing and implementing
a test according to psychometric principles (Irwing & Hughes, 2018). Table 1 contrasts
psychometric test development (based on Irwing & Hughes (2018)) with common LLM
benchmarking procedures (based on Bowman et al. (2015); Raji et al. (2021)). What sets
psychometric test development apart from typical LLM benchmark development is its focus
on ensuring that the test matches a well-defined construct via expert-driven item generation,
rigorous pilot testing, use of factor analysis and IRT models for item and test diagnostics,
establishment of scoring and normalisation standards, and testing on representative samples
of intended test takers. The result of this elaborate process is a high-quality test that can
assess the construct of interest for the test takers in a valid and reliable way. Many large-scale
-----
assessments, such as PISA (Programme for International Student Assessment), TIMSS and
PIRLS (Progress in International Reading Literacy Study), conform to such a process.
We will use Proficiency in Grade School Mathematics (PGSM) as the construct of interest
to further illustrate this process. In Step 1, the construct of interest and the test need are
specified. For instance, how do we define PGSM? Is it based on a specific curriculum? What
does existing literature say? Which education levels are we interested in? Is the test meant
for comparison between students within a school, or between schools within a country?
Such questions help us to clarify what we want to measure and how it can be measured.
In Step 2, we make necessary planning: How many test items? What kind of item format
(e.g., multiple choice, short answer questions)? Will the test scores be standardised? How
to assess the quality of test items? What are the desired psychometric properties of the
test items (e.g., how discriminative and difficult should the items be?) and the test as a
whole (e.g., internal consistency)? Will we pilot any test item? Will the test be computer- or
paper-based? To sample test takers, what kind of sampling frames and strategies should we
use?
In Step 3, we develop test items, which is an iterative procedure involving five steps: (a)
construct refinement, where we further clarify the definition of PGSM (e.g., What content
domains should be included: number, algebra, probability theory? Is proficiency only about
knowing, or also about applying and reasoning?); (b) generate a pool of items with domain
experts; (c) review the items for obvious misfit, errors and biases; (d) pilot the items with an
ideally representative sample of test takers; (e) with the responses from the pilot step, we
can assess the psychometric properties of the test items with IRT and factor analysis (e.g.,
item discriminativeness; item difficulty; factor structure[7]). We iterate this procedure until
we have a set of test items with acceptable psychometric properties. Then, in Step 4, we
construct the PGSM test by specifying, for instance, which items to include (if not all), in
which order, how many equivalent test versions, and what scoring instructions to use.
In Step 5, the test gets implemented to the intended test takers, followed by Step 6: another
round of quality analysis. If any item displays low quality characteristics (e.g., zero or
negative discriminativeness), it will be left out of the final scoring. In Step 7, responses of
the test takers are scored for each item, and the resulting item-level scores form the basis
for estimating proficiency scores using IRT or simpler procedures like (weighted) sums.
Normalising the proficiency scores are also typical (e.g., a mean of 500 and a standard
deviation of 100) to facilitate interpretations and comparisons. Finally, in Step 8, a technical
manual is compiled, detailing all the results from Step 1–7, to facilitate correct re-use of the
collected data, the test, as well as interpretation of test scores, among other purposes.
**3.3** **LLM Benchmark Development**
Developing LLM benchmarks follows a similar yet different process. Take GSM8K (Cobbe
et al., 2021) as an example. According to the GSM8k paper, the authors started by specifying
the need for a large, high quality mathematics test at grade school level that is of moderate
difficulty for LLMs (Step 1). The implied construct (i.e., PGSM) is, however, not explicitly
linked to any specific curriculum.
Then, the overall planning is made (Step 2): The number of items should be in the thousands;
the items will be curated by crowd workers; agreement and error analysis will be used to
investigate the quality of the dataset; GPT-3 will be used to benchmark the dataset and
verify the difficulty of the dataset.
In Step 3, where dataset development[8] takes place, often one of the two strategies is used:
_either collect items from existing datasets and other sources and compile them into a new_
dataset, or, like in GSM8K, create own items from scratch (with annotations). The latter is
7Factor structure refers to the correlational relationships between the test items expected to capture
the construct of interest.
8Note that we use the term “dataset development” here, contrasting “item development” in
psychometrics, because of LLM benchmarks’ typical emphasis on large and multiple datasets rather
than concrete test items.
-----
usually an iterative procedure consisting of four parts: creating instructions (and possibly a
user interface) for item generation and/or annotation; conduct a (pilot) study to collect the
items and/or annotations; check annotator agreement; and assess errors associated with the
items or annotations. This step is iterated until a sufficient number of items and datasets
is reached while meeting desired quality standards (e.g., high annotator agreement, low
error rate). In total, GSM8K includes 8,500 items with solutions, with identified annotator
disagreements resolved and a less than 2% error rate.
In Step 4, the generated items make up the final dataset, typically split into training, evaluation and testing partitions. In Step 5, the final selection of LLMs is made and evaluated on
the dataset. Finally, in Step 6, the benchmark gets released, which typically consists of the
dataset as well as its documentation (often a research paper) and benchmarking results.
**Comparison with Psychometrics** While sharing similarity with test development in psychometrics, benchmark development for LLMs falls short on four aspects. First, the construct
of interest is often under-specified, leading to a mismatch between the intended construct
and what the dataset actually measures. Take GSM8K as an example: While the dataset
is intended to measure proficiency in grade school mathematics, the target grade level(s)
are unclear and it only focuses on one content domain (algebra), missing other relevant
ones like geometry and data. This is likely the result of not using established mathematics
curricula and domain experts to develop test items.
Second, despite researchers’ interest in comparing LLM performance with human test takers
(e.g., the GSM8K paper claims that “a bright middle school student should be able to solve
every problem”), such comparisons usually cannot be made because the test has not been
designed with humans in mind or validated on any representative samples of the test’s
target user populations.
Third, besides agreement and error analysis, LLM benchmarks can benefit from psychometric analysis of test items, (i.e., checking item discriminativeness and difficulty, as well as
the factor structure of the items). While this is not yet the norm, there have been promising
attempts (see Section 2).
Lastly, the released benchmark often does not contain sufficient details about all the steps
involved in creating the benchmark. For instance, the GSM8K paper does not present the
instructions for item creation and annotation, the results from the pilot study, the agreement
statistics, or annotator characteristics, all of which are important for external researchers to
independently validate the quality of the benchmark.
**4** **PATCH: Psychometrics-AssisTed benCHmarking of LLMs**
Figure 1 illustrates PATCH, our conceptualisation of a psychometrics-assisted framework
for benchmarking LLMs.[9] According to PATCH, the first step is to define the construct of
interest (e.g., proficiency in 8th grade mathematics).
The second step is to look for an existing validated psychometric test that measures this property; alternatively, a test can be developed from scratch, following the procedures described
in Section 3.2, which likely requires collaboration with experienced psychometricians. The
term “validated” means that the test has been tested on a representative sample of the target
population of human test takers and fulfils several psychometric quality requirements (e.g.,
discriminative items that are well distributed across different difficulty levels; showing
high reliability (e.g., high internal consistency) and validity (e.g., the test’s factor structure
matches the construct definition)).
Next (Step 3→4), we use the items in the validated psychometric test to construct prompts
for the LLMs under evaluation and then sample responses. A response typically consists of
a task description, an explanation and an answer (key). Therefore, in Step 4→5, we extract
the answer (key) for each item’s response, then grade it to obtain item scores (Step 5→6).
9PATCH is partly inspired by the Hexagon Framework of scientific measurements proposed
by Mari et al. (2023).
-----
|4. sampled responses|extraction|5. extracted responses|scoring|6. item scores|Col6|
|---|---|---|---|---|---|
|||||||
|3. prompts||||7. IRT model(s)||
||||||norm|
|||||8. proficiency estimates||
Figure 1: PATCH: A Psychometrics-AssisTed framework for benCHmarking LLMs.
For Step 2→7, the responses of human test takers (and of LLMs, if a sufficient number of
LLMs are involved) can be used to estimate IRT item parameters and subsequently the
latent proficiency scores for each test taker (human or LLM) with uncertainty estimates.
Multiple IRT models are often used because of the use of different types of test items. These
latent scores are typically standardised z-scores (i.e., mean of 0 and standard deviation of
1), which can optionally go through further normalisation (e.g., re-scaling to a mean of 500
and a standard deviation of 100) (Step 6→7). These final proficiency scores can be used for
comparison with other models and populations.
It can be said that the heart of PATCH is a validated psychometric test, which not only
provides the basis for accurate measurement of a model’s capability of interest but also
facilitates comparison between LLMs and human test takers. Unfortunately, developing
such a test can be a long and expensive process; utilising existing tests can be a shortcut,
which, however, should satisfy three requirements: clear human population reference; test
items available (released); human responses and/or item parameter estimates available.
The second and third requirement are in practice difficult to meet, as many test institutes
do not make their test items public due to commercial interests (e.g., SAT) or the need to
measure trends over time (e.g., PISA). Collaboration with test institutes would alleviate this
problem.
To the best of our knowledge, when it comes to academic proficiency tests, only TIMSS and
PIRLS tests from certain years can be readily used for PATCH-based LLM benchmarking.
TIMSS measures proficiency in grade school mathematics and science (4th grade, 8th grade,
and final year of secondary school), while PIRLS assesses reading comprehension in 9/10year-olds. Both TIMSS and PIRLS are administered in a large number of countries and
regions with representative student samples, enabling country/region-level comparisons. In
the following section, we demonstrate PATCH by measuring GPT-4 and Gemini’s proficiency
in 8th grade mathematics, using the latest available data from TIMSS 2011.
**5** **Demonstration: Measuring LLM Proficiency in 8th Grade Mathematics**
**5.1** **Data: TIMSS 2011 8th Grade Mathematics**
56 countries/regions participated in TIMSS 2011, with typically a random sample of about
150 schools in each country/region and a random sample of about 4,000 students from
these schools. These sample sizes are determined on the basis of a ≤ .035 standard error for
each country’s mean proficiency estimate. The use of random sampling makes unbiased
proficiency estimates possible at the population level. TIMSS 2011 has released a publicly
available database[10], of which three components are relevant to our study:
[10https://timssandpirls.bc.edu/timss2011/international-database.html](https://timssandpirls.bc.edu/timss2011/international-database.html)
-----
**Test Items** The TIMSS 2011 study has released 88 mathematics test items, 48 of which are
multiple choice, 30 open-ended items scored as either incorrect or correct, and 10 open-ended
items scored as either incorrect, partially correct, or correct. These items assess four content
domains representative of 8th grade mathematics curriculum (agreed upon by experts from
participating countries/regions): number, algebra, geometry, data and chance. Within each
domain, items are designed to cover various subtopics (e.g., decimals, functions, patterns)
and three cognitive domains: knowing, applying and reasoning. These test items are only
available in a PDF file that can be downloaded from the NCES website, which includes also
scoring instructions.[11] To extracting them into a format compatible with LLMs, we used
OCR tools to extract as much textual information as possible, converted mathematical objects
(e.g., numbers, symbols, equations, tables) into LaTeX format (following earlier benchmarks
like MATH (Hendrycks et al., 2021)) and figures into JPEG format. See Appendix A.1 for
examples. We have released this LLM-compatible version of test items, as well as an eighth
grade science test dataset from TIMSS 2011, an advanced secondary school mathematics
test dataset from TIMSS 2008, and an advanced secondary school physics test dataset from
TIMSS 2008[12].
**IRT and Item Parameters** The second part of the dataset concerns the specific IRT model
used for each test item and the estimated item parameters (e.g., discriminativeness, difficulty), which can be used to reconstruct the IRT models for estimating proficiency scores.
**Student Responses and Proficiency Estimates** Lastly, responses of the sampled students
(about 4,000 on average per country/region) to each test item and their proficiency estimates
have also been made available, allowing us to construct proficiency score distributions for
each country and region.
**5.2** **LLMs: GPT-4 with Vision and Gemini-Pro-Vision**
Considering that more than 1/3 of the test items contain visual elements, we chose two
vision language models: GPT-4 with Vision (GPT-4V) and Gemini-Pro-Vision, using the
respective APIs. We are aware of other LLMs with vision capabilities. However, our goal is
to showcase PATCH instead of benchmarking all relevant LLMs.
A major concern in using these closed-source LLMs is data contamination, which is difficulty
to check due to inaccessible (information about) training data. However, as our focus is on
demonstrating the PATCH framework, data contamination is less worrying. Furthermore,
data contamination is still unlikely for four reasons. First, these test items are copyrighted,
forbidding commercial use. Second, the test items are hard to extract from the PDF file.
Third, to the best of our knowledge, these test items do not exist in current LLM mathematics
benchmarks. Fourth, we ask GPT-4V and Gemini-Pro-Vision to explain or provide solutions
to the test items’ IDs (available in the PDF file). Both failed to recognise these specific test
IDs.
**5.3** **Prompts and Temperature**
We design two separate prompts for each test item: the system message and the user
message. We design the system message according to the prompt engineering guide by
OpenAI, utilising chain-of-thought and step-by-step instructions on how to respond to the
user message (i.e., with a classification of question type, an explanation and an answer
(key)).[13] The system message is the same for all test items (see Appendix A.2). Furthermore,
to account for LLMs’ sensitivity to slight variations in prompts (e.g., Sclar et al., 2024;
Loya et al., 2023), we generate 10 additional variants of the system prompt with slight
perturbations (e.g., lowercase a heading, vary the order of unordered bullet points).
[11https://nces.ed.gov/timss/pdf/TIMSS2011 G8 Math.pdf](https://nces.ed.gov/timss/pdf/TIMSS2011_G8_Math.pdf)
[12Available via https://zenodo.org/records/12531906](https://zenodo.org/records/12531906)
[13https://platform.openai.com/docs/guides/prompt-engineering](https://platform.openai.com/docs/guides/prompt-engineering)
-----
The user message is item-specific, containing both the item’s textual description and the
associated image(s) in base 64 encoded format. See Appendix A.1 for examples.[14]
Following OpenAI (2023)’s technical report, we set the temperature parameter at 0.3 for
multiple choice items and 0.6 for the others. See Appendix B for example responses.
**5.4** **Response Scoring and Proficiency Estimation**
We manually examine the sampled responses from GPT-4V and Gemini-V and score them
following the official scoring rubrics of TIMSS 2011. Then, for multiple choice items,
we apply the 3PL model (Equation 1); for open-ended items, we apply the GPC model
(Equation 2) if partially correct response is admissible, otherwise the 2PL model. We use
maximum likelihood to obtain unbiased estimates of model proficiency scores (θ) with the
mirt package in R (Chalmers, 2012). This results in 11 θ estimates per model due to the use
of 11 system message variants. We then use inverse variance weighting (Mar´ın-Mart´ınez &
Sanchez-Meca´, 2010) to combine these estimates. Inverse variance weighting gives more
weight to estimates that are more precise (i.e., having lower variance) and less weight to
those that are less precise (i.e., having higher variance). This way, we obtain a more accurate
_overall θ estimate and its 95% confidence interval (CI) for each model._
**5.5** **Results**
|A GPT−4V Korea,Rep.of Chinese Taipei Singapore Hong Kong SAR Japan Russian Federation Gemini−Pro−Vision Canada, Quebec Israel Hungary Canada, Ontario Finland England United States Canada, Alberta Australia 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0|B GPT−4V Korea,Rep.of Singapore Chinese Taipei Hong Kong SAR Japan Gemini−Pro−Vision Russian Federation Canada, Quebec Israel Finland Canada, Ontario United States England Canada, Alberta Hungary Australia 200 300 400 500 600 700 800|
|---|---|
Figure 2: Distribution of proficiency estimates for GPT-4V, Gemini-Vision-Pro and se**lected participating countries/regions of the TIMSS 2011 8th grade mathematics test. Left**
figure (A) shows the proficiency estimates based on the percentages of correct responses.
Right figure (B) shows the IRT-based proficiency estimates. The middle vertical line in each
box plot represents the weighted mean proficiency score, with the error bars indicating its
95% confidence interval. The borders of each box indicate the range of the middle 50% of all
values, with the two whiskers indicating the 5th and 95th percentiles.
Figure 2 shows the proficiency score distribution and ranking of 15 selected participating
countries and regions, GPT-4V and Gemini-Pro-Vision. Only 15 countries are shown here
to save space. The complete figures can be found in Appendix C. The proficiency scores
(x-axis) on the left panel are percentages of correct responses, which is the default approach
in current LLM benchmarking; the proficiency estimates on the right panel are based on
IRT. We make three observations. First, regardless of the method of proficiency estimation,
14While we are aware of other prompt engineering techniques, such as few-shot prompting and
self-consistency, we did not experiment with them, as our focus is on demonstrating the use of PATCH.
-----
GPT-4V has the overall best performance relative to Gemini-Pro-Vision and the average
proficiency of 8th grade students of each participating country/region. Second, the method
of proficiency estimation affects the ranking results. For instance, while Chinese Taipei is
ranked 3rd on the left, it is ranked 4th on the right; Gemini-Pro-Vision is ranked 8th on
the left, but ranked 7th on the right. Similarly, while Hungary is ranked 11th on the left,
it drops to the 16th place on the right. Third, the method of proficiency estimation affects
the estimated 95% CIs, which are usually wider when IRT is used (as it accounts for both
item and test taker variances). Notably, while on the left panel the CI of GPT-4V does not
overlap with the second best, South Korea, indicating a statistically significant difference,
they overlap on the right panel, suggesting otherwise. This finding shows that the adoption
of PATCH is likely going to make a difference to LLM benchmark results.
**6** **Conclusions**
In this paper, we propose PATCH, a psychometrics-inspired framework to address current
limitations of LLM benchmarks, especially for the purpose of model and human comparison.
We demonstrate PATCH with an 8th grade mathematics proficiency test, where PATCH
yields evaluation outcomes that diverge from those based on existing benchmarking practices. This underscores the potential of PATCH to reshape the LLM benchmarking landscape.
Nevertheless, our paper has several limitations. First, PATCH requires validated tests, which
can be resource-intensive if tests need to be developed from scratch. However, this also
opens up opportunities for collaboration between LLM researchers, psychometricians and
test institutes. Second, the validity, reliability, and fairness of using tests validated solely
on humans for LLM benchmarking are debatable due to possibly differing notions of proficiency and cognitive processes between LLMs and humans. Nonetheless, such tests are still
better than non-validated benchmarks, particularly for comparison of model and human
performance. Advancing LLM benchmarking further requires tests validated on LLMs (and
humans for model-human comparisons), necessitating theoretical work on LLM-specific
constructs and the development of LLM-specific IRT models and testing procedures. Third,
our experiment only includes two proprietary LLMs and one proficiency test. We consider
this sufficient for demonstrating PATCH, but not enough if the goal is to benchmark all
relevant LLMs across different tests.
**Acknowledgements**
We thank Anna Wegmann, Yupei Du, Melody Sepahpour-Fard, Elise Herrewijnen, Gianluca
Sperduti for their helpful suggestions and comments. This work was supported by the
Dutch Research Council (NWO) (grant number VI.Vidi.195.152 to D. L. Oberski; grant
number VI.Veni.192.130 to D. Nguyen).
**References**
AERA, APA, and NCME. The Standards for Educational and Psychological Testing. American
Educational Research Association, 2014.
Jacopo Amidei, Paul Piwek, and Alistair Willis. Identifying annotator bias: A new IRTbased method for bias identification. In Proceedings of the 28th International Conference on
_Computational Linguistics, pp. 4787–4797, 2020. doi: 10.18653/V1/2020.COLING-MAIN._
421.
Lukas Birkenmaier, Clemens Lechner, and Claudia Wagner. ValiTex - A uniform validation
framework for computational text-based measures of social science constructs. arXiv,
[2023. URL https://arxiv.org/abs/2307.02863.](https://arxiv.org/abs/2307.02863)
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. A
large annotated corpus for learning natural language inference. In Llu´ıs Marquez, Chris`
Callison-Burch, and Jian Su (eds.), Proceedings of the 2015 Conference on Empirical Meth_ods in Natural Language Processing, pp. 632–642, Lisbon, Portugal, 2015. Association for_
Computational Linguistics. doi: 10.18653/v1/D15-1075.
-----
Gregory Camilli. Teacher’s corner: origin of the scaling constant d= 1.7 in item response theory. Journal of Educational Statistics, 19(3):293–295, 1994. doi: 10.3102/10769986019003293.
R Philip Chalmers. mirt: A multidimensional item response theory package for the r
environment. Journal of Statistical Software, 48:1–29, 2012. doi: 10.18637/jss.v048.i06.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde, Jared Kaplan,
Harrison Edwards, Yura Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri,
Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke
Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad
Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, David W. Cummings,
Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William H.
Guss, Alex Nichol, Igor Babuschkin, Suchir Balaji, Shantanu Jain, Andrew Carr, Jan Leike,
Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew M. Knight, Miles
Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam
McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models
[trained on code. arXiv, 2021. URL https://arxiv.org/abs/2107.03374.](https://arxiv.org/abs/2107.03374)
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick,
and Oyvind Tafjord. Think you have solved question answering? Try ARC, the AI2
[Reasoning Challenge. arXiv, 2018. URL https://arxiv.org/abs/1803.05457.](https://arxiv.org/abs/1803.05457)
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz
Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher
Hesse, and John Schulman. Training verifiers to solve math word problems. arXiv, 2021.
[URL https://arxiv.org/abs/2110.14168.](https://arxiv.org/abs/2110.14168)
Danica Dillion, Niket Tandon, Yuling Gu, and Kurt Gray. Can AI language models replace
human participants? Trends in Cognitive Sciences, 2023. doi: 10.1016/j.tics.2023.04.008.
Yupei Du, Qixiang Fang, and Dong Nguyen. Assessing the reliability of word embedding
gender bias measures. In Proceedings of the 2021 Conference on Empirical Methods in Natural
_Language Processing, pp. 10012–10034, Online and Punta Cana, Dominican Republic, 2021._
Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.785.
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt
Gardner. DROP: A reading comprehension benchmark requiring discrete reasoning over
paragraphs. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the
_2019 Conference of the North American Chapter of the Association for Computational Linguistics:_
_Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2368–2378, Minneapolis,_
Minnesota, 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1246.
Qixiang Fang, Dong Nguyen, and Daniel L. Oberski. Evaluating the construct validity
of text embeddings with application to survey questions. EPJ Data Science, 11(1):1–31,
December 2022. doi: 10.1140/epjds/s13688-022-00353-7.
Qixiang Fang, Anastasia Giachanou, Ayoub Bagheri, Laura Boeschoten, Erik-Jan van
Kesteren, Mahdi Shafiee Kamalabad, and Daniel Oberski. On text-based personality computing: Challenges and future directions. In Findings of the Association for Computational
_Linguistics: ACL 2023, pp. 10861–10879, 2023a. doi: 10.18653/v1/2023.findings-acl.691._
Qixiang Fang, Zhihan Zhou, Francesco Barbieri, Yozen Liu, Leonardo Neves, Dong Nguyen,
Daniel L Oberski, Maarten W Bos, and Ron Dotsch. Designing and evaluating generalpurpose user representations based on behavioural logs from a measurement process
[perspective: A case study with snapchat. arXiv, 2023b. URL https://arxiv.org/abs/](https://arxiv.org/abs/2312.12111)
[2312.12111.](https://arxiv.org/abs/2312.12111)
Gemini Team Google. Gemini: a family of highly capable multimodal models. arXiv, 2023.
[URL https://arxiv.org/abs/2312.11805.](https://arxiv.org/abs/2312.11805)
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and
Jacob Steinhardt. Measuring massive multitask language understanding. In International
_[Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=](https://openreview.net/forum?id=d7KBjmI3GmQ)_
[d7KBjmI3GmQ.](https://openreview.net/forum?id=d7KBjmI3GmQ)
-----
Jentse Huang, Wenxuan Wang, Eric John Li, Man Ho LAM, Shujie Ren, Youliang Yuan,
Wenxiang Jiao, Zhaopeng Tu, and Michael Lyu. On the humanity of conversational AI:
Evaluating the psychological portrayal of LLMs. In The Twelfth International Conference on
_[Learning Representations, 2024. URL https://openreview.net/forum?id=H3UayAQWoE.](https://openreview.net/forum?id=H3UayAQWoE)_
Paul Irwing and David J. Hughes. Test development. In The Wiley Handbook of Psychometric
_Testing, pp. 1–47. John Wiley & Sons, Ltd, 2018. doi: 10.1002/9781118489772.ch1._
Anne Kreuter, Kai Sassenberg, and Roman Klinger. Items from psychometric tests as training
data for personality profiling models of twitter users. In Proceedings of the 12th Workshop
_on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis, pp. 315–323._
Association for Computational Linguistics, 2022. doi: 10.18653/v1/2022.wassa-1.35.
John P. Lalor and Hong Yu. Dynamic data selection for curriculum learning via ability
estimation. In Trevor Cohn, Yulan He, and Yang Liu (eds.), Findings of the Association
_for Computational Linguistics: EMNLP 2020, pp. 545–555, Online, 2020. Association for_
Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.48.
John P. Lalor, Hao Wu, and Hong Yu. Building an evaluation scale using item response
theory. In Jian Su, Kevin Duh, and Xavier Carreras (eds.), Proceedings of the 2016 Conference
_on Empirical Methods in Natural Language Processing, pp. 648–657, Austin, Texas, 2016._
Association for Computational Linguistics. doi: 10.18653/v1/D16-1062.
John P Lalor, Hao Wu, Tsendsuren Munkhdalai, and Hong Yu. Understanding deep learning
performance through an examination of test set difficulty: A psychometric case study. In
Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun’ichi Tsujii (eds.), Proceedings of
_the Conference on Empirical Methods in Natural Language Processing. Conference on Empirical_
_Methods in Natural Language Processing, pp. 4711–4716. Association for Computational_
Linguistics, 2018. doi: 10.18653/v1/D18-1500.
John P. Lalor, Hao Wu, and Hong Yu. Learning latent parameters without human response
patterns: Item response theory with artificial crowds. In Kentaro Inui, Jing Jiang, Vincent
Ng, and Xiaojun Wan (eds.), Proceedings of the 2019 Conference on Empirical Methods in
_Natural Language Processing and the 9th International Joint Conference on Natural Language_
_Processing (EMNLP-IJCNLP), pp. 4249–4259, Hong Kong, China, 2019. Association for_
Computational Linguistics. doi: 10.18653/v1/D19-1434.
Manikanta Loya, Divya Sinha, and Richard Futrell. Exploring the sensitivity of LLMs’
decision-making capabilities: Insights from prompt variations and hyperparameters. In
_Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 3711–3716, 2023._
doi: 10.18653/v1/2023.findings-emnlp.241.
Luca Mari, Mark Wilson, and Andrew Maul. _Measurement across the sciences: Devel-_
_oping a shared concept system for measurement._ Springer Nature, 2023. doi: 10.1007/
978-3-031-22448-5.
Fulgencio Mar´ın-Mart´ınez and Julio Sanchez-Meca. Weighting by inverse variance or by´
sample size in random-effects meta-analysis. Educational and Psychological Measurement,
70(1):56–73, 2010. doi: 10.1177/0013164409344534.
Eiji Muraki. A generalised partial credit model: Application of an EM algorithm. Applied
_Psychological Measurement, 16(2):159–176, 1992. doi: 10.1002/j.2333-8504.1992.tb01436.x._
Yixin Nie, Xiang Zhou, and Mohit Bansal. What can we learn from collective human
opinions on natural language inference data? In Bonnie Webber, Trevor Cohn, Yulan
He, and Yang Liu (eds.), Proceedings of the 2020 Conference on Empirical Methods in Natural
_Language Processing (EMNLP), pp. 9131–9143, Online, 2020. Association for Computational_
Linguistics. doi: 10.18653/v1/2020.emnlp-main.734.
[OpenAI. GPT-4 technical report. arXiv, 2023. URL https://arxiv.org/abs/2303.08774.](https://arxiv.org/abs/2303.08774)
-----
Max Pellert, Clemens M Lechner, Claudia Wagner, Beatrice Rammstedt, and Markus
Strohmaier. AI Psychometrics: Assessing the psychological profiles of large language
models through psychometric inventories. Perspectives on Psychological Science, 2023. doi:
10.1177/17456916231214460.
Deborah Raji, Emily Denton, Emily M. Bender, Alex Hanna, and Amandalynne Paullada. AI
and the everything in the Whole Wide World Benchmark. In J. Vanschoren and S. Yeung
(eds.), Proceedings of the Neural Information Processing Systems Track on Datasets and Bench_[marks, volume 1, 2021. URL https://datasets-benchmarks-proceedings.neurips.cc/](https://datasets-benchmarks-proceedings.neurips.cc/paper_files/paper/2021/file/084b6fbb10729ed4da8c3d3f5a3ae7c9-Paper-round2.pdf)_
[paper files/paper/2021/file/084b6fbb10729ed4da8c3d3f5a3ae7c9-Paper-round2.pdf.](https://datasets-benchmarks-proceedings.neurips.cc/paper_files/paper/2021/file/084b6fbb10729ed4da8c3d3f5a3ae7c9-Paper-round2.pdf)
Steven P Reise and Dennis A Revicki. Handbook of item response theory modeling. Taylor &
Francis New York, NY, 2014.
Pedro Rodriguez, Joe Barrow, Alexander Miserlis Hoyle, John P Lalor, Robin Jia, and
Jordan Boyd-Graber. Evaluation examples are not equally informative: How should that
change NLP leaderboards? In Proceedings of the 59th Annual Meeting of the Association
_for Computational Linguistics and the 11th International Joint Conference on Natural Language_
_Processing (Volume 1: Long Papers), pp. 4486–4503, 2021. doi: 10.18653/v1/2021.acl-long._
346.
Pedro Rodriguez, Phu Mon Htut, John P Lalor, and Joao Sedoc. Clustering examples in multi-˜
dataset benchmarks with item response theory. In Proceedings of the Third Workshop on
_Insights from Negative Results in NLP, pp. 100–112, 2022. doi: 10.18653/v1/2022.insights-1._
14.
Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. WinoGrande:
An adversarial winograd schema challenge at scale. Commun. ACM, 64(9):99–106, 2021.
ISSN 0001-0782. doi: 10.1145/3474381.
Melanie Sclar, Yejin Choi, Yulia Tsvetkov, and Alane Suhr. Quantifying language models’
sensitivity to spurious features in prompt design or: How I learned to start worrying
about prompt formatting. In The Twelfth International Conference on Learning Representations,
[2024. URL https://openreview.net/forum?id=RIu5lyNXjT.](https://openreview.net/forum?id=RIu5lyNXjT)
Joao Sedoc and Lyle Ungar. Item response theory for efficient human evaluation of chatbots.˜
In Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems, pp. 21–33,
2020. doi: 10.18653/v1/2020.eval4nlp-1.3.
Indira Sen, Fabian Flock, and Claudia Wagner. On the reliability and validity of detecting¨
approval of political actors in tweets. In Proceedings of the 2020 Conference on Empirical
_Methods in Natural Language Processing (EMNLP), pp. 1413–1426, Online, 2020. Association_
for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.110.
Simone Tedeschi, Johan Bos, Thierry Declerck, Jan Hajic, Daniel Hershcovich, Eduard Hovy,ˇ
Alexander Koller, Simon Krek, Steven Schockaert, Rico Sennrich, Ekaterina Shutova, and
Roberto Navigli. What’s the meaning of superhuman performance in today’s NLU? In
Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st
_Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp._
12471–12491, Toronto, Canada, 2023. Association for Computational Linguistics. doi:
10.18653/v1/2023.acl-long.697.
Oskar van der Wal, Dominik Bachmann, Alina Leidinger, Leendert van Maanen, Willem
Zuidema, and Katrin Schulz. Undesirable biases in NLP: Addressing challenges of
measurement. Journal of Artificial Intelligence Research, 79:1–40, 2024. doi: 10.1613/jair.1.
15195.
Clara Vania, Phu Mon Htut, William Huang, Dhara Mungra, Richard Yuanzhe Pang, Jason
Phang, Haokun Liu, Kyunghyun Cho, and Samuel Bowman. Comparing test sets with
item response theory. In Proceedings of the 59th Annual Meeting of the Association for
_Computational Linguistics and the 11th International Joint Conference on Natural Language_
_Processing (Volume 1: Long Papers), pp. 1141–1158, 2021. doi: 10.18653/v1/2021.acl-long.92._
-----
Huy Vu, Suhaib Abdurahman, Sudeep Bhatia, and Lyle Ungar. Predicting responses to
psychological questionnaires from participants’ social media posts and question text
embeddings. In Findings of the Association for Computational Linguistics: EMNLP 2020, pp.
1512–1524, Online, 2020. Association for Computational Linguistics. doi: 10.18653/v1/
2020.findings-emnlp.137.
Feifan Yang, Tao Yang, Xiaojun Quan, and Qinliang Su. Learning to answer psychological
questionnaire for personality detection. In Findings of the Association for Computational
_Linguistics: EMNLP 2021, pp. 1131–1142. Association for Computational Linguistics, 2021._
doi: 10.18653/v1/2021.findings-emnlp.98.
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. HellaSwag: Can
a machine really finish your sentence? In Anna Korhonen, David Traum, and Llu´ıs
Marquez (eds.),` _Proceedings of the 57th Annual Meeting of the Association for Computational_
_Linguistics, pp. 4791–4800, Florence, Italy, 2019. Association for Computational Linguistics._
doi: 10.18653/v1/P19-1472.
**A** **Prompts**
**A.1** **Example Test Items (User Messages)**
**Example 1**
The fractions 14[4] [and][ □]21 [are equivalent. What is the value of][ □] [?]
[A] 6 [B] 7 [C] 11 [D] 14
**Example 2**
Which number does K represent on this number line?
[A] 27.4 [B] 27.8 [C] 27.9 [D] 28.2
**Example 3**
The volume of the rectangular box is 200 cm[3]. What is the value of x ?
**A.2** **Example System Messages**
Base prompt:
You are given a mathematics question written in LaTeX format.
Instructions:
1. Type of question: Is it multiple choice, free text response, or drawing?
2. Think step by step, and describe your thought process and reasoning.
3. Answer:
- For multiple choice: [selected answer key].
- For free-text response: [provide your short answer].
-----
- For drawing: [describe clearly the steps to complete the drawing].
- If uncertain, make an educated guess.
Variant 1 (item type reordered):
You are given a mathematics question written in LaTeX format.
Instructions:
1. Type of question: Is it drawing, free text response, or multiple choice?
2. Think step by step, and describe your thought process and reasoning.
3. Answer:
- For drawing: [describe clearly the steps to complete the drawing].
- For free-text response: [provide your short answer].
- For multiple choice: [selected answer key].
- If uncertain, make an educated guess.
Variant 2 (new lines removed):
You are given a mathematics question written in LaTeX format. Instructions:
1. Type of question: Is it multiple choice, free text response, or drawing?
2. Think step by step, and describe your thought process and reasoning.
3. Answer: - For multiple choice: [selected answer key]. - For free-text
response: [provide your short answer]. - For drawing: [describe clearly the
steps to complete the drawing]. - If uncertain, make an educated guess.
Variant 3 (upper case heading):
You are given a mathematics question written in LaTeX format.
INSTRUCTIONS:
1. Type of question: Is it multiple choice, free text response, or drawing?
2. Think step by step, and describe your thought process and reasoning.
3. Answer:
- For multiple choice: [selected answer key].
- For free-text response: [provide your short answer].
- For drawing: [describe clearly the steps to complete the drawing].
- If uncertain, make an educated guess.
**B** **Example Responses**
Note that the example responses below correspond to the example test items in Appendix A.1.
**Example 1 (base prompt, GPT-4-V, correct)**
Type of question: Multiple choice
To find the value of the square that makes the fractions equivalent, we
need to find a number that, when divided by 21, gives the same value as 4
divided by 14.
To simplify the fraction 14[4] [, we divide both the numerator and the denomi-]
nator by their greatest common divisor, which is 2. This gives us [2]7 [.]
Now, we need to find a number that, when placed in the square and divided
by 21, gives us the fraction [2]7 [.]
Since 21 = 7 × 3, we need to find a number that, when multiplied by [1]3 [,]
gives us 2. That number is 2 × 3 = 6.
Therefore, the value of the square that makes the fractions equivalent is 6.
Answer: [A] 6
Answer: - For multiple-choice: [A] 6
-----
**Example 2 (base prompt, Gemini-Vision-Pro, incorrect)**
1. This is a multiple-choice question.
2. First, I notice that the number line is labelled from 26 to 28. This means
that the value of K must be between 26 and 28.
3. Next, I look at the answer choices. [A] 27.4 is the closest to 27, so it is a
possibility. [B] 27.8 is also close, but it is further away from 27 than [A] 27.4.
[C] 27.9 is even further away from 27 than [B] 27.8, so it is unlikely to be
the correct answer. [D] 28.2 is the furthest away from 27, so it is the least
likely to be the correct answer.
4. Based on this reasoning, I would select [A] 27.4 as the best answer.
**Example 3 (prompt variant 1, GPT4-V, correct)**
Type of Question: [Free Text Response] This is a free-text response question.
To find the value of x, we will use the formula for the volume of a rectangular box, which is:
Volume = length × width × height
The volume of the box is given as 200 cm[3], the width is 5 cm, and the height
is 4 cm. We can set up the equation as follows:
200 = x × 5 × 4
To solve for x, we divide both sides of the equation by 5 × 4:
_x =_ [200]
5 × 4
_x =_ [200]
20
_x = 10 cm_
Answer: x = 10 cm
-----
**C** **Detailed Result Figures**
|A GPT−4V Korea,Rep.of Chinese Taipei Singapore Hong Kong SAR Japan Russian Federation Gemini−Pro−Vision Canada, Quebec Israel Hungary Armenia Canada, Ontario Finland England Italy Lithuania Slovenia United States Canada, Alberta Australia Ukraine Kazakhstan Sweden New Zealand Norway Romania Lebanon Dubai,UAE Georgia Turkey Macedonia, Rep. of United Arab Emirates Thailand Malaysia Chile Abu Dhabi, UAE Qatar Iran, Islamic Rep. of Bahrain Tunisia Palestinian Jordan Indonesia Syrian Arab Rep. Morocco Saudi Arabia Oman Botswana South Africa Ghana Honduras 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0|B GPT−4V Korea,Rep.of Singapore Chinese Taipei Hong Kong SAR Japan Gemini−Pro−Vision Russian Federation Canada, Quebec Israel Finland Canada, Ontario United States England Canada, Alberta Hungary Australia Slovenia Lithuania Italy New Zealand Kazakhstan Sweden Ukraine Dubai,UAE Norway Armenia Romania United Arab Emirates Turkey Lebanon Abu Dhabi, UAE Malaysia Georgia Thailand Macedonia, Rep. of Tunisia Chile Iran, Islamic Rep. of Qatar Bahrain Jordan Palestinian Botswana Saudi Arabia Indonesia Syrian Arab Rep. Morocco Oman South Africa Honduras Ghana 200 300 400 500 600 700 800|
|---|---|
Figure 3: Distribution of proficiency estimates for GPT-4V, Gemini-Pro-Vision and all
**participants of TIMSS 2011 8th grade mathematics test. Left figure (A) shows the profi-**
ciency estimates based on the percentages of correct responses. Right figure (B) shows the
IRT-based proficiency estimates. The middle vertical line in each box plot represents the
weighted mean proficiency score, with the error bars indicating its 95% confidence interval.
The borders of each box indicate the range of the middle 50% of all values, with the two
whiskers indicating the 5th and 95th percentiles.
-----
| [
"Qixiang, Fang",
"Daniel L., Oberski",
"Dong, Nguyen"
] | 2024-07-25T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2404.01799 | https://arxiv.org/abs/2404.01799 | null |
Paraphrase and Solve: Exploring and Exploiting the Impact of Surface Form on Mathematical Reasoning in Large Language Models | This paper studies the relationship between the surface form of a mathematical problem and its solvability by large language models. We find that subtle alterations in the surface form can significantly impact the answer distribution and the solve rate, exposing the language model’s lack of robustness and sensitivity to the surface form in reasoning through complex problems. To improve mathematical reasoning performance, we propose Self-Consistency-over-Paraphrases (SCoP), which diversifies reasoning paths from specific surface forms of the problem. We evaluate our approach on four mathematics reasoning benchmarks over three large language models and show that SCoP improves mathematical reasoning performance over vanilla self-consistency, particularly for problems initially deemed unsolvable. Finally, we provide additional experiments and discussion regarding problem difficulty and surface forms, including cross-model difficulty agreement and paraphrasing transferability, and Variance of Variations (VOV) for language model evaluation. | This paper proposes Self-Consistency-over-Paraphrases (SCoP), which diversifies reasoning paths from specific surface forms of the problem and shows that SCoP improves mathematical reasoning performance over vanilla self-consistency, particularly for problems initially deemed unsolvable. | ## Paraphrase and Solve: Exploring and Exploiting the Impact of Surface Form on Mathematical Reasoning in Large Language Models
**Yue Zhou[1][∗]** **Yada Zhu[2]** **Diego Antognini[2]** **Yoon Kim[3]** **Yang Zhang[2]**
1University of Illinois Chicago 2MIT-IBM Watson AI Lab, IBM Research 3MIT CSAIL
[email protected], [email protected]
[email protected], {diego.antognini, yang.zhang2}@ibm.com
**Abstract**
This paper studies the relationship between the
surface form of a mathematical problem and
its solvability by large language models. We
find that subtle alterations in the surface form
can significantly impact the answer distribution and the solve rate, exposing the language
model’s lack of robustness and sensitivity to
the surface form in reasoning through complex
problems. To improve mathematical reasoning performance, we propose Self-Consistencyover-Paraphrases (SCoP), which diversifies reasoning paths from specific surface forms of the
problem. We evaluate our approach on four
mathematics reasoning benchmarks over three
large language models and show that SCoP
improves mathematical reasoning performance
over vanilla self-consistency, particularly for
problems initially deemed unsolvable. Finally,
we provide additional experiments and discussion regarding problem difficulty and surface
forms, including cross-model difficulty agreement and paraphrasing transferability, and Variance of Variations (VOV) for language model
evaluation.
problem. The second is the surface form, i.e., how
the questions, assumptions, and constraints are described in the math problem. Intuitively, the semantic information should be the primary determining
factor of the difficulty of the math problem, and
surface form should only have a marginal impact.
This paper investigates the extent to which this is
true for LLMs.
In this paper, we explore into the relationship
between the problem’s surface form and its solvability by an LLM. Specifically, we follow the
self-consistency setting (Wang et al., 2022) to sample multiple answers to the same math problem
and compute solve rate as the percentage of correct answers. Our primary finding is that, counterintuitively, subtle alterations in the surface form of
a math problem can significantly impact the answer
distribution and solve rate. Consider the example in
Figure 1, where the left and right panels contain an
identical math problem described in two different
ways. Despite the no change in problem semantics,
the solve rate increases from 5% to 100%, with
all reasoning paths leading to the correct answer
- what initially appears to be a difficult problem
to the language model transforms into an easily
solvable one. This phenomenon exposes the language model’s lack of robustness and sensitivity
to the surface form in reasoning through complex
problems.
Motivated by this finding, we propose to improve
the mathematical reasoning performance of the language model by diversifying reasoning paths from
specific surface forms of the problem. We leverage the language model’s paraphrasing ability to
generate surface forms with identical semantics[1]
and propose Self-Consistency-over-Paraphrases
**(SCoP), which consists of two steps: ❶** For each
math problem, generate K paraphrase using an
1Rigorously, the surface forms can be regarded as “quasiparaphrases that convey approximately the same meaning
using different words” (Bhagat and Hovy, 2013).
**1** **Introduction**
Despite the impressive performance of large-scale
language models (LLMs) across many tasks, their
ability to reason through complex problems such as
mathematics remains a bottleneck (Rae et al., 2022;
Srivastava et al., 2023; Liang et al., 2023); they can
solve problems that are challenging for humans
but can also struggle with seemingly simple ones.
This raises the following question: what factors
contribute to the difficulty of a math problem for
an LLM?
Specifically, the information in a math problem
can be divided into two types. The first is the se_mantic information, which involves the problem_
specification and knowledge involved in the math
_∗This work was done during the first author’s internship_
at MIT-IBM Watson AI Lab.
-----
|Lauren is saving 20% of every paycheck. How many more years does Lauren need to work if she plans to save for a retirement period of 20 years, live with 40% of her current annual salary, and her current salary is $100,000? Chain-of-Thoughts + Se "10""2" ".77""6" Answer "120""19.5" Distribution: "4" "14" "3.33" Majority Vote: "40"(5%) 1.54 (years) ✗ "20" "1.54" "1.67" (20%) "33.3" Solve Rate: 5%|Lauren is saving 20% of every paycheck. How many more years does Lauren need to work if she plans to save for a retirement period of 20 years, live with 40% of her current annual salary, and her current salary is $100,000?|Col3|Col4|Col5|Col6|If Lauren is saving 20% of her current salary of $100,000, how many additional years does she need to work if she intends to save for a retirement period of 20 years and live off 40% of her current annual salary?|
|---|---|---|---|---|---|---|
||||||||
|||Chain-of-Thoughts + Se||lf-|Consistency Reasoning||
||||||||
Figure 1: Comparison of the answer distribution and solve rate between surface form variations of a math word
problem from GSM8K, when prompted to GPT-3.5-turbo using Self-Consistency, with 40 sampled reasoning paths.
Solve rate can vary dramatically between surface forms with equivalent semantics.
**2** **Problem Difficulty and Surface Forms**
In this section, we present our pilot study of the
impact of surface form on LLMs’ ability to solve
the problem. In all our studies, we follow the selfconsistency setting (Wang et al., 2022), which extends over chain-of-thought (Wei et al., 2022) by
using sampling to generate a variety of reasoning
paths. From this setting, we quantify the difficulty
of a problem w.r.t a language model by its solve
**rate, which is the proportion of the reasoning paths**
that lead to the correct answer.
When the solve rate exceeds 50%, a majority
vote guarantees the correct answer. Note the solve
rate measures the difficulty of a single problem
input and is also a model-dependent metric.
To study how surface form impacts the solve
rate, we use the math word problem from the
GSM8K dataset (Cobbe et al., 2021). For each
math problem, we generate a paraphrase using
GPT-3.5-turbo[3] (detailed instructions are shown in
Appendix E). We then compare the solve rates of
the original problem and the paraphrase solved by
GPT-3.5-turbo using self-consistency with N = 40
and a temperature of 0.7.
Our finding is that the solve rate varies significantly across the surface forms. Figure 1 shows an
example with the original problem on the left and
the paraphrased one on the right. In the original
problem, the reasoning paths result in a disarrayed
answer distribution, with merely 5% achieving the
correct answer “40” and the aggregated answer
“1.54” (20%). In contrast, the solve rate of the paraphrase problem is 100%. We have identified many
[3https://platform.openai.com/docs/models/](https://platform.openai.com/docs/models/gpt-3-5)
[gpt-3-5](https://platform.openai.com/docs/models/gpt-3-5)
LLM; and ❷ Ask the LLM to generate N/K reasoning paths for each paraphrase, and then select
the most consistent answer among the N answers.
The intuition is that if a problem exhibits a low
solve rate and ineffective reasoning paths due to
its original surface form, introducing diversity in
its surface forms can be beneficial. We also introduced in-context exemplars to the language model
when paraphrasing, which are the paraphrases that
obtain a solve rate improvement over their original
problem, aiming to generate surface forms with
the same semantics yet a higher solve rate through
language models’ in-context learning abilities (Min
et al., 2022; Brown et al., 2020).
We evaluate our approach on four mathematics reasoning benchmarks: GSM8K (Cobbe
et al., 2021), AQuA (Ling et al., 2017),
MATH (Hendrycks et al., 2021), and MMLUMath (Hendrycks et al., 2020), over three large
language models: LLaMA-2-70b (Touvron et al.,
2023), GPT-3.5-turbo and GPT-4 (OpenAI, 2023).
Our experiments show that SCoP improves mathematical reasoning performance over vanilla SelfConsistency, particularly for problems initially
deemed unsolvable. In additional experiments, we
show that the difficulty ranks across language models are positively correlated, with higher agreement
within the GPT model family and simpler datasets.
We propose Variance of Variations (VOV) as a
metric for evaluating language model robustness
against surface form variations. Finally, we explain
why SCoP can be effective using a data difficulty
map based on the entropy of answer distribution
and the solve rate. Our code is publicly available.[2]
[2https://github.com/Yue-LLM-Pit/SCoP/](https://github.com/Yue-LLM-Pit/SCoP/)
-----
1 _aggregate_
|1|Col2|
|---|---|
|2|Col2|
|---|---|
|3|Col2|
|---|---|
_Chain-of-Thought (CoT)_
_P_ LM
_SC_ 4
|4|Col2|
|---|---|
_paths answer_
_SCoP (Ours)_
|1|Col2|
|---|---|
|1||
|2||
|1 2|Col2|
|---|---|
|3||
|4||
_Q1_ LM
_P_ LM _CoT_
_Q2_ LM
Exemplars
_Paraphrases_
Figure 3: A comparison between Self-Consistency and
our SCoP. SCoP splits N reasoning paths over K incontext learned paraphrases, instead of devoting all N
reasoning paths to the single original problem P . The
final answer is selected by aggregating all reasoning
paths from these paraphrases with a majority vote.
exhibits a low solve rate and ineffective reasoning
paths due to its original surface form, introducing
diversity in its surface forms would be beneficial.
There are two important notes regarding SCoP.
First, when we increase K, the total number of
reasoning paths N is held fixed, which separates
the effect of increasing the diversity of reasoning
paths from increasing the number of reasoning
paths. This also ensures a fair comparison with
other self-consistency baselines.
Second, there are two procedures in SCoP that
involve an LLM, one to generate paraphrases (Step
1) and one to generate answers (Step 2). We use
the same LLM to perform both tasks. In this way,
we can ensure that any performance improvement
of SCoP is due to the diversity of paraphrasing itself, rather than cross-sharing of knowledge across
different LLMs. In addition, there is no human annotation, training, fine-tuning, or auxiliary models
involved in our SCoP framework.
**3.2** **Paraphrase Generation**
The paraphrase generation in Step 1 is crucial to
the success of SCoP. In this work, we explore two
paraphrase generation methods.
**Naïve.** The naïve approach instructs the language
model to generate K paraphrases of the math problem. However, this could generate many paraphrases with worse solve rate, because the solve
rate change has high variability in both directions
(as shown in Figure 2).
**In-Context Learning.** To increase the chance
of generating ‘good’ paraphrases, we propose an
150
100
Count
50
0
1.0 0.5 0.0 0.5 1.0
Absolute Solve Rate Difference
Figure 2: GSM8K - solve rate difference - from original
to one of the random naive paraphrases.
more such examples with drastic improvement in
solve rate, presented in Table 6.
We further calculate the histogram of the solve
rate changes in the paraphrased problem compared
to the original one, shown in Figure 2. As can
be observed, the distribution is heavy-tailed, with
11.7% of the paraphrases resulting in over 25%
absolute improvement in solve rate and with 13%
resulting in over 25% absolute deterioration.
This phenomenon exposes the language model’s
lack of robustness and sensitivity to a comprehensive problem’s surface form. It suggests that the
challenge of some problems may not be due to
the model’s limitations, but rather the ineffective
generation of reasoning paths from certain surface
forms. We therefore seek to take advantage of this
phenomenon to improve language model reasoning
through surface form modifications, mirroring the
way paraphrasing aids a student’s cognitive and
problem-solving processes (Swanson et al., 2019).
**3** **Self-Consistency over Paraphrases**
Motivated by the findings in Section 2, we propose a framework called Self-Consistency-over**Paraphrases (SCoP), which leverages the LLMs**
to generate paraphrases of math problems to improve their ability in solving them.
**3.1** **Framework Overview**
As shown in Figure 3, SCoP consists of two steps.
_• Step 1: Paraphrase. Prompt the LLM to gen-_
erate K paraphrases of the original problem. For
notational ease, denote p as the original problem,
and _k=1[{][q][k][}][ as the][ K][ paraphrases.]_
_• Step 2: Solve. For each paraphrase, we ask the_
LLM to generate[S][K] _N/K reasoning paths, and thus_
the total number of generated answers is N . We
then select the most consistent answer across the
_N reasoning paths as the final answer._
The intuition behind SCoP is that if a problem
-----
**Algorithm 1 Paraphrase Exemplar Search**
1: Input: Training data D[tr], Nshot, margin δ. Init. Candidates list C.
2: for step t in {1, 2, . . ., T _} do_
3: **if Length(C) = Nshot then**
4: **break**
5: Sample a problem p from D[tr] without replacement.
6: Compute solve rate SR(p)
7: Obtain K Paraphrases {q1, . . ., qK _} of p._
8: **for k = 1 to K do**
9: Compute solve rate SR(qk)
10: **if SR(qk) >= SR(p) + δ then**
11: Add {p, qk} to Candidates list C.
12: **break**
dataset, which comprises college and high-schoollevel mathematics, statistics, and abstract algebra.
**Language Models** We utilize three popular
LLMs trained with RLHF (Ouyang et al., 2022):
LLaMA-2 (70B) (Touvron et al., 2023), an opensource LLM by Meta AI, GPT-3.5-turbo (version
0613), and GPT-4 (OpenAI, 2023), accessed via
the OpenAI API. All experiments are conducted in
zero-shot or few-shot settings, without training or
fine-tuning the language models. We choose the
temperature T = 0.7 and Top-p = 1.0 for sampling decoding for all three language models. The
total number of reasoning paths N we sample for
each problem is 40, following Wang et al. (2022).
**Implementation Details** For paraphrase generation (Step 1), we evaluate the two aforementioned
schemes ❶ **Naïve: We use the template “ Para-**
_phrase the following math problem: {question}”_
to prompt the language model to paraphrase
the original problem; ❷ **In-Context Learning**
**(ICLpara): We randomly select a set of 8 para-**
phrase exemplars by Algorithm 1 with margin[5]
_δ = 0.3. The details of the prompt templates are_
available in Appendix E.
For answer generation (Step 2), we also implement two schemes: ❶ **Zero-Shot Chain-of-**
**Thought (CoT) (Kojima et al., 2023), which ap-**
pends “Let’s think step by step.” to the question
text; and ❷ **Four-Shot CoT, where we append four-**
shot in-context examples with CoT to the LLM
when solving the math problems. Note that the
in-context examples for answer generation are different in functionality and format from the ones for
ICLpara.
**4.2** **Main Results**
**Zero-Shot CoT** Table 1 illustrates the performance of SCoP under the zero-shot CoT setting,
compared with the vanilla self-consistency (SC),
using LLaMA-2-70b and GPT-3.5-turbo. We vary
the number of paraphrases K across {1, 2, 4, 8}
while keeping the total number of reasoning paths
fixed as 40. Due to resource constraints, we randomly sampled 300 data points from each test set,
except for AQuA, which contains 254 testing examples.
The performance metric is the accuracy of the
self-consistency answer. We also report the accu
5We performed an ablation study of the margin effect on a
separate development sets and found that using an extremely
large margin can damage performance. See Appendix A.
in-context learning approach,[4] where we obtain
_Nshot ‘good’ paraphrases as the in-context exem-_
plars (marked as [Exemplars] in Figure 3). The
‘good’ paraphrases are formally defined as paraphrases that contribute to a solve rate improvement
(by a preset margin δ) over the original problem.
To obtain the ‘good’ paraphrases, we first generate
some candidate paraphrases using the aforementioned naïve approach on a small number of math
problems with labeled answers. We then compute
the solve rate of the original problem and the paraphrases and select those whose improvement is
over the margin δ. The detailed algorithm is presented in Algorithm 1.
**4** **Experiments**
In this section, we will describe our experiment
results evaluating the effectiveness of SCoP, as well
as additional studies on how SCoP works.
**4.1** **Experimental Settings**
**Datasets** We evaluate our approach on the following public mathematics reasoning benchmarks:
_• GSM8K (Cobbe et al., 2021) contains 8.5K lin-_
guistically diverse grade school-level math questions with moderate difficulties.
_• AQuA (Ling et al., 2017) consists of 100K al-_
gebraic word problems, including the questions,
the possible multiple-choice options, and natural
language answer rationales from GMAT and GRE.
_• MATH (Hendrycks et al., 2021) is a competition_
mathematics dataset containing 12,500 problems
with challenging concepts such as Calculus, Linear
Algebra, Statistics, and Number Theory.
_• MMLU (Hendrycks et al., 2020) is a compre-_
hensive dataset containing various subjects. We
specifically utilized the mathematics section of the
4An alternative can be automatic prompt engineering, see
Appendix B.
-----
GPT-3.5-Turbo LLaMA-2-70b
GSM8K AQuA MATH MMLU GSM8K AQuA MATH MMLU
HPR (%) 31.3 42.5 68.0 64.0 52.0 76.3 98.2 81.6
SC 76.3 (24.5) 66.9 (22.2) 59.0 (39.7) 52.8 (26.3) 58.7 (20.5) 40.5 (22.0) 10.5 (8.9) 32.8 (17.4)
_k = 1_ 72.2 (27.7) 63.4 (28.9) 55.0 (37.5) 48.4 (27.5) 51.0 (28.2) 38.1 (26.8) 24.6 (23.2) 27.2 (20.2)
SCoP _k = 2_ 76.0 (34.0) 65.8 (28.9) 56.5 (39.7) 52.8 (32.5) 54.3 (26.9) 39.5 (25.0) 29.8 (28.6) 29.6 (22.3)
(Naïve) _k = 4_ 77.7 (36.2) 67.3 (29.8) 57.5 (39.0) 56.0 (36.3) 55.7 (32.1) 41.4 (25.6) **31.6 (30.4)** 32.0 (24.9)
_k = 8_ 79.3 (39.4) 68.1 (33.5) 59.5 (43.4) 55.6 (33.8) 60.3 (33.3) 41.4 (25.6) 28.1 (26.8) 35.6 (28.0)
_k = 1_ 77.9 (39.0) 66.4 (29.8) 54.0 (36.8) 52.5 (32.6) 58.7 (39.9) 42.9 (29.8) 23.4 (22.0) 34.6 (23.7)
SCoP _k = 2_ **80.5 (39.2)** 68.5 (31.7) 57.5 (39.1) 55.5 (34.1) 59.3 (36.3) 43.7 (30.4) 24.6 (23.2) 37.6 (26.3)
(ICLpara) _k = 4_ 79.2 (38.3) **70.5 (35.4)** 58.0 (41.2) **58.0 (39.5)** 61.7 (40.5) 44.5 (30.4) 26.9 (25.6) **37.8 (26.5)**
_k = 8_ 80.2 (40.6) 69.7 (34.4) **60.0 (44.1)** 56.5 (34.9) **63.3 (40.5)** **46.5 (31.9)** 25.2 (23.8) 37.6 (25.8)
Table 1: Accuracy of SCoP distributing N/K reasoning paths over K in {1, 2, 4, 8} paraphrases in Naïve and
ICLpara settings, against Self-Consistency (SC). Hard Problem Ratio (HPR%) represents the percentage of problems
with an original solve rate ≤ 0.5 by Self-Consistency (SC). Accuracy is reported for both Hard Problems (HPR%
_≤_ 0.5) (inside parentheses) and global accuracy across the entire dataset (outside parentheses).
Model GSM8K AQuA MATH MMLU
HPR (%) 56 75.2 95.2 81.6
LLaMA-2-70b Self-Consistency 61.1 (30.0) 44.1 (25.7) 13.4 (9.2) 34.4 (19.6)
SCoP (ICLpara, k = 8) **65.1 (38.6)** **48.8 (33.5)** **23.6 (20.2)** **36.4 (27.0)**
HPR (%) 22 36 75 62
GPT-3.5-Turbo Self-Consistency 80.0 (9.1) 70.0 (16.7) 51.6 (29.8) 54.4 (26.9)
SCoP (ICLpara, k = 8) **82.0 (36.4)** **74.0 (27.8)** **57.6 (38.2)** **58.4 (36.5)**
HPR (%) 4 18 58 38
GPT-4 Self-Consistency 98.0 (50.0) 84.0 (11.1) 64.0 (37.9) 74.0 (31.6)
SCoP (ICLpara, k = 8) 98.0 (50.0) **86.0 (33.3)** **66.0 (41.4)** **78.0 (57.9)**
Table 2: A comparison of the performance (accuracy) between SC and SCoP (ICLpara paraphrasing, with k = 8)
using 4-shot in-context chain-of-thought exemplars over three language models. Accuracy is reported for both Hard
Problems (HPR% ≤ 0.5) (inside parentheses) and global accuracy across the entire dataset (outside parentheses).
racy over hard problems, defined as the problems
whose original accuracy is below 50%. The accuracies over all problems and hard problems are reported inside and outside parentheses respectively.
HPR% (Hard Problem Ratio) denotes the percentage of such hard problems.
There are three general observations. First,
SCoP with the two paraphrasing schemes both
outperform the vanilla self-consistency baseline.
Surprisingly, even the naïve paraphrasing can lead
to performance improvement, despite the high
chances of generating paraphrases with a worse
solve rate (see Figure 2). We will discuss a hypothesis in Section 5. Between the two schemes,
ICLpara consistently outperforms Naïve. Second,
the performance improvement generally increases
as K increases. Third, more significant performance gain over LLaMA-2-70B.
The results further indicate that MATH and
MMLU are considerably more challenging than
GSM8K and AQuA, as evidenced by their high
HPR% and low overall accuracy. Moreover, significant accuracy gains are from the original “Hard
Problems”, suggesting that changing surface forms
can solve the problems initially deemed unsolvable by self-consistency. Finally, when solving the
MATH dataset with LLaMA-2-70b, ICLpara underperforms Naïve paraphrasing. We hypothesize that
the MATH problems present a significant challenge
for LLaMA-2-70b, making it difficult to effectively
learn paraphrasing from in-context examples.
**Four-Shot CoT** One caveat of the zero-shot CoT
results is that SCoP (ICLpara) has indirect access
to additional ground-truth information from incontext exemplars. There is also a question of
whether the advantage of SCoP over SC will diminish as both are exposed to more examples. To
ensure a fair comparison and further validate the
effectiveness of SCoP, Table 2 shows results under the four-shot CoT setting, where the baselines
also have access to some ground-truth answer information. Due to resource constraints, we evaluate GPT4 with 100 random samples from each
dataset. The results show that while four-shot CoT
can improve SC and SCoP in general (compared
with zero-shot CoT), SCoP still consistently outperforms SC over all three language models. The
-----
GPT3.5, GPT4 GPT3.5, LLaMA-2 GPT4, LLaMA-2
GSM8K 0.573** 0.649*** 0.445*
AQUA 0.543*** 0.227*** 0.314*
MATH 0.554*** 0.242* 0.433*
MMLU 0.313* 0.320*** 0.233
Table 3: Spearman’s rank correlation of original problems’ solve rate across language models.
only exception is GPT4 on GSM8K, which already
achieves near-perfect performance with SC, thus
SCoP only achieves equivalent performance.
**4.3** **Additional Studies**
**Searching for Exemplars** Since our in-context
learning paraphrasing scheme requires access to
ground-truth answers, we would like to study how
many problems with ground-truth answers are
needed. Figure 4 illustrates how many data points
in the training set, on average, need to be sampled
to obtain Nshot ‘good’ paraphrases (x-axis) with
different margins. We can observe that, although
satisfying a large margin requires more samples, it
is relatively easy (typically every ±5 example) to
find a sample that substantially improves the solve
rate after paraphrasing. This, again, indicates the
sensitivity of the language model to surface form
variations in mathematical reasoning.
**Difficulty Beliefs Across Language Models** An
intriguing question is how different language models rank the difficulty of the problems. We measure
the agreement between language models on problem difficulty by Spearman’s rank correlationof the
solve rate for original problems across four datasets.
As shown in Table 3, the ranks of the difficulty (by
solve rate) are all positively correlated. However,
the degree of correlation varies, with higher agreement observed within the GPT model family and
on simpler datasets.
**Paraphrase Transfer** We investigate whether
paraphrases from a stronger LLM can be transferred to weaker ones and improve SCoP. Table 4
demonstrates the paraphrase transfer performance
of SCoP (Naïve, k = 8) on 100 randomly sampled data points from MMLU and GSM8K under
the zero-shot CoT setting. In general, paraphrases
produced by GPT-4 can be utilized by GPT3.5turbo or LLaMA-2-70b for further performance
improvements, with an exception with LLaMA-2
on MMLU, where GPT4 and LLaMA-2 exhibit the
lowest Spearman rank correlation of solve rate. We
hypothesize that the benefits of transferring para
Solver Paraphraser MMLU GSM8K
GPT-3.5 Self 50.0 78.0
GPT-3.5 GPT-4 **54.0** **84.0**
LLaMA-2 Self **37.0** 61.0
LLaMA-2 GPT-4 34.0 **69.0**
Table 4: Performance of SCoP (Naïve, k = 8) on MMLU
and GSM8K, with different paraphrasers.
GSM8K AQuA MATH MMLU
Naïve 20.3 17.5 12.9 17.5
LLaMA2
ICLpara 18.9 15.7 12.2 16.6
Naïve 20.6 16.1 15.8 16.9
GPT-3.5
ICLpara 16.2 10.7 15.6 15.6
GPT-4 ICLpara 9.7 11.5 17.0 21.3
Table 5: VOV values across datasets and language models, shown as standard deviation.
phrases across models may depend on the agreement in their beliefs of problem difficulty.
**Variance of Variations** In light of the considerable variability observed in solve rates among
problem surface forms (Figure 2), we propose and
advocate Variance of Variations (VOV) for evaluating language models on reasoning robustness.
Let X(p) ∈ [0, 1] be the random variable representing the solve rates of various paraphrases of a
problem p. Then the VOV value of the dataset D
is then defined as:
VOV = Ep _D[Var(X(p))]_ (1)
_∼_
where Var(·) is the variance. A large value of VOV
indicates high variability in the language model’s
reasoning ability against problem surface forms.
We compute VOV using the solve rate for the k = 8
paraphrases and the original problem as X(p) for
each p. As shown in Table 5, while VOV decreases
when a robust model solves a more manageable
dataset (e.g., GPT-4 on GSM8K), and ICLpara generated paraphrases can generally reduce VOV, VOV
remains unreasonably high over more challenging
datasets and all language models.
**Examples of ‘Good’ Paraphrases** We provide
some qualitative examples comparing the solve
rates between the original problem and a paraphrased version in Table 6. It is difficult to visually
tell what contributes to a good paraphrase. We will
publish these data to encourage future research.
**5** **Discussion**
We have an intriguing observation that even the
naïve scheme of generating math paraphrases can
-----
200
150
100
50
150
125
100
75
50
25
100
80
60
40
20
250
200
150
100
50
|Col1|margin = 0.2|
|---|---|
||margin = 0.3 margin = 0.4|
||margin = 0.5|
|||
|Col1|marg|in =|0.2|Col5|
|---|---|---|---|---|
||marg marg|in = in =|0.3 0.4||
||marg|in =|0.5||
||||||
|gin =|0.2|Col3|Col4|Col5|
|---|---|---|---|---|
|gin = gin =|0.3 0.4||||
|gin =|0.5||||
||||||
|margin = 0.2|Col2|Col3|Col4|
|---|---|---|---|
|margin = 0.3 margin = 0.4||||
|margin = 0.5||||
|||||
margin = 0.2
margin = 0.3
margin = 0.4
margin = 0.5
margin = 0.2
margin = 0.3
margin = 0.4
margin = 0.5
margin = 0.2
margin = 0.3
margin = 0.4
margin = 0.5
margin = 0.2
margin = 0.3
margin = 0.4
margin = 0.5
# Shots
(a)
# Shots
(b)
# Shots
(c)
# Shots
(d)
Figure 4: (a) GSM8K (b) AQuA (c) MATH (d) MMLU. The average number of data points in the training set
needed for obtaining Nshot exemplars at different margins.
1.0
0.8
0.6
0.4
0.2
0.0
0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0
|Col1|Col2|Col3|Original Paraphr Improve|ased d|
|---|---|---|---|---|
||||||
||||||
||||||
||||||
||||||
Entropy
1.0
0.8
0.6
0.4
0.2
0.0
1.0
0.8
0.6
0.4
0.2
0.0
0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0
|Col1|Col2|Col3|Original|Col5|
|---|---|---|---|---|
||||Original Paraphrased Deteriorated-Overcon|fident|
||||||
||||||
||||||
||||||
||||||
Original
Paraphrased
Deteriorated-Overconfident
Entropy
(b)
0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0
|Col1|Col2|Original|Col4|
|---|---|---|---|
|||Original Paraphrased Deteriorated-Un|certain|
|||||
|||||
|||||
|||||
|||||
Original
Paraphrased
Deteriorated-Uncertain
Entropy
(c)
(a)
Figure 5: Data Difficulty Map for GSM8K using GPT3.5, with three types of changes from solving the original
problem to one of its random paraphrases: (a) Improvement, (b) Overconfidence, and (c) Uncertainty. Arrows
indicate the solve rate and entropy change from solving the original problem to its paraphrased version.
improve the overall accuracy. However, the naïve
scheme has a significant chance of generating
worse paraphrases. Why would aggregating over
the mixture of better and worse paraphrases still
significantly improve the performance?
To explain this, Figure 5 shows three scatter plots
of the solve rate against the entropy of answer distributions. The outcome of solving each random
paraphrase is represented as a black dot. As can be
observed, the dots roughly form a triangular region.
The top left corner represents the ideal case with
high solve rates and high confidence. The bottom
corners, on the other hand, represent two failure
modes. The bottom right corner represents the case
with low solve rates and low confidence, and the
bottom left corner with low soft rates but high confidence (commonly known as over-confidence).
The blue arrows in Figure 5(a) visualize the
cases where the paraphrases improve the solve rate,
and they mostly point to the top-left corner. The
arrows in Figures 5(b) and (c) represent the cases
where the paraphrases lower the solve rate, and we
can observe that the arrows pointing to the bottom
right corner (yellow arrows in (b)) far outnumber
those to the bottom left corner (red arrows in (c)).
This indicates that while the ‘good’ paraphrases
would sharpen the answer distribution, the ‘bad’
paraphrases mostly would flatten the distribution.
Since the final aggregated answer distribution is
predominantly influenced by the sharp distributions, the damage brought by the “bad” paraphrases
is small compared to the benefit brought by the
‘good’ paraphrases, and thus the aggregate effect
across all the paraphrases is still positive.
**6** **Related Work**
**Mathematical Reasoning in LLMs** The complexity of mathematics necessitates System-2 reasoning, characterized by a slow, step-by-step cognitive process (Kahneman, 2011). Numerous works
have sought to emulate this process in solving mathematics with LLMs (Wei et al., 2022; Wang et al.,
2022; Kojima et al., 2023; Lightman et al., 2023;
Qiao et al., 2022). As a prominent framework,
chain-of-thought (Wei et al., 2022; Kojima et al.,
2023) prompts the language model to generate a
sequence of reasoning steps instead of a direct answer; Wang et al. (2022) extended chain-of-thought
by Self-Consistency, in which they replaced greedy
decoding with sampling decoding to generate a
-----
**Problem** **Source** **Label** **SR** **Voted (F.)**
**Original: Jenna has 4 roommates. Each month the electricity bill is $100. How much will each**
0.15 "300" (0.8)
roommate pay per year for electricity, if they divide the share equally?
GSM8K "240"
**Paraphrased: Jenna shares an apartment with 4 other people. The electricity bill is $100 per month.**
0.95 "240"
If they split the bill equally, how much will each roommate contribute towards the electricity bill in a year?
**Original: Jenny goes to the florist to buy some flowers. Roses cost $2 each and $15 for a dozen.**
If she bought 15 roses and arrived with five 5 dollar bills and they only have quarters for change, 0.0 "20" (0.3)
how many quarters does she leave with?
GSM8K "16"
**Paraphrased: Jenny visits the flower shop to purchase flowers. She can buy roses individually for $2 each**
or buy a dozen roses for $15. Jenny decides to buy 15 roses in total. She pays with five $5 bills and 0.55 "16"
the florist can only give her change in quarters. The question asks how many quarters Jenny receives as change.
**Original: Assistants are needed to prepare for preparation. Each helper can make either 2 large cakes or**
35 small cakes/hr. The kitchen is available for 3 hours and 20 large cakes & 700 small cakes are needed. 0.2 A (0.25)
How many helpers are required?
**Paraphrased: How many helpers are needed if each helper can make either 2 large cakes or 35 small cakes** AQUA B
per hour, and the kitchen is available for 3 hours and needs 20 large cakes and 700 small cakes? 0.8 B
**Options:[A)8,B)10,C)12,D)15,E)19]**
**Original: A starts a business with Rs.40,000. After 2 months, B joined him with Rs.60,000. C joined them after**
some more time with Rs.120,000. At the end of the year, out of a total profit of Rs.375,000, C gets Rs.150,000 0.1 C (0.4)
as his share. How many months after B joined the business, did C join?
**Paraphrased: A starts a business with Rs.40,000 and after 2 months, B joins with Rs.60,000. C joins the business** AQUA B
at some point later with Rs.120,000. At the end of the year, the total profit is Rs.375,000, and C receives
0.45 B
Rs.150,000 as their share. How many months after B joined the business did C join?
**Options:[A)2,B)4,C)23,D)24,E)84]**
**Original: A star-polygon is drawn on a clock face by drawing a chord from each number to the fifth number**
counted clockwise from that number. That is, chords are drawn from 12 to 5, from 5 to 10, from 10 to 3, 0.05 "150" (0.4)
and so on, ending back at 12. What is the degree measure of the angle at each vertex in the star-polygon?
MATH "30"
**Paraphrased: What is the measure of the angle at each vertex in the star-polygon formed by drawing a**
0.5 "30"
chord from each number on the clock face to the fifth number counted clockwise from that number?
**Original: By partial fractions,**
Find A + B.
1
_ax[2]+bx+c_ [=]
**Original: By partial fractions,** _ax[2]+bx+c_ [=] _√b[2]_ 4ac + _√b[2]_ 4ac Find A + B. 0.2 "1" (0.25)
_x−_ _[−][b][ +]_ 2a _−_ _x−_ _[−][b][ −]_ 2a _−_
**Paraphrased: Find the sum of A and B in the expression** MATH "0"
_ax[2]+1bx+c_ [=] _x−_ _[−][b][+]A[√]2ba[2]−4ac_ + _x−_ _[−][b][−]B[√]2ba[2]−4ac_ _. *Note the LATE[Xcode was paraphrased from][ \frac][ to][ \dfrac.]_ 0.65 "0"
**Original: Statement 1 | For every positive integer n there is a cyclic group of order n.**
0.05 C (0.95)
Statement 2 | Every finite cyclic group contains an element of every order that divides the order of the group.
**Paraphrased: Statement 1 says that there exists a cyclic group of any positive integer n.** MMLU A
Statement 2 says that in any finite cyclic group, there is an element for every possible order that divides
0.5 A
the order of the group.
**Options:[A)True,True, B)False,False, C)True,False, D)False,True]**
**Original: What is the probability that a randomly selected integer in the set** 1, 2, 3, . . ., 100 is divisible by 2
_{_ _}_ 0.25 A (0.4)
and not divisible by 3? Express your answer as a common fraction.
**Paraphrased: What is the chance that if we randomly choose an integer from the set of numbers 1 to 100,**
MMLU D
it will be divisible by 2 but not divisible by 3? Write your answer as a fraction. 0.8 D
**Options:[A)** 6631 [, B)] 1766 [, C)] 1731 [, D)] 1750 []]
Table 6: Qualitative examples where the original problems and corresponding surface form variations exhibit
substantial solve rate difference using GPT-3.5-turbo.
variety of reasoning paths, with multiple paths potentially leading to the same answer from different
angles. Other multi-step reasoning variations with
verifiers exist (Lu et al., 2023; Besta et al., 2023;
Yao et al., 2023); however, they are less related
to our focus primarily on the language model’s
internal ability to solve mathematical problems.
**Paraphrasing Variability** Previous research on
the impact of paraphrasing mathematical problems
on their solvability by language learning models
(LLMs) is limited. The study by (Gonen et al.,
2022) explored how paraphrased instructions affect
the performance of traditional NLP benchmarks.
This sensitivity of instructive prompts has inspired
further research in prompting learning (Shin et al.,
2020; Zhou et al., 2023; Sordoni et al., 2023) and
in-context exemplar mechanisms (Min et al., 2022;
Brown et al., 2020; Ye and Durrett, 2022). However, our work focuses on the sensitivity of the
mathematical problem presentation itself instead
of the instruction or in-context examples.
**7** **Conclusions**
This work highlights the variability in the solve rate
of large-scale language models to the surface form
of mathematical problems. Leveraging this, we
introduced the Self-Consistency-over-Paraphrases
(SCoP), which improves mathematical reasoning
performance over Self-Consistency. We hope our
findings will inspire the need for more robust language models that can reason effectively regardless
of how a problem is presented.
-----
**Limitations**
While we derive thorough conclusions about the
relationship between the surface form of a mathematical problem and its solvability by large-scale
language models with the effectiveness of SCoP
and additional studies, one limitation is the need for
a mechanism for identifying or generating surface
forms that are easier to solve than others. The study
is solely conducted in English, while the generalizability of SCoP in other languages is unexplored.
Future research could address these by exploring
the rationalization of surface forms, i.e., determining the optimal form given the original one, and the
verifiability of the framework in other languages.
**Ethics Statement**
The datasets that we used in experiments are publicly available. In our work, we explore the relationship between the surface form of a mathematical
problem and its solvability by large-scale language
models. We do not expect any direct ethical concern from our work.
**Acknowledgements**
This study was supported by MIT-IBM Watson AI
Lab.
**References**
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz
Lehmann, Michal Podstawski, Hubert Niewiadom[ski, Piotr Nyczyk, and Torsten Hoefler. 2023. Graph](http://arxiv.org/abs/2308.09687)
[of thoughts: Solving elaborate problems with large](http://arxiv.org/abs/2308.09687)
[language models.](http://arxiv.org/abs/2308.09687)
[Rahul Bhagat and Eduard Hovy. 2013. What Is a Para-](https://doi.org/10.1162/COLI_a_00166)
[phrase? Computational Linguistics, 39(3):463–472.](https://doi.org/10.1162/COLI_a_00166)
Tom Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, et al. 2020. Language models are few-shot
learners. Advances in neural information processing
_systems, 33:1877–1901._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, et al. 2021. Training verifiers to solve math
word problems. arXiv preprint arXiv:2110.14168.
Hila Gonen, Srini Iyer, Terra Blevins, Noah A. Smith,
[and Luke Zettlemoyer. 2022. Demystifying prompts](http://arxiv.org/abs/2212.04037)
[in language models via perplexity estimation.](http://arxiv.org/abs/2212.04037)
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou,
Mantas Mazeika, Dawn Song, and Jacob Steinhardt.
2020. Measuring massive multitask language understanding. In International Conference on Learning
_Representations._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
Jacob Steinhardt. 2021. Measuring mathematical
problem solving with the math dataset. NeurIPS.
Daniel Kahneman. 2011. _Thinking, fast and slow._
macmillan.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yu[taka Matsuo, and Yusuke Iwasawa. 2023. Large lan-](http://arxiv.org/abs/2205.11916)
[guage models are zero-shot reasoners.](http://arxiv.org/abs/2205.11916)
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris
Tsipras, Dilara Soylu, Michihiro Yasunaga, et al.
[2023. Holistic evaluation of language models.](http://arxiv.org/abs/2211.09110)
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri
Edwards, Bowen Baker, Teddy Lee, Jan Leike,
John Schulman, Ilya Sutskever, and Karl Cobbe.
2023. Let’s verify step by step. _arXiv preprint_
_arXiv:2305.20050._
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blun[som. 2017. Program induction by rationale genera-](https://doi.org/10.18653/v1/P17-1015)
[tion: Learning to solve and explain algebraic word](https://doi.org/10.18653/v1/P17-1015)
[problems. In Proceedings of the 55th Annual Meet-](https://doi.org/10.18653/v1/P17-1015)
_ing of the Association for Computational Linguistics_
_(Volume 1: Long Papers), pages 158–167, Vancouver,_
Canada. Association for Computational Linguistics.
Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, KaiWei Chang, Ying Nian Wu, Song-Chun Zhu, and
[Jianfeng Gao. 2023. Chameleon: Plug-and-play com-](http://arxiv.org/abs/2304.09842)
[positional reasoning with large language models.](http://arxiv.org/abs/2304.09842)
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe,
Mike Lewis, Hannaneh Hajishirzi, and Luke Zettle[moyer. 2022. Rethinking the role of demonstrations:](https://doi.org/10.18653/v1/2022.emnlp-main.759)
[What makes in-context learning work? In Proceed-](https://doi.org/10.18653/v1/2022.emnlp-main.759)
_ings of the 2022 Conference on Empirical Methods in_
_Natural Language Processing, pages 11048–11064,_
Abu Dhabi, United Arab Emirates. Association for
Computational Linguistics.
[OpenAI. 2023. Gpt-4 technical report.](http://arxiv.org/abs/2303.08774)
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang,
Sandhini Agarwal, Katarina Slama, Alex Ray, John
Schulman, Jacob Hilton, Fraser Kelton, Luke Miller,
Maddie Simens, Amanda Askell, Peter Welinder,
Paul Christiano, Jan Leike, and Ryan Lowe. 2022.
[Training language models to follow instructions with](http://arxiv.org/abs/2203.02155)
[human feedback.](http://arxiv.org/abs/2203.02155)
Shuofei Qiao, Yixin Ou, Ningyu Zhang, Xiang Chen,
Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang,
and Huajun Chen. 2022. Reasoning with language model prompting: A survey. arXiv preprint
_arXiv:2212.09597._
-----
Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, et al. 2022.
[Scaling language models: Methods, analysis and in-](http://arxiv.org/abs/2112.11446)
[sights from training gopher.](http://arxiv.org/abs/2112.11446)
Taylor Shin, Yasaman Razeghi, Robert L. Logan IV
[au2, Eric Wallace, and Sameer Singh. 2020. Auto-](http://arxiv.org/abs/2010.15980)
[prompt: Eliciting knowledge from language models](http://arxiv.org/abs/2010.15980)
[with automatically generated prompts.](http://arxiv.org/abs/2010.15980)
Alessandro Sordoni, Xingdi Yuan, Marc-Alexandre
Côté, Matheus Pereira, Adam Trischler, Ziang Xiao,
Arian Hosseini, Friederike Niedtner, and Nicolas Le
[Roux. 2023. Joint prompt optimization of stacked](http://arxiv.org/abs/2306.12509)
[llms using variational inference.](http://arxiv.org/abs/2306.12509)
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao,
Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch,
[Adam R. Brown, et al. 2023. Beyond the imitation](https://openreview.net/forum?id=uyTL5Bvosj)
[game: Quantifying and extrapolating the capabili-](https://openreview.net/forum?id=uyTL5Bvosj)
[ties of language models. Transactions on Machine](https://openreview.net/forum?id=uyTL5Bvosj)
_Learning Research._
H Lee Swanson, Jennifer E Kong, Amber S Moran, and
Michael J Orosco. 2019. Paraphrasing interventions
and problem-solving accuracy: Do generative procedures help english language learners with math
difficulties? Learning Disabilities Research & Prac_tice, 34(2):68–84._
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint_
_arXiv:2307.09288._
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le,
Ed H Chi, Sharan Narang, Aakanksha Chowdhery,
and Denny Zhou. 2022. Self-consistency improves
chain of thought reasoning in language models. In
_The Eleventh International Conference on Learning_
_Representations._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou,
et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural
_Information Processing Systems, 35:24824–24837._
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran,
Thomas L. Griffiths, Yuan Cao, and Karthik
Narasimhan. 2023. [Tree of thoughts: Deliberate](http://arxiv.org/abs/2305.10601)
[problem solving with large language models.](http://arxiv.org/abs/2305.10601)
Xi Ye and Greg Durrett. 2022. The unreliability of
explanations in few-shot prompting for textual reasoning. Advances in neural information processing
_systems, 35:30378–30392._
Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han,
Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy
[Ba. 2023. Large language models are human-level](http://arxiv.org/abs/2211.01910)
[prompt engineers.](http://arxiv.org/abs/2211.01910)
-----
**A** **Choose Margin**
We examine the effect of margin on selecting exemplars for in-context paraphrasing using GPT-3.5
and separate dev-sets from GMS8K and MMLU,
each with 250 data points. The results in Table 7
show that a moderate margin outperforms a large
one in SCOP, as the latter may decrease the diversity of exemplars.
Margin MMLU (Dev, k = 8) GSM8K (Dev, k = 8)
SC/HPR% 53.2 (26.9) / 64 73.6 (21.4) / 33.6
0.2 55.6 (34.4) 75.2 (31.0)
0.3 56.8 (35.6) 74.8 (32.1)
0.4 55.2 (33.8) 75.6 (33.3)
0.5 53.6 (32.5) 74.4 (35.7)
Table 7: Ablation on the margin effect of exemplar
selection.
**B** **APE Alternatives**
A potential alternative to finding an optimal prompt
for paraphrasing is to use the Automatic Prompt
Engineering (APE) settings (Zhou et al., 2023). We
formulate the procedure into four steps:
1. Present a set of input-output pairs where the
inputs are the original problems and the outputs are the paraphrased exemplars. Prompt
the language model to generate C candidate
instructions that could produce the outputs
from the inputs.
2. Prompt each candidate instruction to the language model to generate paraphrases for a
batch size B of problems in the development
set and compare the mean solve rate change
before and after paraphrasing.
3. Choose the instruction that maximizes the
mean solve rate change.
4. Repeat steps 1 - 3 E times.
We implemented this procedure using GPT-3.5
on the AQUA development set to obtain the instruction (C = 15, B = 30, B = 0). We
tested the performance in both AQUA (in-domain)
and GSM8K (out-of-domain), comparing it with
ICLpara. Although the in-domain AQUA performance was similar to ICLpara, the out-of-domain
performance worsened, and APE required more
data than ICLpara. Therefore, this approach has
yielded negative results. The performance results
are presented in Table 8.
GSM8K AQUA
ICLpara APE ICLpara APE
SC 76.3 (24.5) 66.9 (22.2)
N/1 77.9 (39.0) 73.7 (34.0) 66.4 (29.8) 66.1 (29.0)
N/2 80.5 (39.2) 76.3 (36.2) 68.5 (31.7) 66.8 (30.0)
N/4 79.2 (38.3) 77.7 (33.0) 70.5 (35.4) 70.8 (32.0)
N/8 80.2 (40.6) 79.0 (41.5) 69.7 (34.4) 69.2 (31.0)
Table 8: A comparison between the performance of
APE and ICLpara paraphrasing.
**C** **Temperature and Randomness**
To further validate that SCoP goes beyond simply
increasing randomness, we introduce two variants
of SC, where we increase the temperature up to
0.9 and 1 using GPT-3.5 on 1/4 of the MMLU and
AQuA datasets. The results are shown in Table 9.
Temperature MMLU AQuA
0.7 (baseline) 49 (21.5) 68 (27.3)
0.9 49 (24.6) 68 (27.3)
1 49 (24.6) 64 (18.2)
Table 9: Increasing randomness by temperature saturates reasoning performance. The numbers inside the
parenthesis are the accuracy for the hard problems.
As can be observed, although increasing temperature brings a slight improvement in the hard
problems, the performance gain soon saturates and
is not nearly comparable to that of SCoP.
**D** **Surface Forms with Solve Rate**
**Degradation**
As previously discussed, surface form modification by paraphrasing can lead to degradation in the
solve rate. Here, we present additional qualitative
examples where the solution rate worsened after
paraphrasing. See Table 10.
-----
**Problem** **Source** **Label** **SR** **Voted (F.)**
**Original: Howie wants to buy cupcakes for everyone in his class as a special treat. He’s not sure if**
people will want vanilla or chocolate cupcakes so he decides to get one of each for everyone.
0.8 -
If he gets the same amount of 2 cupcakes for each himself, his teacher, and his 25 classmates,
how many cupcakes should Howie buy? GSM8K "54"
**Paraphrased: Howie wants to purchase cupcakes for his entire class as a special treat.**
Since he is unsure of the flavor preference, he plans to buy both vanilla and chocolate cupcakes.
0.25 "27" (0.35)
Howie wants to ensure that he has an equal amount of cupcakes for himself, his teacher, and
his 25 classmates. How many cupcakes should Howie purchase in total?
**Original: Janice bikes at 10 miles per hour, while Jennie bikes at 20. How long until they have**
0.55 -
collectively biked 1 mile?
**Paraphrased: Janice and Jennie are biking at different speeds. Janice bikes at a rate of 10 miles per hour,**
AQUA B
while Jennie bikes at a rate of 20 miles per hour. How much time will it take for them to collectively
0.1 C (0.35)
bike a distance of 1 mile?
**Options:[A)1 minute, B)2 minutes, C)3 minutes, D)4 minutes, E)5 minutes]**
**Original: How many primes are in the row of Pascal’s Triangle that starts with a 1 followed by a 6?** 0.7 -
**Paraphrased: Starting with the numbers 1 and 6, how many prime numbers are there in the** MATH "0"
0.1 "2" (0.25)
sequence of numbers in Pascal’s Triangle?
**Original: John divided his souvenir hat pins into two piles. The two piles had an equal number of pins.**
He gave his brother one-half of one-third of one pile. John had 66 pins left. How many pins did John 0.9 -
originally have?
MMLU B
**Paraphrased: John started with a certain number of souvenir hat pins. He divided them into two equal piles.**
He then gave his brother one-half of one-third of one of the piles. After that, John was left with 66 pins.
0.35 A (0.4)
How many pins did John have at the beginning?
**Options:[A) 396, B) 72, C) 66, D) 36]**
Table 10: Qualitative examples where paraphrased surface forms of the original problems can also exhibit solve rate
degradation, using GPT-3.5-turbo.
**E** **Prompt Templates**
We list the prompt templates used in the paper below.
Few-shot Chain-of-thought
Question: At Academic Academy, to pass
an algebra test you must score at least 80.
If there are 35 problems on the test, what is
the greatest number you can miss and still
pass?
Answer Choices: A) 7 B) 28 C) 35 D) 8
Rationale: First, we need to find 80% of 35.
We can do this by multiplying 35 by 0.80:
35 × 0.80 = 28. So, if you get 28 problems
correct, you will have scored 80% on the
test.
To find the greatest number you can miss
and still pass, subtract the number you
can get correct from the total number of
problems:35 − 28 = 7.
Therefore, the greatest number you can miss
and still pass is (A) 7.
(Repeat Nshot)
Question: {target problem}
Naïve Paraphrasing
Paraphrase the following math problem:
{target problem}
ICL Paraphrasing
Paraphrase the following math problem:
{input problem}
Output: {Paraphrased exemplar}
(Repeat Nshot)
Paraphrase the following math problem:
{target problem}
APE Candidate Search
A student is completing a task that requires
producing a text output from a text input.
The student receives instruction about sev-
eral rules that describe how to produce the
outputs given the inputs. What is the in-
struction?
-----
| [
"Yue, Zhou",
"Yoon, Kim",
"Yada, Zhu",
"Diego, Antognini",
"Yang, Zhang"
] | 2024-04-17T00:00:00 | NAACL 2024 Long Papers | true | 0 | 0 | null | http://arxiv.org/abs/2404.11500 | https://arxiv.org/abs/2404.11500 | https://www.semanticscholar.org/paper/7bddf68afbdfe0b2245aed312c0255fb486da95b |
PersonaMath: Enhancing Math Reasoning through Persona-Driven Data Augmentation | While closed-source Large Language Models (LLMs) demonstrate strong mathematical problem-solving abilities, open-source models continue to struggle with such tasks. To bridge this gap, we propose a data augmentation approach and introduce PersonaMathQA, a dataset derived from MATH and GSM8K, on which we train the PersonaMath models. Our approach consists of two stages: the first stage is learning from Persona Diversification, and the second stage is learning from Reflection. In the first stage, we regenerate detailed chain-of-thought (CoT) solutions as instructions using a closed-source LLM and introduce a novel persona-driven data augmentation technique to enhance the dataset's quantity and diversity. In the second stage, we incorporate reflection to fully leverage more challenging and valuable questions. Evaluation of our PersonaMath models on MATH and GSM8K reveals that the PersonaMath-7B model (based on LLaMA-2-7B) achieves an accuracy of 24.2% on MATH and 68.7% on GSM8K, surpassing all baseline methods and achieving state-of-the-art performance. Notably, our dataset contains only 70.3K data points-merely 17.8% of MetaMathQA and 27% of MathInstruct-yet our model outperforms these baselines, demonstrating the high quality and diversity of our dataset, which enables more efficient model training. We open-source the PersonaMathQA dataset, PersonaMath models, and our code for public usage. | This work proposes a data augmentation approach and introduces PersonaMathQA, a dataset derived from MATH and GSM8K, on which the PersonaMath models are trained, and introduces a novel persona-driven data augmentation technique to enhance the dataset's quantity and diversity. | [
"Run, Luo",
"Jiaming, Li",
"Jing, Luo",
"Chang, Ao",
"Longze, Chen",
"Liang, Zhu",
"Min, Yang",
"Chengming, Li",
"Yukun, Chen",
"Xin, Cheng",
"Jiayuan, Su",
"Wen, Yang"
] | 2024-10-02T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2410.01504 | https://arxiv.org/abs/2410.01504 | https://www.semanticscholar.org/paper/082af3340e57caae2034a93d3f82350d168144bc |
|
Positional Description for Numerical Normalization | We present a Positional Description Scheme (PDS) tailored for digit sequences, integrating placeholder value information for each digit. Given the structural limitations of subword tokenization algorithms, language models encounter critical Text Normalization (TN) challenges [1] when handling numerical tasks. Our schema addresses this challenge through straightforward pre-processing, preserving the model architecture while significantly simplifying number normalization, rendering the problem tractable. This simplifies the task and facilitates more compact production-ready models capable of learning from smaller datasets. Furthermore, our investigations reveal that PDS enhances the arithmetic processing capabilities of language models, resulting in a relative accuracy improvement of 23% to 51% on complex arithmetic tasks. We demonstrate that PDS effectively mitigates fatal numerical normalization errors in neural models, requiring only a modest amount of training data without rule-based Finite State Transducers (FST). We demonstrate that PDS is essential for both the Text-To-Speech and Speech Recognition text processing, enabling effective TN under production constraints. | A Positional Description Scheme tailored for digit sequences, integrating placeholder value information for each digit, which effectively mitigates fatal numerical normalization errors in neural models, requiring only a modest amount of training data without rule-based Finite State Transducers. | [
"Deepanshu, Gupta",
"Javier, Latorre"
] | 2024-08-22T00:00:00 | INTERSPEECH 2024 Speech Synthesis | false | 0 | 0 | null | https://arxiv.org/abs/2408.12430v1 | https://arxiv.org/abs/2408.12430 | https://www.semanticscholar.org/paper/365641ba2cd8b3d5272806fb4d34d9daa4b91bf9 |