title | abstract | tldr_text | content_markdown | authors | date | publish_info | publish_is_top | citation_count | citation_count_filtered_math_and_top_conf | theorem_provers | url | arxiv_url | semantics_scholar_url |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Pre-Calc: Learning to Use the Calculator Improves Numeracy in Language Models | Quantitative and numerical comprehension in language is an important task in many fields like education and finance, but still remains a challenging task for language models. While tool and calculator usage has shown to be helpful to improve mathematical reasoning in large pretrained decoder-only language models, this remains unexplored for smaller language models with encoders. In this paper, we propose Pre-Calc, a simple pre-finetuning objective of learning to use the calculator for both encoder-only and encoder-decoder architectures, formulated as a discriminative and generative task respectively. We pre-train BERT and RoBERTa for discriminative calculator use and Flan-T5 for generative calculator use on the MAWPS, SVAMP, and AsDiv-A datasets, which improves performance on downstream tasks that require numerical understanding. Our code and data are available at https://github.com/calc-cmu/pre-calc. | This paper proposes Pre-Calc, a simple pre-finetuning objective of learning to use the calculator for both encoder-only and encoder-decoder architectures, formulated as a discriminative and generative task respectively. | # Pre-Calc: Learning to Use the Calculator Improves Numeracy in Language Models
**Vishruth Veerendranath** [1] **Vishwa Shah** [1] **Kshitish Ghate** [1]
Quantitative and numerical comprehension in language is an important task in many fields like education and finance, but it remains challenging for language models. While tool and calculator use has been shown to help improve mathematical reasoning in large pretrained decoder-only language models, it remains unexplored for smaller language models with encoders. In this paper, we propose Pre-Calc, a simple pre-finetuning objective of learning to use the calculator for both encoder-only and encoder-decoder architectures, formulated as a discriminative and generative task respectively. We pre-train BERT and RoBERTa for discriminative calculator use and Flan-T5 for generative calculator use on the MAWPS, SVAMP, and AsDiv-A datasets, which improves performance on downstream tasks that require numerical understanding. Our code and data are available at [https://github.com/calc-cmu/pre-calc](https://github.com/calc-cmu/pre-calc).
**1. Introduction**
The advancement of language modeling in natural language
processing has significantly impacted various computational
tasks. However, the intricacy of numerical and quantitative
comprehension in text remains a challenging frontier. Numerals, unlike words, possess unique characteristics that
necessitate specialized handling by language models either
in tokenization or processing. This necessity becomes particularly evident in tasks involving quantitative reasoning,
where the ability to interpret and manipulate numerical information is crucial.
Numeracy mainly involves two properties. The first is _semantic reasoning_, which focuses on the understanding of relations in text, and the second is _computational ability_, which focuses on performing explicit mathematical operations.
1 Carnegie Mellon University. Correspondence to: Vishruth Veerendranath <[email protected]>. _The First AI for MATH Workshop at the 41st International Conference on Machine Learning, Vienna, Austria. Copyright 2024 by the author(s)._
_Figure 1. Pre-Calc for encoder-only models: an encoder-only Transformer (BERT/RoBERTa) is trained for Operand Identification (token-level binary tags) and Operation Classification (sequence-level, over {+, -, *, /}, via an appended [OP] token)._
Hence, the aim is to develop systems that can perform explicit mathematical operations while retaining or improving their quantitative reasoning.
We present Pre-Calc, a pre-finetuning objective of learning to use the calculator, to improve numerical abilities in
language models. We propose Pre-Calc objectives for both
the encoder-only and encoder-decoder classes of language
models, and use a combination of the MAWPS (Koncel-Kedziorski et al., 2016), SVAMP (Patel et al., 2021), and AsDiv-A (Miao et al., 2020) datasets to pre-finetune the models.
Our encoder-only objective, used to pre-finetune BERT and
RoBERTa models, offers quick and efficient processing suitable for tasks where speed is paramount. The Pre-Calc
versions of the models show competent performance on 6
quantitative downstream tasks from Chen et al. (2023) and
substantial improvements on 4 sub-tasks with an improvement greater than 10 points for RedditNLI and AWPNLI
specifically.
Similarly, our encoder-decoder approach, used to pre-finetune Flan-T5, demonstrates an improved ability to perform explicit computations in computation-intensive tasks
like AWPNLI. Although there is a noted trade-off, with a
slight decrease in performance on text-focused and semantic tasks, the objective showcases strengths in processing
mathematically intensive language.
Our study underscores the potential of tailored language
models to significantly enhance numeracy in NLP, providing
an avenue for more efficient and effective processing of
numerical data in language.
**2. Task and Data**
**2.1. Downstream Tasks**
We focus on downstream quantitative reasoning tasks,
specifically QNLI and QQA (Chen et al., 2023).
**QNLI** is the task of making natural language inferences based on quantitative clues. This dataset is adapted from EQUATE (Ravichander et al., 2019) and is composed of NewsNLI, RedditNLI, AWPNLI, and RTE-Quant. StressTest involves numerical reasoning instances from Naik et al. (2018), used as a synthetic sanity check.
**QQA** involves multiple-choice question answering that requires commonsense as well as quantitative comparisons. The dataset for this is adapted from Task 3 of NumGLUE (Mishra et al., 2022) and the Quarel dataset (Tafjord et al., 2019), which includes questions from quantitative domains such as physics and economics.
**2.2. Pre-Finetuning Data**
We use the MAWPS dataset (Koncel-Kedziorski et al.,
2016), SVAMP (Patel et al., 2021) and AsDiv-A (Miao et al.,
2020) as the numerical domain datasets for pre-finetuning.
These consist of simple arithmetic word problems, along
with their numerical solutions. Together, the three datasets yield 4,225 examples, which are challenging and
require understanding the context of numbers, represented
either as digits or in words.
We construct this dataset from the Calc-X collection
(Kadlčík et al., 2023), which has been annotated with an equation for each problem, as well as <gadget> annotations in the answer to train a model to use a calculator when the <gadget> token is produced. We primarily use the equation annotations in our methodology.
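As a concrete illustration of how such annotations can be consumed, the sketch below pulls the equation out of a Calc-X style record. The dataset identifier, the field names, and the closing `</gadget>` tag are placeholders/assumptions rather than the actual Calc-X schema.

```python
from datasets import load_dataset

ds = load_dataset("path/to/calc-x-collection", split="train")  # placeholder identifier

def equation_of(example):
    """Return the annotated equation, falling back to the calculator call in the chain."""
    if example.get("equation"):                      # hypothetical field name
        return example["equation"]
    chain = example.get("chain", "")                 # hypothetical field name
    start, end = chain.find("<gadget>"), chain.find("</gadget>")
    if start != -1 and end != -1:
        return chain[start + len("<gadget>"):end]
    return None
```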
**3. Pre-Calc Methodology**
We posit that learning to use a calculator requires understanding of numbers and ways in which numbers can be
combined. This is used to formulate the Pre-Calc objectives described below.
**3.1. Encoder-Only**
3.1.1. DATA PREPROCESSING
We preprocess Calc-MAWPS, Calc-SVAMP, and Calc-AsDiv-A (from the Calc-X collection; Kadlčík et al., 2023) and add two new features required for Pre-Calc. The first is the _operand tag sequence_, a sequence of binary tags that is 1 if the corresponding original token is an operand and 0 if it is not. The second is the _operation_, i.e., the operation among {+ (add), - (subtract), * (multiply), / (divide)} required by the question. We extract the operation either directly from the equation or from the reasoning chain in Calc-X, and generate the operand tag sequence by first extracting the operands and then tagging their occurrences in the binary sequence with a 1. As part of this process, we also filter out instances whose equations involve more than one distinct operation.
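The following sketch illustrates (it is not the authors' preprocessing code) how the two features could be derived from a question and its annotated equation; the whitespace tokenization and the equation format are simplifying assumptions.

```python
import re

OPS = {"+", "-", "*", "/"}

def precalc_features(question_tokens, equation):
    """Return (operand_tags, operation), or None if the equation mixes operations."""
    lhs = equation.split("=")[0]
    ops_used = [c for c in lhs if c in OPS]
    if len(set(ops_used)) != 1:                 # filter multi-operation instances
        return None
    operation = ops_used[0]
    operands = set(re.findall(r"\d+(?:\.\d+)?", lhs))
    # Binary tag per token: 1 if the token is one of the operands, else 0.
    operand_tags = [1 if tok in operands else 0 for tok in question_tokens]
    return operand_tags, operation

tags, op = precalc_features(
    ["There", "were", "58", "geese", "and", "33", "ducks", "."], "58 - 33 = 25")
print(tags, op)   # [0, 0, 1, 0, 0, 1, 0, 0] -
```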
3.1.2. PRE-CALC METHOD
An illustration of the Pre-Calc method for encoder-only models can be seen in Fig. 1. The method is decomposed into two tasks trained as a dual objective.
Firstly, we use the pretrained Encoder-only language model
for the task of Operand Identification, which is a token-level
classification task. The tags possible for each token are 1
and 0.
Secondly, we perform the task of Operation Classification
by adding a special [OP] token at the end of each sequence
and using this [OP] token’s final layer representation to
classify the operation required in this sequence (+, -, *, /).
Hence, this is essentially a sequence-level classification task
similar to classifying from the representation of a [CLS]
token. However, we do not use the [CLS] token at the
start of the sequence, to enable this objective even in non-bidirectional models with an autoregressive attention mask
(like decoder-only models).
In essence, we use two heads — one token classification
head for Operand Identification, and one sequence classification head for Operation Classification — to train it with
the dual objective as per Equation 1:

$$\mathcal{L} = \mathcal{L}_{\text{operation}} + \lambda\,\mathcal{L}_{\text{operand}} \tag{1}$$

where $\mathcal{L}_{\text{operation}}$ is the cross-entropy loss for the sequence classification ([OP]) head and $\mathcal{L}_{\text{operand}}$ is the binary cross-entropy (BCE) loss for the token classification head. Here
we empirically set λ = 1.
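A minimal PyTorch-style sketch of this dual-head setup is given below. The encoder interface, the handling of the appended [OP] token, and padding/masking details are simplified assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

class PreCalcHeads(nn.Module):
    def __init__(self, encoder, hidden_size, lam=1.0):
        super().__init__()
        self.encoder = encoder                           # e.g. a BERT/RoBERTa encoder
        self.operand_head = nn.Linear(hidden_size, 2)    # token-level: operand / not
        self.operation_head = nn.Linear(hidden_size, 4)  # sequence-level: + - * /
        self.lam = lam

    def forward(self, input_ids, attention_mask, operand_tags, operation_label, op_index):
        h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        operand_logits = self.operand_head(h)                # (B, T, 2)
        op_repr = h[torch.arange(h.size(0)), op_index]       # hidden state of [OP]
        operation_logits = self.operation_head(op_repr)      # (B, 4)

        loss_operand = nn.functional.cross_entropy(
            operand_logits.view(-1, 2), operand_tags.view(-1))
        loss_operation = nn.functional.cross_entropy(operation_logits, operation_label)
        return loss_operation + self.lam * loss_operand      # L = L_operation + λ·L_operand
```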
3.1.3. DOWNSTREAM TASK INFERENCE
For most downstream tasks, we do not explicitly perform
calculator computations using operands and operations predicted by the model and instead use Pre-Calc only as a
learning objective before finetuning on specific downstream tasks. However, as the AWPNLI task requires the model to be able to perform calculations explicitly, we utilize an alternative inference strategy adapted from our pre-finetuning strategy shown in Fig. 1. We first extract the
operand labels for each token of the premise (T_i) and the operation from the T_OP ([OP]) token. This gives us the operands and the operation, after which we automate the calculation of the final answer and compare it with the hypothesis. This lets the model focus on the semantic extraction of the operation and offload explicit computation to the calculator.
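The sketch below illustrates this AWPNLI inference strategy: read off the predicted operands and operation, evaluate the expression with a calculator, and compare the result against the number mentioned in the hypothesis. The number parsing, the tolerance, and the fallback label are assumptions, not the exact procedure used in the paper.

```python
import operator
import re

CALC = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def awpnli_predict(premise_tokens, operand_tags, operation, hypothesis):
    operands = [float(t) for t, tag in zip(premise_tokens, operand_tags) if tag == 1]
    if len(operands) != 2:
        return "contradiction"            # fallback when extraction fails (assumed)
    result = CALC[operation](operands[0], operands[1])
    hyp_numbers = [float(x) for x in re.findall(r"\d+(?:\.\d+)?", hypothesis)]
    entailed = any(abs(result - x) < 1e-6 for x in hyp_numbers)
    return "entailment" if entailed else "contradiction"
```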
**3.2. Encoder-Decoder**
Encoder-decoder and decoder-based models provide the ability to produce long-form, unbounded generations. This is advantageous for numerical problems, where multiple intermediate operations might be required for computation or reasoning (Wei et al., 2022). By reframing our task to output expressions, we distil the task to producing the set of operations, leaving the computation to the tool.
3.2.1. DATA PREPROCESSING
As mentioned earlier, we use the MAWPS dataset for training our model to output expressions. As each instance in
MAWPS consists of a question and a single numerical answer, we reframe each question–answer instance into a pair of complete sentences, to more closely resemble NLI-format tasks, by prompting LLaMa-7B (Touvron et al., 2023); this is a simple text generation task. To obtain contradiction pairs, we perturb the true numerical answer by a small value (ranging from -5 to +5) before passing it to the LLaMa-7B model, as these create harder instances for the model to learn from. We additionally use the Multi-NLI (Williams et al., 2018) dataset to retain and improve the textual inference abilities of the model. We train on this combined data as a Seq2Seq generation task. Combining these tasks should allow the model to acquire both semantic and computational capabilities.
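To make the data construction concrete, the sketch below builds one math-nli and one text-nli seq2seq instance. The fixed sentence template stands in for the LLaMa-7B rewriting step, and the exact prefix/target syntax is an assumption based on Figure 2.

```python
import random

def math_nli_instance(sentence1, expression, true_value, contradict=False):
    stated = true_value
    if contradict:                                  # perturb answer by a non-zero value in [-5, +5]
        stated += random.choice([d for d in range(-5, 6) if d != 0])
    # LLaMa-7B would turn (question, stated) into a fluent sentence2; a template stands in here.
    sentence2 = f"The answer is {stated}."
    source = f"math-nli sentence1: {sentence1} sentence2: {sentence2}"
    target = f"<equate> ({expression}, {stated})"   # the calculator settles the label later
    return source, target

def text_nli_instance(premise, hypothesis, label):
    source = f"text-nli sentence1: {premise} sentence2: {hypothesis}"
    target = f"<text> {label}"                      # e.g. ENTAILMENT / CONTRADICTION
    return source, target
```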
3.2.2. PRE-CALC METHOD
_Figure 2. Our encoder-decoder approach: a math-nli input (sentence1: "There were 58 geese and 33 ducks", sentence2: "16 more ducks were there") is mapped by Pre-Calc Flan-T5 to a calculator expression such as "<equate> (58 - 33, 16)", while a text-nli input (sentence1: "Nifty traded above 7500, say calls today", sentence2: "Nifty above 7400") is mapped to a label such as "<text> ENTAILMENT" or "<text> CONTRADICTION"; this is contrasted with standard text-only training._
We utilize this ability of Seq2Seq modeling by fine-tuning
Flan-T5 on the NLI-based pre-finetuning tasks mentioned in Section 2.2. As shown in Figure 2, we use a math-nli prefix tag for tasks that require mathematical computation (e.g., MAWPS reformatted as above) and a text-nli tag for text-based tasks (e.g., Multi-NLI). This lets the model decide whether the task requires explicit calculation, in which case it should output an expression for tool use, or whether it can use its inherent text capabilities to reason over textual numeracy.
Similar to Kadlčík et al. (2023), we have the model output the token <equate> together with the corresponding expression for computational tasks, since the task essentially involves equating expressions in sentence 1 and sentence 2, and the token <text> together with the final answer for more textual numeracy tasks. At inference time, this makes it easy to check whether the final computation needs to go through the calculator or not. We also hope to add <compare> and <compute> tokens as we extend this method to more downstream tasks in the future. We denote this tool-based pre-finetuning as our Pre-Calc method for encoder-decoder models. As our baseline, we also evaluate the performance of doing only text-based fine-tuning, which we call our Standard Training approach.
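At inference time, the generated output therefore has to be routed either through the calculator or read directly as a label. The sketch below shows one way this routing could look; the output parsing and the tiny `eval`-based calculator are illustrative assumptions, not the paper's implementation.

```python
import re

def route_prediction(generated: str) -> str:
    if generated.startswith("<equate>"):
        m = re.search(r"\(\s*(.+?)\s*,\s*([-\d.]+)\s*\)", generated)
        if m is None:
            return "CONTRADICTION"                   # unparsable expression
        expr, stated = m.group(1), float(m.group(2))
        value = eval(expr, {"__builtins__": {}})     # tiny arithmetic "calculator"
        return "ENTAILMENT" if abs(value - stated) < 1e-6 else "CONTRADICTION"
    # <text> branch: the label is generated directly.
    return generated.replace("<text>", "").strip()
```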
**4. Experiments**
**4.1. Baselines**
We compare the performance of our method against several
baseline models on tasks that require numeracy. Following
Chen et al. (2023), the baseline methods involve reframing
techniques, namely Original, Digit-based, and Scientific
Notation methods, and are pre-finetuned on the _Comparing Numbers Dataset_ (CND). Each of these methods is applied
to both BERT (Devlin et al., 2019) and RoBERTa (Liu et al.,
2019) to create two versions of each baseline method.
**4.2. Encoder-Only**
We use the pretrained BERT and RoBERTa base models
and pre-finetune as per Section 3.1.2. We use the 4-class
cross-entropy loss for training the Operation Classification
head, and a 2-class cross-entropy (equivalent to binary cross
entropy) loss for the Operand Identification head. The models are trained with the Adam optimizer for 20 epochs, a
batch size of 8, and a learning rate of 5e-4. The checkpoint after this pre-finetuning is named Pre-Calc-BERT or
Pre-Calc-RoBERTa[1].
We then finetune Pre-Calc-BERT and Pre-Calc-RoBERTa
on the downstream tasks of QNLI and QQA using the same
hyperparameters used by the CN-BERT baselines (Chen
et al., 2023) — AdamW optimizer with a learning rate of
5e-5, batch size of 8 for 5 epochs. We use 10-fold cross
validation to report our results for the tasks where an explicit
test split is not available.
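For reference, the downstream fine-tuning hyperparameters above can be expressed as a Hugging Face `TrainingArguments` sketch; the output directory is a placeholder, and the dataset loading and cross-validation driver are omitted.

```python
from transformers import TrainingArguments

finetune_args = TrainingArguments(
    output_dir="precalc-downstream",     # placeholder path
    optim="adamw_torch",                 # AdamW optimizer
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    num_train_epochs=5,
)
```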
**4.3. Encoder-Decoder**
We use Flan-T5 as our base model. For pre-finetuning, we collect a balanced sample consisting of about 4,200 instances created from MAWPS for the math-nli task and 3,900 instances extracted from Multi-NLI for the text-nli task[1].
| Model | Notation | RTE-Quant | NewsNLI | RedditNLI | AWPNLI | Stress Test | QQA |
|---|---|---|---|---|---|---|---|
| BERT | Org | 66.73 | 74.22 | 62.40 | 59.20 | 99.46 | 56.79 |
| BERT | Digit | 60.22 | 75.94 | 62.86 | 53.20 | 99.70 | 52.63 |
| BERT | Sci | 66.80 | 75.60 | 65.14 | 60.73 | 99.46 | 53.33 |
| BERT | CN-Digit | 62.88 | 76.97 | 68.57 | 60.27 | 99.58 | 53.60 |
| BERT | CN-Sci | 66.87 | 77.98 | 65.64 | 54.70 | 99.58 | 52.38 |
| BERT | Pre-Calc (ours) | 67.00 | 76.54 | 76.00 | 68.97 | 99.47 | 53.93 |
| RoBERTa | Org | 62.79 | 78.35 | 59.33 | 57.64 | 100.00 | 52.27 |
| RoBERTa | Digit | 62.67 | 79.38 | 63.71 | 56.69 | 99.94 | 58.94 |
| RoBERTa | Sci | 62.93 | 79.37 | 62.88 | 57.41 | 100.00 | 56.47 |
| RoBERTa | CN-Digit | 68.13 | 77.66 | 62.99 | 58.80 | 100.00 | 51.21 |
| RoBERTa | CN-Sci | 63.97 | 74.57 | 63.80 | 58.74 | 99.98 | 53.6 |
| RoBERTa | Pre-Calc (ours) | 73.90 | 82.21 | 78.00 | 58.17 | 100.00 | 61.05 |
_Table 1. Micro-F1 scores (in %) of Pre-Calc-trained models compared to CN (Comparing Numbers)-trained and reframing (Digit, Sci) baselines. The first five result columns are QNLI sub-tasks; the last column is QQA._
As this is a sequence generation task, the objective is the same as in causal LM modeling for next-word prediction. We use the AdamW optimizer with a learning rate of 5e-5 and a batch size of 8 for 5 epochs. We do not perform any finetuning on our downstream tasks, and instead report prompt-based few-shot evaluations on each task. We call our model FlanT5-Pre-Calc[1] and use two baselines: few-shot Flan-T5 and Flan-T5-ST, which uses only standard text training[1].
**5. Results and Discussion**
**5.1. Encoder-Only**
Our evaluation on the QNLI and QQA tasks, as outlined in
Table 1, demonstrates the efficacy of our Pre-Calc approach.
For BERT, our Pre-Calc method significantly outperforms
all other reframing techniques for RedditNLI, AWPNLI and
RTE-Quant. These results highlight the effectiveness of our
method in dealing with diverse numerical information in
natural language. In the case of RoBERTa, the Pre-Calc
approach consistently outperformed other methods across
all three tasks: RTE-Quant, NewsNLI and RedditNLI. This performance is markedly superior to the original RoBERTa and to the other variants that use reframing techniques, all of which score lower in these categories.
For AWPNLI we report results for baselines from Chen
et al. (2023), and for our results we compute F1-score on
the complete dataset using our methodology described in
Section 4.2. We see a substantial improvement in Pre-Calc over the earlier baselines, which can be attributed to our training and inference strategy: it can precisely attend to and compute an expression, which is essential for the AWPNLI task.
1 [https://huggingface.co/collections/Calc-CMU/pre-calc-657a5ad5f1ae42fb12364563](https://huggingface.co/collections/Calc-CMU/pre-calc-657a5ad5f1ae42fb12364563)
In QQA as well, Pre-Calc-RoBERTa improves performance
over its counterpart. This indicates that Pre-Calc improves
commonsense reasoning abilities and this effect is more
pronounced in RoBERTa which is a stronger base model.
Overall, the results validate our hypothesis that the Pre-Calc approach, which integrates calculator-like capabilities into the model, significantly enhances performance in tasks requiring numeral-aware semantic and computational capabilities.
**5.2. Encoder-Decoder**
We present the results for our encoder-decoder based approach in Table 2. We see that for AWPNLI, which requires explicit computation, FlanT5 (Pre-Calc) achieves almost double the performance of FlanT5 few-shot and FlanT5-ST. This shows that Flan-T5 originally did not have the capability to evaluate expressions and compare values, and that this property cannot be instilled via text finetuning alone, as can be seen from the performance of FlanT5-ST. Furthermore, compared to prior work (Chen et al., 2023), this achieves state-of-the-art results on AWPNLI.

However, we see that performance decreases slightly on NewsNLI and RTE-Quant, which are more text-focused tasks. The original pretrained FlanT5 does better here, as it already has the inherent ability to handle semantic numeracy. The drop is likely because training on the specific tasks discussed above causes forgetting or over-fitting in the model. It can also be attributed to the language-modeling MLE loss, which focuses on generating outputs in the format shown in Fig. 2 rather than preserving the model's original in-context learning and reasoning abilities. To combat this, in the future we hope to regularize learning better so that a diversity of tasks can be included while avoiding overfitting.
_Figure 4. Operand Identification plot: validation F1-score across pre-finetuning epochs for Pre-Calc-BERT and Pre-Calc-RoBERTa._
_Figure 5. Operation Classification plot: validation accuracy across pre-finetuning epochs for Pre-Calc-BERT and Pre-Calc-RoBERTa._
**7. Related Work**
**Numeracy in LMs** Numeracy, or the ability to understand
and work with numbers, is a critical aspect that has been
relatively underexplored compared to other linguistic competencies in NLP models. Spithourakis & Riedel (2018)
emphasized the need for LMs to better understand numbers,
setting a precedent for subsequent research.
Chen et al. (2019) introduced Numeracy-600K, a large-scale
dataset designed to improve the ability of models to detect
exaggerated information in financial texts. Concurrently,
Wallace et al. (2019) explored the embedding properties of
numbers, shedding light on how numeracy can be integrated
into LMs. Zhang et al. (2020) analyzed the representation
of numerals in scientific notation, addressing the challenge
of scale understanding in LMs. Chen et al. (2021) furthered
this exploration by suggesting a digit-based encoder for
numeral encoding, providing a novel perspective on numeral
representation.
| Model | AWPNLI | NewsNLI | RTE-Quant |
|---|---|---|---|
| Few-shot | 41.56 | 77.47 | **85.74** |
| ST (ours) | 37.55 | **76.75** | 73.43 |
| Pre-Calc (ours) | **80.29** | 75.20 | 71.26 |

_Table 2. Micro F1-score of Flan-T5-large when using our encoder-decoder based approach_
**6. Analysis**
**6.1. Dual-Objective in Encoder-Only Pre-Calc**
We inspect the characteristics of the two objectives during
pre-finetuning. Fig. 4 shows the F1-score across epochs for
the operand identification objective on the validation data.
While this seems to fluctuate, it consistently stays above
90% (the accuracy for this task also consistently remains at
about 99%), indicating that the operand identification task is
not very challenging and that there is very little loss signal
from this task beyond the first few epochs. Even in the presence of the second objective, the F1 for this task remains high.
In Fig. 5 we see the accuracy plot on validation data for the
operation classification objective across epochs. Here we
see that the accuracy consistently increases but still remains under 75%, which tells us that this objective is much more challenging; this is also explained by the fact that the operation has to be inferred from text. Together, the two objectives aid different abilities — picking numbers out with operand identification and combining numbers with operation classification — both of which are important for any downstream quantitative task.
**6.2. Operation-wise difficulty for FlanT5**
We sample 500 instances from the reframed MAWPS dataset to observe the model's operation-wise accuracy. We observe in Figure 3 that about 60% of the errors are on instances that involve a division operation. This could be because
understanding division requires the model to develop an
understanding of what operand should be the numerator
and which should be the denominator. There are also rare instances where the model is required to understand the idea of ratio-proportion, which requires more complex understanding compared to the other operations.

_Figure 3. Operation-wise error for FlanT5-Pre-Calc_
**References**
Aghajanyan, A., Gupta, A., Shrivastava, A., Chen, X.,
Zettlemoyer, L., and Gupta, S. Muppet: Massive multitask representations with pre-finetuning. arXiv preprint
_arXiv:2101.11038, 2021._
Chen, C.-C., Huang, H.-H., Takamura, H., and Chen, H.H. Numeracy-600k: Learning numeracy for detecting
exaggerated information in market comments. In Pro_ceedings of the 57th Annual Meeting of the Association_
_for Computational Linguistics, pp. 6307–6313, 2019._
Chen, C.-C., Huang, H.-H., and Chen, H.-H. Nquad:
70,000+ questions for machine comprehension of the
numerals in text. In Proceedings of the 30th ACM Inter_national Conference on Information & Knowledge Man-_
_agement, pp. 2925–2929, 2021._
Chen, C.-C., Takamura, H., Kobayashi, I., and Miyao,
Y. Improving numeracy by input reframing and quantitative pre-finetuning task. In Findings of the Associ_ation for Computational Linguistics: EACL 2023, pp._
69–77, Dubrovnik, Croatia, May 2023. Association for
Computational Linguistics. doi: 10.18653/v1/2023.
findings-eacl.4. [URL https://aclanthology.](https://aclanthology.org/2023.findings-eacl.4)
[org/2023.findings-eacl.4.](https://aclanthology.org/2023.findings-eacl.4)
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT:
Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Confer_ence of the North American Chapter of the Association for_
_Computational Linguistics: Human Language Technolo-_
_gies, Volume 1 (Long and Short Papers), pp. 4171–4186,_
Minneapolis, Minnesota, June 2019. Association for
Computational Linguistics. doi: 10.18653/v1/N19-1423.
[URL https://aclanthology.org/N19-1423.](https://aclanthology.org/N19-1423)
Geva, M., Gupta, A., and Berant, J. Injecting numerical reasoning skills into language models. In Pro_ceedings of the 58th Annual Meeting of the Associa-_
_tion for Computational Linguistics, pp. 946–958, On-_
line, July 2020. Association for Computational Linguis[tics. doi: 10.18653/v1/2020.acl-main.89. URL https:](https://aclanthology.org/2020.acl-main.89)
[//aclanthology.org/2020.acl-main.89.](https://aclanthology.org/2020.acl-main.89)
Gou, Z., Shao, Z., Gong, Y., yelong shen, Yang, Y., Huang,
M., Duan, N., and Chen, W. Tora: A tool-integrated
reasoning agent for mathematical problem solving, 2023.
Kadlčík, M., Štefánik, M., Sotolář, O., and Martinek, V.
Calc-x and calcformers: Empowering arithmetical chain-of-thought through interaction with symbolic systems. In
_Proceedings of the The 2023 Conference on Empirical_
_Methods in Natural Language Processing: Main track,_
Singapore, Singapore, December 2023. Association for
Computational Linguistics. [URL https://arxiv.](https://arxiv.org/abs/2305.15017)
[org/abs/2305.15017.](https://arxiv.org/abs/2305.15017)
**Pre-Finetuning** In addition to these studies focused on
numeral representation, other researchers have investigated
the potential of pre-finetuning tasks to enhance LM capabilities. Aghajanyan et al. (2021) introduced a massive
multi-task representation with pre-finetuning, demonstrating the efficacy of pre-finetuning in improving model performance across a range of tasks. Geva et al. (2020) proposed
GENBERT, which is trained on automatically-generated
synthetic data in a multi-task setup. This training significantly improves performance on numerical reasoning tasks
such as DROP and math word problems, while maintaining
high performance on standard reading comprehension tasks.
Wang et al. (2017) presented a deep neural solver, a hybrid
model combining the RNN with a similarity-based retrieval
to translate math word problems into equation templates.
**Tool-Use** Gou et al. (2023) presented a series of Tool-integrated Reasoning Agents (ToRA) designed to solve
complex mathematical problems by augmenting the model
with external computational tools. The training process involves collecting interactive tool-use trajectories and applying imitation learning and output space shaping, showcasing
the efficacy of combining natural language reasoning with
program-based tool use. Kadlčík et al. (2023) introduced
Calc-X, a collection of datasets designed to integrate calculator usage into language model reasoning chains. Calc-X
consolidates 300,000 samples from several chain-of-thought
tasks requiring arithmetic reasoning. The study demonstrates how Calcformers, models trained on Calc-X, significantly enhance the accuracy of generating correct results by
offloading computations to symbolic systems.
**8. Conclusion and Future Work**
In this work, we improve the numeracy in language models
on the QNLI and QQA tasks which involve textual and computational quantitative reasoning. We do so by proposing
calculator usage as a pre-finetuning task in a discriminative and generative fashion for encoder-only and encoder-decoder models, respectively. This improves encoder-only
models across various downstream tasks and improves
encoder-decoder models on tasks that require explicit computation.
Future work can address the balance between textual understanding and numerical reasoning, by refining regularization
strategies to maintain the language model’s core strengths
while enhancing its computational abilities. Tool-use in
encoder-only models could also be extended to more complex tools similar to decoder-only models.
**Acknowledgments**
We thank Robert Lo for the helpful discussions.
Koncel-Kedziorski, R., Roy, S., Amini, A., Kushman, N.,
and Hajishirzi, H. Mawps: A math word problem repository. In Proceedings of the 2016 conference of the north
_american chapter of the association for computational lin-_
_guistics: human language technologies, pp. 1152–1157,_
2016.
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy,
O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. Roberta:
A robustly optimized BERT pretraining approach. CoRR,
abs/1907.11692, 2019.
Miao, S.-y., Liang, C.-C., and Su, K.-Y. A diverse corpus for evaluating and developing English math word
problem solvers. In Proceedings of the 58th Annual Meet_ing of the Association for Computational Linguistics, pp._
975–984, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.
92. [URL https://aclanthology.org/2020.](https://aclanthology.org/2020.acl-main.92)
[acl-main.92.](https://aclanthology.org/2020.acl-main.92)
Mishra, S., Mitra, A., Varshney, N., Sachdeva, B., Clark,
P., Baral, C., and Kalyan, A. NumGLUE: A suite
of fundamental yet challenging mathematical reasoning tasks. In Proceedings of the 60th Annual Meet_ing of the Association for Computational Linguistics_
_(Volume 1: Long Papers), pp. 3505–3523, Dublin, Ire-_
land, May 2022. Association for Computational Linguis[tics. doi: 10.18653/v1/2022.acl-long.246. URL https:](https://aclanthology.org/2022.acl-long.246)
[//aclanthology.org/2022.acl-long.246.](https://aclanthology.org/2022.acl-long.246)
Naik, A., Ravichander, A., Sadeh, N., Rose, C., and Neubig, G. Stress test evaluation for natural language inference. In Proceedings of the 27th International Con_ference on Computational Linguistics, pp. 2340–2353,_
Santa Fe, New Mexico, USA, August 2018. Association for Computational Linguistics. [URL https:](https://aclanthology.org/C18-1198)
[//aclanthology.org/C18-1198.](https://aclanthology.org/C18-1198)
Patel, A., Bhattamishra, S., and Goyal, N. Are NLP
models really able to solve simple math word problems? In Proceedings of the 2021 Conference of the
_North American Chapter of the Association for Compu-_
_tational Linguistics: Human Language Technologies, pp._
2080–2094, Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.
[168. URL https://aclanthology.org/2021.](https://aclanthology.org/2021.naacl-main.168)
[naacl-main.168.](https://aclanthology.org/2021.naacl-main.168)
Ravichander, A., Naik, A., Rose, C., and Hovy, E. EQUATE:
A benchmark evaluation framework for quantitative reasoning in natural language inference. In Proceedings
_of the 23rd Conference on Computational Natural Lan-_
_guage Learning (CoNLL), pp. 349–361, Hong Kong,_
China, November 2019. Association for Computational
[Linguistics. doi: 10.18653/v1/K19-1033. URL https:](https://aclanthology.org/K19-1033)
[//aclanthology.org/K19-1033.](https://aclanthology.org/K19-1033)
Spithourakis, G. P. and Riedel, S. Numeracy for language
models: Evaluating and improving their ability to predict
numbers. arXiv preprint arXiv:1805.08154, 2018.
Tafjord, O., Clark, P., Gardner, M., Yih, W.-t., and Sabharwal, A. Quarel: A dataset and models for answering
questions about qualitative relationships. In Proceed_ings of the AAAI Conference on Artificial Intelligence,_
volume 33, pp. 7063–7071, 2019.
Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi,
A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P.,
Bhosale, S., Bikel, D., Blecher, L., Ferrer, C. C., Chen,
M., Cucurull, G., Esiobu, D., Fernandes, J., Fu, J., Fu, W.,
Fuller, B., Gao, C., Goswami, V., Goyal, N., Hartshorn,
A., Hosseini, S., Hou, R., Inan, H., Kardas, M., Kerkez,
V., Khabsa, M., Kloumann, I., Korenev, A., Koura, P. S.,
Lachaux, M.-A., Lavril, T., Lee, J., Liskovich, D., Lu, Y.,
Mao, Y., Martinet, X., Mihaylov, T., Mishra, P., Molybog,
I., Nie, Y., Poulton, A., Reizenstein, J., Rungta, R., Saladi,
K., Schelten, A., Silva, R., Smith, E. M., Subramanian, R.,
Tan, X. E., Tang, B., Taylor, R., Williams, A., Kuan, J. X.,
Xu, P., Yan, Z., Zarov, I., Zhang, Y., Fan, A., Kambadur,
M., Narang, S., Rodriguez, A., Stojnic, R., Edunov, S.,
and Scialom, T. Llama 2: Open foundation and fine-tuned
chat models, 2023.
Wallace, E., Wang, Y., Li, S., Singh, S., and Gardner, M.
Do nlp models know numbers? probing numeracy in
embeddings. arXiv preprint arXiv:1909.07940, 2019.
Wang, Y., Liu, X., and Shi, S. Deep neural solver for math
word problems. In Proceedings of the 2017 Conference on
_Empirical Methods in Natural Language Processing, pp._
845–854, Copenhagen, Denmark, September 2017. Association for Computational Linguistics. doi: 10.18653/v1/
[D17-1088. URL https://aclanthology.org/](https://aclanthology.org/D17-1088)
[D17-1088.](https://aclanthology.org/D17-1088)
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Chi,
E. H., Le, Q., and Zhou, D. Chain of thought prompting elicits reasoning in large language models. CoRR,
abs/2201.11903, 2022.
Williams, A., Nangia, N., and Bowman, S. A broad-coverage challenge corpus for sentence understanding
through inference. In Proceedings of the 2018 Confer_ence of the North American Chapter of the Association_
_for Computational Linguistics: Human Language Tech-_
_nologies, Volume 1 (Long Papers), pp. 1112–1122, New_
Orleans, Louisiana, June 2018. Association for Computational Linguistics. doi: 10.18653/v1/N18-1101. URL
[https://aclanthology.org/N18-1101.](https://aclanthology.org/N18-1101)
Zhang, X., Ramachandran, D., Tenney, I., Elazar, Y., and
Roth, D. Do language embeddings capture scales? arXiv
_preprint arXiv:2010.05345, 2020._
| [
"Vishruth, Veerendranath",
"Vishwa, Shah",
"Kshitish, Ghate"
] | 2024-06-13T00:00:00 | null | false | 0 | 0 | null | https://openreview.net/forum?id=Hb5gA02FyR&name=pdf | https://arxiv.org/abs/2404.14355 | https://www.semanticscholar.org/paper/47a2ba4cf074bbc31243aec8a23e25993e2863da |
Pretrained Large Language Models Use Fourier Features to Compute Addition | Pre-trained large language models (LLMs) exhibit impressive mathematical reasoning capabilities, yet how they compute basic arithmetic, such as addition, remains unclear. This paper shows that pre-trained LLMs add numbers using Fourier features---dimensions in the hidden state that represent numbers via a set of features sparse in the frequency domain. Within the model, MLP and attention layers use Fourier features in complementary ways: MLP layers primarily approximate the magnitude of the answer using low-frequency features, while attention layers primarily perform modular addition (e.g., computing whether the answer is even or odd) using high-frequency features.Pre-training is crucial for this mechanism: models trained from scratch to add numbers only exploit low-frequency features, leading to lower accuracy.Introducing pre-trained token embeddings to a randomly initialized model rescues its performance.Overall, our analysis demonstrates that appropriate pre-trained representations (e.g., Fourier features) can unlock the ability of Transformers to learn precise mechanisms for algorithmic tasks. | null | null | null | null | NeurIPS 2024 | true | 0 | 0 | null | https://neurips.cc/virtual/2024/poster/94033 | null | null |
Proceedings 12th International Workshop on Theorem proving components for Educational software | The ThEdu series pursues the smooth transition from an intuitive way of doing mathematics at secondary school to a more formal approach to the subject in STEM education, while favouring software support for this transition by exploiting the power of theorem-proving technologies. What follows is a brief description of how the present volume contributes to this enterprise. The 12th International Workshop on Theorem Proving Components for Educational Software(ThEdu'23), was a satellite event of the 29th international Conference on Automated Deduction (CADE 2023), July 1-4, 2023, Rome, Italy. ThEdu'23 was very successful, with one invited talk, by Yves Bertot (Inria, France), "The challenges of using Type Theory to teach Mathematics", and seven regular contributions. An open call for papers was then issued, to which eight contributions were submitted. Seven submissions have been accepted by our reviewers, who jointly produced at least three careful reports on each of the contributions. The resulting revised papers are collected in the present volume. We, the volume editors, hope that this collection of papers will further promote the development of theorem-proving based software, and that it will allow to improve the mutual understanding between computer scientists, mathematicians and stakeholders in education. PC Chairs:Julien Narboux (University of Strasbourg, France); Walther Neuper (JKU, Johannes Kepler University, Linz, Austria); Pedro Quaresma (University of Coimbra, Portugal) | The volume editors hope that this collection of papers will further promote the development of theorem-proving based software, and that it will allow to improve the mutual understanding between computer scientists, mathematicians and stakeholders in education. | null | [
"Pedro, Quaresma",
"Walther, Neuper",
"Julien, Narboux"
] | 2024-04-04T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2404.03709v1 | https://arxiv.org/abs/2404.03709 | https://www.semanticscholar.org/paper/34e60d5c0652201a91724d7f889c851f12af976d |
Project Description: Experiments with Language Models for Isabelle Autoformalization | N/A | null | # Project Description: Experiments with Language Models for Isabelle Autoformalization
David Valente[1], Manuel Eberl[2], Cezary Kaliszyk[2], and Josef Urban[3]
1 Instituto Superior Tecnico, Universidade de Lisboa
2 University of Innsbruck
3 Czech Technical University
## 1 Motivation
The formalization of mathematical theorems and their proofs stands as a cornerstone in modern
mathematics and computer science. Manual formalization, although precise, is prone to errors
and can consume significant time and effort.
Learning-assisted autoformalization [5] may offer a promising path to this challenge. It operates as a subset of machine translation tasks [8] in which (large) language models (LMs/LLMs)
have shown remarkable performance, albeit with the added complexity of adhering to
rigid and intricate grammatical structures inherent in formal logic systems.
In this recently started project, we experiment with the capabilities of LMs to tackle the
autoformalization task. Specifically, our objective is to finetune the Phi-2 model on the task
of translating LaTeX, a widely used typesetting system for mathematical documents, into Isabelle [9], a formal proof assistant. Furthermore, we plan to explore the benefits of building
a feedback loop that adds type-checking and theorem proving to continuously improve the
learner [7] and possibly adding RAG [6] to the pipeline for more accurate use of the AFP.
## 2 Training Data Description
Our training data consists of a curated dataset containing pairs of natural language statements
and corresponding Isabelle lemmas. To generate LaTeX representations, we used an existing
dataset of natural language-Isabelle lemma pairs [3], prompting the Mistral Large model [4]
to generate the corresponding LaTeX. Notably, multiple LaTeX representations were generated for each natural language statement, ensuring diversity and coverage. In total, our dataset comprises over 100,000 LaTeX–Isabelle lemma pairs.
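The LaTeX-generation step can be sketched as a simple prompting loop; the prompt wording and the `chat_model` callable below are generic stand-ins (assumptions), not the exact setup used to build the dataset.

```python
def latex_prompt(nl_statement: str) -> str:
    return ("Rewrite the following mathematical statement, typesetting all "
            "mathematical objects in LaTeX. Give only the rewritten statement.\n\n"
            f"Statement: {nl_statement}")

def generate_latex_variants(nl_statement: str, chat_model, n_variants: int = 3):
    """chat_model: any callable mapping a prompt string to a completion string."""
    return [chat_model(latex_prompt(nl_statement)) for _ in range(n_variants)]
```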
**2.1** **Example Data**
**Natural Language Statement: If a set X is countable, then the cardinality of set X is less**
than or equal to Aleph null (the smallest infinite cardinal number).
**Corresponding LaTeX Representation:**
```
If a set X is countable, then $|X| \leq \aleph_0$.
```
**Corresponding Isabelle Lemma:**
```
lemma countable_imp_g_le_Aleph0: "countable X \<Longrightarrow> gcard X \<le> \<aleph>0"
```
## 3 Training Methodology
**Data Preparation:**
We preprocessed the data by merging input and output sequences while incorporating special
tokens to delineate the beginning and end of LaTeX and Isabelle sections.
**Model Configuration and Fine-tuning:**
For model configuration, we loaded the pre-trained "microsoft/phi-2" model, ensuring its compatibility with the autoformalization task. Various optimizations were employed during model
loading, including quantization with 4-bit configuration (BitsAndBytesConfig) and utilization
of Flash Attention. Additionally, the model underwent further optimization using Quantized
Low-Rank Adapters (QLoRA), focusing on key weight matrices (Wqkv) and fully-connected
layers (fc1, fc2). Finetuning was then done through SFTTrainer to integrate PEFT and improve
data and resource efficiency. See Appendix A for the details of the training.
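The sketch below shows roughly how such a setup can be wired together with `transformers`, `peft`, and `trl`; the concrete argument values, the delimiter-token names in the formatting function, and some keyword arguments are assumptions, and signatures may differ across library versions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig
from trl import SFTTrainer

MODEL = "microsoft/phi-2"

def format_example(latex_stmt: str, isabelle_lemma: str) -> str:
    # Merge input and output with special delimiter tokens (token names assumed).
    return f"<LATEX> {latex_stmt} </LATEX> <ISABELLE> {isabelle_lemma} </ISABELLE>"

def build_trainer(train_dataset):
    bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL, quantization_config=bnb, attn_implementation="flash_attention_2")
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    # QLoRA adapters on the attention and fully-connected projections named above.
    lora = LoraConfig(r=16, lora_alpha=32, target_modules=["Wqkv", "fc1", "fc2"],
                      task_type="CAUSAL_LM")
    return SFTTrainer(model=model, tokenizer=tokenizer,
                      train_dataset=train_dataset, peft_config=lora)
```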
## 4 Initial Evaluation
Our initial evaluation is done on the book “Introduction to Analytic Number Theory” [1]
formalized in Isabelle by the second author [2]. We run the trained model on the LaTeX versions
of the 338 main theorems (only statements, no proofs) and lemmas in that book. Note that in
principle we are evaluating on data that are in various ways related to the training set, because
the Isabelle formalization has been very likely seen by the various LMs used for producing our
training data. If our results were very good, we would switch to books that are not formalized
yet, however (as will be seen below), this is far from being the case yet.
We then created Isabelle/HOL scripts that (to some extent) allow us to classify the results
automatically, and we also classify some of the results manually.
From the 338 translations, 152 result in Isabelle texts that parse and typecheck without
producing errors. The remaining translations trigger various parsing and typechecking issues
when processed by Isabelle. Only 16 of the 152 parsable ones can be automatically proved by
Sledgehammer. An example of such an automatically provable statement is `gcd a b = gcd b a`, which is, however, only a truncated translation of Theorem 1.4 in [1].[1]
Our manual classification of 38 of the results is shown in Appendix B, along with some
sample translations. Despite being often grammatically correct, these results are so far largely
semantically incorrect. Their summary statistics are as follows: 15 nonsense; 6 true but unrelated
to the original text; 4 quite wrong; 9 partially ok; 3 quite good; 1 correct.
1More precisely, the theorem there is a conjunction of four properties, and the trained LM only produced
one of them.
## References
[1] T. M. Apostol. Introduction to analytic number theory. Springer Science & Business Media, 2013.
[2] M. Eberl. Nine chapters of analytic number theory in Isabelle/HOL. In ITP, volume 141 of LIPIcs,
pages 16:1–16:19. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2019.
[3] A. Q. Jiang, W. Li, and M. Jamnik. Multilingual mathematical autoformalization. _CoRR,_
abs/2311.03755, 2023.
[4] A. Q. Jiang, A. Sablayrolles, A. Roux, A. Mensch, B. Savary, C. Bamford, D. S. Chaplot,
D. de Las Casas, E. B. Hanna, F. Bressand, G. Lengyel, G. Bour, G. Lample, L. R. Lavaud,
L. Saulnier, M. Lachaux, P. Stock, S. Subramanian, S. Yang, S. Antoniak, T. L. Scao, T. Gervet,
T. Lavril, T. Wang, T. Lacroix, and W. E. Sayed. Mixtral of experts. CoRR, abs/2401.04088, 2024.
[5] C. Kaliszyk, J. Urban, J. Vyskocil, and H. Geuvers. Developing corpus-based translation methods
between informal and formal mathematics: Project description. In CICM, volume 8543 of Lecture
_Notes in Computer Science, pages 435–439. Springer, 2014._
[6] P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W.-t. Yih, T. Rocktäschel, S. Riedel, and D. Kiela. Retrieval-augmented generation for knowledge-intensive
nlp tasks. ArXiv, abs/2005.11401, 2020.
[7] Q. Wang, C. E. Brown, C. Kaliszyk, and J. Urban. Exploration of neural machine translation in
autoformalization of mathematics in mizar. In CPP, pages 85–98. ACM, 2020.
[8] Q. Wang, C. Kaliszyk, and J. Urban. First experiments with neural translation of informal to
formal mathematics. In CICM, volume 11006 of Lecture Notes in Computer Science, pages 255–
270. Springer, 2018.
[9] M. Wenzel, L. C. Paulson, and T. Nipkow. The Isabelle framework. In O. A. Mohamed, C. A.
Muñoz, and S. Tahar, editors, Theorem Proving in Higher Order Logics, 21st International Conference, TPHOLs 2008, Montreal, Canada, August 18-21, 2008. Proceedings, volume 5170 of Lecture
_Notes in Computer Science, pages 33–38. Springer, 2008._
## A Training
**Number of Training Epochs:** 5
**Batch Size:** 2
**Gradient Accumulation Steps:** 32
**Optimizer:** Paged AdamW 8-bit
**Learning Rate:** 2e-4
**Learning Rate Scheduler Type:** Cosine decay
**Warmup Ratio:** 0.05
**Weight Decay:** 0.01
**Fine-tuning Parameters:**
## First impressions
| Theorem | Comment |
|---|---|
| 3.8.1 | Definition of mutually visible lattice points. Completely wrong; it instead stated some kind of invariance under reflection. |
| 1.11.1 | It did not grasp that the "ps" have to be prime numbers. Multiplicities are ignored completely, as is the fact that it must be possible to vary the multiplicities of each prime. All that aside, I would not phrase this with lists (it's pretty unwieldy in practice). |
| 1.11.2 | This is truncated. |
| 10.5.1 | Complete nonsense |
| 5.25.1 | Complete nonsense |
| 6.14 | All nonsense |
| 2.11.1 | Does not type check and does not seem to make sense either |
| 10.1.1 | Complete nonsense |
| 7.2.1 | Well at least it correctly translated "4n+1" to "4 * n + 1", but the statement is still horribly wrong. |
| 7.7.1 | Looks pretty good. The "sum log p over p" should be expanded to something more explicit, of course, and the "+ O(λ . 1)" does not quite typecheck (it should be something like "+o O(λ . 1)"), but close enough. It doesn't define the "N", but then neither does the LaTeX code you gave it. |
| 7.7.2 | Looks syntactically equivalent to the one above |
| 8.12.1 | Some good stuff there, but it completely dropped the "G" and the quantification and the condition on the "a" in the end is missing entirely. Also, it did not get that (x, y) is "gcd x y" and not literally the tuple "(x, y)". |
| 8.12.2 | Same issue |
| 2.24.1 | The whole assumption is missing; rest is okay |
| 7.6.1 | It uses this "sum moebius over n", which is not defined anywhere. Also it turned a "O(1)" into a "Θ(1)". Otherwise okay. |
| 9.12 | All nonsense |
| 2.27.1 | Complete nonsense. No idea where it got this "selberg prob density" from. Selberg has nothing to do with probabilities. |
| 1.2.1 | Well, part of it is there and correct. But most of it is missing. |
| 6.6 | It seems to have hallucinated some notation here that I've never seen before. It seems relatively reasonable though, if one were to replace the notation with the right concept. Most of the stuff seems to be missing though. |
| 6.8.1 | The order n is missing, as is the assumption that G is a finite abelian group |
| 8.18 | Nonsense |
| 1.4.1 | Well it got commutativity right, but that's only one of many properties |
| 1.5.2 | Wrong. The "b = Suc 0" should be a "gcd a b = Suc 0". |
| 2.26.1 | True, but has nothing to do with what was being translated. |
| 2.3.1 | True, but has nothing to do with what was being translated. |
| 2.4.1 | True, but has nothing to do with what was being translated. |
| 3.13.1 | This looks correct. |
| 5.16.1 | Nonsensical |
| 5.28 | Nonsensical |
| 5.2.2 | True but trivial and not what was intended |
| 5.32.1 | Nonsense |
| 5.6.1 | True, but a completely different property |
| 5.8.1 | It basically proved A ⟹ A. Wrong translation. |
| 6.1.1 | Very much not the property that was to be translated |
| 9.11.1 | Nonsense. The "P = 1" etc. is way too strong an assumption. Apparently, it just doesn't understand what "(P, Q) = 1" means. The conclusion is also completely wrong. |
| 9.5.1 | Nonsense |
| 9.5.2 | Nonsense |
Figure 1: Training and Validation Loss ((a) training loss, (b) validation loss)
| Theorem | Translation to Isabelle |
|---|---|
| 3.8 | lemma mutually visible lattice points commute: "mutually visible (a, b) (m, n) ←→ mutually visible (b, a) (n, m)" |
| 1.11 | lemma divisor set prod unique: assumes "n > 0" "n = prod list ps" shows "divisor set n = set (map (λ p. p[a]) ps)" |
| 10.5 | lemma residue primroot: assumes "prime p" "odd p" shows "residue (primroot p) p = 1" |
| 7.2 | lemma prime elem 4 times i: "prime (4 * n + 1 :: int)" |
| 7.7 | lemma sum log p over p: assumes "x > 1" shows "sum log p over p x = (1 - N k) / of nat (phis k) * log x + O(λ . 1)" |
| 8.12 | lemma (in dcharacter) dcharacter mod 1 imp dvd: assumes "χ mod k ≠ 0" and "(n, k) > 1" shows "∃ d. d dvd k d < k χ a = 1" |
| 2.24 | lemma multiplicative function unique: assumes "multiplicative function f" "multiplicative function g" shows "f = g" |
| 1.2 | lemma common divisor dvd: fixes a b :: int shows "common divisor a b dvd a" "common divisor a b dvd b" |
| 6.8 | lemma finite characters: "finite (characters G)" |
| 1.4 | lemma gcd comm nat: "gcd a b = gcd b a" for a b :: nat |
| 3.13 | lemma abs sum upto moebius mu over n: "abs (sum upto (λ n. moebius mu n / real n) x) ≤ 1" |
| [
"Cezary, Kaliszyk",
"Manuel, Eberl",
"David, Valente",
"Josef, Urban"
] | 2024-09-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
Project Proposal: Forward Reasoning in Hindsight | Hindsight Experience Replay is a promising technique in reinforcement learning. However, we argue that its interpretation in refutational theorem proving is somewhat indirect, and instead propose its application in reasoning settings where consequences are derived from axioms alone until a goal is reached. Such settings include many sequent-like calculi, condensed detachment, non-trivial fragments of dependently-typed languages such as Agda, and we conjecture that unit equational reasoning is also suitable. | It is argued that Hindsight Experience Replay interpretation in refutational theorem proving is somewhat indirect, and instead its application in reasoning settings where consequences are derived from axioms alone until a goal is reached is proposed. | null | [
"Michael, Rawson",
"Zsolt, Zombori",
"Christoph, Wernhard",
"Maximilian, Dore"
] | 2024-09-01T00:00:00 | null | false | 0 | 0 | null | https://www.semanticscholar.org/paper/48701c555c6321571d06000767c4f629010b6fd5 | null | https://www.semanticscholar.org/paper/48701c555c6321571d06000767c4f629010b6fd5 |
Proof Automation with Large Language Models | Interactive theorem provers such as Coq are powerful tools to formally guarantee the correctness of software. However, using these tools requires significant manual effort and expertise. While Large Language Models (LLMs) have shown promise in automatically generating informal proofs in natural language, they are less effective at generating formal proofs in interactive theorem provers. In this paper, we conduct a formative study to identify common mistakes made by LLMs when asked to generate formal proofs. By analyzing 520 proof generation errors made by GPT-3.5, we found that GPT-3.5 often identified the correct high-level structure of a proof, but struggled to get the lower-level details correct. Based on this insight, we propose PALM, a novel generate-then-repair approach that first prompts an LLM to generate an initial proof and then leverages targeted symbolic methods to iteratively repair low-level problems. We evaluate PALM on a large dataset that includes more than 10K theorems. Our results show that PALM significantly outperforms other state-of-the-art approaches, successfully proving 76.6% to 180.4% more theorems. Moreover, PALM proves 1270 theorems beyond the reach of existing approaches. We also demonstrate the generalizability of PALM across different LLMs. | PALM is a novel generate-then-repair approach that first prompts an LLM to generate an initial proof and then leverages targeted symbolic methods to iteratively repair low-level problems, and significantly outperforms other state-of-the-art approaches. | ## Proof Automation with Large Language Models
[Minghai Lu](https://orcid.org/0009-0001-0136-3204)
[email protected]
Purdue University
West Lafayette, IN, USA
[Benjamin Delaware](https://orcid.org/0000-0002-1016-6261)
[email protected]
Purdue University
West Lafayette, IN, USA
[Tianyi Zhang](https://orcid.org/0000-0002-5468-9347)
[email protected]
Purdue University
West Lafayette, IN, USA
**ABSTRACT**
Interactive theorem provers such as Coq are powerful tools to formally guarantee the correctness of software. However, using these
tools requires significant manual effort and expertise. While Large
Language Models (LLMs) have shown promise in automatically
generating informal proofs in natural language, they are less effective at generating formal proofs in interactive theorem provers.
In this paper, we conduct a formative study to identify common
mistakes made by LLMs when asked to generate formal proofs. By
analyzing 520 proof generation errors made by GPT-3.5, we found
that GPT-3.5 often identified the correct high-level structure of a
proof, but struggled to get the lower-level details correct. Based
on this insight, we propose PALM, a novel generate-then-repair
approach that first prompts an LLM to generate an initial proof and
then leverages targeted symbolic methods to iteratively repair lowlevel problems. We evaluate PALM on a large dataset that includes
more than 10K theorems. Our results show that PALM significantly
outperforms other state-of-the-art approaches, successfully proving
76.6% to 180.4% more theorems. Moreover, PALM proves 1270 theorems beyond the reach of existing approaches. We also demonstrate
the generalizability of PALM across different LLMs.
**KEYWORDS**
**Software and its engineering →** _Software verification; Formal_
_software verification._
**ACM Reference Format:**
Minghai Lu, Benjamin Delaware, and Tianyi Zhang. 2024. Proof Automation
with Large Language Models. In 39th IEEE/ACM International Conference
_on Automated Software Engineering (ASE ’24), October 27-November 1, 2024,_
_Sacramento, CA, USA. ACM, New York, NY, USA, 12 pages. [https://doi.org/10.1145/3691620.3695521](https://doi.org/10.1145/3691620.3695521)_
**1** **INTRODUCTION**
Correctness is crucial to software systems. Interactive theorem
provers (ITPs) such as Coq [43], Isabelle [34] and Lean [17], are
powerful tools for providing semantically rich guarantees about
software. In an ITP, users can state and prove formal theorems about
a program; these proofs are then mechanically checked by the ITP,
providing a strong, foundational guarantee about its correctness.
This strategy has been successfully applied to several application
domains, including compilers [31], distributed systems [46], and
OS kernels [29]. While powerful, this approach comes at a cost, as
users must supply a proof script that helps the ITP construct the
proof of the desired theorem. Constructing these proof scripts can
require considerable effort. For example, it took 6 person-years to
write 100,000 lines of Coq proof scripts to verify the CompCert C
compiler [31].
Many proof automation techniques have been proposed to reduce the effort required by ITPs. These techniques mainly fall into
two categories: symbolic methods [15, 28, 36, 44] and machine
learning methods [20, 21, 39, 48]. Symbolic methods use a combination of previously established theorems and external automated
theorem provers (ATPs), such as Z3 [16] and CVC5 [14], to automate the proof of a theorem. While effective, these approaches are
constrained by their inability to perform higher-order and inductive reasoning, limiting their ability to prove complex theorems.
Machine learning methods utilize models to predict the next proof
step in a heuristic-guided search process. These methods do not
have the same limitations as symbolic approaches but require a
significant amount of training data [20, 21, 48].
Recently, pretrained Large Language Models (LLMs) have shown
promise in generating informal natural language proofs [47], suggesting a potential to further improve existing proof automation
approaches. Unfortunately, even state-of-the-art LLMs are ineffective at generating formal proofs in one shot: GPT-3.5 proves 3.7% of
theorems in our evaluation, and Llama-3-70b-Instruct proves 3.6%.
In order to understand why, we have conducted a formative study
to analyze mistakes that GPT-3.5 made when generating formal
proofs. In this study, we analyzed 579 theorems of varied complexity and identified seven categories of errors. Overall, we found that
while GPT-3.5 often produced proofs with the right high-level structure, it struggled to get the lower-level details of these proofs correct.
Promisingly, we also observed that many of these errors can be
potentially fixed using symbolic methods, including heuristic-based
search and proof repair.
Guided by this formative study, we propose PALM, a novel
generate-then-repair approach that combines LLMs and symbolic
methods. Our key insight is to use LLMs to generate an initial
proof that is likely to have the correct high-level structure, and
then use targeted symbolic methods to iteratively repair low-level
problems related to individual proof steps. PALM relies on four repair mechanisms that target the common types of errors identified
in our formative study. If our repair mechanisms fail, PALM uses a
backtracking procedure to regenerate previous proof steps in an
attempt to fix errors in the high-level proof structure. Although
_PALM targets Coq, its underlying principles can be applied to other_
ITPs, such as Isabelle [34] and Lean [17].
To evaluate the effectiveness of our approach, we have conducted
an extensive evaluation using the CoqGym dataset [48] with 10842
theorems. Our results suggest that PALM can successfully prove
1 Theorem add_comm : forall n m : nat, n + m = m + n.
2 Proof .
3 intros n m.
4 induction n.
5 -
6 auto.
7 -
8 simpl.
9 rewrite IHn.
10 apply plus_n_Sm.
11 Qed .
**Figure 1: A Coq theorem stating that natural number addi-**
**tion is commutative, and a proof of this statement.**
40.4% of the theorems, significantly outperforming the state-of-the-art methods Passport [39], Proverbot9001 [38] and Draft, Sketch,
_and Prove (DSP) [27], which only prove 14.4%, 17.1% and 22.9%_
of the theorems, respectively. Moreover, we have conducted experiments
to demonstrate the effectiveness of each component in PALM and
the generalizability of PALM across different LLMs.
In summary, this paper presents the following contributions:
(1) We conduct a formative study to identify the common errors
made by GPT-3.5 while proving theorems in Coq.
(2) We propose PALM, a novel proof automation approach that
combines LLMs and symbolic methods in a generate-then-repair pipeline.
(3) We evaluate PALM on a large dataset and demonstrate that
_PALM_ significantly outperforms existing methods. An artifact containing the source code of PALM and a replication
package is publicly available [7].
================================
forall n m : nat, n + m = m + n
**(a) Proof state at the start.**
n, m: nat
================================
n + m = m + n
**(b) Proof state after Figure 1 Line 3 (intros n m).**
m: nat
================================
(1/2)
0 + m = m + 0
(2/2)
S n + m = m + S n
**(c) Proof state after Figure 1 Line 4 (induction n).**
m: nat
================================
0 + m = m + 0
**(d) Proof state after Figure 1 Line 5 (the first subgoal).**
n, m: nat
IHn: n + m = m + n
================================
S n + m = m + S n
**(e) Proof state after Figure 1 Line 7 (the second subgoal).**
n, m: nat
IHn: n + m = m + n
================================
S (n + m) = m + S n
**(f) Proof state after Figure 1 Line 8 (simpl).**
n, m: nat
IHn: n + m = m + n
================================
S (m + n) = m + S n
**(g) Proof state after Figure 1 Line 9 (rewrite IHn).**
**Figure 2: Proof state after the execution of each tactic in the**
**proof of addition’s commutativity.**
**2** **PRELIMINARIES**
**2.1** **Interactive Theorem Proving in Coq**
The Coq proof assistant [43] is a popular tool for developing machine-checked proofs of mathematical theorems and for verifying complex software systems. Coq helps users interactively construct these proofs using a set of proof tactics. This section first introduces the basic concepts of interactive proof development in Coq, and then illustrates the process via an example theorem shown in Figure 1.
**Theorems:** In Coq, the definition of a theorem typically starts with the keyword Theorem or Lemma, followed by its name and the theorem statement. Figure 1 shows the theorem add_comm, which states that natural number addition is commutative.[1] This is then followed by a proof script, a sequence of tactics that explains how to build a proof of the desired statement. Proof scripts are typically developed in an interactive proof mode. Processing the first line of Figure 1 causes Coq to enter proof mode. During the proof process, users can freely reuse previously proven theorems.
**Proof States:** In proof mode, Coq’s interface displays the current _proof state_, i.e., a list of unproven goals. Each of these goals is a pair of a local context 𝑙𝑐 and an outstanding proof obligation 𝑠𝑡. A local context includes hypotheses and assumptions that can be used to prove 𝑠𝑡; these are distinct from the set of previously proven theorems, which are part of the global context. Figure 2 shows the intermediate proof states that appear during the proof of add_comm: each listing shows the proof state shown to the user after processing each tactic in Figure 1. Following the conventions of Coq’s user interface, the local context is shown above the double line, and the current proof obligation is shown below.
**Tactics:** Tactics specify strategies for decomposing the current proof obligation into a set of simpler subgoals, in order to eventually produce a complete proof. Conceptually, a tactic 𝑡 is a state-transition function 𝑡 : 𝑆 × 𝐴 → 𝑆′, where 𝑆 is a goal, 𝐴 is a set of arguments (possibly empty), and 𝑆′ is the set of resulting goals. As an example, the tactic induction n on Line 4 of Figure 1 tells Coq to do induction on the natural number n in the local context. Processing this tactic transforms the proof state in Figure 2b to the proof state in Figure 2c, which has two subgoals: (1) a base case in which n is 0, and (2) an inductive case in which n is an arbitrary natural number.[2] Note that Coq only displays the local context of the first goal when there are multiple goals. Importantly, a tactic can fail if, for example, it is applied to a proof state of the wrong form or it is supplied with wrong arguments. Coq reports the failure back to the user when this occurs.
1The type of natural numbers in Coq is nat.
2The term S n is equivalent to n + 1.
**Proofs:** A proof of a theorem consists of a sequence of tactics that transform the initial goal, i.e., the theorem statement, into subgoals until none remain. The beginning and end of a proof are delimited
by the Proof and Qed commands (Lines 2 and 11 of Figure 1). The
latter command prompts Coq’s kernel to check that no outstanding
proof obligations remain. If so, Coq exits proof mode with success
and the theorem is added to the global context.
We now illustrate these concepts using the example of add_comm
in Figure 1. At the beginning of the proof (Line 2), the proof state
consists of a single goal that corresponds to the top-level theorem
statement (Figure 2a). At Line 3, the tactic intros n m tells Coq
to move the universally quantified variables n and m into the local
context (Figure 2b). Then, the aforementioned induction n tactic
performs induction on n (Line 4), resulting in two subgoals corresponding to the base case and inductive case (Figure 2c). We then
prove the first subgoal with a bullet “-” (Line 5), which marks the
beginning of the subgoal’s proof and causes Coq to display only
this subgoal to the user (Figure 2d). After proving this subgoal, we
prove the next subgoal with the same bullet symbol (Line 7). These
bullets help organize the proof by marking the beginning of each
subgoal and instructing Coq to ensure one subgoal is proven before
moving to the next.
We solve the base case (Figure 2d) by invoking the auto tactic
(Line 6), which uses symbolic-based proof automation to discharge
simple goals. Next we move on to the second subgoal—the inductive
case. Importantly, this goal includes an inductive hypothesis in its
local context. We first use the simpl tactic (Line 8), which simplifies
the goal by evaluating the + operator (Figure 2f). Next, we use the
rewrite IHn tactic (Line 9), which substitutes the left-hand side
of the inductive hypothesis (IHn) in the goal with its right-hand
side (Figure 2g). Finally, we apply a previously proven theorem
plus_n_Sm: forall n m : nat, 1 + (n + m) = n + (1 + m)
from the standard Coq library (Line 10). This theorem establishes
that adding 1 to the sum of n + m is the same as adding n to m
+ 1. A theorem of the form A =⇒ B can be applied to a goal if its
conclusion (B) matches the current proof obligation, resulting in a
new goal corresponding to its hypothesis (A). The apply plus_n_Sm
tactic directly solves the current goal, because the conclusion of
plus_n_Sm matches and plus_n_Sm has no premises. Since no goals
remain, the proof is complete, and we use the Qed command to finish
the proof of add_comm.
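To make the state-transition view of tactics introduced above concrete, the following toy Python model treats a goal as a local context plus an obligation and a tactic as a function from one goal to a list of subgoals. It is purely illustrative; real Coq goals, tactics, and the add_comm proof are far richer, and none of these names correspond to an actual Coq or PALM API.

```python
# Toy model of tactics as state-transition functions over goals (illustrative only).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Goal:
    hypotheses: List[str]   # local context, e.g. ["n : nat", "m : nat"]
    obligation: str         # proof obligation, e.g. "n + m = m + n"

# A tactic maps one goal to zero or more subgoals (it may fail by raising).
Tactic = Callable[[Goal], List[Goal]]

def induction_n(goal: Goal) -> List[Goal]:
    # Mimics `induction n` on the add_comm example: a base case and a step case
    # that additionally carries an inductive hypothesis in its local context.
    base = Goal(goal.hypotheses, goal.obligation.replace("n", "0"))
    step = Goal(goal.hypotheses + ["IHn : n + m = m + n"],
                goal.obligation.replace("n", "S n"))
    return [base, step]

# The proof is complete once the worklist of goals becomes empty.
goals = induction_n(Goal(["n : nat", "m : nat"], "n + m = m + n"))
```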
**2.2** **Hammers**
To facilitate proof construction, Coq is equipped with many established proof automation tactics (e.g., auto). These tactics either
completely solve the current goal, or leave it unchanged if they
fail. Among them, hammers [15, 28, 36] are powerful tactics that
dispatch goals using external automated theorem provers (ATPs),
such as Vampire [37], CVC5 [14], E [3] and Z3 [16]. Many popular
ITPs have hammers, including CoqHammer [15] for Coq, SledgeHammer [36] for Isabelle, and HOLyHammer [28] for HOL Light.
At a high level, hammers work by first encoding the current
goal into a form solvable by an ATP, typically a formula in first-order logic. This is necessary because ITPs support much richer
logic, e.g., higher-order logic, than most ATPs. In order to enable
the underlying ATP to use previously proven theorems, a subset
of the theorems in the global context are encoded alongside the
current goal; the task of selecting a relevant set of these theorems
is sometimes called premise selection [13]. Early hammers relied
on heuristics to select premises, while modern hammers typically
utilize machine learning algorithms for this purpose. After a goal
and the selected premises have been encoded, hammers invoke an
ATP. If successful, the proof found by the ATP is translated to a form
that can be understood by an ITP. Hammers are typically invoked by
applying specific tactics. CoqHammer offers a collection of tactics
such as hammer, hfcrush, and qsimpl, each of which automatically
proves goals using different strategies. While powerful, hammers
only perform a subset of reasoning available to an ITP: they typically
do not perform induction, for example. This limits their ability to
directly prove complex theorems. Nonetheless, they are effective at
accurately solving small subgoals. For instance, the hammer tactic
is able to completely solve the goals in Figures 2d and 2e, while
Coq’s auto tactic only solves the first goal.
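At the implementation level, a hammer is essentially a pipeline: select premises, encode the goal and premises into a first-order problem, hand it to an ATP, and translate the answer back for the ITP. The sketch below shows only that orchestration shape; `select_premises`, `encode_to_fof`, and `reconstruct_tactic` are hypothetical placeholders, and the E prover command line is an assumed invocation, not what CoqHammer actually runs.

```python
# Skeleton of a hammer-style pipeline. The helper functions are hypothetical
# placeholders, and the eprover flags are an assumption for illustration only.
import subprocess, tempfile

def hammer(goal, global_context, select_premises, encode_to_fof, reconstruct_tactic):
    premises = select_premises(goal, global_context)   # premise selection
    problem = encode_to_fof(goal, premises)             # encode into first-order logic
    with tempfile.NamedTemporaryFile("w", suffix=".p", delete=False) as f:
        f.write(problem)
        path = f.name
    # Ask an external ATP whether the encoded goal follows from the premises.
    result = subprocess.run(["eprover", "--auto", "--cpu-limit=10", path],
                            capture_output=True, text=True)
    if "Theorem" in result.stdout:                       # ATP reports a proof was found
        return reconstruct_tactic(result.stdout)         # translate back for the ITP
    return None                                          # hammer fails; goal unchanged
```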
**3** **FORMATIVE STUDY**
While LLMs have previously been used to generate proofs in Coq,
they have not proven particularly effective at the task [22, 42, 50]. To
understand the root causes of this, we have conducted a formative
study to identify the common errors made by LLMs when asked
to generate proof scripts. In this study, we evaluated the ability
of GPT-3.5 [4] to prove 579 theorems from Verdi, a distributed
system verification project [46]. Verdi has also been used in other
studies [20, 21, 48]. We carefully designed our prompt based on the
widely used retrieval augmented generation (RAG) method [32].
This prompt is also used by PALM, and is described in more detail
in Sections 4.2 and 4.3. We prompted the GPT-3.5-turbo-1106 API, the latest version available at the time of this study, with the default decoding temperature. For each theorem, we sampled only one
proof script. We ran the generated proof in Coq, and recorded the
error message of the first encountered error in the proof.
We collected a total of 520 errors and conducted an in-depth
manual analysis, following the grounded theory [23] and the open
coding method [24]. The first author first labeled 100 errors and
came up with an initial categorization. He then discussed and refined the labels and the categorization with the last author in two
meetings. The first author then labeled and categorized the remaining errors based on the refined labels and categorization. Finally,
all authors met to discuss and finalize the categorization, where
the second author, an expert in theorem proving, offered insights
that further enhanced the categorization. The whole process took
approximately 52 person-hours. We categorized the 520 errors into
seven types, as shown in Table 1.
**Table 1: The number of occurrences and percentage of each type of error.**
|Type|# of occurrences|Percentage (%)|
|---|---|---|
|Wrong theorem application|258|49.6|
|Invalid reference|79|15.2|
|Incorrect rewrite|61|11.7|
|Redundant introductions|56|10.8|
|Tactic misuse|44|8.5|
|Bullet misuse|19|3.7|
|Miscellaneous errors|3|0.6|
|Total|520|100|
**1. Wrong theorem application (49.6%): When there is a theo-**
rem or hypothesis of the form H: A =⇒ B and the current goal is B,
-----
ASE ’24, October 27-November 1, 2024, Sacramento, CA, USA Minghai Lu, Benjamin Delaware, and Tianyi Zhang
the apply H tactic can be used to replace the goal with H’s premise
(A). This tactic requires that the conclusion of H matches the goal.
Attempting to apply a theorem or hypothesis whose conclusion
does not match the goal will cause the tactic to fail. For example, in
Figure 3, the proof state has a hypothesis H: m = n, which cannot
be applied because it does not match the goal n = m. Applying H
fails with the error message “Unable to unify ‘m=n’ with ‘n=m’.”
n, m: nat
H: m = n
===========================
n = m
**Figure 3: apply H causes a wrong theorem application: “Un-**
**able to unify ‘m=n’ with ‘n=m’ ”.**
**2. Invalid reference (15.2%): LLMs can generate incorrect refer-**
ences, such as a hypothesis that does not exist in the local context
or a theorem that cannot be found in the environment. This is a
form of LLM hallucination [50].
**3. Incorrect rewrite (11.7%): Given an equation Heq, the rewrite**
Heq tactic replaces occurrences of the left-hand side of Heq in the
goal with the right-hand side of Heq. The error occurs when a theorem or hypothesis is used for rewriting but its left-hand side does
not match any subterms in the goal. For instance, consider the proof
state presented in Figure 5. The rewrite H2 tactic fails with the
error message “Found no subterm matching ‘b’ in the current goal.”
**4. Redundant introductions (10.8%): The intros tactic is used**
to move universally quantified variables and assumptions into the
local context. In some cases, LLMs produce proofs that use intros
to introduce a term with a name that is already in the local context,
or when there is nothing that can be moved to the local context.
**5. Tactic misuse (8.5%): Some tactics can only be used with**
specific arguments. This error occurs when a tactic is given an argument that does not satisfy its requirements. For example, destruct
and induction can only be applied to arguments with inductive
data types such as natural numbers. Both tactics fail with the error
“Not an inductive product” when applied to a non-inductive argument. Conversely, the tactic unfold cannot be applied to arguments
with inductive types, and will throw an error “Cannot turn inductive
_into an evaluable reference.”_
Other built-in tactics can be misused in a way specific to the
tactic. For example, the reflexivity tactic causes an error when
applied to a goal that is not an equality between equivalent terms,
while the contradiction tactic fails when the local context does
not contain a contradiction.
**6. Bullet misuse (3.7%): LLMs can generate proofs which misuse**
bullets in two ways: (1) the proof tries to proceed to the next goal
before the current one is solved, and (2) the proof uses the wrong
bullet to focus on a goal. Figure 6 illustrates this misuse through
two incorrect proofs. In the first proof, the second bullet symbol
should be “-” instead of “+” (Line 6). This leads to an error “Wrong
_bullet +: Expecting -.” In the second proof, simpl fails to completely_
solve the first subgoal (Line 12), so trying to proceed to the second
subgoal while the first is unsolved leads to an error “Wrong bullet
-: Current bullet - is not finished.”
**7. Miscellaneous errors (0.6%): Some errors do not fit into the**
previously defined categories. For example, LLMs can generate
special commands like Abort, which terminates a proof without an
error before it is complete.
A key insight from this formative study is that while LLMs often
generate proof scripts with the right high-level structure, they often
struggle with accurately addressing the sorts of low-level details
that hammers excel at. For example, GPT-3.5 often knows when to
use the induction tactic to decompose theorems into subgoals, but
often fails to generate the right sequence of tactics to prove each
subgoal. On the other hand, CoqHammer is good at addressing
these subgoals using ATPs. In addition, we found that many proof
generation errors are relatively straightforward to fix, e.g., through
rule-based transformation, without the need of regenerating the
proof from scratch. For instance, both cases of bullet misuse can be
repaired by systematically inserting the correct bullet.
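Because each category above is tied to characteristic Coq error messages, classifying a failure can be as simple as pattern-matching the message. The sketch below illustrates that idea using the messages quoted in this section; it is an illustrative reconstruction, not the labeling procedure used in the study, which was done manually, and the patterns for invalid references and redundant introductions are assumptions.

```python
# Illustrative classifier mapping Coq error messages to the categories in Table 1.
# Based on messages quoted in this section; some patterns are assumptions.
import re

ERROR_PATTERNS = [
    (r"Unable to unify",                                  "Wrong theorem application"),
    (r"Found no subterm matching",                        "Incorrect rewrite"),
    (r"was not found",                                    "Invalid reference"),        # pattern assumed
    (r"already used|nothing to introduce",                "Redundant introductions"),  # pattern assumed
    (r"Not an inductive product|Cannot turn inductive",   "Tactic misuse"),
    (r"Wrong bullet|bullet .* is not finished",           "Bullet misuse"),
]

def categorize(error_message: str) -> str:
    for pattern, category in ERROR_PATTERNS:
        if re.search(pattern, error_message, flags=re.IGNORECASE):
            return category
    return "Miscellaneous errors"

# e.g. categorize("Unable to unify 'm=n' with 'n=m'.") -> "Wrong theorem application"
```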
**4** **APPROACH**
Guided by our formative study, we propose PALM, a proof automation approach that combines LLMs and symbolic methods. Figure 4
provides an overview of PALM. PALM includes three components:
(1) a retrieval-augmented proof generation method, (2) a set of
repair mechanisms, and (3) a backtracking procedure.
**4.1** **The Overall Algorithm**
Algorithm 1 describes the overall generate-then-repair procedure
used by PALM. The inputs are a theorem statement 𝑡, an environment 𝑒𝑛𝑣, and a language model 𝐿𝑀. First, using the retrieval augmented generation (RAG) method described in Section 4.2, PALM
retrieves relevant premises from 𝑒𝑛𝑣 based on 𝑡 (Line 3). Next, PALM
creates a prompt using 𝑡 and the selected premises (Line 4), and
prompts 𝐿𝑀 to obtain an initial proof script (Line 5). PALM then
executes these tactics in Coq (Lines 6-15). If an error occurs, PALM
employs a set of repair mechanisms to fix the problem based on
the error message, the tactic that throws the error, and the current
proof state (Line 9). If PALM cannot fix an error, it invokes the
backtracking procedure (Line 11) described in Algorithm 2, which
attempts to fix the previous proof using CoqHammer. The proof is
successful if no goals remain unsolved after all tactics have been
executed (Line 16).
**Algorithm 1 Framework**
1: Input: Theorem statement 𝑡, Environment 𝑒𝑛𝑣, Language Model 𝐿𝑀
2: function Prove(𝑡, 𝑒𝑛𝑣, 𝐿𝑀)
3: _𝑃𝑆_ ← RetrievePremises(𝑡,𝑒𝑛𝑣)
4: _𝑝𝑡_ ← BuildPrompt(𝑡, 𝑃𝑆)
5: _𝑇𝐶𝑆_ ← _𝐿𝑀.query(𝑝𝑡_ )
6: **for 𝑡𝑐** ∈ 𝑇𝐶𝑆 do
7: _𝑒𝑟𝑟𝑜𝑟_ ← Coq.execute(𝑡𝑐)
8: **if 𝑒𝑟𝑟𝑜𝑟** **then**
9: _𝑟𝑒𝑝𝑎𝑖𝑟𝑒𝑑_ ← Repair(𝑒𝑟𝑟𝑜𝑟, 𝑡𝑐, current proof state)
10: **if not repaired then**
11: _𝑝𝑟𝑜𝑜𝑓_ ← Backtracking(current goal)
12: **if 𝑝𝑟𝑜𝑜𝑓** is not None then
13: Coq.execute(𝑝𝑟𝑜𝑜𝑓 )
14: **else**
15: **return False**
16: **return NoUnsolvedGoals()**
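Algorithm 1 translates almost directly into code once one fixes an interface to the proof assistant and the LLM. The Python skeleton below mirrors its control flow under stated assumptions: the `coq` session object, `llm`, and the injected helpers (`retrieve_premises`, `build_prompt`, `repair`, `backtrack`) are hypothetical stand-ins for the components described in Sections 4.2 to 4.5, not PALM's actual API.

```python
# Skeleton of the generate-then-repair loop in Algorithm 1 (illustrative only).
# All collaborators are passed in as parameters; they are hypothetical interfaces.
def prove(theorem, env, llm, coq, retrieve_premises, build_prompt, repair, backtrack):
    premises = retrieve_premises(theorem, env)      # Section 4.2: premise retrieval
    prompt = build_prompt(theorem, premises)        # Section 4.3: prompt construction
    tactics = llm.query(prompt)                     # initial proof script from the LLM
    for tactic in tactics:
        error = coq.execute(tactic)                 # run the tactic in the ITP
        if error is None:
            continue
        if repair(coq, error, tactic):              # Section 4.4: targeted repair
            continue
        proof = backtrack(coq, coq.current_goal())  # Section 4.5 / Algorithm 2
        if proof is None:
            return False                            # repair and backtracking both failed
        coq.execute_all(proof)
    return coq.no_unsolved_goals()                  # success iff no goals remain
```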
**Figure 4: Overview of PALM.**
In the rest of this section, we describe each component of PALM using the example shown in Figure 7. The proof script shown in the figure was generated by GPT-3.5 and contains several errors. The correct proof produced by PALM is shown in Figure 8.
a, b, c: nat
H1: a = b
H2: b = c
===========================
a = c
**Figure 5: rewrite H2 fails with: “Found no subterm matching ‘b’ in the current goal”.**
1 Theorem add_comm : forall n m, n + m = m + n.
2 (* wrong proof *)
3 Proof .
4 intros. induction n.
5 - auto.
6 + (* proof for the inductive case *)
7 Qed .
8
9 (* wrong proof *)
10 Proof .
11 intros. induction n.
12 - simpl.
13 - (* proof for the inductive case *)
14 Qed .
**Figure 6: Two examples of bullet misuse.**
1 Lemma sqr_le: forall a: Z, a <= a * a.
2 Proof .
3 intros. destruct a.
4 - reflexivity.
5 - induction p.
6 + simpl. ring.
7 + apply Z_le_dec.
8 + apply Z.le_refl.
9 - apply Z.eq_le_incl.
10 Qed .
**Figure 7: A theorem stating 𝑎 ≤ 𝑎 × 𝑎 for any integer 𝑎 (Line 1), and an erroneous proof (Lines 2 to 10) produced by GPT-3.5.**
1 Lemma sqr_le : forall a : Z, a <= a * a.
2 Proof .
3 intros. destruct a.
4 - reflexivity.
5 - hfcrush use: Zlt_le_succ, Pos2Z.is_pos,
6 Z.le_mul_diag_r.
7 - hfcrush.
8 Qed .
**Figure 8: The correct proof found by PALM.**
**4.2** **Premise Retrieval**
High-quality context is essential for LLMs to produce accurate responses. For theorem proving, we consider the previously proven theorems and definitions available in the environment as the context of constructing a proof script.
Given there are many available theorems and definitions, it is difficult to encode all of them in the proof generation prompt. Thus, we develop an information retrieval method to identify the ones relevant to the theorem to be proven. Specifically, PALM predicts relevant premises using Term Frequency-Inverse Document Frequency (TF-IDF) [40] and k nearest neighbors (KNN) [18]. While more advanced methods such as deep learning [13] can be more accurate, they also require significant amounts of training data and can take a longer time to make predictions [19, 30]. The premises predicted by the KNN algorithm are initially ranked by their TF-IDF scores. PALM then employs the BM25 algorithm [11] to rerank these premises based on their text similarity to the statement of the theorem, since we observed that BM25 tends to rank premises used in human-written proofs higher than TF-IDF.
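To make this retrieval recipe concrete, the following is a minimal sketch of TF-IDF/KNN candidate selection followed by BM25 reranking, using scikit-learn and the rank_bm25 package. It illustrates the general idea described above rather than PALM's actual implementation; the premise corpus and whitespace tokenization are deliberately simplified.

```python
# Sketch of premise retrieval: TF-IDF + KNN candidate selection, then BM25 reranking.
# Illustrative only; PALM's real implementation may tokenize and rank differently.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors
from rank_bm25 import BM25Okapi

def retrieve_premises(theorem: str, premises: list[str], k: int = 10) -> list[str]:
    # Rank all premises by TF-IDF cosine similarity to the theorem statement
    # and keep the k nearest neighbors as candidates.
    vectorizer = TfidfVectorizer()
    premise_vecs = vectorizer.fit_transform(premises)
    knn = NearestNeighbors(n_neighbors=min(k, len(premises)), metric="cosine")
    knn.fit(premise_vecs)
    _, idx = knn.kneighbors(vectorizer.transform([theorem]))
    candidates = [premises[i] for i in idx[0]]

    # Rerank candidates with BM25, which tends to rank premises used in
    # human-written proofs higher than raw TF-IDF does.
    bm25 = BM25Okapi([p.split() for p in candidates])
    scores = bm25.get_scores(theorem.split())
    reranked = sorted(zip(scores, candidates), key=lambda x: -x[0])
    return [p for _, p in reranked]
```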
**4.3** **Prompt Design**
To optimize the quality of the initial proof generated by the LLMs, we carefully designed the prompt used by PALM following strategies for few-shot in-context learning. Our strategy for designing this prompt was inspired by recent findings that LLMs can produce instructions that are superior or equivalent to those crafted by humans [53]. We first asked GPT-4 to infer the five most effective instructions for two theorems accompanied by human-written proof scripts. Next, we constructed candidate prompts by combining these five sets of instructions with the two examples, premises, and a new theorem to be proven. The inclusion of the two example theorems in this query was meant to demonstrate the correct Coq syntax and our desired proof style. The first author then manually examined 20 proofs produced by the LLM in response to these prompts and chose the prompt that yielded the highest quality proofs. Proof quality was assessed based on correctness and adherence to the instructions, e.g., using bullets for structure. When multiple proofs met these criteria, the simplest (i.e., shortest) correct proof was preferred. Figure 9 illustrates the final prompt template with an example and the response of GPT-3.5.
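Once the instructions, the two worked examples, the retrieved premises, and the target theorem are fixed, assembling the prompt itself is mechanical. The sketch below shows one plausible way to do so; the instruction text and field names are placeholders for illustration, not the exact prompt used by PALM (which is shown in Figure 9).

```python
# Hypothetical prompt builder: combines fixed instructions, few-shot examples,
# retrieved premises, and the target theorem. Wording is illustrative only.
INSTRUCTIONS = (
    "You are a Coq expert. Write a complete proof script for the theorem.\n"
    "Use bullets (-, +, *) to structure subgoals and end the proof with Qed."
)

def build_prompt(theorem: str, premises: list[str],
                 examples: list[tuple[str, str]]) -> str:
    parts = [INSTRUCTIONS]
    # Few-shot examples demonstrate Coq syntax and the desired proof style.
    for stmt, proof in examples:
        parts.append(f"Theorem:\n{stmt}\nProof:\n{proof}")
    # Retrieved premises expose relevant lemmas and definitions to the model.
    parts.append("Available premises:\n" + "\n".join(premises))
    parts.append(f"Theorem:\n{theorem}\nProof:")
    return "\n\n".join(parts)
```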
**4.4** **Repair Mechanisms**
The proofs produced by LLMs typically feature a good high-level structure that decomposes the proof into reasonable subgoals. However, most of these proofs are rejected by Coq due to errors, as
discussed in Section 3. To address this issue, we have developed a set of repair mechanisms to handle the common types of errors.
**Figure 9: An example of the prompt constructed by PALM, and the response of GPT-3.5.**
**Reference replacement:** When LLMs generate tactics referencing undefined theorems or hypotheses, PALM systematically searches in the local and global context for theorems and hypotheses with similar names, in order to find suitable replacements. Specifically, PALM first collects a set of candidates, including the relevant theorems selected by the retrieval method and the hypotheses in the local context. It then ranks these candidates using BM25 based on the similarity of their names to the undefined reference name. Then, PALM iteratively replaces the undefined reference name in the tactic with each candidate and asks Coq to re-execute the updated tactic, until the tactic succeeds. For example, if a candidate proof uses the tactic apply in_remove_all but in_remove_all does not exist, PALM searches for similar reference names. It first ranks the selected theorems and hypotheses based on the similarity of their names to in_remove_all. Then, PALM iteratively replaces in_remove_all with the candidates and eventually finds a tactic apply in_remove_all_preserve, which solves the goal.
**Renaming:** If a proof script tries to introduce a term using intros but the specified name already exists in the local context, PALM appends an apostrophe to the specified name and updates the tactic accordingly. For example, if the tactic intros H is used but H already exists in the local context as a hypothesis, PALM updates the tactic to intros H'. If there is nothing that can be moved to the local context, PALM simply drops the intros tactic from the current proof script.
**Bullet transformation:** PALM handles bullet misuse in two ways, depending on the two specific scenarios described in Section 3. First, if the current goal has been solved and the next goal is focused on using the wrong bullet, Coq will indicate the expected bullet, and PALM will simply update the proof to use it. Second, if the proof attempts to proceed to the next goal or finish the proof while there are still unsolved goals, PALM will delegate the repair effort to the backtracking procedure described in the next section.
To illustrate this, consider the last subgoal produced by destruct a, as shown in Line 9 of Figure 7. The apply Z.eq_le_incl tactic fails to fully solve this subgoal, so attempting to finish the proof with Qed (Line 10) causes an error. To fix this, PALM starts the backtracking procedure using the goal that results from the apply Z.eq_le_incl tactic. Eventually, the backtracking procedure replaces the apply Z.eq_le_incl tactic with the hfcrush tactic (Line 6 in Figure 8) and completely solves the goal.
**Premise augmentation:** LLMs can produce proof scripts that misuse a theorem, resulting in a wrong theorem application, wrong rewriting, or tactic misuse error. Despite this, the misused theorems are still potentially helpful: although used improperly in the proof script, they might still aid in solving the goal if used in a different manner. Based on this insight, PALM leverages CoqHammer to determine how to use these theorems correctly. Specifically, it employs the qsimpl tactic provided by CoqHammer, which accepts a list of theorems as arguments. qsimpl uses sophisticated heuristics to identify which theorems can be applied and simplifies the current goal accordingly. Similar tactics are available in other proof assistants. PALM executes qsimpl with a misused theorem as the argument, allowing it to automatically discover the correct usage of this theorem. For example, if a tactic apply Zlt_le_succ causes an error because its conclusion does not match the current goal, PALM will execute qsimpl use: Zlt_le_succ to utilize this theorem despite its initial misuse.
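As a concrete illustration of the reference replacement repair, the sketch below ranks in-scope names by BM25 similarity to an undefined reference and retries the failing tactic with each candidate. The `CoqSession` interface, whose `run` method is assumed to return an error message or `None` on success, is hypothetical; it stands in for whatever mechanism executes tactics and observes errors, and is not part of PALM's published API.

```python
# Sketch of the "reference replacement" repair: rank candidate names by BM25
# similarity to the undefined reference, then retry the tactic with each one.
# CoqSession is a hypothetical wrapper around a Coq process.
from rank_bm25 import BM25Okapi

def tokenize_name(name: str) -> list[str]:
    # Split identifiers such as "in_remove_all_preserve" into word tokens.
    return name.lower().split("_")

def repair_invalid_reference(coq: "CoqSession", tactic: str, bad_ref: str,
                             candidates: list[str]) -> str | None:
    bm25 = BM25Okapi([tokenize_name(c) for c in candidates])
    scores = bm25.get_scores(tokenize_name(bad_ref))
    ranked = [c for _, c in sorted(zip(scores, candidates), key=lambda x: -x[0])]
    for cand in ranked:
        fixed = tactic.replace(bad_ref, cand)
        if coq.run(fixed) is None:       # tactic accepted by Coq
            return fixed
    return None                           # repair failed; fall back to backtracking
```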
**4.5** **Backtracking**
_PALM leverages CoqHammer to solve goals that the initial script_ fails to prove due to errors that cannot be repaired. Although other proof automation techniques could be employed, we found hammers to be effective in practice. Applying a wrong tactic can result in a new goal that is more difficult, or even impossible, to prove. Thus, when CoqHammer fails to solve a goal, it is clear that PALM needs to backtrack to an earlier point in the proof to see if it can be solved instead. Particularly, if a tactic produces multiple subgoals, all these subgoals must be proven. If PALM fails to prove any of them, it needs to revert to the goal before that tactic. For example, when reasoning by induction, if the base case is proven but the inductive case fails, the entire induction attempt has failed, and we need to try a different proof strategy instead of induction.
Algorithm 2 presents our backtracking procedure, which aims to prove unsolved goals using CoqHammer. The input to the procedure is an unsolved goal 𝑔. If 𝑔 is successfully solved by CoqHammer, it returns the proof found by CoqHammer immediately (Lines 4-5). Otherwise, PALM reverts to the goal before the last applied tactic, and tries CoqHammer again. If the last command is a bullet (Line 6), it means that our algorithm will not be able to prove this subgoal using CoqHammer. When this happens, the algorithm identifies the tactic that produced this subgoal as the 𝑟𝑜𝑜𝑡 (Line 7), and then discards 𝑟𝑜𝑜𝑡 and all its associated subgoals (Line 8).
Having the LLM structure its proof using bullets helps PALM identify which parts of the proof need to be dropped when a subgoal fails. If the last tactic is not a bullet, PALM simply reverts to the goal before that tactic (Line 10). This loop continues until CoqHammer succeeds (Line 5) or no tactics remain, in which case the repair attempt fails (Line 3).
**Algorithm 2 Backtracking**
1: Input: Unsolved Goal 𝑔
2: function Backtrack(𝑔)
3: **while exist tactics do**
4: **if 𝑔** is solved by CoqHammer then
5: **return CoqHammer.getProof()**
6: **else if the last tactic is a bullet then**
7: _𝑟𝑜𝑜𝑡_ ← the tactic that produced this subgoal
8: discard 𝑟𝑜𝑜𝑡 and its subgoals
9: **else**
10: Coq.undo()
11: **return None**
We demonstrate the backtracking procedure on the induction p tactic and its subgoals (shown in Lines 5-8 of Figure 7). Initially, all tactics up to ring (Line 6) are executed without errors. However, the ring tactic fails and cannot be repaired, so PALM starts backtracking. The input is the goal resulting from the execution of the last tactic, simpl. Algorithm 2 invokes CoqHammer, but it fails to solve the goal, so PALM reverts the simpl tactic and invokes CoqHammer again, but CoqHammer still fails. At this point, the algorithm hits a bullet (“+”), and there are no remaining tactics that can be repaired using CoqHammer. This indicates that the first subgoal produced by induction p cannot be proven, leading to the failure of the entire induction attempt. Accordingly, PALM discards the induction p tactic (Line 5) along with the tactics corresponding to all its subgoals (Lines 6-8). Algorithm 2 then reverts to the second subgoal produced by destruct a, which is successfully solved by CoqHammer. The proof found by CoqHammer is presented in Lines 5-6 of Figure 8.
**Figure 10: Visualization of our backtracking repair algorithm. The red lines indicate the reverted tactics, and the dashed lines indicate the tactics found by CoqHammer during backtracking.**
**5** **EVALUATION**
Our experimental evaluation of our approach addresses four key research questions:
- RQ1: Is PALM more effective at proving theorems than other state-of-the-art proof automation approaches?
- RQ2: Can PALM generalize to other LLMs with different parameter sizes?
- RQ3: How much does each component of PALM contribute to its effectiveness?
- RQ4: Is PALM time-efficient?
We conducted experiments on a workstation with an AMD EPYC 7313 CPU, an NVIDIA A5500 GPU, and 512GB memory. The operating system was 64-bit Ubuntu 22.04 LTS.
**5.1** **Comparison baselines**
We compare PALM against three state-of-the-art proof automation approaches: Passport [39], Proverbot9001 [38] and Draft, Sketch, and Prove (DSP) [27]. Both Passport and Proverbot9001 are machine learning methods. Passport employs a Tree-LSTM [41] to model proof states, incomplete proof scripts, and identifiers in proofs. Proverbot9001 adopts an RNN to model manually engineered features of the proof states. DSP prompts LLMs to translate natural language proofs into formal proof sketches that outline high-level proof steps without low-level details. The informal proofs can be either written by humans or generated by LLMs. It then uses off-the-shelf proof automation tools such as hammers to fill in the gaps. Unlike DSP, PALM does not require informal proofs, and employs repair mechanisms and a backtracking procedure to fix proof errors. As human-written proofs were unavailable for the benchmarks used in our test set, in order to reproduce DSP, we used GPT-3.5 to generate informal proofs and sketches, and used CoqHammer as the underlying proof automation tool.
**5.2** **Benchmark construction**
Following prior work [38, 39, 48], we use the test set of CoqGym [48] as the evaluation dataset, which consists of 13,137 theorems from 27 open-source Coq projects. Since the theorems from the Verdi project used in our formative study are also included in CoqGym, we exclude them to avoid biases. As we ran the baselines on CoqGym, we found that Passport is compatible exclusively with Coq 8.9, and relies on CoqGym's original dataset. Proverbot9001, which does not use CoqGym, supports only newer versions of Coq, namely Coq 8.10, 8.11, and 8.12. To ensure fairness, our evaluation is conducted on a subset of CoqGym, including 10842 theorems that are compatible with all relevant versions of Coq. We implement PALM for Coq 8.10, 8.11, and 8.12, since many language features and standard libraries of Coq 8.9 are outdated [2].
**5.3** **Results**
In RQ1, we compare the performance of PALM using GPT-3.5 as the underlying LLM against the baselines. In RQ2, we evaluate the performance of PALM when using different LLMs.
_5.3.1_ _RQ1: Effectiveness of PALM._ Table 2 shows the number and percentage of theorems each approach can successfully prove. Compared with existing approaches, Passport, Proverbot9001, and DSP,
_PALM proves 180.4%, 136.7%, and 76.6% more theorems, respectively. Since Passport and Proverbot9001 use less powerful LSTM_ and RNN models, they prove fewer theorems than PALM and DSP, which leverage LLMs. DSP underperforms PALM for two reasons. First, we use GPT-3.5 to generate the informal proofs needed by DSP, which may introduce errors in the generation process. Second, since DSP does not repair errors, any error in the generated proof, no matter how big or small, will lead to a proof failure. Compared with DSP, PALM repairs errors in the proofs generated by the LLM, and performs backtracking to regenerate previous proof steps when hammers fail.
|Approach|# of Theorems Proven|
|---|---|
|_Passport_|1561 (14.4%)|
|_Proverbot9001_|1849 (17.1%)|
|_Draft, Sketch, and Prove (DSP)_|2478 (22.9%)|
|GPT-3.5|402 (3.7%)|
|+ PALM|4377 (40.4%)|
|GPT-4o|689 (6.4%)|
|+ PALM|4614 (42.6%)|
|Llama-3-70b-Instruct|386 (3.6%)|
|+ PALM|4155 (38.3%)|
|Llama-3-8b-Instruct|7 (0.1%)|
|+ PALM|3433 (31.7%)|
**Table 2: Theorems proved by each approach.**
Figure 11 presents a Venn diagram illustrating the theorems proven by each approach. All four approaches can collectively prove 4821 (44.5%) theorems, of which only 444 cannot be proven by PALM. The three baselines are able to prove 3616 distinct theorems in total, and PALM outperforms their combination by 21.0%. Moreover, PALM proves 1270 theorems that none of the other approaches can prove.
**Figure 11: Breakdown of theorems proven by each combination of approaches.**
We further analyze the complexity of the theorems that PALM proves, using the number of tactics in a proof as a proxy metric for theorem complexity. Figure 12 shows the distribution of theorems that are proven or not proven by PALM, categorized by the number of tactics in the ground-truth proofs. The average number of tactics in the ground-truth proofs is 5.84 and the median is 4, suggesting PALM is more effective with simpler proofs. Moreover, PALM can prove 129 theorems that require 20 tactics or more, outperforming Passport (11), Proverbot9001 (30) and DSP (48).
**Figure 12: Distribution of theorems that are proven or not proven by PALM, categorized by the number of tactics in the ground-truth proofs.**
_Finding 1: PALM is more effective than Passport, Proverbot9001 and DSP on our benchmarks, proving significantly more theorems. Notably, PALM proves 1270 theorems that none of the other approaches can prove. Additionally, PALM can prove a larger number of complex theorems than other approaches._
_5.3.2_ _RQ2: Generalizability of PALM._ To demonstrate the generalizability of PALM across LLMs with different parameter sizes, we further evaluate PALM with GPT-4o [6], Llama-3-70B-Instruct [5] and Llama-3-8B-Instruct [5] as the underlying LLMs.
Table 2 presents the theorems proven by each LLM individually, and by PALM when using them as underlying LLMs. We observe that all evaluated LLMs perform poorly when used alone, proving only 0.1%-6.4% of theorems. Augmenting these LLMs with PALM significantly improves the performance. With the most powerful GPT-4o model, PALM proves 4614 theorems, achieving a 5.5% absolute improvement compared with using the second most powerful LLM, GPT-3.5. This highlights the potential enhancements PALM can achieve with the latest LLMs. When using Llama-3-70B-Instruct, PALM proves 4155 theorems, which is comparable with the result obtained using GPT-3.5. When using the smaller Llama-3-8B-Instruct, PALM proves 3433 theorems, 21.6% fewer than when using GPT-3.5. Despite this, PALM still outperforms DSP by 38.5%, suggesting it can be effective even when using less powerful LLMs. Using all four LLMs, PALM successfully proves a total of 5210 theorems.
_Finding 2: PALM generalizes to other LLMs of different parameter sizes, and performs better when using larger LLMs._
_5.3.3_ _RQ3: Effectiveness of each component._ We have conducted an ablation study to evaluate the effectiveness of each component within PALM.
**Effectiveness of the repair mechanisms.** To study the effectiveness of each repair mechanism, we constructed four variants of PALM: PALM_𝑟𝑒𝑓, PALM_𝑟𝑒𝑛𝑎𝑚𝑒, PALM_𝑏𝑢𝑙𝑙𝑒𝑡, and PALM_𝑎𝑢𝑔. These variants disable the reference replacement, renaming, bullet
transformation, and premise augmentation mechanisms, respectively. Table 3 presents the evaluation results of PALM and the variants.
|Technique Variant|# of Theorems Proven|
|---|---|
|PALM_𝑟𝑒𝑓|4249 (39.2%)|
|PALM_𝑟𝑒𝑛𝑎𝑚𝑒|4175 (38.5%)|
|PALM_𝑏𝑢𝑙𝑙𝑒𝑡|4225 (39.0%)|
|PALM_𝑎𝑢𝑔|4094 (37.8%)|
|PALM_𝑏𝑎𝑐𝑘𝑡𝑟𝑎𝑐𝑘|702 (6.5%)|
|PALM_𝑟𝑒𝑡𝑟𝑖𝑒𝑣𝑒𝑟|4147 (38.2%)|
|PALM (GPT-3.5)|4377 (40.4%)|
**Table 3: Effectiveness of each PALM component.**
Overall, PALM consistently proves 3.0%-6.9% more theorems than each variant, indicating the importance of each of our repair mechanisms. Furthermore, all variants continue to prove 65.2%-71.5% more theorems than DSP, demonstrating that PALM remains effective even when equipped with partial repair mechanisms.
_Finding 3: Each repair mechanism of PALM contributes to its ability to prove theorems._
**Effectiveness of backtracking.** We evaluated the effectiveness of the backtracking procedure (Algorithm 2) by constructing an additional variant, called PALM_𝑏𝑎𝑐𝑘𝑡𝑟𝑎𝑐𝑘. It does not perform backtracking when it fails to repair an error, and immediately terminates the proof process instead. As shown in Table 3, PALM significantly outperforms PALM_𝑏𝑎𝑐𝑘𝑡𝑟𝑎𝑐𝑘 by 523.5%, indicating that the backtracking procedure is essential to proving many theorems.
_Finding 4: The backtracking procedure is essential to PALM's effectiveness, enabling it to prove 5× more theorems than only utilizing the repair mechanisms._
**Effectiveness of our premise retriever.** To investigate the effectiveness of our premise retriever, we constructed a variant called PALM_𝑟𝑒𝑡𝑟𝑖𝑒𝑣𝑒𝑟, which does not add any premises to the proof generation prompt. Table 3 shows that PALM outperforms PALM_𝑟𝑒𝑡𝑟𝑖𝑒𝑣𝑒𝑟 by 5.5%, which underscores that the premise retriever enables LLMs to produce higher-quality proof scripts.
_Finding 5: The premise retriever is useful to PALM, helping it to prove 5.5% more theorems._
_5.3.4_ _RQ4: Efficiency of PALM._ On average, PALM takes 32.89 seconds to successfully prove a theorem, while Passport, Proverbot9001 and DSP require 3.1, 4.7 and 8.2 seconds, respectively. The main source of time overhead for PALM is its use of CoqHammer. On average, CoqHammer is invoked 1.96 times per proof, with each invocation having a timeout of 10 seconds. This additional time is justified by PALM's ability to prove more complex theorems than other approaches. We further examined the time each approach takes on all theorems, regardless of whether they were successfully proven or not. On average, PALM takes 105.6 seconds, while Passport, Proverbot9001 and DSP take 67.2, 31.8 and 20.6 seconds, respectively. Additionally, each successful CoqHammer invocation averages 4.3 seconds, with 79.3% of successful invocations completing in under 5 seconds. This suggests that PALM could prove a substantial number of theorems even with a shorter CoqHammer time limit, while a longer limit would potentially benefit more complex proofs.
_Finding 6: On average, PALM takes longer to prove theorems than other approaches, but this overhead is acceptable given that PALM proves more complex theorems._
**5.4** **Case Studies**
Despite its effectiveness, PALM still fails to prove 59.6% of the theorems in our dataset. To understand the underlying reasons for these failures, we randomly sampled 100 theorems that PALM fails to prove and conducted a manual analysis. Table 4 outlines the 3 primary reasons for these failures.[3] To illustrate these reasons further, we now describe a typical case of failure for each.
|Reason|# occurrences|
|---|---|
|Premises not retrieved|58 (58%)|
|Premises retrieved but not used|14 (14%)|
|Tactics not used|39 (39%)|
**Table 4: The reasons causing PALM to fail.**
_5.4.1_ _Missing premises._ A key reason for PALM's failures (58%) is the omission of necessary premises in the retrieval process. Figure 13 presents a theorem that PALM fails to prove because a critical premise, reduceplus_cb1, was not retrieved. Hence this premise cannot be used by the LLM, hindering the proof process.
Theorem reducestar_cb1 :
forall (a : poly A0 eqA ltM) (b : list (Term A n))
(Q : list (poly A0 eqA ltM)),
reducestar A A0 A1 eqA invA minusA multA divA eqA_dec n
ltM ltM_dec Q
(s2p A A0 eqA n ltM a) b -> CombLinear (a :: Q) b.
(* Human written proof *)
intros a b Q H'; inversion H'; auto.
apply reduceplus_cb1; auto.
(* LLM generated proof *)
intros a b Q Hred. induction Hred.
- constructor. - apply CombLinear_1; auto.
**Figure 13: A failure case [9] because reduceplus_cb1 is not retrieved.**
_5.4.2_ _Premises retrieved but not used._ In 14% of the failures, even when a premise is successfully retrieved and included in the prompt, it may not be used by the LLM. Figure 14 shows a case where the lemmas map_insert and map_map_exchange are included in the prompt, but they are not used by the LLM, causing PALM's failure to prove the theorem. Although providing CoqHammer with unused retrieved premises during the backtracking process could solve such issues, we choose not to do so, as providing too many unrelated premises slows down CoqHammer and can lead it to time out.
3The columns in Table 4 sum to more than 100% because a single theorem can fail for multiple reasons.
Lemma map_insert_map:
forall A (f g h : A -> A) x (a : A) e,
(forall a, f (g a) = g (h a)) ->
map f (insert x a (map g e)) =
insert x (f a) (map g (map h e)).
(* Human written proof *)
intros. rewrite map_insert. f_equal.
eapply map_map_exchange. eauto.
(* LLM generated proof *)
intros. apply map_insert_eq. apply H.
**Figure 14: A failure case [10] where the LLM does not use**
**map_insert and map_map_exchange provided in the prompt.**
_5.4.3_ _Tactics not used. Some theorems require specific tactics to_
be proven, and PALM will fail if these tactics are not included in
the proof script generated by the LLM. This accounts for 39% of
the failure cases. Figure 15 shows an example where the proof of a
theorem requires the use of the induction tactic. Since the initial
proof script did not include this tactic, and both CoqHammer and
our repair mechanisms do not perform induction, PALM cannot
prove this theorem.
Lemma last_holder'_unlock_none : forall tr h c,
last_holder' h tr = Some c ->
last_holder' h (tr ++ [(Client c, inl Unlock)]) = None.
(* Human written proof *)
induction tr; intros; simpl in *; repeat break_match;
intuition. congruence.
(* LLM generated proof *)
intros tr h c i n H1 H2 H3 H4.
apply (last_holder'_no_out_inv tr h (Client c) n).
apply H1.
**Figure 15: A failure case [8] where the LLM does not perform**
**induction.**
**6** **DISCUSSION**
**6.1** **Threats to Validity**
**Internal validity. One threat to internal validity comes from the**
inherent randomness of LLMs. This randomness is due to the use
of temperature sampling [12, 54] as the decoding strategy, where
LLMs randomly select the next token based on a probability distribution. To reduce this threat, we conduct large-scale experiments
using four state-of-the-art and widely used LLMs: GPT-3.5, GPT-4o,
Llama-3-70B-Instruct, and Llama-3-8B-Instruct, as the underlying
LLMs for PALM. We evaluate their performance across a benchmark
consisting of 10842 theorems from diverse domains. The consistent
results observed in our experiments help reduce this threat. Another threat is that due to the limitation of computational resources
and evaluation time, we only run each of our experiments once;
this may introduce statistical biases into our results.
**External validity.** The threat to external validity concerns the generalizability of our experimental results. We implement and evaluate only on Coq, while other widely used ITPs, such as Isabelle, HOL Light, and Mizar, are not included. Nonetheless, we believe the approach and algorithm in PALM can be easily applied to other ITPs that use tactics for proof construction and support automation tools like CoqHammer. However, the specifics will need to be adapted for different tactic languages. For example, Isabelle structures subgoals using the ‘case‘ keyword instead of bullets, thus the bullet transformation needs to be modified. We plan to extend PALM's implementation to other ITPs, and evaluate it across more diverse projects.
**Construct validity.** One potential threat to construct validity is that we use the number of tactics in ground-truth proofs as a proxy metric for theorem complexity. This metric may not accurately reflect the actual difficulty of proving a theorem.
**6.2** **Limitations and Future Work**
_PALM fundamentally depends on the initial proof script generated_
by LLMs. If the LLM generates a completely wrong initial proof,
_PALM struggles to fix it. Future improvements to PALM could_
involve leveraging LLMs to repair incorrect proofs [22] or sampling
multiple initial proofs.
The initial proof script sometimes fails to use relevant tactics,
such as a user-defined tactic with an ambiguous or uninformative
name. As a result, PALM cannot effectively prove theorems that
depend on custom user-defined tactics. This can be improved by
adopting more powerful retrievers [49] that learn from the usage
patterns of these user-defined tactics.
Finally, we did not spend significant effort optimizing the prompt
used by PALM, since our focus was not on prompt engineering.
Different combinations of instructions or using a more advanced
prompting design, such as Chain-of-Thought [45] and Least-to-Most [52] prompting, may improve the performance of PALM.
These approaches are worth exploring in future work.
**7** **RELATED WORK**
_Machine Learning for Formal Verification. There have been vari-_
ous machine learning-based techniques that aim to automatically
generate formal proofs for different ITPs. ASTactic [48] is the first
deep learning-based proof generation technique for ITPs. It leverages Tree-LSTM [41] to model proof states with all Coq terms
parsed into abstract syntax trees (ASTs), and searches for a complete proof via depth-first search (DFS). Many other techniques
have been proposed to enhance the performance of ASTactic. TacTok [21], for example, models not only the proof states, but also the
incomplete proof scripts to provide more context information. To
enlarge the search space, Diva [20] combines multiple models that
are trained with different hyperparameters, such as learning rate
and embedding size, and different orderings of training data. Passport [39] further extends ASTactic and TacTok by adding new encoding mechanisms for identifiers in proof scripts. These techniques
are all evaluated on the CoqGym [48] dataset. Proverbot9001 [38]
learns to predict the tactics and arguments using an RNN model and
a set of manually engineered features. It also leverages advanced
search algorithms such as A-star, and several pruning techniques.
Unlike existing machine learning methods that require significant training, PALM leverages LLMs and does not require any
training or fine-tuning. Instead of using search strategies, PALM
employs repair mechanisms and a backtracking procedure to address errors and solve the goals that LLMs fail to prove.
_Language Models for Formal Verification. Recently, there has been_
considerable interest in applying LLMs to formal verification. The
-----
Proof Automation with Large Language Models ASE ’24, October 27-November 1, 2024, Sacramento, CA, USA
most related work is Draft, Sketch, and Prove (DSP) [27]. Similar to PALM, DSP also synergizes LLMs and automated theorem
provers. DSP uses LLMs to translate natural language proofs (i.e.,
informal proofs) into formal proof sketches that outline high-level
steps without low-level details. Then, it uses off-the-shelf proof
automation tools such as hammers to fill in the gaps. In contrast,
_PALM does not require informal proofs to guide the generation of_
machine-checked proofs. While DSP reports a failure once proof
automation tools cannot fill in a gap, PALM employs a backtracking
procedure to regenerate previous proof steps when hammers fail.
Additionally, PALM adopts repair mechanisms to address common
errors made by LLMs.
Minerva [33] is an LLM trained on mathematical datasets and
achieves state-of-the-art performance on quantitative reasoning
tasks. Baldur [22] fine-tunes Minerva to create (1) a proof genera_tion model that generates whole proofs given a theorem, and (2)_
a proof repair model that repairs an incorrect proof given the error message. To train the proof generation model, it constructs
a dataset by concatenating the proof steps of each theorem from
the PISA dataset [25]. The PISA dataset consists of 183K theorems
collected from the Isabelle standard library [35] and the Archive
of Formal Proofs [1]. To train the proof repair model, it samples
from the proof generation model for each theorem in PISA, and
records the error messages returned from the ITP for each erroneous proof. The dataset comprises tuples of incorrect proofs, error
messages, and correct proofs. Compared with Baldur, PALM adopts
error-specific repair mechanisms to effectively address the errors.
Although Baldur does not perform any search, it needs 64 samples
for proof generation and 32 samples for proof repair to achieve high
performance, while PALM only samples once from LLMs. We have
not compared the performance of PALM with Baldur because the
Minerva model used by Baldur is not open-sourced, and reproducing Baldur by fine-tuning publicly accessible LLMs such as Llama-3
would require extensive computational resources. For instance, the
proof generation model of Baldur is fine-tuned on 64 Google TPU
v3, with a total of 1024 GB memory. To fine-tune Llama-3-8b with
the same settings, over 512 GB GPU memory (around 7 A100s) is
required. Furthermore, the proof repair dataset of Baldur consists
of 150K tuples of wrong proofs, error messages, and correct proofs
in Isabelle. To extend Baldur for Coq, a similar dataset would need
to be constructed for Coq.
Thor [26] augments the PISA dataset [25] by invoking SledgeHammer [36] in each step of the proofs in the dataset, and adding
successful invocations of SledgeHammer to the dataset. Thor trains
a decoder-only transformer model (700M parameters) on this enhanced dataset. This model is designed to learn when to invoke
hammers during a proof, and guide a search process. Unlike Thor,
which does not perform premise retrieval, PALM adopts a premise
retriever to enhance the performance of LLMs. Instead of performing a computationally expensive proof search, PALM leverages
LLMs to produce well-structured initial proofs and adopts repair
mechanisms to fix common errors. PALM is not directly comparable
with Thor, because Thor’s model is specifically trained for Isabelle
proofs rather than Coq, which cannot be reproduced on Coq with
reasonable effort.
Copra [42] uses the state-of-the-art GPT-4 model to guide a
depth-first search process. In each step, GPT-4 is prompted with
the proof state, previous proof steps, the incorrect steps, and the
corresponding error messages to avoid recurrent errors. Copra
can be further augmented by incorporating premise retrieval and
generating informal proofs from informal theorem statements if
they exist. However, Copra’s effectiveness is highly dependent on
the capability of the LLMs it uses. For example, Copra proves 26.63%
of the theorems in the miniF2F dataset [51] using GPT-4, but only proves
9.02% using GPT-3.5. Moreover, Copra does not directly repair
incorrect tactics and instead prompts the LLM with incorrect tactics
and error messages. In contrast, PALM adopts a set of symbolicbased repair mechanisms to correct erroneous tactics effectively,
and demonstrates consistent performance across LLMs.
**8** **CONCLUSION**
Large Language Models (LLMs) have shown promise in automatically generating informal proofs in natural language, but these
systems have proven to be less effective at generating formal proofs
in interactive theorem provers (ITPs). This paper described a formative study that identified common errors made by GPT-3.5 when
generating machine-checked proofs. Guided by these findings, we
proposed PALM, which combines LLMs and symbolic methods to
automatically prove theorems in an ITP. PALM adopts a premise
retriever to select relevant premises such as lemmas and definitions,
in order to enhance the quality of proofs generated by an LLM. It
additionally uses a set of repair mechanisms and a backtracking algorithm to correct errors in proof scripts generated by an LLM. We
evaluated PALM on a dataset of 10842 theorems. In the evaluation,
_PALM significantly outperforms existing approaches, and demon-_
strates its generalizability across different LLMs. Furthermore, our
ablation study suggests that all components of PALM are effective.
**ACKNOWLEDGEMENTS**
We thank Prasita Mukherjee and the anonymous reviewers for their
valuable suggestions and feedback. This work is supported in part
by NSF grants ITE-2333736, CCF-2340408, and CCF-2321680.
**REFERENCES**
[[1] 2024. Archive of Formal Proofs. https://www.isa-afp.org/index.html.](https://www.isa-afp.org/index.html)
[[2] 2024. Changelogs of Coq. https://coq.inria.fr/doc/V8.12.0/refman/changes.html.](https://coq.inria.fr/doc/V8.12.0/refman/changes.html)
[[3] 2024. Eprover. http://www.eprover.org.](http://www.eprover.org)
[[4] 2024. GPT-3.5-turbo. https://platform.openai.com/docs/models/gpt-3-5-turbo.](https://platform.openai.com/docs/models/gpt-3-5-turbo)
[[5] 2024. Llama 3. https://llama.meta.com/llama3.](https://llama.meta.com/llama3)
[[6] 2024. GPT-4o. https://openai.com/index/hello-gpt-4o.](https://openai.com/index/hello-gpt-4o)
[[7] 2024. PALM’s source code. https://github.com/lachinygair/PALM.](https://github.com/lachinygair/PALM)
[8] 2024. A failure case because inductive reasoning is not used. [https:](https://github.com/uwplse/verdi/blob/b7f77848819878b1faf0e2e6a730f9bb850130be/theories/Systems/LiveLockServ.v#L1112)
[//github.com/uwplse/verdi/blob/b7f77848819878b1faf0e2e6a730f9bb850130be/](https://github.com/uwplse/verdi/blob/b7f77848819878b1faf0e2e6a730f9bb850130be/theories/Systems/LiveLockServ.v#L1112)
[theories/Systems/LiveLockServ.v#L1112.](https://github.com/uwplse/verdi/blob/b7f77848819878b1faf0e2e6a730f9bb850130be/theories/Systems/LiveLockServ.v#L1112)
[9] 2024. A failure case because key premises are not retrieved.
[https://github.com/coq-community/buchberger/blob/](https://github.com/coq-community/buchberger/blob/92f377ac39c0aec3e6ef77d4c2b26318990e2145/theories/Pcomb.v#L703)
[92f377ac39c0aec3e6ef77d4c2b26318990e2145/theories/Pcomb.v#L703.](https://github.com/coq-community/buchberger/blob/92f377ac39c0aec3e6ef77d4c2b26318990e2145/theories/Pcomb.v#L703)
[[10] 2024. A failure case because the LLM does not use premises. https://github.com/](https://github.com/coq-community/dblib/blob/master/src/Environments.v#L550)
[coq-community/dblib/blob/master/src/Environments.v#L550.](https://github.com/coq-community/dblib/blob/master/src/Environments.v#L550)
[[11] 2024. BM25, wikipedia. https://en.wikipedia.org/wiki/Okapi_BM25.](https://en.wikipedia.org/wiki/Okapi_BM25)
[12] David H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski. 1985. A learning
algorithm for Boltzmann machines. Cognitive science 9, 1 (1985), 147–169.
[13] Alexander A. Alemi, François Chollet, Niklas Een, Geoffrey Irving, Christian
Szegedy, and Josef Urban. 2016. DeepMath - deep sequence models for premise
selection. In Proceedings of the 30th International Conference on Neural Information
_Processing Systems (Barcelona, Spain) (NIPS’16). Curran Associates Inc., Red Hook,_
NY, USA, 2243–2251.
[14] Haniel Barbosa, Clark W. Barrett, Martin Brain, Gereon Kremer, Hanna Lachnitt,
Makai Mann, Abdalrhman Mohamed, Mudathir Mohamed, Aina Niemetz, Andres
Nötzli, Alex Ozdemir, Mathias Preiner, Andrew Reynolds, Ying Sheng, Cesare
Tinelli, and Yoni Zohar. 2022. cvc5: A Versatile and Industrial-Strength SMT
Solver. In Tools and Algorithms for the Construction and Analysis of Systems - 28th International Conference, TACAS 2022, Held as Part of the European Joint
_Conferences on Theory and Practice of Software, ETAPS 2022, Munich, Germany,_
_April 2-7, 2022, Proceedings, Part I (Lecture Notes in Computer Science, Vol. 13243),_
Dana Fisman and Grigore Rosu (Eds.). Springer, 415–442. [https://doi.org/10.](https://doi.org/10.1007/978-3-030-99524-9_24)
[1007/978-3-030-99524-9_24](https://doi.org/10.1007/978-3-030-99524-9_24)
[15] Łukasz Czajka and Cezary Kaliszyk. 2018. Hammer for Coq: Automation for
[Dependent Type Theory. J. Autom. Reason. 61, 1–4 (jun 2018), 423–453. https:](https://doi.org/10.1007/s10817-018-9458-4)
[//doi.org/10.1007/s10817-018-9458-4](https://doi.org/10.1007/s10817-018-9458-4)
[16] Leonardo de Moura and Nikolaj Bjørner. 2008. Z3: An Efficient SMT Solver. In
_Tools and Algorithms for the Construction and Analysis of Systems, C. R. Ramakr-_
ishnan and Jakob Rehof (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg,
[337–340. https://doi.org/10.1007/978-3-540-78800-3_24](https://doi.org/10.1007/978-3-540-78800-3_24)
[17] Leonardo de Moura, Soonho Kong, Jeremy Avigad, Floris van Doorn, and Jakob
von Raumer. 2015. The Lean Theorem Prover (System Description). In Automated Deduction - CADE-25, Amy P. Felty and Aart Middeldorp (Eds.). Springer
International Publishing, Cham, 378–388.
[18] Sahibsingh A Dudani. 1976. The distance-weighted k-nearest-neighbor rule. IEEE
_Transactions on Systems, Man, and Cybernetics 4 (1976), 325–327._
[19] Michael Färber and Cezary Kaliszyk. 2015. Random forests for premise selection.
In International Symposium on Frontiers of Combining Systems. Springer, 325–340.
[20] Emily First and Yuriy Brun. 2022. Diversity-driven automated formal verification.
In Proceedings of the 44th International Conference on Software Engineering. 749–
761.
[21] Emily First, Yuriy Brun, and Arjun Guha. 2020. TacTok: semantics-aware proof
synthesis. Proceedings of the ACM on Programming Languages 4, OOPSLA (2020),
1–31.
[22] Emily First, Markus Rabe, Talia Ringer, and Yuriy Brun. 2023. Baldur: Whole-proof generation and repair with large language models. In Proceedings of the
_31st ACM Joint European Software Engineering Conference and Symposium on the_
_Foundations of Software Engineering. 1229–1241._
[23] Barney Glaser and Anselm Strauss. 2017. Discovery of grounded theory: Strategies
_for qualitative research. Routledge._
[24] Beverley Hancock, Elizabeth Ockleford, Kate Windridge, et al. 2001. An introduction to qualitative research. Trent focus group London.
[25] Albert Qiaochu Jiang, Wenda Li, Jesse Michael Han, and Yuhuai Wu. 2021. Lisa:
Language models of isabelle proofs. In 6th Conference on Artificial Intelligence
_and Theorem Proving. 378–392._
[26] Albert Qiaochu Jiang, Wenda Li, Szymon Tworkowski, Konrad Czechowski,
Tomasz Odrzygóźdź, Piotr Miłoś, Yuhuai Wu, and Mateja Jamnik. 2022. Thor:
Wielding hammers to integrate language models and automated theorem provers.
_Advances in Neural Information Processing Systems 35 (2022), 8360–8373._
[27] Albert Q Jiang, Sean Welleck, Jin Peng Zhou, Wenda Li, Jiacheng Liu, Mateja
Jamnik, Timothée Lacroix, Yuhuai Wu, and Guillaume Lample. 2022. Draft, sketch,
and prove: Guiding formal theorem provers with informal proofs. arXiv preprint
_arXiv:2210.12283 (2022)._
[28] Cezary Kaliszyk and Josef Urban. 2015. HOL (y) Hammer: Online ATP service
for HOL Light. Mathematics in Computer Science 9 (2015), 5–22.
[29] Gerwin Klein, Kevin Elphinstone, Gernot Heiser, June Andronick, David Cock,
Philip Derrin, Dhammika Elkaduwe, Kai Engelhardt, Rafal Kolanski, Michael
Norrish, et al. 2009. seL4: Formal verification of an OS kernel. In Proceedings of
_the ACM SIGOPS 22nd symposium on Operating systems principles. 207–220._
[30] Daniel Kühlwein, Twan van Laarhoven, Evgeni Tsivtsivadze, Josef Urban, and
Tom Heskes. 2012. Overview and evaluation of premise selection techniques
for large theory mathematics. In Automated Reasoning: 6th International Joint
_Conference, IJCAR 2012, Manchester, UK, June 26-29, 2012. Proceedings 6. Springer,_
378–392.
[31] Daniel Kästner, Ulrich Wünsche, Jörg Barrho, Marc Schlickling, Bernhard Schommer, Michael Schmidt, Christian Ferdinand, Xavier Leroy, and Sandrine Blazy.
2018. CompCert: Practical experience on integrating and qualifying a formally
verified optimizing compiler. In ERTS 2018: Embedded Real Time Software and
_[Systems. SEE. http://xavierleroy.org/publi/erts2018_compcert.pdf](http://xavierleroy.org/publi/erts2018_compcert.pdf)_
[32] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin,
Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel,
et al. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks.
_Advances in Neural Information Processing Systems 33 (2020), 9459–9474._
[33] Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk
Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo
Gutman-Solo, et al. 2022. Solving quantitative reasoning problems with language
models, 2022. URL https://arxiv.org/abs/2206.14858 (2022).
[34] Tobias Nipkow, Markus Wenzel, and Lawrence C. Paulson. 2002. Isabelle/HOL: a
_proof assistant for higher-order logic. Springer-Verlag, Berlin, Heidelberg._
[35] Tobias Nipkow, Markus Wenzel, and Lawrence C Paulson. 2002. Isabelle/HOL: a
_proof assistant for higher-order logic. Springer._
[36] Lawrence C. Paulson and Jasmin Christian Blanchette. 2012. Three years of experience with Sledgehammer, a Practical Link Between Automatic and Interactive
Theorem Provers. In IWIL 2010. The 8th International Workshop on the Implementation of Logics (EPiC Series in Computing, Vol. 2), Geoff Sutcliffe, Stephan Schulz,
[and Eugenia Ternovska (Eds.). EasyChair, 1–11. https://doi.org/10.29007/36dt](https://doi.org/10.29007/36dt)
[37] Alexandre Riazanov and Andrei Voronkov. 2002. The design and implementation
of VAMPIRE. AI Commun. 15, 2,3 (aug 2002), 91–110.
[38] Alex Sanchez-Stern, Yousef Alhessi, Lawrence Saul, and Sorin Lerner. 2020. Generating correctness proofs with neural networks. In Proceedings of the 4th ACM
_SIGPLAN International Workshop on Machine Learning and Programming Lan-_
_guages. 1–10._
[39] Alex Sanchez-Stern, Emily First, Timothy Zhou, Zhanna Kaufman, Yuriy Brun,
and Talia Ringer. 2023. Passport: Improving automated formal verification using
identifiers. ACM Transactions on Programming Languages and Systems 45, 2
(2023), 1–30.
[40] Karen Sparck Jones. 1972. A statistical interpretation of term specificity and its
application in retrieval. Journal of documentation 28, 1 (1972), 11–21.
[41] Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved
semantic representations from tree-structured long short-term memory networks.
_arXiv preprint arXiv:1503.00075 (2015)._
[42] Amitayush Thakur, Yeming Wen, and Swarat Chaudhuri. 2023. A language-agent
approach to formal theorem-proving. arXiv preprint arXiv:2310.04353 (2023).
[43] The Coq Development Team. 2024. The Coq Reference Manual – Release 8.19.0.
[https://coq.inria.fr/doc/V8.19.0/refman.](https://coq.inria.fr/doc/V8.19.0/refman)
[44] The Coq Development Team. 2024. Programmable proof search – Release 8.19.0.
[https://coq.inria.fr/doc/V8.19.0/refman/proofs/automatic-tactics/auto.html.](https://coq.inria.fr/doc/V8.19.0/refman/proofs/automatic-tactics/auto.html )
[45] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi,
Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning
in large language models. Advances in neural information processing systems 35
(2022), 24824–24837.
[46] James R. Wilcox, Doug Woos, Pavel Panchekha, Zachary Tatlock, Xi Wang,
Michael D. Ernst, and Thomas Anderson. 2015. Verdi: a framework for implementing and formally verifying distributed systems. SIGPLAN Not. 50, 6 (jun
[2015), 357–368. https://doi.org/10.1145/2813885.2737958](https://doi.org/10.1145/2813885.2737958)
[47] Yuhuai Wu, Albert Qiaochu Jiang, Wenda Li, Markus Rabe, Charles Staats, Mateja
Jamnik, and Christian Szegedy. 2022. Autoformalization with large language
models. Advances in Neural Information Processing Systems 35 (2022), 32353–
32368.
[48] Kaiyu Yang and Jia Deng. 2019. Learning to prove theorems via interacting
with proof assistants. In International Conference on Machine Learning. PMLR,
6984–6994.
[49] Kaiyu Yang, Aidan Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing
Yu, Saad Godil, Ryan J Prenger, and Animashree Anandkumar. 2024. Leandojo:
Theorem proving with retrieval-augmented language models. Advances in Neural
_Information Processing Systems 36 (2024)._
[50] Shizhuo Dylan Zhang, Talia Ringer, and Emily First. 2023. Getting More out of
Large Language Models for Proofs. arXiv preprint arXiv:2305.04369 (2023).
[51] Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. 2021. MiniF2F: a cross-system benchmark for formal Olympiad-level mathematics. _arXiv preprint_
_arXiv:2109.00110 (2021)._
[52] Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang,
Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, et al. 2022. Least-to-most prompting enables complex reasoning in large language models. arXiv
_preprint arXiv:2205.10625 (2022)._
[53] Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis,
Harris Chan, and Jimmy Ba. 2022. Large language models are human-level
prompt engineers. arXiv preprint arXiv:2211.01910 (2022).
[54] Yuqi Zhu, Jia Li, Ge Li, YunFei Zhao, Zhi Jin, and Hong Mei. 2024. Hot or Cold?
Adaptive Temperature Sampling for Code Generation with Large Language
Models. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 38.
437–445.
-----
| [
"Minghai, Lu",
"Benjamin, Delaware",
"Tianyi, Zhang"
] | 2024-09-22T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2409.14274v1 | https://arxiv.org/abs/2409.14274 | https://www.semanticscholar.org/paper/80810e2dee551bbe5db9876f0e55de72621b609d |
Proof By Abduction in Isabelle/HOL | When proving an inductive problem, we often prove auxiliary lemmas that are useful for proving the original problem. If these auxiliary lemmas themselves are challenging, we must introduce more lemmas to prove these lemmas. To automate such multi-step conjecturing, we developed Abduction Prover. Given a proof goal, Abduction Prover conjectures a series of lemmas and attempts to prove the original goal using these lemmas. Our working prototype of Abduction Prover for Isabelle/HOL is publicly available on GitHub. | null | [
"Yutaka, Nagashima",
"Daniel Sebastian, Goc"
] | 2024-09-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
|
Proof Recommendation System for the HOL4 Theorem Prover | N/A | null | # Proof Recommendation System for the HOL4 Theorem Prover
Nour Dekhil, Adnan Rashid, and Sofiène Tahar
Department of Electrical and Computer Engineering
Concordia University, Montreal, QC, Canada
_{n dekhil,rashid,tahar}@ece.concordia.ca_
We introduce a proof recommender system for the HOL4 theorem prover [1]. Our tool is
built upon a transformer-based model [2] designed specifically to provide proof assistance in
HOL4. The model is trained to discern theorem proving patterns from extensive libraries of
HOL4 containing proofs of theorems. Consequently, it can accurately predict the next tactic(s)
(proof step(s)) based on the history of previously employed tactics. The tool operates by
reading a given sequence of tactics already used in a proof process (in our case, it contains at
least three tactics), referred to as the current proof state, and provides recommendations for
the next optimal proof step(s).
Figure 1 depicts the major steps taken to develop the proof recommendation tool. The
initial block (highlighted in blue color) refers to the construction of a HOL4 proofs dataset. In
the dataset construction phase, we abstract the proof scripts to include only the tactics
used to prove a theorem or a lemma. This process involves systematically parsing each .sml file,
which contains the proof scripts written in HOL4. Within each file, we identify all theorems
and lemmas that are subject to proof. Once these target points are identified, the next task is
to extract the specific tactics that were used to prove each theorem or lemma. This involves
traversing the proof script to capture only those commands that directly contribute to the
proof, omitting extraneous elements that do not influence the proof’s logical flow.
[Figure: flowchart with four blocks — Dataset Construction (HOL4 theory .sml files → full proof sequences (.txt) → proof state/future step pairs (.csv)), Model Training (vocabulary tokens (.json), tokenization, training/testing split, trained models (.ckpt)), Evaluation (testing set (.csv)), and the Proof Recommendation System (proof state → best model → proof step recommendation).]
Figure 1: Proof Recommendation System
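To make the extraction step concrete, the snippet below is a minimal sketch rather than the authors' parser: it assumes a simplified `Theorem ... Proof ... QED` layout, a hypothetical `theories/` input directory, and treats `>>`, `\\`, and `THEN`/`THEN1`/`THENL` as tactic separators, keeping only the head of each proof step.

```python
import re
from pathlib import Path

# Tactic separators commonly used in HOL4 proof scripts (illustrative, not exhaustive):
# `>>`, `\\`, and THEN/THEN1/THENL.
SEPARATORS = re.compile(r">>|\\\\|\bTHEN[1L]?\b")

# Matches the body between `Proof` and `QED`; a full parser would also need to
# handle `store_thm`/`prove`-style definitions.
PROOF_BLOCK = re.compile(r"\bProof\b(.*?)\bQED\b", re.DOTALL)


def extract_tactic_sequences(sml_file: Path) -> list:
    """Return one tactic-name sequence per proof found in a HOL4 .sml script."""
    text = sml_file.read_text(encoding="utf-8", errors="ignore")
    sequences = []
    for body in PROOF_BLOCK.findall(text):
        steps = []
        for chunk in SEPARATORS.split(body):
            # Keep only the head of each step (e.g. `rw[...]` -> `rw`) so that
            # theorem-specific arguments are abstracted away.
            head = chunk.strip().split("[")[0].strip()
            if head:
                steps.append(head.split()[0])
        if len(steps) >= 3:  # the tool assumes at least three tactics per proof
            sequences.append(steps)
    return sequences


if __name__ == "__main__":
    for path in Path("theories").glob("**/*.sml"):  # hypothetical input directory
        for seq in extract_tactic_sequences(path):
            print(" ".join(seq))
```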
We created large proof sequences datasets (Datasets 1-5) from five HOL4 theories [3–7]
developed by the Hardware Verification Group (HVG) of Concordia University alongside an
already available dataset created using the real arithmetic theory of HOL4 (Dataset 6) [8]. For
experimental purposes, we combined all datasets into Dataset 7. Our objective is to predict
the subsequent tactic from a sequence of previously employed tactics. To accomplish this, we
approach this challenge as a multi-label classification task using language models. To facilitate
this, we restructure the dataset into pairs of current proof states and possible future tactics.
More details on the datasets used for classification are given in Table 1.
Table 1: Summary of the used Datasets
| | Dataset 1 | Dataset 2 | Dataset 3 | Dataset 4 | Dataset 5 | Dataset 6 | Dataset 7 |
|---|---|---|---|---|---|---|---|
| **Distinct Tactics** | 115 | 132 | 26 | 44 | 32 | 89 | 162 |
| **Proofs** | 1,873 | 2,475 | 153 | 295 | 61 | 279 | 5,136 |
| **Proof States** | 43,167 | 57,602 | 2,973 | 7,371 | 1,784 | 3,259 | 116,156 |
Our primary objective is to predict the subsequent tactic in a sequence of previously applied
tactics during a proof. To address this, we framed the problem as a multi-label classification
task, which is particularly suitable for scenarios where multiple correct outcomes are possible.
We restructured the original dataset into pairs, with each pair consisting of a current proof
state (a sequence of tactics that have already been applied) and the corresponding possible
future tactics that could logically follow. This restructuring allows the model to learn the
relationships between different proof states and their subsequent steps, enabling it to make
informed predictions about the next optimal tactic.
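As a rough illustration of this restructuring, the sketch below (not the authors' code) slides over each extracted tactic sequence and emits one (proof state, next tactic) pair per position, using a minimum history of three tactics as described above; the file name `proof_state_pairs.csv` is a placeholder.

```python
import csv


def make_training_pairs(proofs, min_history=3):
    """Turn each proof (an ordered list of tactic names) into (proof state, next tactic) pairs.

    The proof state is the space-joined sequence of tactics applied so far; the
    label is the tactic that actually follows it in the human-written proof.
    """
    pairs = []
    for tactics in proofs:
        for i in range(min_history, len(tactics)):
            pairs.append((" ".join(tactics[:i]), tactics[i]))
    return pairs


if __name__ == "__main__":
    # Toy sequences; in practice these come from the extracted proof sequence files.
    proofs = [
        ["rpt", "strip_tac", "rw", "fs", "metis_tac"],
        ["Induct", "rw", "fs", "rw", "metis_tac"],
    ]
    with open("proof_state_pairs.csv", "w", newline="") as f:  # placeholder file name
        writer = csv.writer(f)
        writer.writerow(["proof_state", "next_tactic"])
        writer.writerows(make_training_pairs(proofs))
```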
In our experimental phase, we explored various transformer-based language models, including BERT [9], RoBERTa [10], and T5 [11]. These models are well-known for their ability to
capture intricate patterns in sequential data, making them ideal for our task of proof recommendation. Each model was trained on the restructured datasets, which were split into a 90-10
ratio for training and testing purposes (block of Figure 1 highlighted in orange color). This
split ensures that the models are exposed to a broad range of examples during training while
still having a significant portion of data reserved for testing.
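A condensed sketch of such a fine-tuning run is shown below, using the Hugging Face `datasets` and `transformers` libraries with RoBERTa as the backbone and next-tactic prediction cast as sequence classification. The hyperparameters, file names, and label handling are illustrative placeholders, not the exact configuration behind the reported results.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Proof-state / next-tactic pairs produced in the previous step (placeholder file name).
full = load_dataset("csv", data_files="proof_state_pairs.csv")["train"]

# Build the label vocabulary over all distinct next tactics, then do the 90-10 split.
tactic_labels = sorted(set(full["next_tactic"]))
label2id = {t: i for i, t in enumerate(tactic_labels)}
data = full.train_test_split(test_size=0.1, seed=42)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")


def encode(batch):
    enc = tokenizer(batch["proof_state"], truncation=True,
                    padding="max_length", max_length=128)
    enc["labels"] = [label2id[t] for t in batch["next_tactic"]]
    return enc


encoded = data.map(encode, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(tactic_labels))

args = TrainingArguments(output_dir="hol4prs-roberta",  # placeholder output directory
                         num_train_epochs=3,
                         per_device_train_batch_size=16,
                         learning_rate=2e-5)

Trainer(model=model, args=args,
        train_dataset=encoded["train"],
        eval_dataset=encoded["test"]).train()
```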
To optimize the performance of each model, we employed a grid search of hyperparameters, a
method that systematically evaluates a combination of parameters to identify the configuration
that yields the best results (block of Figure 1 highlighted in green color). This process was
critical in fine-tuning the models, ensuring they were not only accurate but also efficient in
their predictions. Given the multitude of possible tactics at each proof state, we decided to
generate multiple recommendations for the next proof step, rather than a single prediction.
This approach acknowledges the inherent complexity and variability of theorem proving, where
several tactics could be appropriate in advancing a proof.
The accuracy of our model’s recommendations was assessed using the n-correctness rate, an
evaluation metric that measures the probability that a correct tactic from the testing dataset is
included among the top-n recommended tactics. This metric is particularly useful in scenarios
where multiple recommendations are provided, as it quantifies the likelihood of the correct
tactic being present within a certain range of suggestions. Through extensive testing, we found
that RoBERTa demonstrated superior performance across most cases for n = 7. As a
result, we deploy it into our proof recommendation tool (block of Figure 1 highlighted in grey
color).
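The n-correctness rate can be computed directly from the classifier's logits. The sketch below assumes a fine-tuned sequence-classification model and a `label2id` mapping like the ones in the training sketch above; it illustrates the metric and is not the authors' evaluation script.

```python
import torch


def n_correctness(model, tokenizer, states, gold_tactics, label2id, n=7):
    """Fraction of proof states whose true next tactic appears among the top-n predictions."""
    model.eval()
    hits = 0
    for state, gold in zip(states, gold_tactics):
        inputs = tokenizer(state, return_tensors="pt",
                           truncation=True, max_length=128)
        inputs = inputs.to(model.device)
        with torch.no_grad():
            logits = model(**inputs).logits[0]
        top_n = torch.topk(logits, k=n).indices.tolist()
        hits += int(label2id[gold] in top_n)
    return hits / len(states)


# Example usage, assuming `model`, `tokenizer`, `label2id`, and the held-out split
# from the training sketch above:
# test = encoded["test"]
# rate = n_correctness(model, tokenizer, test["proof_state"], test["next_tactic"], label2id, n=7)
# print(f"7-correctness rate: {rate:.1%}")
```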
With the aim of efficiently predicting the next tactic (k = 1, where k represents the number
of future tactics to predict) for the majority of theory datasets, we also challenged our tool by
attempting to predict two future tactics. Table 2 provides further details of the experimental
results for RoBERTa in predicting one future tactic (k = 1) and two future tactics (k = 2).
After examining the performance results across different datasets, it seems that the variations
arise from the diversity and patterns unique to each dataset, as well as the range of tactics
employed. Specifically, Datasets 1-5 exhibit a uniformity in their proof structures, originating
from one application project written by a single person, thus making the proofs more homogeneous and consistent in style. However, Dataset 6, which comes from HOL4 libraries containing a diverse range of theorems on different mathematical concepts, presents proofs with heterogeneous patterns, making them challenging to predict. Additionally, we observed a decrease in performance when attempting to predict two future tactics, which may be attributed to the expanded space of possibilities and the resulting increase in uncertainty.

Table 2: Correctness Rates of RoBERTa Considering Top-7 Recommendations

| | Dataset 1 | Dataset 2 | Dataset 3 | Dataset 4 | Dataset 5 | Dataset 6 | Dataset 7 |
|---|---|---|---|---|---|---|---|
| **k = 1** | 73.6% | 79.5% | 94.4% | **97.8%** | 97.6% | 64.3% | 89.8% |
| **k = 2** | 54.3% | 58.6% | 88.1% | **96.8%** | 92.2% | 29.4% | 80.3% |
In the recent past, several studies have integrated artificial intelligence into theorem prover
tools (e.g., PVS and Coq), particularly for predicting future proof steps. For instance, in the
study reported in [12], accuracies ranging from 50% to 70% were achieved for the top 3-5
recommendations, while the work in [13] achieved 87% accuracy for the top 3, and the one
in [14] reported 54.3% accuracy for the top 10. In comparison, our tool surpasses results
reported in these studies, achieving accuracies of 77.3%, 89.88%, and 93.7% for the top 3, 7,
and 10 next tactic recommendations, respectively, measured on the combined Dataset 7. The
current tool version is available to try online [15]. In the future, we plan to expand it to include
more HOL4 theories and enhance its interfacing with HOL4. In addition, we are investigating
its potential to automatically generate complete proofs, considering the need for optimization
given the exponential growth in combination possibilities with the proof sequence length. To
address this, we plan to use some advanced tree search algorithms.
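To make the combinatorial concern concrete, the toy sketch below enumerates candidate continuations by expanding the top-k recommendations at each step; the number of candidates grows as k^depth, which is why naive enumeration quickly becomes infeasible and more advanced tree search is needed. The `recommend` callable is a stand-in for the fine-tuned model, not part of the published tool.

```python
def candidate_proofs(recommend, initial_state, k=3, depth=2):
    """Enumerate tactic continuations by expanding the top-k recommendations at each step.

    `recommend(state, k)` is a stand-in for the fine-tuned model: it should return
    the k most likely next tactics for a proof state (a list of tactic names).
    The number of candidates grows as k ** depth, which is the blow-up that
    motivates more advanced tree search strategies.
    """
    frontier = [list(initial_state)]
    for _ in range(depth):
        frontier = [seq + [tac] for seq in frontier for tac in recommend(seq, k)]
    return frontier


if __name__ == "__main__":
    # Dummy recommender used only for illustration.
    def fake_recommend(state, k):
        return ["rw", "fs", "metis_tac"][:k]

    for proof in candidate_proofs(fake_recommend, ["rpt", "strip_tac", "Induct"], k=2, depth=2):
        print(" ".join(proof))
```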
## References
[[1] HOL4. https://hol-theorem-prover.org/, 2024.](https://hol-theorem-prover.org/)
[2] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez,
Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Neural Information Processing
_Systems, page 6000–6010. Curran Associates Inc., 2017._
[3] Dataset 1: Formal Dynamic Dependability Analysis using HOL Theorem Proving. https://hvg.ece.concordia.ca/projects/prob-it/pr9.php, 2024.
[4] Dataset 2: Formal Probabilistic Analysis of Wireless Sensor Networks. https://hvg.ece.concordia.ca/projects/prob-it/wsn.php, 2024.
[5] Dataset 3: Formal Probabilistic Risk Assessment using Theorem Proving. https://hvg.ece.concordia.ca/projects/prob-it/pr10/index.php, 2024.
[6] Dataset 4: Formal Analysis of Information Flow Using Min-Entropy and Belief Min-Entropy. https://hvg.ece.concordia.ca/projects/prob-it/pr5.php, 2024.
[7] Dataset 5: Formalization of Normal Random Variables. https://hvg.ece.concordia.ca/projects/prob-it/pr7.html, 2024.
[8] Dataset 6: Proof searching in HOL4 with Genetic Algorithm. https://dl.acm.org/doi/10.1145/3341105.3373917, 2024.
[9] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep
bidirectional transformers for language understanding. In North American Chapter of the Association for Computational Linguistics, pages 4171–4186. Association for Computational Linguistics,
2019.
[10] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike
Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized BERT pretraining
approach. CoRR, abs/1907.11692, 2019.
[11] Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena,
Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified
text-to-text transformer. Journal of Machine Learning Research, volume 21, pages 1–67, 2019.
[12] Eric Yeh, Briland Hitaj, Sam Owre, Maena Quemener, and Natarajan Shankar. CoProver: A
Recommender System for Proof Construction. In Intelligent Computer Mathematics, volume 14101
of LNAI, pages 237–251. Springer, 2023.
[13] Lasse Blaauwbroek, Josef Urban, and Herman Geuvers. Tactic Learning and Proving for the Coq
Proof Assistant. arXiv preprint arXiv:2003.09140, 2020.
[14] Xiaokun Luan, Xiyue Zhang, and Meng Sun. Using LSTM to Predict Tactics in Coq. In Software
_Engineering and Knowledge Engineering, pages 132–137, 2021._
[15] HOL4PRS: Proof Recommendation System for the HOL4 Theorem Prover. https://github.com/DkNour/HOL4PRS-Proof-Recommendation-System-for-the-HOL4-Theorem-Prover.git.
-----
| [
"Nour, Dekhil",
"Adnan, Rashid",
"Sofiene, Tahar"
] | 2024-09-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
ProofDB: A prototype natural language Coq search engine | N/A | null | [
"Thomas, Reichel",
"Talia, Ringer"
] | 2024-09-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
|
Proving Theorems Using Deep Learning | N/A | null | #### Magnus Midtbø Kristiansen
### Proving Theorems Using Deep Learning
#### Graph Convolutional Networks, Transformers, and Deep Reinforcement Learning for Automatic Formal Reasoning
Master’s thesis
Master’s thesis in Computer Science
Supervisor: Björn Gambäck
June 2021
**NTNU** Norwegian University of Science and Technology Faculty of Information Technology and Electrical Engineering Department of Computer Science
##### Abstract
Interactive Theorem Proving (ITP) systems are symbolic-based software systems used
to write and verify formal mathematical proofs. These systems often contain large
datasets of human-written formalized proofs, structured as step-by-step applications of
high-level proof strategies called tactics. The space of tactics is well-defined and contains
a combination of core tactics and tactic arguments. Tactic arguments generally refer
to either already proven theorems or terms and hypotheses in the local proof search
context. Recently, several research groups have focused on automating ITP systems by
training machine learning models to predict what next tactic to apply in any given proof
state. This has resulted in whole frameworks developed for more accessible research into
machine learning models automating underlying ITP systems. Such ITP automation
allows the model to perform high-level formal reasoning similar to human mathematical
reasoning.
This Master’s Thesis develops a new theorem proving agent for end-to-end ITP
theorem proving. The agent transforms the theorem proving task into three separate
multi-class classification problems, allowing a more natural machine learning interpretation of the theorem proving task than previous approaches.
In addition to models imitating human proofs via supervised learning, deep reinforcement learning – implemented using deep Q-learning – is deployed. This has two
advantages: (1) it deals with data scarcity, and (2) it allows the agent to develop its own
proof style, effectively circumventing noisy human-written proofs. Furthermore, two
novel deep learning embedding techniques are tested: Graph Convolutional Networks
(GCNs) and the Bidirectional Encoder Representations from Transformers (BERT)
architecture. More general non-convolutional Graph Neural Networks have recently been
shown to work well on formal logic and been used successfully for ITP theorem proving.
BERT has shown state-of-the-art results on several Natural Language Processing tasks.
In addition, Transformer-based models have recently shown promising results on related
mathematical reasoning tasks.
When trained to imitate human proofs, GCN and BERT-based agents significantly outperform corresponding random guessing agents, proving 37.3% and 16.3%
more theorems, respectively. Deep reinforcement learning improves results further.
These agents are capable of proving 7.6% more theorems than corresponding supervised
agents and 47.7% more theorems than corresponding random guessing agents. This is
the first time GCN, Transformers, and deep reinforcement learning have been used for
tactic-based formal theorem proving.
-----
##### Sammendrag
Interaktive teorembevissystemer (ITP-systemer) er symbolbaserte programvaresystemer
som brukes til å skrive og verifisere formelle matematiske bevis. Disse systemene
inneholder ofte store datasett med menneskeskrevne formaliserte bevis, strukturert som
trinnvise applikasjoner av bevis-strategier kalt taktikker. Rommet av mulige taktikker
er veldefinert og inneholder en kombinasjon av kjernetaktikker og taktikkargumenter.
Taktikkargumenter refererer som regel enten til allerede beviste teoremer eller termer og
hypoteser i den lokale beviskonteksten. Nylig har flere forskningsgrupper fokusert på
automatisering av ITP-systemer ved å trene maskinlæringsmodeller til å forutsi hvilken
neste taktikk som skal brukes i bevissøket. Dette har resultert i rammeverk utviklet for
mer tilgjengelig forskning på maskinlæringsmodeller som automatiserer underliggende
ITP-systemer. En slik automatisering av ITP-systemer lar modeller utføre formell
resonnering på et høyt abstraksjonsnivå, lignende menneskelig matematisk resonnering.
Denne masteroppgaven utvikler en ny bevisagent for ende-til-ende bevissøk i
ITP-systemer. Agenten transformerer bevisproblemet til tre separate klassifiseringsproblemer, noe som gir en mer naturlig maskinlæringstolkning av bevisproblemet enn
tidligere tilnærminger.
I tillegg til modeller som imiterer mennesskrevne bevis via veiledet læring, anvendes også dyp forsterkningslæring – implementert ved hjelp av dyp Q-læring. Dette
har to fordeler: (1) det håndterer knappheten av annotert data, og (2) agenten
har muligheten til å utvikle sin egen bevisstrategi, noe som lar den omgå støy i
menneskeskrevne bevis. Videre testes to nye dyplæringsteknikker: Konvolusjonelle
nevrale nettverk for grafstrukturer (GCNs) og Bidirectional Encoder Representations
from Transformers (BERT) arkitekturen. Mer generelle ikke-konvolusjonelle nevrale
nettverk for grafstrukturer er nylig vist å fungere godt på formell logikk og blitt brukt
til å bevise teoremer i ITP-systemer. BERT har vist overlegne resultater på flere
problemer innen språkbehandlings-feltet (Natural Language Processing). I tillegg har
andre Transformer-modeller nylig vist lovende resultater på relaterte problemer innen
formell logikk.
GCN- og BERT-baserte agenter beviser henholdsvis 37,3 % og 16,3 % flere teoremer enn tilsvarende agenter basert på tilfeldig gjetting, når de blir trent til å imitere
menneskeskrevne bevis. Dyp forsterkningslæring forbedrer resultatene ytterligere. Disse
agentene er i stand til å bevise 7,6 % flere teoremer enn tilsvarende veiledete agenter
og 47,7 % flere teoremer enn tilsvarende agenter basert på tilfeldig gjetting. Dette er
første gang GCN, Transformers og dyp forsterkningslæring er brukt til å automatisere
taktisk-baserte ITP-systemer.
##### Preface
This Master’s Thesis is written as part of the degree Master of Science in Computer
Science at the Norwegian University of Science and Technology, under the supervision of
Björn Gambäck. A special thanks goes out to Björn Gambäck for his valuable guidance
and feedback throughout the entire duration of the project. A thanks also goes out to
Kaiyu Yang at Princeton University for his helpful responses on the CoqGym discussion
board. Furthermore, a thanks goes out to Felix Wu and Yixin Chen for allowing their
figures to be depicted in the Thesis. The HPC group at NTNU also deserves a big
thanks for allowing the use of the Idun cluster to conduct experiments.
I would also like to thank friends and family for their great support along the
way. Finally, I would like to give a special thanks to Elise for having the patience to
listen to my somewhat long-winded monologues about topics in this Thesis and for her
continued support.
Magnus Midtbø Kristiansen
Trondheim, June 11, 2021
# Contents
**1** **Introduction** **1**
1.1 Background and Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Goals and Research Questions . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Research Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.5 Thesis Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
**2** **Background Theory** **9**
2.1 Traditional Automated Theorem Proving . . . . . . . . . . . . . . . . . . 9
2.1.1 Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.1.2 Analytic Tableaux . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.1.3 Superposition Calculus . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2 Interactive Theorem Proving . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.1 Tactic-based Interaction . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2.2 Tactic Arguments and Proof Context . . . . . . . . . . . . . . . . . 16
2.2.3 Internal Automatic Engines . . . . . . . . . . . . . . . . . . . . . . 17
2.2.4 The Logic of Computable Functions Principle . . . . . . . . . . . . 17
2.2.5 Coq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.3 Machine Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.3.1 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.3.2 Classification Problems . . . . . . . . . . . . . . . . . . . . . . . . 22
2.3.3 Mini-Batch Training . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.3.4 Loss Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.3.5 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.3.6 Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.3.7 Optimizers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.3.8 Regularization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.3.9 Activation Functions . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.3.10 Convolutional Neural Networks . . . . . . . . . . . . . . . . . . . . 28
2.3.11 Graph Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . 29
2.3.12 Transformers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.3.13 Deep Q-Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.3.14 Other Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
**3** **Related Work** **39**
3.1 Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.2 Auto-ITP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.2.1 TacticToe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.2.2 HOList . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.2.3 GamePad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.2.4 CoqGym . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.3 Hammers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.3.1 The 3-step Process . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.3.2 Premise Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.3.3 HOL(y)Hammer and CoqHammer . . . . . . . . . . . . . . . . . . 52
3.4 Other Applications of Machine Learning in Formal Reasoning and Mathematics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.4.1 Transformer Models Applied to Mathematics . . . . . . . . . . . . 53
3.4.2 Synthesizing Theorems . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.4.3 Tactic Application in Latent Space . . . . . . . . . . . . . . . . . . 55
3.4.4 Evolutionary Algorithms . . . . . . . . . . . . . . . . . . . . . . . . 55
3.4.5 Internal Guidance . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.4.6 Autoformalization . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
**4** **Motivation, Agent Design and Architectures** **57**
4.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.1.1 Choosing an Auto-ITP Framework . . . . . . . . . . . . . . . . . . 57
4.1.2 Usefulness of Proxy Metrics . . . . . . . . . . . . . . . . . . . . . . 58
4.1.3 Machine Learning Interpretation of ITP Systems . . . . . . . . . . 58
4.1.4 Choosing Machine Learning Techniques . . . . . . . . . . . . . . . 59
4.2 Proxy Metric: Tactic Groups . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.3 Agent Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.4 Designing Architectures . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.4.1 GAST – Graph Convolutional Network-based Architecture . . . . 65
4.4.2 BERTac – BERT-based Architecture . . . . . . . . . . . . . . . . . 67
4.4.3 _QTac – Deep Q-learning Architecture_ . . . . . . . . . . . . . . . . 68
**5** **Experiments and Results** **71**
5.1 Experimental Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5.1.1 Experiment 1 – Tactic Groups . . . . . . . . . . . . . . . . . . . . 71
5.1.2 Experiment 2 – Supervised Learning . . . . . . . . . . . . . . . . . 75
5.1.3 Experiment 3 – Reinforcement Learning . . . . . . . . . . . . . . . 78
5.2 Experimental Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
5.2.1 Deep Learning Frameworks . . . . . . . . . . . . . . . . . . . . . . 80
5.2.2 CoqGym Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
5.2.3 Computing Resources . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.3 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.3.1 Results from Experiment 1 . . . . . . . . . . . . . . . . . . . . . . 82
5.3.2 Results from Experiment 2 . . . . . . . . . . . . . . . . . . . . . . 85
5.3.3 Results from Experiment 3 . . . . . . . . . . . . . . . . . . . . . . 88
**6** **Evaluation and Discussion** **91**
6.1 Evaluation and Discussion of Research Questions . . . . . . . . . . . . . . 91
6.2 Evaluation of Goal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
6.3 Further Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
6.3.1 _Cτ Predictions_ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
6.3.2 _QTac Training_ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
6.3.3 Proof Style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
6.3.4 The CoqGym Dataset . . . . . . . . . . . . . . . . . . . . . . . . . 98
6.3.5 CoqGym’s Synthetic Data . . . . . . . . . . . . . . . . . . . . . . . 99
6.3.6 Tailoring Transformer Models to Formal Expressions . . . . . . . . 99
6.3.7 Comparison to Hammers . . . . . . . . . . . . . . . . . . . . . . . 100
6.3.8 Proof Tree Traversal . . . . . . . . . . . . . . . . . . . . . . . . . . 100
**7** **Conclusion and Future Work** **101**
7.1 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
7.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
**Bibliography** **107**
# List of Figures
2.1 The high-level architecture of a generic ATP system. . . . . . . . . . . . . 10
2.2 Example of a resolution tree. . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3 Example of a tableau. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4 Example of a hypothetical proof tree. . . . . . . . . . . . . . . . . . . . . 15
2.5 Example of a Feed Forward Network. . . . . . . . . . . . . . . . . . . . . . 26
2.6 Illustration of the GCN message passing algorithm. . . . . . . . . . . . . . 30
2.7 Illustration of the SGC message passing algorithm. . . . . . . . . . . . . . 31
2.8 Illustration of the DGCNN end-to-end graph classification architecture. . 32
3.1 Overview of the Auto-ITP setting. . . . . . . . . . . . . . . . . . . . . . . 41
3.2 The high-level architecture of a Hammer . . . . . . . . . . . . . . . . . . . 51
4.1 Frequency of core tactics in the proof step datasets. . . . . . . . . . . . . 62
4.2 Frequency of global and local argument occurrence in the proof step datasets. 63
4.3 The end-to-end theorem proving agent. . . . . . . . . . . . . . . . . . . . 64
4.4 The overall end-to-end theorem proving architecture. . . . . . . . . . . . . 65
4.5 The GAST architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.6 The BERTac architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.7 The QTac architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.1 Percentage of proof steps that have n number of hypotheses in the local
context. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5.2 Validation accuracy plots for FFN baseline models from experiment 1. . . 83
5.3 Validation accuracy plots for GAST models from experiment 1. . . . . . . 84
5.4 Validation accuracy plots for BERTac models from experiment 1. . . . . . 85
5.5 Validation accuracy plots for C models from experiment 2. . . . . . . . . 86
6.1 Confusion matrices for Cτ models . . . . . . . . . . . . . . . . . . . . . . . 96
6.2 Frequency of core tactic use for different proof agents. . . . . . . . . . . . 98
# List of Tables
2.1 Overview of Coq tactics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.1 Overview of existing Auto-ITP frameworks. . . . . . . . . . . . . . . . . . 42
3.2 State-of-the-art and main results in TacticToe. . . . . . . . . . . . . . . . 44
3.3 State-of-the-art and main results in HOList. . . . . . . . . . . . . . . . . . 45
3.4 State-of-the-art and main results in GamePad. . . . . . . . . . . . . . . . 46
3.5 The CoqGym dataset. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.6 State-of-the-art and main results in CoqGym. . . . . . . . . . . . . . . . . 49
3.7 State-of-the-art and main results for HOL(y)Hammer and CoqHammer. . 53
4.1 GitHub repository statistics for HOL Light, HOL4, and Coq. . . . . . . . 58
4.2 Proof steps in CoqGym for both human-written and synthetic proofs. . . 61
4.3 The tactic grouping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.1 Regularization levels defined for experiments. . . . . . . . . . . . . . . . . 72
5.2 GAST configurations for phase 2 of experiment 1b. . . . . . . . . . . . . . 74
5.3 BERTac configurations for experiment 1c. . . . . . . . . . . . . . . . . . . 74
5.4 Configurations for experiment 2. . . . . . . . . . . . . . . . . . . . . . . . 75
5.5 Dataset sizes for the supervised C models. . . . . . . . . . . . . . . . . . . 76
5.6 Main end-to-end theorem proving results. . . . . . . . . . . . . . . . . . . 82
5.7 Main results from experiment 1. . . . . . . . . . . . . . . . . . . . . . . . 83
5.8 Validation accuracy for GAST and BERTac C models. . . . . . . . . . . . 86
5.9 Performance of GAST and BERTac agents on end-to-end theorem proving. 87
5.10 Results for different depth limits and beam widths. . . . . . . . . . . . . . 88
5.11 Performance of QTac agents on end-to-end theorem proving. . . . . . . . 88
5.12 Theorem proving results for different Coq projects. . . . . . . . . . . . . . 89
## Chapter 1
# Introduction
Automated Theorem Proving (ATP) is a field of study concerned with automatically
proving mathematical theorems using a computer. Traditionally, a set of theorems
and a conjecture (the theorem to be proven) are expressed formally, based on some
logical framework, with the task of proving the conjecture focused around symbolic
manipulation on the set of logically expressed statements. Even with state-of-the-art
inference techniques deployed, this essentially turns into a combinatorial search problem,
where one quickly encounters an exponentially increasing space of combinations (Hoder
and Voronkov, 2011). In addition, validity in First-Order Logic, the most common logic
used in ATP systems, is known to be a semi-decidable problem (Church, 1936; Turing,
1936). Because ATP systems often seek to prove validity, this means there is no effective
way to disprove a conjecture that is in fact false.
Because of these issues, the field of Interactive Theorem Proving (ITP)[1] has
emerged as an alternative way of doing computer-based theorem proving (Harrison
et al., 2014). This branch of computer theorem proving is not concerned with a fully
automated process, but instead tries to facilitate an enhanced theorem proving process
for human users. This is made possible by letting the system deal with the tedious
details of the proof, while the user guides the proof search by inputting high-level proof
strategies (most commonly taking the form of so-called tactics). As with ATP systems,
ITP systems are based on formal logic and designed to guarantee correctness of the
produced proofs (Harrison et al., 2014). Such systems have become the de facto tools in
efforts to formalize mathematics (Hales, 2006; Gonthier et al., 2013; Leroy, 2016).
Several machine learning researchers have recently used ITP systems as a way
to tackle the domain of mathematics and formal reasoning (Bansal et al., 2019a; Huang
et al., 2019; Yang and Deng, 2019; Gauthier et al., 2017). The main idea is to train
machine learning models to predict the next tactic to apply and drive the ITP’s proof
procedure forward automatically. Because this approach automates an underlying ITP
system, it will be referred to as Auto-ITP in this Thesis, a term coined by Yang and
Deng (2019)[2]. In a way, Auto-ITP can be seen as a form of ATP. However, it is very
1ITP systems are often also called Proof Assistants. However, this Thesis will only use the term ITP
systems.
2Although the term “Auto-ITP” is not a widely adopted one, it does provide a useful shorthand term to
refer to machine learning-driven automation of ITP systems.
different from the classical low-level inference techniques traditional ATP systems rely
on. Instead, Auto-ITP emulates how a human user would interact with ITP systems. It
can therefore be considered a more human-inspired approach to ATP than traditional
ATP systems, where the model proves theorems on abstraction levels closer to human
mathematical reasoning (Yang and Deng, 2019). The Auto-ITP process is similar
to an active learning setup (Settles, 2009), where the Auto-ITP model serves as an
(automatic) oracle for the underlying ITP system. The ITP system queries subgoals
to the Auto-ITP model and the model responds with a tactic corresponding to the subgoal.
In the last couple of years, several frameworks for doing Auto-ITP have emerged
(Bansal et al., 2019a; Huang et al., 2019; Yang and Deng, 2019; Wu et al., 2020). These
frameworks allow machine learning researchers interested in the domain of mathematics
to leverage powerful underlying ITP systems and large datasets of human-written proofs
in the quest to progress machine learning applied to formal reasoning and the progress of
artificial intelligence more broadly (Urban and Vyskočil, 2013; Szegedy, 2020).
This Master’s Thesis will cover state-of-the-art within each existing Auto-ITP
framework, in addition to other work related to Auto-ITP. This includes other
applications of machine learning in mathematics and formal reasoning as well as another
popular approach for automating underlying ITP systems – so-called Hammer systems
(Blanchette et al., 2016). Then, the Thesis narrows its focus to a single Auto-ITP
framework, in which new experiments will be conducted. The framework chosen is
the CoqGym framework (Yang and Deng, 2019), with the overall goal to explore
machine learning techniques not yet tested in CoqGym. This Master’s Thesis tests
new deep learning methods – based on Graph Convolutional Networks (GCNs) and
the Bidirectional Encoder Representations from Transformers (BERT) model (Devlin
et al., 2018) – as embedding techniques for Coq expressions. Models are trained both
to imitate human proofs using supervised learning, and with the deep reinforcement
learning method deep Q-learning (Mnih et al., 2015). In addition, a new theorem proving
agent is developed, interpreting the ITP theorem proving process as three multi-class
classification problems. Lastly, a proxy metric is designed to allow for less expensive
prototyping of supervised learning models in CoqGym.
##### 1.1 Background and Motivation
ATP is in itself motivated by several things. The most obvious might be the goal of
proving new mathematical theorems. Some theorems lend themselves naturally to the
formal way in which traditional ATP systems work, and in those cases ATP systems
have performed reasonably well. An example of this is Robbins’ problem, which asks if
all Robbins algebras are Boolean algebras. This was proven by the EQP system in 1997
(McCune, 1997), essentially by brute-force calculations on combinations of First-Order
expressions.
Another important use of both ATP and ITP systems is formal (and guaranteed correct) verification of logically expressed statements. This has been particularly
useful in software and hardware verification, where behavior can be naturally expressed
through formal logic. State-of-the-art systems have been used to verify the correctness of
processors (Harrison, 2000), operating systems (Klein et al., 2014; Chen et al., 2015)
and compilers (Leroy, 2009). Intel, for example, hired ITP pioneer John Harrison to
verify floating point arithmetic on their processors. He developed the ITP system HOL
Light (Harrison, 1996), capable of producing guaranteed correct verification of processor
operations (Harrison, 2000).
Computer systems also provide a natural tool for formalizing mathematics. It
has been a long-standing dream of computer scientists and mathematicians to one day
formalize all of mathematics and science in a machine-understandable way – effectively
reducing the problem of reasoning to “number crunching”, which can be executed by a
machine. This was made explicit in the QED manifesto (Boyer, 1994). While the QED vision has yet to come into full fruition, plenty of efforts in both ATP and ITP research aim
to formalize mathematical proofs (Gonthier, 2008; Gonthier et al., 2013; Hales et al., 2017).
Auto-ITP, on the other hand, has its roots in the machine learning community
and is consequently motivated by mathematics and formal reasoning being a challenging
and relatively unexplored domain for machine learning models (Urban and Vyskočil,
2013; Kaliszyk et al., 2017; Szegedy, 2020). As pointed out by Urban and Vyskočil (2013)
and Szegedy (2020), the theorem proving domain can potentially be used to develop new
and novel machine learning methods. Because of the bridge between formal theorem
proving and software systems, Auto-ITP can also be motivated as a steppingstone for
developing models capable of software synthesis (Szegedy, 2020). An essential aspect
of Auto-ITP research has been large datasets of proof data resulting from already
completed large-scale formalization projects (Kaliszyk et al., 2017). This access to data
opens up the door for data-hungry machine learning methods. However, frameworks
and benchmarks have been lacking from the domain up until recently. Bansal et al.
(2019a) argue that widely adopted benchmarks in other domains, such as ImageNet
(Deng et al., 2009) for object detection and LibriSpeech (Panayotov et al., 2015) for
speech recognition, have been instrumental for the success of machine learning in these
domains. This has led to efforts by several research groups to provide both frameworks
and benchmarks in the domain of theorem proving. These frameworks have mainly been
developed with the goal of tactic application as a machine learning problem in mind,
which has evolved into Auto-ITP.
It is also worth noting that the rapid progress machine learning (in particular
deep learning) has experienced in the last few years motivates Auto-ITP from a
traditional formal theorem proving perspective as well. In particular, large-scale
formalization work is massively labor-intensive[3] and more automation of such tasks
is therefore desirable. This has been a significant motivation for developing Hammer
systems, which are capable of proving large chunks of formalization projects automatically
(Blanchette et al., 2016).
Experiments in this Thesis are primarily motivated by asking the following question:
“What successes in machine learning (in the field of formal reasoning and at large)
can be drawn on to explore new techniques for the Auto-ITP task?”. In particular,
this Thesis explores Graph Neural Networks and Transformer networks further, as
they have recently shown promising results as embedding techniques for mathematical
expressions (Paliwal et al., 2020; Rabe et al., 2020; Lample and Charton, 2020; Polu
and Sutskever, 2020). Allowing models to train on not only human-written proofs but
also machine-generated proofs has shown to improve results in the theorem proving
context (Bansal et al., 2019b). Data scarcity is also a concern when training theorem
proving models (Wang and Deng, 2020). This motivates experiments involving both
supervised learning models and reinforcement learning models. Reinforcement learning
allows the agent to learn from exploring tactic applications rather than from a curriculum.
Since there is overhead associated with achieving complete end-to-end theorem
proving, the prototyping of new machine learning models can benefit from simplified
proxy metrics indicating the prototype’s success. However, because of the infancy of
Auto-ITP frameworks, there is a lack of such methodology in the field. This motivates
the focus on developing a proxy metric allowing easier prototyping of models. The
development of a new theorem proving agent for ITP is motivated by a similar idea.
Namely, it is helpful to have familiar machine learning interpretations of the theorem
proving task when studying Auto-ITP.
##### 1.2 Goals and Research Questions
Based on the backdrop described above, a single overarching Goal is formulated for this
Master’s Thesis:
**Goal Further progress machine learning applied to formal reasoning by testing new**
_machine learning techniques on the Auto-ITP task._
This Goal is fairly broad and could potentially encompass a vast amount of experiments.
It will therefore be necessary to restrict the scope to a manageable set of ideas. The first
is to restrict experiments to a single Auto-ITP framework. The chosen framework for
this Thesis is the CoqGym framework (Yang and Deng, 2019), based on the popular ITP
system Coq (Barras et al., 1997). Furthermore, four main ideas are pursued: (1) proxy
metrics for Auto-ITP, allowing easy prototyping of models, (2) an end-to-end theorem
proving agent easy to interpret from a machine learning perspective, (3) supervised
3An example of this is the formal proof of the Kepler conjecture (Hales et al., 2017), which took
20-person years to complete.
learning using novel embedding techniques and (4) reinforcement learning. Section 4.1
(after related work has been presented in Chapter 3) covers a more detailed explanation
for why CoqGym and these ideas were chosen. The following Research Questions make
the ideas more explicit:
**Research Question 1 How to design an easy and fast Auto-ITP proxy metric that also**
_indicates end-to-end theorem proving performance?_
Auto-ITP is a domain where training and testing can be reasonably complicated and
slow. Research Question 1 addresses the need for easier and faster prototyping of
Auto-ITP models.
**Research Question 2 How can a conceptually simple end-to-end theorem proving agent**
_be designed for tactic-based ITP theorem proving?_
Although tactic prediction is a fairly straightforward machine learning problem, it gets
more complicated when tactic arguments are introduced. It is not clear exactly how
to interpret ITP theorem proving as a machine learning problem. Thus, this Thesis
argues that designing an agent where the theorem proving task is broken down into
familiar machine learning problems can be helpful. Research Question 2 targets this topic.
**Research Question 3 What novel embedding techniques can help models perform well in**
_CoqGym?_
Research Question 3 is based on the idea that the semantic information contained in
logical expressions is likely essential for Auto-ITP models. Strong embedding networks
have been hugely successful in Natural Language Processing (NLP), with Transformers
like BERT (Devlin et al., 2018) becoming household names in this field. Similar attention
networks have shown promising results for embedding mathematical expressions (Rabe
et al., 2020; Lample and Charton, 2020; Polu and Sutskever, 2020), although not yet
been used specifically for Auto-ITP. Embedding using Graph Neural Networks and
TreeLSTM has already shown promising results in the Auto-ITP domain (Paliwal et al.,
2020; Yang and Deng, 2019), and is therefore also an interesting approach to pursue
further in CoqGym.
**Research Question 4 How does reinforcement learning compare to supervised learning**
_in CoqGym?_
It might be the case that human-written proofs are noisy and hard to learn from. The
proofs are gathered from different formalization projects, where different teams of humans
have been involved. Bansal et al. (2019b) experienced significant improvements with
their Auto-ITP model by letting the model learn from its own proofs and not only
human-written proofs. In other words, a machine learning model might be better off
learning its own “style” of proving theorems rather than trying to imitate a human. Data
scarcity has also been pointed out as a bottleneck for formal theorem proving models
(Wang and Deng, 2020). Research Question 4 addresses these points in the context of
CoqGym.
##### 1.3 Research Method
A literature review of machine learning applied to formal reasoning is the starting point
for this Master’s Thesis research. A review of the mechanics of Coq and tactic-based ITP
theorem proving in general is needed to answer Research Questions 1 and 2. Dataset
statistics from CoqGym will also be used to answer Research Question 1 and 2. Research
Questions 3 and 4 are answered using an experimental approach. Each model and
theorem proving agent is compared against each other and to related results from the
literature. To better understand the agents’ performance, a random guessing baseline
agent will be developed and tested.
##### 1.4 Contributions
The main contributions of this Master’s Thesis are the following:
1. An Auto-ITP proxy metric based on predicting tactic groups.
2. An end-to-end theorem proving agent based on solving three multi-class classification
_problems._
3. Experiments using supervised Graph Neural Network – more specifically, Graph
_Convolutional Network – models in CoqGym. These models are trained on both_
_human-written and synthetic proof steps._
4. Experiments using supervised BERT models in CoqGym. These models are trained
_on both human-written and synthetic proof steps. Models with and without pre-_
_trained weights are trained and compared._
5. Experiments with end-to-end theorem proving agents, combining different Graph
_Convolutional Network models and BERT models._
6. Experiments with end-to-end theorem proving agents, trained using a combination
_of deep reinforcement learning and supervised learning._
7. Experiments with end-to-end theorem proving agents, with different depth limits
_and beam widths._
All agents prove significantly more theorems than corresponding random guessing agents
– 16.30% more for the BERT-based agent, 37.28% more for the Graph Convolutional
Network-based agent, and 47.76% more for the deep reinforcement learning agent. However, no agent is capable of outperforming state-of-the-art (First et al., 2020), with the
best-performing agent proving 10.74% of the CoqGym test set – 2.16 percentage points
lower than state-of-the-art. Note that a direct comparison to state-of-the-art is difficult
as the theorem proving agent in this Thesis operates (by design) differently than the agent used
by First et al. (2020).
##### 1.5 Thesis Structure
**Chapter 2 covers all relevant background theory necessary to follow the rest of the**
Thesis. This includes an introduction of traditional ATP and ITP, as well as relevant
machine learning theory.
**Chapter 3 covers relevant work. This includes work in Auto-ITP, Hammer systems and**
other applications of machine learning in the domain of mathematics and formal reasoning.
**Chapter 4 explains the motivation for the experiments pursued in this Thesis.**
This chapter also explains how the proxy metric and the end-to-end theorem proving
agent are designed, as well as the overall deep learning architectures used in the
experiments.
**Chapter 5 covers experiments and results.** This includes concrete model configurations, the experimental setup, and a detailed account of the results.
**Chapter 6 evaluates and discusses how the Master’s Thesis has answered the**
Goal and Research Questions, in addition to further discussing findings from the
experiments and relevant topics in this Thesis.
Finally, Chapter 7 summarizes the Thesis’ contributions and addresses possible future
avenues of work.
## Chapter 2
# Background Theory
This chapter contains the necessary background theory needed to follow the experiments
in this Thesis. In addition, some concepts are included to understand better the
related work covered in Chapter 3. First, Section 2.1 covers a general introduction to
traditional Automated Theorem Proving (ATP) systems. The focus is mainly on the
inference techniques used by such systems. Although this section is not strictly needed
to understand Auto-ITP, it is included because it provides more context to Interactive
Theorem Proving (ITP) systems and, therefore, also Auto-ITP. In addition, Hammer
systems, which are part of the body of related work in Chapter 3, rely on ATP systems,
and many of the ATP inference techniques are used internally by ITP systems. Section 2.2
introduces the main ideas of traditional ITP systems. Relevant details on Coq (Chlipala,
2013) (CoqGym’s underlying ITP system) are included in this section. The focus
then shifts to machine learning, with Section 2.3 covering relevant machine learning theory.
It is assumed that the reader is already familiar with First-Order Logic, calculus, linear algebra, and statistics. ITP systems are typically based on Higher-Order Logic,
but this topic is not strictly necessary for this Master’s Thesis and is not included in the
background theory. Väänänen (2020) provides an excellent introduction to Higher-Order
Logic for the interested reader.
##### 2.1 Traditional Automated Theorem Proving
An Automated Theorem Proving (ATP) system is a computer program operating within
some logical framework (Bibel, 2007). This section focuses on the most common type of
ATP system: systems based on First-Order Logic with equality (Schulz, 2002; Kovács
and Voronkov, 2013). First-Order Logic’s popularity stems from the fact that a vast
amount of mathematics can be expressed formally through First-Order Logic (Ewald,
2019), while it at the same time offers fast inference techniques (some of which will be
discussed here).
The general setup for an ATP system consists of a Knowledge Base of already
known theorems, plus a new theorem to be proven. The new theorem is referred to
as the conjecture. The system tries to infer the conjecture based on the Knowledge
Base by applying one or more inference techniques. Figure 2.1 illustrates the high-level
architecture of a generic ATP system. The inference techniques included in the figure
will be discussed shortly. In the figure, a concrete example of a conjecture that an ATP
system would be able to solve reasonably quickly is included – proving the Inverse of
Group Product from the Group Axioms.
Figure 2.1: The high-level architecture of a generic ATP system.
It is important to note that “to prove something” is fairly loosely defined. More precise
definitions of what is meant by a proof in a formal setting are (1) satisfiability: there
exists some assignment of the variables (also known as a model) such that this assignment
reduces the conjecture to logical True, given the Knowledge Base (Russell and Norvig,
2010, p. 250), and (2) validity: all assignments of variables reduce the conjecture to
logical True, given the Knowledge Base (Russell and Norvig, 2010, p. 249).
The key component in an ATP system is the inference engine. ATP systems
have large libraries of already proven facts, and it is the system’s job to infer new facts
automatically. Inference is, therefore, at the heart of all ATP systems. Most inference
techniques use a so-called proof by refutation (Russell and Norvig, 2010, p. 250). Proof
by refutation works by first negating the conjecture and showing that the negation of the
conjecture is unsatisfiable (i.e., not satisfiable), which proves that the original conjecture is
valid. This is true because a conjecture is valid if, and only if, its negation is unsatisfiable
(Russell and Norvig, 2010, p. 250). The inference techniques involve a preprocessing
step where the set of expressions is formulated using Conjunctive Normal Form (CNF)
(Russell and Norvig, 2010, p. 345). In short, the set represents a conjunction (i.e., the logical AND, ∧) of clauses, where each clause is a disjunction (i.e., the logical OR, ∨) of terms.
Another important aspect of ATP systems, is the premise selection step (Blanchette
et al., 2016) (depicted in Figure 2.1). When performing inference, the system usually
experiences an explosion in the combinatorial search space. Therefore, it is desirable
only to include the background theory necessary to prove a conjecture and nothing more
(Hoder and Voronkov, 2011). This task is known as premise selection.
Next, instead of going into details about specific ATP systems, the most common inference techniques are introduced in a general setting. This covers the theory
needed to understand ATP systems while focusing on the important conceptual aspects
and not on implementation details. Note that, while this section is restricted to
the three most heavily adopted techniques, modern ATP systems usually deploy a
combination of several different inference techniques. Other popular techniques, not
covered here, include generalized Modus Ponens, Model Elimination/Model Checking,
and the Davis–Putnam–Logemann–Loveland algorithm (Davis et al., 1962).
**2.1.1 Resolution**
The resolution technique (Russell and Norvig, 2010, p. 347 - 356) used in ATP systems
combines variable substitution and the resolution inference rule. The idea is to resolve
two disjunctions by unifying the variables in such a way that straightforward application
of standard resolution is possible.
In order to illustrate the resolution inference technique, consider the following
Knowledge Base (in CNF):
$$\text{Knowledge Base} = \{\neg P(x) \vee Q(x),\; \neg Q(y) \vee S(y),\; P(z)\},$$
with the goal to prove S(A). The first step is to negate the conjecture: ¬S(A). Then, inference by resolution will yield the resolution tree depicted in Figure 2.2, and the
conjecture is proven by refutation. This happens when resolution yields an empty set
of clauses. Resolution is known to be refutation-complete, in that a set of clauses is
unsatisfiable if and only if there exists a derivation of the empty clause using resolution
alone (Russell and Norvig, 2010, p. 345).
Figure 2.2: Example of a resolution tree.
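To make proof by refutation concrete, the following is a minimal Python sketch (purely illustrative, not part of any ATP system discussed in this Thesis) that applies propositional resolution to a ground instance of the example above, with clauses represented as frozensets of signed literals.

```python
from itertools import combinations

def resolve(c1, c2):
    """Return all resolvents of two clauses (sets of literals like 'P(A)' or '~P(A)')."""
    resolvents = []
    for lit in c1:
        neg = lit[1:] if lit.startswith("~") else "~" + lit
        if neg in c2:
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {neg})))
    return resolvents

def refute(clauses):
    """Saturate the clause set with resolution; True if the empty clause is derived."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:              # empty clause => original conjecture is valid
                    return True
                new.add(r)
        if new <= clauses:             # saturation without refutation
            return False
        clauses |= new

# Ground instance of the example Knowledge Base plus the negated conjecture ~S(A).
kb = [frozenset({"~P(A)", "Q(A)"}),   # ~P(x) v Q(x)
      frozenset({"~Q(A)", "S(A)"}),   # ~Q(y) v S(y)
      frozenset({"P(A)"}),            # P(z)
      frozenset({"~S(A)"})]           # negated conjecture
print(refute(kb))  # True: S(A) is proven by refutation
```

Deriving the empty clause here corresponds to the closing step of a resolution tree such as the one in Figure 2.2.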
**2.1.2 Analytic Tableaux**
Analytic Tableaux (Smullyan, 1968, p. 52-63) is a family of inference techniques, where
the main idea is to break an expression into sub-expressions by a given set of rules for
the logical connectives and quantifiers. This yields a tree structure (the tableau), where
the leaves consist of atomic expressions that cannot be broken down further. A branch
in the tree is considered closed when it inhabits a term and its negation. The original
expression is unsatisfiable when all branches are closed, meaning the conjecture is proven
by refutation.
In general, the tableau is expanded based on the following rules for logical connectives and quantifiers:
- ∧: If a branch of the tree contains A ∧ B, add A and B to the leaves of that branch.
- ∨: If a branch of the tree contains A ∨ B, split each leaf of the branch into two new leaves; one containing A and one containing B.
- ¬: If a branch of the tree contains ¬(A ∨ B) or ¬(A ∧ B), use De Morgan’s law to “push” the negation inwards.
- ⇒ or ⇔: If a branch of the tree contains A ⇒ B or A ⇔ B, use the implication identity or the equivalence identity.
- ∃: Get rid of ∃ by existential instantiation.
- ∀: Get rid of ∀ by universal instantiation.
Many variations are possible. For example, one can delay branching as long as possible
in order to avoid duplicate work, and do universal instantiation by the so-called most
_general unifier_ (Russell and Norvig, 2010, p. 327). Intuitively, this means that we want the
instantiation variable to correspond to as many already instantiated variables as possible,
so that a direct comparison of terms can be done.
Consider the example
$$\text{Knowledge Base} = \{\exists x.\ \neg P(x) \wedge \neg Q(x),\; \forall y.\ Q(y) \vee S(y)\}$$
where the goal is to prove S(A). Using Analytic Tableaux, the tableau illustrated in
Figure 2.3 is obtained. Each branch of the tree contains a contradiction (marked by
dotted line), and S(A) is therefore valid.
Figure 2.3: Example of a tableau. The dotted lines indicate conflicting terms (i.e., closed
branches). All branches are closed in this tableau, meaning the original
conjecture is valid by refutation.
**2.1.3 Superposition Calculus**
Most traditional ATP systems revolve around First-Order Logic with equality. The
axiomatic way for dealing with equality (i.e. introducing new rules to the Knowledge
Base that dictate how equality is handled) is usually inefficient, so designers have instead
turned to another concept: Superposition Calculus (Rusinowitch, 1991; Schulz, 2002).
Superposition Calculus involves the introduction of a new inference rule dictating how
the system deals with equality:
$$\frac{C_1 \vee s = t \qquad C_2 \vee P(s')}{\sigma(C_1 \vee C_2 \vee P(t))}, \quad \text{where } \sigma = \text{most general unifier of } (s, s')$$
This style of inference under equality creates a rewrite system where equations are
subject to some ordering ≻ of terms. Ordering is a way to ensure termination, because it
dictates in which “direction” to apply the rewrite associated with an equality (e.g., if we
have x = y, should we set occurrences of x equal to y or set occurrences of y equal to x?).
A common problem in superposition calculus is the failure to achieve a _confluent rewrite system_. The rewrite system is considered confluent only when it
deterministically outputs the expanded Knowledge Base, without considering the order
of rewrite application (e.g., if we have x = y and a = b, should be rewrite using x = y
first or rewrite using a = b first?). The way to get around this problem is by applying
so-called Knuth-Bendix completion (Knuth and Bendix, 1970). The general approach is
the following:
- Identify “critical pairs” (pairs of equations where confluence fails) by leveraging
unification.
- Add critical pairs to Knowledge Base with the correct ordering. And repeat.
Superposition calculus is quite powerful. In the example from Figure 2.1, repeated
mechanical application of the superposition calculus technique on the Knowledge Base
results in:
$$\begin{aligned}
\text{Knowledge Base} = \{\, & (x \cdot y) \cdot z = x \cdot (y \cdot z), \\
& 1 \cdot x = x, \\
& x^{-1} \cdot x = 1, \\
& (x \cdot y)^{-1} = x^{-1} \cdot y^{-1}, \\
& (x^{-1})^{-1} = x, \\
& 1^{-1} = 1, \\
& x \cdot x^{-1} = 1, \\
& (x \cdot x^{-1}) \cdot y = y, \\
& x \cdot 1 = x, \\
& (x^{-1} \cdot x) \cdot y = y \,\}
\end{aligned}$$
Notice that the conjecture from Figure 2.1 is now part of the Knowledge Base. That is,
Inverse of Group Product simply drops out from the Group Axioms by a straightforward
application of superposition calculus.
##### 2.2 Interactive Theorem Proving
Traditional Interactive Theorem Proving (ITP) systems are not designed for theorem
proving automation. Instead, they are used in cooperation with a human user. ITP
research is a large subject, and diverse approaches and systems exist within this line of
research. On one extreme, there are systems that act as safeguards and only formally
verify proofs made by a human. On the other extreme, systems can be “almost automated”
and only subject to a small degree of human guidance during proof search. Harrison
et al. (2014) provides an excellent introduction to the field and its history and Nawaz
et al. (2019) a comprehensive comparison of different ITP systems.
Here, the scope is restricted to the concepts needed to understand Auto-ITP.
These concepts are introduced in a general setting rather than via any concrete system.
Some simplifications and generalizations are made as implementation details of specific
systems are not central. However, some extra details are included on the ITP system
Coq – the central ITP systems concerned in this Thesis. Other ITP systems used for
Auto-ITP include HOL Light (Harrison, 1996) and HOL4 (Slind and Norrish, 2008).
**2.2.1 Tactic-based Interaction**
Most ITP systems implement so-called tactics (Harrison et al., 2014; Nawaz et al., 2019).
This allows the user to interact with the system in a “backward searching” manner,
meaning that the user starts with the goal (i.e., the conjecture) and breaks down this
goal into simpler and simpler subgoals by applying tactics. This process continues until
only trivially true (e.g., 1 = 1) subgoals are left. Although tactic-based interaction is
not the only option[1], it is the approach used in Auto-ITP. It is also normally the way
human users interact with ITP systems (Harrison et al., 2014).
When interacting with an ITP system using tactics, a search tree is built (Bansal et al.,
2019a). In this tree, the root is the original top-level goal, internal nodes consist of
subgoals, and edges are associated with an applied tactic. Leaves are reached when
a goal is trivially true. A node is closed when all its subgoals are proved. Figure 2.4
Figure 2.4: Example of a hypothetical proof tree. Green indicates closed nodes, yellow open nodes, and red nodes that are unprovable with the available tactics.
illustrates a hypothetical proof tree. In this example, three tactics are available: Tactic1,
_Tactic2 and Tactic3. The user has first applied Tactic1, resulting in the left-hand side_
subtree. The user has then decided to pursue this subtree further before encountering
_Subgoal2.1. This subgoal is unsolvable with the three available tactics. The user has then_
1One can, for example, interact in a “forward” fashion, using rules like Conjugate and Modus Ponens
to build from a set of background knowledge (Harrison et al., 2014). This is more similar to how
traditional ATP systems work.
backtracked to the root and applied Tactic2, resulting in the right-hand side subtree.
This branch is solved by applying Tactic3 to the only subgoal, Subgoal3, in the node.
In this way, the top-level goal is solved too. A more seasoned user might have applied
_Tactic2 and Tactic3 right away, avoiding the left-hand side subtree altogether._
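To connect this search procedure to familiar code, the sketch below runs a depth-limited backward search over the hypothetical tactic space of Figure 2.4. The `TRANSITIONS` table and the tactic names are invented stand-ins for what an ITP system would compute when a tactic is applied.

```python
# Hypothetical transition table: (goal, tactic) -> list of resulting subgoals.
# An empty list means the tactic closes the goal; a missing entry means it fails.
TRANSITIONS = {
    ("Goal", "Tactic1"): ["Subgoal1", "Subgoal2"],
    ("Goal", "Tactic2"): ["Subgoal3"],
    ("Subgoal3", "Tactic3"): [],           # closes the goal
    ("Subgoal1", "Tactic1"): ["Subgoal1.1", "Subgoal1.2"],
    ("Subgoal1.1", "Tactic1"): [],
    ("Subgoal1.2", "Tactic2"): [],
}
TACTICS = ["Tactic1", "Tactic2", "Tactic3"]

def prove(goal, depth):
    """Return a proof tree (tactic, subproofs) for `goal`, or None within `depth` steps."""
    if depth == 0:
        return None
    for tactic in TACTICS:
        subgoals = TRANSITIONS.get((goal, tactic))
        if subgoals is None:
            continue                        # tactic does not apply to this goal
        subproofs = [prove(sg, depth - 1) for sg in subgoals]
        if all(p is not None for p in subproofs):
            return (tactic, subproofs)      # every subgoal was closed
    return None

print(prove("Goal", depth=3))  # ('Tactic2', [('Tactic3', [])])
```

Note that, like the hypothetical user, the search wastes effort on the left-hand subtree before backtracking; a real Auto-ITP agent tries to rank tactics so that such detours are avoided.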
Tactic-based ITP systems usually support their own custom tactic language (Nawaz
et al., 2019). Under the hood of such ITP systems, there is a parser that interprets the
specifics of a tactic script (i.e., a sequence of tactics) and reduces the steps to code that
is interpretable by the underlying programming language. Different systems support
different tactics. In general though, they always represent high-level strategies in the
proof procedure. A tactic can for example be an induction tactic or a rewrite tactic,
corresponding to a proof by induction or a rewriting of the goal (usually using some
already proven theorem), respectively.
**2.2.2 Tactic Arguments and Proof Context**
A user can sometimes pass arguments along with tactics (Barras et al., 1997; Harrison,
1996). Tactic arguments can generally be interpreted as part of one of two proof contexts
(Yang and Deng, 2019; Bansal et al., 2019a). This Thesis will refer to these as the global
context and the local context and interprets them in the following way (based on the
interpretations in Auto-ITP frameworks (Yang and Deng, 2019; Bansal et al., 2019a;
Gauthier et al., 2020; Huang et al., 2019)).
The global context defines all background knowledge available at search time.
This is analogous to an ATP system’s Knowledge Base. That is, the global context
contains already proven theorems, that can be used as tactic arguments in the proof
procedure. For example, when applying a rewrite tactic, it can be useful to pass an
equality theorem with the tactic, in order for the ITP system to know how to rewrite the
current subgoal. This is essentially the same as the premise selection step done by ATP
systems.
The local context includes the current goal/subgoal being evaluated, in addition
to local hypotheses. Local hypotheses can occur as part of a tactic application. For
example, when proving that n + 0 = n using an induction tactic, the problem can be split
in two: the base case 0 + 0 = 0 and the inductive case “if for any n = k we have k + 0 = k,
then k + 1 + 0 = k + 1”. In this example, the ITP system will generate a new node containing two subgoals, each within an associated local context: (1) a subgoal for the base case
0 + 0 = 0 where the local context does not include any hypotheses, and (2) a subgoal for
the inductive case k+1+0 = k+1 where the local context includes the hypothesis k+0 = k.
Some generalizations are made in this Thesis, in order to make it easier to talk
about the ITP proof procedure:
- Core tactic. The core tactic is a tactic without any arguments. Some core tactics
can be applied right away, while others require arguments to work. A core tactic
will sometimes be denoted τ in this Thesis.
- Tactic argument. A tactic argument is either a theorem from the global context
or a reference in the local context. The local context reference refers either to a
term in the current subgoal or a local hypothesis. Most ITP systems support other
types of arguments as well, but they are less typical and will not be considered in
this Thesis. Theorems from the global context will sometimes be denoted t and
hypotheses from the local context by h.
- Tactic application. A tactic application consists of a core tactic and arguments
from the local and global context. Arguments can be empty, depending on the core
tactic. It will sometimes be denoted T in this Thesis.
Terms in the subgoal can generally be moved to the list of local hypotheses using specific
tactics, resulting in a transformed but equivalent local context. This means that the
local context argument can in practice be considered as only local hypotheses without
restricting the space of tactic applications, if such transformations are applied.
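As an illustration of the terminology above, the following Python dataclasses are one hypothetical way to represent a local context and a tactic application (core tactic τ plus optional global and local arguments); this is only a sketch, not the representation used by CoqGym.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LocalContext:
    goal: str                                             # the current (sub)goal expression
    hypotheses: List[str] = field(default_factory=list)   # local hypotheses h

@dataclass
class TacticApplication:
    core_tactic: str                  # the core tactic tau, e.g. "rewrite"
    global_arg: Optional[str] = None  # a theorem t from the global context
    local_arg: Optional[str] = None   # a reference into the local context

# Global context: already proven theorems, usable as tactic arguments.
global_context = {"add_comm": "forall a b, a + b = b + a"}

# Local context for the inductive case of proving n + 0 = n.
state = LocalContext(goal="k + 1 + 0 = k + 1", hypotheses=["IH: k + 0 = k"])

# A tactic application: rewrite the current goal using the induction hypothesis.
step = TacticApplication(core_tactic="rewrite", local_arg="IH")
print(step, "applied to goal:", state.goal)
```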
**2.2.3 Internal Automatic Engines**
ITP systems often have internal small-scale automatic inference engines built-in (Hurd,
2003). These are invoked by tactic calls and often work by translating the goal and the
tactic arguments into First-Order Logic before applying traditional First-Order inference
techniques. This allows the user to solve subgoals that are not trivially true with a single
tactic application. These engines are interesting in the context of Auto-ITP because they
can serve as initial baselines (Yang et al., 2016) and are available for the Auto-ITP model
to solve non-trivial subgoals.
**2.2.4 The Logic of Computable Functions Principle**
Modern ITP systems follow the so-called Logic of Computable Functions (LCF) principle
(Geuvers, 2009). Two properties are emphasized in LCF-based ITP systems (Harrison,
2009):
1. The system revolves around a dedicated core: the kernel. The kernel consists of a
set of inference rules, usually referred to as the primitives.
2. The system is implemented in a functional programming language. Type checking
in the programming language ensures that all new inference rules in the system
eventually reduce to the primitives.
The main idea is that if the kernel and the type checking mechanism in the programming
language are sound, then the LCF approach is able to guarantee correctness of all new
theorems entering the system. This is because all theorems that the system encounters
will have to pass through the kernel and thus the system will know if the new theorem is
consistent with the primitives.
To implement this, the general-purpose functional programming language _Meta-Language_ was developed (Gordon, 2000; Harrison et al., 2014). It works so that inferences and theorems are of the same type in the language: thm. This makes it possible
to implement thm as an abstract type of the primitives, which ensures validity by
_construction for new instances of type thm (Harrison et al., 2014)._
The kernel also allows LCF-based systems to adhere to the De Bruijn criterion
(Geuvers, 2009). This criterion states that any proof generated by the ITP system should
be checkable by an (ideally simple) proof checker. Furthermore, this checker should be
self-contained and independent of anything outside of itself. That is, the proof checker is
a trustable and sound “black box” which can guarantee the correctness of proofs. This is
precisely what the kernel in LCF-based systems provides.
**2.2.5 Coq**
The ITP system Coq (Barras et al., 1997) was released in 1989 and has been used in
several formalization projects. Some of the most well known include formal proofs of
the the Feit-Thompson theorem (Gonthier et al., 2013) and the Four Color theorem
(Gonthier, 2008). A C compiler has also been formally verified using Coq (Leroy, 2016),
in addition to the correctness of a Union-Find implementation (Conchon and Filliâtre,
2007). In other words, Coq is used for several problem types, not only pure mathematics.
Each formalization project contains several formal Coq proofs. The large number of
proofs is a primary reason why Coq is attractive as an underlying ITP system for
Auto-ITP frameworks (Yang and Deng, 2019; Huang et al., 2019). Coq also has its own
dedicated website[2], a standalone Integrated Development Environment, and is open
source[3].
Coq implements a tactic language called LTac (Delahaye, 2000), defined by a
context-free grammar (CFG). In short, grammar entries start with core tactics with
production rules defining how to expand the tactic to include arguments. Some core
tactics are terminal grammar entries, meaning that they do not take arguments. Others
have the option of taking arguments, while some require arguments to work. For a full
overview see the Coq reference manual (Barras et al., 1997), which is also available fully
up-to-date in the form of a website: https://coq.inria.fr/doc/.
To keep things simple, this Thesis follows the Coq tactic space defined by the
simplified CFG provided by Yang and Deng (2019) for the CoqGym framework (more
on the CoqGym framework in Section 3.2.4). As explained by Yang and Deng (2019),
statistics from Coq projects show that many tactics are rarely used in practice. The CFG
2https://coq.inria.fr/about-coq
3https://github.com/coq/coq
provided in CoqGym covers the most used Coq tactics. For details of the full grammar
see Yang and Deng (2019). An overview of core tactics and their use of arguments is
provided in Table 2.1. Furthermore, relevant tactics are summarized in the following
list. Note that the tactics are interpreted using the CFG from Yang and Deng (2019),
meaning their explanations are simplified compared to the full Coq documentation.
- apply. Matches the subgoal against global context arguments using First-Order
unification. E.g., if the user knows that x implies y, and x is a known theorem
in the global context, apply with x as an argument can be used to solve y. A
local context argument can be used to specify if only a sub-expression should be
matched, including local hypotheses.
- rewrite. Rewrites the subgoal using a global context argument. This only works
if the subgoal and argument are equivalent. A local context argument can be used
to specify if only a sub-expression should be rewritten, including local hypotheses.
- intro/intros. Puts universally quantified (i.e., “forall”) variable in the list of local
hypotheses. This can also be done for the left-hand side of implications. intros is
the same as intro applied continuously until no more variables can be converted
to a local hypothesis.
- unfold. If a term in the subgoal has a definition in the global context, unfold
replaces the term by its definition. A local context argument can be used to specify
if only a sub-expression should be unfolded, including local hypotheses.
- induction. Breaks up the subgoal into a base case and an inductive case by
introducing an inductive hypothesis to the local context. A local context argument
is used as an argument to identify the term to induct on. This can be a direct
reference to a term in the subgoal or a reference to a local context hypothesis.
- split. Splits a subgoal consisting of a conjunction. For instance, splitting the subgoal x ∧ y into two separate subgoals x and y.
- Main internal automatic engines: `trivial`, `auto`, `tauto`, `easy`, `intuition`, `ring`, `field`, `congruence`. These implement different strategies for automatic proofs of subgoals. E.g., `trivial` tries to apply a variety of other tactics under the hood and `auto` implements a full First-Order resolution procedure. Some are more specialized than others. E.g., `auto` is general-purpose while `ring` is specialized for subgoals consisting of addition and multiplication.
Table 2.1: Overview of Coq tactics. LC and GC refer to the local and global context, respectively. Only tactics from CoqGym’s context-free grammar are included.

| Core tactic | LC arg. | GC arg. |
|---|---|---|
| `intro` | No | No |
| `intros` | No | No |
| `apply` | Optional | Required |
| `auto` | No | Optional |
| `rewrite` | Optional | Required |
| `simpl` | Optional | No |
| `unfold` | Optional | Required |
| `destruct` | No | Required |
| `induction` | Required | No |
| `elim` | No | Required |
| `split` | No | No |
| `assumption` | No | No |
| `trivial` | No | No |
| `reflexivity` | No | No |
| `case` | No | Required |
| `clear` | Optional | No |
| `subst` | Optional | No |
| `generalize` | No | Required |
| `exists` | Required | No |
| `red` | Optional | No |
| `omega` | No | No |
| `discriminate` | Optional | No |
| `inversion` | Required | No |
| `constructor` | No | No |
| `congruence` | No | No |
| `left` | No | No |
| `right` | No | No |
| `ring` | No | No |
| `symmetry` | No | No |
| `f_equal` | No | No |
| `tauto` | No | No |
| `revert` | Required | No |
| `specialize` | Required | Required |
| `idtac` | No | No |
| `hnf` | Optional | No |
| `inversion_clear` | Required | No |
| `contradiction` | Optional | No |
| `injection` | Required | No |
| `exfalso` | No | No |
| `cbv` | No | No |
| `contradict` | Required | No |
| `lia` | No | No |
| `field` | No | No |
| `easy` | No | No |
| `cbn` | No | No |
| `exact` | No | Required |
| `intuition` | No | No |
| `eauto` | No | Optional |

##### 2.3 Machine Learning
Machine learning refers to a subfield of artificial intelligence concerned with methods
that allow some model to learn over time. Typically, the machine learning model
is a general-purpose function approximator, where a set of trainable parameters θ
describe the function. The task of learning is the task of updating the parameters so
that the model better approximates some ideal function. This Thesis is mainly interested in two subfields of machine learning: supervised learning and reinforcement learning.
Supervised learning (Russell and Norvig, 2010, p. 695) refers to machine learning methods that learn from a dataset of labeled examples. In this setting, one
has a dataset of feature vectors where each feature vector x corresponds to some
label y. A supervised learning model tries to learn the correlation between the
datasets X = (x1, x2, . . ., xn) and Y = (y1, y2, . . ., yn). When fed a new example, the
model makes a prediction $\hat{y}$. During training, $\hat{y}$ is compared to the true label y in
order to compute some error term. The error term is used to incrementally correct
the model’s parameters so it more accurately predicts the true y for the next feature vector.
In reinforcement learning (Russell and Norvig, 2010, p. 830), the model does
not learn from a set of labeled examples. Instead, the model interacts with some
_environment, which in turn leads to either positive or negative reinforcements. It is up to_
the model to learn what actions lead to positive reinforcements and what actions lead to
negative. Typically, this means that the model will deploy some form of trial-and-error
strategy, where a balance between exploration and exploitation is desirable (Russell and
Norvig, 2010, p. 839). The reinforcement learning method relevant in this Thesis is deep
_Q-learning (Mnih et al., 2015)._
Deep Q-learning is similar to supervised learning in that the model also takes
in a feature vector x and makes a prediction $\hat{y}$. $\hat{y}$ will in this case be an action the model can take in the current state $s_t$. However, there is no true label y to compare $\hat{y}$ against. Instead, a replay memory is used to train the model. In this setup, there is an expected reward $\hat{r}$ whenever $s_{t+1}$ (the state reached by applying action $\hat{y}$ in $s_t$) is not a terminal state and a true reward r when $s_{t+1}$ is a terminal state. The difference between r (or $\hat{r}$) and the model's expected reward for applying $\hat{y}$ in $s_t$ (known as the temporal difference (Russell and Norvig, 2010, p. 836)) is used to correct the model's perception of whether or not $\hat{y}$ was a good action in $s_t$. When a model trains using a temporal difference-based
replay memory, it is trained in a self-supervised manner. This essentially means the model
trains in a supervised way, where labels are generated by interacting with the environment.
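As a rough sketch of how such targets are computed from a replay memory (using the standard deep Q-learning target with a discount factor γ; the exact formulation used later in this Thesis may differ in detail):

```python
import random
import numpy as np

def td_target(reward, next_q_values, done, gamma=0.99):
    """Target for the chosen action: the true reward at terminal states,
    otherwise the reward plus the discounted best expected future reward."""
    if done:
        return reward
    return reward + gamma * float(np.max(next_q_values))

def fake_q(state):
    """Stand-in for the Q-network's predicted expected rewards per action."""
    return np.array([0.2, 0.5])

# A replay memory stores transitions (s_t, action, reward, s_{t+1}, done).
replay_memory = [
    (np.array([1.0, 0.0]), 0, 0.0, np.array([0.0, 1.0]), False),
    (np.array([0.0, 1.0]), 1, 1.0, np.array([0.0, 0.0]), True),
]

# Sample a mini-batch and compute the self-supervised training targets.
batch = random.sample(replay_memory, k=2)
targets = [td_target(r, fake_q(s_next), done) for (_, _, r, s_next, done) in batch]
print(targets)
```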
The following subsections will cover relevant theory in machine learning. Because deep Q-learning utilizes self-learning, the concepts are explained primarily from
a supervised learning point-of-view. Topics specific to deep Q-learning are explained in
Subsection 2.3.13.
**2.3.1 Features**
Machine learning models take feature vectors as input. Each entry in this vector is called
a feature. Simply put, a feature corresponds to some attribute from the domain at hand.
Models need x to be in a format that is computer understandable. This means that the
attributes have to be converted to some real-valued representation – an encoding of the
attributes – in which x is a real-valued vector that can be used to train the model.
To understand what constitutes features, an Auto-ITP example will be used.
Say the goal g is to prove the expression
_a + 0 = a._
For simplicity, say also that there is a finite set defining all syntactical elements used
to make expressions: S = {a, b, 0, =, −, +}. Then, a simple way to obtain a feature
representation of g is to one-hot encode the expression:
**_x = (1, 0, 1, 1, 0, 1)_**
This is a representation where 1 on index i indicates that element i from S is present in
the expression, and vice versa for 0.
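A minimal Python sketch of this encoding (splitting the expression on whitespace is an assumption made purely for illustration):

```python
# Fixed symbol set S and the goal expression a + 0 = a.
S = ["a", "b", "0", "=", "-", "+"]
goal = "a + 0 = a"

# 1 at index i if symbol S[i] occurs in the expression, 0 otherwise.
tokens = set(goal.split())
x = [1 if symbol in tokens else 0 for symbol in S]
print(x)  # [1, 0, 1, 1, 0, 1]
```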
A one-hot encoding, like the one described here, is easy to implement and is a
common starting point for dealing with non-numerical categorical attributes. Categorical
attributes are attributes that are (as might be clear from the name) discrete and belong
to some category. However, x does not capture semantic information well. In addition,
it suffers from the curse of dimensionality, meaning that feature vectors can become
extensively large when the set of all symbols is large. To deal with this, the encoding can
be mapped to an embedding before a predictive model uses the embedding to make a
prediction. A good embedding will capture semantic information and deal with the curse
of dimensionality.
Other attributes might not need a one-hot encoding as they have a natural
real-valued representation (e.g., the age or height of a person). Alternatively, it could
be the case that no obvious real-valued representation exists for the attribute, and a
one-hot encoding is also not a reasonable approach. In these cases, a more sophisticated
encoding is needed. For example, Transformer models map sequence elements to a set of
_tokens to obtain a feature representation (explained further in Subsection 2.3.12)._
**2.3.2 Classification Problems**
A classification problem is a type of machine learning problem where each example
belongs to a single class. In binary classification problems, there are two classes. An
example of this is to predict whether a cat is in an image or not. The two classes would,
in this case, be “no cat” and “cat”.
All models in this Thesis are classification models. More specifically, they are
_multi-class classification models. Multi-class means that there are more than two classes_
to consider. The output of such a model is a probability distribution over the classes,
where the class with the highest probability is the model’s prediction.
Note that this Thesis does not consider multi-label multi-class models. That is,
there is never more than one correct class for each example.
**The Softmax Function**
The Softmax function (Russell and Norvig, 2010, p. 848) is a function mapping any
real-valued vector to a probability distribution of the same dimension as the input vector.
This probability distribution is mapped so that the largest value in the input vector has
the highest probability, the second largest has the second-highest probability, and so
on. Softmax is typically used as a final function in a multi-class classification model to
achieve a probability distribution over the classes. It is given by
$$\mathrm{softmax}(y_i) = \frac{e^{y_i}}{\sum_{j=1}^{K} e^{y_j}},$$

where K is the number of classes and $y_i$ denotes the i-th class.
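A small NumPy sketch of the Softmax function (the subtraction of the maximum is a standard numerical-stability trick and does not change the result):

```python
import numpy as np

def softmax(y):
    """Map a real-valued vector to a probability distribution of the same dimension."""
    z = np.exp(y - np.max(y))      # subtract the max for numerical stability
    return z / np.sum(z)

scores = np.array([2.0, 1.0, 0.1])
print(softmax(scores))             # approx. [0.659 0.242 0.099], sums to 1
```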
**2.3.3 Mini-Batch Training**
An important concept in training machine learning models is the idea of mini-batch
_training (Ruder, 2016). Mini-batch training refers to a method of training where models_
predict a mini-batch of the training data at a time. Training only occurs between each
mini-batch, where the error term is computed over the whole mini-batch rather than on
individual examples.
The size of the mini-batch can vary and impacts both performance and the
time it takes to train a model (Smith et al., 2018). If available hardware resources can
support it, increasing the mini-batch size will typically decrease the training time. This
is because more (or all) of the examples in the mini-batch can be computed in parallel.
When setting the mini-batch size to one, training simply follows an example-by-example
routine. When setting the mini-batch size equal to the size of the entire training set,
a batch-style training routine is followed (Ruder, 2016). If an example-by-example,
mini-batch-style or batch-style training leads to the best performance usually varies from
problem to problem.
**2.3.4 Loss Function**
For machine learning models to be able to learn, they need some feedback that indicates
whether or not they are doing well. This is usually dealt with by using a loss
function (Russell and Norvig, 2010, p. 710). The loss function takes the prediction $\hat{y}$ and the label y as input and calculates the loss based on how “wrong” $\hat{y}$ was compared to y.
Importantly, this loss is a real value which can be used to update the model’s parameters.
When deploying mini-batch training, the loss is usually the mean of the loss for each
example in the mini-batch.
Many variations of loss functions are possible. Two loss functions are used in
this Thesis: Cross-entropy loss and Huber loss.
**Cross-Entropy Loss**
Cross-entropy is defined for two probability distributions. To explain cross-entropy,
assume the binary case with distributions P = (p, 1 − p) and Q = (q, 1 − q). The cross-entropy of Q relative to P is defined as

$$\text{cross-entropy}_{\mathrm{binary}} = -(p \log(q) + (1 - p) \log(1 - q))$$
This is easily extended to the general case in which the probability distributions are over n elements:

$$\text{cross-entropy} = -\sum_{i} p_i \log(q_i)$$
Cross-entropy is used as a loss function by simply applying this formula directly, where
the loss is calculated as the cross-entropy of $\hat{y}$ relative to y.
Cross-entropy loss is zero whenever $\hat{y} = y$. It grows relatively slowly for correct
predictions (i.e., prediction > 0.5 when the label is one) and relatively fast for predictions
that are wrong and where the model is fairly confident in its prediction (i.e., prediction
_<< 0.5 when the label is one). The idea is that the penalty is much stricter when the_
model is radically wrong and milder, but still not zero, when the model is correct but
not confident.
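A NumPy sketch of cross-entropy used as a loss, with a one-hot label y and a predicted distribution ŷ (the small constant inside the logarithm, an addition for illustration, avoids log(0)):

```python
import numpy as np

def cross_entropy(y, y_hat, eps=1e-12):
    """Cross-entropy of the prediction y_hat relative to the label distribution y."""
    return -np.sum(y * np.log(y_hat + eps))

y     = np.array([0.0, 1.0, 0.0])        # one-hot label: class 2 of 3
y_hat = np.array([0.1, 0.7, 0.2])        # confident and correct -> small loss
print(cross_entropy(y, y_hat))           # approx. 0.357
print(cross_entropy(y, np.array([0.7, 0.1, 0.2])))  # confidently wrong -> approx. 2.303
```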
**Huber Loss**
Huber loss, named after its inventor Peter J. Huber (1964), is a combination of mean-squared-error (MSE) and mean-absolute-error (MAE). MSE and MAE are defined as follows (for the binary case, where the mini-batch size is set to one):

$$\mathrm{MSE} = \frac{1}{2}(y - \hat{y})^2, \qquad \mathrm{MAE} = \frac{1}{2}|y - \hat{y}|$$

The idea of Huber loss is to use MSE whenever $|y - \hat{y}|$ is below a certain threshold,
and MAE otherwise. In this way, the loss puts less emphasis on large losses and is,
therefore, less sensitive to outliers. This is useful if the training process is unstable, which
is typically the case when training a deep reinforcement learning model like a deep Q-learning
model (Mnih et al., 2015).
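A NumPy sketch that follows the simple MSE/MAE definitions above, switching branches at a threshold δ (note that other formulations of Huber loss scale the absolute-error branch differently):

```python
import numpy as np

def huber_loss(y, y_hat, delta=1.0):
    """Use the squared error for small residuals and the absolute error for large ones."""
    residual = np.abs(y - y_hat)
    return np.where(residual <= delta,
                    0.5 * (y - y_hat) ** 2,   # MSE-style branch
                    0.5 * residual)           # MAE-style branch (less sensitive to outliers)

y, y_hat = np.array([1.0, 1.0]), np.array([1.2, 4.0])
print(huber_loss(y, y_hat))   # small residual -> squared error; large residual -> absolute error
```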
**2.3.5 Evaluation**
The loss function is the metric that guides the learning process for machine learning
models. However, the loss function alone is rarely the metric used to evaluate models.
For classification models, it is more typical to care about the accuracy of the model. The
accuracy is the percentage of correctly labeled examples from a given set of examples.
It is common to split the dataset into two parts; one part being the training
set and the other being the test set (also called the holdout set). It is important to
“hold out” the test set from the model so that a fair, final evaluation can be performed.
Furthermore, it is common to use some of the examples in the train set as a validation set.
The model is tested on the validation set at regular intervals during training. This is useful because it indicates how well the model is doing during training, and hyperparameters can be modified dynamically.
**2.3.6 Neural Networks**
Neural networks (Russell and Norvig, 2010, p. 727 - 736) are machine learning methods
inspired by the low-level physical structure of the brain. They are part of a family of
methods called deep learning. Neural networks are built as networks of nodes and edges,
where information passes along edges and through nodes. Each edge in the network has
an associated weight. A node takes in the sum of the weighted values of all its input
edges and computes an output value based on an activation function (discussed further in
Subsection 2.3.9). Weights are, therefore, the trainable parameters of the neural network
models. A neural network architecture is made up of layers, with an input layer, an output
layer, and layers in between called hidden layers. In addition to weights, neural networks
have something called bias. Bias is similar to constants in a linear function, whereby
the function is shifted by the constant value. Bias is typically trained, just like the weights.
To train Neural Networks a method known as backpropagation is used (Russell
and Norvig, 2010, p. 733). The general idea is to propagate the error through the
network, from the output layer to the input layer. The gradients of the weights in each
layer are efficiently computed based on the gradients in the prior layer, using the chain
rule. In this way, gradients are calculated while avoiding redundant calculations and
_gradient descent_ can be applied layer-by-layer. This means that weights can be adjusted
to minimize the loss by following the slopes of the loss function. In gradient descent, a
step size is defined, dictating how much the weights should be adjusted in each iteration.
This step size is commonly called the learning rate and denoted α. Neural networks are
usually trained for several epochs. One epoch corresponds to one pass over the training
data. Furthermore, validation is typically performed between each epoch.
**Feed Forward Networks**
A common type of neural network used for prediction is the Feed Forward Network
(FFN) (Russell and Norvig, 2010, p. 729). The input layer takes in the feature vector,
values are passed along edges and nodes, being manipulated by the activation functions,
and the output layer outputs the prediction. Edges in an FFN point “forward” in the
network. Figure 2.5 illustrates an FFN taking in the goal expression from Subsection
2.3.1 and outputting a probability distribution over two tactics.
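A minimal sketch of such an FFN, assuming PyTorch; the layer sizes and the two-tactic output merely mirror the example in Figure 2.5 and are not the architectures used in the experiments:

```python
import torch
import torch.nn as nn

# FFN: 6-dimensional one-hot goal encoding -> hidden layer -> 2 tactic probabilities.
model = nn.Sequential(
    nn.Linear(6, 16),    # input layer -> hidden layer
    nn.ReLU(),           # non-linear activation
    nn.Linear(16, 2),    # hidden layer -> output layer (rewrite, induction)
    nn.Softmax(dim=-1),  # probability distribution over the two tactics
)

x = torch.tensor([[1.0, 0.0, 1.0, 1.0, 0.0, 1.0]])   # encoding of a + 0 = a
print(model(x))   # e.g. tensor([[0.52, 0.48]]) -- probabilities over the two tactics
```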
**2.3.7 Optimizers**
When training neural networks, an optimizer (Ruder, 2016) is used. Optimizers dictate
exactly how the propagation and gradient descent algorithm is implemented. Most
optimizers leverage so-called momentum. Momentum is a way to favor the previous
direction of the gradient descent by adding the previous update times a constant to the
current update function. This creates the effect of “momentum” being kept from time
step to time step in the search along the loss function’s slopes. Momentum speeds up
the gradient descent process and avoids oscillation.
Figure 2.5: Example of a Feed Forward Network.
**Adam**
The Adaptive Moment Estimation (Adam) (Kingma and Ba, 2017) optimizer has become
one of the most popular optimizers for deep learning (Ruder, 2016). It is (as the name
suggests) an adaptive optimizer. This means that the optimizer adapts the learning rate
based on some rules. In the case of Adam, the learning rate for each weight is adapted
based on momentum.
Specifically, Adam uses an exponentially decaying average of previous gradients
as part of the current update function. Furthermore, Adam uses two types of momentum:
first-order momentum $m_t$ and second-order momentum $v_t$. The second-order momentum uses the past gradients squared. Also, accounting for the fact that first-order and second-order
exponentially decaying momentum is biased towards 0, Kingma and Ba (2017) arrive at
the following bias-corrected momentum updates:

$$m_t = \frac{\beta_1 m_{t-1} + (1 - \beta_1) g_t}{1 - \beta_1^t}, \qquad v_t = \frac{\beta_2 v_{t-1} + (1 - \beta_2) g_t^2}{1 - \beta_2^t}$$

$\beta$ is the decay rate and $g_t$ is the gradient at time step t. The update rule for the model parameters is:

$$\theta_{t+1} = \theta_t - \alpha \frac{m_t}{\sqrt{v_t} + \epsilon}$$

$\epsilon$ is a small constant included to avoid division by zero.
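A NumPy sketch of a single Adam update. It follows the standard formulation from Kingma and Ba (2017), where the raw momentum estimates are kept and the bias correction is applied when the parameter update is computed; the default hyperparameter values are the commonly used ones and are assumptions here:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update. m and v are the running first/second-order momentum estimates."""
    m = beta1 * m + (1 - beta1) * grad          # first-order momentum
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-order momentum
    m_hat = m / (1 - beta1 ** t)                # bias-corrected estimates
    v_hat = v / (1 - beta2 ** t)
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta = np.array([0.5, -0.3])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 4):                            # three updates on a toy objective
    grad = 2 * theta                             # gradient of sum(theta^2)
    theta, m, v = adam_step(theta, grad, m, v, t)
print(theta)
```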
**2.3.8 Regularization**
Regularization is an essential concept in machine learning. Regularization methods are
used to combat overfitting of the training data. Overfitting is a phenomenon where
the machine learning model is able to perform well on the training data but does not
perform well on the validation or test data. The model essentially finds a correlation
between Xtrain and Ytrain that is too specific for the training set. This leads to the
model not being able to generalize to other examples outside of the training set[4].
Regularization techniques generally try to penalize more complex models, in favor of simpler ones. Many techniques exist. Here, two are explained as they are the ones
used by models in this Thesis: weight decay and dropout.
**Weight Decay**
Weight decay is a simple technique well known to combat overfitting (Krogh and Hertz,
1992). The basic idea is to decrease the complexity of the network by limiting the growth
of weights. This is achieved by penalizing large weights using a cost term in the loss
function[5]:
$$\widehat{\mathrm{loss}}(\theta) = \mathrm{loss}(\theta) + \frac{1}{2}\lambda \sum \theta^2$$
In this way, smaller weights will be favored over large weights, decreasing the complexity
of the network and combating overfitting.
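A small sketch of the regularized loss above, where `base_loss` stands in for whatever loss function is being used and λ is the weight-decay coefficient:

```python
import numpy as np

def loss_with_weight_decay(base_loss, weights, lam=1e-4):
    """Penalize large weights: base loss plus (1/2) * lambda * sum of squared weights."""
    return base_loss + 0.5 * lam * np.sum(weights ** 2)

weights = np.array([0.5, -2.0, 1.5])
print(loss_with_weight_decay(base_loss=0.42, weights=weights))
```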
**Dropout**
Dropout (Srivastava et al., 2014) is a technique where at each time step during training,
each neuron in the neural network will have its output (its “contribution”) set to zero.
This happens with probability p (the dropout rate). Dropout works because it limits
co-dependency between neurons, meaning that neurons become less dependent on the
output of other neurons. This is a way to create a more “robust” network, which is less
likely to overfit.
**2.3.9 Activation Functions**
Activation functions are often applied to the output of neurons to allow the network to
approximate non-linear functions. This is because activation functions are non-linear
mappings. Two activation functions are used in this Thesis: ReLU and Tanh.
4The opposite phenomenon is called underfitting, where the model is not even able to find good
predictions for the training set.
5This is very similar to so-called L2 regularization, and weight decay is therefore sometimes referred to
as L2 regularization for neural networks.
**ReLU**
The Rectified Linear Unit (ReLU) function is given by
$$\mathrm{ReLU}(x) = \max\{x, 0\}$$
It is a straightforward function and has become the most popular activation function
(Nwankpa et al., 2021). It is faster than most other activation functions and shows strong
generalization ability for deep learning models (Nwankpa et al., 2021).
**Tanh**
Hyperbolic tangent (tanh) maps inputs to the interval (−1, 1). This allows it to keep
contributions from negative outputs (something that ReLU does not), while at the same
time making sure that no outputs grow too large (in either negative or positive direction).
The function is given by:
$$\tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}$$
It is a common activation function for natural language tasks (Nwankpa et al., 2021).
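Both activation functions transcribed directly into NumPy:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)           # max{x, 0}

def tanh(x):
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

x = np.array([-2.0, 0.0, 2.0])
print(relu(x))   # [0. 0. 2.]
print(tanh(x))   # approx. [-0.964  0.     0.964]
```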
**2.3.10 Convolutional Neural Networks**
A Convolutional Neural Network (CNN) is a special type of neural network designed to
capture spatial relationships in images (Nielsen, 2015). Three key ideas are introduced in
CNNs: local receptive fields, shared weights and pooling.
Instead of having the input neurons fully connected to every neuron in the first
hidden layer, a CNN connects a cluster of spatially close neurons to the same hidden
neuron. For the example of an image, the input neurons can be considered as a matrix
corresponding to the pixels in the image. Each region in the pixel matrix is connected to
a single neuron in the first hidden layer. The regions are defined by a kernel of fixed
dimensions that “slides” across the pixel matrix, mapping local receptive fields to hidden
units. This is known as the convolutional layer, sometimes also called a feature map (Nielsen, 2015). Another key property of CNNs is that the hidden units in the convolutional layer share the same weights and bias (Nielsen, 2015).
CNNs also have pooling layers (Nielsen, 2015). Pooling is the process of simplifying the feature map by mapping regions from the hidden layer to a new layer by some
(simple) mathematical operation. For instance, 2x2 max pooling maps 2x2 regions in the
hidden layer to the largest value contained in the 2x2 window. This is a parameter-free
operation, meaning that the pooling layer does not contain weights and is therefore not
trained during backpropagation.
**2.3.11 Graph Neural Networks**
Graph Neural Networks (GNNs) refer to deep learning methods applied to graph
structures. This is achieved by so-called message passing techniques (Paliwal et al.,
2020). In a typical setup, node embeddings are computed using message passing before
predictions can be made on either individual nodes or the graph as a whole (Zhang
et al., 2018). The message passing function takes in the embedding $x_i$ of a node $v_i$ and the embeddings of nodes in the local neighborhood of $v_i$. For GNNs, the message passing
function is a neural network and therefore consists of trainable parameters θ (i.e., the
network weights). A typical embedding process involves several rounds of message
passing, called hops. For K hops, a node embedding will depend on neighbors as far as
K edges away. Self-loops are usually added, meaning that information from $v_i$'s own embedding is not lost in the message passing process. Note that in the following subsections, X does not denote a training set but rather the initial node embeddings of the graph.
Three GNN methods are used in this Thesis: Graph Convolutional Networks
(GCN) (Kipf and Welling, 2017), Simple Graph Convolutional Networks (SGC) (Wu
et al., 2019) and Deep Graph Convolutional Neural Networks for end-to-end graph
classification (DGCNN) (Zhang et al., 2018). GCN and SGC were proposed as node
classification methods. However, this Thesis is concerned with graph classification, in
which the graph itself is classified, and not individual nodes. GCN and SGC (and most
types of GNN methods) can serve this purpose too by having some form of readout
function. The readout function takes in the embedded nodes and maps them to a
fixed-sized graph representation. The graph representation can, in turn, be used in
a standard classification model. DGCNN is an architecture that does precisely this.
It leverages the GCN technique for node embeddings and uses a novel sorting-based
readout function. Each of the three methods will now be covered in more detail.
**Graph Convolutional Network**
The starting point for the Graph Convolutional Networks (GCNs) described by Kipf and
Welling (2017) is a semi-supervised node classification problem. This means that labels
are available for a subset of nodes, but not all. The problem is then to predict the labels
for the remaining nodes.
In short, Kipf and Welling (2017) solve this problem by considering the labeled
nodes as training data for a neural network model, in which both the node embedding
matrix X and the adjacency matrix A are inputs. In this way, both node embeddings and
the relationships between nodes are part of the feature space. This means that the crucial relational semantics encoded by the graph structure are included in the message passing process.
A K-layer GCN is identical to propagating node feature vectors through a K-layer FFN, with the addition that the hidden representation of each feature $w_i$ is averaged with the feature vectors of its local neighborhood (Wu et al., 2019). This is
analogous to a convolution in CNNs (hence the name Graph Convolutional Network).
This is achieved in the following way. GCN first adds self-loops to A, using the identity matrix I: $\tilde{A} = A + I$. Then, $\tilde{A}$ is normalized using its diagonal degree matrix $\tilde{D}$:

$$S = \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}$$
The convolutional step in GCN for the k-th hidden representation $W^{(k)}$ can then be compactly described by:

$$\bar{W}^{(k)} \leftarrow S W^{(k-1)}$$
This is known as feature propagation (Wu et al., 2019) and is similar to feature mapping
in CNNs. The next step is to apply a linear transformation by passing $\bar{W}^{(k)}$ through a parameterized function (an FFN with trainable parameters θ), before nonlinear activation is applied:

$$W^{(k)} \leftarrow \sigma(\bar{W}^{(k)} \theta)$$
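A NumPy sketch of one GCN layer following the per-layer update above (feature propagation with the normalized adjacency matrix S, linear transformation, and nonlinear activation) on a tiny random graph; it only illustrates the equations and is not the implementation used in the experiments:

```python
import numpy as np

def gcn_layer(A, W_prev, theta):
    """One GCN layer: normalize the adjacency matrix, propagate, transform, activate."""
    A_tilde = A + np.eye(A.shape[0])                    # add self-loops
    d = A_tilde.sum(axis=1)                             # degrees of A_tilde
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ A_tilde @ D_inv_sqrt               # S = D^{-1/2} A_tilde D^{-1/2}
    W_bar = S @ W_prev                                  # feature propagation
    return np.maximum(W_bar @ theta, 0)                 # linear transformation + ReLU

# Toy graph with 3 nodes, 2-dimensional node features, 2 output channels.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.random.randn(3, 2)          # initial node embeddings (W^(0) = X)
theta = np.random.randn(2, 2)      # trainable layer parameters
print(gcn_layer(A, X, theta))      # next hidden representation W^(1)
```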
The overall algorithm is depicted in Figure 2.6. The figure showcases how GCN implements
a 3-step process: feature propagation, linear transformation, and nonlinear activation.
ReLU is used as the activation function σ in the figure. Softmax is applied at the end to
obtain a classification.
Figure 2.6: Illustration of the GCN message passing algorithm. Figure from Wu et al. (2019), with permission from Felix Wu.

**Simplifying Graph Convolutional Networks**
The Simplified Graph Convolution (SGC) message passing technique (Wu et al., 2019)
makes two key simplifications to the GCN technique. The first is to remove the nonlinear
activation function between each GCN layer. Wu et al. (2019) hypothesize that activation
between messages is not crucial for capturing relational semantics. The resulting classifier
becomes:

$\hat{Y}_{SGC} = \text{softmax}(S S \cdots S X \theta^{(1)} \theta^{(2)} \cdots \theta^{(K)})$

for K-hop message passing. The notation is simplified further by collapsing the normalized
adjacency matrix multiplications into a single operation where S is raised to the power
of K. The weights can also be reparameterized into a single matrix θ:

$\hat{Y}_{SGC} = \text{softmax}(S^K X \theta)$

SGC is depicted in Figure 2.7. As noted by Wu et al. (2019), SGC is easy to interpret. It
first consists of a parameter-free feature extraction step (the message passing) $\bar{X} = S^K X$,
before a classification step outputs the prediction (shown as “Logistic Regression” in
Figure 2.7): $\hat{Y} = \text{softmax}(\bar{X}\theta)$.

Figure 2.7: Illustration of the SGC message passing algorithm. Figure from Wu et al. (2019), with permission from Felix Wu.
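As a rough illustration of how little machinery SGC needs, the sketch below precomputes $S^K X$ and fits an ordinary logistic regression on the labeled nodes. It assumes scikit-learn is available; the function name `sgc_features` and the toy graph are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sgc_features(A, X, K):
    """Parameter-free SGC feature extraction: X_bar = S^K X with S the normalized adjacency."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ A_tilde @ D_inv_sqrt
    X_bar = X.copy()
    for _ in range(K):        # apply S a total of K times instead of forming S^K
        X_bar = S @ X_bar
    return X_bar

# Toy semi-supervised setup: 4 nodes, labels known for the first 3.
A = np.array([[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]], dtype=float)
X = np.random.randn(4, 5)
y = np.array([0, 1, 1])
X_bar = sgc_features(A, X, K=2)
clf = LogisticRegression().fit(X_bar[:3], y)   # the "Logistic Regression" step in Figure 2.7
print(clf.predict(X_bar[3:]))                  # predict the unlabeled node
```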
**Deep Graph Convolutional Neural Network**

Deep Graph Convolutional Neural Network (DGCNN) (Zhang et al., 2018) is an end-to-end
architecture for graph classification. The overall architecture is depicted in Figure 2.8.
The first step is K rounds of GCN message passing. For each round, a node embedding
W is stored. Zhang et al. (2018) propose the following GCN implementation:

$W^{(k+1)} = \sigma(\tilde{D}^{-1} \tilde{A} W^{(k)} \theta^{(k)})$

As with GCN from Kipf and Welling (2017), $\tilde{A}$ denotes the graph adjacency matrix with
added self-loops $\tilde{A} = A + I$, and $\tilde{D}$ is the diagonal degree matrix of $\tilde{A}$. In other words,
this implementation is almost identical to the GCN implementation from Kipf and
Welling (2017). Node embeddings from each round of message passing are concatenated
into a single graph representation $W^{(1:K)} = [W^{(1)}, \ldots, W^{(K)}]$.

The next step is to perform a readout of the node embeddings. DGCNN achieves this
by a novel SortPool layer, which extracts the top n rows from $W^{(1:K)}$. This is done
by first sorting $W^{(1:K)}$ based on the final message passing computation $W^{(K)}$. Zhang
et al. (2018) show that the output from the message passing rounds can be viewed as
continuous Weisfeiler-Lehman (WL) node colors (Weisfeiler and Lehmann, 1968). In
short, WL colors are node colors obtained by iteratively updating colors based on the
node’s previous color and the color of its local neighborhood[6]. The final output $W^{(K)}$ is
the most “refined” such coloring, and therefore the basis for the sort (Zhang et al., 2018).

6“Color” is not meant to be interpreted literally, but rather as a fingerprint representing the node.
Crucially, SortPool provides a consistent graph representation. This means that if two
graphs are isomorphic[7], their graph representation after SortPool is the same (Zhang
et al., 2018).

7Isomorphism between two graphs G1, G2 means that there exists a one-to-one mapping between nodes in G1 and G2.

SortPool can pass gradient loss back to the GCN layers. Learning end-to-end
graph classification is therefore possible with DGCNN. Zhang et al. (2018) also pad or
truncate the output from SortPool so that it always contains exactly n rows. Finally, a
traditional CNN is implemented as a prediction network.
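A minimal sketch of the SortPool readout described above: nodes are sorted by the last channel of the final GCN output (with right-to-left tie-breaking over earlier channels), and the result is truncated or zero-padded to a fixed number of rows. This is a simplified illustration, not the reference DGCNN implementation.

```python
import numpy as np

def sort_pool(W_concat, W_last, n_keep):
    """Sort the rows of W_concat by W_last (last channel is the primary sort key),
    then truncate or zero-pad the result to exactly n_keep rows."""
    # np.lexsort treats the last key as primary, so the columns of W_last are used
    # right-to-left; [::-1] turns the ascending order into a descending one.
    order = np.lexsort(W_last.T)[::-1]
    pooled = W_concat[order]
    n, d = pooled.shape
    if n >= n_keep:
        return pooled[:n_keep]
    return np.vstack([pooled, np.zeros((n_keep - n, d))])

# 5 nodes, concatenated embeddings of size 6, final-hop embeddings of size 2.
W_concat = np.random.randn(5, 6)
W_last = np.random.randn(5, 2)
print(sort_pool(W_concat, W_last, n_keep=8).shape)  # (8, 6) after zero-padding
```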
Figure 2.8: Illustration of the DGCNN end-to-end graph classification architecture. Figure from Zhang et al. (2018), with permission from Yixin Chen.
**2.3.12 Transformers**

The Transformer (Vaswani et al., 2017) is a deep learning architecture developed in the
field of Natural Language Processing (NLP). The architecture implements two modules:
an encoder and a decoder. In addition, K encoders and decoders are stacked. The input
to these modules is tokenized representations of natural language, and the output is a
probability distribution. The key components in the Transformer are multi-headed self-
attention networks, responsible for capturing semantic information in the input sequence.
Transformers are so-called autoregressive systems, meaning that previous outputs are
part of the current input. The main components will now be explained in more detail.

**Tokens and Input Embedding**

The tokenization step maps an input sequence to a vector of tokens. For instance, the
input “My name is Bob” might be mapped to the tokens [My, name, is, Bob]. Before the
Transformer encoder and decoder can make computations on the tokens, each token is
mapped to a real-valued vector representation. For example, [My, name, is, Bob] might
be mapped to [0.5, 0.3, 0.2, 0.8]. The input embedding can be obtained in several ways.
Often a pre-defined map is used to ensure that the same word always has the same initial
embedding vector. Embedding vectors can be based on pre-trained models that have
learned a meaningful mapping.
**Positional Encoding**
Before input embeddings are fed into the encoder and decoder, positional encodings
are computed for each input embedding. A positional encoding contains additional
information about the absolute and relative position of tokens in a sequence. Vaswani
et al. (2017) compute positional encoding in the following way:
$PE(pos, 2i) = \sin(pos / 10000^{2i/d_{model}})$

$PE(pos, 2i + 1) = \cos(pos / 10000^{2i/d_{model}})$
pos is the position of the token in the sequence, i is the dimension of the positional
encoding, and $d_{model}$ is the input embedding dimension. These functions are chosen
because $PE_{pos+k}$, where k is an offset, is a linear function of $PE_{pos}$. This means that the
relative position of a token is well formulated in the positional encoding, in addition to
the sinusoid representation of absolute position.
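A short sketch of these sinusoidal encodings; the maximum sequence length and model dimension below are arbitrary illustrative choices.

```python
import numpy as np

def positional_encoding(max_len, d_model):
    """PE(pos, 2i) = sin(pos / 10000^(2i/d_model)), PE(pos, 2i+1) = cos(...)."""
    pos = np.arange(max_len)[:, None]            # (max_len, 1)
    i = np.arange(d_model // 2)[None, :]         # (1, d_model/2)
    angle = pos / np.power(10000, 2 * i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angle)                  # even dimensions
    pe[:, 1::2] = np.cos(angle)                  # odd dimensions
    return pe

pe = positional_encoding(max_len=50, d_model=16)
print(pe.shape)  # (50, 16); added element-wise to the input embeddings
```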
**Multi-Head Self-Attention**
Both the encoder and the decoder consist of multi-head self-attention networks.
Multi-head refers to the fact that several self-attention layers are stacked and run
in parallel before the output from each is concatenated and run through a final linear layer.
Self-attention is a technique able to focus its “attention” on the most important
tokens in the input embedding, based on the relationship between the tokens. The
self-attention layer takes in a query Q and key-value pairs K, V (in matrix form). Q is
the current word, and the key-value pairs represent the “memory” of all the words that
have been generated up to that point. Vaswani et al. (2017) call their attention Scaled
_Dot-Product Attention, because attention is computed based on the dot-product between_
the input matrices and scaled based on the dimension of the key matrix dk:
$\text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V$
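A compact NumPy sketch of scaled dot-product attention as defined above (a single head, no masking); the shapes are illustrative.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # (n_q, n_k) similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V

Q = np.random.randn(4, 8)   # 4 query positions, d_k = 8
K = np.random.randn(6, 8)   # 6 key positions
V = np.random.randn(6, 8)
print(scaled_dot_product_attention(Q, K, V).shape)     # (4, 8)
```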
**Encoder and Decoder**
The encoder takes in the input embedding and the positional encoding. The input is
transformed into a query Q and key-value pairs K, V and fed into a multi-headed
attention layer. The input and the multi-head attention output are then added together and
normalized in a new layer. Finally, an FFN is used to obtain a linear combination before
the FFN input and output are again added and normalized to obtain the final encoder output.
The decoder is similar to the encoder. However, an additional multi-head attention layer is used over the output of the encoder and the output of the first multi-head
attention layer in the decoder. The decoder is autoregressive, meaning it generates
tokens one at a time while being fed in the previous outputs. The input is also right
shifted one position to ensure that the prediction at position i only depends on the
known outputs at positions before i.
**BERT**
Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018)
is a Transformer-based architecture allowing embeddings to be computed based on both
the left and right (i.e., bidirectional) context of the token. A key idea is that BERT can
first be pre-trained on general tasks (so-called upstream training) and later fine-tuned to
a specific task (so-called downstream training). BERT has to attend to both preceding
and succeeding tokens during pre-training.
BERT consists of several layers of fully connected Transformers. Two versions
are implemented by Devlin et al. (2018). BERT-base consists of 12 transformer blocks,
each made up of 12 attention heads. Hidden representations are of size 768, making
BERT-base consist of 110 million parameters. A BERT-large version is also implemented,
which implements more transformer blocks and attention heads. BERT-base is the only
relevant BERT implementation for this Thesis.
BERT uses a so-called WordPiece tokenization technique. In short, an input
embedding is extracted from natural language by breaking words into tokens from a set
of more than 30,000 tokens. It is possible to input more than one sentence to BERT by
separating sentences using a special separation token.
Devlin et al. (2018) define two pre-training tasks for BERT. One is to mask tokens with a 15% probability. BERT is then tasked with predicting the masked tokens.
The second is a next sentence prediction task. Given sentence A and B, BERT has to
predict whether or not B is the next sentence after A in the dataset. The total number
of words in the pre-training data is 3.3 billion.
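To make the masked-token objective concrete, the toy snippet below masks roughly 15% of the tokens in a sequence. Real BERT pre-training additionally replaces some selected tokens with random tokens or leaves them unchanged; that detail is omitted here for brevity.

```python
import random

def mask_tokens(tokens, mask_prob=0.15, mask_token="[MASK]", seed=0):
    """Randomly replace tokens with [MASK]; the model must predict the originals."""
    random.seed(seed)
    masked, targets = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            masked.append(mask_token)
            targets.append(tok)      # prediction target for this position
        else:
            masked.append(tok)
            targets.append(None)     # no loss for unmasked positions
    return masked, targets

print(mask_tokens(["my", "name", "is", "bob", "and", "i", "like", "coq"]))
```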
**2.3.13 Deep Q-Learning**
Deep Q-learning (Mnih et al., 2015) is a reinforcement learning technique, combining
_Q-learning and deep learning. Here, Q-learning will be explained first, before the deep_
learning aspect of deep Q-learning is explained. The important exploration-exploitation
trade-off will also be explained.
_Q-Learning_
_Q-learning (Russell and Norvig, 2010, p. 831) is a so-called model free reinforcement_
learning method. A model in reinforcement learning refers to an explicit transition
probability distribution over the possible state-action pairs in the environment and a
reward function mapping states to reinforcements. Q-learning works without any such
explicit model.
In Q-learning, a function Q(s, a) is learned through trial-and-error. Q(s, a) maps a
state-action pair (s, a) to some real-value. This value represents the expected utility
obtained by performing action a in state s. If a good Q(s, a) is learned, an effective
Q-learning agent can operate by choosing to perform the best action $a^*$ for each state:

$a^* = \operatorname{argmax}_a Q(s, a)$
The Q function is trained by using the Bellman equation (Russell and Norvig,
2010, p. 652), where the temporal difference δ (Russell and Norvig, 2010, p. 836) between
state st and st+1 is the critical component. The Bellman equation is the following value
_iteration update function:_
$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha\delta,$

where

$\delta = r_t + \gamma \max_a Q(s_{t+1}, a) - Q(s_t, a_t)$
$r_t$ is the reward obtained when reaching $s_{t+1}$. $r_t$ is often a neutral reward whenever
$s_{t+1}$ is not a terminal state, and either a positive or negative reward whenever $s_{t+1}$ is a
desirable or undesirable terminal state. γ is the discount factor. The purpose of γ is to
discount the importance of estimated future rewards, incentivizing closer rewards over
distant rewards. The temporal difference is weighted by a learning rate α.

$\max_a Q(s_{t+1}, a)$ means that the model uses the (so-far) best known policy to estimate the future reward obtainable from $s_{t+1}$, called off-policy learning. Off-policy
agents learn the value of the optimal action policy, independently of the agent’s actions[8].
_Q-learning works because the estimate Q(st, at) is gradually refined based on_
the true outcome from performing at in st. The temporal difference essentially encodes
the error between the estimate at time step t and the slightly better estimate at time
step t + 1. Importantly, a reward at a terminal state reveals the ground truth about the
final state, which makes learning possible.
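The value iteration update translates directly into a small tabular sketch; the states, actions, and reward below are purely illustrative.

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """One Bellman update: Q(s,a) <- Q(s,a) + alpha * delta."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)   # off-policy max_a Q(s', a)
    delta = r + gamma * best_next - Q[(s, a)]             # temporal difference
    Q[(s, a)] += alpha * delta
    return Q

# Toy example: two actions and a single observed transition.
actions = ["left", "right"]
Q = defaultdict(float)
q_update(Q, s="s0", a="right", r=1.0, s_next="s1", actions=actions)
print(Q[("s0", "right")])  # 0.1 after one update from a zero-initialized table
```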
8The opposite is known as on-policy learning. In this setup, the agent will learn the value of the true
policy being carried out by the agent, which typically includes non-optimal explorative actions.
**Approximating Q(s, a) via Neural Networks**
Traditional Q-learning encodes the Q function as an explicit dictionary-style map.
However, this is not feasible in complex environments where the space of states (and
possibly actions) is large. A way to overcome this is to approximate Q(s, a) via a
learnable function approximator. This is the main idea of deep Q-learning (Mnih et al.,
2015). Neural networks are universal function approximators and can be used as an
efficient alternative to explicit Q maps.
A replay memory is used to train deep Q-learning agents (Mnih et al., 2015).
Whenever the agent performs a new action $a_t$, the tuple $(Q(s_t, a_t), Q(s_{t+1}, a^*_{t+1}), r_t)$ is
added to the replay memory. This is a new experience that the agent can learn from.
Experiences in the replay memory are used to train the model in a self-supervised
manner, where the labels are $r_t + Q(s_{t+1}, a^*_{t+1})$ and the predictions are $Q(s_t, a_t)$. The
error then becomes the temporal difference and can be used for backpropagation training.
Learning from a replay memory therefore closely resembles supervised learning. The
difference is that the labels are imperfect estimates of the true values that gradually improve
as the true terminal rewards drive the estimates towards more and more correct values.
In practice, a few enhancements are used to improve the replay memory technique. Instead of training on all experiences in the replay memory, only a subset is
used. These are typically chosen uniformly at random, which has been shown to result in a
more stable learning process (Mnih et al., 2015). Also, a separate network $Q_{target}$ is
used to calculate $Q(s_{t+1}, a^*_{t+1})$ (Mnih et al., 2015). This network is known as the target
network. The target network is periodically updated with the Q-network’s weights,
making it converge towards better and better estimates while always “lagging” behind
the Q-network, which also stabilizes learning (Mnih et al., 2015). In practice, the Q-network takes in the current state and outputs a value for each action
in the action space, rather than taking in state-action pairs one at a time.
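The replay memory and target network can be sketched as follows. This is a schematic PyTorch-style fragment, assuming a small fully connected Q-network and a replay memory filled with random transitions; it is not a faithful reproduction of the setup in Mnih et al. (2015).

```python
import random
import torch
import torch.nn as nn

state_dim, n_actions = 4, 2
q_net = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, n_actions))
target_net = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, n_actions))
target_net.load_state_dict(q_net.state_dict())          # periodically synced copy
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# Replay memory of (s, a, r, s_next, done) tuples; filled with random data here.
memory = [(torch.randn(state_dim), random.randrange(n_actions),
           random.random(), torch.randn(state_dim), False) for _ in range(100)]

def train_step(batch_size=32, gamma=0.99):
    batch = random.sample(memory, batch_size)            # uniform random subset
    s, a, r, s_next, done = zip(*batch)
    s, s_next = torch.stack(s), torch.stack(s_next)
    a = torch.tensor(a)
    r = torch.tensor(r, dtype=torch.float32)
    not_done = 1.0 - torch.tensor(done, dtype=torch.float32)
    q_pred = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)         # Q(s_t, a_t)
    with torch.no_grad():                                          # target from the lagging network
        q_target = r + gamma * not_done * target_net(s_next).max(1).values
    loss = nn.functional.mse_loss(q_pred, q_target)                # temporal-difference error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

print(train_step())
```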
**Exploration vs. Exploitation**
A key aspect of Q-learning (and reinforcement learning in general) is the idea of
exploration vs. exploitation (Russell and Norvig, 2010, p. 839). The idea is that the
agent should not always perform the best known action at every time step (exploitation),
but should sometimes perform sub-optimal actions to facilitate exploration of so-far unseen states. Given that the
agent does not know ahead of time what the good states are, it risks converging towards a
sub-optimal policy if it has not seen enough different states. The agent needs to combine
the exploitation of previous knowledge with the exploration of new options to ensure that it finds a
good policy. In other words, it is crucial that the model explores many different actions
to be sure that it finds the best ones, while at the same time exploiting the best
actions often enough to find strong sequences of actions and become confident in them.
A common way to achieve this is by deploying an ϵ-greedy strategy (Mnih et al., 2015).
In this setup, the agent chooses a random action over the best action with probability ϵ.
ϵ will typically be a value that decreases over time, meaning that the agent deploys more
aggressive exploration in the beginning, before steadily choosing the best actions more
and more often. ϵ is decayed exponentially in this Thesis. Given a decay rate d, ϵ at
time step t is calculated as follows:

$\epsilon_t = \epsilon_{end} + \frac{\epsilon_{start} - \epsilon_{end}}{e^{t/d}}$
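A one-line implementation of this decay schedule, with illustrative values for ϵ_start, ϵ_end, and the decay rate d:

```python
import math

def epsilon(t, eps_start=1.0, eps_end=0.05, decay=200):
    """Exponentially decayed exploration rate: eps_end + (eps_start - eps_end) / e^(t/decay)."""
    return eps_end + (eps_start - eps_end) * math.exp(-t / decay)

print([round(epsilon(t), 3) for t in (0, 200, 1000)])  # [1.0, 0.399, 0.056]
```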
**2.3.14 Other Techniques**
Some more machine learning techniques are mentioned in the context of the related work
covered in Chapter 3. The main ones will now be briefly mentioned.
**Naive Bayes**
Naive Bayes (Russell and Norvig, 2010, p. 808) is a supervised learning algorithm based
on Bayes’ Theorem and the so-called naive assumption that features are not correlated
with each other (i.e., that they are independent variables). It is a simple machine learning algorithm
that has empirically been shown to yield strong results, even though the naive assumption
might not be strictly true for the given problem (Russell and Norvig, 2010, p. 499). It
also has the advantage that it is fast and easily scaled (Alama et al., 2014).
_k Nearest Neighbors_
_k Nearest Neighbors (k-NN) (Russell and Norvig, 2010, p. 738) is a method that, given a_
new example, computes the k most similar examples to the new example. This is based
on some distance measure between points in the feature space. A way to make k-NN
more sophisticated is by including a weight on the features. For example, one typically
wants rare features to have a greater impact on the similarity measure, and common
features to have less impact. A popular way to achieve this is by using so-called term
_frequency–inverse document frequency (TF-IDF). TF-IDF is a concept from the field of_
information retrieval that normalizes the weight of a feature based on how common the
feature is across all examples.
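As a brief illustration of TF-IDF-weighted nearest-neighbour lookup, the sketch below uses scikit-learn on a toy corpus; this is only meant to show the idea, not how any of the systems discussed in Chapter 3 implement it.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

corpus = ["addition is commutative on natural numbers",
          "addition of zero is the identity",
          "reversing a list preserves its length"]
query = ["commutativity of natural number addition"]

vec = TfidfVectorizer()                    # rare terms get higher weight than common ones
X = vec.fit_transform(corpus)
knn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(X)
dist, idx = knn.kneighbors(vec.transform(query))
print(idx[0])                              # indices of the 2 most similar examples
```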
**TreeLSTM**
TreeLSTM (Tai et al., 2015) is a method that can be used on tree structures. It
is a generalization of the Long Short-Term Memory (LSTM) unit (Hochreiter and
Schmidhuber, 1997), able to embed the topology of tree structures. This is done by
having LSTM units (explained below) depend not only on the input vector and the
hidden state at the previous time step but also on the hidden state of units belonging
to children nodes in the tree. The idea is that this allows information to pass from
children to the parent node, meaning that the embedding will capture the relationship
between these nodes. This is different from regular chained LSTM units that linearly
pass information between tokens in a sequence.
LSTM units are a special case of recurrent neural networks (RNNs). In short,
RNNs are neural networks containing nodes with edges that point back to the node (i.e.,
they are recurrent). That is, inputs to the recurrent node depend not only on the new
input in the input layer but also on the previous inputs to the node. LSTMs are different
from RNNs in that they also include three gates: input gate, output gate, and forget
gate. By closing the input gate, new inputs will not override the LSTM unit information,
making it capable of longer-term memory. The forget gate controls how long a sequence
element should be part of the recurrent information in the unit, while the output gate
controls the activation of the output from the unit.
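A minimal sketch of a single Child-Sum TreeLSTM node update in the spirit of Tai et al. (2015); parameter shapes and names are illustrative, and bias terms are omitted for brevity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def child_sum_treelstm_node(x, child_h, child_c, W, U):
    """Update one tree node from its input x and its children's (h, c) states.
    W and U are dicts of weight matrices for the input, output, update and forget gates."""
    h_tilde = child_h.sum(axis=0)                    # sum of children hidden states
    i = sigmoid(W["i"] @ x + U["i"] @ h_tilde)       # input gate
    o = sigmoid(W["o"] @ x + U["o"] @ h_tilde)       # output gate
    u = np.tanh(W["u"] @ x + U["u"] @ h_tilde)       # candidate update
    f = sigmoid(W["f"] @ x + child_h @ U["f"].T)     # one forget gate per child
    c = i * u + (f * child_c).sum(axis=0)            # cell state
    h = o * np.tanh(c)                               # hidden state
    return h, c

d_in, d_h, n_children = 4, 3, 2
rng = np.random.default_rng(0)
W = {g: rng.normal(size=(d_h, d_in)) for g in "iouf"}
U = {g: rng.normal(size=(d_h, d_h)) for g in "iouf"}
x = rng.normal(size=d_in)
child_h = rng.normal(size=(n_children, d_h))
child_c = rng.normal(size=(n_children, d_h))
h, c = child_sum_treelstm_node(x, child_h, child_c, W, U)
print(h.shape)  # (3,)
```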
## Chapter 3
# Related Work
The first section in this chapter, Section 3.1, will cover how the structured literature review
for this Master’s Thesis was performed. Section 3.2 addresses related work involving
Auto-ITP. This section is considered the most relevant for the experiments in this Thesis.
Then, Section 3.3 covers so-called Hammers, which is another popular way of automating
ITP systems. This method is fundamentally different from Auto-ITP but still aims to
achieve end-to-end automation of ITP systems. Hammers are also interesting because
several machine learning techniques have been deployed in these systems to increase
performance, and they have been used to supplement Auto-ITP models. Finally, Section
3.4 covers other related work involving machine learning applied to mathematics and
formal reasoning. Although this literature addresses different problems than Auto-ITP,
many subproblems overlap, and many techniques can be applied to Auto-ITP.
##### 3.1 Literature Review
The initial interest for the topic in this Thesis came from an article written by a team at
Google, where they introduced a new proof dataset (Kaliszyk et al., 2017). To explore
the topic further, a top-down approach was used, where the related work section in
Kaliszyk et al. (2017) served as a starting point. The literature review then quickly
evolved into three distinct branches explored in parallel. The first was related work in
the Auto-ITP space, the second was work on Hammers, and the third was a broader
review of machine learning applied to formal reasoning. A review of ATP, ITP and
machine learning was also conducted. This was performed on an ongoing basis, as
important concepts were mentioned in literature. ATP, ITP and machine learning are
considered background theory and therefore included in Chapter 2.
In order to review the current literature on Auto-ITP, the starting point was,
as mentioned, the work of Kaliszyk et al. (2017). They pointed to other related
work leveraging underlying ITP systems to train machine learning models, which
in turn pointed to other related work within the Auto-ITP space. All Auto-ITP
frameworks developed have been given a name, which was helpful because it enabled
concrete searches on Google and Google Scholar for specific Auto-ITP frameworks.
Such searches were very fruitful. For example, searching for “HOList” on Google
revealed a website dedicated to this framework, where all Auto-ITP efforts in HOList
are summarized[1]. Moreover, searching for “CoqGym” on Google Scholar revealed
work by several research groups using CoqGym as their benchmark (Sanchez-Stern
et al., 2020; First et al., 2020). More general Google Scholar searches supplemented
this. Here, keywords like “tactic prediction” and “automating proof assistants” were
used, revealing relevant work outside of any defined Auto-ITP framework (Nawaz
et al., 2020; Yang et al., 2016; Lee et al., 2020; Szegedy, 2020; Lample and Charton, 2020).
Hammers were researched similarly to Auto-ITP, with a top-down approach as
the primary review method. This search started with HOL Light’s Hammer:
HOL(y)Hammer (Kaliszyk and Urban, 2015b). It became clear that there are many
Hammer systems already developed. However, Hammers are mainly interesting as
a comparison to Auto-ITP in this Thesis, and the review was therefore restricted to
Hammer systems developed for ITP systems with an Auto-ITP counterpart. This led the
review to focus on HOL(y)Hammer and its evolution (Kaliszyk and Urban, 2013, 2014,
2015b), and the Hammer system for Coq: CoqHammer (Czajka and Kaliszyk, 2018).
Another essential resource on Hammers was the survey article Blanchette et al. (2016).
In order to fully understand Auto-ITP, it was necessary to have a basic grasp
of the underlying ITP systems and the fundamentals of traditional ATP. Therefore
a review of the ITP systems themselves was conducted. Two main approaches were
taken. One was to read survey literature on ITP systems. These articles compared
different modern ITP systems (Nawaz et al., 2019) and explained historical aspects
of the systems (Harrison et al., 2014; Gordon, 2000). The second was to use the
documentation provided by the ITP community[234]. ITP systems are relatively mature
and have thorough documentation. As for ATP, the primary resource was first a survey
lecture given by the ITP pioneer John Harrison[5], which pointed to relevant techniques
and approaches in the field. The methods used by ATP systems are well studied and
included in several books on logic and inference. The primary sources for this research
were the widely adopted Russell and Norvig (2010) and Robinson and Voronkov (2001),
as well as articles introducing the relevant techniques (Smullyan, 1968; Rusinowitch, 1991).
Machine learning literature was reviewed on an ongoing basis, as machine learning techniques came up in the Auto-ITP literature. Both introductory articles and
survey articles were used. To filter out the most relevant articles, the supervisor for the
Master’s Thesis, Björn Gambäck, provided valuable insights and suggestions.
1https://sites.google.com/view/holist/home
2https://coq.inria.fr/distrib/current/refman/
3https://www.cl.cam.ac.uk/~jrh13/hol-light/reference.html
4https://hol-theorem-prover.org/#doc
5https://www.lektorium.tv/lecture/14805
##### 3.2 Auto-ITP
In this Thesis, Auto-ITP refers to recent efforts by the machine learning community
to build predictive models on top of existing ITP systems in order to predict tactic
application (Bansal et al., 2019a; Yang and Deng, 2019; Gauthier et al., 2017). The whole
system (ITP + predictive model) can automate the theorem proving task end-to-end, in
a way where the machine learning model “acts as the human user”. Figure 3.1 illustrates
this setting. In order to be clear, the following definition is provided for Auto-ITP:
**Auto-ITP Any approach to automating an underlying ITP system where the proof search**
_is driven forward by machine learning models that have learned to predict what_
_tactics and tactic arguments to apply in a given proof state._
Figure 3.1: Overview of the Auto-ITP setting. Examples of possible encoding, embedding
and predictive approaches are included for the Auto-ITP model.
There are multiple ways to train Auto-ITP models. In particular, one has to decide what
kind of data the models are going to train on. In an imitation setting, the models are
trained on human-written proofs (Bansal et al., 2019b). In this way, the model tries to
imitate how humans prove theorems. In a self-learning setting, the model trains on its own
proofs. These are typically generated during a reinforcement learning session (Bansal
et al., 2019b). It is possible to have a pure imitation setting, a combination of imitation
and self-learning, and a pure self-learning setting.
The Auto-ITP process can be interleaved with Hammer calls. This means that
the Hammer tries to prove the current subgoal first. If it fails, the Auto-ITP agent
selects a new tactic, which leads to a new subgoal. The Hammer can then try to prove
this subgoal, and so on. The same idea can be used with the ITP system’s internal
automatic engine (see Section 2.2.3).
Although formal reasoning is a cornerstone of symbolic-based artificial intelligence (Russell and Norvig, 2010), frameworks and benchmarks for combining machine
learning and formal reasoning have been lacking. Auto-ITP research has been focused on
providing full-fledged frameworks to address this. These frameworks generally choose
an existing ITP system as the starting point and provide an Application Programming
Interface (API) for engaging with the system programmatically. Machine learning
researchers can then interact with the ITP systems in a black-box fashion and avoid
overheads associated with learning ITP-specific domain knowledge. Four Auto-ITP
frameworks have been developed so far. An overview is provided in Table 3.1. Auto-ITP
models for each framework will now be covered. For the CoqGym framework, more
details of the framework itself are provided.
Table 3.1: Overview of existing Auto-ITP frameworks. The data is gathered from several
sources: Kaliszyk and Urban (2013); Gauthier and Kaliszyk (2015); Bansal
et al. (2019a); Gauthier et al. (2017); Huang et al. (2019); Yang and Deng
(2019). Values are rounded to the closest thousand.
| Name | Underlying ITP | Human-written proofs |
|---|---|---|
| HOList | HOL Light | 29k |
| TacticToe | HOL4 | 8k |
| GamePad | Coq | 2k |
| CoqGym | Coq | 71k |
**3.2.1 TacticToe**
The first attempt at Auto-ITP was made by Gauthier et al. (2017), in the TacticToe
environment. Gauthier et al. (2017) focus exclusively on k-NN as the core machine
learning model. This is motivated by the success of this technique in the Hammer system
HOL(y)Hammer[6] (explained later in Section 3.3.3). TacticToe extracts syntactic features
of the current conjecture and scores its similarity to already proven conjectures using
_k-NN with an Inverse Document Frequency-based weighting scheme. Inverse Document_
Frequency essentially means that similarity is normalized based on how common terms
are. The tactics used to prove the k most similar already-proven goals are applied to the
6The developers of TacticToe are also pioneers on Hammer research
42
-----
_3.2 Auto-ITP_
current goal.
The first experiments with TacticToe did not treat argument prediction as a
standalone problem. Instead, they simply predicted tactics with already defined arguments
(Gauthier et al., 2017). However, in later experiments, Gauthier et al. (2020) generalize
tactics by removing their arguments. Arguments are then predicted by a separate k-NN
model. Furthermore, they have one model predicting arguments from the global context
and another model predicting arguments from the local context.
Gauthier et al. (2017) use a novel approach to proof search: a modified version
of the best-first A* algorithm which pursues proof paths that are most likely to
lead to a full proof. They do this by scoring each not-yet closed node in the search
tree and choosing the node with the best score as the next node to expand. In
order to score nodes, Gauthier et al. (2017) try a few different variations. The most
successful is a simple summation of the depth of the node and the number of tactics
previously applied on the goals in the node. Intuitively it makes sense that goals deeper
in the proof tree are likely to be simpler than goals further up, and therefore easier to prove.
Gauthier et al. (2020) replace A* with a variant of the popular Monte Carlo algorithm (Raychaudhuri, 2008). In short, the next node to be evaluated is chosen based on the
number of times the nodes along its proof path have been visited. An exploration term is
also included, to allow nodes that do not have the highest score to be chosen from time to time.
Both the A* experiments (Gauthier et al., 2017) and the Monte Carlo experiments (Gauthier et al., 2020) complement the proof search by integrating a minimal
version of a Hammer with TacticToe. This is done by invoking one of HOL4’s (TacticToe’s
underlying ITP system) internal automatic engines (see Section 2.2.3) every time a
selected goal is being evaluated. The hope is that the automatic engine will be able to
prove the goal without further need for proof tree expansion. As noted by Gauthier
et al. (2020), integration with Hammers is expensive, and a built-in internal automatic
engine is therefore chosen instead.
Results from Gauthier et al. (2017) and Gauthier et al. (2020) are summarized
in Table 3.2. This constitutes the state-of-the-art in TacticToe. Models are only trained in a
supervised manner on human-written proofs (the Monte Carlo technique only updates the
next-node heuristic in a reinforcement manner, not the machine learning model itself).
**3.2.2 HOList**
HOList is an Auto-ITP framework developed by Google (Bansal et al., 2019a). Initial
results in this framework are provided by Bansal et al. (2019a). They have two networks
responsible for embedding: one for the goal and one for arguments. They decide to
drop the hypotheses from the local context, meaning that only arguments from the
global context are considered. This comes at the cost that the model will fail on certain
goals while keeping the experiment reasonably simple. Bansal et al. (2019a) use a
neural network called WaveNet (van den Oord et al., 2016) to compute the embeddings. Details of WaveNet are omitted as it is not highly relevant for the rest of the Thesis.

Table 3.2: State-of-the-art and main results in TacticToe. Results are from experiments
in Gauthier et al. (2017, 2020).

| Setting | Proof Search | Argument prediction | Hammer | Result |
|---|---|---|---|---|
| Imitation | Monte Carlo | Yes | Automatic engine | **66.4%** |
| Imitation | A* | No | Automatic engine | 39.43% |
| Imitation | A* | No | None | 29.73% |
The embedding for the goal is fed into a simple one-layer FFN. This network
predicts what tactic to choose. For each global context theorem, its embedding is
concatenated with the goal embedding and fed into another one-layer FFN. This network
ranks the relevance of the theorem.
In Paliwal et al. (2020) the same high-level architecture is used. However, they
choose to use Graph Neural Networks (GNNs) as embedding networks. In order to
do this, they represent each term as an Abstract Syntax Tree (AST). A few AST
modifications are tested. The most notable being:
- Standard AST.
- Subexpression sharing. Nodes that are syntactically equal are merged.
- Top down. Only keep edges from parent to children.
- Bottom up. Only keep edges from child to parent.
Paliwal et al. (2020) choose GNN message passing over the similar TreeLSTM
architecture, because TreeLSTMs fail to consider the full context of sub-expressions.
A TreeLSTM will always compute the same embedding for a sub-expression, regardless of
its context in the whole formula, as it only considers information flow from children
nodes to parent nodes, not in both directions. Part of the reason why Paliwal et al.
(2020) experiment with top-down and bottom-up variations of the ASTs is to better
understand what constitutes the most important context for sub-expressions (e.g., does
the parent node contain more important semantic meaning for child nodes than the other
way around?).
Bansal et al. (2019a) and Paliwal et al. (2020) train in a strict imitation
manner, in an imitation+self-learning manner, and in a strict self-learning manner.
Self-learning is achieved by appending newly generated theorems to the training
set. A hybrid imitation and self-learning setting is obtained by seeding the models
with human-written proofs first, allowing them to train on that data in addition to
machine-generated proofs. In the strict self-learning setting, the dataset is initially
empty.
Results for the self-learning setting were improved in Bansal et al. (2019b), by
including a mechanism for doing exploration of possible tactic arguments, instead of
only exploiting the top-ranked premises. In short, “explorative” premises are selected
based on an Inverse Document Frequency-style similarity measure, similar to the k-NN
weighting scheme used by Gauthier et al. (2017). Top-ranked premises and exploration
premises are interleaved into one list and sent as arguments with the tactic. If the
model produces a successful proof, the self-learning loop ensures that this proof is
used for further training, meaning that the model can explore different (and maybe
unconventional) tactic arguments.
Main results from experiments in Bansal et al. (2019a,b); Paliwal et al. (2020)
are summarized in Table 3.3, which constitutes state-of-the-art in the HOList framework.
All experiments use Breadth-First Search to traverse the proof tree (i.e., decide the next
node to evaluate), and no Hammer or automatic engine calls are interleaved with the
proof procedure. Bottom-up ASTs perform significantly worse than the other AST variations.
This indicates that information flow from parent to child node is important to capture
in AST embeddings. Furthermore, imitation+self-learning models perform better than
both strict imitation and self-learning models.
Table 3.3: State-of-the-art and main results in HOList. Results are from experiments in
Bansal et al. (2019a,b); Paliwal et al. (2020).
| Setting | Embedding | AST variant | Exploration | Result |
|---|---|---|---|---|
| Imitation | WaveNet | - | No | 32.65% |
| Imitation | GNN | Standard | No | 46.66% |
| Imitation | GNN | Bottom up | No | 41.86% |
| Imitation | GNN | Top down | No | 48.40% |
| Imitation | GNN | Sub. share | No | **49.95%** |
| Imitation + Self-learning | WaveNet | - | No | 38.9% |
| Imitation + Self-learning | GNN | Sub. share | Yes | **59.9%** |
| Self-learning | GNN | Sub. share | Yes | **56.3%** |
**3.2.3 GamePad**
Researchers at OpenAI and Berkeley introduced the GamePad framework in 2019
(Huang et al., 2019). However, there have yet to be any models performing end-to-end
theorem proving in this framework. Part of the reason for this is likely that only 1,602
theorems are available in this framework.
Unlike in other Auto-ITP experiments, Huang et al. (2019) do not attempt to
prove theorems end-to-end. Instead, they introduce three proxy metrics: position, tactic
and argument prediction. Position prediction is the task of predicting how many tactic
applications are left before the current goal/subgoal is proved. Tactic prediction is the
task of predicting the tactic used in the human-written proof. Argument prediction is the
task of predicting what arguments were used with a given tactic, in the human-written
proof.
Like with experiments in HOList, Huang et al. (2019) simplify the argument
problem by only focusing on the global context. In addition, they use a preprocessing
step that generalizes tactics. This results in only 23 tactics to predict. Furthermore,
position prediction is simplified to predicting one of three classes: (1) close (< 5 steps),
(2) medium (between 6 - 19 steps), and (3) far (> 20 steps).
Notable methods used by Huang et al. (2019) include: (1) always guessing the
most common category, (2) predicting using a straightforward Support Vector Machine
(SVM) method, and (3) using TreeLSTM and FFN. For (2), formulas are not embedded
in any feature space. Instead, metrics like goal size and the number of local assumptions
are used as features. For (3), formulas are embedded using TreeLSTM on the Coq AST
representations before an FFN makes the final prediction. Results from experiments in
Huang et al. (2019) are summarized in Table 3.4. FFN+TreeLSTM outperforms the
other models on all tasks. Results indicate that argument prediction is a difficult task,
with an accuracy of 23.91% being the best result from Huang et al. (2019).
Table 3.4: State-of-the-art and main results in GamePad. Results are from experiments
in Huang et al. (2019)
| | Position | Tactic | Argument |
|---|---|---|---|
| Most common category | 53.66% | 44.75% | < 10% |
| SVM | 57.52% | 49.45% | < 10% |
| TreeLSTM+FFN | **66.30%** | **60.55%** | **23.91%** |
**3.2.4 CoqGym**
Yang and Deng (2019) developed the Auto-ITP framework CoqGym in 2019. The
framework implements a Python API for interacting with Coq, and provides a large set
of proof data. CoqGym’s proof data comprises 70,856 human-written theorems from 123
different formalization projects in Coq. CoqGym will be explained in more detail, as it is
the primary framework used in this Master’s Thesis.
**Dataset**
The dataset is split between a train, validation, and test set. This includes proof data
belonging to both pure mathematical domains and software and hardware verification.
CoqGym also ships with synthetic proofs, generated from human-written theorems. These proofs were generated from intermediate subgoals found in the
human-written proofs. Yang and Deng (2019) extract synthetic proofs of length (i.e., the
number of tactics used in the proof) 1, 2, 3, and 4. The process of generating synthetic
theorems converts terms in the subgoals into hypotheses in the local context, before
human-written sequences of tactics are applied, followed by the auto tactic. The sequence
of human-written tactics can be of the desired length to generate fixed-size synthetic proofs.
Crucially, this process makes synthetic proofs similar to human-written proofs, meaning
they are not a replacement for reinforcement learning but rather an enhancement to
human-based imitation training.
the train, validation, and test split.
Table 3.5: The CoqGym dataset. h and s refer to human-written and synthetic proofs,
respectively. Information on the number of synthetic proofs for the different
splits is not provided by Yang and Deng (2019).
| | h | s, length 1 | s, length 2 | s, length 3 | s, length 4 |
|---|---|---|---|---|---|
| Train | 43,844 | - | - | - | - |
| Validation | 13,875 | - | - | - | - |
| Test | 13,137 | - | - | - | - |
| **Total** | **70,856** | **159,761** | **109,602** | **79,967** | **61,126** |
**SerAPI**
In order to communicate with Coq, CoqGym leverages an API called SerAPI
(Gallego Arias, 2016). SerAPI responds with s-expressions (i.e., a nested list of symbols
with an obvious tree representation), representing the Coq response for the given
input. However, CoqGym wraps the SerAPI calls in a Python class, meaning that one
does not have to directly deal with SerAPI when developing Auto-ITP models in CoqGym.
SerAPI calls are time-consuming and the main bottleneck when a model proves
theorems in an interactive mode (Yang and Deng, 2019). CoqGym sets a default timeout
for each SerAPI call to 12 minutes. If one needs a model to cover more proofs in a
shorter amount of time (e.g., when training a reinforcement learning agent), this timeout
parameter can be modified.
**Abstract Syntax Trees**
All Coq expressions in CoqGym have an associated AST representation. These representations are built using a fixed vocabulary of nonterminals. Simply put, the nonterminals
define the values that a node in the AST can take. There are 55 nonterminals
in CoqGym. They allow an unambiguous and general way to build ASTs from Coq
expressions. To build ASTs, CoqGym uses a Python library called Lark[7], which parses
s-expressions based on a provided well-defined grammar.
**Results in CoqGym**
For initial testing in CoqGym, Yang and Deng (2019) develop a deep learning model
capable of generating tactics in a non-trivial way. They call their model ASTactic. The
main idea is to generate tactics as ASTs, not predict tactics from a pre-defined set. To
do this, ASTactic leverages the tactic space defined by Coq’s context-free tactic grammar.
This grammar is briefly explained in Section 2.2.5. More details can be found in Barras
et al. (1997), for interested readers.
ASTactic embeds ASTs using TreeLSTM. The embeddings and features from
the tactic grammar are inputted to a Gated Recurrent Unit (GRU)[8]. The GRU is
responsible for building a new AST, which represents the next tactic to apply. The
hidden state of the GRU st is the central component in expanding the tactic AST. st is
updated based on st−1, and a concatenation of the current node’s symbol, the parent
node’s symbol, the production rules from the context-free tactic grammar, the goal
embedding and the weighted sum of possible tactic arguments. Arguments are weighted
using an attention mechanism, which depends on the argument and st−1. Yang and
Deng (2019) also test Coq’s Hammer CoqHammer on CoqGym’s test set, and experiment
with interleaving calls to CoqHammer with ASTactic’s proof procedure.
Another Auto-ITP model in CoqGym is TacTok (First et al., 2020). TacTok
was motivated by the fact that the not-yet-finished proof contains semantic information
about the proof procedure, which makes it useful in predicting the next tactic to apply
to the subgoals in the current node. First et al. (2020) follow the same architecture as ASTactic;
they generate tactics as ASTs, based on embeddings of context and goals, in addition
to the tactic space. However, in TacTok, the current path of tactics in the proof tree
(i.e., the unfinished proof) to the current node is also embedded and part of the GRU input.
Proverbot9001 (Sanchez-Stern et al., 2020) is another model built to do Coq
Auto-ITP. The model works by dealing with tactic selection and premise selection
separately. Formulas are embedded using Recurrent Neural Networks (RNNs). Tactics
are predicted by inputting the embedding of the local context to an FFN, resulting in
a ranking of a pre-defined set of tactics. For each possible tactic argument, a score is
computed by another FFN. This setup is similar to models in HOList (Bansal et al.,
2019a). Tactics and arguments are then given a common score by multiplying their
scores. The unique thing about this is that the arguments themselves influence whether a core
tactic should be chosen, instead of first finding the best tactic and then computing
the most relevant arguments for it.
7https://lark-parser.readthedocs.io/en/latest/
8GRUs are almost like LSTMs. The main difference is that GRUs do not have output gates.
The main results from the experiments in Yang and Deng (2019); First et al.
(2020); Sanchez-Stern et al. (2020) are shown in Table 3.6. Yang and Deng (2019)
and First et al. (2020) run their experiments on the whole dataset in CoqGym, while
Sanchez-Stern et al. (2020) only run them on one specific Coq library: the CompCert
project. All experiments used Depth-First Search to traverse the proof tree.
Table 3.6: State-of-the-art and main results in CoqGym. Results are from experiments
in Yang and Deng (2019); First et al. (2020); Sanchez-Stern et al. (2020)
| Model | Dataset | Hammer | Result |
|---|---|---|---|
| ASTactic | Full | None | 12.2% |
| TacTok | Full | None | **12.9%** |
| CoqHammer | Full | - | 24.8% |
| ASTactic | Full | CoqHammer | **30.0%** |
| CoqHammer | `CompCert` | - | 7.39% |
| ASTactic | `CompCert` | None | 4.59% |
| Proverbot9001 | `CompCert` | None | **19.36%** |
##### 3.3 Hammers
Hammers (Blanchette et al., 2016) are a way of achieving automation of ITP systems.
This means they can serve as a comparison to Auto-ITP models. Furthermore, they
can enhance the ability of Auto-ITP models by interleaving calls to the Hammer
with Auto-ITP’s automated tactic application. However, how Hammers achieve
automation is different from Auto-ITP. Therefore, they are considered a distinct
topic in this Thesis. This section will describe Hammers and provide an overview of
results from two concrete Hammers: HOL(y)Hammer (Kaliszyk and Urban, 2013)
and CoqHammer (Czajka and Kaliszyk, 2018). These are the most relevant for this
Thesis because they are the Hammers developed for HOL Light, HOL4, and Coq,
and therefore provide automation of the same ITP systems as current Auto-ITP models do.
Hammers are tools built on top of existing ITP systems, allowing the system to
prove theorems automatically. They do this by outsourcing the theorem proving job
to third-party ATP systems. Today, most ITP systems have a Hammer extension.
After the first Hammer appeared, in the form of Sledgehammer for the ITP system Isabelle (Böhme and Nipkow, 2010), the approach gained traction, and other ITP system designers equipped their own systems with corresponding tools. Hammers are particularly popular among users formalizing large proofs, as they allow
much of the task to be automated. For example, Kaliszyk and Urban (2015c) were
able to generate proofs for 40% of the theorems in the famous Mizar Mathematical
Library (see Grabowski et al. (2010) for details on the Mizar Mathematical Library,
and Bancerek et al. (2018) for its role in ITP research) fully automatically using Hammers.
An important detail is that Hammers do not have to be used in a way that achieves full automation. One can, for example, invoke the Hammer to solve particular subgoals in the proof tree, while a human is still responsible for guiding the system to those subgoals. Full automation is only achieved when the Hammer is invoked directly on the top-level goal. Note that the proof procedure carried out by a Hammer does not generate a proof tree of the kind shown in Figure 2.4.
**3.3.1 The 3-step Process**
Blanchette et al. (2016) explain the ideas that allow Hammers to work. Hammers are
made possible by a 3-step process:
1. Premise selection. Select a subset of available theorems to pass along with the goal
conjecture to the ATP systems. This is equivalent to deciding the Knowledge Base
for the ATP systems to use (see Section 2.1).
2. Logic translation. For the ATP systems to function, they need to have both the goal
conjecture and Knowledge Base in a logic they can understand. Therefore a process
of translating from the ITP system’s logic (usually a variation of Higher-Order
Logic) to the ATP system’s logic (usually a variation of First-Order Logic) is
needed.
3. Proof reconstruction. For a proof found by an ATP system to be accepted by the
ITP system, it is necessary to reconstruct the proof so that the ITP system can
check it. This is because the translation step is usually not a fully sound process.
The reconstructed proof is checked by running it through the ITP system’s kernel
(see Section 2.2.4).
These steps have to be carried out sequentially. The high-level architecture of a Hammer system is depicted in Figure 3.2; a minimal toy sketch of the pipeline is given below.
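To make the three steps concrete, here is a minimal, self-contained toy sketch of the Hammer pipeline. Everything in it is a stand-in: theorems are plain strings, "translation" is trivial, and the "ATP" is a stub, so it is not the API of any real Hammer; the point is only the control flow premise selection → translation → ATP call → reconstruction.

```python
# Toy sketch of the 3-step Hammer pipeline (premise selection, logic
# translation, proof reconstruction). All helpers are illustrative stubs.

def select_premises(conjecture, corpus, k=2):
    # Step 1 (toy): rank theorems by symbol overlap with the conjecture.
    def overlap(thm):
        return len(set(conjecture.split()) & set(thm.split()))
    return sorted(corpus, key=overlap, reverse=True)[:k]

def translate(conjecture, premises):
    # Step 2 (toy): a real Hammer maps HOL into FOL; here we just bundle them.
    return {"goal": conjecture, "axioms": premises}

def run_atp(problem):
    # Stub ATP: "succeeds" if any axiom shares a symbol with the goal.
    goal_syms = set(problem["goal"].split())
    used = [ax for ax in problem["axioms"] if goal_syms & set(ax.split())]
    return used or None

def reconstruct(atp_proof):
    # Step 3 (toy): turn the ATP output into a tactic script the ITP can check.
    return "apply (" + "; ".join(atp_proof) + ")."

corpus = ["add_comm : a + b = b + a", "mul_comm : a * b = b * a"]
proof = run_atp(translate("a + b = b + a", select_premises("a + b = b + a", corpus)))
print(reconstruct(proof) if proof else "Hammer failed")
```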
When evaluating Hammers, a human-chronological approach is generally used (Blanchette et al., 2016). This refers to a setting where the Hammer tries to build the corpus of proof data in the same order humans built it. In other words, at any given time, when the Hammer is proving a conjecture c, the only theorems available to it are the ones that were also available to humans when they proved c. In this way, the Hammer emulates the actual corpus construction, which guarantees the correctness of the evaluation by construction. It also allows the Hammer to prove a corpus in a “push-button” mode. This is different from a typical machine learning setting, where the model is usually indifferent to the order in which it learns from examples. As a consequence, results from a Hammer cannot be directly compared to the results of an Auto-ITP system.
In general, it is the premise selection step, together with the number of theorem provers and the resources provided (e.g., time constraints, CPU power, etc.), that determines the success of a Hammer, not the translation or reconstruction steps. Translation and reconstruction are necessary steps with little room for optimization. Most Hammer research has therefore focused on premise selection. Premise selection is also largely agnostic to the underlying systems and can therefore be researched independently of a specific Hammer system.
Figure 3.2: The high-level architecture of a Hammer system. The Hammer performs premise selection over a large corpus of theorems (Step 1), translates the Knowledge Base (KB) and conjecture into the logic of the ATP systems (Step 2), and reconstructs any proof found so that the ITP system can check it (Step 3). KB denotes Knowledge Base.
**3.3.2 Premise Selection**
Premise selection is the process of choosing a subset of available theorems to include as
background theory in a proof procedure. Alama et al. (2014) define it in the following
way:
**Premise selection Given a large number of premises P and a new conjecture c, predict**
_those premises from P that are likely to be useful for automatically constructing a_
_proof of c (Alama et al., 2014)._
Hammers do this step outside of the ATP systems. However, it is common for ATP
systems to have their own internal premise selector, meaning that premise selection is
performed twice in a Hammer: externally to the ATP systems and internally within the ATP systems.
Several techniques have been applied to the problem, including both non-learning and learning-based methods. In particular, k-NN, Naive Bayes, kernel-based methods, and deep learning methods have been applied to premise selection[9] (Alama et al., 2014; Kaliszyk et al., 2017; Wang et al., 2017). So far, only Naive Bayes and k-NN have been implemented as part of a full-fledged Hammer: they are easy to implement and, crucially, scale well in a full-fledged Hammer setting (Kaliszyk and Urban, 2014). A popular non-learning method is the SInE method (Hoder and Voronkov, 2011), which is often used internally in ATP systems. Kühlwein et al. (2012) show, in a comparison of different premise selection methods, that non-learning methods perform significantly worse than learning-based methods when tested outside of ATP systems. A minimal k-NN sketch is given below.
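As an illustration of the learning-based approach, the following is a minimal k-NN premise selector sketch. Theorems are represented by bag-of-symbols feature vectors (a crude stand-in for the syntactic features used in real Hammers), and premises are ranked by how often they were used in the proofs of the k nearest previously proved theorems. This is an assumption-laden toy, not the implementation used in HOL(y)Hammer or CoqHammer.

```python
import numpy as np

def featurize(statement, vocab):
    # Bag-of-symbols features; real Hammers use richer syntactic features.
    v = np.zeros(len(vocab))
    for sym in statement.split():
        if sym in vocab:
            v[vocab[sym]] += 1.0
    return v

def knn_premise_selection(goal, proved, vocab, k=2, n_premises=3):
    """proved: list of (statement, premises_used_in_its_proof) pairs."""
    g = featurize(goal, vocab)
    dists = [np.linalg.norm(g - featurize(stmt, vocab)) for stmt, _ in proved]
    neighbours = np.argsort(dists)[:k]
    # Premises used by the nearest neighbours are deemed likely useful.
    scores = {}
    for i in neighbours:
        for p in proved[i][1]:
            scores[p] = scores.get(p, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)[:n_premises]

vocab = {s: i for i, s in enumerate("+ * = a b c 0".split())}
proved = [("a + b = b + a", ["add_comm"]), ("a * b = b * a", ["mul_comm"]),
          ("a + 0 = a", ["add_zero"])]
print(knn_premise_selection("b + c = c + b", proved, vocab, k=2))
```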
**3.3.3 HOL(y)Hammer and CoqHammer**
Kaliszyk and Urban (2014) were the first to experiment with a Hammer for HOL Light.
This resulted in the Hammer system HOL(y)Hammer[10]. Gauthier and Kaliszyk (2015)
ported the same Hammer to make it work for HOL4. In other words, HOL(y)Hammer is
the Hammer used by both HOL Light and HOL4.
For HOL(y)Hammer's premise selection method to work, features need to be extracted from the theorems in the proof corpus. So far in Hammer research, only syntactic features have been used.
Kaliszyk and Urban (2013) test HOL(y)Hammer on one of HOL Light’s proof
libraries called Flyspeck, and Gauthier and Kaliszyk (2015) test it on HOL4’s Standard
Library (SL). The Hammer is always tested in a human-chronological way, and the
premise selector is only trained on top-level human-written proofs. Kaliszyk and Urban
(2013) test HOL(y)Hammer using Naive Bayes and k-NN as the premise selector.
Gauthier and Kaliszyk (2015) only test it using k-NN.
Czajka and Kaliszyk (2018) develop the Hammer system CoqHammer for Coq.
Most parts of CoqHammer function in the same way as HOL(y)Hammer. Of course, because of the different logical foundations, the details of the translation and reconstruction steps must be adapted accordingly. Premise selection in CoqHammer has also
only been implemented using k-NN and Naive Bayes.
State-of-the-art results for both HOL(y)Hammer and CoqHammer are shown in Table 3.7.
9Not all of these methods are explained in the background theory, as not all are highly relevant for the
Thesis.
10https://www.thibaultgauthier.fr/holyhammer.html
E[11] and Vampire[12] are ATP systems. Note that E-BliStr denotes E run with a mechanism that allows it to predict which inference strategy (e.g., resolution, tableaux, superposition calculus, etc.) is best suited for solving the given conjecture.
Table 3.7: State-of-the-art and main results for HOL(y)Hammer and CoqHammer. Results are from experiments in Kaliszyk and Urban (2013); Gauthier and Kaliszyk (2015); Czajka and Kaliszyk (2018). SL denotes the ITP system's Standard Library.

| Hammer | Premise Selection | #Premises | Dataset | ATP | Result |
| --- | --- | --- | --- | --- | --- |
| HOL(y)Hammer | k-NN | 128 | `Flyspeck` | E | 31.43% |
| HOL(y)Hammer | k-NN | 128 | `Flyspeck` | E-BliStr | **34.88%** |
| HOL(y)Hammer | Naive Bayes | 164 | `Flyspeck` | E | 24.17% |
| HOL(y)Hammer | k-NN | 128 | HOL4's SL | E-BliStr | **44.45%** |
| CoqHammer | k-NN | 1,024 | Coq's SL | Vampire | **28.82%** |
| CoqHammer | k-NN | 1,024 | Coq's SL | E-BliStr | 25.59% |
| CoqHammer | Naive Bayes | 256 | Coq's SL | E-BliStr | 17.50% |
##### 3.4 Other Applications of Machine Learning in Formal Reasoning and Mathematics
Although this Thesis is mainly concerned with Auto-ITP, numerous works address
different uses of machine learning applied to formal reasoning and mathematics. This
section will provide an overview of selected literature review findings that do not fit neatly
into Auto-ITP or Hammers, but are still interesting as they can inspire new approaches
in Auto-ITP research. Indeed, as will become apparent in Section 4.1, some of the ideas
in the following subsections provide natural starting points for this Thesis’ research in
CoqGym.
**3.4.1 Transformer Models Applied to Mathematics**
A handful of recent papers address the idea of applying Natural Language Processing
(NLP) methods in both formal and informal mathematics and can inspire potential
Auto-ITP models. Three relevant papers are discussed here, all revolving around the
Transformer model (Vaswani et al., 2017).
Rabe et al. (2020) introduce a so-called skip-tree task. The idea is to pre-train a Transformer model on a self-supervised objective similar to that of language models such as BERT (see Section 2.3.12). This is achieved by having the model predict masked sub-expressions in the AST representation of logical expressions, based on the unmasked parts of the expression.
11https://wwwlehre.dhbw-stuttgart.de/ sschulz/E/E.html
12http://www.vprover.org/
Rabe et al. (2020) use expressions from the HOList dataset for training, meaning that the
expressions take the form of Higher-Order Logic. The pre-trained model is then tested
on a series of mathematical tasks, with no fine-tuning to specialize it on each particular
task. Although it is hard to argue how strong the results are – there are no benchmarks
to compare them to – they seem to indicate solid formal reasoning capabilities. For
instance, the model achieves an accuracy of 40.86% when tasked with predicting missing
local context hypotheses, and as high as 96.23% when the model predicts masked term
types (i.e., what Higher-Order type a given term in the expression is).
Lample and Charton (2020) train a Transformer network to solve integration problems and ordinary differential equations (ODEs). However, unlike all related work discussed so far, Lample and Charton (2020) train on mathematics outside of any formal framework. Instead, they focus on much more familiar textbook-style syntactical expressions. The model does not train on the skip-tree task presented by Rabe et al. (2020), but rather on mathematical expressions written in infix notation. The Transformer decoder outputs the integral of the expression or the solution to the ODE. The model is not pre-trained but still shows strong results. For instance, it correctly solves 81.2% of the second-order ODEs in the test set.
OpenAI developed the theorem proving model GPT-f for the ITP system MetaMath (Polu and Sutskever, 2020). MetaMath (Megill and Wheeler, 2019) is an ITP system with a unique style of interaction. Instead of relying on high-level tactics, MetaMath uses a substitution-based proof procedure: proof search is driven forward only by substituting terms in the current goal using previously proved theorems[13]. GPT-f is a Transformer model that is first pre-trained and then fine-tuned for MetaMath theorem proving. Pre-training is done on data collected from GitHub[14], Math StackExchange[15] and arXiv Math[16].
**3.4.2 Synthesizing Theorems**
Synthesizing theorems is the task of teaching machines to generate synthetic theorems automatically (Wang and Deng, 2020). This topic tackles data scarcity in the proof domain. It is related to self-learning Auto-ITP models, as machine-generated theorems can be used in a feedback loop for such models. This is similar to how machine-generated proofs are used in HOList's self-learning models (Bansal et al., 2019b). Generating synthetic theorems is an extensive topic in its own right; only the example of Wang and Deng (2020) is mentioned here. They propose a setup whose goal is to generate human-like theorems in the MetaMath system (mentioned above).
13Note that, because MetaMath operates with such a distinct theorem proving setup, it is considered
different from Auto-ITP in this Thesis.
14github.com
15https://math.stackexchange.com/
16https://arxiv.org/archive/math
The setup is based on loss functions used to teach models to generate new theorems. The loss function measures how “different” a machine-generated theorem is from human-written theorems. An adversarial loss, produced by a network trained to distinguish human-written from randomly generated theorems, is proposed, in addition to a cross-entropy loss output by a language model trained on human-written proofs. In this way, models can be trained to gradually generate more and more human-like theorems.
**3.4.3 Tactic Application in Latent Space**
Another interesting, but still very young, line of research is to predict the outcome of tactic applications using machine learning. Lee et al. (2020) provide some initial results. A model is trained to predict the resulting goal after a series of rewrite tactics is applied to the original goal. Although the results are very much early stage, it is still a fascinating approach to tactic application, in which the underlying ITP system is taken completely out of the loop.
**3.4.4 Evolutionary Algorithms**
A completely different approach to Auto-ITP is to use evolutionary algorithms for tactic
prediction. Some initial results in this line of research have been reported by Nawaz et al. (2020) and Yang et al. (2016). So far, no experiments have been conducted where end-to-end
theorem proving is done on large corpora of proof data. This Thesis chooses not to focus
on evolutionary algorithms simply because of time constraints, but it is an interesting
avenue worth mentioning.
**3.4.5 Internal Guidance**
Internal guidance is an approach to ATP, where machine learning models guide the
internal inference process. It has mainly been studied for analytic tableaux (see Section
2.1.2) style theorem provers (Urban et al., 2011; Kaliszyk and Urban, 2015a). The idea
is to use machine learning models to predict which branch of the tableau to expand
next and select a relevant subset of the Knowledge Base (i.e., premise selection). This
speeds up the inference process by only considering the most promising branches and
relevant theorems (Urban et al., 2011). Loos et al. (2017) deploy such models in the ATP system E. Using deep learning-based internal guidance, they prove 7.36% new theorems in the famous Mizar Mathematical Library (see Grabowski et al. (2010) for details on the Mizar Mathematical Library).
This Thesis does not focus on internal guidance. However, it is an interesting topic to compare Auto-ITP against and is therefore mentioned. While both internal guidance and Auto-ITP revolve around machine learning applied to formal reasoning, they are, in some sense, on opposite ends of the spectrum. Internal guidance models prove theorems by guiding low-level inference processes, far from how humans reason,
while Auto-ITP, on the other hand, operates on high-level tactics, much closer to how humans reason about mathematics (Yang and Deng, 2019).
**3.4.6 Autoformalization**
Szegedy (2020) outlines a possible path forward for developing better machine learning
models in the context of formal reasoning and, more broadly, artificial general intelligence.
He emphasizes the task of autoformalization as a critical ingredient for reaching artificial
intelligence models capable of more generalized reasoning. The main idea is that the
agent needs to take informal data as input, formalize this data in a way that is consistent
with some logical framework, and then reason based on the formal data. Auto-ITP is a
line of research targeting the latter – reasoning over formal data.
Szegedy (2020) argues that in order to perform autoformalization effectively, the model needs strong NLP and computer vision capabilities. While it is not the goal of this Thesis to do autoformalization, the vision outlined by Szegedy (2020) puts the relatively niche task of Auto-ITP into an interesting broader context of generalized reasoning.
## Chapter 4
# Motivation, Agent Design and Architectures
With both background theory and relevant work having been presented, the remainder of this Master's Thesis focuses on new experiments in CoqGym. This chapter begins with a section about the motivation for various decisions made when scoping experiments and choosing deep learning techniques (Section 4.1). Then, a section is dedicated to the tactic group proxy metric experiment designed for CoqGym (Section 4.2). This is followed by a section about the new Interactive Theorem Proving (ITP) agent developed in this Thesis (Section 4.3). Finally, the last section describes the overall deep learning model architectures (Section 4.4). Some statistics from the CoqGym dataset are included in various parts of this chapter to help explain the design decisions.
##### 4.1 Motivation
**4.1.1 Choosing an Auto-ITP Framework**
CoqGym has a larger dataset of human-written proofs than other frameworks, with ~42k more theorems than the next-largest dataset (an overview of dataset sizes is shown in Table 3.1). Having lots of training data is hugely important when training deep learning models, and this was therefore emphasized when choosing an Auto-ITP framework. CoqGym also provides a large number of synthetic proofs (explained in Section 3.2.4); no other framework does this.
Another consideration was the diversity of the research groups using each Auto-ITP framework. CoqGym is the only framework used as a benchmark by researchers outside the group that introduced it. In the case of CoqGym, both TacTok (First et al., 2020) and Proverbot9001 (Sanchez-Stern et al., 2020) compare themselves to ASTactic (Yang and Deng, 2019). This is attractive, as it indicates that other research groups find CoqGym to be a good benchmark.
The adoption of the underlying ITP system was also considered. As inexperience with ITP systems was a concern in the early phase of the Thesis work, having
a mature community of developers and users supporting the underlying ITP system
was important. Coq is one of the most widely adopted ITP systems. Because HOL Light, HOL4, and Coq are all open source projects hosted on GitHub, one can use repository statistics as an indicator of each system's adoption. Table 4.1 summarizes some main statistics. This is, of course, only a heuristic and does not in any way indicate that one system is superior to the others.
Table 4.1: GitHub repository statistics for HOL Light, HOL4, and Coq. Extracted April 30, 2021.

| System | Contributors | Stars | Forks |
| --- | --- | --- | --- |
| HOL Light | 8 | 276 | 54 |
| HOL4 | 56 | 422 | 80 |
| Coq | 194 | 3,287 | 498 |
**4.1.2 Usefulness of Proxy Metrics**
It became clear early on when experimenting in CoqGym that the overhead of training a
model to do end-to-end theorem proving is significant. This means that prototyping a
model becomes cumbersome if the goal is end-to-end proving right away. A possible
way to overcome this is by first using proxy metric experiments before moving on to
end-to-end theorem proving.
Huang et al. (2019) already use proxy tasks in the GamePad framework. The experiments in Huang et al. (2019) are more straightforward than end-to-end theorem proving while, as they argue, still bearing a clear similarity to the task of predicting tactic applications. This makes them helpful in understanding a model's ability to do formal reasoning. In other words, a model is unlikely to perform poorly on the proxy tasks but well on end-to-end theorem proving, or vice versa. This idea is followed here. A proxy metric experiment, based on predicting tactic _groups_, is introduced, with the goal that this will allow easier and faster prototyping of Auto-ITP models.
**4.1.3 Machine Learning Interpretation of ITP Systems**
The Auto-ITP models described in Section 3.2 interpret the ITP theorem proving task in different ways. TacticToe and HOList models turn the task into classification problems (Bansal et al., 2019a; Gauthier et al., 2020), where one model focuses on core tactics and another on arguments. This is similar to Proverbot9001 (Sanchez-Stern et al., 2020). Together, the models form a theorem proving agent. However, while core tactic prediction is a multi-class classification problem, argument prediction is viewed as a series of binary classification tasks. Moreover, the local context is typically discarded. Yang and Deng (2019) propose a different setup, involving building tactics from Coq's context-free tactic grammar. Core tactic prediction and argument prediction are intertwined in this
setup. While this is a flexible setup, allowing the models to express the full Coq tactic space, it is also more difficult to interpret as a traditional machine learning problem. Having a straightforward machine learning interpretation of the ITP theorem proving process is desirable, as it reduces the ITP-specific knowledge needed when researching Auto-ITP. This is the motivation for designing a new theorem proving agent in this Thesis.
**4.1.4 Choosing Machine Learning Techniques**
As shown by Paliwal et al. (2020), using Graph Neural Networks (GNNs) for embedding
HOL Light expressions increases performance in HOList. Paliwal et al. (2020) argue that
this is because such embedding techniques capture more of the semantic information
contained in the expressions. This is also partly the reason why TreeLSTM is chosen for
ASTactic (Yang and Deng, 2019). The success of GNNs in HOList serves as a significant
motivation for trying the same approach in CoqGym.
With an easy-to-use proxy metric at hand, several GNN variations can be tested
relatively quickly. Three message passing implementations will be used: (1) Multilayer
Perceptron (MLP)[1], (2) Graph Convolutional Networks (GCN) (Kipf and Welling, 2017),
and (3) Simple Graph Convolutions (SGC) (Wu et al., 2019). See Section 2.3.11 for an
introduction to these GNN techniques.
(1) is directly inspired by the implementation in Paliwal et al. (2020), where MLPs are used for message passing. It also serves as a point of comparison for the more sophisticated convolution-based techniques.
(2) is motivated by GCNs success on several graph tasks (Kipf and Welling, 2017).
(3) is motivated by the fast training allowed by SGC (Wu et al., 2019) and its
strong performance on several graph tasks.
For end-to-end theorem proving, efforts are mainly focused on one architecture to make
experiments more manageable. The DGCNN architecture (Zhang et al., 2018) (explained
in Section 2.3.11) is a universal end-to-end graph classification architecture showing
strong performance on several graph tasks and therefore serves as the basis for the
implementation.
In a similar vein, the use of Transformers has shown promising results on several mathematical tasks (Rabe et al., 2020; Lample and Charton, 2020; Polu and
Sutskever, 2020) (explained in Section 3.4.1). However, they have yet to be tested on
the Auto-ITP task. Because of time constraints, developing a tailored Transformer architecture and performing pre-training, like the skip-tree task (Rabe et al., 2020) or the GPT-f pre-training (Polu and Sutskever, 2020), is left for future work. Instead, the powerful BERT (Devlin et al., 2018) architecture will be used off-the-shelf. BERT
has shown state-of-the-art results on several NLP tasks (Devlin et al., 2018), and it is
1An MLP is essentially the same as an FFN. MLP is used here to avoid confusion with other models.
interesting to see how this model performs when natural language is substituted for
formal expressions. To also see if there is any transferability between NLP pre-training
and the Auto-ITP task, BERT will be tested both with and without pre-trained weights.
These weights are provided by Devlin et al. (2018) after BERT has been trained on the
NLP-specific tasks explained in Section 2.3.12.
Bansal et al. (2019b) hypothesize that it could potentially be easier for a machine learning model to teach itself to prove theorems rather than rely on imitation
training. It might be the case that there is so much variation in the way humans prove
theorems in Coq that it is difficult for the model to pick up on clear correlations. Also,
Wang and Deng (2020) point out that the scarcity of labeled examples is a bottleneck
in the formal reasoning domain. This motivates CoqGym experiments involving
self-learning. A reinforcement learning agent is built with the idea that it can find its
own “unique” style of proving theorems based on trial-and-error. The agent will also
have a chance to learn from both successful and failed proof attempts. In other words,
CoqGym’s training data is used more efficiently, combating data scarcity.
In addition, Huang et al. (2019) have argued that theorem proving through an
ITP system is similar to a game, in which the agent has a space of actions it can perform
in any given state, where a subset of actions might lead to a better state. Reinforcement
learning agents have shown strong performance on classical games, such as chess (Silver
et al., 2018), making it a natural approach to also test in the context of Auto-ITP.
It turns out that running even a single theorem proving “game” in CoqGym is relatively expensive. Each action has to communicate with Coq via SerAPI (see Section 3.2.4), which is a time-consuming process. As explained by Yang and Deng (2019), ASTactic is tested on a CPU rather than a GPU because the bottleneck is not the neural network component of the agent but the SerAPI calls. This means that developing a reinforcement learning agent dependent on running a large number of theorem proving simulations is not ideal. Therefore, the famous Monte Carlo algorithm (Raychaudhuri, 2008) will not be used, but rather Q-learning (Russell and Norvig, 2010). Moreover, because both the proof space and the tactic space are large, a deep Q-learning (Mnih et al., 2015) approach will be used, allowing the model to approximate the Q-function rather than encoding it explicitly (which would likely be infeasible).
##### 4.2 Proxy Metric: Tactic Groups
The main motivation for having a proxy metric experiment is to allow faster and easier
prototyping of models in CoqGym, with the idea being that promising models can
eventually be used as full-fledged Auto-ITP models. A proxy metric would therefore
ideally possess the following two characteristics:
1. The proxy metric should be easy and fast to use.
2. The proxy metric should be indicative of end-to-end theorem proving performance.
The proxy metric proposed here is based on individual proof steps extracted from the
train and validation set in CoqGym. Yang and Deng (2019) provide a pre-implemented
method that extracts such proof steps from human-written proofs. Extending this to
also include synthetic proofs results in the proof step dataset shown in Table 4.2.
Table 4.2: Proof steps in CoqGym for both human-written and synthetic proofs.

| | Human | Synthetic | Total |
| --- | --- | --- | --- |
| Train | 121,644 | 174,076 | 295,720 |
| Validation | 68,180 | 113,048 | 181,228 |
The first simplification made for the proxy metric is to only consider the core
tactics. Instead of predicting full tactic applications (i.e., tactic + arguments), the model
only predicts the tactic without arguments (i.e., the core tactic). There are 49 core Coq
tactics (see Section 2.2.5). Further simplification is made by grouping similar tactics
together and having the model only predict the tactic group.
Tactic grouping has two advantages. The first is that it balances the dataset. Plotting the occurrence of each tactic in the proof step datasets reveals that some tactics occur far more often than others, as shown in Figure 4.1. Unbalanced datasets are often undesirable for machine learning models, and the groups are designed to “even out” the dataset, making it more balanced.
The second is that it focuses on the overall proof strategy. Usually, more than
just one useful tactic can be applied in a given proof state. For instance, when the proof
state is close to being proven, it often suffices to apply one of the tactics corresponding
to Coq’s internal small-scale inference engines (see Section 2.2.3). Which exact one is
not necessarily crucial, as similar tactics solve many of the same subgoals. The grouping
tries to capture the fact that it is not necessarily essential to predict which exact tactic
to apply, but instead what type of tactic. Huang et al. (2019) also point out that many
tactics have strong similarities.
The groups were designed based on the Coq manual (Barras et al., 1997), as well as introductory resources on Coq[2][3]. Table 4.3 details the exact grouping, as well as the distribution of each group across the human-written proof steps; a small lookup-style sketch of the grouping follows the table.
2https://www.cs.cornell.edu/courses/cs3110/2017fa/a5/coq-tactics-cheatsheet.html
3https://coq.inria.fr/refman/proof-engine/tactics.html
Figure 4.1: Frequency (%) of core tactics in the proof step datasets (train/validation, human/synthetic). The least common tactics are aggregated and put under the category OTHER.
Table 4.3: The tactic grouping.

| Group | Frequency | Members |
| --- | --- | --- |
| Easy goals | Train: 24.86% / Validation: 21.32% | `reflexivity, f_equal, symmetry, assumption, trivial, easy, auto, exact, discriminate, constructor, contradiction, intuition, omega, eauto, tauto, contradict, ring, field` |
| Transformations | Train: 31.22% / Validation: 33.66% | `intro, intros, subst, simpl, unfold, left, right` |
| Apply/Rewrite | Train: 30.56% / Validation: 31.98% | `apply, rewrite` |
| Goal break up/Other | Train: 13.36% / Validation: 13.04% | `split, destruct, inversion, inversion_clear, induction, elim, case, generalize, idtac, hnf, exists, red, congruence, specialize, clear, injection, exfalso, cbv, lia, cbn, revert` |
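As a concrete illustration, the grouping in Table 4.3 can be implemented as a simple lookup from core tactic to group label. The sketch below is illustrative only: the group membership is copied from Table 4.3, and the fallback behaviour for unlisted tactics is an assumption made here, not something specified in the Thesis.

```python
# Minimal sketch: map a core Coq tactic name to its tactic group, following
# Table 4.3. Group indices double as class labels for the 4-way proxy task.

TACTIC_GROUPS = {
    "Easy goals": ["reflexivity", "f_equal", "symmetry", "assumption", "trivial",
                   "easy", "auto", "exact", "discriminate", "constructor",
                   "contradiction", "intuition", "omega", "eauto", "tauto",
                   "contradict", "ring", "field"],
    "Transformations": ["intro", "intros", "subst", "simpl", "unfold", "left", "right"],
    "Apply/Rewrite": ["apply", "rewrite"],
    "Goal break up/Other": ["split", "destruct", "inversion", "inversion_clear",
                            "induction", "elim", "case", "generalize", "idtac",
                            "hnf", "exists", "red", "congruence", "specialize",
                            "clear", "injection", "exfalso", "cbv", "lia", "cbn",
                            "revert"],
}

# Invert the table into a tactic -> (label index, group name) lookup.
TACTIC_TO_LABEL = {t: (i, g) for i, (g, ts) in enumerate(TACTIC_GROUPS.items())
                   for t in ts}

def group_of(core_tactic):
    # Unlisted tactics fall back to the catch-all group (an assumption here).
    return TACTIC_TO_LABEL.get(core_tactic, (3, "Goal break up/Other"))

print(group_of("rewrite"))   # (2, 'Apply/Rewrite')
print(group_of("intros"))    # (1, 'Transformations')
```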
##### 4.3 Agent Design
In order to have an agent perform end-to-end theorem proving, one has to define the
output of the models contained in the agent. This Thesis sticks with the idea that tactic
application can be viewed as classification problems. However, ranking each argument
independently (i.e., as a series of binary classification tasks) (Bansal et al., 2019a) will not be followed, for two reasons:
1. Ranking each argument independently in every proof state is an expensive process,
as noted by Paliwal et al. (2020).
2. When training the binary argument classification model, it will almost never be exposed to examples where the argument currently being evaluated was actually used. Therefore, there is a danger that the model will converge towards always assigning arguments a score of zero.
Figure 4.2 illustrates the latter point for CoqGym. It plots how many arguments tactics use from either context. Notice that most tactics use zero arguments. Furthermore, local context arguments are rarer than global context arguments.
Figure 4.2: Frequency (%) of global and local argument occurrence for the training split of the proof step datasets. GC refers to the global context and LC to the local context.
This Thesis instead considers tactic application as three distinct multi-class classification problems, each handled by a dedicated classification model:

- Cτ : Predict which of the 49 available core tactics is the most likely to be used on a given subgoal. This model only takes the current subgoal as input. The output is a probability distribution over the 49 tactics P{τ1, τ2, ..., τ49}.

- CLC: Predict which of the n first available hypotheses in the local context is the most likely to be used on a given subgoal. This model takes the current subgoal and the n first local hypotheses as input. The output is a probability distribution over the n first local hypotheses P{h1, h2, ..., hn}.
- CGC: Predict which of the m available theorems from the global context is the most likely to be used on a given subgoal. This model takes the current subgoal and the m available theorems as input. The output is a probability distribution over the m theorems P{t1, t2, ..., tm}.
When training CGC, examples not containing a theorem as an argument will be filtered
out. The same will be the case for CLC for local hypotheses. The models will, therefore,
always have a positive example to learn from. n and m will affect the complexity of
the model and the number of examples in the filtered datasets, and therefore also time
and memory consumption. The higher these values are, the more tactic applications
are available to the agent. In other words, there is a tradeoff between how expensive
training is and the expressivity of the models.
A tactic application T is constructed in the simplest way possible. Whenever Cτ suggests a tactic that requires a theorem from the global context, the top-ranked CGC theorem is used as the tactic argument. The same holds for tactics that require a local context argument; in this case, CLC is used to select an appropriate argument. The resulting agent is depicted in Figure 4.3, and a minimal sketch of the tactic-building step follows the figure. Note that, as explained in Section 2.2.2, even though ITP local context arguments can be both direct references to terms contained in subgoals and local hypotheses, the agent can still use local context arguments successfully, assuming it modifies the local context so that subgoal terms are put among the local hypotheses. In Coq this can be done with, for example, the intros tactic (see Section 2.2.5). The “tactic building” module is independent of the classifiers, meaning it can be tailored to specific ITP systems. In this Thesis, Table 2.1 is used to ensure the agent only outputs valid tactic applications.
Figure 4.3: The end-to-end theorem proving agent. GC refers to the global context and LC to the local context; the agent builds a tactic from the classifiers' outputs and sends it to the CoqGym API.
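To make the agent's composition concrete, the following is a minimal sketch of the tactic-building step, assuming the three classifiers are available as functions returning probability distributions. The `NEEDS_*` sets, the classifier interfaces, and the stub classifiers in the usage example are hypothetical stand-ins, not the Thesis' actual implementation (which derives argument requirements from Table 2.1).

```python
import numpy as np

# Hypothetical sets of tactics that require an argument from each context.
NEEDS_GLOBAL_ARG = {"apply", "rewrite"}
NEEDS_LOCAL_ARG = {"destruct", "induction", "elim"}

def build_tactic(goal, local_hyps, global_thms, c_tau, c_lc, c_gc):
    """Compose one tactic application from the three classifiers' outputs.

    c_tau(goal)             -> (probs over core tactics, tactic names)
    c_lc(goal, local_hyps)  -> probs over the n first local hypotheses
    c_gc(goal, global_thms) -> probs over the m global theorems
    """
    tactic_probs, tactic_names = c_tau(goal)
    tactic = tactic_names[int(np.argmax(tactic_probs))]

    if tactic in NEEDS_GLOBAL_ARG and global_thms:
        arg = global_thms[int(np.argmax(c_gc(goal, global_thms)))]
        return f"{tactic} {arg}."
    if tactic in NEEDS_LOCAL_ARG and local_hyps:
        arg = local_hyps[int(np.argmax(c_lc(goal, local_hyps)))]
        return f"{tactic} {arg}."
    return f"{tactic}."

# Toy usage with stub classifiers.
stub_tau = lambda g: (np.array([0.1, 0.7, 0.2]), ["intros", "apply", "split"])
stub_gc = lambda g, thms: np.ones(len(thms)) / len(thms)
stub_lc = lambda g, hyps: np.ones(len(hyps)) / len(hyps)
print(build_tactic("forall n, n + 0 = n", ["H"], ["plus_n_O"],
                   stub_tau, stub_lc, stub_gc))   # "apply plus_n_O."
```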
##### 4.4 Designing Architectures
This section will explain the model architectures. This means explaining different
architectures for Cτ, CLC and CGC. Each architecture is given a name for easier
reference. Hyperparameters and other concrete configurations are not included in this
section. These are instead described in Section 5.1 when a concrete plan for each
experiment is laid out.
All architectures follow the same overall implementation. Each classification model implements an embedding network E and a prediction network P. In addition, CLC and CGC concatenate the embeddings corresponding to the goal and the arguments. This concatenation is then padded if there are fewer arguments than the set values n (for CLC) and m (for CGC). The resulting overall architecture is shown in Figure 4.4. Note that each classifier implements a separate embedding network.
Figure 4.4: The overall end-to-end theorem proving architecture. e refers to an embedding vector, E to an embedding network, and P to a prediction network. GC refers to the global context and LC to the local context.
**4.4.1 GAST – Graph Convolutional Network-based Architecture**
GAST is a GCN-based architecture, inspired by Paliwal et al. (2020). The main idea is
to use message passing on the nodes of the Abstract Syntax Tree (AST) representation
of logical expressions. The resulting embedding is fed into the predictive model. In other
words, GAST is a graph classification model.
A preprocessing step is needed in which CoqGym's Lark ASTs (see Section 3.2.4) are converted to matrix representations. This is done by a simple post-order traversal of the AST, during which a sparse matrix X containing the one-hot encoded nodes is built, as well as the adjacency matrix A. Crucially, the one-hot encoding is possible because CoqGym ASTs are built from a finite set of 55 abstract symbols – CoqGym's “nonterminals” (explained in Section 3.2.4). A minimal sketch of this preprocessing step is given below.
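The following is a minimal sketch of this preprocessing step, assuming the AST is given as nested `(symbol, children)` tuples and that `NONTERMINALS` lists the abstract symbols; both are simplifying assumptions, not CoqGym's actual Lark data structures (and the symbol list is truncated to keep the example short).

```python
import numpy as np

# Assumed (truncated) vocabulary of abstract symbols; CoqGym has 55 nonterminals.
NONTERMINALS = ["prod", "name", "app", "rel", "inductive", "int"]
SYM2IDX = {s: i for i, s in enumerate(NONTERMINALS)}

def ast_to_matrices(ast):
    """Post-order traversal of a (symbol, children) AST.

    Returns X (one-hot node features, num_nodes x num_symbols) and
    A (symmetric adjacency matrix, num_nodes x num_nodes).
    """
    nodes, edges = [], []

    def visit(node):
        symbol, children = node
        child_ids = [visit(c) for c in children]   # post-order: children first
        node_id = len(nodes)
        nodes.append(symbol)
        edges.extend((node_id, c) for c in child_ids)
        return node_id

    visit(ast)
    X = np.zeros((len(nodes), len(NONTERMINALS)))
    for i, sym in enumerate(nodes):
        X[i, SYM2IDX[sym]] = 1.0
    A = np.zeros((len(nodes), len(nodes)))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    return X, A

# Toy AST: prod(name, app(rel, int))
toy_ast = ("prod", [("name", []), ("app", [("rel", []), ("int", [])])])
X, A = ast_to_matrices(toy_ast)
print(X.shape, int(A.sum()) // 2)   # (5, 6) node features and 4 edges
```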
Message passing can take many forms. In the tactic group experiments, a few variations are tested; a simple linear layer is used in these experiments to make predictions, and a mean operation is used as the readout function. In the end-to-end experiments, the universal graph classification architecture DGCNN from Zhang et al. (2018) is used (see Section 2.3.11 for details on DGCNN). This means that in the case of end-to-end theorem proving, the GAST prediction network is a convolutional network with a dense hidden layer, and the SortPool operator from Zhang et al. (2018) is used as the readout. Figure 4.5 depicts the overall model architecture of GAST; a minimal sketch of one message passing layer with mean readout is given after the figure.
Figure 4.5: The GAST architecture (preprocessing, graph embedding via message passing and readout, and the prediction network). W denotes the node embedding obtained after message passing.
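As an illustration of the graph-embedding step, here is a minimal, NumPy-only sketch of one GCN-style propagation step, roughly in the spirit of Kipf and Welling (2017), followed by a mean readout and a linear prediction layer. The random weights and tiny graph are placeholders; this is not the GAST implementation itself.

```python
import numpy as np

def gcn_layer(X, A, W):
    # One GCN propagation step: H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W).
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)         # ReLU

def gast_forward(X, A, W_msg, W_out):
    H = gcn_layer(X, A, W_msg)      # message passing (1 hop)
    g = H.mean(axis=0)              # mean readout -> graph embedding
    logits = g @ W_out              # linear prediction network
    return np.exp(logits) / np.exp(logits).sum()   # softmax over 4 tactic groups

rng = np.random.default_rng(0)
X = np.eye(5, 6)                    # 5 one-hot nodes over 6 symbols (toy)
A = np.zeros((5, 5)); A[0, 1] = A[1, 0] = A[1, 2] = A[2, 1] = 1.0
probs = gast_forward(X, A, rng.normal(size=(6, 8)), rng.normal(size=(8, 4)))
print(probs.round(3), probs.sum())  # a distribution over the 4 tactic groups
```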
**4.4.2 BERTac – BERT-based Architecture**
BERTac is a BERT-based (Devlin et al., 2018) architecture (see Section 2.3.12 for details
on BERT). Before expressions are inputted to BERT, some preprocessing is needed. All
expressions have an identifier which is important information to pass with the expressions.
This is because the identifier is directly referenced in other related expressions. For
instance, a goal expression might contain the variable H. In the local context, there
can be a hypothesis attached to H. The expression of this hypothesis does not contain
any reference to H, but the identifier does. In order to relate the hypothesis to the
correct term in the goal expression, the identifier is needed. The input to BERT is a concatenation of (identifier, expression) pairs. Each pair is mapped to a single sequence of the form identifier + “points to” + expression. The input sequence is tokenized using the pre-trained BERT tokenizer, which has a vocabulary of 30,522 tokens. While this works off-the-shelf, it is by no means ideal, as this tokenizer is intended for natural language rather than logical expressions. The prediction network is a linear layer with a Softmax function applied to the output. Figure 4.6 shows the overall BERTac architecture; a minimal sketch of the input preprocessing and tokenization is given after the figure.
Figure 4.6: The BERTac architecture (preprocessing, BERT embedding and encoder layers, and the prediction network). t denotes a tokenized sequence.
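The sketch below illustrates the input construction and tokenization described above, using the Hugging Face `transformers` BERT tokenizer. The example expressions are made up, and loading `bert-base-uncased` is an assumption about which pre-trained tokenizer is meant; the Thesis' exact configuration may differ.

```python
from transformers import BertTokenizer

# Toy goal and local-context expressions; identifiers are kept because they
# are how related expressions reference each other.
pairs = [
    ("goal", "n + 0 = n"),
    ("H", "forall m : nat, m + 0 = m"),
]

# Map each (identifier, expression) pair to "identifier points to expression"
# and concatenate them into a single input sequence.
text = " ".join(f"{ident} points to {expr}" for ident, expr in pairs)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoded = tokenizer(text, max_length=512, truncation=True,
                    padding="max_length", return_tensors="pt")

print(encoded["input_ids"].shape)            # (1, 512)
print(tokenizer.tokenize("forall m : nat"))  # sub-word pieces, not ideal for logic
```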
**4.4.3 QTac – Deep Q-learning Architecture**
_QTac is a deep reinforcement learning agent implementing deep Q-learning (Mnih_
et al., 2015) (see section 2.3.13 for an introduction to deep Q-learning). It trains by
interacting with CoqGym proofs and is constructed in the same way as the agent described in Section 4.3. However, rather than training each classifier with deep Q-learning, only Cτ is trained this way. The Q-network implements the same
model as the GAST Cτ model. CLC and CGC are pre-trained models, loaded from the
top-performing CLC and CGC from the supervised learning experiments. The QTac
architecture is shown in Figure 4.7.
_QTac is trained using a replay memory._ However, to make sure the Q-network
only trains on relevant experiences, only actions that lead to a new proof state are added
to the replay memory. Some actions result in an error response from Coq, even though
the tactic application is valid in itself. Other actions do not respond with an error
message, but the proof state is not changed (i.e., the tactic application corresponds to a
loop in the proof tree). In both of these cases, the experience is discarded. A target
network is used for more stable training with weights from the Q-network periodically
copied to the target network. QTac will also not train on every experience in the replay
memory. Instead, a mini-batch from the replay memory is picked whenever the replay
memory has filled up with 20% more experiences than the mini-batch size. This process
is not entirely random: experiences from successful proof attempts are guaranteed to be included in the mini-batch, and the remainder is chosen uniformly at random. This is because the theorem proving task is difficult and QTac is likely to fail in most cases. To make sure the agent sees enough positive examples, they are guaranteed to be replayed when they occur.
In order for QTac to balance exploration and exploitation, an ϵ-greedy approach is used. Furthermore, ϵ is set to decay exponentially. Two modes for training QTac are implemented:
- Wide: Consider each proof as an episode and decay ϵ between proofs. This means that each theorem is attempted only once. The idea is that QTac will see as many theorems as possible during training.
- Deep: Have QTac attempt n number of episodes for each theorem successively. ϵ is
decayed between each episode and reset for each new theorem. The idea is that
_QTac will try to get “really good” at proving the theorems it is exposed to at the_
cost of seeing fewer unique theorems.
_QTac can combine reinforcement learning and supervised learning by periodically training_
the Q-network on a labeled batch from the proof step dataset. The target network is in
this case also updated after a supervised session has taken place. This is a straightforward addition as the Q-network implements the same architecture as the GAST Cτ model.
A simple reward function r is used for each new state QTac encounters: a timeout or reaching the maximum number of tactics is considered a failed proof attempt and results in r = −1 (negative reinforcement), a successful proof yields r = 1 (positive reinforcement), and reaching a non-terminal state yields r = 0 (neutral reinforcement). A minimal sketch of the reward assignment and replay sampling is given after Figure 4.7.
Figure 4.7: The QTac architecture (Q-network, replay memory, tactic building, and the CoqGym API). fϵ is the ϵ-greedy function, r is the reward function, s denotes a proof state, and tn denotes time step n.
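The sketch below illustrates the reward assignment and the biased replay sampling described in this section. The constants (mini-batch size of four, the 20% fill threshold, and the reward values) are taken from the text; the names and data structures are illustrative assumptions rather than the Thesis' implementation.

```python
import random
from collections import namedtuple

Experience = namedtuple("Experience", "state action reward next_state done")

def reward(outcome):
    # r = 1 for a finished proof, r = -1 for timeout / tactic-limit failure,
    # r = 0 for any non-terminal proof state.
    return {"proved": 1.0, "failed": -1.0, "in_progress": 0.0}[outcome]

class ReplayMemory:
    def __init__(self, batch_size=4, fill_factor=1.2):
        self.buffer, self.batch_size = [], batch_size
        self.trigger = int(batch_size * fill_factor)  # replay at 20% over batch size

    def push(self, exp, state_changed):
        # Only experiences that actually changed the proof state are kept.
        if state_changed:
            self.buffer.append(exp)

    def ready(self):
        return len(self.buffer) >= self.trigger

    def sample(self):
        # Successful experiences are guaranteed to be replayed; the rest of
        # the mini-batch is drawn uniformly at random.
        successes = [e for e in self.buffer if e.reward > 0]
        rest = [e for e in self.buffer if e.reward <= 0]
        k = max(self.batch_size - len(successes), 0)
        batch = successes + random.sample(rest, min(k, len(rest)))
        self.buffer = []
        return batch
```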
## Chapter 5
# Experiments and Results
All relevant results will now be presented. This begins with an experimental plan
(Section 5.1), where experiments are broken down into smaller concrete experiments and
configurations are defined. Then, the experimental setup is explained (Section 5.2). This
includes deep learning frameworks, setting up CoqGym, and the computational resources
used to run the experiments. Finally, results are presented (Section 5.3).
##### 5.1 Experimental Plan
Experiments in this Thesis are broken down into three categories:
- Experiment 1: Tactic group experiments.
- Experiment 2: Supervised learning models for end-to-end theorem proving.
- Experiment 3: Reinforcement learning models for end-to-end theorem proving.
Several models are trained and tested for each experiment. To keep things simple, all
models are trained using the popular Adam optimizer (Kingma and Ba, 2017) (see Section
2.3.7). This is the same optimizer used in the HOList experiments (Bansal et al., 2019b,a;
Paliwal et al., 2020). A mini-batch size of four will be used if not stated otherwise; any larger mini-batch size risked running into memory issues on the available hardware. Cross-entropy loss will be used for all supervised models, and Huber loss will be used for replay memory training, as this loss is less sensitive to outliers (see Section 2.3.4).
Three levels of regularization are defined for the experiments. These are summarized in Table 5.1, and will be referred to as low, medium and high regularization,
for the sake of brevity. Models will be trained on one or more regularization levels,
depending on indications from previous results. For an explanation of weight decay and
dropout, see Section 2.3.8. Furthermore, models are early stopped based on validation
accuracy scores computed after each training epoch.
**5.1.1 Experiment 1 – Tactic Groups**
The goal of the tactic group experiments is to prototype GCN-based models (the GAST
models designed in section 4.4.1) and BERT-based models (the BERTac models designed
Table 5.1: The three levels of regularization defined for the experiments.

| | low | medium | high |
| --- | --- | --- | --- |
| **Weight decay** | 1e-6 | 1e-5 | 1e-5 |
| **Dropout** | 0.1 | 0.5 | 0.7 |
in Section 4.4.2) on labeled proof step data. The models are tasked with predicting the tactic groups defined in Section 4.2 for each proof step. Models will only be trained and validated on human-written proof steps. This assumes that good hyperparameters for models trained on human-written proofs will also be good hyperparameters for models trained on synthetic proofs. See Section 3.2.4 for an explanation of CoqGym's synthetic proof data.
**Experiment 1a: Tactic Group Baselines**
The tactic group proxy metric is designed in this Thesis. Therefore, there are no previous
comparable results for this metric. In order to have benchmarks to compare GAST and
BERTac against, the following baselines are defined:
- Weighted guesses (baseline 1).
- Most common class (baseline 2)
- Feed Forward Network (FFN) classifier (baseline 3).
Baseline 1 and 2 are straightforward. Baseline 1 makes a random guess, where the
probability for picking a tactic group is weighted by the frequency of how often that
tactic group occurs. Baseline 2 always guesses the most common class.
Baseline 3 is an FFN classifier. A goal encoding is passed to the input layer
and propagated through two fully connected hidden layers of the same dimension as
the input layer. The hidden state is then passed to a fully connected output layer of
dimension four (as there are four tactic groups).
The main challenge with baseline 3 is to obtain a goal encoding. Fortunately, CoqGym provides Abstract Syntax Tree (AST) representations of Coq expressions (see Section 3.2.4), whose nodes take values from a fixed-size space. An encoding for baseline 3 is obtained by simply counting the number of occurrences of each nonterminal in the goal AST. Note that this means the relational information (i.e., the AST edges) is lost.
There are 55 nonterminals, meaning this will be the dimension of the input and hidden layers. Baseline 3 therefore consists of 55 · 55 · 2 + 55 · 4 = 6,270 (weight) parameters in total. Dropout and ReLU activation are used between the layers, before Softmax is applied to the output logits to obtain a probability distribution over the four tactic groups. The model will be trained on both the low and medium regularization levels; a minimal sketch of this baseline is given below.
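The following is a minimal PyTorch sketch of baseline 3 under the description above: a 55-dimensional nonterminal-count encoding, two hidden layers of the same width, dropout and ReLU between layers, and a 4-way output. The `count_encode` helper and its input format are illustrative stand-ins for CoqGym's actual AST handling, and bias terms (not counted in the 6,270 figure) are included by default.

```python
import torch
import torch.nn as nn

NUM_NONTERMINALS = 55   # CoqGym's fixed set of abstract symbols
NUM_GROUPS = 4          # the four tactic groups

def count_encode(nonterminal_indices):
    # Goal encoding: count how often each nonterminal occurs in the goal AST.
    x = torch.zeros(NUM_NONTERMINALS)
    for i in nonterminal_indices:
        x[i] += 1.0
    return x

class FFNBaseline(nn.Module):
    def __init__(self, dropout=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_NONTERMINALS, NUM_NONTERMINALS), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(NUM_NONTERMINALS, NUM_NONTERMINALS), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(NUM_NONTERMINALS, NUM_GROUPS),
        )

    def forward(self, x):
        # Softmax over the four tactic groups (training would use the logits
        # with a cross-entropy loss instead).
        return torch.softmax(self.net(x), dim=-1)

model = FFNBaseline(dropout=0.1)
goal = count_encode([0, 3, 3, 17, 42]).unsqueeze(0)   # toy nonterminal indices
print(model(goal))   # probability distribution over the 4 groups
```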
**Experiment 1b: GAST on Tactic Groups**
A variety of hyperparameters and message passing algorithms will be tested for the GAST
model. Two phases are defined for this experiment:
1. Phase 1: Vary message passing algorithm. Fix everything else.
2. Phase 2: Vary the complexity and regularization of the network. Fix everything
else.
Although everything except message passing stays fixed in phase 1, models will still run
on low and medium regularization levels. This is to help determine what regularization
level is best suited for GAST. Three message passing algorithms will be tested in phase 1:
- Multilayer Perceptron (MLP). A custom, and simple, message passing algorithm
based on the message passing algorithm from Paliwal et al. (2020). The encodings
of two adjacent nodes are passed through an FFN with one hidden layer to compute
node embeddings.
- Graph Convolutional Network (GCN). Message passer from Kipf and Welling (2017)
(see Section 2.3.11).
- Simple Graph Convolutions (SGC). Message passer from Wu et al. (2019) (see
Section 2.3.11).
MLP is included to see if an MLP message passing technique can compete with the more
sophisticated convolution-based techniques. It is also similar to the implementation
in Paliwal et al. (2020), which showed improvements in the HOList framework. GCN
(Kipf and Welling, 2017) is arguably the most adopted message passing technique and
serves as a natural starting point for GCN implementations. SGC (Wu et al., 2019) is
interesting because it is essentially the same algorithm as GCN, only simplified. This
makes it a faster algorithm while still being competitive with GCN (Wu et al., 2019).
Models are trained and validated for eight epochs in this experiment. As a comparison,
Yang and Deng (2019) train ASTactic for four epochs.
Some hyperparameters are not focused on and remain fixed. A linear layer
serves as the prediction network, and only a single round of message passing (the number
of hops, see Section 2.3.11) will be used. The learning rate will simply be set to 1e-3.
This is higher than a learning rate of 3e-5, used by Yang and Deng (2019) and First
et al. (2020), and 1e-4, used by Paliwal et al. (2020). A node embedding size of 256 is chosen as the default – the same as ASTactic (Yang and Deng, 2019) and TacTok (First et al., 2020). A simple mean operation will be used as the readout function, to globally pool
node embeddings to a fixed size graph embedding of size 256. ReLU activation is used
between each neural network layer.
Only SGC will be used during phase 2, as this is the fastest to run. These
experiments will be run for 20 epochs instead of just eight, as this is not too computationally expensive when using SGC. It will also give the models a better chance of escaping local minima. Three configurations will be tested; these are summarized in Table 5.2. The main goal is to compare low and medium regularization further and to see whether adding complexity to the network increases performance.
Table 5.2: GAST configurations for phase 2 of experiment 1b.

| | default | reg. | complex |
| --- | --- | --- | --- |
| **Node emb. dim.** | 256 | 256 | 1024 |
| **Message passing** | SGC | SGC | SGC |
| **Readout** | Mean | Mean | Mean |
| **Prediction network** | Linear | Linear | Linear |
| **Hops** | 1 | 1 | 4 |
| **Regularization** | low | medium | medium |
**Experiment 1c: BERTac on Tactic Groups**
The focus for experiment 1c is not the BERT architecture, but rather regularization
levels and learning rate. Three configurations will be tested for BERTac. These are
summarized in Table 5.3. Note that fixed BERT-specific configurations are also included
in the table.
Table 5.3: BERTac configurations for experiment 1c.

| | low reg. | low α | medium reg. + low α |
| --- | --- | --- | --- |
| **Regularization** | low | low | medium |
| **Learning rate (α)** | 1e-3 | 1e-6 | 1e-3 |
| **Tokenizer length** | 512 | 512 | 512 |
| **Vocabulary size** | 30,522 | 30,522 | 30,522 |
| **Hidden layers** | 6 | 6 | 6 |
| **Attention heads** | 6 | 6 | 6 |
The drop in learning rate is due to recommendations provided by Devlin et al. (2018), who suggest that a learning rate on the order of 1e-5 is usually preferable for BERT.
Tokenizer length and vocabulary size are simply set to the default values used
in the original BERT implementation. The number of hidden layers is reduced from 12 to
6. The same is done for the number of attention heads. This is to speed up the training
process. While it would be interesting to drill deeper into the BERT architecture, time
constraints dictated that these experiments stayed reasonably off-the-shelf.
**5.1.2 Experiment 2 – Supervised Learning**
Table 5.4 summarizes the configurations used in experiment 2. For both GAST and BERTac, a tactic classifier Cτ, a local context classifier CLC, and a global context classifier CGC need to be trained (see the theorem proving agent described in Section 4.3). Three different models will be trained for each classifier – one on human-written proofs, one on synthetic proofs, and one on both datasets.
Table 5.4: Configurations for experiment 2. LC denotes the local context and GC the global context.

| Parameter | Values |
| --- | --- |
| **Datasets** | human, synthetic, both |
| _n_ (hypotheses from LC) | 10 |
| _m_ (theorems from GC) | 10 |
| _d_ (depth limit) | 10, 50, 100 |
| _k_ (beam width) | 5, 10, 20 |
| **Regularization** | medium, high |
Results from experiment 1 indicated that models should at least be trained using medium regularization. The models will therefore use this setting as a default. A
version using high regularization will also be tested to see the effect of increased dropout
(see Table 5.1). Only the models with the highest validation scores will be used to build
theorem proving agents.
When designing the theorem proving agent the variables n and m were defined
(see Section 4.3). These correspond to the number of hypotheses to include from the local
context (n) and the number of theorems to include from the global context (m). m is
simply chosen to be the same as Yang and Deng (2019) and First et al. (2020): m = 10.
To decide a reasonable value for n, observe Figure 5.1, which plots the percentage of proof steps that have n hypotheses in the local context. Most proof steps have a local context consisting of fewer than 20 hypotheses, so setting n = 20 seems like a reasonable choice. Unfortunately, the ASTs can be fairly large, and GAST needs to deal with one more AST for every added hypothesis; that is, increasing n by one corresponds to one more AST. Therefore, n = 10 will be used instead of n = 20. This decreases the complexity of the CLC models, which is particularly important for GAST CLC models, as too many ASTs can result in memory issues. The downside is that CLC will be able to deal with fewer proof states than if n were higher.
Recall too that both CLC and CGC are only trained on examples where a true
correct argument exists (explained in Section 4.3). This means that lowering n and m
decreases the dataset sizes CLC and CGC train on. The resulting dataset sizes for the
classification models, when n = 10 and m = 10, are shown in Table 5.5. As can be seen
in the table, there is a significant drop in the dataset sizes for the argument models.
Figure 5.1: The percentage of proof steps (human and synthetic) that have n hypotheses in the local context.
Table 5.5: The dataset sizes for the supervised learning models. The numbers are presented in the format “train / validation”.

| | Human | Synthetic | Both |
| --- | --- | --- | --- |
| Cτ | 121,764 / 68,180 | 174,076 / 113,048 | 295,720 / 181,228 |
| CLC | 16,492 / 9,705 | 30,350 / 19,862 | 46,842 / 29,567 |
| CGC | 6,108 / 3,280 | 14,464 / 8,688 | 20,572 / 11,964 |
Depth-First Search will be used to traverse the proof tree, the same as ASTactic and TacTok. In addition, a beam of the top k tactic applications is calculated for each proof state so that the agent can apply a new tactic whenever it has to backtrack in the proof tree. A depth limit d is used to limit how far down a branch in the proof tree the agent traverses. As default values, d = 50 and k = 10 will be used. d is chosen to be the same value as Yang and Deng (2019). Although k = 10 is not the optimal value for TacTok or ASTactic, it is chosen as the default because it speeds up the test process (Yang and Deng, 2019). Note that experiment 2e will address different values for d and k (including the optimal k = 20 from Yang and Deng (2019) and First et al. (2020)), to see what impact these variables have on performance.
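For illustration, the search procedure can be sketched as a depth-limited, beam-restricted Depth-First Search, as below. This is not the CoqGym or ASTactic implementation; the callables predict_tactics, apply_tactic, and is_proved, as well as the budget handling, are assumptions.

```python
import time

def dfs_proof_search(initial_state, predict_tactics, apply_tactic, is_proved,
                     d=50, k=10, max_tactics=300, timeout=600):
    """Depth-first proof search with depth limit d, beam width k, a global
    budget on tactic applications, and a wall-clock timeout (illustrative)."""
    budget = {"tactics": 0, "start": time.time()}

    def search(state, depth):
        if is_proved(state):
            return True
        if depth >= d:
            return False  # depth limit reached: backtrack
        for tactic in predict_tactics(state, k):  # top-k tactic candidates (the beam)
            if budget["tactics"] >= max_tactics or time.time() - budget["start"] > timeout:
                return False  # global budget exhausted: give up on this branch
            budget["tactics"] += 1
            next_state = apply_tactic(state, tactic)
            if next_state is not None and search(next_state, depth + 1):
                return True
        return False  # all k candidates failed: backtrack to the parent state

    return search(initial_state, 0)
```

Backtracking happens implicitly when a recursive call returns False, at which point the next of the k candidates at the parent proof state is tried.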
The default timeout and the total number of tactics the agent is allowed to try
before giving up on a proof search is set to the same as experiments in Yang and Deng
(2019) and First et al. (2020). This is a timeout of ten minutes and a limit of 300 tactics
for each theorem.
**Experiment 2a: Random Guessing Baseline**
A random guessing agent, using the design from Section 4.3, will be tested as a baseline. Because this baseline implements the same agent design as the rest of the agents in this Thesis, results can be compared directly. Whenever either a global context or local context dependent tactic is guessed, a random guess over the corresponding argument space will be made. The random guessing agent will be tested with the default d = 50 and k = 10 values, and again with updated d and k values if experiment 2e shows that different values improve results.
**Experiment 2b: Supervised GAST Models**
For this experiment, the learning rate will be set to 1e-3 and node embedding size to 256,
based on results from tactic group experiments. The readout function in this architecture
is the SortPooling operator from Zhang et al. (2018), which selects the top k features from the node features. The top k = 30 features will be pooled.
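A rough PyTorch Geometric sketch of such a model is shown below, combining an SGC layer with the SortPooling readout (global_sort_pool) over the top k = 30 node features and a linear classification head. The exact layer layout, dropout rate, and number of hops are assumptions and not the precise GAST architecture.

```python
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import SGConv, global_sort_pool

class GastTacticClassifier(nn.Module):
    """Sketch of a GAST-style classifier (illustrative, not the exact model)."""

    def __init__(self, in_dim, num_classes, hidden_dim=256, k=30, hops=2, dropout=0.5):
        super().__init__()
        self.conv = SGConv(in_dim, hidden_dim, K=hops)   # simplified graph convolution (SGC)
        self.k = k
        self.dropout = dropout
        self.head = nn.Linear(k * hidden_dim, num_classes)

    def forward(self, x, edge_index, batch):
        h = F.relu(self.conv(x, edge_index))
        h = F.dropout(h, p=self.dropout, training=self.training)
        h = global_sort_pool(h, batch, self.k)           # SortPooling readout over top-k nodes
        return self.head(h)                              # logits over the output classes
```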
**Experiment 2c: Supervised BERTac Models**
For this experiment, the learning rate will be set to 1e-6, based on results from the tactic group experiments. The number of hidden layers and attention heads is reduced from 12 to 6 to speed up the training process.
BERTac will also be trained and validated in one additional setting: loading
pre-trained BERT model weights before training. Although it seems unlikely that
weights obtained from classical NLP pre-training tasks will help a model trying to reason
about logical expressions, it is included out of curiosity.
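A minimal Hugging Face sketch of the two settings is shown below. The number of labels and the checkpoint name are placeholders; the BERTac-specific preprocessing of Coq expressions (Section 4.4.2) is not reproduced here.

```python
from transformers import BertConfig, BertForSequenceClassification

NUM_TACTIC_CLASSES = 14  # placeholder for the number of output classes

# BERTac-style classifier trained from scratch, with the reduced depth and
# width described above (6 hidden layers, 6 attention heads).
config = BertConfig(num_hidden_layers=6, num_attention_heads=6,
                    num_labels=NUM_TACTIC_CLASSES)
model_scratch = BertForSequenceClassification(config)

# Additional setting: initialise from pre-trained BERT weights (the standard
# 12-layer checkpoint) before fine-tuning on proof steps.
model_pretrained = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=NUM_TACTIC_CLASSES)
```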
**Experiment 2d: Combining GAST and BERTac Models**
It is possible to combine the classifiers in arbitrary ways, as each classifier making up
the theorem proving agent is independent of each other. Suppose, for example, a GAST
model is the best core tactic classifier (Cτ ), and BERTac models are the best argument
classifiers (CLC and CGC). In that case, it is trivial to have the agent load the GAST
model as its tactic classifier and the BERTac models as its argument classifiers. For each
proof step dataset, the best performing classifier from each architecture will be combined
to form a “best” agent.
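This modularity can be pictured with a small sketch; the class and attribute names are purely illustrative and not taken from the actual implementation.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class TheoremProvingAgent:
    """Container for the three independently trained classifiers."""
    c_tau: Any  # core tactic classifier, e.g. the best GAST model
    c_lc: Any   # local context argument classifier, e.g. the best BERTac model
    c_gc: Any   # global context argument classifier, e.g. the best BERTac model

# Any combination of architectures can be loaded into one agent; the file
# names below are placeholders for saved model checkpoints.
best_agent = TheoremProvingAgent(c_tau="gast_tau.pt",
                                 c_lc="bertac_lc.pt",
                                 c_gc="bertac_gc.pt")
```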
77
-----
_Chapter 5 Experiments and Results_
**Experiment 2e: Changing depth limit and beam width**
In order to see if the depth limit d and beam width (i.e., the number of tactic candidates)
_k impact the performance, a few variations of these values will be tested. d will be set to_
10 and 100, and k to 5 and 20. If the result is better for different d or k than the default
_d = 50 and k = 10, the best combination will be tested. This experiment will only use_
the overall best-performing end-to-end agent from previous experiments. Yang and Deng (2019) also test ASTactic with different k values. They found that k = 20 is optimal. However, they only test with d = 50. First et al. (2020) use k = 20 and d = 5.
**5.1.3 Experiment 3 – Reinforcement Learning**
Since the deep Q-learning agent QTac (described in Section 4.4.3) trains by interactive
proof attempts, it is subject to the SerAPI bottleneck (see Section 3.2.4). This means
that exposing QTac to the whole training dataset within a reasonable time frame can be
challenging. Yang and Deng (2019) provide the average time used to prove theorems for
ASTactic. This is 2.2 seconds when k = 10. With this in mind, a time limit of only three
seconds, as opposed to the default 10 minutes, will be used when training QTac. This
drastically lowers the time spent on each proof; QTac will see more proofs in a shorter
amount of time, at the cost of failing on proofs it potentially could have solved given
more time. This is not a major issue if one assumes that most proofs are solved within
three seconds.
_QTac’s Q-network follows the same architecture as the GAST Cτ model._ This
means that it is possible to train the QTac agent with labeled examples, in addition
to replay memory training. Bansal et al. (2019b) use this approach when training
their reinforcement agent in HOList. To help QTac in the training phase, the
same idea will be used here. However, instead of only using an initial supervised
learning phase before reinforcement learning is deployed, as done by Bansal et al.
(2019a), supervised training will be interleaved with the reinforcement learning
process. Specifically, after every 1,000 proof attempts, QTac will be supervised on 2,000
synthetic proof steps. This means that an imitation proof style will influence QTac
less in the highly explorative beginning phase of training. Hopefully, this increases
the chance that QTac finds its own “style” of proving theorems, which is the main
goal for this agent. The mini-batch size is simply set to one during the supervised training.
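The interleaving schedule can be sketched as follows; the method names on the qtac object (attempt_proof, replay_update, supervised_update) are illustrative assumptions rather than the actual interface.

```python
import itertools

def train_qtac(qtac, theorems, synthetic_steps,
               supervise_every=1000, supervised_steps=2000):
    """Interleave replay-memory Q-learning with supervised imitation:
    after every 1,000 proof attempts, train on 2,000 synthetic proof steps."""
    step_stream = itertools.cycle(synthetic_steps)
    for i, theorem in enumerate(theorems, start=1):
        qtac.attempt_proof(theorem, time_limit=3)  # fills the replay memory
        qtac.replay_update()                       # Q-learning on sampled experiences
        if i % supervise_every == 0:
            for _ in range(supervised_steps):
                qtac.supervised_update(next(step_stream), batch_size=1)
```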
Regularization techniques are less common in reinforcement learning than in supervised learning. This is usually fine when the test task is identical to the training task (e.g., when teaching an agent to play chess), as overfitting is not a concern. However, in the case of theorem proving, each proof is different and the agent must learn a general approach to the proof procedure. Regularization can sometimes help increase
generalization. Therefore, two approaches will be tested: (1) use low regularization
while training, and (2) use medium regularization. Regularization will apply to both the
_Q-network and the target Q-network._
When training the Q-network in a supervised fashion, the learning rate will be
set to 1e-3 (the same as for the supervised learning models). A lower learning rate of
1e-5 will be used for replay memory training. The reason for this is that replay memory
includes a considerable amount of noisy training data. The theorem proving task is hard,
and QTac will most likely fail in most cases. It is not desirable that QTac steps too far
along gradients when seeing potentially highly irrelevant experiences.
Interacting via SerAPI remains a bottleneck even with a time limit of three seconds. This is because SerAPI sometimes has to wait several minutes before Coq responds to a tactic application, making the three-second timeout ineffective in such cases. QTac will, therefore, only be trained on 10,000 theorems. This is less than 25% of the full training set in CoqGym.
QTac will do supervised learning on 10 · 2,000 = 20,000 proof steps. Setting the maximum number of tactic applications for each proof attempt to 50 means each proof attempt will typically generate around 50 replay experiences, as QTac is expected to fail on most theorems (i.e., by using up all 50 tactic applications). The resulting experiences QTac can potentially train on amount to 10,000 · 50 = 500,000. As mentioned in
Section 4.4.3, not all replay experiences are used for training. Instead, a random sample of 256 is drawn whenever the replay memory exceeds 307 experiences (20% more than the replay batch size of 256). This means that QTac will train on around 400,000 replay experiences from 10,000 proof attempts.
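A minimal sketch of this sampling rule is shown below. It omits the guarantee, described in Section 4.4.3, that successful proofs are included in the sampled subset, and the class interface itself is an assumption.

```python
import random

class ReplayMemory:
    """Train on a random batch of 256 experiences whenever the memory holds
    more than 307 entries (20% more than the batch size)."""

    def __init__(self, batch_size=256, overflow=1.2):
        self.batch_size = batch_size
        self.threshold = int(batch_size * overflow)  # 307 experiences
        self.memory = []

    def add(self, experience):
        self.memory.append(experience)

    def ready(self):
        return len(self.memory) > self.threshold

    def sample(self):
        return random.sample(self.memory, self.batch_size)
```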
**Experiment 3a: Wide QTac**
As mentioned in Section 4.4.3, two training modes will be used for QTac. One is the
_wide mode. In this mode, QTac only sees each theorem once. The exploration rate ϵ is_
decayed after each proof attempt. ϵ will start at 1.0 and approach 0.2. The decay rate is
set to 3e3 in this mode. When trained on 10,000 proofs, this means ϵ will be ~0.2 at the
end of the training session.
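One common exponential decay schedule consistent with these numbers is sketched below; the exact formula used by QTac is not restated here, so this particular schedule is an assumption.

```python
import math

def epsilon(step, eps_start=1.0, eps_end=0.2, decay=3e3):
    """Exponential epsilon decay: eps_end + (eps_start - eps_end) * exp(-step/decay)."""
    return eps_end + (eps_start - eps_end) * math.exp(-step / decay)

# In the wide mode, epsilon is decayed once per proof attempt:
print(round(epsilon(0), 3))        # 1.0
print(round(epsilon(10_000), 3))   # ~0.229, i.e. roughly 0.2 after 10,000 proofs
```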
**Experiment 3b: Deep QTac**
_QTac is also trained in the deep mode. The idea is that QTac will attempt each theorem_
10 times before moving on to the next.
However, CoqGym does not support a straightforward way to handle individual theorems. Instead, the agents interact with proof files. This does not matter in most cases (and has therefore not been mentioned before now), but when deciding deep QTac's ϵ decay, it does. Each proof file contains an unknown, small number of theorems. Therefore, a simple solution will be used to approximate the ideal deep mode: QTac trains on the same proof file 10 times before moving on to the next. It is simply assumed that each file contains around 20 theorems.
ϵ will be decayed so that it starts at 1.0 and ends at 0.2 when 10 · 20 = 200 proof attempts are reached. The ϵ decay is set to 3e1, meaning that ϵ ends at ~0.2 when reaching 200 theorems. ϵ is then reset. In this mode, QTac will only be exposed to 10,000 / 10 = 1,000 unique proofs, as each proof is attempted 10 times.
##### 5.2 Experimental Setup
This section explains the different frameworks and resources used in the experiment
implementation. The code for all end-to-end experiments is available in a CoqGym
fork on GitHub[1]. The code for the tactic group experiments is available in a separate
repository[2].
**5.2.1 Deep Learning Frameworks**
All models are implemented using PyTorch[3]. PyTorch is a popular deep learning
framework which is high-level, flexible and allows for easy integration with Nvidia GPUs
through the CUDA API.
PyTorch Geometric[4] (Fey and Lenssen, 2019) is used to implement GNN models. It is a general-purpose GNN library built on top of PyTorch that provides easy implementations of both custom and pre-implemented message passing algorithms[5]. The framework also implements additional graph-related functionality, like readout functions and graph batch handling.
The Hugging Face[6] implementation of BERT is used. Hugging Face provides an API to extract both general-purpose Transformer implementations and specific implementations such as BERT. Furthermore, one can load pre-trained versions of both a BERT tokenizer and the BERT model itself, or simply the architecture with no specific weight initialization. The API is very flexible and lets the user specify BERT details, like the number of hidden layers, the number of attention heads, and the number of unique tokens in the tokenizer.
**5.2.2 CoqGym Setup**
A handful of libraries are needed when using CoqGym. A specific version of OCaml[7]
must be set up with the OPAM package manager[8]. OCaml is the functional programming
language in which Coq is written. The Coq projects comprising the CoqGym proof
1https://github.com/MaganMK/CoqGym
2https://github.com/MaganMK/prox
3https://pytorch.org/
4https://pytorch-geometric.readthedocs.io/en/latest/
5https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#convolutional-layers
6https://huggingface.co/
7https://ocaml.org/
8https://opam.ocaml.org/
data have to be built, a process taking around four hours. CoqGym’s caching system
also leverages a library called Lightning Memory-Mapped Database (LMDB)[9], which
needs to be available for CoqGym to work properly. Some Coq projects also use Ruby,
meaning that Ruby also has to be installed for the full dataset to be available. A detailed
description of setting up CoqGym can be found in the README on the official GitHub
repository[10].
When building the various Coq projects, an issue occurred for the coquelicot
project. Coq claims that this project makes inconsistent assumptions over another
Coq library ssreflect. Despite continued efforts, the problem was never resolved.
This means that the coquelicot project remained broken and could not be used for
experiments in this Thesis. coquelicot is part of the CoqGym test set, meaning that
agents are not evaluated on this Coq project, reducing the test set from 13,137 theorems
to 11,670 theorems.
**5.2.3 Computing Resources**
All experiments were run using the NTNU Idun HPC cluster[11] (Själander et al., 2019).
Idun provides both high-end CPU and GPU clusters. Experiments were primarily run on
Tesla P100 GPU clusters with four to eight logical cores. The exception was end-to-end
theorem proving experiments as matrix computations were not the main bottleneck but
rather the CoqGym SerAPI calls. These experiments were run on CPU clusters with
Intel Xeon cores. The tactic group experiments took between one and three days to
run, depending on the number of epochs. Training supervised learning models for the
end-to-end theorem proving task took around two days. Training reinforcement learning
agents took around two days as well, when proof search timeout was set to three seconds.
Testing models on end-to-end theorem proving took around three days, with a ten minute
time limit for each proof.
##### 5.3 Experimental Results
All relevant results will now be presented. An overview of best-performing theorem
proving agents is provided in Table 5.6. These are the main results from experiments
2 and 3. Using the default depth limit d of 50 and beam width k of 10, the baseline
random guesser can prove 6.87% of CoqGym’s test set. Keeping d = 50 and k = 10, the
best-performing supervised learning agent proves 8.65% of the theorems. This is 25.91%
more theorems than the random guessing baseline. Note that ASTactic is also included
in the table as the main comparison for this model, as it is tested using the same values
for d and k. The best-performing supervised agent with d = 50 and k = 10 scores 2.15
percentage points lower than the corresponding ASTactic model.
9https://symas.com/lmdb/
10https://github.com/princeton-vl/CoqGym
11https://www.hpc.ntnu.no/idun
When modifying d and k, a score of 9.98% is achieved for the supervised agent.
The random baseline also slightly improves results to 7.27%, when the same d and
_k values are used._ This means that the best-performing supervised learning agent
proves 37.28% more theorems than the corresponding random guessing agent and 2.92
percentage points lower than the state-of-the-art TacTok model (First et al., 2020).
The best-performing wide QTac agent proves 10.63%, and the best-performing
deep QTac agent proves 10.74%. 10.74% is the highest score for any agent in this Thesis,
ending up at 47.73% more theorems proved than the corresponding random guessing
agent, and 2.16 percentage points lower than TacTok.
Table 5.6: Main end-to-end theorem proving results from experiments 2 and 3. h indicates the model was trained on human-written proof steps and s that it was trained on synthetic proof steps. G indicates that the model was a GAST model, and B that it was a BERTac model. Only the best-performing combination of models is included in this overview. The easy baseline corresponds to the best-performing Coq internal automatic engine.

| Agent | Cτ | CLC | CGC | d | k | Test accuracy |
|---|---|---|---|---|---|---|
| easy (baseline 1) | - | - | - | - | - | 4.90% |
| random guesser (baseline 2) | - | - | - | 50 | 10 | 6.87% |
| random guesser (baseline 2) | - | - | - | 10 | 20 | 7.27% |
| ASTactic | - | - | - | 50 | 10 | 10.80% |
| ASTactic | - | - | - | 50 | 20 | 12.20% |
| TacTok (state-of-the-art) | - | - | - | 5 | 20 | 12.90% |
| BERTac | B, s | B, h | B, s | 50 | 10 | 7.99% |
| GAST | G, s | G, h | G, s | 50 | 10 | 8.65% |
| GAST | G, s | G, h | G, s | 10 | 20 | 9.98% |
| QTac, wide | - | G, h | G, s | 10 | 20 | 10.63% |
| QTac, deep | - | G, h | G, s | 10 | 20 | 10.74% |
**5.3.1 Results from Experiment 1**
The main results from experiment 1 are shown in Table 5.7. Both GAST and BERTac
beat the FFN benchmark. However, this is only by a few percentage points. Furthermore,
BERTac performed slightly better than GAST. All deep learning models significantly
outperform the weighted random guesser (baseline 1) and the most common class (baseline
2).
Table 5.7: Main results from experiment 1.

| Model | Validation accuracy |
|---|---|
| Weighted guesses (baseline 1) | 27.80% |
| Most common class (baseline 2) | 33.66% |
| FFN (baseline 3) | 48.86% |
| GAST | 51.28% |
| BERTac | **52.58%** |
**Results from Experiment 1a: Tactic Group Baselines**
Baselines 1 and 2 are computed based on the validation set statistics and achieve
accuracies of 27.80% and 33.66%, respectively.
Baseline 3 (the FFN model) is trained for 30 epochs. Figure 5.2 plots the validation accuracy of both the low and medium regularized models. The model does better
when medium regularization is applied, with the best accuracy of 48.86%.
Figure 5.2: Validation accuracy plots for the FFN baseline from experiment 1. “low” and “medium” refer to regularization levels. (Validation accuracy (%) plotted against epoch over 30 epochs.)
**Results from Experiment 1b: GAST on Tactic Groups**
Figure 5.3a plots the validation accuracy from phase 1 of experiment 1b. These are the
GAST models tested with different message passing algorithms. Both the MLP and
GCN models benefit from increased regularization. This is consistent with the FFN
baseline. SGC performs well with both low and medium regularization. This might be
because SGC implements a less complex message passing algorithm (see Section 2.3.11),
83
-----
_Chapter 5 Experiments and Results_
and regularization is typically needed for more complex networks (see Section 2.3.8). All
models seem to converge towards an optimum at around 50% accuracy.
Figure 5.3b plots the validation accuracy from models in phase 2 of experiment
1b. GAST seems to benefit from training for longer than eight epochs as the models
approach 52% when trained for 20 epochs. Medium regularization helps GAST even
though SGC is the implemented message passing algorithm. The benefit from using
medium regularization can be seen from epoch 14 onward. Increasing the node embedding size and the number of hops (the “complex” model) results in a more unstable learning process and does not yield a better validation score. The best-performing GAST
model achieves a validation score of 51.28% when medium regularization is applied,
SGC is the message passing algorithm, and the model is allowed to train for 20 epochs.
Figure 5.3: Validation accuracy plots for GAST models from experiment 1. “low” and “medium” refer to regularization levels. (a) GAST using different message passing techniques (MLP, GCN, and SGC, each with low and medium regularization). (b) GAST using different levels of regularization and network complexity (SGC with low, medium, and “complex” settings).
**Results from Experiment 1c: BERTac on Tactic Groups**
The validation accuracy plots from experiment 1c are shown in Figure 5.4. Even with
much less tuning than GAST, BERTac performs slightly better. The best score is
achieved by the low α model, at 52.58% accuracy. This is consistent with suggestions
from Devlin et al. (2018), where a learning rate around 1e-5 is recommended.
Comparing Figure 5.4 with Figure 5.3b shows how BERTac converges faster towards an optimum than GAST. Increasing regularization does not seem to have any lasting impact on performance for BERTac, although it hinders convergence somewhat during the first few epochs.
Figure 5.4: Results from experiment 1c. BERTac is tested with different levels of regularization and learning rates. “low α” means that the learning rate is reduced from 1e-3 to 1e-6. “low” and “medium” refer to regularization levels. (Validation accuracy (%) plotted against epoch; curves: low, low α, medium + low α.)
**5.3.2 Results from Experiment 2**
**Validation Accuracy for Cτ** **, CLC, and CGC Models**
The validation scores for each of the three classifiers, for both GAST and BERTac,
are shown in Table 5.8. The validation scores for BERTac with pre-trained weights
are included as well. BERTac scores the highest on tactic classification but significantly lower on both argument classification tasks. Both Cτ and CGC experience
performance gains when trained (and validated) on synthetic proof steps rather
than human-written proof steps. However, CLC performance is higher when trained
(and validated) on human-written proof steps. Training on both human-written and
synthetic proof data does not help any of the models. This points to differences in the
human-written and synthetic proof data, making it counterproductive to learn from both
datasets. It does not help BERTac to load pre-trained BERT weights, meaning classic NLP-style upstream training does not transfer well to formal expressions in this setting.
Validation plots are shown in Figure 5.5. Figure 5.5a shows how GAST and
BERTac perform similarly on the tactic classification task when trained on synthetic
proofs. Differences are bigger when training on human-written proofs. Figure 5.5b and
5.5c show how all GAST models outperform BERTac models on argument classification
tasks. The plots also show that the type of training data impacts performance for all
classifiers. Plot 5.5d is included to showcase how GAST performs better when high
regularization is applied, while BERTac performs better when medium regularization is
applied. This is consistent with experiment 1, where increasing regularization from low
to medium improved results for GAST but slightly decreased results for BERTac.
Table 5.8: Validation accuracy for GAST and BERTac C models.

| Model | Dataset | GAST | BERTac | Pre-trained BERTac |
|---|---|---|---|---|
| Cτ | human | 30.19% | 32.98% | 28.90% |
| Cτ | synthetic | 36.10% | **37.72%** | 37.52% |
| Cτ | both | 34.46% | 36.14% | - |
| CLC | human | **50.24%** | 32.92% | 26.54% |
| CLC | synthetic | 48.28% | 29.08% | 22.04% |
| CLC | both | 45.44% | 32.56% | - |
| CGC | human | 31.28% | 23.47% | 23.47% |
| CGC | synthetic | **32.30%** | 27.97% | 25.50% |
| CGC | both | 31.08% | 27.06% | - |
Figure 5.5: Validation accuracy plots for C models. “pt” denotes “pre-trained”. h refers to human-written proof steps and s to synthetic proof steps as the training data. (a) Cτ models from experiment 2. (b) CLC models from experiment 2. (c) CGC models from experiment 2. (d) CLC models with different regularization.
**End-to-End Theorem Proving Accuracy**
The end-to-end theorem proving accuracy for each agent from experiment 2b-d is shown
in Table 5.9. These are agents combining different classifiers based on Table 5.8. All
agents benefit from training on synthetic proofs over human-written or both datasets.
Moreover, performance for GAST and BERTac agents is improved further when having
_Cτ and CGC trained on synthetic proofs and CLC trained on human-written proofs. This_
is consistent with the validation scores from Table 5.8.
However, it is not necessarily the case that it is best to use BERTac as the Cτ
model and GAST as the argument models. This is somewhat inconsistent with the
validation scores, as validation scores for BERTac Cτ models are higher than for GAST
_Cτ models, regardless of the dataset. The best agent consists of only GAST models,_
where Cτ and CGC are trained on synthetic proofs, and CLC is trained on human-written
proofs. This agent proves 8.65% of the test set.
Table 5.9: Performance of GAST and BERTac on end-to-end theorem proving. Each column indicates what proof step data the models were trained on. “Best dataset” refers to each classifier being trained on the best proof step data for that specific classifier (best meaning the highest validation score). “Best combination” combines the best GAST and BERTac classifiers, based on validation scores.

| | Human | Synthetic | Both | Best dataset |
|---|---|---|---|---|
| GAST | 7.92% | 8.46% | 8.29% | **8.65%** |
| BERTac | 6.39% | 7.71% | 7.75% | 7.99% |
| Best combination | 7.04% | 8.62% | 8.36% | 8.61% |
For experiment 2e, only the best performing agent from experiment 2b-d is
used: the “best dataset” GAST agent (see Table 5.9). Results for different depth limits d
and beam widths k are shown in Table 5.10. Results for different d values show that
lowering d to 10 improves results. This indicates that the agent should focus on shorter
proofs, searching wider in the proof tree rather than deeper. Increasing k to 20 improves
performance, and decreasing it to 5 significantly lowers performance. This is consistent
with results from Yang and Deng (2019) – ASTactic proves 6.5%, 10.8%, and 12.2%
for beam widths 5, 10, and 20, respectively. It is easier for the agent to perform well
if it also evaluates lower probability tactic candidates in proof states. Note that the
maximum number of tactic applications allowed in a proof search is always the same (at
300). The agent is therefore never attempting more tactic applications when k increases.
Finally, by combining the best depth limit d = 10 and beam width k = 20, the
agent proves 9.98% of the theorems in the test set.
Table 5.10: Results for different depth limits d and beam widths k.

| d \ k | 5 | 10 | 20 |
|---|---|---|---|
| 10 | - | 8.95% | **9.98%** |
| 50 | 3.94% | 8.65% | 9.36% |
| 100 | - | 8.19% | - |
**5.3.3 Results from Experiment 3**
The results from experiment 3 are shown in Table 5.11. Even when exposed to less
than 25% of the whole training set, QTac agents generally prove more theorems than
the supervised agents. This is encouraging. Moreover, only the Cτ model is trained
using deep Q-learning, meaning that there is potentially more to be gained by training
argument models in the same way.
Results are similar for both the deep and wide mode. Higher regularization
seems to be more critical in the wide mode than the deep. This is unexpected. Recall
that the deep mode has QTac attempt the same theorems several times, while the wide
mode only once. In other words, the deep model should become more specialized for the
theorems it is exposed to. One might expect regularization to be more beneficial in this
scenario, but this is not the case. The impact of regularization will likely be clearer if
_QTac models are trained on more theorems._
Table 5.11: Performance of QTac agents on end-to-end theorem proving. “low” and “medium” refer to the regularization level.

| | low | medium |
|---|---|---|
| Wide | 10.33% | 10.63% |
| Deep | **10.74%** | 9.51% |
**Results for Different Coq Projects**
Table 5.12 provides an overview of results for each Coq project in CoqGym’s test set.
The table compares the best GAST and QTac agents, in addition to the random guesser
for d = 10 and k = 20. Note that the coquelicot project has no results listed because it was not testable, as explained in Section 5.2.2.
The table shows relatively large differences in how many proofs an agent is able
to prove for different Coq projects. For example, projects like PolTac and Demos seem
to be fairly easy, while verdi and verdi-raft seem to be hard. QTac solves 20.1%
of PolTac and 67.6% of demos, while only solving 6.5% of verdi-raft and 7.0% of verdi.
In general, QTac beats GAST on most projects. However, there are some projects where
GAST beats QTac by a reasonable margin. For instance, on the project PolTac, GAST
proves 87 theorems, and QTac proves 73 theorems. This is a relative improvement of
19.18%. A possible explanation for this is that QTac and GAST have learned slightly
different proof styles (elaborated more on in Section 6.3.3), where following the QTac
proof style is less effective in the PolTac project. It could also be because QTac is
exposed to fewer proofs than GAST. Perhaps a subset of the training data contains
important learning examples for learning to prove the PolTac theorems, and QTac might
never be exposed to those examples.
Table 5.12: Theorem proving results for different Coq projects. The number of theorems contained in each Coq project is shown next to the project name.

| Project | Theorems | Random Guesser | GAST | QTac |
|---|---|---|---|---|
| weak-up-to | 139 | 9 (6.5%) | 8 (5.8%) | 10 (7.2%) |
| buchberger | 725 | 63 (8.7%) | 76 (10.5%) | 74 (8.7%) |
| jordan-curve-theorem | 628 | 15 (2.4%) | 25 (4.0%) | 27 (4.3%) |
| dblib | 180 | 22 (12.2%) | 31 (17.2%) | 38 (21.1%) |
| disel | 634 | 35 (5.5%) | 55 (8.7%) | 81 (12.8%) |
| zchinese | 43 | 0 (0.0%) | 3 (7.0%) | 3 (7.0%) |
| zfc | 237 | 14 (5.9%) | 27 (11.4%) | 35 (14.8%) |
| dep-map | 43 | 9 (20.9%) | 5 (11.6%) | 9 (20.9%) |
| chinese | 131 | 15 (11.5%) | 18 (13.7%) | 30 (22.9%) |
| UnifySL | 968 | 71 (7.3%) | 85 (8.8%) | 87 (9.0%) |
| hoare-tut | 18 | 0 (0.0%) | 0 (0.0%) | 2 (11.1%) |
| huffman | 314 | 14 (4.5%) | 22 (7.0%) | 26 (8.3%) |
| PolTac | 363 | 56 (15.4%) | 87 (24.0%) | 73 (20.1%) |
| angles | 62 | 3 (4.8%) | 4 (6.5%) | 4 (6.5%) |
| coq-procrastination | 8 | 2 (25.0%) | 2 (25.0%) | 3 (37.5%) |
| coq-library-undecidability | 2,355 | 155 (6.6%) | 243 (10.3%) | 253 (10.7%) |
| tree-automata | 828 | 58 (7.0%) | 83 (10.0%) | 76 (9.2%) |
| fermat4 | 130 | 0 (0.0%) | 0 (0.0%) | 7 (5.4%) |
| demos | 68 | 43 (63.2%) | 49 (72.1%) | 46 (67.6%) |
| coqoban | 2 | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) |
| goedel | 606 | 33 (5.4%) | 51 (8.4%) | 48 (7.9%) |
| verdi-raft | 2,127 | 75 (3.5%) | 119 (5.6%) | 139 (6.5%) |
| verdi | 514 | 32 (6.2%) | 35 (6.8%) | 36 (7.0%) |
| zorns-lemma | 149 | 6 (4.0%) | 10 (6.7%) | 8 (5.4%) |
| coqrel | 256 | 111 (43.4%) | 118 (46.1%) | 130 (50.8%) |
| fundamental-arithmetics | 142 | 7 (4.9%) | 9 (6.3%) | 8 (5.6%) |
| coquelicot | 1,467 | - | - | - |
| **Total** | 11,670 | 848 (7.3%) | 1,165 (10.0%) | 1,253 (10.7%) |
## Chapter 6
# Evaluation and Discussion
The following chapter will evaluate and discuss results from this Master’s Thesis. Section
6.1 evaluates and discusses the Research Questions formulated in Chapter 1 in light of
the experimental results. Section 6.2 evaluates the Goal formulated in Chapter 1 based
on how the Thesis has answered the Research Questions. Section 6.3 discusses interesting
findings and relevant topics further.
##### 6.1 Evaluation and Discussion of Research Questions
**Research Question 1 How to design an easy and fast Auto-ITP proxy metric that also**
_indicates end-to-end theorem proving performance?_
The tactic group proxy metric designed in Section 4.2 directly addresses Research Question 1. It is designed to balance the CoqGym dataset and emphasizes overall proof
strategy rather than individual tactics. Results from the tactic group experiments
(Section 5.3.1) indicate that the BERT-based model BERTac should predict core tactics
better than the GCN-based model GAST. This is also the case; comparing the validation
scores for the Cτ models from experiment 2 (Table 5.8 in Section 5.3.2), BERTac
outperforms GAST by 1.62 percentage points when models are trained on synthetic
proofs and by 2.79 percentage points when trained on human-written proofs.
However, this does not directly transfer to improved theorem proving ability as
shown by the end-to-end theorem proving accuracies in Table 5.9. In other words, a
model performing well on the tactic group experiment is likely to achieve a relatively
higher validation score as a Cτ model but is not necessarily a strong model for theorem
proving (this is discussed further in Section 6.3.1). A key characteristic of the metric
should be that it is indicative of theorem proving performance. Thus, there is still work
needed to meet this criterion with the tactic group-based metric proposed in this Thesis.
Another important aspect of the tactic group proxy metric is that it does not
address tactic arguments. Although the BERTac model performs better on the tactic
group experiment (as noted above), it performs significantly worse as an argument model.
This is shown by the BERTac CLC and CGC validation scores in Table 5.8. For example,
when trained on synthetic proofs, the BERTac CLC model scores 19.2 percentage points
lower than the GAST CLC model. For the CGC models, the difference is 4.33 percentage
points, in GAST’s favor. Therefore, a key improvement to the tactic group proxy metric
would be to also account for tactic arguments. Another option is to reconsider what the
metric should be based on entirely. For example, Huang et al. (2019) propose a metric
where the model predicts how many proof steps are left. This metric is neither based on
core tactics nor tactic arguments but might still indicate end-to-end theorem proving
performance.
**Research Question 2 How can a conceptually simple end-to-end theorem proving agent**
_be designed for tactic-based ITP theorem proving?_
The theorem proving agent designed in Section 4.3 directly addresses Research Question
2. The agent is conceptually simple in that the ITP theorem proving process is
interpreted as classic machine learning problems – three separate multi-class classification
problems. The three classifiers share the same overall architecture, making model design
easier. Furthermore, each classifier is independent of the others meaning models can be
combined in arbitrary ways to build a working agent.
The agent is similar to the agents designed for the HOList framework (Bansal
et al., 2019a). The primary difference is that this Thesis’ agent does not discard the local
context and does not consider argument classification as a series of independent binary
classification tasks. This makes training argument models less expensive and avoids the
problem of the model seeing few positive examples, as explained in Section 4.3.
A drawback of this approach is that the space of potential tactic applications is
restricted. The agent designed by Yang and Deng (2019), also used by First et al.
(2020), can build tactic applications from the entire Coq tactic space, making them more
flexible. Part of the reason why agents in this Thesis are not able to outperform either ASTactic (Yang and Deng, 2019) or TacTok (First et al., 2020) could be the limited
expressivity of the agent. Expressivity can, however, be improved by further developing
the tactic-building module, which can be done independently of the classifiers.
**Research Question 3 What novel embedding techniques can help models perform well in**
_CoqGym?_
Two novel embedding techniques are tested: Graph Convolutional Networks (GCNs)
(Kipf and Welling, 2017) and the BERT Transformer architecture (Devlin et al., 2018).
Supervised GAST and BERTac agents outperform corresponding random guessing agents.
The strongest BERTac agent proves 16.30% more theorems than the corresponding
random guesser, and the strongest GAST agent proves 37.28% more theorems than
the corresponding random guesser (an overview of main end-to-end theorem proving
results can be found in Table 5.6). This indicates that both GCN and off-the-shelf
BERT models can be deployed effectively for Auto-ITP in CoqGym. Moreover, when
comparing corresponding GAST and BERTac agents (i.e., when they both use the
same depth limit and beam width), the GAST agent proves 8.26% more theorems
than the BERTac agent. This can likely be attributed to GAST models significantly
outperforming BERTac models on argument prediction as shown in Table 5.8. Results
in this Thesis, therefore, suggest that GCN is more suited for Auto-ITP than BERT.
Note that tuning hyperparameters was not a significant concern in this Thesis but could
provide more nuances to this claim. Some shortcomings of the BERT-implementation in
this Thesis will be discussed further in Section 6.3.6. If these are addressed effectively,
BERT-based models could perhaps compete with the GCN models.
To gain further insights into Research Question 3, an FFN baseline model, similar to the one developed for the tactic group experiments, would shed more light on
the gains made from using GCNs and BERT. Experiment 1 indicates that GAST and
BERTac only marginally beat the FFN baseline on core tactic prediction. However, as
already explained under Research Question 1, this does not reveal the FFN baseline’s
ability to predict arguments. It is therefore not obvious how an FFN baseline model
would compare to GAST and BERTac models on end-to-end theorem proving.
A direct comparison between agents in this Thesis and ASTactic (Yang and Deng, 2019) or TacTok (First et al., 2020) is of limited value, as different theorem proving agent designs are deployed. An implementation using the same theorem proving agent could provide more insights into how the deep learning models themselves perform. So far, different research groups have designed unique agents, making it difficult to know what performance gains should be attributed to the agent design and what should be attributed to the deep learning models. The same is the case for this Thesis, as a more accessible machine learning interpretation of the theorem proving task was prioritized instead of relying on the agent designed by Yang and Deng (2019).
**Research Question 4 How does reinforcement learning compare to supervised learning**
_in CoqGym?_
To answer this Research Question, the deep Q-learning architecture QTac was designed
(see Section 4.4.3). _QTac leverages the modular theorem proving agent designed_
in Section 4.3 – it trains a Cτ model and relies on supervised models for argument
prediction, making experiments more manageable. QTac generally outperforms the
supervised learning agents, as shown in experiment 3 (Section 5.3.3), even when exposed
to less than 25% of the CoqGym training data. The best-performing QTac model proves
7.6% more theorems than the best-performing supervised agent. This shows that a deep
reinforcement learning method, like deep Q-learning, can be leveraged effectively on the
Auto-ITP problem.
A key aspect of QTac is how it generates more training data from existing theorems by learning from both successful and failed proofs. Every intermediate proof step
is a potential learning experience for QTac, and a single explorative proof procedure can
generate tens of such proof steps, as explained in Section 5.1.3.
It would be interesting to see how a deep reinforcement learning agent without any imitation-style training performs. Note that results from Bansal et al. (2019b) indicate that combining reinforcement learning with imitation training is superior in HOList, and similar results would likely hold for deep reinforcement learning here. Still, it would shed more light on the learning process of deep reinforcement learning Auto-ITP models.
##### 6.2 Evaluation of Goal
The Goal for this Master’s Thesis was the following:
**Goal Further progress machine learning applied to formal reasoning by testing new**
_machine learning techniques on the Auto-ITP task._
Two new deep learning methods have been implemented: A GCN-based and a
BERT-based method. GNN-based models have been used in the Auto-ITP context before
(Paliwal et al., 2020), but not in CoqGym. However, the related TreeLSTM method has
been used in CoqGym (Yang and Deng, 2019; First et al., 2020). GCNs are therefore
not an entirely novel approach but rather a natural next step. Applying BERT is more
novel. It was inspired by several related works applying Natural Language Processing
(NLP) techniques to mathematics and formal reasoning (Rabe et al., 2020; Lample and
Charton, 2020; Polu and Sutskever, 2020) (see Section 3.4.1). When trained to imitate
human proofs, these techniques outperform corresponding random guessing agents by a
large margin – 16.30% for the BERT-based agent and 37.28% for the GCN-based agent.
As part of the work in this Thesis, a new theorem proving agent is developed.
This does not directly apply to the Goal of the Thesis but is a step towards better
understanding theorem proving from a machine learning perspective. This Master’s
Thesis argues that this is important, and it was therefore prioritized. A proxy metric
was developed, which also does not directly apply to the Goal of the Thesis. However, it
allows easier testing of models, which is helpful in the challenging theorem proving domain.
In addition, a deep Q-learning agent was developed and trained using a replay
memory. This model further improved results by 7.6%, showing that it is possible to use
deep reinforcement learning effectively on the Auto-ITP task. This is the first time deep
reinforcement learning has been applied to ITP theorem proving. It deals with the
problem of data scarcity, which is crucial. This is pointed out by Bansal et al. (2019b)
and in the context of theorem synthesis (Wang and Deng, 2020) (see Section 3.4.2). It
also lets the agent find a unique proof strategy, effectively dealing with potential noisy
human proof styles across different formalization projects (further discussed in Section
6.3.3).
##### 6.3 Further Discussion
Further discussion on findings from the experiments and relevant topics now follows. Section 6.3.1 takes a closer look at GAST versus BERTac Cτ models, explaining some of the reasons why it does not help to replace the GAST Cτ model with a (higher validation score) BERTac Cτ model (as shown by results in Section 5.3.2). A discussion of the QTac training methodology is included in Section 6.3.2. Proof style – based on core tactic use – is discussed in Section 6.3.3. The CoqGym dataset is further discussed in Section 6.3.4, where the differences between results in CoqGym versus other Auto-ITP frameworks are addressed. Section 6.3.5 drills down specifically on the synthetic proof data in CoqGym, discussing why this dataset leads to better results for imitation models than human-written proofs. Transformer models applied to formal logic are discussed in Section 6.3.6, where some of the shortcomings of the BERTac models are brought up. Finally, Section 6.3.7 notes some comparisons to Hammers and Section 6.3.8 briefly discusses proof tree traversal.
**6.3.1 Cτ Predictions**
To better understand the predictions made by supervised learning models, confusion
matrices for the weakest and strongest supervised Cτ models are plotted. Figure 6.1a
shows the confusion matrix for the GAST Cτ model trained on human-written proofs.
Figure 6.1b shows the confusion matrix for the BERTac Cτ model trained on synthetic
proofs. These are built from the predictions made during validation. Only a subset of
core tactics is included for the sake of readability. See Section 2.2.5 for an explanation of
relevant tactics.
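For reference, a row-normalised confusion matrix of this kind can be produced from the validation predictions roughly as follows; the plotting details are illustrative and not the exact code used in this Thesis.

```python
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix

def plot_tactic_confusion(y_true, y_pred, tactic_names):
    # Row-normalised confusion matrix: each row (true tactic) sums to one
    # over all predicted tactics before any subset of tactics is displayed.
    cm = confusion_matrix(y_true, y_pred,
                          labels=list(range(len(tactic_names))),
                          normalize="true")
    fig, ax = plt.subplots()
    im = ax.imshow(cm, vmin=0.0, vmax=1.0)
    ax.set_xticks(range(len(tactic_names)))
    ax.set_xticklabels(tactic_names, rotation=90)
    ax.set_yticks(range(len(tactic_names)))
    ax.set_yticklabels(tactic_names)
    ax.set_xlabel("Predicted tactic")
    ax.set_ylabel("True tactic")
    fig.colorbar(im)
    return fig
```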
Both models are biased towards predicting apply and rewrite. This indicates
that the Cτ models are dependent on effective CGC models in order to function well, as
both apply and rewrite are dependent on arguments from the global context to work
with the theorem proving agent. This is especially true for the GAST model. Note
that the proof step datasets are unbalanced (explained in Section 4.2), meaning the
strong bias towards predicting only a few tactics is expected and still allows the models
to achieve a validation score greater than 30% (as shown in Table 5.8). The BERTac
model predicts a wider span of tactics. This could partly explain why end-to-end theorem proving performance does not necessarily improve when changing from a pure GAST-based agent to a combination where Cτ is a BERTac model (shown in Table 5.9). Even though Cτ has a strong validation score, it is not going to help the agent if it proposes tactics that are not effective in driving the theorem proving process forward. For instance, BERTac Cτ is able to predict the tactic split reasonably well, which is not the case for the GAST Cτ model, as shown in the confusion matrices. This leads to a higher validation score but is not necessarily helpful when proving theorems, as split simply splits a subgoal into two new subgoals. split is also not a common tactic in human proofs – it is used in only 2.4% of human-written proofs. This is much less than more popular tactics like apply and rewrite (used in 19.47% and 11.07% of human proofs, respectively).
Figure 6.1: Confusion matrices for Cτ models. The true tactic is denoted along the vertical axis and the predicted along the horizontal. Values are normalized to a probability between zero and one. Only a subset of tactics is included, meaning each row does not necessarily add up to one. (a) Confusion matrix for the GAST Cτ model trained on human-written proofs. (b) Confusion matrix for the BERTac Cτ model trained on synthetic proofs. (Tactics shown: constructor, omega, intro, intros, simpl, unfold, right, apply, rewrite, split, exists.)
The importance of the CGC model, as explained above, is interesting. Predicting global context arguments is essentially the same as the premise selection problem
(explained in the context of Hammers in Section 3.3.2). This problem has been pointed
out as a critical part of theorem proving in several contexts (Hoder and Voronkov, 2011;
Gauthier and Kaliszyk, 2015; Wang et al., 2017) (for example in Hammers (Gauthier
and Kaliszyk, 2015) and traditional ATP systems (Hoder and Voronkov, 2011)). This is,
as shown here, also a critical problem for the Auto-ITP agents.
**6.3.2 QTac Training**
One of the main advantages of deploying deep reinforcement learning for Auto-ITP is
that it deals with data scarcity. This has been pointed out as a key bottleneck in theorem
proving (Wang and Deng, 2020). The QTac model in this Thesis tackles this because it
learns from not just successful proofs, but also failed proofs. As explained in Section
5.1.3, it generates around 500k proof step experiences from 10k proof attempts (less than
25% of CoqGym’s training set). This is much more data than the 296k proof steps in
the combined CoqGym human-written and synthetic datasets. Of course, with many
failed proof attempts, there is likely to be a lot of noise in the replay memory dataset. To deal with this, several techniques can be applied. QTac uses a target network to stabilize training (Mnih et al., 2015) and only trains on a subset of the replay memory, with a guarantee that successful proofs will be part of the subset (as explained in Section 4.4.3).
However, it is not clear from experiments exactly how QTac responds to different training methods. For instance, applying more regularization to the model does not
yield any conclusive indications for how this affects training. Furthermore, exposing
the agent to fewer proofs more times (i.e., the “deep” mode, see Section 4.4.3) versus
more proofs one time (i.e., the “wide” mode, see Section 4.4.3) also does not conclusively
indicate which one is preferable as results are similar (see Table 5.11 for QTac results).
Time constraints meant that more QTac experiments had to be left out, leaving the
above questions for future study.
**6.3.3 Proof Style**
A way to get more insights into how agents prove theorems is by looking at the frequency
of how often they deploy different tactics. Figure 6.2 plots tactic frequency for successful
proofs, for the GAST and QTac agents and human-written Coq proofs. The random
guessing agent is also included for comparison. Only a subset of the most common
tactics are included, for the sake of readability. See Section 2.2.5 for an explanation of
important tactics. The following observations can be made.
The random guesser relies primarily on tactics that do not use arguments. Instead, it proves theorems by leveraging Coq's internal automatic engines. For the random guesser to successfully include arguments, it would have to guess both the core tactic and the argument correctly, which is difficult.

Both the GAST and QTac agents make fair use of the internal automatic engine auto. This is expected, as auto is a powerful tactic capable of proving non-trivial subgoals automatically. Yang and Deng (2019) report that auto can prove 2.9% of the whole CoqGym test set by itself. Such engines are used less often in human proofs.

Both the GAST and QTac agents use intro and intros actively. This is expected, as intros is also a popular tactic in human proofs. In addition, the agents
rely on local hypotheses to supply local context arguments, not direct references to terms
contained in subgoals. This does not hinder the agent, as was mentioned in Section
2.2.2, as long as the local context is modified so that subgoal terms are part of the local
hypotheses. This is achieved by using intro and intros.
The QTac agent uses induction 137.42% more often than the GAST agent. This is interesting because induction is a tactic dependent on arguments from the local context to work (as shown in Table 2.1). The CLC model used by QTac has a significantly higher validation score than the CGC it uses (shown in Table 5.8). A weak CGC model causes tactics dependent on arguments from the global context (e.g., apply or rewrite) to fail more often, likely making QTac tend towards the local context-dependent tactic induction instead. In other words, QTac appears to adopt its own proof style, in a way not possible for imitation-based agents.
Figure 6.2: Frequency of core tactic use for different proof agents. The plot is stacked for improved readability. (Series: QTac, GAST, Random Guesser, Human; x-axis: core tactics, including apply, assumption, auto, cbn, cbv, congruence, clear, constructor, destruct, discriminate, easy, eauto, elim, exists, generalize, hnf, induction, intro, intros, intuition, omega, red, reflexivity, revert, rewrite, simpl, split, subst, symmetry, tauto, trivial, and unfold; y-axis: frequency in %.)
**6.3.4 The CoqGym Dataset**
Results in CoqGym may in part be explained by the way the CoqGym dataset is split
between train, validation, and test sets. It is important to realize that the theorems in
the different splits come from different Coq projects. Models, therefore, need to learn
how to prove theorems independent of Coq projects to be able to generalize well to the
test set. However, this can be difficult as many aspects of Coq projects differ. Yang and
Deng (2019) point out that the average number of theorems in the global context varies
significantly across different projects. For instance, in the CompCert project the average number
is 13,340 and in InfSeqExt it is 661 (Yang and Deng, 2019). Table 5.12 also supports
this, as it shows how performance varies significantly across different Coq projects. For
example, the best-performing QTac agent can prove 50.78% of the coqrel project and
only 6.54% of the verdi-raft project.
Perhaps it would be more reasonable to make the Auto-ITP task easier by focusing exclusively on one project. The model would train on a subset of theorems
from that specific project before being tested on the same project. In this context, the
problem of data leakage has to be addressed. In the Hammer context, this is dealt with
by human-chronological corpus building (described in Section 3.3.1) (Blanchette et al.,
2016). The performance difference between models in CoqGym and other Auto-ITP
frameworks, like HOList, might also be explained this way. The datasets in HOList
are sorted in a human-chronological ordering (Bansal et al., 2019a), meaning that
generalization from training projects to test projects might be less of an issue in this
framework. Results between different Auto-ITP frameworks are therefore hard to compare directly.
Tactics dependent on arguments from the global context are some of the most
popular tactics in human-written proofs. For example, apply, rewrite and unfold are
collectively used in 39.94% of the human proofs in the training set (see Table 4.1 for
an overview of tactic frequency in human proofs). The average number of theorems in
the global context across all projects is 10,350.3 (Yang and Deng, 2019), more than
one hundred times more than the ten theorems included in the global context supplied to ASTactic, TacTok, and the agents in this Thesis. This global context restriction is likely to be a significant limitation.
**6.3.5 CoqGym’s Synthetic Data**
As explained in Section 3.2.4, CoqGym ships with synthetic data extracted from human
proofs. Results from end-to-end theorem proving experiments in Section 5.3.2 show
that this dataset increases performance over human-written proofs. However, it is
not apparent exactly why this is. It is important to note, as also noted by Yang and
Deng (2019), that because the synthetic proof data relies on tactics extracted from
human proofs, it should not lead to a radically different theorem proving strategy than
human-written proofs. The data does not serve as a replacement for reinforcement
learning but instead provides more human-like labeled training data in CoqGym.
However, the unique characteristics of the synthetic proofs can provide insights
into how this data affects models. Synthetic proofs are finished using the auto tactic.
This is interesting. It could be that this leads to less noise when training models because
the data is consistent about how to solve simple subgoals. Moreover, synthetic proofs
start by moving subgoals to the list of local hypotheses. This leads to a larger space of
potential local context arguments (Figure 5.1 explicitly shows this). Perhaps this is why
_CLC models (as the only model) perform better when trained on human-written proofs_
(shown in Table 5.8).
**6.3.6 Tailoring Transformer Models to Formal Expressions**
Formal expressions are not the same as natural human language. This clearly shows
when comparing pre-trained versions of BERTac to one without pre-trained weights (see
validation plots in Figure 5.5). The pre-trained versions perform worse across the board,
meaning that NLP-tailored pre-training seems not to transfer well to Coq expressions.
Pre-training tailored to formal expressions – similar to the skip-tree task (Rabe et al., 2020) and the GPT-f system (Polu and Sutskever, 2020) – is a way to address this.

When deploying the model as an argument classifier (in which multiple expressions are handled), it performs significantly worse than the GCN models.
This points to some potential weak points of the BERTac model. An off-the-shelf
separation token might not be well-suited for Auto-ITP. It could also be a problem
for BERTac that the concatenation of several Coq expressions results in sequences too
large for the BERT model. The preprocessing step where identifiers and expressions
are concatenated (explained in Section 4.4.2) could be modified too, by introducing a
designated BERT-style token for this purpose.
**6.3.7 Comparison to Hammers**
Hammers (described in Section 3.3) are a radically different way of automating ITP
systems. So far, CoqHammer (Czajka and Kaliszyk, 2018) significantly outperforms
Auto-ITP models in Coq. Yang and Deng (2019) report that CoqHammer is able to
prove 24.8% of CoqGym’s test set. This is 11.9 percentage points more than TacTok and
14.06 more than the best-performing QTac agent. In other words, highly optimized ATP
systems, using classical inference techniques (explained in Section 2.1), prove hard to
outperform in Coq theorem proving for now. However, results can not be compared
directly as CoqHammer deploys a premise selection step using machine learning models
trained in a human-chronological way, not using CoqGym’s training data. It could even
be that part of CoqHammer’s premise selection training data overlaps with CoqGym’s
test data.
Integration between Hammers and Auto-ITP models has significantly boosted
results in CoqGym, with ASTactic improving results by 17.8 percentage points when
CoqHammer calls are interleaved with tactic prediction (Yang and Deng, 2019). A
similar integration for agents in this Thesis is possible but was not the focus of the
Thesis. It would be interesting to test, as it allows further investigation into proof style.
In particular, it would reveal the overlap between what theorems CoqHammer and
Auto-ITP agents can prove. Perhaps, for example, QTac overlaps more with CoqHammer
than supervised agents and would therefore benefit less from integrated Hammer calls.
**6.3.8 Proof Tree Traversal**
An interesting topic not focused on in this Thesis is proof tree traversal. Agents deploy
Depth-First Search following Yang and Deng (2019), and First et al. (2020). In HOList,
Breadth-First Search has been used (Bansal et al., 2019a) and TacticToe agents have been
equipped with more sophisticated heuristic-based strategies – one based on A[∗] (Gauthier
et al., 2017) and one based on Monte Carlo tree search (Gauthier et al., 2020). This latter
modification resulted in significant improvements in TacticToe (explained in Section
3.2.1), indicating that proof tree traversal has an impact on performance. Furthermore,
Yang and Deng (2019) point out that ASTactic typically finds much shorter proofs than
typical human-written proofs. This, and the fact that a lower depth limit in experiment
2 (Section 5.3.2) improves accuracy, indicates that traversing far down a branch in the
proof tree is not desirable. Therefore, using a Breadth-First Search, like in HOList, could
potentially improve results in CoqGym.
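To make the contrast concrete, the following is a minimal sketch of a breadth-first proof search loop. The environment interface (predict_tactics, apply_tactic) is a hypothetical stand-in for the tactic model and the CoqGym-style interaction layer; replacing the FIFO queue with a stack would give the depth-first strategy used by the agents in this Thesis.

```python
from collections import deque

def bfs_proof_search(initial_state, predict_tactics, apply_tactic,
                     beam_width=20, max_expansions=300):
    """Breadth-first proof search over a proof tree.

    `predict_tactics(state, k)` and `apply_tactic(state, tactic)` are hypothetical
    stand-ins for the tactic model and the ITP interaction layer.
    """
    queue = deque([initial_state])          # FIFO: shallow proof states are expanded first
    expansions = 0
    while queue and expansions < max_expansions:
        state = queue.popleft()
        expansions += 1
        for tactic in predict_tactics(state, beam_width):
            result = apply_tactic(state, tactic)
            if result is None:              # the tactic failed; try the next candidate
                continue
            if result.is_proved:            # no goals left: return the finished proof
                return result.proof
            queue.append(result)            # otherwise keep searching from the new state
    return None                             # search budget exhausted without a proof
```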
## Chapter 7
# Conclusion and Future Work
To conclude the Master’s Thesis, Section 7.1 summarizes contributions and Section 7.2
ends with some notes on possible avenues for future research.
##### 7.1 Contributions
This Master’s Thesis designs a new proxy metric for the CoqGym framework and argues
why such metrics are helpful for the theorem proving domain. The proxy metric is based
on grouping related tactics together into tactic groups (see Section 4.2). The grouping
allows the tactic dataset to become more balanced and emphasizes proof strategy rather
than specific tactics. This metric can provide a step towards easier prototyping of
Auto-ITP models. Further improvements will be to also include tactic arguments in the
proxy metric.
A new theorem proving agent is designed for Interactive Theorem Proving (ITP)
(Section 4.3). This agent turns the ITP proof procedure into three separate multi-class
classification problems. Each classification problem focuses on one of three key
aspects of tactic applications – the core tactic, the local context, and the global
context. This provides a natural machine learning interpretation of the proof procedure,
making it suited for machine learning research. Building tactics based on the output
from each classifier is done in a separate module and can be tailored (e.g., building
tactics consisting of more than one argument) independently. This agent is not
unique to Coq, as most ITP systems implement an almost identical proof procedure
with core tactics and a local and global context. Furthermore, each classifier operates independently of the others, meaning the classifiers can be combined in any arbitrary way.
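A minimal sketch of this modular design is given below. The classifier interfaces and the tactic-to-argument mapping are illustrative placeholders, not the exact grouping used in the Thesis, but they show how three independent predictions can be assembled into a single tactic application in a separate module.

```python
def build_tactic(goal, local_context, global_context,
                 core_clf, local_clf, global_clf):
    """Assemble one tactic application from three independent classifiers.

    The classifier interfaces and the tactic groups below are hypothetical
    placeholders used only to illustrate the modular design.
    """
    core = core_clf.predict(goal)                       # e.g. "apply", "induction", "intro"
    if core in {"induction", "destruct", "rewrite"}:    # tactics taking a local-context argument
        return f"{core} {local_clf.predict(goal, local_context)}"
    if core in {"apply", "unfold"}:                     # tactics taking a global-context argument
        return f"{core} {global_clf.predict(goal, global_context)}"
    return core                                         # argument-free core tactic
```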
Experiments in this Thesis focus on two deep learning embedding techniques.
One is a Graph Neural Network (GNN) technique based on Graph Convolutional
Networks (GCN) (Kipf and Welling, 2017) and the end-to-end graph classification
architecture DGCNN (Zhang et al., 2018). This implementation uses GCN message
passing to obtain node embeddings of the Abstract Syntax Tree (AST) representations
of Coq expressions before pooling node embeddings to obtain a graph representation
(see Figure 4.5 for the model architecture). The other is the Transformer model BERT
(Devlin et al., 2018). This model obtains embeddings by leveraging self-attention
techniques directly on Coq expressions (see Figure 4.6 for the model architecture).
Furthermore, BERT is tested with and without pre-trained weights. Pre-trained weights
are obtained from Natural Language Processing (NLP) tasks, and results show that
this pre-training process does not transfer well to formal expressions (shown in Section
5.3.2). This is the first time GCNs and Transformers have been used for tactic-based
ITP theorem proving.
Several combinations of supervised GCN and BERT models are tested for end-to-end theorem proving (results are presented in Section 5.3.2). The GCN models
outperform the BERT models. This can be attributed to GCN models working significantly better as argument classifiers. Several proof tree depth limits and beam widths are
tested. Results are consistent with previous results (Yang and Deng, 2019; First et al.,
2020): lowering the depth limit to 10 and increasing the beam width to 20 improves results.
A deep reinforcement learning agent is developed for CoqGym (see Figure 4.7
for the agent architecture). This agent is based on deep Q-learning (Mnih et al., 2015)
and trains by interleaving replay memory training with imitation training. The agent
trains a core tactic classifier and relies on supervised models for argument prediction.
Analysis of the deep Q-learning agent’s proof style in Section 6.3 reveals that it relies
more on the induction tactic than the supervised learning agents do. This is interesting, as the
induction tactic is dependent on arguments from the local context to work, and the
local context classifier paired with the deep Q-learning agent is stronger than the global
context classifier. This indicates that the deep Q-learning agent adapts its proof style so
that it leverages its strongest argument model. Furthermore, it trains on around 400k
proof steps generated from less than 25% of CoqGym’s test set showing how a deep
reinforcement learning approach can tackle the problem of data scarcity, as discussed
in Section 6.3.2. Results presented in Section 5.3.3 show that this agent is capable of
proving 7.55% more theorems than corresponding supervised agents.
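The sketch below illustrates the interleaving of replay-memory updates and imitation updates described above. It is only a sketch: env.attempt_proof, q_learning_loss and imitation_loss are hypothetical helpers passed in as arguments, standing in for the agent's actual components rather than reproducing them.

```python
import random

def train_deep_q_agent(q_net, optimizer, env, human_proof_steps,
                       q_learning_loss, imitation_loss,
                       episodes=1000, imitation_every=5, batch_size=64):
    """Interleave replay-memory training with imitation training (sketch)."""
    replay = []
    for episode in range(episodes):
        # Roll out one proof attempt and store (state, action, reward, next_state) tuples.
        replay.extend(env.attempt_proof(q_net))
        if not replay:
            continue
        batch = random.sample(replay, min(batch_size, len(replay)))
        loss = q_learning_loss(q_net, batch)            # TD error on replayed transitions
        if episode % imitation_every == 0:              # periodically imitate human proof steps
            demo = random.sample(human_proof_steps, min(batch_size, len(human_proof_steps)))
            loss = loss + imitation_loss(q_net, demo)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```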
The best-performing agent in this Master’s Thesis scored 2.16 percentage points
lower than state-of-the-art (First et al., 2020). However, results are difficult to compare
directly as a new agent was deployed in this Thesis. As a more appropriate comparison,
a random guessing baseline agent was tested. All agents prove significantly more
theorems than corresponding random guessing agents: BERT-based agent 16.30%
more, GCN-based agent 37.28% more, and deep Q-learning agent 47.73% more. As an
interesting side note, best-performing BERT, GCN, and deep Q-learning agents prove
63.06%, 103.67%, and 119.18% more theorems than Coq’s best-performing internal
automatic engine easy (Yang and Deng, 2019), respectively.
##### 7.2 Future Work
Several directions for future work are possible and will be summarized here. Some directly
build on work in this Thesis, some are inspired by work in this Thesis, while others are
completely different avenues to explore.
**Further Exploration of Deep Reinforcement Learning**
A bottleneck for training machine learning models on formal proof data is the scarcity of
labeled data. This hinders supervised learning. A way to generate more training data
from theorems is by using reinforcement learning. This has been explored in this Thesis
and by others (Wu et al., 2020; Bansal et al., 2019b). An effective theorem proving model
will likely leverage a combination of supervised and reinforcement training and can be
an exciting path to explore further. Many directions are possible. A potential starting
point is to apply deep reinforcement learning, which has shown promising initial results
in this Thesis, to not just core tactic prediction but also argument prediction. Another
is to explore the balance between exploration and exploitation further. In this context,
the ideas of training the model in a “deep” vs. “wide” mode, as described in Section
4.4.3, could be better understood. Also, training models on more proofs can be tested
and will lead to a better understanding of how regularization affects the model. Another
way to gain more insights into the deep reinforcement learning process is to gather more
statistics during training. For instance, a sliding average over the last n proof attempts
could reveal more about the convergence rate of different models.
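For instance, such a sliding success rate over the last n attempts could be logged with something as simple as the following sketch.

```python
from collections import deque

class SlidingSuccessRate:
    """Success rate over the last n proof attempts (a simple monitoring sketch)."""

    def __init__(self, n=100):
        self.window = deque(maxlen=n)   # older attempts fall out automatically

    def update(self, proved: bool) -> float:
        self.window.append(1.0 if proved else 0.0)
        return sum(self.window) / len(self.window)

# Usage during training: log `tracker.update(attempt_succeeded)` after every attempt.
```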
**Focus on Premise Selection in the Auto-ITP Context**
Analysis of theorem proving agents and work by others (Hoder and Voronkov, 2011;
Gauthier and Kaliszyk, 2015; Wang et al., 2017) point to premise selection as a critical
aspect of theorem proving. For Auto-ITP, this can be interpreted as global context
classification. This is also an essential aspect of Hammers (Gauthier and Kaliszyk, 2015)
and traditional ATP systems (Hoder and Voronkov, 2011), making it an essential topic
in its own right. A new way to study this problem is as a standalone problem in the
Auto-ITP context. For example, one can define the only available core tactic to be
anything similar to Coq’s apply tactic and focus on designing the best premise selection
model for this tactic. This can be deployed as a simple Auto-ITP agent focused solely on
premise selection.
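A sketch of such a standalone premise selection model is given below. The embedding dimension and the scoring head are placeholders, and the goal and premise embeddings are assumed to come from an encoder such as the GCN or BERT models used in this Thesis.

```python
import torch
import torch.nn as nn

class PremiseScorer(nn.Module):
    """Score (goal, premise) pairs for an apply-style tactic (illustrative sketch)."""

    def __init__(self, dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, goal_emb, premise_embs):
        # goal_emb: (dim,), premise_embs: (num_premises, dim)
        goal = goal_emb.view(1, -1).expand(premise_embs.size(0), -1)
        return self.mlp(torch.cat([goal, premise_embs], dim=-1)).squeeze(-1)

# Usage: scores = scorer(goal_emb, premise_embs); tactic = f"apply {names[scores.argmax()]}"
```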
**Further Developing Proxy Metrics and Theorem Proving Agents**
This Master’s Thesis argues that the development of theorem proving proxy metrics
can benefit Auto-ITP research. Such metrics need to account for both core tactics and
tactic arguments. Further developing standardized theorem proving agents can also be
a valuable topic for future research, as an even more unambiguous machine learning
interpretation of the theorem proving task is desirable.
Further developing the agent proposed in this Thesis is a possible starting point. One
concrete way to improve this agent is to implement the tactic application ranking scheme
used in Proverbot9001 (Sanchez-Stern et al., 2020). Sanchez-Stern et al. (2020) show
that it is beneficial to combine the ranking of core tactics and tactic arguments before
deciding on a tactic application. A similar idea can be pursued for this Thesis’ agent.
Given its modular design, this will be a reasonably straightforward customization to
make. Another concrete way to further develop the agent is to build tactics containing
multiple arguments from both the local and global context.
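The sketch below illustrates the idea of ranking complete tactic applications by a joint score instead of committing to a core tactic first. The probability dictionaries are assumed to be produced by the core tactic and argument classifiers; this is a sketch of the idea behind Proverbot9001-style ranking, not its implementation.

```python
import math

def rank_tactic_applications(core_probs, argument_probs, top_k=5):
    """Rank complete tactic applications by a joint score of core tactic and argument.

    `core_probs` maps core tactics to probabilities; `argument_probs` maps a core
    tactic to an {argument: probability} dict. Argument-free tactics get no argument.
    """
    scored = []
    for core, p_core in core_probs.items():
        for arg, p_arg in argument_probs.get(core, {None: 1.0}).items():
            tactic = core if arg is None else f"{core} {arg}"
            scored.append((math.log(p_core) + math.log(p_arg), tactic))
    scored.sort(reverse=True)
    return [tactic for _, tactic in scored[:top_k]]
```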
**Improvements on BERT and Other Transformer Models Applied to**
**Auto-ITP**
Several improvements on the BERT-based model are possible as this Thesis only deployed
an off-the-shelf solution. Some key topics here (as discussed in Section 6.3.6) are
tokenization, handling multiple expressions, dealing with expression identifiers, and pretraining. Furthermore, other Transformer models can be explored too. Several related
works focus on Transformer-based models tailored to formal logic and mathematics (Rabe
et al., 2020; Lample and Charton, 2020; Polu and Sutskever, 2020) and can be built on
for future research along these lines.
**Integration with Hammers**
Integration between Hammers (Blanchette et al., 2016) and Auto-ITP models is yet
another interesting topic. Yang and Deng (2019) show how calls to CoqHammer (Czajka
and Kaliszyk, 2018) significantly boost results. On the other hand, Hammer calls are
expensive, as pointed out by Gauthier et al. (2017) in the context of the TacticToe system.
Understanding how to best leverage both Hammers and Auto-ITP models is therefore
relevant. One option for future work in this direction is to develop an agent leveraging
meta-classifiers responsible for deciding when to call the Hammer versus the Auto-ITP
model. Hammers are also interesting because they can be used to compare the overlap of
theorems classic inference techniques (i.e., the ATP inference) and tactic-based machine
learning models (i.e., the Auto-ITP models) solve. It would be interesting to investigate
this overlap for both imitation-style Auto-ITP agents and deep reinforcement learning
agents to understand better the difference between proof styles in these two settings.
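One such meta-classifier could be as simple as the routing sketch below; all four components are hypothetical stand-ins, and the threshold trades off the cost of a hammer call against tactic-level search.

```python
def prove_with_meta_classifier(goal_state, meta_clf, hammer, auto_itp_agent,
                               hammer_threshold=0.5):
    """Route a goal either to the expensive hammer or to the Auto-ITP agent (sketch).

    `meta_clf` estimates the chance that a single hammer call closes the goal.
    """
    if meta_clf.predict_proba(goal_state) >= hammer_threshold:
        return hammer.try_prove(goal_state)      # one expensive ATP/hammer call
    return auto_itp_agent.search(goal_state)     # cheaper tactic prediction loop
```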
**Unified Benchmarks and Frameworks**
Auto-ITP research (and machine learning applied to formal mathematics at large) has
been studied in the context of several systems. Some have focused on HOL4 (Gauthier
et al., 2017, 2020), some on HOL Light (Bansal et al., 2019a; Paliwal et al., 2020; Bansal
et al., 2019b), and some on Coq (Yang and Deng, 2019; Huang et al., 2019; First et al.,
2020; Sanchez-Stern et al., 2020). Similar work has also been based on the MetaMath
system (Wang et al., 2017; Polu and Sutskever, 2020). Even in parallel to the writing
of this Master’s Thesis, two new Auto-ITP frameworks have been proposed: LeanStep
(Han et al., 2021) and IsarStep (Li et al., 2021)[1]. While this is encouraging, it also makes
it harder to compare models and performance. A unified system and benchmark could
therefore be useful. Very recently, such work has started to take place, with OpenAI
working on a common benchmark across different formal systems[2]. This is promising
for the field and an exciting avenue for further research. As discussed in Section 6.3.4, an
important consideration for such a benchmark is the choice between human-chronological
and more traditional train-validation-test splits.
1This work is not mentioned in Chapter 3, as it was published in parallel to writing the Master’s Thesis.
**Combining Autoformalization and Formal Reasoning**
Another fascinating topic is described by Szegedy (2020). Szegedy (2020) proposes
autoformalization as a natural extension of machine learning-based formal reasoning.
Most information is not stated formally but rather informally. An autoformalization
system would be able to map informal information to formal expressions. Furthermore,
one can imagine an end-to-end system, where an autoformalization module takes in
informal text, maps it to logical expressions, and an Auto-ITP-like system can reason
over the formal expressions effectively. The output of the formal reasoning procedure
could then be mapped back to informal human-readable information. As argued by
Szegedy (2020), such an end-to-end system would combine strong NLP models with
formal reasoning capabilities.
2https://github.com/openai/miniF2F
# Bibliography
Jesse Alama, Tom Heskes, Daniel Kühlwein, Evgeni Tsivtsivadze, and Josef Urban.
Premise Selection for Mathematics by Corpus Analysis and Kernel Methods. Journal
_of Automated Reasoning, 52(2):191–213, 2014._
Grzegorz Bancerek, Czesław Byliński, Adam Grabowski, Artur Korniłowicz, Roman
Matuszewski, Adam Naumowicz, and Karol Pąk. The Role of the Mizar Mathematical
Library for Interactive Proof Development in Mizar. Journal of Automated Reasoning,
61:9–32, 2018.
Kshitij Bansal, Sarah Loos, Markus Rabe, Christian Szegedy, and Stewart Wilcox.
HOList: An Environment for Machine Learning of Higher-Order Theorem Proving. In
_International Conference on Machine Learning, pages 454–463, 2019a._
Kshitij Bansal, Sarah M Loos, Markus N Rabe, and Christian Szegedy. Learning to
Reason in Large Theories without Imitation. arXiv preprint arXiv:1905.10501, 2019b.
Bruno Barras, Samuel Boutin, Cristina Cornes, Judicaël Courant, Jean-Christophe
Filliâtre, Eduardo Giménez, Hugo Herbelin, Gérard Huet, César Muñoz, Chetan
Murthy, Catherine Parent, Christine Paulin-Mohring, Amokrane Saïbi, and Benjamin
Werner. The Coq Proof Assistant Reference Manual : Version 6.1. Technical report,
INRIA, 1997.
Wolfgang Bibel. Early History and Perspectives of Automated Deduction. In Annual
_Conference on Artificial Intelligence, pages 2–18. Springer, 2007._
Jasmin Christian Blanchette, Cezary Kaliszyk, Lawrence C Paulson, and Josef Urban.
Hammering towards QED. Journal of Formalized Reasoning, 9(1):101–148, 2016.
Sascha Böhme and Tobias Nipkow. Sledgehammer: Judgement Day. In International
_Joint Conference on Automated Reasoning, pages 107–121. Springer, 2010._
Robert Boyer. The QED Manifesto. Automated Deduction - CADE, 12:238–251, 1994.
Haogang Chen, Daniel Ziegler, Tej Chajed, Adam Chlipala, M Frans Kaashoek, and
Nickolai Zeldovich. Using Crash Hoare Logic for Certifying the FSCQ File System.
In Proceedings of the 25th Symposium on Operating Systems Principles, pages 18–37,
2015.
Adam James Chlipala. Certified Programming with Dependent Types: A Pragmatic
_Introduction to the Coq Proof Assistant. MIT Press, 2013._
Alonzo Church. A note on the Entscheidungsproblem. The Journal of Symbolic Logic, 1
(1):40–41, 1936.
Sylvain Conchon and Jean-Christophe Filliâtre. A Persistent Union-Find Data Structure.
In Proceedings of the 2007 workshop on Workshop on ML, pages 37–46, 2007.
Łukasz Czajka and Cezary Kaliszyk. Hammer for Coq: Automation for Dependent Type
Theory. Journal of Automated Reasoning, 61(1-4):423–453, 2018.
Martin Davis, George Logemann, and Donald Loveland. A Machine Program for Theorem-Proving. Communications of the ACM, 5(7):394–397, 1962.
David Delahaye. A Tactic Language for the System Coq. In International Conference on
_Logic for Programming Artificial Intelligence and Reasoning, pages 85–95. Springer,_
2000.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A
Large-Scale Hierarchical Image Database. In Conference on Computer Vision and
_Pattern Recognition, pages 248–255. IEEE, 2009._
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training
of Deep Bidirectional Transformers for Language Understanding. _arXiv preprint_
_arXiv:1810.04805, 2018._
William Ewald. The Emergence of First-Order Logic. In The Stanford Encyclopedia of
_Philosophy. Metaphysics Research Lab, Stanford University, spring 2019 edition, 2019._
Matthias Fey and Jan E. Lenssen. Fast Graph Representation Learning with PyTorch
Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds,
2019.
Emily First, Yuriy Brun, and Arjun Guha. TacTok: Semantics-Aware Proof Synthesis.
_Proceedings of the ACM on Programming Languages, 4:1–31, 2020._
Emilio Jesús Gallego Arias. SerAPI: Machine-Friendly, Data-Centric Serialization for
Coq. Technical report, MINES ParisTech, 2016.
Thibault Gauthier and Cezary Kaliszyk. Premise Selection and External Provers for
HOL4. In Conference on Certified Programs and Proofs, pages 49–57, 2015.
Thibault Gauthier, Cezary Kaliszyk, and Josef Urban. TacticToe: Learning to Reason
with HOL4 Tactics. In 21st International Conference on Logic for Programming,
_Artificial Intelligence and Reasoning, volume 46 of EPiC Series in Computing, pages_
125–143. EasyChair, 2017.
Thibault Gauthier, Cezary Kaliszyk, Josef Urban, Ramana Kumar, and Michael Norrish.
TacticToe: Learning to Prove with Tactics. Journal of Automated Reasoning, pages
1–30, 2020.
Herman Geuvers. Proof assistants: History, ideas and future. Sadhana, 34(1):3–25, 2009.
Georges Gonthier. Formal Proof–The Four-Color Theorem. Notices of the American
_Mathematical Society, 55(11):1382–1393, 2008._
Georges Gonthier, Andrea Asperti, Jeremy Avigad, Yves Bertot, Cyril Cohen, François
Garillot, Stéphane Roux, Assia Mahboubi, Russell O’Connor, Sidi Biha, Ioana Pasca,
Laurence Rideau, Alexey Solovyev, Enrico Tassi, and Laurent Théry. A Machine-Checked Proof of the Odd Order Theorem. In International Conference on Interactive
_Theorem Proving, pages 163–179. Springer, 2013._
Mike Gordon. From LCF to HOL: A Short History. In Proof, language, and interaction,
pages 169–186, 2000.
Adam Grabowski, Artur Kornilowicz, and Adam Naumowicz. Mizar in a Nutshell.
_Journal of Formalized Reasoning, 3(2):153–245, 2010._
Thomas Hales, Mark Adams, Gertrud Bauer, Dat Dang, John Harrison, Truong Hoang,
Cezary Kaliszyk, Victor Magron, Sean McLaughlin, Thang Nguyen, Truong Nguyen,
Tobias Nipkow, Steven Obua, Joseph Pleso, Jason Rute, Alexey Solovyev, An Ta, Trân
Trung, Diep Trieu, and Roland Zumkeller. A formal proof of the Kepler conjecture. In
_Forum of mathematics, Pi, volume 5. Cambridge University Press, 2017._
Thomas C Hales. Introduction to the Flyspeck Project. In Dagstuhl Seminar Proceedings.
Schloss Dagstuhl-Leibniz-Zentrum für Informatik, 2006.
Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward W Ayers, and Stanislas Polu. Proof
Artifact Co-training for Theorem Proving with Language Models. arXiv preprint
_arXiv:2102.06203, 2021._
John Harrison. HOL Light: A tutorial introduction. In International Conference on
_Formal Methods in Computer-Aided Design, pages 265–269. Springer, 1996._
John Harrison. Floating Point Verification in HOL Light: The Exponential Function.
_Formal Methods in System Design, 16(3):271–305, 2000._
John Harrison. HOL Light: An overview. In International Conference on Theorem
_Proving in Higher Order Logics, pages 60–66. Springer, 2009._
John Harrison, Josef Urban, and Freek Wiedijk. History of Interactive Theorem Proving.
In Computational Logic, volume 9, pages 135–214, 2014.
Sepp Hochreiter and Jürgen Schmidhuber. Long Short-Term Memory. Neural computation,
9(8):1735–1780, 1997.
Kryštof Hoder and Andrei Voronkov. Sine Qua Non for Large Theory Reasoning. In
_International Conference on Automated Deduction, pages 299–314. Springer, 2011._
Daniel Huang, Prafulla Dhariwal, Dawn Song, and Ilya Sutskever. GamePad: A Learning Environment for Theorem Proving. In International Conference on Learning
_Representations, 2019._
Joe Hurd. First-Order Proof Tactics in Higher-Order Logic Theorem Provers. Design
_and Application of Strategies/Tactics in Higher Order Logics, number NASA/CP-2003-_
_212448 in NASA Technical Reports, pages 56–68, 2003._
Cezary Kaliszyk and Josef Urban. Stronger Automation for Flyspeck by Feature Weighting
and Strategy Evolution. In Third International Workshop on Proof Exchange for
_Theorem Proving, volume 14 of EPiC Series in Computing, pages 87–95. EasyChair,_
2013.
Cezary Kaliszyk and Josef Urban. Learning-Assisted Automated Reasoning with Flyspeck.
_Journal of Automated Reasoning, 53(2):173–213, 2014._
Cezary Kaliszyk and Josef Urban. FEMaLeCoP: Fairly Efficient Machine Learning
Connection Prover. In Logic for Programming, Artificial Intelligence, and Reasoning,
pages 88–96. Springer, 2015a.
Cezary Kaliszyk and Josef Urban. HOL(y)Hammer: Online ATP service for HOL Light.
_Mathematics in Computer Science, 9(1):5–22, 2015b._
Cezary Kaliszyk and Josef Urban. MizAR 40 for Mizar 40. Journal of Automated
_Reasoning, 55(3):245–256, 2015c._
Cezary Kaliszyk, François Chollet, and Christian Szegedy. HolStep: A Machine Learning
Dataset for Higher-order Logic Theorem Proving. International Conference on Learning
_Representations, 2017._
Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization.
_International Conference on Learning Representations, 2017._
Thomas N. Kipf and Max Welling. Semi-Supervised Classification with Graph Convolutional Networks. International Conference on Learning Representations, 2017.
Gerwin Klein, June Andronick, Kevin Elphinstone, Toby Murray, Thomas Sewell, Rafal
Kolanski, and Gernot Heiser. Comprehensive Formal Verification of an OS Microkernel.
_ACM Transactions on Computer Systems, 32(1):1–70, 2014._
Donald E. Knuth and Peter B. Bendix. Simple Word Problems in Universal Algebras. In
_Computational Problems in Abstract Algebra, pages 263 – 297. Pergamon, 1970._
Laura Kovács and Andrei Voronkov. First-Order Theorem proving And Vampire. In
_International Conference on Computer Aided Verification, pages 1–35. Springer, 2013._
Anders Krogh and John A Hertz. A Simple Weight Decay Can Improve Generalization.
In Advances in neural information processing systems, pages 950–957, 1992.
Daniel Kühlwein, Twan van Laarhoven, Evgeni Tsivtsivadze, Josef Urban, and Tom
Heskes. Overview and Evaluation of Premise Selection Techniques for Large Theory
Mathematics. In International Joint Conference on Automated Reasoning, pages
378–392. Springer, 2012.
Guillaume Lample and François Charton. Deep Learning for Symbolic Mathematics.
_International Conference on Learning Representations, 2020._
Dennis Lee, Christian Szegedy, Markus Rabe, Sarah Loos, and Kshitij Bansal. Mathematical Reasoning in Latent Space. In International Conference on Learning Representations,
2020.
Xavier Leroy. Formal Verification of a Realistic Compiler. Communications of the ACM,
52(7):107–115, 2009.
Xavier Leroy. The CompCert C verified compiler: Documentation and user’s manual.
PhD thesis, Inria, 2016.
Wenda Li, Lei Yu, Yuhuai Wu, and Lawrence C Paulson. IsarStep: a Benchmark
for High-level Mathematical Reasoning. In International Conference on Learning
_Representations, 2021._
Sarah Loos, Geoffrey Irving, Christian Szegedy, and Cezary Kaliszyk. Deep network
guided proof search. arXiv preprint arXiv:1701.06972, 2017.
William McCune. Solution of the Robbins Problem. Journal of Automated Reasoning,
19(3):263–276, 1997.
Norman Megill and David A Wheeler. Metamath: A Computer Language for Mathematical
_Proofs. Lulu.com, 2019._
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei Rusu, Joel Veness, Marc
Bellemare, Alex Graves, Martin Riedmiller, Andreas Fidjeland, Georg Ostrovski, Stig
Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan
Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control
through deep reinforcement learning. nature, 518(7540):529–533, 2015.
M Saqib Nawaz, Moin Malik, Yi Li, Meng Sun, and M Lali. A Survey on Theorem
Provers in Formal Methods. arXiv preprint arXiv:1912.03028, 2019.
M Saqib Nawaz, M Zohaib Nawaz, Osman Hasan, Philippe Fournier-Viger, and Meng
Sun. Proof searching and prediction in HOL4 with evolutionary/heuristic and deep
learning techniques. Applied Intelligence, pages 1–22, 2020.
Michael A Nielsen. Neural Networks and Deep Learning, volume 25. Determination press
San Francisco, CA, 2015.
Chigozie Nwankpa, Winifred Ijomah, Anthony Gachagan, and Stephen Marshall. Activation Functions: Comparison of Trends in Practice and Research for Deep Learning.
_International Conference on Computational Sciences and Technology, 2021._
Aditya Paliwal, Sarah M Loos, Markus N Rabe, Kshitij Bansal, and Christian Szegedy.
Graph Representations for Higher-Order Logic and Theorem Proving. In Association
_for the Advancement of Artificial Intelligence, pages 2967–2974, 2020._
Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. Librispeech:
An ASR corpus based on public domain audio books. In International Conference on
_Acoustics, Speech and Signal Processing, pages 5206–5210. IEEE, 2015._
Peter J. Huber. Robust Estimation of a Location Parameter. The Annals of Mathematical
_Statistics, 35(1):73 – 101, 1964._
Stanislas Polu and Ilya Sutskever. Generative Language Modeling for Automated Theorem
Proving. arXiv preprint arXiv:2009.03393, 2020.
Markus N Rabe, Dennis Lee, Kshitij Bansal, and Christian Szegedy. Mathematical
Reasoning via Self-supervised Skip-tree Training. arXiv preprint arXiv:2006.04757,
2020.
Samik Raychaudhuri. Introduction to Monte Carlo simulation. In 2008 Winter simulation
_conference, pages 91–100. IEEE, 2008._
Alan JA Robinson and Andrei Voronkov. Handbook of Automated Reasoning, volume 1.
Elsevier, 2001.
Sebastian Ruder. An overview of gradient descent optimization algorithms. arXiv preprint
_arXiv:1609.04747, 2016._
Michael Rusinowitch. Theorem-proving with Resolution and Superposition. Journal of
_Symbolic Computation, 11:21–49, 1991._
Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice
Hall Press, 3rd edition, 2010.
Alex Sanchez-Stern, Yousef Alhessi, Lawrence Saul, and Sorin Lerner. Generating
Correctness Proofs with Neural Networks. In Proceedings of the 4th ACM SIGPLAN
_International Workshop on Machine Learning and Programming Languages, pages_
1–10, 2020.
Stephan Schulz. E - A Brainiac Theorem Prover. AI Communications, 15, 09 2002.
Burr Settles. Active Learning Literature Survey. 2009. URL [http://active-learning.net/](http://active-learning.net/).
David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai,
Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis. A general reinforcement learning
algorithm that masters chess, shogi, and Go through self-play. Science, 362(6419):
1140–1144, 2018.
Magnus Själander, Magnus Jahre, Gunnar Tufte, and Nico Reissmann. EPIC: An Energy-Efficient, High-Performance GPGPU Computing Research Infrastructure, 2019.
Konrad Slind and Michael Norrish. A Brief Overview of HOL4. In International
_Conference on Theorem Proving in Higher Order Logics, pages 28–32. Springer, 2008._
Samuel L Smith, Pieter-Jan Kindermans, Chris Ying, and Quoc V Le. Don’t Decay
the Learning Rate, Increase the Batch Size. International Conference on Learning
_Representations, 2018._
Raymond Smullyan. First-Order Logic. Springer, 1968.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. The
_Journal of Machine Learning Research, 15(1):1929–1958, 2014._
Christian Szegedy. A Promising Path Towards Autoformalization and General Artificial
Intelligence. In International Conference on Intelligent Computer Mathematics, pages
3–20. Springer, 2020.
Kai Sheng Tai, Richard Socher, and Christopher D Manning. Improved Semantic
Representations From Tree-Structured Long Short-Term Memory Networks. Proceedings
_of the 53rd Annual Meeting of the Association for Computational Linguistics and the_
_7th International Joint Conference on Natural Language Processing, 2015._
Alan Mathison Turing. On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 2(1):230–265, 1936.
Josef Urban and Jiří Vyskočil. Theorem Proving in Large Formal Mathematics as an
Emerging AI Field. In Automated Reasoning and Mathematics, pages 240–257. Springer,
2013.
Josef Urban, Jiří Vyskočil, and Petr Štěpánek. MaLeCoP Machine Learning Connection
Prover. In International Conference on Automated Reasoning with Analytic Tableaux
_and Related Methods, pages 263–277. Springer, 2011._
Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex
Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. WaveNet: A
Generative Model for Raw Audio. In 9th ISCA Speech Synthesis Workshop, pages
125–125, 2016.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N
Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is All you Need. In Advances
_in Neural Information Processing Systems, volume 30, 2017._
Jouko Väänänen. Second-order and Higher-order Logic. In The Stanford Encyclopedia of
_Philosophy. Metaphysics Research Lab, Stanford University, fall edition, 2020._
Mingzhe Wang and Jia Deng. Learning to Prove Theorems by Learning to Generate
Theorems. In Advances in Neural Information Processing Systems, volume 33, 2020.
Mingzhe Wang, Yihe Tang, Jian Wang, and Jia Deng. Premise Selection for Theorem
Proving by Deep Graph Embedding. In Advances in Neural Information Processing
_Systems, pages 2786–2796, 2017._
Boris Weisfeiler and A. A. Lehmann. A reduction of a graph to a canonical form and
an algebra arising during this reduction. Nauchno-Technicheskaya Informatsia, pages
12–16, 1968.
Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Weinberger.
Simplifying Graph Convolutional Networks. In International conference on machine
_learning, pages 6861–6871, 2019._
Minchao Wu, Michael Norrish, Christian Walder, and Amir Dezfouli. Reinforcement
Learning for Interactive Theorem Proving in HOL4. 5th Conference on Artificial
_Intelligence and Theorem Proving, 2020._
Kaiyu Yang and Jia Deng. Learning to Prove Theorems via Interacting with Proof
Assistants. In International Conference on Machine Learning, pages 6984–6994, 2019.
Li-An Yang, Jui-Pin Liu, Chao-Hong Chen, and Ying-ping Chen. Automatically Proving
Mathematical Theorems with Evolutionary Algorithms and Proof Assistants. In
_Congress on Evolutionary Computation, pages 4421–4428. IEEE, 2016._
Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An End-to-End Deep
Learning Architecture for Graph Classification. In Proceedings of the AAAI Conference
_on Artificial Intelligence, volume 32, 2018._
Magnus Midtbø Kristiansen
Proving Theorems Using Deep Learning
-----
| [
"Magnus Midtbø, Kristiansen"
] | 2021-01-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
Putnam-MATH: A Functional and Static Benchmark for Measuring Higher Level Mathematical Reasoning | The advent of Large Language Models (LLMs) has led to an unprecedented speed of improvement in AI capabilities. For instance, within three years of introducing the MATH dataset for mathematical reasoning in 2021 , models have attained an accuracy of 85% in 2024 – a 12-fold improvement from the original 6.9%. In addition, this achievement is merely 10% short of the 95% accuracy level reported for a three-time International Mathematical Olympiad (IMO) gold medalist. Therefore, as AI models become more capable and quickly begin to approach ceiling performances on established benchmarks, there arises a need for more challenging and long-lasting evaluation benchmarks. Therefore, we introduce the Putnam-MATH dataset, a unique collection of higher level mathematics problems that require expertlevel understanding solving. Our dataset is characterized by its challenging nature, with participants – aspiring professional mathematicians (undergraduate participants) – scoring a median of zero in the Putnam competition in 2008. In addition, numerous Putnam Fellows have achieved prominence in mathematics and other disciplines, including four who have won the Fields Medal — Terence Tao, John Milnor, David Mumford, and Daniel Quillen — and two who have received Nobel Prizes in Physics, Richard Feynman and Kenneth Wilson. Motivated by the risks of data contamination and goal to make a long-lasting benchmark in the era of fast pace of AI progress, we introduced a functional variation to our dataset. This variation modifies variable names and constants in the problems, keeping the conceptual and reasoning aspects intact, which helps in avoiding memorization by models with the potential infinite variations for some problems. Our evaluations demonstrate our benchmark is indeed difficult: GPT-4 gets 14/192 questions correctly, a specialized mathematics model like DeepSeekMath-7B gets 8/192, and popular 7B open-source models like LLama3-8B score 6/192. For our dataset’s functional variation, the numbers are even more stringent, with GPT-4 scoring 2/35, DeepSeekMath-7B 3/35, and LLama3-8B 0/35, further validating the challenging nature of our tests. The challenging nature of our datasets, both in their original and varied forms, not only tests the limits of current AI capabilities but also paves the way for new research directions in AI to push the boundaries of deep mathematical reasoning. | null | [
"Brando, Miranda",
"Eric, Chen",
"Aryan, Gulanti",
"Kai, Fronsdal",
"Emily, Xia",
"Bruno, Dumont"
] | 2024-09-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
|
Question-Analysis Prompting Improves LLM Performance in Reasoning Tasks | Although LLMs have the potential to transform many fields, they still underperform humans in reasoning tasks. Existing methods induce the model to produce step-by-step calculations, but this research explores the question: Does making the LLM analyze the question improve its performance? We propose a novel prompting strategy called Question Analysis Prompting (QAP), in which the model is prompted to explain the question in ’n’ words before solving. The value of ’n’ influences the length of response generated by the model. QAP is evaluated on GPT-3.5 Turbo and GPT-4 Turbo on arithmetic datasets GSM8K, AQuA, and SAT and commonsense dataset StrategyQA. QAP is compared with other state-of-the-art prompts including chain-of-thought (CoT), Plan and Solve Prompting (PS+) and Take A Deep Breath (TADB). QAP outperforms all state-of-the-art prompts on AQuA and SAT datasets on both GPT-3.5 and GPT-4. QAP consistently ranks among the top-2 prompts on 75% of the tests. A key factor of QAP performance can be attributed to response length, where detailed responses are beneficial when answering harder questions, but can negatively affect easy questions. | This research proposes a novel prompting strategy called Question Analysis Prompting (QAP), in which the model is prompted to explain the question in words before solving, where detailed responses are beneficial when answering harder questions, but can negatively affect easy questions. | # Question-Analysis Prompting Improves LLM Performance in Reasoning Tasks
**Dharunish Yugeswardeenoo** **Kevin Zhu** **Sean O’Brien**
Algoverse AI Research
[email protected], [email protected]
**Abstract**

Although LLMs have the potential to transform many fields, they still underperform humans in reasoning tasks. Existing methods induce the model to produce step-by-step calculations, but this research explores the question: Does making the LLM analyze the question improve its performance? We propose a novel prompting strategy called Question Analysis Prompting (QAP), in which the model is prompted to explain the question in n words before solving. The value of n influences the length of response generated by the model. QAP is evaluated on GPT-3.5 Turbo and GPT-4 Turbo on arithmetic datasets GSM8K, AQuA, and SAT and commonsense dataset StrategyQA. QAP is compared with other state-of-the-art prompts including chain-of-thought (CoT), Plan and Solve Prompting (PS+) and Take A Deep Breath (TADB). QAP outperforms all state-of-the-art prompts on AQuA and SAT datasets on both GPT-3.5 and GPT-4. QAP consistently ranks among the top-2 prompts on 75% of the tests. A key factor of QAP performance can be attributed to response length, where detailed responses are beneficial when answering harder questions, but can negatively affect easy questions.

**1** **Introduction**

Large language models (LLMs) have recently shown rapid improvement across a host of standard natural language processing (NLP) tasks, including arithmetic, commonsense and symbolic reasoning (Brown et al., 2020). Although these models show improved ability to understand and generate text (OpenAI, 2023), their performance can still be further improved. One solution is to encourage the model to think step-by-step. Using chain-of-thought prompting (Wei et al., 2022), LLMs are given Q&A exemplars which are designed to elicit a structured step-by-step response from the model. Many newly developed strategies meant to improve LLM performance have been focused on sophisticating the model's step-by-step calculation (Gu et al., 2023). Despite SoTA prompts' remarkable success across various tasks, their accuracies can still be further improved. In this work, we explore ways to improve the model's reasoning not only in the answer steps, but also in how the model interprets the question itself. By making the model explicitly interpret the question, we maximize its understanding of the question and minimize missed key information. This paper introduces Question-Analysis Prompting (QAP), a simple zero-shot prompting strategy that induces the model to first explain the question before solving. We include a configurable parameter within the prompt to examine how different word counts affect the quality of a model's response.

**2** **Prompt Design**

The key principle behind QAP is that the model should reiterate the problem in its own words before solving. The benefit is that the model will be able to first think about what task it is trying to solve before it pursues the answer. Another principle is that we should be able to control how much the model explains so that we can adapt the prompt to different model sizes and problem complexities. The specific prompt used is as follows:

"Explain this problem to me in at least n words. Then solve for the answer."

In this work, we experiment with n = 25, 50, 100, 150, 200. The versions of these prompts are named QAPn. Although the model is not constrained to generating fewer than n tokens in its summary, we find that the number of tokens in the response correlates strongly with the choice of n. The correlation between n and median word count is 0.98. We show specific examples of the impacts of n in Figure 4.
Figure 1: Example of QAP prompting - shows how the prompt triggers explanation of the question followed by an
approach to solve the problem, detailed steps, finally leading to correct answer
**3** **Prompt Impact**
In Figure 1, we highlight the structure of a standard QAP output. First, the model breaks down
the question in its own words and provides detailed
analysis on each event. Many of the steps highlighted in the explanation were shown in the calculation section. Compared to the CoT output, QAP
encourages more sophistication in its response and
thus reaches the correct answer.
**4** **Experimental Setup**
**4.1** **Benchmarks**
We evaluate the effectiveness of QAP on three arithmetic reasoning datasets. These include grade-school math questions from GSM8K (Cobbe
et al., 2021), algebraic word problems from AQuA
(Ling et al., 2017), and SAT math problems from
**AGIEval (Zhong et al., 2023). For commonsense**
reasoning, we evaluate on open-domain questions
that require implicit reasoning, from StrategyQA
(Geva et al., 2021). We evaluate on the test sets of
all benchmarks.
**4.2** **Models**
We specifically choose our models to observe the
prompts’ impacts across differences in model size.
The smaller model is GPT-3.5 Turbo with version gpt-3.5-turbo-0613. Our larger model is
**GPT-4 Turbo with version gpt-4-1106-preview**
(OpenAI, 2023). For both of the models we used
the OpenAI API [1] for running our experiments.
The temperature and Top-K sampling were set to 0
to avoid randomness and keep consistency in the
model’s responses.
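As a concrete reference, a call with these settings can be reproduced along the lines of the sketch below, using the current openai Python client and appending the QAP instruction after the question. This is a minimal sketch, not the authors' exact code; the function name and message structure are assumptions.

```python
from openai import OpenAI  # assumes the `openai` Python package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def qap_answer(question: str, n: int = 150, model: str = "gpt-3.5-turbo-0613") -> str:
    """Query a chat model with the QAP instruction appended after the question."""
    prompt = (f"{question}\n"
              f"Explain this problem to me in at least {n} words. "
              f"Then solve for the answer.")
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic decoding, matching the paper's setup
    )
    return response.choices[0].message.content
```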
**4.3** **Prompts**
For all datasets and models, we experiment with
different variations of QAP. We utilize QAP25,
**QAP50, QAP100, QAP150, and QAP200. We**
compare the performance of QAP with the baseline
(no prompt). Additionally we compare QAP with
two different zero-shot prompts: TADB - "Take a
deep breath and work on this problem step-by-step"
(Yang et al., 2023) and PS+ (Plan and Solve Plus)
(Wang et al., 2023). Finally we also compare QAP
with 8-shot chain-of-thought prompting.
**4.4** **Results**
The results for GPT-3.5 Turbo and GPT-4 Turbo
are shown in Table 1 and Table 2 respectively.
General word counts are shown in Figure 7.
**Arithmetic Reasoning: On GPT-3.5 Turbo, a**
variant of QAP is the top performer in 2 out of 3
arithmetic tasks. QAP shows significant gains on
AQuA and SAT. With GPT-4 Turbo, QAP performs
the best in the same 2 out of 3 arithmetic tasks.
This suggests that QAP may be more beneficial
[1] [https://platform.openai.com/docs/api-reference/chat](https://platform.openai.com/docs/api-reference/chat)
| Prompt | GSM8K | AQuA | SAT | StratQA |
|---|---|---|---|---|
| Baseline | 78.7 | 52.8 | 70.9 | **65.1** |
| QAP25 | 67.1 | 39.4 | 35.0 | 63.1 |
| QAP50 | 77.8 | 50.0 | 52.7 | 61.4 |
| QAP100 | 77.4 | 53.9 | 75.0 | 57.1 |
| QAP150 | 78.5 | **59.4** | **78.6** | 53.2 |
| QAP200 | 76.8 | 52.4 | 75.0 | 51.8 |
| TADB | 78.5 | 57.1 | 74.5 | 62.9 |
| CoT | **79.0** | 53.1 | 65.9 | 59.2 |
| PS+ | 74.7 | 35.0 | 70.9 | 35.6 |

Table 1: Results for GPT-3.5 Turbo
| Prompt | GSM8K | AQuA | SAT | StratQA |
|---|---|---|---|---|
| Baseline | 95.3 | 78.7 | 96.8 | 76.3 |
| QAP25 | 94.8 | 77.6 | 94.5 | 77.6 |
| QAP50 | 93.4 | **79.1** | 95.9 | 76.9 |
| QAP100 | 94.6 | 75.6 | 96.8 | 77.2 |
| QAP150 | 94.7 | 78.0 | 97.3 | 77.6 |
| QAP200 | 95.0 | 76.4 | **98.2** | 75.9 |
| TADB | 95.1 | 78.7 | 96.8 | **78.0** |
| CoT | **95.6** | 74.4 | 95.0 | 75.1 |
| PS+ | 94.8 | 52.8 | 97.3 | 77.1 |

Table 2: Results for GPT-4 Turbo.
on questions involving algebraic and higher-level
problem solving.
**Commonsense Reasoning: On StrategyQA,**
QAP consistently performs second-best when compared to other prompts. On both models, QAP25
is the highest QAP performer. This suggests that
fewer-word explanations benefit commonsense reasoning. This is because too much explanation can
cause the model to confuse a simple answer (shown
in Figure 6). While there is a decline in performance
as n increases on the 3.5 model, the larger GPT-4
Turbo model yields similar performances across all
QAP variants.
**5** **Analysis**
**Question Difficulties Based On Baseline Perfor-**
**mance: Within a given dataset, the difficulty of**
the individual question may vary. We propose a
method to measure question difficulty based on performance with the baseline prompt. If the model
can answer the problem correctly with the baseline
prompt, then we consider the question to be easy;
otherwise the question is hard. We analyze the performance of different prompts across "easy" and
"hard" questions. Table 3 and Table 4 show that
QAP consistently outperforms other prompts in the
“hard” category.
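This split can be computed directly from the baseline run, as in the sketch below; the data structures (question dicts with an "id" field, boolean correctness maps) are assumptions for illustration.

```python
def split_by_difficulty(questions, baseline_correct):
    """Label a question 'easy' if the baseline prompt answered it correctly, else 'hard'."""
    easy = [q for q in questions if baseline_correct[q["id"]]]
    hard = [q for q in questions if not baseline_correct[q["id"]]]
    return easy, hard

def accuracy(prompt_correct, subset):
    """Accuracy of one prompt restricted to a subset of questions (in percent)."""
    if not subset:
        return 0.0
    return 100.0 * sum(prompt_correct[q["id"]] for q in subset) / len(subset)
```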
**Impact Of Word Counts On Question Difficul-**
**ties: QAP generates higher word counts for both**
“easy" and “hard" questions ( Table 5 and Table 6
), despite performing lower on “easy” questions.
Although more step-by-step thought processes are
encouraged to avoid mistakes during reasoning,
this suggests that over-explanation can negatively
impact the model (also shown in Figure 5). Thus,
the most suitable word count to solve a problem
will vary from task to task; longer explanations
are best suited to more complicated questions for
which baseline prompting fails.
**Downsides Of Smaller QAPs: Despite high per-**
formance on StrategyQA, QAP25 performs poorly
on arithmetic datasets (mostly SAT and AQuA) using GPT-3.5 Turbo. Due to a small value of n, the
model outputs are unfinished responses (i.e. the
model stops midway through its reasoning steps)
(shown in Figure 8). On SAT math, 51% of responses were incomplete for QAP25. On AQuA,
19% of responses were incomplete for QAP25.
**6** **Additional Studies**
**Placement of the prompt: In this evaluation, we**
studied the impact of prompt placement on performance using GSM8K dataset. Two options for
prompt placement were considered: Q_Begin adding the prompt before the question, and Q_End
- adding the prompt after the question. Both placements provided similar results on GPT-3.5 and
GPT-4. Results shown in the rest of the paper are
based on Q_End.
**No N Constraint: To test the effectiveness of**
adding the value of N, we first examine the prompt
with just the phrase: "Explain this problem to me.
Then solve for the answer". However, the model
does not explain the question completely and in
most cases directly starts solving the question. Its
responses are no different than a response which
used no given prompt. This shows that explicitly
stating the minimum amount of words required
is more likely to induce the model to explicitly
generate an explanation of the question.
**7** **Related Work**
In one-shot and few-shot prompting, the model is
given one or more input/output examples which
will serve as a demonstration for it to solve the
problem using in-context learning (Mahabadi et al.,
2022). QAP is a zero-shot prompt. In zero-shot
Figure 2: We consider difficulty of the problem based on baseline’s results. E.g., an incorrect answer is “hard” and a
correct answer is “easy”. Left chart shows accuracy within each difficulty. Right chart shows mean (average) word
count for within each difficulty. All results for each prompt are shown in Table 6 and Table 4
prompting the model does not receive exemplars,
but is given a specially crafted instruction on how
to approach the task (Kojima et al., 2022).
**Chain of Thought: Chain-of-thought reason-**
ing is a notable few-shot (a zero-shot variant also exists
(Yang et al., 2023)) example in which the model is
shown how to express its reasoning steps (Wei et al.,
2022). This approach was highly effective as the
model would replicate these exemplars, and their
accuracies improved drastically. CoT encouraged
the model to think step-by-step, and this concept
would be a repeating theme among other zero-shot
counterparts.
**TADB: Among different variants of Zero-Shot**
CoT, the TADB prompt (Yang et al., 2023) was
derived using an optimization objective to find instructions that would maximize task accuracy. The
eventual prompt was "Take a deep breath, and work
on this problem step by step". TADB is an example of how the wording of a prompt can drastically
impact responses.
**Plan and Solve Prompting Plus: Another zero-**
shot prompt is Plan-and-Solve Prompting (Wang
et al., 2023). There were two versions to this
prompt. The first simply asked the model to devise
a plan and solve step-by-step. The second version
(PS+) extended the prompt by specifically asking
the prompt to extract relevant variables and their
corresponding numerals and to calculate intermediate results. We used PS+ on our experiments.
One difference between PS+ and QAP is that PS+
prompt is more specific to math datasets since it
instructs to extract variables, intermediate results,
etc., whereas QAP is more general. Also, PS+
prompts the model to understand the problem, but
it is not clear if model should output anything specific to the question itself. In contrast, QAP explicitly instructs the model to explain the problem in n
words.
**Question Decomposition: Question Decompo-**
sition (Radhakrishnan et al., 2023) strategy causes
the model to break down the question by creating sub-questions. The model answers each of
these sub-questions and it ties together all the subanswers into a final answer. It considers two methods for decomposition, Factored Decomposition
and CoT Decomposition. In factored decomposition each sub-question is answered in a separate
context. CoT decomposition is an intermediate between factored decomposition and CoT. It enforces
one context for sub-question, sub-answer and the
answer to the original question. The analysis of
question decomposition shows reduced bias and
ignored reasoning, improves the faithfulness of a
model-generated reasoning over CoT while retaining the performance gains of CoT.
**8** **Conclusion**
In this paper, we explored the approach of
Question-Analysis Prompting to improve LLM accuracy across math and commonsense reasoning.
The prompt focuses on how the model interprets the
task given, and whether restating the question in its
own words can further sophisticate its answer steps.
The ability of this prompting method to perform
well across diverse model types, task difficulties, and
types of tasks seems promising. We plan to extend this work further by combining QAP with other prompt strategies, applying decoding strategies, and evaluating multi-modal tasks.

**9** **Limitations**

There are a few limitations of QAP. First, LLMs are sensitive to the prompt's word choice, particularly for zero-shot prompts. As a result, small changes to the prompt wording can impact the model's performance. For example, the current QAP prompt asks the model to "solve" for the answer. While this works well for math tasks, it may not be optimal for commonsense tasks. Secondly, the results in this paper are based on four datasets and a single class of aligned models; further results should evaluate on more diverse and multi-modal datasets, as well as a greater variety of models. Finally, more robust methods (e.g., based on a classifier) to determine the choice of the parameter n should be investigated to go beyond manual selection.

**10** **Ethics**

We experimented on three arithmetic datasets: GSM8K (Cobbe et al., 2021), AQuA (Ling et al., 2017), and AGIEval SAT Math (Zhong et al., 2023). For commonsense reasoning, we used StrategyQA (Geva et al., 2021). GSM8K uses the MIT License, while AQuA and StrategyQA use the Apache-2.0 license. QAP and the prompts used in this work do not jeopardize the safety of others. They do not include any wording which may be deemed offensive to any individual or group.

**References**

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.

Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did Aristotle use a laptop? A question answering benchmark with implicit reasoning strategies. Transactions of the Association for Computational Linguistics, 9:346–361.

Jindong Gu, Zhen Han, Shuo Chen, Ahmad Beirami, Bailan He, Gengyuan Zhang, Ruotong Liao, Yao Qin, Volker Tresp, and Philip Torr. 2023. A systematic survey of prompt engineering on vision-language foundation models. arXiv preprint arXiv:2307.12980.

Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in Neural Information Processing Systems, 35:22199–22213.

Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word problems. arXiv preprint arXiv:1705.04146.

Rabeeh Karimi Mahabadi, Luke Zettlemoyer, James Henderson, Marzieh Saeidi, Lambert Mathias, Veselin Stoyanov, and Majid Yazdani. 2022. Perfect: Prompt-free and efficient few-shot learning with language models. arXiv preprint arXiv:2204.01172.

OpenAI. 2023. GPT-4 technical report. ArXiv, abs/2303.08774.

Ansh Radhakrishnan, Karina Nguyen, Anna Chen, Carol Chen, Carson E. Denison, Danny Hernandez, Esin Durmus, Evan Hubinger, John Kernion, Kamilė Lukošiūtė, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Sam McCandlish, Sheer El Showk, Tamera Lanham, Tim Maxwell, Venkat Chandrasekaran, Zac Hatfield-Dodds, Jared Kaplan, Janina Brauner, Sam Bowman, and Ethan Perez. 2023. Question decomposition improves the faithfulness of model-generated reasoning. ArXiv, abs/2307.11768.

Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, and Ee-Peng Lim. 2023. Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language models. arXiv preprint arXiv:2305.04091.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837.

Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V Le, Denny Zhou, and Xinyun Chen. 2023. Large language models as optimizers. arXiv preprint arXiv:2309.03409.

Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. 2023. AGIEval: A human-centric benchmark for evaluating foundation models. arXiv preprint arXiv:2304.06364.
**A** **Appendix**
**A.1** **Analysis of Accuracy Based On Question Difficulty**

Performance of prompts on problems categorized into easy and hard, where easy problems are those where the baseline prompt leads to a correct answer and hard problems are those where the baseline prompt leads to a wrong answer. For each category, the percentage of correct answers is calculated as the number of correct answers (per prompt) over the total number of problems in that category (easy or hard).

| Prompt | Easy | Hard |
|---|---|---|
| QAP25 | 84.7 | 30.1 |
| QAP50 | 90.0 | 36.7 |
| QAP100 | 91.5 | 39.5 |
| QAP150 | 92.3 | 43.2 |
| QAP200 | 91.1 | 41.3 |
| TADB | 93.6 | 34.9 |
| CoT | 92.6 | 35.0 |
| PS+ | 88.2 | 31.5 |

Table 3: Accuracy for Arithmetic Reasoning

| Prompt | Easy | Hard |
|---|---|---|
| QAP25 | 89.5 | 24.3 |
| QAP50 | 87.7 | 24.6 |
| QAP100 | 83.8 | 26.9 |
| QAP150 | 81.4 | 27.0 |
| QAP200 | 80.0 | 25.0 |
| TADB | 91.3 | 20.3 |
| CoT | 85.8 | 27.3 |
| PS+ | 70.6 | 21.1 |

Table 4: Accuracy for Commonsense Reasoning

**A.2** **Analysis of Word Count based on Question Difficulty**

Median word count generated by various prompts on all datasets and models categorized into easy and hard, where easy problems are those where the baseline prompt leads to a correct answer and hard problems are those where the baseline prompt leads to a wrong answer.

| Prompt | Easy | Hard |
|---|---|---|
| QAP25 | 94.6 | 126.7 |
| QAP50 | 123.6 | 158.5 |
| QAP100 | 200.4 | 229.6 |
| QAP150 | 224.4 | 257.9 |
| QAP200 | 270.0 | 301.0 |
| TADB | 146.3 | 214.5 |
| CoT | 99.4 | 128.3 |
| PS+ | 197.8 | 216.3 |

Table 5: Mean word count for Arithmetic Reasoning

| Prompt | Easy | Hard |
|---|---|---|
| QAP25 | 36.9 | 38.7 |
| QAP50 | 71.5 | 73.8 |
| QAP100 | 183.8 | 192.3 |
| QAP150 | 215.8 | 220.4 |
| QAP200 | 268.8 | 274.6 |
| TADB | 37.5 | 58.0 |
| CoT | 29.1 | 30.9 |
| PS+ | 162.4 | 179.0 |

Table 6: Mean word count for Commonsense Reasoning
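For readers who want to reproduce this categorization, the sketch below shows one way to compute the per-category accuracy and word-count statistics. It is illustrative only: the record fields (`prompt`, `correct`, `baseline_correct`, `word_count`) are assumed names for per-problem results, not part of any released evaluation code.

```python
from collections import defaultdict
from statistics import mean

def summarize_by_difficulty(records):
    """records: iterable of dicts with keys 'prompt' (e.g. 'QAP100'),
    'correct' (bool, whether this prompt answered correctly),
    'baseline_correct' (bool, whether the baseline prompt answered correctly),
    and 'word_count' (length of the generated response)."""
    buckets = defaultdict(lambda: {"easy": [], "hard": []})
    for r in records:
        # Easy = baseline prompt was correct; hard = baseline prompt was wrong.
        category = "easy" if r["baseline_correct"] else "hard"
        buckets[r["prompt"]][category].append(r)

    summary = {}
    for prompt, cats in buckets.items():
        summary[prompt] = {}
        for category, rows in cats.items():
            if not rows:
                continue
            summary[prompt][category] = {
                # % correct = correct answers (per prompt) / problems in this category.
                "accuracy": 100.0 * sum(r["correct"] for r in rows) / len(rows),
                "mean_words": mean(r["word_count"] for r in rows),
            }
    return summary
```

Feeding the per-problem results of each prompt into this helper yields numbers in the same form as Tables 3–6.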
-----
**A.3** **Example Explanations**
Figure 3: Examples of QAP inducing explanations of the question on GSM8K, AQuA, and StrategyQA. The
prompts include QAP50, QAP150, QAP50 respectively. Pink highlights key phrases (math reasoning) and orange
highlights represent useful background information (commonsense reasoning).
-----
**A.4** **Impact of Changing n**
Figure 4: This comparison shows how responses vary when changing n. This is only the answer portion. This was
experimented on QAP50 and QAP200 on GSM8K and AQuA. Blue represents a QAP200 section which provides
more detail than QAP100’s (Red) response on the same step. Green represents a section that QAP200 had that
QAP100 did not have at all.
-----
**A.5** **Large value of n for simple problems hurts the performance**
Figure 5: Example in which over-explanation can negatively impact a response. QAP50 arrives at the correct answer (34), but QAP200 does not: QAP200 initially reaches the correct answer, but the additional explanation leads it to a wrong final answer.
-----
Figure 6: Example in which over-explanation negatively impacts a commonsense reasoning response. The
comparison shows that more words can confuse the model.
-----
**A.6** **Word Counts for all datasets with GPT-3.5 and GPT-4**
Figure 7: Median word counts in response for all datasets using GPT-3.5 Turbo and GPT-4 Turbo
-----
**A.7** **QAP25 Unfinished Response**
Figure 8: Example in which QAP25 outputs an unfinished response on the SAT dataset.
-----
| [
"Dharunish, Yugeswardeenoo",
"Kevin, Zhu",
"Sean, O'Brien",
"Xiyan, Fu",
"Eve, Fleisig"
] | 2024-08-01T00:00:00 | ACL 2024 Student Research Workshop | false | 0 | 0 | null | https://aclanthology.org/2024.acl-srw.45 | https://arxiv.org/abs/2407.03624 | https://www.semanticscholar.org/paper/71e7cf4094d955f8bbaa14b08a0f13d81c0a5a02 |
RATIONALYST: Pre-training Process-Supervision for Improving Reasoning | The reasoning steps generated by LLMs might be incomplete, as they mimic logical leaps common in everyday communication found in their pre-training data: underlying rationales are frequently left implicit (unstated). To address this challenge, we introduce RATIONALYST, a model for process-supervision of reasoning based on pre-training on a vast collection of rationale annotations extracted from unlabeled data. We extract 79k rationales from web-scale unlabelled dataset (the Pile) and a combination of reasoning datasets with minimal human intervention. This web-scale pre-training for reasoning allows RATIONALYST to consistently generalize across diverse reasoning tasks, including mathematical, commonsense, scientific, and logical reasoning. Fine-tuned from LLaMa-3-8B, RATIONALYST improves the accuracy of reasoning by an average of 3.9% on 7 representative reasoning benchmarks. It also demonstrates superior performance compared to significantly larger verifiers like GPT-4 and similarly sized models fine-tuned on matching training sets. | RATIONALYST, a model for process-supervision of reasoning based on pre-training on a vast collection of rationale annotations extracted from unlabeled data, is introduced, which demonstrates superior performance compared to significantly larger verifiers like GPT-4 and similarly sized models fine-tuned on matching training sets. | ## RATIONALYST: Pre-training Process-Supervision for Improving Reasoning
**Dongwei Jiang[♠]** **Guoxuan Wang[♠]** **Yining Lu[♢]** **Andrew Wang[♠]**
**Jingyu Zhang[♠]** **Chuyu Liu[♠]** **Benjamin Van Durme[♠]** **Daniel Khashabi[♠]**
_♠Johns Hopkins University, ♢University of Notre Dame_
[email protected]
**Abstract**
The reasoning steps generated by LLMs might
be incomplete, as they mimic logical leaps common in everyday communication found in their
pre-training data: underlying rationales are frequently left implicit (unstated). To address
this challenge, we introduce RATIONALYST,
a model for process-supervision of reasoning
based on pre-training on a vast collection of
rationale annotations extracted from unlabeled
data. We extract 79k rationales from a web-scale
unlabelled dataset (the Pile) and a combination of reasoning datasets with minimal human intervention. This web-scale pre-training
for reasoning allows RATIONALYST to consistently generalize across diverse reasoning tasks,
including mathematical, commonsense, scientific, and logical reasoning. Fine-tuned from
LLaMa-3-8B, RATIONALYST improves the accuracy of reasoning by an average of 3.9% on
7 representative reasoning benchmarks. It also
demonstrates superior performance compared
to significantly larger verifiers like GPT-4 and
similarly sized models fine-tuned on matching
training sets. [1]
**A typical document from LLM pre-training data:** _… Harry used magic outside of the school of Hogwarts to inflate Aunt Marge… He is punished to attend a disciplinary hearing at the Ministry of Magic…_

**Implicit rationale in the document:** _When someone breaks the rule, he will be punished!_

**A question posed to LLM at inference time:** **_Question:_** _A person is caught stealing food from a store to feed their hungry family. What will likely happen to them?_ **_Choices:_** _A: He will be punished. B: He will be rewarded._

**Existing LLMs** ( 2 ): _Let’s think step by step. Since a person is trying to help their family, they will be rewarded for their act!_

**Existing LLMs + rationale supervision via RATIONALYST** ( 3 ): _Let’s think step by step. Although this stealing has good intentions, stealing from a store breaks the rule of society, so it should be punished!_

Figure 1: A simplified example showing how **implicit rationales in pre-training data can be leveraged to improve reasoning.** 1 : Implicit rationales (unstated logical connections) occur frequently in LLM pre-training data. 2 : As a result, existing LLMs pre-trained to replicate their pre-training data tend to omit these logical steps as well. 3 : However, RATIONALYST learns to generate these rationales at inference time to supervise the chain-of-thought process for more accurate reasoning.
**1** **Introduction**

Rationales play a crucial role in human reasoning and its accuracy (Rips, 1994; Mercier and Sperber, 2011). In reasoning problems, having accurate rationales often correlates with accurate outcomes (Tversky et al., 1982; Davis, 1984). This importance of rationales extends to Large Language Models (LLMs) as well. Wei et al. (2022) were among the first to show that generating chain-of-thought rationales significantly improves LLMs’ reasoning performance. Subsequent research has further refined the methods for eliciting rationales, leading to improved performance (Fu et al., 2023; Zhou et al., 2022).

In the context of LLM reasoning, these rationales are typically employed through a chain-of-thought process that makes reasoning steps explicit by articulating them as plain-text rationales. In this approach, each subsequent rationale is generated based on rationales produced in preceding steps, effectively using them as a form of supervision. However, the generated reasoning chains might be incomplete, containing potential logical leaps while leaving some rationales implicit (or hidden) during the generation process. These gaps in the reasoning chain can weaken the LLM’s reasoning ability throughout the problem-solving process.

One reason why chain-of-thought methods might miss implicit steps is that models trained with “next-token prediction” often replicate the

[1] Our code, data, and model can be found at this repository: [https://github.com/JHU-CLSP/Rationalyst](https://github.com/JHU-CLSP/Rationalyst)
-----
Figure 2 shows a worked example on a GSM8K-style question: _“Michael had 58 golf balls. On Tuesday, he lost 23 golf balls. On Wednesday, he lost 2 more. How many golf balls did he have at the end of Wednesday?”_ At each step, RATIONALYST produces an implicit rationale for the current trajectory (e.g., _“<BOT> Since Michael only has 35 balls, the next calculation should start from 35, not 58. <EOT>”_), the agent LLM proposes candidate next steps (e.g., _“After losing 2 more on Wednesday, he had 35 - 2 = 33 golf balls.”_ vs. _“After losing 2 more on Wednesday, he had 58 - 2 = 56 golf balls.”_), and the candidate with the highest probability given the rationale is appended to the trajectory.

Figure 2: An example showing how RATIONALYST works at inference time. RATIONALYST generates implicit rationales given the current reasoning trajectory, which includes both the question and the reasoning steps generated so far 1 . The agent LLM generates multiple next-step candidates for reasoning, also based on the current reasoning trajectory 2 . The implicit rationale generated by RATIONALYST is used to provide heuristics for choosing among the next-step candidates proposed by the agent LLM by estimating the probability of each candidate given the rationale 3 . The reasoning trajectory is updated iteratively with the highest-scoring next-step candidate 4 .
omissions present in their training data. Implicit
rationales–underlying logical connections that are
often not explicitly stated–are frequently missing
in daily communication and web text. Figure 1 ( 1 ) illustrates this concept using a typical document from LLM pre-training data. In this example, we see a passage from Harry Potter: “Harry
used magic outside... He is punished to attend..."
The text contains the implicit (unstated) rationale:
_“When someone breaks the rule, he will be pun-_
_ished!" This implicit rationale is crucial in inferring_
the causal reasoning that connects the cause (Harry
_breaking rules) to its effect (punishment), but is_
also left unstated in the context. As a result, existing LLMs trained to mimic web text will have
difficulty surfacing these implicit statements during the reasoning process, which can lead to flawed
conclusions, such as erroneously justifying theft
as a praiseworthy act when done to support one’s
family ( 2 in Figure 1).
This paper presents RATIONALYST, a model tailored for process-supervision of reasoning. RA
TIONALYST is pre-trained on a vast collection of
implicit rationales extracted from a mixture of webscale unlabeled datasets and existing reasoning
datasets. Although existing LLMs may miss crucial
details in their reasoning, leading to flawed conclusions ( 2 in Figure 1), integrating these LLMs with RATIONALYST provides an additional supervision mechanism to guide their reasoning processes, resulting in more robust conclusions ( 3 in Figure 1).
RATIONALYST is developed and used in three
stages: (1) we employ LLMs to extract implicit
rationales from unlabeled text corpora without hu-
man annotation. These rationales are subsequently
filtered based on their helpfulness in predicting subsequent text (§3.1); (2) we train RATIONALYST to
predict those rationales given the preceding context
(§3.2); and then (3) as depicted in Figure 2, during inference, we assume reasoning is done incrementally in a chain-of-thought fashion (Wei et al.,
2022) by another agent model, and we use RATIO
NALYST to provide supervision for the agent model
at each reasoning step throughout the reasoning
process §3.3. By adopting a data-centric approach,
RATIONALYST utilizes abundant unlabelled data to
provide process supervision (Lightman et al., 2023)
across various reasoning tasks without the need for
human annotation.
Our method extracts 65k implicit rationales from
the web-scale unlabelled dataset The Pile (Gao
et al., 2020). To adapt the extracted rationales to
our tested domain and stabilize training, we additionally extract a much smaller set of 14k implicit
rationales from the question-answer pairs in the
training sets of two reasoning datasets: GSM8K
(Cobbe et al., 2021a) and ECQA (Aggarwal et al.,
2021). Our extraction process controls for answer
leakage to prevent artificial amplification of performance. Using this curated set of rationales, RATIO
NALYST is then fine-tuned from LLaMa-3-8B. To
assess the effectiveness of our approach, we evaluate RATIONALYST on a diverse set of reasoning
tasks, including mathematical, commonsense, sci
-----
entific, and logical reasoning. Our results show that
RATIONALYST improves the accuracy of reasoning
by an average of 3.9% (§5.1). To understand the
contribution of different data sources, we conduct
an ablation study that demonstrates the utility of
rationales from both the large-scale Pile dataset and
the smaller, specialized reasoning datasets (§5.2).
Notably, RATIONALYST exhibits superior performance when compared to strong general-purpose
verifiers like GPT-4 and similar capacity models
specifically fine-tuned on matching training sets
(§5.4).
Implicit rationales generated by RATIONALYST
are also designed to provide supervision in a
human-readable form, offering improved interpretability for LLM generation. This added interpretability is particularly beneficial when reasoning
over complex domains such as mathematics or coding, where the step-by-step logic can be difficult for
humans to follow without explicit explanations. As
shown in §5.5, our model is capable of generating
human-understandable rationales for unseen data
from complex math reasoning.
Our contributions in this paper are two-fold:
- We propose RATIONALYST, a model that is pretrained on implicit rationales extracted from unlabeled text data. RATIONALYST enhances LLM
interpretability and performance during reasoning by providing process supervision.
- We empirically show RATIONALYST generalizes
across reasoning tasks and scales with unlabelled
data.
**2** **Related Work**
**Supervising reasoning.** Supervision-based approaches have been shown to enhance the reasoning abilities of LLMs. Cobbe et al. (2021b) and
Snell et al. (2024) demonstrate that training a “verifier” to supervise reasoning can be more parameter-efficient than simply expanding the parameters of
the “reasoner" responsible for solving the reasoning task. Ground-truth feedback from interaction
with the environment is an effective form of supervision (Wang et al., 2023), but it works only
in controlled environments like simulated world.
General-purpose verifiers (Dhuliawala et al., 2023;
Weir et al., 2024, 2023; Vacareanu et al., 2024)
offer broader applicability utilizing principles like
compositional reasoning. However, they don’t fully
capitalize on the vast amount of unlabelled data in
the way a data-driven approach might. Process
based supervision (Lightman et al., 2023) offers
supervision at each reasoning step rather than just
at the final result. While promising, it requires substantial human annotation for the correctness of intermediate steps, making it resource-intensive. Our
work aims to address these challenges by proposing
a data-centric process-supervision method without
the need for human annotation.
**Knowledge extraction from unlabelled data.**
LLMs are conventionally trained on extensive web
data using autoregressive next-token prediction.
While effective, this approach may not fully harness the potential of the pre-training data, as latent
information within this data could be better accessed using techniques beyond simple next-token
prediction. Recent research has demonstrated several approaches to utilize this latent information to
develop more sophisticated language model capabilities. Schick et al. (2023) introduced Toolformer,
which autonomously annotates and extracts appropriate positions, names, and inputs for tool use by
leveraging supervision from future tokens. Similarly, Cornille et al. (2024) developed a method for
learning to plan coherent article writing through
self-supervised learning in text. More closely related to our work, Zelikman et al. (2024) proposed
Quiet-Star, which applied a comparable technique
to uncover underlying rationales in daily communication to enhance reasoning capabilities. Our work
adopts a strategy similar to Quiet-Star for extracting rationales in an unsupervised manner. However,
our approach diverges in its primary objective: we
aim to train a “supervisor" that can utilize these
rationales to provide process supervision for any
“reasoner." This focus enables us to implement a
simpler and more reliable method, as we don’t need
to directly integrate rationale extraction with “reasoner" training. Our approach thus offers a novel
perspective on leveraging latent information in language models to enhance their capabilities.
**Rationales as the basis for reasoning.** Various
studies have focused on improving the use of rationales to elicit reasoning. Fu et al. (2023) refine rationales for more effective reasoning elicitation, while Li et al. (2023) explore different approaches to leveraging rationales to enhance reasoning. Other works, such as Hwang et al. (2024),
examine the verification of rationales produced by
LLMs during reasoning to improve performance.
Additionally, training LLMs on rationale-rich data
is a common strategy for enhancing reasoning
-----
skills. As highlighted by Lewkowycz et al. (2022)
and Jiang et al. (2024a), LLMs trained on science
and math data tend to perform better on reasoning
tasks, particularly when CoT prompting is used. In
this work, we build on this foundation by using
rationales as the core of our method to supervise
reasoning.
**3** **Building RATIONALYST**
We discuss the construction of RATIONALYST and
its usage at inference time. First, we describe extracting rationales from unlabeled text (§3.1), then
use them to train RATIONALYST (§3.2), and finally,
employ RATIONALYST to supervise reasoning during inference (§3.3).
**Setup.** As we will be using multiple LLMs
throughout the process, we define them here:
_MRa is the trained rationale generation model_
(RATIONALYST) that generates rationales and
heuristics during inference. MAgent is a general-purpose reasoning agent that produces candidate
reasoning steps and incorporates rationales during
inference. We use one additional model M for
initial rationale extraction, rationale filtration, and
probability estimation of potential next reasoning
steps during inference. These LLMs can be implemented using various state-of-the-art models,
allowing for adaptability to specific research needs
and computational resources.
**3.1** **Large-scale Rationale Extraction**
Implicit rationales are often embedded in unlabelled text, reflecting natural thought processes in
daily communication. Our extraction process, illustrated in Figure 3, aims to make these rationales
explicit. Using a pre-trained and aligned language
model M, we generate rationales from text and
then use M to filter these rationales to retain only
those that are useful, akin to the self-supervised
“tool” learning approach described by Schick et al.
(2023). The same M is subsequently used to train
RATIONALYST.
**A Extracting rationales from pre-training data.**
We employ M to generate rationales from the Pile.
Due to the size of this dataset, we implement a
pre-filtering process to identify reasoning-rich documents by (1) computing the average semantic embedding of representative reasoning training sets
using a paragraph embedding model, and (2) selecting documents from unlabelled datasets that exceed
a cosine similarity threshold α when compared to
this average embedding. After pre-filtering, we segment the selected paragraphs into 2000-word segments and instruct M to generate rationales at the
end of each sentence, using prompts with demonstrations. Detailed information on the prompts and
in-context learning demonstrations used for rationale extraction can be found in Appendix A.
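A minimal sketch of this pre-filtering step is given below, using the `sentence-transformers` library. The checkpoint name, the reference examples, and the handling of batching and segmentation are illustrative assumptions, not the exact pipeline used in the paper.

```python
from sentence_transformers import SentenceTransformer, util

# Paragraph embedding model; the paper uses MPNet-base for this step.
encoder = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

def build_reference_embedding(reasoning_examples):
    """Average embedding of representative reasoning training examples."""
    embs = encoder.encode(reasoning_examples, convert_to_tensor=True,
                          normalize_embeddings=True)
    return embs.mean(dim=0)

def prefilter_documents(documents, reference_emb, alpha=0.3):
    """Keep unlabelled documents whose cosine similarity to the
    reference embedding exceeds the threshold alpha."""
    kept = []
    for doc in documents:
        emb = encoder.encode(doc, convert_to_tensor=True,
                             normalize_embeddings=True)
        if util.cos_sim(emb, reference_emb).item() > alpha:
            kept.append(doc)
    return kept
```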
**B** **Extracting** **rationales** **from** **reasoning**
**B**
**datasets.** In parallel to A, we also extract
A
rationales from existing reasoning datasets to
adapt the extracted rationales to our tested domain
and stabilize training. For a given reasoning
dataset with pairs of questions and final answers
$D = \{(q_i, a_i)\}_{i=1}^{m}$, we create a prompt $P$ that instructs $M$ to generate rationales for each reasoning step in the final answer $a_i$. The input
of the prompt consists of the entire question and
answer, and the output includes implicit rationales
that can be inferred from the reasoning process in
the answer. Consider the concrete example from
existing datasets (bottom) in Figure 3. The solution
involves two reasoning steps: “Natalia sold 48 /
2 = 24 clips in May” and “Natalia sold 48 + 24 =
72 chips altogether.” Here, the implicit rationale
that connects the first and second steps, “Now we
_should calculate the sum of chips in April and_
_May,” is implicit yet helpful for the prediction of_
the second step. These rationales are subsequently
filtered and used to train RATIONALYST.
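To make the extraction step concrete, the sketch below prompts an instruction-tuned model to propose a rationale between consecutive solution steps. The prompt wording, the `query_model` helper, and the output format are illustrative placeholders rather than the paper's actual prompt (which is given in Appendix A).

```python
EXTRACTION_PROMPT = """You are given a question and a worked solution.
For the step marked <HERE>, state the unstated rationale that connects
the previous steps to it. Do not reveal or restate the final answer.
Wrap the rationale in <BOT> ... <EOT>.

Question: {question}
Solution so far: {prefix}
<HERE>
Next step: {next_step}
Rationale:"""

def extract_rationales(question, steps, query_model):
    """query_model: callable that sends a prompt to the extraction LLM
    (e.g. LLaMa-3-8B-Instruct) and returns its text output."""
    rationales = []
    for i in range(1, len(steps)):
        prompt = EXTRACTION_PROMPT.format(
            question=question,
            prefix=" ".join(steps[:i]),
            next_step=steps[i],
        )
        out = query_model(prompt)
        # Keep only the span between the <BOT>/<EOT> markers, if present.
        if "<BOT>" in out and "<EOT>" in out:
            rationales.append((i, out.split("<BOT>")[1].split("<EOT>")[0].strip()))
    return rationales
```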
**C Filtering extracted rationales.** Generated rationales in A and B may not always be accurate
or helpful. In reasoning tasks, our objective is for
the extracted rationales to effectively aid in future
reasoning, which means a good rationale should
enhance the likelihood of accurately predicting the
following text. Let $i$ be the position of the rationale $r$ in the sequence $x = x_1, \ldots, x_n$. Given a sequence of weights $(w_k)_{k \in \mathbb{N}}$, the weighted cross-entropy loss for future token prediction is defined as:

$$L_i(r) = -\sum_{j=i}^{n} w_{j-i} \cdot \log p_M(x_j \mid r, x_{1:j-1}),$$

where $M$, in a different role from its previous use, is employed to estimate the probability over tokens $x_i, \ldots, x_n$ prefixed by the preceding tokens $x_{1:i-1}$ and the rationale $r$. The weight assigned to each future token decreases exponentially by a factor of 0.9 for each step further away it is from the rationale. We compute $L_i = L_i(\varepsilon) - L_i(r_i)$, where $\varepsilon$ represents an empty rationale (i.e., predicting the following tokens
-----
Figure 3 shows, for both data sources, the pipeline from unlabelled text, to implicit rationales extracted by LLMs, to filtering via future-text perplexity. For the web-data example (the Harry Potter passage), the rationale _“<BOT> When someone breaks the rule, he will be punished <EOT>”_ significantly reduces future-text perplexity and is kept, while _“<BOT> Hogwarts magic is incredibly versatile, capable of a wide range of effects <EOT>”_ has little effect and is discarded. For the existing-dataset example (GSM8K: _“Natalia sold 48 clips in April and half as many clips in May. How many clips did Natalia sell altogether?”_), the rationale _“<BOT> Now we should calculate the sum of clips in April and May <EOT>”_ significantly reduces future-text perplexity and is kept, while _“<BOT> Now we need to calculate the number of clips sold in May <EOT>”_ has little effect and is discarded.
Figure 3: We use LLMs to extract implicit rationales (enclosed by <BOT> and <EOT> in bold) that capture reasoning in unlabelled
text (§3.1 A and B ). The sample at the top is taken from unlabelled web-scale pre-training datasets The Pile and the sample at
the bottom is taken from existing datasets (GSM8K). These rationales are subsequently filtered based on whether they are useful
for predicting future text (§3.1 C ).
based only on preceding tokens). A rationale is generated rationales and their ground truth values
considered helpful if it makes the prediction of
future tokens easier, indicated by Li _τf_, where
_≥_
_τf is a filtering threshold. We retain rationales for_
which adding the rationale reduces the loss by at
least τf compared to having no rationale.
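The filtering criterion can be implemented directly from this definition. The sketch below scores the future tokens with and without the rationale using Hugging Face `transformers`; the checkpoint handle and the exact text windowing are assumptions for illustration, while the 0.9 decay follows the text.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
lm = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

@torch.no_grad()
def weighted_future_loss(prefix, rationale, future, decay=0.9):
    """Weighted negative log-likelihood of `future` given `prefix` (+ optional rationale)."""
    context = prefix + (rationale or "")
    ctx_ids = tok(context, return_tensors="pt").input_ids
    fut_ids = tok(future, add_special_tokens=False, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, fut_ids], dim=1)
    logits = lm(input_ids).logits
    # Positions ctx_len-1 ... end-1 predict the future tokens.
    fut_logits = logits[0, ctx_ids.shape[1] - 1 : -1]
    logp = torch.log_softmax(fut_logits, dim=-1)
    token_logp = logp.gather(1, fut_ids[0].unsqueeze(1)).squeeze(1)
    # w_{j-i} decays by a factor of 0.9 for each step further from the rationale.
    weights = decay ** torch.arange(len(token_logp), dtype=torch.float)
    return -(weights * token_logp).sum().item()

def keep_rationale(prefix, rationale, future, tau_f):
    """Retain the rationale if it reduces the weighted loss by at least tau_f."""
    gain = weighted_future_loss(prefix, None, future) - \
           weighted_future_loss(prefix, rationale, future)
    return gain >= tau_f
```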
It’s crucial to clarify two key aspects of our rationale extraction process. First, while M extracts rationales from the training sets of reasoning datasets,
these training sets are not directly used as targets
when training M itself. Second, we explicitly instruct M to exclude answers from the extracted
rationales. This precaution prevents answer leakage in our prompts, thereby ensuring the integrity
of our reasoning process.
**3.2** **RATIONALYST Training**
The goal of RATIONALYST (denoted by MRa) training is to develop a model that can generate implicit
rationales to guide stepwise problem-solving during inference time. For web-scale datasets like The
Pile, the input context consists of a segment of text
from a document. MRa learns to generate an implicit rationale that can guide the prediction of the
next segment of text in the document’s flow. In
the case of structured reasoning datasets such as
GSM8K or ECQA, the input context includes the
question and any preceding reasoning steps toward
the answer. Here, MRa learns to generate rationales that could guide the next step in the problem-solving sequence.
Given the appropriate context from either source,
the implicit rationales, extracted and filtered as described in §3.1, serve as the target outputs during
training. The overall training objective is to minimize the per-token cross-entropy loss between the
generated rationales and their ground truth values
from the extracted and filtered rationales. By learning to generate appropriate rationales for both freeform text and structured problem-solving data, RA
TIONALYST develops the ability to provide meaningful guidance across a wide range of contexts
during inference.
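Concretely, each training example pairs a context with its target rationale, and only the rationale tokens contribute to the cross-entropy loss. The helper below is one plausible way to build such examples for a standard causal-LM fine-tuning loop; the `<BOT>`/`<EOT>` wrapping and the `-100` masking convention are assumptions about the training setup rather than the authors' exact script.

```python
def build_training_example(tokenizer, context, rationale, max_len=4096):
    """Supervise only the rationale tokens; context tokens get label -100
    so they are ignored by the cross-entropy loss."""
    ctx_ids = tokenizer(context, add_special_tokens=False).input_ids
    rat_ids = tokenizer("<BOT>" + rationale + "<EOT>",
                        add_special_tokens=False).input_ids
    input_ids = (ctx_ids + rat_ids)[:max_len]
    labels = ([-100] * len(ctx_ids) + rat_ids)[:max_len]
    return {"input_ids": input_ids, "labels": labels}
```

Such examples can then be padded and batched by any standard causal-LM data collator and trainer.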
**3.3** **Inference with the Help of RATIONALYST**
During inference, a general-purpose LLM (the
“agent model" or MAgent) is employed for reasoning
across various problems. Algorithm 1 outlines the
procedure.
_MAgent generates reasoning incrementally in a_
chain-of-thought fashion, producing multiple candidates for the next reasoning step. These steps
and the question form a “reasoning trajectory" T
that aims to solve the problem, which also serves
as input to MRa. MRa then generates r, the implicit
rationale (line 3). With the help of the implicit rationale,
we provide supervision for the next reasoning step.
Two supervision methods we considered are:
**Implicit** **supervision.** For this supervision,
_MAgent generates the next reasoning steps condi-_
tioned on the trajectory T (line 6). We then use
_M to estimate the probability of potential next rea-_
soning steps given rationale r and reasoning trajectory T (line 13). This probability-based heuristic
aligns with our rationale filtration process used during MRa training. Just as we identified rationales
that improved the prediction of future text during
filtration, here we use rationales to improve the
selection of future reasoning steps. By leveraging
the probability estimates as a heuristic, we can effectively discriminate between more and less likely
-----
next steps in the reasoning process, guiding the
overall trajectory towards more accurate conclusions.
**Explicit supervision.** Another approach is to directly incorporate the implicit rationale into the
generation of the next reasoning steps. This method
makes the previously implicit rationale an explicit
part of the reasoning process. To do that, we ask
_MAgent to generate multiple candidate next steps_
by temporarily appending r to the trajectory T, and
then producing potential continuations based on
this augmented context (line 8). Then, we estimate
the probability of candidate generations according
to MAgent (line 15). This approach allows MAgent to
make the final decision on the next reasoning step,
as in normal beam search (Snell et al., 2024; Yao
et al., 2023), while benefiting from the additional
context provided by MRa’s rationales.
**Algorithm 1 Inference with RATIONALYST**
**Input: Question q, RATIONALYST MRa, Agent model MAgent,**
Probability estimation model M ;
**Functions: Heuristic function H(MRa, q, T** ), stopping condition
_stop_condition()_
**Hyperparameters: Sampling temperature t and number of sam-**
pled rationales N
1: T ← _q_ _▷Initialize reasoning trajectory as the question._
2: repeat
3: _r ←_ _MRa(T_ ) _▷Generate implicit rationale given trajec-_
_tory._
4: _heuristic_list = ∅_ _▷Empty the heuristic list at every_
_step._
5: **if supervision == implicit then**
6: _next_steps ←_ _MAgent(T_ ) _▷Sample next reasoning_
_steps._
7: **else if supervision == explicit then**
8: _next_steps ←_ _MAgent(T, r)_
9: **end if**
10: **for n = 1 . . . N do**
11: _x ←_ _next_steps[n]_ _▷Take next step generation._
12: **if supervision == implicit then**
13: _h ←_ _M_ (x|T, r) ▷Estimate prob. of next reasoning
_step._
14: **else if supervision == explicit then**
15: _h ←_ _MAgent(x|T, r)_
16: **end if**
17: _heuristic_list.append(h)_ _▷Retain the heuristic._
18: **end for**
19: _max_idx ←_ _argmax(heuristic_list)_
20: _T ←_ _T +_ _next_steps[max_idx] ▷Extend trajectory with_
_the highest scoring step._
21: until stop_condition(T ) _▷E.g.,_ _the_ _trajectory_ _contains_
_strings like “The final answer is:”_
22: return T
After providing heuristics for the next reasoning steps, the step with the highest heuristic (line 19) is selected. The reasoning trajectory is then extended with this highest-scoring step (line 20). The reasoning process concludes when the stop condition is satisfied (line 21), which varies by dataset and often includes cues like “The final answer is:” that can be specified in system prompts for MAgent for different tasks.

The computational cost of MRa is comparable to a normal beam search, with the only additional cost being the generation of rationales, which are typically quite short.
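For readers who prefer code to pseudocode, Algorithm 1 can be paraphrased as the loop below. `generate_rationale`, `sample_next_steps`, and `score_step` stand in for MRa, MAgent, and the chosen heuristic (implicit or explicit); they are placeholders, not APIs defined by the paper.

```python
def rationalyst_inference(question, generate_rationale, sample_next_steps,
                          score_step, n_samples=3, max_steps=20):
    """Greedy step-by-step reasoning supervised by RATIONALYST (cf. Algorithm 1)."""
    trajectory = question
    for _ in range(max_steps):
        rationale = generate_rationale(trajectory)              # M_Ra
        # Implicit supervision samples from M_Agent(T); explicit supervision
        # would also pass the rationale into the sampler as extra context.
        candidates = sample_next_steps(trajectory, n_samples)   # M_Agent
        scores = [score_step(trajectory, rationale, c) for c in candidates]
        best = candidates[scores.index(max(scores))]            # highest heuristic
        trajectory = trajectory + "\n" + best
        if "The final answer is" in best:                       # stop condition
            break
    return trajectory
```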
**4** **Experimental Setup**
**4.1** **Setup for Training RATIONALYST**
**Rationale extraction.** As discussed in §3.1 A,
we perform pre-filtering on The Pile, an unlabelled
web-scale dataset, to identify documents with extensive reasoning content before rationale extraction. This is achieved by computing the average
semantic embedding from the training sets of the
reasoning datasets we test, filtering documents that
exceed the cosine similarity threshold α of 0.3,
and keeping only the documents with length under
2000 tokens to fit within LLaMa-3 models’ context length. The model we used to calculate these
embeddings is MPNet-base (Song et al., 2020).
Following the recipe in §3.1 B, we also ex
tract rationales from existing reasoning datasets.
GSM8K (Cobbe et al., 2021b) and ECQA (Aggarwal et al., 2021) were selected for their complementary coverage of mathematical and commonsense
reasoning, respectively. This combination ensures
RATIONALYST is trained on diverse reasoning patterns, enhancing its versatility across various tasks.
**Rationale annotation and filtration.** The model _M_ used for both rationale extraction and rationale filtering is LLaMa-3-8B-Instruct (MetaAI,
2024). On GSM8K and ECQA, we manually annotated 100 pairs of {preceding_context, rationale,
following_context} to determine an appropriate filtration threshold. The annotations include 50 positive and 50 negative rationale examples. Since
it’s straightforward to scale up the extraction of
rationales from unlabelled data for filtration, we
prioritize maximizing the precision of our filtered
rationales, even if it means extracting fewer of them.
We set the threshold τf to ensure that 95% of the
filtered rationales are accurate. On The Pile, we do
not perform rationale annotation due to its diverse
composition of corpora with varying characteristics. So the filter threshold τf for the Pile is set to
0 for all of its subdomains.
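The threshold selection itself reduces to a small search over the annotated examples: pick the smallest τf whose surviving rationales are at least 95% accurate. The sketch below assumes each annotated rationale has already been scored with the loss-reduction gain from §3.1 C; it is an illustration of the calibration procedure, not the authors' code.

```python
def pick_threshold(scored_annotations, target_precision=0.95):
    """scored_annotations: list of (gain, is_good) pairs, where `gain` is the
    loss reduction L_i of a manually annotated rationale and `is_good` is the
    human label (True for positive examples, False for negatives). Returns the
    smallest tau_f whose retained rationales reach the target precision."""
    for tau in sorted({gain for gain, _ in scored_annotations}):
        kept = [ok for gain, ok in scored_annotations if gain >= tau]
        if kept and sum(kept) / len(kept) >= target_precision:
            return tau
    return None  # no threshold reaches the target precision
```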
**The resulting data from extraction/filtration.**
The results of rationale extraction and filtration
on GSM8K, ECQA, and The Pile are presented
in Table 1. On GSM8K, our method generates an
average of 2.34 rationales per document, while on
-----
ECQA, it generates 2.58 rationales per document. The filtration process removes 80.5% of the generated rationales on GSM8K and 42.4% on ECQA. For The Pile, we report the number of rationales per document and the number after filtration for each subdomain. The Pile’s documents, being longer than those in GSM8K and ECQA, yield a higher average number of rationales per document. Among the subdomains, StackExchange retains the highest percentage of rationales, likely due to its question-answering format aligning well with our reasoning tasks and containing more inherent reasoning. However, The Pile as a whole contains less reasoning content, making rationale extraction challenging. Setting the threshold to 0 accepts all rationales more helpful than not having them, but the yield remains low. A manual review shows that most filtered rationales describe the preceding context rather than guiding future reasoning.

In total, we extracted approximately 14k rationales from GSM8K and ECQA combined, and about 65k from The Pile after filtration.

| Dataset | Subdomain | # Docs | # Rationales | Rationales Left (%) | τf |
|---|---|---|---|---|---|
| GSM8K | N/A | 7473 | 17566 | 19.5 | 1.2 |
| ECQA | N/A | 7600 | 19669 | 57.6 | 0.5 |
| The Pile | Pile-CC | 266.6K | 853.2K | 2.9 | 0 |
| The Pile | StackExchange | 21.8K | 113.6K | 29.8 | 0 |
| The Pile | Github | 19.9K | 45.8K | 2.6 | 0 |
| The Pile | HackerNews | 5.8K | 24.4K | 9.4 | 0 |
| The Pile | PubMed Central | 4.9K | 18.6K | 3.2 | 0 |
| The Pile | Wikipedia (en) | 4.2K | 23.0K | 7.8 | 0 |

Table 1: The statistics on rationale sampling and filtration. We provide the total number of documents and rationales before filtering, and the percentage of leftover rationales after filtering.

**RATIONALYST training.** RATIONALYST is fine-tuned with LLaMa-3-8B-Instruct as the base model. We use the default hyperparameters as specified in the LLaMa-3 technical report (MetaAI, 2024) for fine-tuning.

**4.2** **Setup for Evaluating RATIONALYST**

**Evaluation tasks and metrics.** A summary of the reasoning tasks we evaluate is provided in Table 2. We assess our method on the following datasets: GSM8K (Cobbe et al., 2021b) and MATH (Hendrycks et al., 2021) for mathematical reasoning, ECQA (Aggarwal et al., 2021) and HellaSwag (Zellers et al., 2019) for commonsense reasoning, ProofWriter (Tafjord et al., 2021) for logical reasoning, ARC (Clark et al., 2018) for scientific reasoning, and the recently proposed multi-task reasoning dataset MMLU-Pro (Wang et al., 2024) for holistic reasoning across multiple tasks. All tasks are evaluated using exact match as the metric. We apply the postprocessing setups from lm-evaluation-harness[2] before exact match calculation where applicable.

| Task | #Eval | #Shots | Reasoning Type |
|---|---|---|---|
| GSM8K | 1319 | 8 | Math |
| Math | 5000 | 5 | Math |
| ECQA | 17944 | 6 | CommonSense |
| HellaSwag | 10000 | 4 | CommonSense |
| ProofWriter | 600 | 2 | Logical |
| ARC | 1172 | 4 | Scientific |
| MMLU-Pro | 12000 | 5 | Mixed |

Table 2: Configuration of the reasoning tasks tested. “#Eval” shows the number of instances used for evaluation. “#Shots” denotes the number of few-shot demonstrations provided for evaluation. For the Math dataset, we report the results on all of the subsets. To evaluate ECQA, we use the validation split because the evaluation set requires running the evaluation on the original author’s server. For ProofWriter, we include only proofs with a depth of 5 or more. For ARC, we test on the ARC-Challenge subset.

**Inference setting.** The model MAgent used for our baseline inference is also LLaMa-3-8B-Instruct. As mentioned earlier, to incorporate RATIONALYST, we instruct MAgent to reason in a chain-of-thought manner. For procedural reasoning tasks like GSM8K, Math and MMLU-Pro, we provide in-context learning examples that break down the reasoning into individual steps leading to the final answer. For multiple-choice reasoning tasks like ECQA and ARC, we include examples that analyze and compare each answer choice. The content and number of in-context demonstrations align with lm-evaluation-harness or the original paper if available; otherwise, they are adjusted to fit the context window of LLaMa-3-8B-Instruct. Detailed prompts and in-context learning demonstrations are provided in Appendix B.

For all experiments, we employ a temperature of 0.7 during inference to facilitate sampling. This approach diverges from the conventional use of temperature 0, yielding improved performance on certain datasets (e.g., ProofWriter) while marginally reducing effectiveness on others (e.g., GSM8K). We set the sampling parameter top_k to 3, allowing MAgent to sample three reasoning steps simultaneously at each inference stage. For the baseline

[2] [https://github.com/EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness)
-----
without RATIONALYST, the next reasoning step is
chosen randomly from these 3 samples. When using RATIONALYST, the selection of the next step is
guided by the rationales generated, as described in
§3.3.
**Other verifiers.** To evaluate the effectiveness
of RATIONALYST’s process supervision, we compare it with other approaches. For process super_vision with other models, we include LLaMa-3-_
8B-Instruct and GPT-4 in our comparison. These
models are prompted to rerank partial reasoning
trajectories as reasoning steps are generated. The
prompts and in-context learning demonstrations
used for these models on representative datasets are
provided in Appendix D. For outcome supervision,
we also compare with outcome-based verifiers derived from LLaMa-3-8B-Instruct. These verifiers
are fine-tuned on the training sets of each reasoning
dataset. Following the approach outlined by Cobbe
et al. (2021b), they assess the correctness of the
final prediction by directly evaluating the question
and final solution. This comparison allows us to
assess the performance of RATIONALYST against
both process-based and outcome-based supervision
methods.
**5** **Empirical Results**
**5.1** **Main result: RATIONALYST Improves**
**Performance on Various Tasks**
In this section, we train RATIONALYST using a
combination of rationales extracted from GSM8K
and ECQA, as well as from The Pile, as outlined in
Table 1. The baseline does not use any verifier. We
use implicit supervision for this experiment. The
main result is shown in Table 3.
| Reasoning Type | Dataset | Baseline Acc. | RATIONALYST Acc. | ∆ Acc. |
|---|---|---|---|---|
| Mathematical | GSM8K | 77.6 | **81.6** | **4.0** |
| Mathematical | Math | 28.0 | **32.5** | **4.5** |
| CommonSense | ECQA | 72.6 | **75.2** | **2.6** |
| CommonSense | HellaSwag | 58.2 | **60.3** | **2.1** |
| Logical | ProofWriter | 86.4 | **90.7** | **4.3** |
| Scientific | ARC | 77.6 | **80.7** | **3.1** |
| Combined | MMLU-Pro | 39.6 | **45.3** | **5.7** |

Table 3: Accuracy and absolute improvement over baseline using RATIONALYST. **RATIONALYST generalizes across different reasoning tasks, showing improved performance with unlabelled web-scale data.**
Evaluation of RATIONALYST shows that training with rationales from GSM8K, ECQA, and The
Pile improves performance not only on GSM8K
and ECQA, but also on other reasoning tasks (e.g.
scientific reasoning, logical reasoning, etc) not directly used in rationale extraction. This supports
the idea that rationales can be broadly applicable
across different reasoning tasks. In addition, since
we use the same model (LLaMa-3-8B-Instruct) for
rationale extraction, filtering, RATIONALYST training, and inference, our results do not leverage external knowledge from stronger models like LLaMa-3-70B-Instruct or GPT-4. Future work might change
_M to stronger models, with the expectation that_
higher-quality rationales will lead to better performance.
**5.2** **Ablation: Web-scale Rationales Enhance**
**Performance Across Tasks**
To assess the benefit of web-scale rationales, we
train another model: RATIONALYST w/o Pile solely
on rationales extracted from the training sets of
GSM8K and ECQA. We re-ran the experiments on
the same reasoning datasets using implicit supervision. The results are detailed in Table 4.
We find that training the model on web-scale
data results in better performance compared to
training only on the rationales extracted from
GSM8K and ECQA. This improvement is consistent and particularly significant on MMLU-Pro.
Web-scale data likely provides exposure to more
diverse reasoning types and content, including specialized knowledge, complex real-world scenarios,
and interdisciplinary connections not present in the
more focused datasets.
| Dataset | RATIONALYST | RATIONALYST (w/o Pile) | ∆ Acc. |
|---|---|---|---|
| GSM8K | **81.6** | 80.3 | **-1.3** |
| Math | **32.5** | 31.4 | **-1.1** |
| ECQA | **75.2** | 74.5 | **-0.7** |
| HellaSwag | **60.3** | 59.1 | **-1.2** |
| ProofWriter | **90.7** | 88.2 | **-2.5** |
| ARC | **80.7** | 78.8 | **-1.9** |
| MMLU-Pro | **45.3** | 41.2 | **-4.1** |

Table 4: An ablation study on the benefit of rationales extracted from pre-training data (The Pile). We compare RATIONALYST against the “w/o Pile” model that is trained solely with rationales extracted from GSM8K and ECQA. The consistent accuracy drop shows that **utilizing web-scale rationales improves performance on various reasoning datasets.**
**5.3** **Ablation: Implicit Supervision Works**
**Better than Explicit Supervision**
In this section, we conduct ablation studies to test
the effectiveness of different supervision meth
-----
ods. To isolate the impact of supervision methods and minimize confounding variables, we focus
on GSM8K and ECQA as representative benchmarks for mathematical and commonsense reasoning, respectively. We train two versions of RA
TIONALYST: one on rationales extracted from the
GSM8K training set (RATIONALYST - GSM8K)
and another on rationales from the ECQA training
set (RATIONALYST - ECQA). These models are
used to supervise MAgent during inference on their
respective tasks.
As shown in Table 5, implicit supervision outperforms explicit supervision. Our manual analysis revealed that implicit supervision’s superior
performance stems from its greater robustness to
errors. When RATIONALYST generates an imperfect rationale, the probability-based heuristic used
in implicit supervision can still provide useful guidance even if the rationale itself is not ideal. This
approach is less likely to lead MAgent to produce
incorrect next steps. In contrast, explicit supervision directly incorporates potentially flawed rationales into the reasoning process, which can cause
_MAgent to produce incorrect next steps. Essentially,_
implicit supervision acts as a softer guide, allowing for some imperfection in rationales, while explicit supervision more strictly adheres to potentially flawed rationales, making it more susceptible
to errors.
| Heuristic ↓ \ Evaluation task → | GSM8K | ECQA |
|---|---|---|
| Implicit Supervision | **80.3** | **74.5** |
| Explicit Supervision | 77.5 | 72.2 |
Table 5: Comparison of implicit and explicit supervision methods on GSM8K and ECQA tasks. Implicit supervision out**performs explicit supervision due to its robustness to er-**
**rors.**
**5.4** **RATIONALYST Outperforms Other**
**Verifiers**
Table 6 presents an analysis of RATIONALYST
against various verifiers. Our findings reveal several insights:
**RATIONALYST outperforms vanilla LLaMa-3-8B-Instruct using process supervision:**
RATIONALYST, even without leveraging The
Pile dataset, outperforms process-based verifiers
using vanilla LLaMa-3-8B-Instruct. A manual
examination of reasoning trajectories suggests that
LLaMa-3-8B-Instruct faces difficulties in reranking partial reasoning steps. This challenge likely
stems from the model’s struggle to differentiate
among its own generated outputs, a phenomenon
observed in recent studies (Jiang et al., 2024b;
Huang et al., 2023).
**RATIONALYST shows superior process-supervision performance compared to much bigger models like GPT-4:** We observe consistently superior performance of RATIONALYST compared to GPT-4’s
process supervision. We hypothesize that this advantage arises from RATIONALYST’s specialized
design for providing supervision, in contrast to
GPT-4’s general-purpose training.
**RATIONALYST surpasses outcome-based ver-**
**ifiers trained using matching data:** Notably,
our method surpasses the performance of finetuned outcome-based verifiers on both GSM8K
and ECQA datasets, despite these verifiers being
trained on matching data. We attribute this success to the richer feedback provided by processbased supervision compared to outcome-based approaches.
| Supervision | GSM8K | ECQA |
|---|---|---|
| N/A | 77.6 | 72.6 |
| Process Supervision w/ LLaMa-3 | 77.4 | 71.5 |
| Process Supervision w/ GPT-4 | 80.0 | 74.7 |
| Outcome Supervision w/ LLaMa-3 + FT | 79.2 | 74.3 |
| RATIONALYST w/o Pile | 80.3 | 74.5 |
| RATIONALYST | **81.6** | **76.2** |
Table 6: Comparison of different supervision methods. Process supervision uses LLaMa-3 and GPT-4 to directly rerank
each reasoning step. Outcome-based supervision fine-tunes
LLaMa-3 on GSM8K and ECQA training sets to evaluate final answers. RATIONALYST outperforms both strong verifiers
like GPT-4 and similarly-sized models fine-tuned on matching
training data.
**5.5** **RATIONALYST Generates Accurate and**
**Easy-to-understand Rationales**
We annotate some samples from the test set of Math
(Hendrycks et al., 2021) at inference time, which
was not part of the rationale sampling datasets.
Through manual observation, we find that our
model can generate useful rationales that are helpful for understanding the LLM’s reasoning process
on Math (an example is provided in Appendix C).
Comparing the rationales generated by RATIONA
LYST with those generated by Quiet-Star (Zelikman
et al., 2024) on the same problems, we find that
our method produces more human-understandable
rationales. We believe this happens because Quiet-Star optimizes rationales during training using the
-----
accuracy of the final prediction as a reward. This
approach, while effective for improving task performance, does not explicitly prioritize human interpretability. In addition, this approach might inadvertently develop shortcuts or non-intuitive patterns
that optimize for accuracy but not necessarily for
clarity or human understanding.
**6** **Discussion**
**Scaling up RATIONALYST.** Scaling RATIONA
LYST with stronger models and increased computational resources is a logical next step. Utilizing
stronger models, such as LLaMa-3-70B or GPT-4, would enhance the quality of extracted rationales, improve filtration accuracy, and ultimately
strengthen RATIONALYST. However, due to computational constraints, we have not pursued this,
which remains a limitation of this paper. Additionally, using larger unlabelled datasets with more
extensive reasoning content, such as OpenWebMath (Paster et al., 2023), is currently infeasible
due to the significant computational and time requirements for pre-filtering and training. These
enhancements are planned for future work.
**Connection to research on scaling test-time com-**
**pute.** Recent research has focused on extending
computational resources at test-time (Snell et al.,
2024; Wu et al., 2024), particularly for complex
reasoning tasks. In our experiments, we focus
on developing heuristics and employ a straightforward approach of sampling multiple candidates and
reranking them based on RATIONALYST’s guidance. However, RATIONALYST’s framework is
compatible with more sophisticated test-time compute techniques. Its heuristics can be integrated
into existing algorithms like beam-search or lookahead search, potentially enhancing their performance without significantly increasing computational cost.
**Is training on extracted rationales necessary?**
In our approach, we first select a subset of unlabelled data that contains strong reasoning signals,
then extract implicit rationales from this data for
model fine-tuning. While it has been demonstrated
that training on data with robust reasoning signals
can enhance reasoning capabilities on its own (Gunasekar et al., 2023; Jiang et al., 2024a), we believe
our method offers additional performance benefits for two reasons. First, many language models have already been trained on datasets like The
Pile. The value of fine-tuning on previously encountered text is likely lower than the value of
fine-tuning on newly incorporated rationales. Second, implicit rationales encapsulate the reasoning
process. Pre-training on these rationales enhances
reasoning more effectively than focusing on the
whole document.
**7** **Limitations**
One limitation of this work is the comprehensiveness of our experiments. In future research, we plan
to extend our experiments to a broader range of
reasoning tasks and compare RATIONALYST with
other outcome-based and process-based verifiers.
We also plan to adjust the combination of rationales
used to train RATIONALYST by (1) sampling from
different reasoning tasks and (2) altering the mix
of rationales in unlabelled web-scale pre-training
data to better understand its generalizability.
**8** **Conclusion**
In this paper, we introduced RATIONALYST, a
novel self-supervised model designed to enhance
the reasoning capabilities of LLMs by leveraging
hidden rationales extracted from unlabeled text.
Our approach centers on the effective extraction
and utilization of implicit rationales–those underlying thought processes that are not explicitly stated
in the text but can be inferred. By capturing these
rationales, RATIONALYST provides a mechanism
for process supervision during reasoning, enabling
LLMs to reason better.
**Acknowledgements.** We sincerely thank Eric Zelikman, Tianmin Shu, and the broader JHU CLSP
community for discussions and inspiration.
-----
**References**
Shourya Aggarwal, Divyanshu Mandowara, Vishwajeet Agrawal, Dinesh Khandelwal, Parag Singla, and
Dinesh Garg. 2021. Explanations for CommonsenseQA: New Dataset and Models. In Proceedings
_of the 59th Annual Meeting of the Association for_
_Computational Linguistics and the 11th International_
_Joint Conference on Natural Language Processing_
_(Volume 1: Long Papers), Online. Association for_
Computational Linguistics.
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot,
Ashish Sabharwal, Carissa Schoenick, and Oyvind
[Tafjord. 2018. Think you have Solved Question An-](https://arxiv.org/abs/2102.03315)
[swering? Try ARC, the AI2 Reasoning Challenge.](https://arxiv.org/abs/2102.03315)
_arXiv preprint arXiv:1803.05457._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
[2021a. Training verifiers to solve math word prob-](https://arxiv.org/abs/2110.14168)
[lems. Preprint, arXiv:2110.14168.](https://arxiv.org/abs/2110.14168)
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
[2021b. Training verifiers to solve math word prob-](https://arxiv.org/abs/2110.14168)
[lems.](https://arxiv.org/abs/2110.14168)
Nathan Cornille, Marie-Francine Moens, and Florian
[Mai. 2024. Learning to plan for language modeling](https://arxiv.org/abs/2404.00614)
[from unlabeled data. Preprint, arXiv:2404.00614.](https://arxiv.org/abs/2404.00614)
Randall Davis. 1984. Diagnostic reasoning based on
structure and behavior. Artificial Intelligence, 24(1-3):347–410.
Shehzaad Dhuliawala, Mojtaba Komeili, Jing Xu,
Roberta Raileanu, Xian Li, Asli Celikyilmaz, and
[Jason Weston. 2023. Chain-of-verification reduces](https://arxiv.org/abs/2309.11495)
[hallucination in large language models. Preprint,](https://arxiv.org/abs/2309.11495)
arXiv:2309.11495.
Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and
Tushar Khot. 2023. Complexity-based prompting for
multi-step reasoning. In ICLR. OpenReview.net.
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang,
Horace He, Anish Thite, Noa Nabeshima, Shawn
Presser, and Connor Leahy. 2020. [The Pile: An](https://arxiv.org/abs/2101.00027)
[800gb dataset of diverse text for language modeling.](https://arxiv.org/abs/2101.00027)
_arXiv preprint arXiv:2101.00027._
Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio
César Teodoro Mendes, Allie Del Giorno, Sivakanth
Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo
de Rosa, Olli Saarikivi, Adil Salim, Shital Shah,
Harkirat Singh Behl, Xin Wang, Sébastien Bubeck,
Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee,
[and Yuanzhi Li. 2023. Textbooks are all you need.](https://arxiv.org/abs/2306.11644)
_Preprint, arXiv:2306.11644._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring Mathematical Problem Solving With the MATH Dataset. In NeurIPS,
Menlo Park, Calif. AAAI Press.
Jie Huang, Xinyun Chen, Swaroop Mishra,
Huaixiu Steven Zheng, Adams Wei Yu, Xiny[ing Song, and Denny Zhou. 2023. Large language](https://arxiv.org/abs/2310.01798)
[models cannot self-correct reasoning yet.](https://arxiv.org/abs/2310.01798)
Hyeonbin Hwang, Doyoung Kim, Seungone Kim,
Seonghyeon Ye, and Minjoon Seo. 2024. [Self-](https://arxiv.org/abs/2404.10346)
[explore to avoid the pit: Improving the reasoning](https://arxiv.org/abs/2404.10346)
[capabilities of language models with fine-grained re-](https://arxiv.org/abs/2404.10346)
[wards. Preprint, arXiv:2404.10346.](https://arxiv.org/abs/2404.10346)
Dongwei Jiang, Marcio Fonseca, and Shay B. Cohen.
[2024a. Leanreasoner: Boosting complex logical rea-](https://arxiv.org/abs/2403.13312)
[soning with lean. Preprint, arXiv:2403.13312.](https://arxiv.org/abs/2403.13312)
Dongwei Jiang, Jingyu Zhang, Orion Weller, Nathaniel
Weir, Benjamin Van Durme, and Daniel Khashabi.
2024b. [Self-[In]Correct: LLMs struggle with refining self-generated responses](https://arxiv.org/abs/2404.04298). _Preprint,_
arXiv:2404.04298.
Aitor Lewkowycz, Anders Andreassen, David Dohan,
Ethan Dyer, Henryk Michalewski, Vinay V. Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag,
Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur,
Guy Gur-Ari, and Vedant Misra. 2022. Solving quantitative reasoning problems with language models. In
_NeurIPS._
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen,
[Jian-Guang Lou, and Weizhu Chen. 2023. Making](https://arxiv.org/abs/2206.02336)
[large language models better reasoners with step-](https://arxiv.org/abs/2206.02336)
[aware verifier. Preprint, arXiv:2206.02336.](https://arxiv.org/abs/2206.02336)
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri
Edwards, Bowen Baker, Teddy Lee, Jan Leike,
John Schulman, Ilya Sutskever, and Karl Cobbe.
2023. [Let’s verify step by step.](https://arxiv.org/abs/2305.20050) _Preprint,_
arXiv:2305.20050.
Hugo Mercier and Dan Sperber. 2011. Why do humans reason? arguments for an argumentative theory.
_Behavioral and brain sciences, 34(2):57–74._
MetaAI. 2024. Introducing meta llama 3: The most
[capable openly available llm to date. https://ai.](https://ai.meta.com/blog/meta-llama-3/)
[meta.com/blog/meta-llama-3/.](https://ai.meta.com/blog/meta-llama-3/)
Keiran Paster, Marco Dos Santos, Zhangir Azerbayev, and Jimmy Ba. 2023. [Openwebmath: An open dataset of high-quality mathematical web text.](https://arxiv.org/abs/2310.06786)
_Preprint, arXiv:2310.06786._
Lance J Rips. 1994. The psychology of proof: Deductive
_reasoning in human thinking. Mit Press._
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta
Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola
Cancedda, and Thomas Scialom. 2023. Toolformer:
Language models can teach themselves to use tools.
_arXiv preprint arXiv:2302.04761._
-----
Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. 2024. [Scaling llm test-time compute optimally can be more effective than scaling model parameters.](https://arxiv.org/abs/2408.03314)
_Preprint, arXiv:2408.03314._
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2020. [Mpnet: Masked and permuted](https://arxiv.org/abs/2004.09297)
[pre-training for language understanding. Preprint,](https://arxiv.org/abs/2004.09297)
arXiv:2004.09297.
Oyvind Tafjord, Bhavana Dalvi, and Peter Clark. 2021.
Proofwriter: Generating implications, proofs, and
abductive statements over natural language. In ACL.
Amos Tversky, Daniel Kahneman, and Paul Slovic.
1982. Judgment under uncertainty: Heuristics and
_biases. Cambridge._
Robert Vacareanu, Anurag Pratik, Evangelia
Spiliopoulou, Zheng Qi, Giovanni Paolini,
Neha Anna John, Jie Ma, Yassine Benajiba, and
[Miguel Ballesteros. 2024. General purpose verifi-](https://arxiv.org/abs/2405.00204)
[cation for chain of thought prompting.](https://arxiv.org/abs/2405.00204) _Preprint,_
arXiv:2405.00204.
Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and
Anima Anandkumar. 2023. [Voyager: An open-](https://arxiv.org/abs/2305.16291)
[ended embodied agent with large language models.](https://arxiv.org/abs/2305.16291)
_Preprint, arXiv:2305.16291._
Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni,
Abhranil Chandra, Shiguang Guo, Weiming Ren,
Aaran Arulraj, Xuan He, Ziyan Jiang, Tianle Li, Max
Ku, Kai Wang, Alex Zhuang, Rongqi Fan, Xiang Yue,
[and Wenhu Chen. 2024. Mmlu-pro: A more robust](https://arxiv.org/abs/2406.01574)
[and challenging multi-task language understanding](https://arxiv.org/abs/2406.01574)
[benchmark. Preprint, arXiv:2406.01574.](https://arxiv.org/abs/2406.01574)
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou,
et al. 2022. [Chain-of-thought prompting elicits](https://arxiv.org/abs/2201.11903)
[reasoning in large language models. Advances in](https://arxiv.org/abs/2201.11903)
_Neural Information Processing Systems (NeurIPS),_
35:24824–24837.
Nathaniel Weir, Peter Clark, and Benjamin Van Durme.
[2023. Nellie: A neuro-symbolic inference engine for](https://arxiv.org/abs/2209.07662)
[grounded, compositional, and explainable reasoning.](https://arxiv.org/abs/2209.07662)
_Preprint, arXiv:2209.07662._
Nathaniel Weir, Kate Sanders, Orion Weller, Shreya
Sharma, Dongwei Jiang, Zhengping Jiang, Bhavana Dalvi Mishra, Oyvind Tafjord, Peter Jansen,
[Peter Clark, and Benjamin Van Durme. 2024. En-](https://arxiv.org/abs/2402.14798)
[hancing systematic decompositional natural lan-](https://arxiv.org/abs/2402.14798)
[guage inference using informal logic.](https://arxiv.org/abs/2402.14798) _Preprint,_
arXiv:2402.14798.
Yangzhen Wu, Zhiqing Sun, Shanda Li, Sean Welleck,
[and Yiming Yang. 2024. An empirical analysis of](https://arxiv.org/abs/2408.00724)
[compute-optimal inference for problem-solving with](https://arxiv.org/abs/2408.00724)
[language models. Preprint, arXiv:2408.00724.](https://arxiv.org/abs/2408.00724)
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran,
Thomas L. Griffiths, Yuan Cao, and Karthik
Narasimhan. 2023. [Tree of thoughts: Deliberate](https://arxiv.org/abs/2305.10601)
[problem solving with large language models. In Ad-](https://arxiv.org/abs/2305.10601)
_vances in Neural Information Processing Systems_
(NeurIPS).
Eric Zelikman, Georges Harik, Yijia Shao, Varuna
Jayasiri, Nick Haber, and Noah D. Goodman. 2024.
[Quiet-star: Language models can teach themselves to](https://arxiv.org/abs/2403.09629)
[think before speaking. Preprint, arXiv:2403.09629.](https://arxiv.org/abs/2403.09629)
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali
Farhadi, and Yejin Choi. 2019. HellaSwag: Can
a machine really finish your sentence? In ACL.
Fan Zhou, Haoyu Dong, Qian Liu, Zhoujun Cheng,
[Shi Han, and Dongmei Zhang. 2022. Reflection of](https://arxiv.org/abs/2210.05075)
[thought: Inversely eliciting numerical reasoning in](https://arxiv.org/abs/2210.05075)
[language models via solving linear systems. Preprint,](https://arxiv.org/abs/2210.05075)
arXiv:2210.05075.
-----
**A** **Prompts used for rationale sampling**
In this section, we provide the prompts we used for
rationale sampling on GSM8K (Figure 4), ECQA
(Figure 5), and The Pile (Figure 6).
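The rationales in Figures 4–6 are delimited by literal `<BOT>` and `<EOT>` tags. As a minimal illustration of how such annotated text can be separated back into plain text and rationale spans (this sketch is ours; the paper's actual extraction code is not shown here):

```python
import re

# Rationale spans are delimited by <BOT> ... <EOT>, as in Figures 4-6.
RATIONALE_PATTERN = re.compile(r"<BOT>(.*?)<EOT>", re.DOTALL)

def split_rationales(annotated_text: str):
    """Return (plain_text, rationales) for a <BOT>/<EOT>-annotated document."""
    rationales = RATIONALE_PATTERN.findall(annotated_text)
    plain_text = RATIONALE_PATTERN.sub("", annotated_text)
    # Collapse the double spaces left behind where a rationale was removed.
    plain_text = re.sub(r"[ \t]{2,}", " ", plain_text)
    return plain_text.strip(), [r.strip() for r in rationales]

if __name__ == "__main__":
    example = ("Answer: <BOT>First, compute the remaining golf balls<EOT> "
               "Michael had 58 - 23 = 35 golf balls.")
    text, rationales = split_rationales(example)
    print(text)        # Answer: Michael had 58 - 23 = 35 golf balls.
    print(rationales)  # ['First, compute the remaining golf balls']
```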
**B** **Prompts used during inference**
In this section, we provide the prompts used during
inference time to encourage the agent model reason
step by step for GSM8K (Figure 7) and ECQA
(Figure 8). Note that the input to the agent model
appends the last rationale generated by the agent
model.
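A minimal, hypothetical sketch of how the agent input described above could be assembled; the function and argument names are ours, and the exact concatenation format used in the experiments may differ:

```python
def build_agent_input(question: str, partial_answer: str, last_rationale: str = "") -> str:
    """Assemble the agent prompt from the question, the partial reasoning trace,
    and the most recent rationale (appended last), mirroring Figures 7-8."""
    prompt = f"Question: {question}\nAnswer: {partial_answer}".rstrip()
    if last_rationale:
        prompt += f"\n{last_rationale}"
    return prompt

print(build_agent_input(
    "Weng earns $12 an hour for babysitting. She did 50 minutes. How much did she earn?",
    "Weng earns 12/60 = $0.2 per minute.",
    "Working 50 minutes, she earns 0.2 per minute times 50.",
))
```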
**C** **Examples of rationales generated at**
**inference time**
In this section, we provide rationales generated by
RATIONALYST from the test set of MATH during
inference time Figure 9, which was not part of the
rationale sampling datasets, and observe that our
model can still generate useful rationales that help
to understand LLM’s reasoning process.
**D** **Prompts used for LLaMa-3 reranking**
In this section, we provide the prompts and incontext-learning demonstrations used to instruct
LLaMa-3-8B-Instruct and GPT-4 to provide feedback by directly reranking partial reasoning traces
given the question (Figure 10).
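Since Figure 10 asks the reranker to end its generation with "So the reward is:", the integer reward (0–3) can be recovered with a simple pattern match; the sketch below is illustrative and is not the parser used in the experiments:

```python
import re

def parse_reward(feedback: str, default: int = 0) -> int:
    """Extract the 0-3 reward that follows 'So the reward is:' in the feedback."""
    match = re.search(r"So the reward is:\s*([0-3])", feedback)
    return int(match.group(1)) if match else default

print(parse_reward(
    "This reasoning trajectory is correct and reasonable. So the reward is: 3"
))  # 3
```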
-----
**System Prompt:**
Your task is to add rationals to a piece of text. The rationals should help you with predicting future text. You can
add rationals by writing "<BOT>rational<EOT>". Here are some examples of rationale generation:
**Example Input 1:**
Question: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he
have at the end of wednesday?
Answer: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 =
33 golf balls. The answer is 33
**Example Output 1:**
Question: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he
have at the end of wednesday?
Answer: <BOT>First, we need to calculate how many golf balls Michael had after losing 23 on tuesday<EOT> Michael started with 58
golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. <BOT>After losing 2 more on wednesday, we need to calculate how many
**golf balls he had left<EOT> After losing 2 more, he had 35 - 2 = 33 golf balls. <BOT>We are ready to output the final answer<EOT>**
The answer is 33
**Example Input 2:**
Question: Brennan was researching his school project and had to download files from the internet to his computer to use for
reference. After downloading 800 files, he deleted 70% of them because they were not helpful. He downloaded 400 more files but
again realized that 3/5 of them were irrelevant. How many valuable files was he left with after deleting the unrelated files he
downloaded in the second round?
Answer: The number of non-valuable files Brennan downloaded in the first round is 70/100*800 = <<70/100*800=560>>560 files. The
number of valuable files Brennan downloaded in the first round is 800-560 = <<800-560=240>>240 When he downloaded 400 new files,
there were 3/5*400= <<3/5*400=240>>240 non-useful files, which he deleted again. The total number of valuable files he downloaded
in the second round is 400-240 = <<400-240=160>>160 To write his research, Brennan had 160+240 = <<160+240=400>>400 useful files to
reference to write his research. The answer is 400
**Example Output 2:**
Question: Brennan was researching his school project and had to download files from the internet to his computer to use for
reference. After downloading 800 files, he deleted 70% of them because they were not helpful. He downloaded 400 more files but
again realized that 3/5 of them were irrelevant. How many valuable files was he left with after deleting the unrelated files he
downloaded in the second round?
Answer: <BOT>First, we need to calculate how many non-valuable files Brennan downloaded in the first round<EOT> The number of
non-valuable files Brennan downloaded in the first round is 70/100*800 = <<70/100*800=560>>560 files. <BOT>Next, we need to
**calculate how many valuable files Brennan downloaded in the first round<EOT> The number of valuable files Brennan downloaded in the**
first round is 800-560 = <<800-560=240>>240 <BOT>After downloading 400 new files, we need to calculate how many non-valuable files
**he downloaded<EOT> When he downloaded 400 new files, there were 3/5*400= <<3/5*400=240>>240 non-useful files, which he deleted**
again. <BOT>Finally, we need to calculate how many valuable files he was left with<EOT> The total number of valuable files he
downloaded in the second round is 400-240 = <<400-240=160>>160 <BOT>Now we need to calculate the total number of valuable files he
**has left to write his research<EOT> To write his research, Brennan had 160+240 = <<160+240=400>>400 useful files to reference to**
write his research. <BOT>We are ready to output the final answer<EOT> The answer is 400
Figure 4: The prompt and in-context learning examples used for sampling rationales for GSM8K. The bolded rationales represent
implicit rationales in the document.
-----
**System Prompt:**
Your task is to add rationals to a piece of text. The rationals should help you with predicting future text. You can
add rationals by writing "<BOT>rational<EOT>". Here are some examples of rationale generation:
**Example Input 1:**
Question: He came across a raw item but his pack was full, he had to abandon something if he was to what the item?
Choices: A - join, B - acquire, C - engage, D - maintain, E - remit
Answer: Acquiring an item requires one to have space to carry it. As he had box full of items, he had to abandon one of them in
order to acquire new one. Join and Engage is not related to item. Maintain an item does not require one to abandon existing item.
One cannot remit raw item. The answer is B - acquire.
**Example Output 1:**
Question: He came across a raw item but his pack was full, he had to abandon something if he was to what the item?
Choices: A - join, B - acquire, C - engage, D - maintain, E - remit
Answer: Acquiring an item requires one to have space to carry it. <BOT>We also need to establish relationship between acquire and
**the question itself<EOT> As he had box full of items, he had to abandon one of them in order to acquire new one. <BOT>Let's check**
**whether other answer choices are related to the word item in question<EOT> Join and Engage is not related to item. <BOT>Let's check**
**whether other answer choices are contradictory to the word abandon in question<EOT> Maintain an item does not require one to**
abandon existing item. <BOT>Let's check whether other answer choices are related to the word raw item in question<EOT> One cannot
remit raw item. <BOT>We are ready to make final prediction<EOT> The answer is B - acquire.
**Example Input 2:**
Question: Where is aberdeen in the US located?
Choices: A - washington, B - europe, C - scotland, D - maryland, E - south dakota
Answer: Aberdeen is located in Washington state which is in US. Aberdeen is also located in Scotland which is part of Europe.
However, Scotland or Europe are not inside US. Aberdeen is not located in Maryland and South Dakota states. The answer is A -
washington.
**Example Output 2:**
Question: Where is aberdeen in the US located?
Choices: A - washington, B - europe, C - scotland, D - maryland, E - south dakota
Answer: Aberdeen is located in Washington state which is in US. <BOT>We need to think whether there is another aberdeen with the
**same name<EOT> Aberdeen is also located in Scotland which is part of Europe. <BOT>The answer choice need to be in the U.S. as**
**mentioned in the question<EOT> However, Scotland or Europe are not inside US. <BOT>Let's check whether aberdeen is located in other**
**answer choices<EOT> Aberdeen is not located in Maryland and South Dakota states. <BOT>We are ready to make final prediction<EOT>**
The answer is A - washington.
Figure 5: The prompt and in-context learning examples used for sampling rationales for ECQA. The bolded rationales represent implicit rationales in the document.
-----
**System Prompt:**
Your task is to add rationals to a piece of text. The rationals should help you with predicting
future text. You can add rationals by writing "<BOT>rational<EOT>". Here are one example of
rationale generation:
**Example Input:**
\n\nNot applicable.\n\nJZ takes responsibility for drafting the manuscript. JZ, DY and XS are the
attending doctors of this patient. QS is responsible for Pathological results and WW is for
Microbiological results. YS and XS is responsible for revision of the manuscript. All authors read
and approved the final manuscript.\n\nAuthors' information
{#FPar1}\n====================\n\nJiangnan Zhao and Dongmei Yuan are resident
physicians-in-training, and Yi Shi, Qunli Shi, Weiping Wang and Xin Su are attending specialist
physicians who dedicate their time to mentoring trainees
**Example Output:**
\n\nNot applicable.\n\nJZ takes responsibility for drafting the manuscript. <BOT>Indicates JZ's
**role, suggesting the next roles to be described will involve other tasks like attending doctors**
**and result analysis<EOT> JZ, DY and XS are the attending doctors of this patient. QS is**
responsible for Pathological results and WW is for Microbiological results. <BOT>Following the
**pattern, specific tasks related to results analysis are now described, predicting further details**
**on manuscript revision<EOT> YS and XS is responsible for revision of the manuscript. <BOT>The**
**revision responsibility follows naturally after result analysis, likely leading to a final**
**approval statement<EOT> All authors read and approved the final manuscript. <BOT>Final approval**
**statement indicates closure of the roles and tasks, suggesting a summary or author information**
**follows<EOT> \n\nAuthors\' information \{#FPar1\} <BOT>Transition to detailed author information**
**section, predicting descriptions of each author\'s background<EOT>**
\n====================\n\nJiangnan Zhao and Dongmei Yuan are resident physicians-in-training,
**<BOT>Starting with the roles of resident physicians-in-training, predicting subsequent details**
**about attending specialist physicians<EOT> and Yi Shi, Qunli Shi, Weiping Wang and Xin Su are**
attending specialist physicians who dedicate their time to mentoring trainees
**Example Input:**
Suppose -s*w = -26*w. Suppose -15 = -5*j - w*j. Let g(b) = b**3 - 3*b**2 - 2*b + 3. What is
g(j)?\n-3\nLet o(w) = -9 - 31214*w + 31220*w - 3. Determine o(1).\n-6\nLet y = -11 + 15. Suppose
-4*c + 30 = -y*r + 50, 0 = -5*c - r - 25. Let q(d) = d + 5. Give q(c).\n0\nLet r = 2899 - 2906.
Let i(s) = 2*s**2 + 12*s - 3. Calculate i(r).\n11\nLet q be 6/(-105) + 4199/595. Let m(t) = t**3 -
9*t**2 + 5*t + 5. Give m(q).\n-58\nLet d = 440 - 92
**Example Output:**
Suppose -s*w = -26*w. Suppose -15 = -5*j - w*j. Let g(b) = b**3 - 3*b**2 - 2*b + 3. What is
g(j)?\n<BOT>First solve the equations to find j, then substitute j into g(b) to calculate
**g(j)<EOT>\n-3\nLet o(w) = -9 - 31214*w + 31220*w - 3. Determine o(1).\n<BOT>Substitute w = 1 into**
**the function o(w) and simplify to determine its value<EOT>\n-6\nLet y = -11 + 15. Suppose -4*c +**
30 = -y*r + 50, 0 = -5*c - r - 25. Let q(d) = d + 5. Give q(c).\n<BOT>First calculate y, then
**solve the equations to find c, and finally substitute c into q(d) to calculate q(c)<EOT>\n0\nLet r**
= 2899 - 2906. Let i(s) = 2*s**2 + 12*s - 3. Calculate i(r).\n<BOT>First calculate r, then
**substitute r into the function i(s) to calculate i(r)<EOT>\n11\nLet q be 6/(-105) + 4199/595. Let**
m(t) = t**3 - 9*t**2 + 5*t + 5. Give m(q).\n<BOT>First calculate q, then substitute q into the
**function m(t) to determine m(q)<EOT>\n-58\nLet d = 440 - 92\n<BOT>Calculate the value of d as 440**
**- 92<EOT>**
Figure 6: The prompt and in-context learning examples used for sampling rationales for The Pile. The bolded rationales represent
implicit rationales in the document.
-----
**System Prompt:**
You are a smart assistant that solves math word problems. You will only generate one sentence that
extends the reasoning trajectory that solves the question given the question and partial answer
reasoning trajectory. Please don't repeat your previous generation while you're generating the
sentence. If you think you're ready to output the answer, you can finish the response with The
answer is:
**Example Input:**
Question: Weng earns $12 an hour for babysitting. Yesterday, she just did 50 minutes of
babysitting. How much did she earn?
Answer:
**Example Output:**
Weng earns 12/60 = $<<12/60=0.2>>0.2 per minute.
**Example Input:**
Question: Weng earns $12 an hour for babysitting. Yesterday, she just did 50 minutes of
babysitting. How much did she earn?
Answer: Weng earns 12/60 = $<<12/60=0.2>>0.2 per minute.
**Example Output:**
Working 50 minutes, she earned 0.2 x 50 = $<<0.2*50=10>>10. The answer is: 10
Figure 7: The prompt and in-context-learning demonstrations used during inference time to encourage the agent model to reason step by step on GSM8K.
-----
**System Prompt:**
You are a smart assistant that solves commonsense reasoning problems. You will only generate one
sentence that extends the reasoning trajectory that solves the question given the question and
partial answer reasoning trajectory. Please don't repeat your previous generation while you're
generating the sentence. Please analyze all answer choices before finishing reasoning. If you
think you're ready to output the answer, you can finish the response with The answer is:
**Example Input:**
Question: He came across a raw item but his pack was full, he had to abandon something if he was
to what the item?
Choices: A - join, B - acquire, C - engage, D - maintain, E - remit
Answer:
**Example Output:**
Join is not the type of the activity associated with item.
**Example Input:**
Question: He came across a raw item but his pack was full, he had to abandon something if he was
to what the item?
Choices: A - join, B - acquire, C - engage, D - maintain, E - remit
Answer: Join is not the type of the activity associated with item.
**Example Output:**
Acquiring requires space in a pack. To create space in a full pack, one has to abandon an item.
**Example Input:**
Question: He came across a raw item but his pack was full, he had to abandon something if he was
to what the item?
Choices: A - join, B - acquire, C - engage, D - maintain, E - remit
Answer: Join is not the type of the activity associated with item. Acquiring requires space in a
pack. To create space in a full pack, one has to abandon an item.
**Example Output:**
Engage is not related to item.
**Example Input:**
Question: He came across a raw item but his pack was full, he had to abandon something if he was
to what the item?
Choices: A - join, B - acquire, C - engage, D - maintain, E - remit
Answer: Join is not the type of the activity associated with item. Acquiring requires space in a
pack. To create space in a full pack, one has to abandon an item. Engage is not related to item.
**Example Output:**
Maintain an item does not require one to make space for it.
**Example Input:**
Question: He came across a raw item but his pack was full, he had to abandon something if he was
to what the item?
Choices: A - join, B - acquire, C - engage, D - maintain, E - remit
Answer: Join is not the type of the activity associated with item. Acquiring requires space in a
pack. To create space in a full pack, one has to abandon an item. Engage is not related to item.
Maintain an item does not require one to make space for it.
**Example Output:**
One cannot remit raw item. The answer is B - acquire.
Figure 8: The prompt and in-context-learning demonstrations used during inference time to encourage the agent model to reason step by step on ECQA.
-----
**System Prompt:**
You are a smart assistant that solves math problems. You will only generate one sentence that
extends the reasoning trajectory that solves the question given the question and partial answer
reasoning trajectory. Please don't repeat your previous generation while you're generating the
sentence.
**Example Input:**
Question: If $2x - 9y = 14$ and $6x = 42 +y$, what is the value of the product $xy$?
Answer:
**Generated Rationale:**
Let’s first rewrite the second equation to isolate $y$ in terms of $x$
**Example Output:**
Rewrite the second equation to express $y$ in terms of $x$: $6x = 42 + y \implies y = 6x - 42$
**Example Input:**
Question: If $2x - 9y = 14$ and $6x = 42 +y$, what is the value of the product $xy$?
Answer: Rewrite the second equation to express $y$ in terms of $x$: $6x = 42 + y \implies y = 6x - 42$
**Generated Rationale:**
Substitute the expression for $y$ into the first equation
**Example Output:**
Substitute $y = 6x - 42$ into the first equation: $2x - 9(6x - 42) = 14$
**Example Input:**
Question: If $2x - 9y = 14$ and $6x = 42 +y$, what is the value of the product $xy$?
Answer: Rewrite the second equation to express $y$ in terms of $x$: $ 6x = 42 + y \implies y = 6x
- 42 $. Substitute $y = 6x - 42$ into the first equation: $2x - 9(6x - 42) = 14$
**Generated Rationale:**
We should simplify the equation and solve for $x$
**Example Output:**
Simplify and solve for $x$: $2x - 54x + 378 = 14 \implies -52x + 378 = 14 \implies -52x = -364
\implies x = 7$
**Example Input:**
Question: If $2x - 9y = 14$ and $6x = 42 +y$, what is the value of the product $xy$?
Answer: Rewrite the second equation to express $y$ in terms of $x$: $6x = 42 + y \implies y = 6x - 42$. Substitute $y = 6x - 42$ into the first equation: $ 2x - 9(6x - 42) = 14 $. Simplify and
solve for $x$: $2x - 54x + 378 = 14 \implies -52x + 378 = 14 \implies -52x = -364 \implies x = 7$
**Generated Rationale:**
Use the value of $x$ to calculate $y$
**Example Output:**
Substitute $x = 7$ back into $y = 6x - 42$ to find $y$: $y = 6(7) - 42 = 42 - 42 = 0$
**Example Input:**
Question: If $2x - 9y = 14$ and $6x = 42 +y$, what is the value of the product $xy$?
Answer: Rewrite the second equation to express $y$ in terms of $x$: $6x = 42 + y \implies y = 6x - 42$. Substitute $y = 6x - 42$ into the first equation: $2x - 9(6x - 42) = 14$. Simplify and solve
for $x$: $2x - 54x + 378 = 14 \implies -52x + 378 = 14 \implies -52x = -364 \implies x = 7$.
Substitute $x = 7$ back into $y = 6x - 42$ to find $y$: $y = 6(7) - 42 = 42 - 42 = 0$
**Generated Rationale:**
Now we can just multiply the values of $x$ and $y$ to find the product
**Example Output:**
Finally, calculate the product $xy$: $xy = 7 \cdot 0 = 0$. The value of the product $xy$ is
$\boxed{0}$.
Figure 9: Rationales generated by RATIONALYST for the test set of MATH.
-----
**System Prompt:**
Your task is to assign rewards to the reasoning trajectory that solves the question given the
question and partial answer reasoning trajectory. The more probable the reasoning trajectory is
correct, the higher the reward should be. The reward should be an integer in the range of 0 to 3.
End your generation with So the reward is:
**Example Input:**
Question: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2
more. How many golf balls did he have at the end of wednesday?
Answer: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35.
**Example Output:**
This reasoning trajectory is correct and reasonable. So the reward is: 3
**Example Input:**
Question: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How
many lollipops did Jason give to Denny?
Answer: Jason started with 20 lollipops. After giving some to Denny, he had 12 left. To find out
how many he gave away, we need to add 12 + 20, which gives us 32. So, Jason gave Denny 32
lollipops.
**Example Output:**
This reasoning trajectory incorrectly adds instead of subtracting the lollipops given to Denny,
leading to an illogical result. So the reward is: 1.
Figure 10: The prompt and in-context-learning demonstrations used during process supervision to elicit feedback by directly reranking the partial reasoning trajectory.
-----
| [
"Dongwei, Jiang",
"Guoxuan, Wang",
"Daniel, Khashabi",
"Andrew, Wang",
"Yining, Lu",
"Benjamin, Van Durme",
"Jingyu, Zhang",
"Chuyu, Liu"
] | 2024-10-01T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2410.01044 | https://arxiv.org/abs/2410.01044 | https://www.semanticscholar.org/paper/16e2c1e988b13329e3b7fc2dbbf854f9414721d7 |
RUPBench: Benchmarking Reasoning Under Perturbations for Robustness Evaluation in Large Language Models | With the increasing use of large language models (LLMs), ensuring reliable performance in diverse, real-world environments is essential. Despite their remarkable achievements, LLMs often struggle with adversarial inputs, significantly impacting their effectiveness in practical applications. To systematically understand the robustness of LLMs, we present RUPBench, a comprehensive benchmark designed to evaluate LLM robustness across diverse reasoning tasks. Our benchmark incorporates 15 reasoning datasets, categorized into commonsense, arithmetic, logical, and knowledge-intensive reasoning, and introduces nine types of textual perturbations at lexical, syntactic, and semantic levels. By examining the performance of state-of-the-art LLMs such as GPT-4o, Llama3, Phi-3, and Gemma on both original and perturbed datasets, we provide a detailed analysis of their robustness and error patterns. Our findings highlight that larger models tend to exhibit greater robustness to perturbations. Additionally, common error types are identified through manual inspection, revealing specific challenges faced by LLMs in different reasoning contexts. This work provides insights into areas where LLMs need further improvement to handle diverse and noisy inputs effectively. | This work presents RUPBench, a comprehensive benchmark designed to evaluate LLM robustness across diverse reasoning tasks, and highlights that larger models tend to exhibit greater robustness to perturbations. | ## RUPBench: Benchmarking Reasoning Under Perturbations for Robustness Evaluation in Large Language Models
**Yuqing Wang[*]**
Stanford University
[email protected]
**Yun Zhao[*]**
Meta Platforms, Inc.
[email protected]
*Denotes equal contribution.

**Abstract**

With the increasing use of large language models (LLMs), ensuring reliable performance in diverse, real-world environments is essential. Despite their remarkable achievements, LLMs often struggle with adversarial inputs, significantly impacting their effectiveness in practical applications. To systematically understand the robustness of LLMs, we present RUPBench, a comprehensive benchmark designed to evaluate LLM robustness across diverse reasoning tasks. Our benchmark incorporates 15 reasoning datasets, categorized into commonsense, arithmetic, logical, and knowledge-intensive reasoning, and introduces nine types of textual perturbations at lexical, syntactic, and semantic levels. By examining the performance of state-of-the-art LLMs such as GPT-4o, Llama3, Phi-3, and Gemma on both original and perturbed datasets, we provide a detailed analysis of their robustness and error patterns. Our findings highlight that larger models tend to exhibit greater robustness to perturbations. Additionally, common error types are identified through manual inspection, revealing specific challenges faced by LLMs in different reasoning contexts. This work provides insights into areas where LLMs need further improvement to handle diverse and noisy inputs effectively. Our data and code are available at [https://github.com/EternityYW/RUPBench](https://github.com/EternityYW/RUPBench).

**1** **Introduction**

Large language models (LLMs) have gained increasing popularity due to their unprecedented performance in various tasks such as sentiment analysis (Miah et al., 2024), complex reasoning (Wang et al., 2023a), and time series analysis (Zhao et al., 2021; Wang et al., 2022b). Models like GPT-3 (Brown et al., 2020), GPT-4o (gpt 4o, 2024), and Llama3 (AI@Meta, 2024) have set new benchmarks in natural language processing, pushing the boundaries of what these systems can achieve. However, as the deployment of LLMs in real-world applications grows, particularly in high-risk domains, ensuring their robustness against diverse and potentially adversarial inputs becomes critical.

Despite advancements, LLMs remain vulnerable to perturbations that can significantly degrade their performance. These perturbations can come in various forms, including lexical variations (e.g., typos), syntactic changes (e.g., cleft constructions), and semantic distractions (e.g., red herrings). Such weaknesses pose serious challenges, especially in applications requiring high reliability and accuracy, such as healthcare (Wang et al., 2024b), legal document analysis (Cheong et al., 2024), and automated customer service (Kolasani, 2023).

Several studies have explored the robustness of LLMs from various angles. For instance, datasets like AdvGLUE (Wang et al., 2021) and AdvGLUE++ (Wang et al., 2024a) are specifically designed to test how language models respond to adversarial inputs, which are meticulously altered to elicit incorrect responses from the models. Wang et al. (2023b) assessed the robustness of ChatGPT and other LLMs against adversarial and out-of-distribution (OOD) samples, while Zhuo et al. (2023) evaluated the robustness of semantic parsing. However, these studies focus on restricted tasks or types of perturbations, lacking a holistic evaluation framework that comprehensively assesses robustness across multiple categories and distinct perturbation types. Additionally, they do not delve deeply into the specific error patterns induced by different perturbations, leaving gaps in understanding how to enhance the models' resilience in practical applications.

To address this gap, we introduce the **Reasoning Under Perturbations Benchmark (RUPBench)**, a comprehensive benchmark designed to evaluate the robustness of LLMs across different reasoning tasks. RUPBench includes 15 source datasets
-----
spanning four major reasoning categories: commonsense, arithmetic, logical, and knowledge-intensive. Each dataset is subjected to nine types
of textual perturbations, covering lexical, syntactic, and semantic levels, to simulate real-world
input variations. Then, we conduct extensive
experiments with several leading LLMs using
RUPBench, including GPT-4o (gpt 4o, 2024),
Llama3 (AI@Meta, 2024), Phi-3 (Abdin et al.,
2024), and Gemma (Team et al., 2024) models,
assessing their performance on both original and
perturbed datasets. By analyzing the models’ responses, we provide insights into their robustness
and identify common error patterns. Our findings
indicate that larger models generally exhibit greater
robustness to perturbations. Manual inspection of
incorrect predictions highlights specific error types
prevalent across all LLMs, directing areas for improvement and emphasizing the need for targeted
strategies to address these weaknesses by task.
In summary, our contributions are threefold:
(1) We introduce RUPBench, a comprehensive
benchmark designed to systematically evaluate the robustness of LLMs across 15 reasoning tasks, incorporating nine types of textual
perturbations, resulting in a total of 365,580
perturbed samples.
(2) We assess the performance of several state-of-the-art LLMs, including GPT-4o, Llama3,
Phi-3, and Gemma, on both original and perturbed datasets. Our extensive analysis provides detailed insights into their robustness
across different tasks and perturbations.
(3) We identify common error types from perturbations through manual inspection, highlighting challenges LLMs face, such as context misinterpretation and knowledge gaps, to
guide future research towards more resilient
and reliable LLMs.
**2** **Related Work**
In this section, we provide an overview of LLM
evaluation, with a focus on robustness. We also
discuss the role of textual perturbations in assessing
the robustness and safety of LLMs.
**2.1** **LLM Evaluation**
Pretrained language models like BERT (Kenton
and Toutanova, 2019) and RoBERTa (Liu et al.,
2019) have been the standard practice in many NLP
tasks. However, the introduction of GPT-3 (Brown
et al., 2020) shifted the focus towards minimal
fine-tuning approaches, such as zero-shot (Kojima et al., 2022) and few-shot learning. Recently, advanced LLMs like GPT-4o (gpt 4o, 2024),
Llama3 (AI@Meta, 2024), and Gemini (Team
et al., 2023) have demonstrated significant improvements across various domains, including complex
reasoning (Wang and Zhao, 2023b,a; Xia et al.,
2024), machine translation (Ding et al., 2023), and
text classification (Wang et al., 2022a, 2023c).
Given the remarkable performance of LLMs,
their evaluation has garnered significant attention
across areas like robustness (Dong et al., 2023),
hallucination (Li et al., 2023), healthcare (Wang
et al., 2023d), and ethics (Wan et al., 2023). Benchmarks such as GLUE (Wang et al., 2018) and SuperGLUE (Wang et al., 2019) have been foundational in advancing natural language understanding tasks. More recent benchmarks, including
MMLU (Hendrycks et al., 2020b), BigBench (Srivastava et al., 2023), and HellaSwag (Zellers et al.,
2019), assess capabilities in knowledge understanding and complex reasoning.
Robustness is particularly crucial for LLMs as it
ensures reliable performance in diverse, real-world
environments and the ability to handle noisy, incomplete, or adversarial inputs (Wang et al., 2024c).
Existing benchmarks like AdvGLUE (Wang et al.,
2021) and AdvGLUE++ (Wang et al., 2024a), built
on the foundation of GLUE, focus on evaluating
robustness. However, these benchmarks do not
sufficiently challenge the advanced capabilities of
current LLMs, underscoring the need for more rigorous assessments.
Our benchmark, RUPBench, addresses this critical gap by incorporating diverse recent datasets that
emphasize complex reasoning. This approach not
only enhances performance differentiation but also
pushes the boundaries of reasoning and knowledge
in advanced LLMs, making it an essential tool for
the next generation of LLM evaluation.
**2.2** **Textual Perturbations and LLM Safety**
Textual perturbations involve creating variations in
input text to evaluate the robustness and safety of
LLMs. Unlike efforts aimed at generating potentially harmful outputs, such as SafetyPrompts (Sun
et al., 2023) or prompt injection attacks (Esmradi
et al., 2023), our perturbations mimic plausible user
mistakes in data samples. Our goal is to ensure that
-----
[Figure 1 diagram: Original Reasoning Data (commonsense, logical, arithmetic, and knowledge-intensive datasets such as CommonsenseQA, TRAM, PIQA, QASC, Social IQA, ETHICS, ReClor, LogiQA2.0, ART, GSM8K, AQuA-RAT, and MMLU) → Adversarial Perturbations (lexical: homophones, typos, Leetspeak; syntactic: It-cleft, Wh-cleft, compound variations; semantic: red herrings, CheckList, StressTest) → Expert Review (ten experts evaluate and revise perturbations for error correction and readability; samples accepted with approval from over 60% of experts) → RUPBench Data]
Figure 1: Overview of the data construction pipeline for RUPBench.
LLMs can manage diverse, noisy, or slightly incorrect inputs without producing erroneous or harmful
outputs, thereby enhancing their robustness and
safety in real-world applications. Additionally, categorizing perturbations into lexical, syntactic, and
semantic levels from a linguistic perspective covers
a broad spectrum of text variations, enabling a nuanced understanding of how different perturbations
affect LLM performance.
**3** **Dataset Construction**
In this section, we introduce the 15 source reasoning datasets spanning commonsense, logic, arithmetic, and cross-domain areas. We describe the
nine general text-based perturbations applied at
lexical, syntactic, and semantic levels, resulting in
a total of 365,580 perturbed samples. We also detail the involvement of human experts to ensure the
quality and validity of the perturbations. The overall data construction pipeline is shown in Figure 1.
**3.1** **Tasks and datasets**
We consider 15 representative text-based reasoning datasets, which are categorized into four major
reasoning groups: commonsense reasoning, arithmetic reasoning, logical reasoning, and knowledge-intensive reasoning. Table 1 provides an overview
of the reasoning datasets and tasks.
**3.1.1** **Commonsense Reasoning**
This group encompasses nine datasets covering
various dimensions of commonsense reasoning.
- CommonsenseQA (Talmor et al., 2019): Focuses on general commonsense knowledge,
requiring models to answer questions based
on everyday scenarios.
- TRAM (Wang and Zhao, 2023c): Assesses
the model’s ability to understand and reason
about time-related information such as frequency, ordering, duration, and typical time.
- PIQA (Bisk et al., 2020): Targets physical interaction reasoning, challenging models with
questions about everyday situations, favoring
atypical solutions.
- QASC (Khot et al., 2020): Centers on scientific reasoning, requiring models to integrate and apply scientific knowledge to answer questions.
- Social IQA (Sap et al., 2019): Emphasizes social reasoning, evaluating the model’s understanding of the social implications of everyday
events and situations.
- Cosmos QA (Huang et al., 2019): Focuses
on contextual reasoning, requiring models to
draw inferences from contextual information
in narrative passages.
- NumerSense (Lin et al., 2020): Tests numerical reasoning by requiring models to fill in
missing numerical values (zero to ten) or “no”
in given sentences.
- RiddleSense (Lin et al., 2021): Challenges
models to solve riddles that often require multiple pieces of commonsense knowledge and
figurative language.
- ETHICS (Hendrycks et al., 2020a): Focuses
on moral reasoning, assessing the model’s
ability to make ethical judgments and understand moral principles.
**3.1.2** **Arithmetic Reasoning**
This group comprises two datasets focusing on
math word problems.
-----
- GSM8K (Cobbe et al., 2021): Contains grade
school math word problems requiring basic
arithmetic and reasoning.
- AQuA-RAT (Ling et al., 2017): Comprises
algebraic math word problems, requiring models to answer multiple-choice questions and
generate rationales.
**3.1.3** **Logical Reasoning**
This group comprises three datasets focused on deductive reasoning (i.e., drawing conclusions based
on premises) and abductive reasoning (i.e., forming
hypotheses from incomplete information) tasks.
- ReClor (Yu et al., 2019): Contains logical
reasoning problems from standardized tests
such as LSAT and GMAT, requiring models
to perform deductive reasoning.
- LogiQA2.0 (Liu et al., 2023): Contains logical reasoning problems from the Chinese Civil
Service Examination, including natural language inference (NLI) and machine reading
comprehension (MRC) tasks.
- ART (Bhagavatula et al., 2019): Focuses on
abductive reasoning, challenging models to
select the most plausible explanation (hypothesis) given a pair of observations.
**3.1.4** **Knowledge-Intensive Reasoning**
We consider the MMLU (Hendrycks et al., 2020b)
benchmark as the standard for knowledge-intensive
reasoning, encompassing a broad range of exam
questions from 57 subjects across STEM, social
sciences, humanities, and more.
**3.2** **Perturbation Categories**
We consider each reasoning dataset’s validation or
test sets as our source samples, upon which we
perform various perturbations. Specifically, we categorize these perturbations into three major types:
lexical, syntactic, and semantic. Our perturbations
are designed to induce incorrect responses from
the LLM while preserving the essence of the original content, ensuring that the ground truth answer
remains unchanged despite the perturbations. Examples of RUPBench can be found in Appendix A.
**3.2.1** **Lexical Perturbation**
Lexical perturbations involve modifying individual
words within the text to evaluate the model’s robustness to variations. We consider three specific
types of lexical perturbations: homophones, typos,
and leetspeak, due to their ability to simulate common real-world challenges like phonetic confusion,
typographical errors, and informal language.
- Homophones: This involves replacing words
with their homophones, i.e., words that sound
the same but have different meanings and
spellings. For instance, “meet” might be replaced with “meat”. Using the CMU Pronouncing Dictionary, we identify homophones
for each word in a sentence and randomly select replacements.
- Typos: This introduces random spelling errors
into the text. Methods include swapping adjacent characters, inserting random characters,
deleting characters, or replacing characters
with random ones. For example, “example”
might become “exmaple” or “ex@ample” (a sketch of these character-level edits appears after this list).
- Leetspeak (Wei et al., 2024): This is a system
of modified spellings used primarily on the
Internet. This perturbation translates text into
leetspeak by replacing letters with numbers
or symbols that resemble them. For example, “write” might become “WR1735”. Each
character is mapped to a set of possible replacements, and one is randomly chosen.
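The following is a minimal sketch of the character-level edits described above for the typo and leetspeak perturbations; the edit operations follow the descriptions in the list, but the replacement table and sampling details are illustrative rather than the exact implementation used to build RUPBench.

```python
import random

# Illustrative leetspeak map; RUPBench samples one of several candidate
# replacements per character, so the table here is only an example.
LEET_MAP = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5", "t": "7", "l": "1"}

def perturb_typo(word: str, rng: random.Random) -> str:
    """Apply one random typo edit: swap adjacent characters, insert, delete, or replace."""
    if len(word) < 2:
        return word
    i = rng.randrange(len(word) - 1)
    op = rng.choice(["swap", "insert", "delete", "replace"])
    if op == "swap":
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    if op == "insert":
        return word[:i] + rng.choice("abcdefghijklmnopqrstuvwxyz@#") + word[i:]
    if op == "delete":
        return word[:i] + word[i + 1:]
    return word[:i] + rng.choice("abcdefghijklmnopqrstuvwxyz") + word[i + 1:]

def perturb_leet(word: str) -> str:
    """Replace letters with visually similar digits/symbols (leetspeak)."""
    return "".join(LEET_MAP.get(ch.lower(), ch) for ch in word)

rng = random.Random(0)
print(perturb_typo("example", rng))  # e.g. 'exmaple' or 'ex@ample'
print(perturb_leet("write"))         # 'wr173'
```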
**3.2.2** **Syntactic Perturbation**
Syntactic perturbations involve modifying the structure of sentences to evaluate the model’s understanding of grammar and sentence construction.
We consider three specific types of syntactic perturbations: It-cleft, Wh-cleft, and compound variations. These perturbations are selected for their
ability to challenge the model’s syntactic parsing
capabilities and emphasize different aspects of sentence structure and focus.
- It-cleft: This restructures sentences using the
It-cleft construction, which highlights a specific part of the sentence by placing it after
“It was”. For example, “The dog chased the
cat” becomes “It was the dog that chased the
cat”. This method involves using the spaCy
library (Honnibal and Montani, 2017) to identify the subject, verb, and object in a sentence and rephrase it to fit the It-cleft structure (see the sketch after this list).
- Wh-cleft: This restructures sentences using
the Wh-cleft construction, which highlights a
specific part of the sentence with Wh-words
-----
Table 1: Summary statistics of RUPBench. The benchmark is constructed using the validation or test sets from
15 source reasoning datasets, depending on availability and the presence of ground truth labels. ‘Pert.’ refers to
perturbed, indicating the total number of samples after applying nine types of general perturbations to each original
validation/test sample, with the original sample count shown in parentheses. For datasets like TRAM and ETHICS,
which include multiple subtasks beyond commonsense reasoning, we extract the relevant samples for our analysis.
| Dataset | Domain | Answer Type | # Train Samples (Source) | # Pert. Val/Test Samples (RUPBench) |
|---|---|---|---|---|
| **Commonsense Reasoning** | | | | |
| CommonsenseQA | General | 5-Way MC | 9,741 | 10,989 (1,221) |
| TRAM | Temporal | 3-Way MC | N/A | 29,610 (3,290) |
| PIQA | Physical | 2-Way MC | 16,113 | 16,542 (1,838) |
| QASC | Science | 8-Way MC | 8,134 | 8,334 (926) |
| Social IQA | Social | 3-Way MC | 33,410 | 17,586 (1,954) |
| Cosmos QA | Contextual | 4-Way MC | 25,262 | 26,865 (2,985) |
| NumerSense | Numerical | Number | 10,444 | 1,800 (200) |
| RiddleSense | Riddle | 5-Way MC | 3,510 | 9,189 (1,021) |
| ETHICS | Moral | 2-Way MC | 13,910 | 35,676 (3,964) |
| **Arithmetic Reasoning** | | | | |
| GSM8K | Grade School Math | Number | 7,473 | 11,871 (1,319) |
| AQuA-RAT | Algebra | 5-Way MC | 97,467 | 4,572 (508) |
| **Logical Reasoning** | | | | |
| ReClor | Deductive | 4-Way MC | 4,638 | 4,500 (500) |
| LogiQA2.0 | Deductive | 2/4-Way MC | 44,098 | 47,880 (5,320) |
| ART | Abductive | 2-Way MC | 169,654 | 13,788 (1,532) |
| **Knowledge-Intensive Reasoning** | | | | |
| MMLU | Multi-discipline | 4-Way MC | N/A | 126,378 (14,042) |
like “what”, “who”, “where”, etc. For example, “The dog chased the cat” becomes “What
the dog chased was the cat”. Similar to the
It-cleft method, we use the spaCy library to
identify key elements and rephrase them to fit
the Wh-cleft structure.
- Compound Variations: This perturbation
creates complex sentence structures by incorporating subordinating conjunctions, quantifiers, and modifying punctuation. For example, a simple sentence can be made more intricate with conjunctions like “although” and
quantifiers like “several”. We use the NLTK library (Bird et al., 2009) to tokenize sentences,
identify parts of speech, and insert suitable
conjunctions and quantifiers. Punctuation is
then adjusted to form compound sentences.
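As referenced in the It-cleft item above, a minimal spaCy-based sketch of the It-cleft rewrite is given below; the dependency handling is simplified (single subject and object, active voice) and is not the full implementation used for RUPBench.

```python
import spacy

# Assumes the small English pipeline is installed: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def it_cleft(sentence: str) -> str:
    """Rewrite 'The dog chased the cat.' as 'It was the dog that chased the cat.'"""
    doc = nlp(sentence)
    root = next((t for t in doc if t.dep_ == "ROOT" and t.pos_ == "VERB"), None)
    subj = next((t for t in doc if t.dep_ == "nsubj"), None)
    obj = next((t for t in doc if t.dep_ in ("dobj", "obj")), None)
    if not (root and subj and obj):
        return sentence  # fall back to the original sentence if parsing fails
    subj_span = " ".join(t.text for t in subj.subtree)
    obj_span = " ".join(t.text for t in obj.subtree)
    subj_span = subj_span[0].lower() + subj_span[1:]  # naive de-capitalization
    return f"It was {subj_span} that {root.text} {obj_span}."

print(it_cleft("The dog chased the cat."))  # It was the dog that chased the cat.
```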
**3.2.3** **Semantic Perturbation**
Semantic perturbations modify the meaning or context of the text to evaluate the model’s understanding of deeper linguistic aspects. We consider three
specific types of semantic perturbations: Red herrings, CheckList (Ribeiro et al., 2020) items, and
StressTest (Naik et al., 2018) statements. These
perturbations assess the model’s ability to maintain
logical consistency and focus on relevant information despite the presence of distracting, irrelevant,
or misleading content.
- Red Herrings (RHs): This introduces contextually plausible but irrelevant information
designed to distract the model, aiming to challenge its focus on relevant parts of the text
without altering the final answer. We use GPT4o to generate these RHs, leveraging the efficiency and consistency of LLMs compared
to human generation. We prompt GPT-4o
with: “Given the statement: {context}, gen_erate a single Red Herring either before, after,_
_or within the original text to challenge the_
_LLMs while keeping the original text and final_
_answer intact”._
-----
- CheckList: This perturbation involves incorporating URLs, social media handles, or other
irrelevant elements into the text. For exam[ple, embedding “@newswire” or “http://dw.](http://dw.com)
[com” within a sentence assesses the model’s](http://dw.com)
capability to manage such elements in context
without being misled by their presence. We
generate 100 random URLs and handles, with
a subset selected to be inserted arbitrarily into
various parts of each sample’s context (see the sketch after this list).
- StressTest: This introduces logically redundant or repetitive phrases such as “and true
is true”, “and false is not true”, or “if one is
equal to one”. These phrases are inserted at
random positions within the original text. The
aim is to challenge models to maintain logical
consistency and manage semantic redundancy.
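A minimal sketch of the CheckList- and StressTest-style insertions described above: an irrelevant handle/URL or a redundant phrase is spliced into a random word boundary of the context. The candidate pools shown here are small examples, not the full lists used for RUPBench.

```python
import random

# Illustrative pools; RUPBench draws from 100 generated URLs/handles and a
# small set of redundant phrases such as the ones below.
CHECKLIST_ITEMS = ["@newswire", "http://dw.com"]
STRESSTEST_PHRASES = ["and true is true", "and false is not true", "if one is equal to one"]

def insert_distractor(text: str, distractor: str, rng: random.Random) -> str:
    """Splice a distractor into a random word boundary of the text."""
    words = text.split()
    pos = rng.randint(0, len(words))
    return " ".join(words[:pos] + [distractor] + words[pos:])

rng = random.Random(42)
context = "The quick brown fox jumps over the lazy dog."
print(insert_distractor(context, rng.choice(CHECKLIST_ITEMS), rng))
print(insert_distractor(context, rng.choice(STRESSTEST_PHRASES), rng))
```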
**3.3** **Expert Review**
After collecting the raw perturbed dataset, we conduct a human study involving ten human experts
with at least an undergraduate degree to review the
generated perturbations of each data sample, particularly the RHs, ensuring their quality and reliability. The experts evaluate whether the perturbations
significantly alter the context or introduce errors
that could mislead the models. If a perturbation is
deemed unreadable, the experts rewrite it to align
with the specific type. Their feedback is crucial for
maintaining the original meaning of the text while
effectively challenging the models. Any perturbations deemed implausible or overly disruptive are
revised based on their insights. A perturbed data
sample is considered acceptable without further
changes if it receives approval from at least 60% of
the experts (i.e., six out of ten).
**4** **Experiments**
In this section, we describe the experimental setup,
report overall performance, analyze robustness
from different perspectives, and perform error analysis to identify common errors in LLMs under original and perturbed texts.
**4.1** **Experimental Setup**
We evaluate several leading LLMs for RUPBench
on original and perturbed samples, including GPT-4o (gpt 4o, 2024), Llama3-8B-Instruct, Llama3-70B-Instruct (AI@Meta, 2024), Phi-3-mini-128k-Instruct, Phi-3-medium-128k-Instruct (Abdin et al., 2024), Gemma-2B-Instruct, and Gemma-7B-Instruct (Team et al., 2024). GPT-4o is accessed
through the OpenAI API, while the other models
are loaded from Hugging Face. For generating
model responses, we use greedy decoding (temperature = 0). Due to API cost constraints, we
randomly sample 300 instances per dataset (except
NumerSense), each with 10 variations (one raw
and nine perturbed). For MMLU, we sample 50
instances per subject. We utilize 5-shot Chain-ofThought prompting (Kojima et al., 2022) for arithmetic reasoning datasets, while applying 5-shot
standard prompting for the other datasets.
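For the open-source models, greedy decoding with Hugging Face Transformers could look like the sketch below; the model identifier, prompt placeholder, and generation length are illustrative and not taken from the paper's evaluation harness.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model id; any of the instruction-tuned checkpoints listed above could be used.
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"  # device_map needs `accelerate`
)

# The prompt would hold the 5-shot demonstrations followed by the (perturbed) query.
prompt = "Question: <few-shot demonstrations and query go here>\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Greedy decoding, equivalent to temperature = 0.
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```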
For evaluation metrics, we report the original performance using accuracy, suitable for the multiple-choice nature of most tasks. Additionally, following (Zhu et al., 2023), we report the Performance
Drop Rate (PDR) to measure the relative performance decline after adversarial perturbations. A
negative PDR indicates instances where perturbations can unexpectedly improve performance.
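The paper does not restate the PDR formula, but following the common formulation in Zhu et al. (2023), PDR can be computed as the relative accuracy drop; the sketch below illustrates this with numbers consistent with Table 2.

```python
def performance_drop_rate(acc_original: float, acc_perturbed: float) -> float:
    """PDR = (Acc_original - Acc_perturbed) / Acc_original; negative when perturbations help."""
    return (acc_original - acc_perturbed) / acc_original

# Example: 83.9% original accuracy with a 5.5% PDR implies roughly 79.3% perturbed accuracy.
print(round(performance_drop_rate(0.839, 0.793), 3))  # 0.055
```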
**4.2** **Results and Analysis**
We compare the performance of multiple LLMs
across all datasets, followed by a robustness analysis considering perturbation types, task types, and
models. Finally, we conduct an error analysis to
identify LLM weaknesses under perturbations.
**4.2.1** **Main Results**
We present the overall performance of various models on RUPBench reasoning datasets, comparing
original and perturbed samples. GPT-4o demonstrates the highest accuracy with an average of
83.9% and the lowest average PDR of 10.0%, indicating its strong robustness to adversarial perturbations. Among the open-source LLMs, Llama3-70B
performs exceptionally well with a relatively low
PDR of 11.5%. In contrast, the smallest model,
Gemma-2B, shows the lowest average accuracy of
42.7% and the highest PDR of 21.2%, highlighting
its susceptibility to perturbations.
In terms of datasets, CommonsenseQA presents
notable variability. Gemma-2B achieves only
45.6% accuracy with a substantial PDR of 28.5%,
whereas GPT-4o reaches 83.9% accuracy with a
significantly lower PDR of 5.5%. This trend is
consistent across most datasets, where larger models generally perform better and exhibit greater
robustness. For instance, in the GSM8K dataset,
GPT-4o achieves 94.1% accuracy with a PDR of
22.5%, compared to Gemma-2B’s 16.4% accuracy
-----
Table 2: Model performance on RUPBench, including raw and perturbed datasets. The results are averaged over
three runs. The numbers outside parentheses represent the accuracy (%) on the original data, while the numbers
within parentheses indicate the average PDR (%) across nine perturbations.
| Dataset | Gemma 2B | Phi-3-mini 3.8B | Gemma 7B | Llama3 8B | Phi-3-medium 14B | Llama3 70B | GPT-4o (>175B) |
|---|---|---|---|---|---|---|---|
| CommonsenseQA | 45.6 (28.5) | 75.8 (24.7) | 66.0 (24.1) | 73.5 (11.3) | 80.3 (18.4) | 80.7 (12.4) | 83.9 (5.5) |
| TRAM | 53.6 (20.2) | 79.4 (9.5) | 67.3 (21.1) | 78.8 (6.1) | 81.3 (10.6) | 82.8 (8.5) | 87.8 (7.8) |
| PIQA | 50.1 (1.1) | 79.5 (0.6) | 73.3 (0.3) | 81.3 (1.2) | 83.7 (0.9) | 82.1 (0.7) | 91.2 (0.5) |
| QASC | 61.4 (39.0) | 77.3 (18.4) | 67.1 (35.4) | 75.9 (17.3) | 75.3 (20.7) | 79.6 (16.9) | 92.6 (14.5) |
| Social IQA | 53.1 (8.7) | 70.3 (3.5) | 62.1 (5.3) | 70.4 (5.5) | 73.8 (6.2) | 74.1 (8.3) | 80.7 (8.8) |
| Cosmos QA | 52.4 (2.2) | 72.7 (5.6) | 64.0 (0.9) | 81.2 (3.6) | 82.9 (4.2) | 86.1 (6.5) | 88.6 (3.6) |
| NumerSense | 37.8 (86.3) | 66.4 (93.9) | 62.5 (53.3) | 64.8 (15.8) | 68.2 (84.3) | 69.5 (18.9) | 83.2 (20.8) |
| RiddleSense | 37.1 (24.9) | 58.5 (22.2) | 50.8 (20.9) | 64.1 (17.3) | 63.3 (20.3) | 70.7 (18.4) | 89.3 (16.7) |
| ETHICS | 40.8 (13.3) | 56.0 (7.7) | 61.7 (10.3) | 78.1 (12.3) | 69.2 (6.8) | 82.3 (11.8) | 94.7 (7.8) |
| GSM8K | 16.4 (49.8) | 70.3 (22.2) | 45.6 (40.5) | 76.7 (18.2) | 81.2 (26.7) | 85.9 (20.3) | 94.1 (22.5) |
| AQuA-RAT | 19.6 (-0.3) | 26.1 (6.2) | 30.1 (-2.0) | 38.7 (17.6) | 32.8 (9.8) | 41.5 (19.2) | 48.2 (12.3) |
| ReClor | 32.1 (10.4) | 62.0 (8.4) | 41.9 (9.3) | 63.1 (9.0) | 67.9 (13.2) | 69.5 (12.5) | 77.2 (8.9) |
| LogiQA2.0 | 42.8 (6.3) | 55.9 (5.9) | 51.4 (3.7) | 55.7 (5.5) | 58.3 (5.7) | 60.4 (7.0) | 72.8 (6.6) |
| ART | 57.3 (9.4) | 78.3 (8.8) | 68.8 (2.2) | 73.6 (1.1) | 79.8 (10.3) | 80.2 (1.8) | 87.1 (3.7) |
| MMLU | 40.5 (18.9) | 63.8 (6.3) | 62.5 (15.2) | 67.3 (7.7) | 76.8 (7.2) | 80.2 (9.3) | 87.6 (9.7) |
| **Average** | 42.7 (21.2) | 66.1 (16.3) | 58.3 (16.0) | 69.5 (10.0) | 71.6 (16.3) | 75.0 (11.5) | 83.9 (10.0) |
and 49.8% PDR.
Interestingly, models demonstrate varied responses to specific perturbations. The arithmetic
reasoning datasets GSM8K and AQuA-RAT show
mixed results, with AQuA-RAT experiencing negative PDRs for some models, such as -0.3% for
Gemma-2B and -2.0% for Gemma-7B, suggesting
that certain perturbations might inadvertently aid
performance in these tasks.
Overall, while the largest models like GPT-4o
exhibit robust performance with minimal PDRs,
smaller models like Gemma-2B and Phi-3-mini-3.8B struggle significantly more in challenging
datasets like NumerSense and GSM8K. This underscores the necessity for further advancements in
model robustness and the importance of evaluating
models on diverse and complex reasoning tasks.
**4.2.2** **Robustness Analysis**
We investigate robustness across nine perturbation types within three major categories (lexical, syntactic, and semantic) and the relationship between the robustness of reasoning data types and models.

**Perturbation Categories** Figure 2 displays the normalized PDR (a measure of robustness) for nine perturbation types, averaged across datasets and models. Lexical perturbations, particularly Leetspeak (16.3%) and typos (13.6%), result in high PDRs, likely due to the models’ reliance on precise word forms and spelling to understand context and meaning, making them highly sensitive to such variations. Syntactic perturbations, especially It-cleft (15.5%) and Wh-cleft (15.1%) constructions, also cause significant performance drops. Models may struggle with non-standard sentence structures that deviate from the syntactic patterns they are trained on, potentially confusing their parsing mechanisms and affecting comprehension. Finally, semantic perturbations like Red Herrings (10.2%) exhibit notable PDRs, indicating that introducing irrelevant information can distract and mislead the models.

[Figure 2 bar chart: normalized PDR (%) for the nine perturbation types (Leetspeak, It-cleft, Wh-cleft, typos, homophones, Red Herrings, CheckList, StressTest, compound variations), grouped into lexical, syntactic, and semantic categories.]

Figure 2: Normalized PDR (%) of nine perturbation types, averaged across datasets and models. Normalization scales each perturbation’s impact.

-----

[Figure 3 bar chart: average PDR (%) by dataset category (commonsense, arithmetic, logical, knowledge-intensive) for Gemma-2B, Phi-3-mini, Gemma-7B, Llama3-8B, Phi-3-medium, Llama3-70B, and GPT-4o.]

Figure 3: Average PDR (%) by dataset categories and models. Each bar represents the average PDR for a specific model across different dataset categories. Commonsense reasoning and arithmetic reasoning are generally more susceptible to perturbations. Additionally, larger models tend to be more robust to perturbations.

**Data Categories and Models** We further examine the impact of data categories and models on robustness through average PDR, as shown in Figure 3.
The results demonstrate that the small-size LLM
Gemma-2B is more susceptible to perturbations
compared to the other LLMs. As model size increases, there is a general trend towards improved
robustness, indicated by a decrease in PDR. Commonsense and arithmetic reasoning tasks are more
affected by perturbations, as evidenced by their
higher PDRs. This can be attributed to these tasks’
reliance on specific contextual knowledge and precise calculations, which are more easily disrupted.
Conversely, logical and knowledge-intensive reasoning tasks exhibit lower PDRs, likely due to their
structured nature and extensive training data, making them more resilient to perturbations.
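For readers who want to reproduce these aggregate numbers, the sketch below shows one way to compute the per-category averages of Figure 3 and the normalized per-perturbation values of Figure 2. It assumes PDR is the relative accuracy drop from original to perturbed samples (the formal definition is given earlier in the paper); the accuracy values and dataset-to-category mapping are placeholders, not RUPBench results.

```python
from collections import defaultdict

def pdr(acc_original: float, acc_perturbed: float) -> float:
    """Assumed definition: relative accuracy drop caused by a perturbation."""
    return (acc_original - acc_perturbed) / acc_original

# results[(model, dataset, perturbation)] = (original accuracy, perturbed accuracy).
# Placeholder numbers for illustration only.
results = {
    ("Gemma-2B", "GSM8K", "Typos"): (0.18, 0.12),
    ("GPT-4o", "GSM8K", "Typos"): (0.95, 0.91),
    ("Gemma-2B", "CommonsenseQA", "Leetspeak"): (0.45, 0.30),
}
category = {"GSM8K": "Arithmetic", "CommonsenseQA": "Commonsense"}  # dataset -> task type

# Average PDR per (model, dataset category), as plotted in Figure 3.
per_model_category = defaultdict(list)
for (model, dataset, _), (orig, pert) in results.items():
    per_model_category[(model, category[dataset])].append(pdr(orig, pert))
figure3 = {k: 100 * sum(v) / len(v) for k, v in per_model_category.items()}

# Per-perturbation PDR averaged over datasets and models, then rescaled so the
# nine values are comparable (one plausible reading of "normalized"), as in Figure 2.
per_perturbation = defaultdict(list)
for (_, _, perturbation), (orig, pert) in results.items():
    per_perturbation[perturbation].append(pdr(orig, pert))
means = {p: sum(v) / len(v) for p, v in per_perturbation.items()}
total = sum(means.values())
figure2 = {p: 100 * m / total for p, m in means.items()}

print(figure3)
print(figure2)
```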
**4.2.3** **Error Analysis**
We provide a detailed examination of the errors
encountered by LLMs. Through manual inspection of incorrect predictions under perturbations,
we find that in commonsense reasoning, errors often involve context misinterpretation (32.7%) and
literal interpretation (28.2%), exacerbated by perturbations that introduce ambiguities or misleading
details. In arithmetic reasoning, the most common
mistakes are calculation errors (35.9%) and misunderstandings of word problems (28.4%), amplified
by perturbations that alter problem wording. Logical reasoning errors typically include faulty deductions (30.7%) and inconsistent reasoning (27.0%),
often due to syntactic perturbations that disrupt
the logical flow. In knowledge-intensive reasoning,
the primary issues are knowledge gaps (40.3%)
and concept confusion (26.9%), with semantic per
turbations introducing irrelevant or contradictory
information that challenges the model’s knowledge
base. This analysis highlights specific challenges
posed by different perturbation types, emphasizing
the need for targeted strategies to enhance LLM
robustness. More details on each error type and
their proportions under different reasoning tasks
can be found in Appendix B.
**5** **Discussion**
Investigating robustness is essential for ensuring
the reliable use of LLMs. In this work, we introduce RUPBench, a comprehensive benchmark that
incorporates 15 reasoning datasets with nine general perturbations, covering lexical, syntactic, and
semantic challenges for evaluating LLM robustness. Our study reveals significant variability in
the robustness of different LLMs across various
reasoning tasks. Generally, larger models tend to
be less susceptible to perturbations. Additionally,
LLMs are more vulnerable to lexical and syntactic perturbations. They exhibit varying levels of
resilience across different types of reasoning tasks,
highlighting the influence of data nature on model
robustness. Finally, we identify error patterns that
help understand the inherent weaknesses in LLMs
and provide direction for targeted improvements.
For future work, we will incorporate more challenging and diverse perturbation types to simulate real-world adversarial inputs. Additionally,
integrating domain-specific datasets and perturbations can provide deeper insights into model performance in specialized fields such as healthcare,
-----
legal, and finance. Finally, we will continuously
update RUPBench with emerging datasets and perturbations to ensure rigorous LLM robustness evaluation for the community.
**6** **Limitations**
We acknowledge several limitations in our study.
First, our evaluation is performed on a subset of
data samples, which may not fully capture the comprehensive robustness of LLMs. Second, although
our benchmark includes diverse datasets, perturbations, and models, it is impractical to encompass
all possible LLMs, datasets, and adversarial perturbations due to computational constraints. Third,
we do not explore a sufficient range of prompting methods, which can be crucial for assessing LLMs' general performance and robustness. Lastly, our use of
textual questions may not entirely reflect the robustness capabilities of LLMs, as real-world scenarios
often involve multimodal cues such as images and
videos. Future research could extend similar evaluation pipelines to multimodal LLMs to provide a
more comprehensive assessment.
**References**
Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan,
Jyoti Aneja, Ahmed Awadallah, Hany Awadalla,
Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat Behl, et al. 2024. Phi-3 technical report: A highly
capable language model locally on your phone. arXiv
_preprint arXiv:2404.14219._
[AI@Meta. 2024. Llama 3 model card.](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md)
Chandra Bhagavatula, Ronan Le Bras, Chaitanya
Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Wen-tau Yih, and Yejin
Choi. 2019. Abductive commonsense reasoning. In
_International Conference on Learning Representa-_
_tions._
Steven Bird, Ewan Klein, and Edward Loper. 2009. Nat_ural language processing with Python: analyzing text_
_with the natural language toolkit. " O’Reilly Media,_
Inc.".
Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi,
et al. 2020. Piqa: Reasoning about physical commonsense in natural language. In Proceedings of the
_AAAI conference on artificial intelligence, volume 34,_
pages 7432–7439.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, et al. 2020. Language models are few-shot
learners. Advances in neural information processing
_systems, 33:1877–1901._
Inyoung Cheong, King Xia, KJ Kevin Feng, Quan Ze
Chen, and Amy X Zhang. 2024. (a) i am not a lawyer,
but...: Engaging legal experts towards responsible llm
policies for legal advice. In The 2024 ACM Confer_ence on Fairness, Accountability, and Transparency,_
pages 2454–2469.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, et al. 2021. Training verifiers to solve math
word problems. arXiv preprint arXiv:2110.14168.
Liang Ding, Qihuang Zhong, Li Shen, Xuebo Liu, Min
Zhang, Yuanxin Ouyang, Dacheng Tao, et al. 2023.
Towards making the most of chatgpt for machine
translation. In The 2023 Conference on Empirical
_Methods in Natural Language Processing._
Guanting Dong, Jinxu Zhao, Tingfeng Hui, Daichi Guo,
Wenlong Wang, Boqi Feng, Yueyan Qiu, Zhuoma
Gongque, Keqing He, Zechen Wang, et al. 2023. Revisit input perturbation problems for llms: A unified
robustness evaluation framework for noisy slot filling
task. In CCF International Conference on Natural
_Language Processing and Chinese Computing, pages_
682–694. Springer.
Aysan Esmradi, Daniel Wankit Yip, and Chun Fai Chan.
2023. A comprehensive survey of attack techniques,
implementation, and mitigation strategies in large
language models. In International Conference on
_Ubiquitous Security, pages 76–95. Springer._
Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach,
Hal Daumé Iii, and Kate Crawford. 2021. Datasheets
for datasets. Communications of the ACM, 64(12):86–
92.
[gpt 4o. 2024. Hello gpt-4o. https://openai.com/](https://openai.com/index/hello-gpt-4o/)
[index/hello-gpt-4o/.](https://openai.com/index/hello-gpt-4o/)
Dan Hendrycks, Collin Burns, Steven Basart, Andrew
Critch, Jerry Li, Dawn Song, and Jacob Steinhardt.
2020a. Aligning ai with shared human values. In In_ternational Conference on Learning Representations._
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou,
Mantas Mazeika, Dawn Song, and Jacob Steinhardt.
2020b. Measuring massive multitask language understanding. In International Conference on Learning
_Representations._
M Honnibal and I Montani. 2017. spacy 2: Natural language understanding with bloom embeddings, convolutional neural networks and incremental parsing.
neural machine translation. In Proceedings of the As_sociation for Computational Linguistics (ACL), pages_
688–697.
Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and
Yejin Choi. 2019. Cosmos qa: Machine reading comprehension with contextual commonsense reasoning.
In Proceedings of the 2019 Conference on Empirical
_Methods in Natural Language Processing and the 9th_
-----
_International Joint Conference on Natural Language_
_Processing (EMNLP-IJCNLP), pages 2391–2401._
Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina
Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In
_Proceedings of NAACL-HLT, pages 4171–4186._
Tushar Khot, Peter Clark, Michal Guerquin, Peter
Jansen, and Ashish Sabharwal. 2020. Qasc: A
dataset for question answering via sentence composition. In Proceedings of the AAAI Conference on
_Artificial Intelligence, volume 34, pages 8082–8090._
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in
_neural information processing systems, 35:22199–_
22213.
Saydulu Kolasani. 2023. Optimizing natural language
processing, large language models (llms) for efficient
customer service, and hyper-personalization to enable sustainable growth and revenue. Transactions
_on Latest Trends in Artificial Intelligence, 4(4)._
Junyi Li, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun
Nie, and Ji-Rong Wen. 2023. Halueval: A largescale hallucination evaluation benchmark for large
language models. In Proceedings of the 2023 Con_ference on Empirical Methods in Natural Language_
_Processing, pages 6449–6464._
Bill Yuchen Lin, Seyeon Lee, Rahul Khanna, and Xiang Ren. 2020. Birds have four legs?! numersense:
Probing numerical commonsense knowledge of pretrained language models. In Proceedings of the 2020
_Conference on Empirical Methods in Natural Lan-_
_guage Processing (EMNLP), pages 6862–6868._
Bill Yuchen Lin, Ziyi Wu, Yichi Yang, Dong-Ho Lee,
and Xiang Ren. 2021. Riddlesense: Reasoning about
riddle questions featuring linguistic creativity and
commonsense knowledge. In Findings of the Associ_ation for Computational Linguistics: ACL-IJCNLP_
_2021, pages 1504–1515._
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word
problems. In Proceedings of the 55th Annual Meet_ing of the Association for Computational Linguistics_
_(Volume 1: Long Papers), pages 158–167._
Hanmeng Liu, Jian Liu, Leyang Cui, Zhiyang Teng, Nan
Duan, Ming Zhou, and Yue Zhang. 2023. Logiqa
2.0—an improved dataset for logical reasoning in
natural language understanding. IEEE/ACM Trans_actions on Audio, Speech, and Language Processing._
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis,
Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
Md Saef Ullah Miah, Md Mohsin Kabir, Talha Bin Sarwar, Mejdl Safran, Sultan Alfarhood, and MF Mridha.
2024. A multimodal approach to cross-lingual sentiment analysis with ensemble of transformer and llm.
_Scientific Reports, 14(1):9603._
Aakanksha Naik, Abhilasha Ravichander, Norman
Sadeh, Carolyn Rose, and Graham Neubig. 2018.
Stress test evaluation for natural language inference.
In Proceedings of the 27th International Conference
_on Computational Linguistics, pages 2340–2353._
Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin,
and Sameer Singh. 2020. Beyond accuracy: Behavioral testing of nlp models with checklist. In
_Proceedings of the 58th Annual Meeting of the Asso-_
_ciation for Computational Linguistics. Association_
for Computational Linguistics.
Maarten Sap, Hannah Rashkin, Derek Chen, Ronan
Le Bras, and Yejin Choi. 2019. Social iqa: Commonsense reasoning about social interactions. In
_Proceedings of the 2019 Conference on Empirical_
_Methods in Natural Language Processing and the 9th_
_International Joint Conference on Natural Language_
_Processing (EMNLP-IJCNLP), pages 4463–4473._
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao,
Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch,
Adam R Brown, Adam Santoro, Aditya Gupta, Adrià
Garriga-Alonso, et al. 2023. Beyond the imitation
game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine
_Learning Research._
Hao Sun, Zhexin Zhang, Jiawen Deng, Jiale Cheng,
and Minlie Huang. 2023. Safety assessment of
chinese large language models. _arXiv preprint_
_arXiv:2304.10436._
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and
Jonathan Berant. 2019. Commonsenseqa: A question
answering challenge targeting commonsense knowledge. In Proceedings of NAACL-HLT, pages 4149–
4158.
Gemini Team, Rohan Anil, Sebastian Borgeaud,
Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu,
Radu Soricut, Johan Schalkwyk, Andrew M Dai,
Anja Hauth, et al. 2023. Gemini: a family of
highly capable multimodal models. arXiv preprint
_arXiv:2312.11805._
Gemma Team, Thomas Mesnard, Cassidy Hardin,
Robert Dadashi, Surya Bhupatiraju, Shreya Pathak,
Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale,
Juliette Love, et al. 2024. Gemma: Open models
based on gemini research and technology. _arXiv_
_preprint arXiv:2403.08295._
Yixin Wan, George Pu, Jiao Sun, Aparna Garimella,
Kai-Wei Chang, and Nanyun Peng. 2023. “kelly is a
warm person, joseph is a role model”: Gender biases
in llm-generated reference letters. In The 2023 Con_ference on Empirical Methods in Natural Language_
_Processing._
-----
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy,
and Samuel Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. Advances in neural information
_processing systems, 32._
Alex Wang, Amanpreet Singh, Julian Michael, Felix
Hill, Omer Levy, and Samuel R Bowman. 2018.
Glue: A multi-task benchmark and analysis platform
for natural language understanding. In International
_Conference on Learning Representations._
Boshi Wang, Xiang Yue, and Huan Sun. 2023a. Can
chatgpt defend its belief in truth? evaluating llm
reasoning via debate. In Findings of the Association
_for Computational Linguistics: EMNLP 2023, pages_
11865–11881.
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie,
Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi
Xiong, Ritik Dutta, Rylan Schaeffer, et al. 2024a.
Decodingtrust: A comprehensive assessment of trustworthiness in gpt models. Advances in Neural Infor_mation Processing Systems, 36._
Boxin Wang, Chejian Xu, Shuohang Wang, Zhe Gan,
Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah,
and Bo Li. 2021. Adversarial glue: A multi-task
benchmark for robustness evaluation of language
models. In Thirty-fifth Conference on Neural In_formation Processing Systems Datasets and Bench-_
_marks Track (Round 2)._
Jindong Wang, HU Xixu, Wenxin Hou, Hao Chen,
Runkai Zheng, Yidong Wang, Linyi Yang, Wei Ye,
Haojun Huang, Xiubo Geng, et al. 2023b. On the
robustness of chatgpt: An adversarial and out-ofdistribution perspective. In ICLR 2023 Workshop
_on Trustworthy and Reliable Large-Scale Machine_
_Learning Models._
Yuqing Wang, Malvika Pillai, Yun Zhao, Catherine
Curtin, and Tina Hernandez-Boussard. 2024b.
Fairehr-clp: Towards fairness-aware clinical
predictions with contrastive learning in multimodal electronic health records. _arXiv preprint_
_arXiv:2402.00955._
Yuqing Wang, Prashanth Vijayaraghavan, and Ehsan Degan. 2023c. Prominet: Prototype-based multi-view
network for interpretable email response prediction.
In Proceedings of the 2023 Conference on Empirical
_Methods in Natural Language Processing: Industry_
_Track, pages 202–215._
Yuqing Wang and Yun Zhao. 2023a. Gemini in reasoning: Unveiling commonsense in multimodal large
language models. arXiv preprint arXiv:2312.17661.
Yuqing Wang and Yun Zhao. 2023b. Metacognitive
prompting improves understanding in large language
models. arXiv preprint arXiv:2308.05342.
Yuqing Wang and Yun Zhao. 2023c. Tram: Benchmarking temporal reasoning for large language models.
_arXiv preprint arXiv:2310.00835._
Yuqing Wang, Yun Zhao, Rachael Callcut, and Linda
Petzold. 2022a. Integrating physiological time series
and clinical notes with transformer for early prediction of sepsis. arXiv preprint arXiv:2203.14469.
Yuqing Wang, Yun Zhao, and Linda Petzold. 2022b.
Enhancing transformer efficiency for multivariate time series classification. _arXiv preprint_
_arXiv:2203.14472._
Yuqing Wang, Yun Zhao, and Linda Petzold. 2023d.
Are large language models ready for healthcare? a
comparative study on clinical language understanding. In Machine Learning for Healthcare Conference,
pages 804–823. PMLR.
Yuqing Wang, Yun Zhao, and Linda Petzold. 2024c.
An empirical study on the robustness of the segment
anything model (sam). Pattern Recognition, page
110685.
Alexander Wei, Nika Haghtalab, and Jacob Steinhardt.
2024. Jailbroken: How does llm safety training fail?
_Advances in Neural Information Processing Systems,_
36.
Haotian Xia, Zhengbang Yang, Yuqing Wang, Rhys
Tracy, Yun Zhao, Dongdong Huang, Zezhi Chen,
Yan Zhu, Yuan-fang Wang, and Weining Shen.
2024. Sportqa: A benchmark for sports understanding in large language models. arXiv preprint
_arXiv:2402.15862._
Weihao Yu, Zihang Jiang, Yanfei Dong, and Jiashi Feng.
2019. Reclor: A reading comprehension dataset requiring logical reasoning. In International Confer_ence on Learning Representations._
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali
Farhadi, and Yejin Choi. 2019. Hellaswag: Can a
machine really finish your sentence? In Proceedings
_of the 57th Annual Meeting of the Association for_
_Computational Linguistics, pages 4791–4800._
Yun Zhao, Yuqing Wang, Junfeng Liu, Haotian Xia,
Zhenni Xu, Qinghang Hong, Zhiyang Zhou, and
Linda Petzold. 2021. Empirical quantitative analysis of covid-19 forecasting models. In 2021 In_ternational Conference on Data Mining Workshops_
_(ICDMW), pages 517–526. IEEE._
Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zeek Wang,
Hao Chen, Yidong Wang, Linyi Yang, Wei Ye,
Neil Zhenqiang Gong, Yue Zhang, et al. 2023.
Promptbench: Towards evaluating the robustness of
large language models on adversarial prompts. arXiv
_preprint arXiv:2306.04528._
Terry Yue Zhuo, Zhuang Li, Yujin Huang, Fatemeh
Shiri, Weiqing Wang, Gholamreza Haffari, and YuanFang Li. 2023. On robustness of prompt-based semantic parsing with large pre-trained language model:
An empirical study on codex. In Proceedings of the
_17th Conference of the European Chapter of the As-_
_sociation for Computational Linguistics, pages 1090–_
1102.
-----
**A** **RUPBench Examples**
We present RUPBench examples with nine perturbation types, covering lexical, syntactic, and
semantic-level changes, in Table 3.
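As a rough illustration of how the lexical perturbations shown in Table 3 can be generated, a minimal sketch is given below; the Leetspeak substitution table and corruption rates are assumptions for illustration, not the exact rules used to build RUPBench, whose outputs are additionally reviewed by human experts.

```python
import random

# Partial Leetspeak substitution table (illustrative, not the benchmark's exact mapping).
LEET = {"a": "4", "e": "3", "i": "!", "o": "0", "s": "$", "t": "7"}

def leetspeak(text: str, rate: float = 0.5, seed: int = 0) -> str:
    """Replace a fraction of mappable characters with Leetspeak symbols."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        if ch.lower() in LEET and rng.random() < rate:
            out.append(LEET[ch.lower()])
        else:
            out.append(ch)
    return "".join(out)

def add_typos(text: str, rate: float = 0.2, seed: int = 0) -> str:
    """Introduce character-level typos by swapping adjacent characters in some words."""
    rng = random.Random(seed)
    words = text.split()
    for i, w in enumerate(words):
        if len(w) > 3 and rng.random() < rate:
            j = rng.randrange(len(w) - 1)
            words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]
    return " ".join(words)

print(leetspeak("Where do apples form on an apple tree?"))
print(add_typos("How to finish wood table after pictures have been glued on."))
```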
**B** **Error Types**
Table 4 illustrates the major error types in LLMs
for different reasoning tasks under perturbations.
For commonsense reasoning tasks, errors often
include context misinterpretation (32.7%), where
the model fails to grasp the overall context, leading
to incorrect conclusions. For example, given the
statement “John went to the bank to deposit his paycheck”, the model might incorrectly assume “bank”
refers to the side of a river rather than a financial
institution. Literalism (28.2%) is another common
error, where the model interprets idiomatic or figurative language literally, resulting in incorrect responses. An example is misinterpreting “kick the
bucket” as physically kicking a bucket instead of
understanding it as an idiom for dying. Additionally, reliance on surface patterns (23.8%) occurs
when the model focuses on superficial text features
rather than underlying meanings, such as recognizing “dog" and “bark" but failing to understand
that “bark” refers to the sound made by a dog. Ignored details (15.3%) represent instances where the
model overlooks crucial information, significantly
impacting the answer. For instance, it might miss
the importance of “only” in “She only eats vegetables” leading to incorrect dietary assumptions.
In arithmetic reasoning, calculation mistakes
(35.9%) are the most frequent errors, where the
model makes errors in mathematical computations,
such as adding 5 + 7 and incorrectly arriving at
11. Word misunderstandings (28.4%) occur when
the model misinterprets the problem’s wording,
leading to incorrect problem-solving steps. For
example, it might misinterpret “double” in “double the number” as simply repeating the number
rather than multiplying it by two. Errors in logical
steps (25.8%) involve incorrect or missing steps
in the solution process, such as skipping a step in
a multi-step algebra problem. Unit errors (9.9%)
arise when the model confuses or mishandles units
of measurement, such as mixing up centimeters
and inches, affecting the accuracy of the solution.
For logical reasoning tasks, faulty deduction
(30.7%) is a common error, where the model draws
incorrect conclusions from the given premises due
to flawed reasoning. For instance, given “All hu
mans are mortal. Socrates is a human”, the model
might incorrectly conclude that “Socrates is not
mortal”. Inconsistency (27.0%) occurs when the
model’s reasoning is not logically coherent, such
as providing contradictory answers to similar questions. Wrong assumptions (23.9%) involve the
model making incorrect initial assumptions that
lead to errors in the logical process, like assuming all birds can fly when solving a problem about
penguins. Connective misuse (18.4%) refers to
incorrect use of logical connectors, such as misinterpreting “if” and “only if”, which disrupts the
logical flow of the argument.
In knowledge-intensive reasoning, the primary
issue is knowledge gaps (40.3%), where the model
lacks the necessary background information to answer correctly, indicating limitations in the model’s
training data. For instance, it might not know that
“Einstein developed the theory of relativity". Concept confusion (26.9%) involves the model mixing
up related but distinct concepts, leading to incorrect
answers, such as confusing “mitosis” and “meiosis”
in a biology question. Fact errors (21.3%) occur
when the model recalls or generates incorrect factual information, like stating that “Albert Einstein
won the Nobel Prize in Chemistry for his discovery
of the photoelectric effect”. Data misuse (11.5%)
happens when the model incorrectly applies relevant data, leading to erroneous conclusions, such as
using outdated statistics to answer a current events
question, highlighting challenges in the model’s
data integration capabilities.
**C** **Datasheet**
We provide the datasheet for RUPBench following (Gebru et al., 2021).
**OVERVIEW**
**Motivation and Intended Uses.**
1. What are the intended purposes for this benchmark?
The intended purposes of RUPBench are to systematically evaluate the robustness of large language
models (LLMs) across a diverse set of reasoning
tasks and to provide insights into their performance
under various types of textual perturbations. By
offering a comprehensive benchmark, RUPBench
aims to help researchers and practitioners identify
and address specific weaknesses in LLMs, thereby
enhancing their reliability and effectiveness in realworld applications.
-----
Table 3: Examples of RUPBench for each perturbation type. OS (Original Sentence) and PS (Perturbed Sentence)
are presented, with major changes highlighted in red and blue.
- **CommonsenseQA (Homophone).** OS: Where do apples form on an apple tree? PS: Where deux apple's form on an appel tree?
- **PIQA (Typo).** OS: How to finish wood table after pictures have been glued on. PS: How tV funish womod table after pictures have beedn gOlued on.
- **Social IQA (Leetspeak).** OS: Robin had been away for two weeks on his honeymoon. Cameron picked him up on his return home. PS: Robin had been away for two weeks 0/ his honeymoon. Cameron |D!<|<€|) him up ()/ his return home.
- **TRAM (It-cleft).** OS: Several tenants blame other neighbors as perpetrators of the rift, however. How long has there been a rift between neighbors? PS: It was several tenants who blame other neighbors as perpetrators of the rift, however. How long has there been a rift between neighbors?
- **ART (Wh-cleft).** OS: Anna was making a world atlas. Then she colored in her atlas. PS: What Anna was doing was making a world atlas. What she did next was color in her atlas.
- **RiddleSense (Compound Variation).** OS: What is always slow to come, but never actually happens? PS: If What is always slow to come,, but never actually happens ?
- **GSM8K (Red Herrings).** OS: James delivers 600 newspapers in a day. He delivers 198 newspapers to District A, some to District B and 209 newspapers to District C. How many newspapers does he deliver to District B? PS: James, who wakes up at 5 am every morning, delivers 600 newspapers in a day. He delivers 198 newspapers to District A, some to District B, and 209 newspapers to District C. On Sundays, he also delivers a special magazine to each house. How many newspapers does he deliver to District B?
- **NumerSense (CheckList).** OS: boeing and lockheed are <mask> aeronautics companies. PS: $https://github.com$ $http://huffpost.com$ boeing $https://medium.com/writer$ $http://huffpost.com$ and $tech_updates$ lockheed are <mask> aeronautics companies.
- **QASC (StressTest).** OS: Breaking complex chemicals into simple ones in humans occur in what location? PS: Breaking complex chemicals into simple ones in humans occur in what location? and false is not true and fire is hot and the sky is blue if gravity pulls objects down if one is equal to one.
2. Was it designed to address a specific task or fill
a particular gap in research or application?
Yes, RUPBench was specifically designed to fill a
gap in the evaluation of LLMs’ robustness. While
existing benchmarks often focus on restricted tasks
or types of perturbations, RUPBench provides a
more holistic framework that encompasses a wide
range of reasoning tasks (commonsense, arithmetic,
logical, and knowledge-intensive) and three major
categories of textual perturbations (lexical, syntactic, and semantic). This allows for a more nuanced
understanding of how LLMs perform under various
adversarial conditions, addressing the need for a
rigorous and multifaceted robustness evaluation.
**Limitations and Inappropriate Uses.**
3. Are there any specific tasks or applications for
which this benchmark should not be used?
RUPBench is specifically designed to evaluate the
robustness of LLMs in reasoning tasks under var
ious textual perturbations. It is not suitable for
tasks such as natural language generation, summarization, or translation. Additionally, it is not designed for evaluating LLMs in highly specialized or
domain-specific applications outside the scope of
the included datasets, such as biomedical text analysis or highly technical legal document processing,
unless those fields are represented in the included
datasets and perturbations. The benchmark is also
not intended for use in evaluating non-textual data
or multimodal tasks that combine text with other
data types, such as images or audio.
**DETAILS**
**Composition.**
4. What do the instances that comprise the benchmark represent?
The instances in RUPBench represent various reasoning tasks, specifically designed to test the robustness of LLMs. Each instance includes a
reasoning question or problem from one of the
-----
Table 4: Distribution of major error types in LLMs by
reasoning tasks under perturbations. Con. Misinter.
refers to context misinterpretation, and Misunder. refers
to misunderstanding.
**Task** **Error Types** **Proportion (%)**
Con. Misinter. 32.7
Literalism 28.2
Commonsense
Surface Patterns 23.8
Ignored Details 15.3
Calculation Mistakes 35.9
Word Misunder. 28.4
Arithmetic
Logical Steps 25.8
Unit Errors 9.9
Faulty Deduction 30.7
Inconsistency 27.0
Logical
Wrong Assumptions 23.9
Connective Misuse 18.4
Knowledge Gaps 40.3
Knowledge- Concept Confusion 26.9
Intensive Fact Errors 21.3
Data Misuse 11.5
four major categories: commonsense (CommonsenseQA, TRAM, PIQA, QASC, Social IQA, Cosmos QA, NumerSense, RiddleSense, ETHICS),
arithmetic (GSM8K, AQuA-RAT), logical (ReClor, LogiQA2.0, ART), and knowledge-intensive
(MMLU) reasoning. These instances are further subjected to nine types of textual perturbations, covering lexical (homophones, typos, Leetspeak), syntactic (It-cleft, Wh-cleft, compound variation), and semantic levels (red herrings, CheckList, StressTest), to simulate real-world input variations and assess how well LLMs handle such adversarial conditions.
5. How many instances are there in total (of each
type, if appropriate)?
RUPBench consists of a total of 365,580 instances
(excluding the original instances). This includes
15 original datasets, each subjected to nine different types of perturbations. Specifically, the
number of perturbed samples for each dataset is
as follows: CommonsenseQA (10,989), TRAM
(29,610), PIQA (16,542), QASC (8,334), Social
IQA (17,586), Cosmos QA (26,865), NumerSense
(1,800), RiddleSense (9,189), ETHICS (36,676),
GSM8K (11,871), AQuA-RAT (4,572), ReClor
(4,500), LogiQA2.0 (47,800), ART (13,788), and
MMLU (126,378).
6. Does the benchmark contain all possible instances or is it a sample (not necessarily random)
of instances from a larger set?
The benchmark contains a curated selection of
instances from the available reasoning datasets,
specifically from the validation or test sets.
7. Is there a label or target associated with each
instance?
Yes, each instance in the benchmark has an associated label or target. These labels represent the correct answers or expected outputs for the reasoning
tasks, which are used to evaluate the performance
and robustness of the LLMs.
8. Is the benchmark self-contained, or does it link
to or otherwise rely on external resources (e.g.,
websites, tweets, other datasets)?
RUPBench is built upon existing datasets but is
self-contained. It includes perturbed versions of instances from various established reasoning datasets.
While the original datasets are sourced from external resources, RUPBench itself provides all necessary data for robustness evaluation without requiring access to the external sources. Users do not
need to access the original datasets separately, as
all relevant instances and their perturbations are
included within RUPBench.
9. Does the benchmark contain data that might be
considered sensitive in any way?
The benchmark does not contain any sensitive data.
**Data Quality.**
10. Is there any missing information in the benchmark?
Everything is included. No data is missing.
11. What errors, sources of noise, or redundancies
are important for benchmark users to be aware of?
Benchmark users should be aware of potential
sources of noise and errors, such as inconsistencies in how perturbations are applied to different instances, which may affect model performance evaluation. Some perturbations may introduce subtle
ambiguities or irrelevant information that could disproportionately impact certain types of reasoning
tasks, leading to variability in results. Additionally,
redundancies might arise if multiple perturbations
affect the same aspect of an instance, potentially
skewing the analysis. It’s also important to consider that manual inspection and correction of perturbations, while thorough, may still leave room for
subjective interpretations, which could introduce a
level of bias into the benchmark.
12. How was the data validated/verified?
The data in RUPBench was validated and verified
through a multi-step process. First, each source
dataset underwent a thorough review through sampling instances to ensure quality. Perturbations
were then generated and applied to these instances
following standardized procedures to maintain consistency across the benchmark.
To ensure the quality and reliability of the perturbed data, a human study was conducted involving ten experts with at least an undergraduate degree. These experts reviewed the generated perturbations to verify that they maintained human
readability while introducing the intended adversarial variations. If a perturbation was deemed
unreadable or significantly altered the context, the
experts would rewrite it to align with the specific
perturbation type.
Finally, any identified errors or inconsistencies
were corrected based on expert feedback, and a
consensus approach was used to ensure that at least
60% of experts approved each perturbed instance.
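A minimal sketch of this approval rule, assuming each perturbed instance carries one boolean accept/reject vote per expert (the data layout is hypothetical):

```python
def keep_instance(expert_votes: list[bool], threshold: float = 0.6) -> bool:
    """Retain a perturbed instance only if at least `threshold` of experts approved it."""
    return sum(expert_votes) / len(expert_votes) >= threshold

# Example: 7 of 10 experts approve, so the instance is kept.
print(keep_instance([True] * 7 + [False] * 3))  # True
```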
**Pre-Processing, Cleaning, and Labeling.**
13. What pre-processing, cleaning, and/or labeling
was done on this benchmark?
Original datasets underwent human reviews for
quality checks. Nine types of textual perturbations were systematically applied to each dataset,
covering lexical, syntactic, and semantic levels.
These perturbations were designed to simulate realworld input variations and test the robustness of the
models. In particular, for the arithmetic reasoning
datasets GSM8K and AQuA-RAT, no numerical
alterations were made to keep the final answers
unchanged. Finally, the perturbed samples were
reviewed by a panel of ten experts to ensure the
perturbations maintained readability and did not
introduce significant context alterations. Experts
corrected any perturbations that were unreadable
or inappropriate.
14. Provide a link to the code used to preprocess/clean/label the data, if available.
The code for data pre-processing is available on the
official GitHub page.
15. If there are any recommended data splits (e.g.,
training, validation, testing), please explain.
RUPBench is designed primarily for the evaluation
of LLM robustness and does not include predefined
splits for training, validation, or testing. Instead,
it provides a comprehensive set of perturbed instances intended for testing the performance of
already trained models. Users are encouraged to
use the entire dataset for evaluation purposes. If
specific splits are required for custom analyses or
experiments, users can create their own training,
validation, and testing splits as appropriate for their
specific needs. Alternatively, users can use the
training set of the source dataset, if available, and
validate the test samples in RUPBench.
**ADDITIONAL DETAILS ON**
**DISTRIBUTION AND MAINTENANCE**
**Distribution.**
16. Will the benchmark be distributed to third parties outside of the entity (e.g., company, institution,
organization) on behalf of which the dataset was
created?
Yes, the benchmark will be publicly available on
the Internet.
17. How will the benchmark be distributed (e.g.,
tarball on website, API, GitHub)?
The benchmark is distributed via the official
GitHub page.
18. When will the benchmark be distributed?
The benchmark will be released in June 2024.
**Maintenance.**
19. Who will be supporting/hosting/maintaining
the benchmark?
The first author of the RUPBench paper will support and maintain the benchmark.
20. Will the benchmark be updated (e.g., to correct labeling errors, add new instances, delete instances)?
Updates to test sets and error corrections will be
shared on the official GitHub page.
21. Will older versions of the benchmark continue
to be supported/hosted/maintained?
Given any updates to the benchmark, older versions
will be retained for consistency.
22. If others want to extend/augment/build
on/contribute to the benchmark, is there a mechanism for them to do so?
Anyone interested in incorporating fixes or extensions should reach out to the original authors of
RUPBench.
-----
| [
"Yuqing, Wang",
"Yun, Zhao"
] | 2024-06-16T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2406.11020 | https://arxiv.org/abs/2406.11020 | https://www.semanticscholar.org/paper/d979b2fd5ceb4b4ca2bc61adae19dd474133646c |
Rationales for Answers to Simple Math Word Problems Confuse Large Language Models | Recently, large language models (LLMs) have demonstrated breakthrough mathematical problem-solving capabilities in grade school math word problems (MWP). For example, on the MWP benchmark GSM8K, the accuracy of GPT-3.5-Turbo and MetaMath-70B reaches 80.80% and 82.30%, respectively. One question arises, does it mean that LLMs have truly mastered related mathematical problem-solving abilities? In this paper, by presenting two types of benchmarks, where MCGSM8K aims at selecting one correct solution from four solutions, while GSM8K-Judgement judges whether a solution to a given question is true or false, we demonstrate that the ability of most LLMs to evaluate the mathematical reasoning process of MWP is far from sufficient. To compensate for this issue, we propose hybrid supervised fine-tuning data from the training data of GSM8K, MCGSM8K, and GSM8K-Judgement, which significantly improves performance on the proposed reasoning process evaluation benchmarks. For example, fine-tuning improves the performance of LLaMA-2-13B from 33.51% to 70.89% on MCGSM8K. In conclusion, we experimentally demonstrate that most LLMs have limited ability to evaluate the mathematical reasoning process of MWP, which can be enhanced through fine-tuning. | null | # Rationales for Answers to Simple Math Word Problems Confuse Large Language Models
**Yidan Zhang[♠]** **Mingfeng Xue[♠]** **Dayiheng Liu[♠]** **Zhenan He[♠][∗]**
_♠College of Computer Science, Sichuan University_
[email protected]
**Abstract**
Recently, large language models (LLMs)
have demonstrated breakthrough mathematical
problem-solving capabilities in grade school
math word problems (MWP). For example,
on the MWP benchmark GSM8K, the accuracy of GPT-3.5-Turbo and MetaMath-70B
reaches 80.80% and 82.30%, respectively. One
question arises, does it mean that LLMs have
truly mastered related mathematical problem-solving abilities? In this paper, by presenting
two types of benchmarks, where MCGSM8K
aims at selecting one correct solution from four
solutions, while GSM8K-Judgement judges
whether a solution to a given question is true or
false, we demonstrate that the ability of most
LLMs to evaluate the mathematical reasoning
process of MWP is far from sufficient. To
compensate for this issue, we propose hybrid
supervised fine-tuning data from the training
data of GSM8K, MCGSM8K, and GSM8K-Judgement, which significantly improves performance on the proposed reasoning process evaluation benchmarks. For example, fine-tuning improves the performance of LLaMA-2-13B from 33.51% to 70.89% on MCGSM8K.
In conclusion, we experimentally demonstrate
that most LLMs have limited ability to evaluate
the mathematical reasoning process of MWP,
which can be enhanced through fine-tuning.
**1** **Introduction**
It is reported that general close-source large language models (LLMs) have demonstrated promising performance on several mathematical word
problems (MWP) benchmarks, e.g., GPT-4 (OpenAI, 2023) and GPT-3.5-Turbo (OpenAI, 2022)
achieving the accuracy of 92.00% and 80.80% on
grade school MWP benchmark GSM8K (Cobbe
et al., 2021a), respectively. With the development of prompt-based methods (Fu et al., 2023;
Wang et al., 2023a) and finetuning-based methods (Yu et al., 2023; Yue et al., 2023; Yuan
_∗_ Corresponding author.
et al., 2023), mathematical specialized LLMs
tuned on specific tasks also exhibit competitive
performance. For example, MetaMath-70B (Yu
et al., 2023), WizardMath-70B (Luo et al., 2023),
and MAmmoTH-70B (Yue et al., 2023) achieves
82.30%, 81.60%, and 76.90% on GSM8K, respectively. Now, one question arises, does the excellent
performance demonstrate that these LLMs truly
master related mathematical problem-solving abilities, such as the ability to evaluate the mathematical
reasoning process of MWP?
Intuitively, picking one correct solution from
possible solutions is easy for humans, as it just
requires evaluating the correctness of the reasoning process. In comparison, reasoning the answer
based on the open-formed question is difficult,
which requires analyzing the problem, step-bystep reasoning, and deriving the final result (Cobbe
et al., 2021b). Building upon this premise, we design a simple mathematical reasoning processes
evaluation benchmark, MCGSM8K [1] aiming at
choosing one correct solution from four options,
as shown in Figure 2. Then, we utilize a few-shot
(Chen et al., 2022b; Min et al., 2022) setup to test
the performance of typical general open-source
models LLaMA-2, general closed-source models
GPT-3.5-Turbo and GPT-4, as well as mathematical specialized models MetaMath and MAmmoTH
on it. However, our experimental results in Figure 1 and Table 1 reveal that most LLMs lag far
behind on MCGSM8K. For example, the accuracy of LLaMA-2-70B, MetaMath-70B, and GPT-3.5-Turbo drops from 56.80% to 38.29%, 82.30%
to 34.87%, and 80.80% to 40.56%, respectively.
Specifically, each solution of MCGSM8K contains
both the final answer and a step-by-step reasoning
process (rationale). We collect incorrect solutions
by regenerating solutions for each question in the
test set of GSM8K. To keep the quality and di
[1MCGSM8K is publicly available at https://github.](https://github.com/SCUZPP/MCGSM8K.git)
[com/SCUZPP/MCGSM8K.git](https://github.com/SCUZPP/MCGSM8K.git)
-----
Figure 1: Average few-shot testing accuracy by general open-source models (LLaMA-2 with 13B and 70B), mathematical specialized models (MetaMath with 13B and 70B, and MAmmoTH with 13B and 70B), general closed-source models (GPT-3.5-Turbo and GPT-4), and all tested models, on GSM8K, MCGSM8K, MCGSM8K-No-Rationale, MCGSM8K-2Options, GSM8K-Judgement-TPR, and GSM8K-Judgement-TNR.
versity of MCGSM8K, we use multiple advanced
LLMs, e.g., Qwen (Bai et al., 2023), LLaMA-2
(Touvron et al., 2023b), MetaMath (Yu et al., 2023),
and WizardLM (Xu et al., 2023), through few-shot
Chain-of-thought (CoT) (Wei et al., 2022) prompting to generate incorrect solutions.
To comprehensively investigate the performance
of LLMs on MCGSM8K, we progressively conduct the following experiments with a few-shot setting on three well-designed benchmarks as shown
in Figure 2. First of all, the main reason for the
poor performance of LLMs on MCGSM8K might
include a) the model’s inability to solve multiplechoice format questions, and b) the model’s inability to evaluate the reasoning process. To verify, we
propose a conventional multiple-choice-question
benchmark MCGSM8K-No-Rationale by removing the rationale and leaving only the final answer
for each option from MCGSM8K. The average
accuracy of all tested models on MCGSM8K-No-Rationale (63.96%) is 25.50% higher than that on MCGSM8K (41.59%) and close to that on GSM8K (68.98%). This result suggests that the poor performance stems from the model's difficulty in evaluating the reasoning process rather than in evaluating the final answer directly. Furthermore, we analyze
the performance of LLMs by reducing the difficulty of solving the problem in MCGSM8K to
half. Specifically, we remove any two incorrect
options for each question in MCGSM8K, resulting in MCGSM8K-2Options aiming at selecting
one correct solution from two. However, the average accuracy of all tested models on the twochoice-question benchmark MCGSM8K-2Options
is merely 1.17% higher than that on the openformed-question benchmark MCGSM8K (70.15%
vs. 68.98%). MetaMath-70B and LLaMA-270B achieve an accuracy of 66.94% and 64.22%,
which is merely 16.94% and 14.22% higher than
the random-chance accuracy of 50%, respectively.
This verifies the ability of most LLMs to evaluate the mathematical reasoning process is insufficient. Are LLMs insufficient to evaluate the correct
solution or the incorrect solution? To figure this
out, we propose a true-or-false-question benchmark
GSM8K-Judgement to directly judge the correctness of the solution to a given question. In general, it is easier for humans to identify incorrect
solutions than correct solutions, as the former only
requires identifying a certain incorrect step, while
the latter must ensure the correctness of all steps.
Experimental results on general open-source models show that the average True Negative rate (TNR)
is significantly better than the average True Positive rate (TPR), which is consistent with human
behavior. However, when it comes to mathemat
-----
ical specialized models and GPT-3.5-Turbo, the
situation is completely reversed.
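For clarity, TPR and TNR on GSM8K-Judgement are simply per-class accuracies, treating "True" (the given solution is correct) as the positive class. A minimal sketch, assuming gold labels and model judgments are available as booleans:

```python
def tpr_tnr(gold: list[bool], pred: list[bool]) -> tuple[float, float]:
    """TPR: fraction of correct solutions judged 'True'.
    TNR: fraction of incorrect solutions judged 'False'."""
    tp = sum(1 for g, p in zip(gold, pred) if g and p)
    tn = sum(1 for g, p in zip(gold, pred) if not g and not p)
    positives = sum(gold)
    negatives = len(gold) - positives
    return tp / positives, tn / negatives

gold = [True, True, False, False, False]   # whether each solution is actually correct
pred = [True, False, False, True, False]   # the model's True/False judgements
print(tpr_tnr(gold, pred))  # (0.5, 0.666...)
```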
Through the above experimental analysis, it can
be concluded that most LLMs have a poor ability
to evaluate the mathematical reasoning process of
MWP. Mathematical specialized models are mainly
fine-tuned on abundant correct solutions, which
greatly improves the ability to identify correct solutions while causing catastrophic forgetting in identifying incorrect solutions. In addition, we hypothesize that most LLMs only catch spurious signals in
specific datasets resulting in “solving” the datasets
while not mastering abilities related to mathematical problem-solving.
In this paper, we try to compensate for these
shortcomings by finetuning models on hybrid
training samples from GSM8K, MCGSM8K, and
GSM8K-Judgement. This contributes to enhancing the generalization ability to solve mathematical problems, mastering new data distribution on
the MCGSM8K and GSM8K-Judgement, identifying correct solutions, as well as learning to analyze and evaluate the reasoning process. Specifically, we use LLaMA-2-13B as the base model.
After fine-tuning, we observe a substantial improvement in accuracy with an increase of +37.38%
on MCGSM8K, +41.24% on TPR, +8.87% on
TNR, +7.43% on MCGSM8K-No-Rationale, and
+16.62% on GSM8K. The result demonstrates that
fine-tuning can greatly improve the mathematical
reasoning process evaluation ability of LLMs.
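A rough sketch of how such a hybrid fine-tuning set can be assembled from the three training sources is shown below; the field names and prompt templates are illustrative rather than the exact format used in our experiments.

```python
import random

def build_hybrid_sft(gsm8k, mcgsm8k, judgement, seed=0):
    """Merge the three training sources into one list of prompt/response pairs."""
    data = []
    for ex in gsm8k:        # open-formed: question -> rationale + final answer
        data.append({"prompt": ex["question"],
                     "response": f"{ex['rationale']}\nFinal Answer: {ex['answer']}"})
    for ex in mcgsm8k:      # multiple-choice: pick the correct solution
        opts = "\n".join(f"({k}) {v}" for k, v in ex["options"].items())
        data.append({"prompt": f"{ex['question']}\nOptions:\n{opts}",
                     "response": f"The correct option is ({ex['correct_option']})."})
    for ex in judgement:    # true/false: judge the given solution
        data.append({"prompt": f"{ex['question']} {ex['solution']}\nIs this solution correct?",
                     "response": "True" if ex["is_correct"] else "False"})
    random.Random(seed).shuffle(data)
    return data
```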
Our main contributions can be summarised as
follows:
- To explore whether LLMs have mastered
the ability to evaluate the mathematical reasoning process of MWP, we carefully create two kinds of benchmarks. The first category is multiple-choice questions, including MCGSM8K aiming at choosing one correct solution from four options, MCGSM8K2Options containing only two options, and
MCGSM8K-No-Rationale with only the final answer in each option. The second type
is true or false questions, including GSM8KJudgement to judge the correctness of the solution to a given question.
- We conduct experiments with typical general
open-source models, general closed-source
models, and mathematical specialized models
on the four benchmarks. The experimental
results reveal that existing LLMs except GPT4 have a poor ability to evaluate the mathematical reasoning process of MWP. Meanwhile, fine-tuning with only correct solutions
improves the performance in evaluating correct solutions, but leads to a huge performance
drop in evaluating incorrect solutions. Furthermore, we experimentally demonstrate that
these drawbacks of LLMs can be alleviated to
a certain extent through fine-tuning.
**2** **Related Work**
**Large Language Models. LLMs with billions of**
parameters trained on extensive large-scale corpora
of textual data have led to massive changes in the
field of AI. OpenAI’s GPT series (Brown et al.,
2020; OpenAI, 2023), which is the most representative general closed-source LLM, opens the era of
pre-trained LLMs, where a large number of prominent instances are launched one after another involving Anthropic’s Claude 2 (Bai et al., 2022),
Google’s PaLM (Chowdhery et al., 2023; Anil
et al., 2023), DeepMind’s Chinchilla (Hoffmann
et al., 2022), and Gopher (Rae et al., 2021). Subsequently, numerous general open-source LLMs
have been released, whose code and weight parameters are open to the public. Typical examples
include LLaMA (Touvron et al., 2023a,b), GLM130B (Zeng et al., 2023), OPT (Zhang et al., 2022),
Falcon (Penedo et al., 2023), and so on. Although
general closed-source LLMs, e.g., GPT-3.5, GPT-4,
and PaLM-2, have achieved considerable advancements in several MVP tasks such as GSM8K and
NumGLUE (Mishra et al., 2022), the performance
of general open-source LLMs is far from satisfactory.
**Large Language Models for Mathematical**
**Reasoning. Chain-of-thought (CoT) (Wei et al.,**
2022) reasoning by designing better prompts has
been proposed to generate step-by-step solutions
leading to improved performance in complex reasoning. To improve the mathematical reasoning
capabilities of general open-source LLMs, existing
methods focus on CoT prompting for augmenting
fine-tuning data. WizardMath (Luo et al., 2023)
utilizes few-shot CoT prompting to re-generate
solutions for GSM8K and MATH (Hendrycks
et al., 2021), then uses this data to construct SFT
data. MetaMath (Yu et al., 2023) aims to improve
finetuning-based methods by answer augmentation
and available mathematical questions bootstrap to
-----
**GSM8K**
**Question: Weng earns $12 an hour for babysitting. Yesterday, she just did 50 minutes of babysitting. How much did she earn?**
**Rationale: Weng earns 12/60 = $<<12/60=0.2>>0.2 per minute.**
Working 50 minutes, she earned 0.2*50 = $<<0.2*50=10>>10.
**Final Answer: 10**
**MCGSM8K**
**Question: Weng earns $12 an hour for babysitting. Yesterday, she just did 50 minutes of babysitting. How much did she earn?**
**Options:**
(A) 600. Rationale: Weng earns $12*50 = $600 for 50 minutes of babysitting.
(B) 2430. Rationale: Weng earns $12/hour * 60 minutes/hour = $720/hour for babysitting. So she earned 720/hour - 12/hour =
608/hour - 12/hour = 486/hour for babysitting. As she worked for 50 minutes, she earned 486/hour * 50 minutes = 2430.
(C) 10. Rationale: Weng earns 12/60 = 0.2 per minute. Working 50 minutes, she earned 0.2*50 = 10.
(D) 1. Rationale: In an hour, there are 60 minutes. Weng did babysitting for 50 minutes, so she just worked for 50 / 60 = 1/12 of
an hour. So she earned $12 * 1/12 = $1.
**Correct Option: C**
**MCGSM8K-2Options**
**Question: Weng earns $12 an hour for babysitting. Yesterday, she just did 50 minutes of babysitting. How much did she earn?**
**Options:**
(A) 600. Rationale: Weng earns $12*50 = $600 for 50 minutes of babysitting.
(B) 10. Rationale: Weng earns 12/60 = 0.2 per minute. Working 50 minutes, she earned 0.2*50 = 10.
**Correct Option: B**
**MCGSM8K-No-Rationale**
**Question: Weng earns $12 an hour for babysitting. Yesterday, she just did 50 minutes of babysitting. How much did she earn?**
**Options:**
(A) 600 (B) 2430 (C) 10 (D) 1
**Correct Option: C**
**GSM8K-Judgement**
**Statement: Weng earns $12 an hour for babysitting. Yesterday, she just did 50 minutes of babysitting. How much did she earn?**
10. Rationale: Weng earns 12/60 = 0.2 per minute. Working 50 minutes, she earned 0.2*50 =10.
**Answer: True**
Figure 2: Four example problems constructed from the original problem in GSM8K.
construct SFT data. RFT (Yuan et al., 2023) improves mathematical reasoning performance by
applying CoT prompts and Rejection Sampling
(RS) on SFT models to construct augmented solutions. MAmmoTH (Yue et al., 2023) utilizes
a unique hybrid of CoT and program-of-thought
(PoT) (Chen et al., 2022a) rationales to construct
augmented solutions for improving mathematical
problem-solving ability. As a result, these mathematical specialized models have surpassed previous
general open-source LLMs by a significant margin
in mathematical problem-solving.
**Large Language Models for Mathematical**
**Reasoning Process Evaluation. There are differ-**
ent ways to investigate the mathematical reasoning
process, e.g., scoring each step of the reasoning
process (Lightman et al., 2023), examining and
analyzing each step, as well as modifying the reasoning process based on analyses (An et al., 2023).
However, the above methods are mainly used to
improve reasoning capabilities. Lightman et al.
(2023) demonstrates that step-by-step verifying for
the reasoning process of LLMs can lead to bet
ter model performance in mathematical problemsolving. An et al. (2023) fine-tunes LLMs on
mistake-correction data pairs generated by GPT4 to improve their reasoning capabilities in solving math problems. Existing research rarely explores the mathematical reasoning process evaluation ability of LLMs. We aim to design two types of
benchmarks to test the ability of LLMs to evaluate
the mathematical reasoning process of MWP. To
achieve this goal, we design two direct and simple
benchmarks, i.e., choosing the correct reasoning
process from four candidates and judging whether
a given reasoning process is correct or not. Please
note that all existing mathematical reasoning benchmarks can be used to construct reasoning process
evaluation benchmarks by our proposed method.
Similar to our work, there are also studies focusing on the counter-intuitive behaviors of LLMs.
Berglund et al. (2023) studies the Reversal Curse of
LLMs, i.e., LLMs exhibit a basic failure in deducting from “A is B” to the reverse direction “B is A”.
Pezeshkpour and Hruschka (2023) aims to investigate the sensitivity of LLMs against options order
-----
of multiple-choice questions and recommends two
approaches to calibrate LLMs’ predictions including majority vote and multiple evidence calibration
(MEC).
**3** **MCGSM8K**
In this section, we describe in detail the process
of constructing the benchmark MCGSM8K, which
consists of multiple-choice questions with each
question from the original test set of GSM8K and
each option containing a solution. Our primary objective is to guarantee the reliability of MCGSM8K,
ensuring that correct options are indeed correct
and incorrect options are demonstrably wrong. To
achieve this, we derive correct options directly
from GSM8K, incorporating both the validated answer and its accompanying rationale, which are
manually annotated to ensure accuracy. Conversely,
incorrect options are generated by LLMs, comprising wrong answers distinct from the ground-truth
answer, each accompanied by a plausible rationale.
On the premise that the reliability of the dataset is
guaranteed, we further enhance the challenge and
difficulty of the dataset. Thus, we 1) use multiple
advanced open-source LLMs to generate numerous
candidate solutions with wrong answers, 2) maintain the diversity of incorrect option differences
based on Rouge-L (Lin, 2005) scores and k-means
cluster (Wong, 1979), and 3) ensure the confusability of the incorrect options with the correct option
based on similarity ranking.
**3.1** **Generation of Incorrect Solutions**
Given a question $q_i$, we utilize 8-shot CoT prompting to re-generate $N$ solutions $\{(r_i^j, a_i^j) : j = 1, \ldots, N\}$ with advanced open-source models including Qwen (Bai et al., 2023), LLaMA-2 (Touvron
et al., 2023b), MetaMath (Yu et al., 2023), and WizardLM (Xu et al., 2023), with sizes of 13B and 70B.
We use each model to generate 50 candidate solutions. The principles for selecting these specific
models are detailed in Appendix C. Specifically, a
question qi is appended to a few demonstrations,
then fed to an LLM to generate its answer $a_i^j$ along with a rationale $r_i^j$ step by step. In the generation process, we follow Wang et al. (2023b) to
adopt temperature sampling and set the temperature
as 0.7. Then, the generations whose final answer is wrong with respect to the ground-truth answer are collected to construct the incorrect solution set $\mathrm{Set}_{aug}$:
$$\mathrm{Set}_{aug} = \{(a_i^j, r_i^j) : a_i^j \neq a_i^{*};\; i = 1, \ldots, K;\; j = 1, \ldots, N\}, \tag{1}$$

where $K$ is the total number of questions and $a_i^{*}$ is the ground-truth answer for question $q_i$.
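A minimal sketch of this filtering step, assuming each sampled generation has already been parsed into a canonical final-answer string plus its rationale (the answer parsing itself is not shown):

```python
def build_incorrect_set(questions, ground_truth, generations):
    """Collect, for each question, the sampled (answer, rationale) pairs whose
    final answer differs from the ground-truth answer.

    questions:    list of K question strings
    ground_truth: list of K gold answers as canonical strings
    generations:  generations[i] is a list of (answer, rationale) pairs sampled for q_i
    """
    set_aug = {}
    for i, question in enumerate(questions):
        set_aug[question] = [(a, r) for (a, r) in generations[i] if a != ground_truth[i]]
    return set_aug
```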
**3.2** **Diversity and Quality**
To enlarge diversity, an incorrect solution to qi is
added to Setaug only when its ROUGE-L similarity
with any existing solutions to qi in Setaug is less
than 0.9, to effectively reconcile the retention quantity with the desired diversity of solutions. We also
remove the calculation annotations proposed in the
original answer rationale of GSM8K from each solution to facilitate formatting consistency. Invalid
solutions are identified and filtered out based on
heuristics, e.g., too-long or too-short solutions, solutions contain codes, and solutions do not end in
the specified format. Finally, the candidate solutions to qi are clustered into three categories using
_k-means clustering to ensure the diversity of solu-_
tions among each category. More details on data
quality and diversity are provided in Appendix D.
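A minimal sketch of the ROUGE-L de-duplication and k-means grouping described above is given below; `rouge_l(a, b)` and `embed(texts)` are assumed helpers (e.g., a ROUGE scorer and a sentence-embedding model), so this is an outline rather than the exact pipeline.

```python
from sklearn.cluster import KMeans

def dedup_and_cluster(solutions, rouge_l, embed, max_sim=0.9, n_clusters=3):
    """Keep an incorrect solution only if its ROUGE-L similarity to every retained
    solution is below `max_sim`, then cluster the survivors into `n_clusters` groups."""
    kept = []
    for sol in solutions:
        if all(rouge_l(sol, other) < max_sim for other in kept):
            kept.append(sol)
    if len(kept) < n_clusters:          # not enough distinct solutions to cluster
        return {0: kept}
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embed(kept))
    clusters = {c: [] for c in range(n_clusters)}
    for sol, c in zip(kept, labels):
        clusters[c].append(sol)
    return clusters
```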
**3.3** **Construction of Dataset**
We have designed three different selection ways to
avoid a single selection method causing a similar
distribution or text of the three incorrect options.
The ablation study on data construction is analyzed
in Appendix B. One way is picking the one with
the highest ROUGE-L similarity score to the correct option, which can ensure the confusability of
the incorrect options. Another way is picking out
the one with the lowest Perplexity (PPL) scored by
language model WizardLM-70B (Xu et al., 2023).
The lower the PPL, the solution is more natural and
more consistent with the model’s generation preferences. The last one is the random selection to keep
diversity. For a question qi, we select one solution
from each of the three clusters to construct three
incorrect options. At every election, we choose
one way in turn. Then, the three incorrect options
are combined with the correct one in a randomly
shuffled order. Finally, we combine the question,
options, and the correct option label into a multiplechoice question.
**4** **Experiments**
In this section, the core of our analysis is how
LLMs perform on the proposed benchmarks, what
-----
| Model | GSM8K | MCGSM8K | MCGSM8K-No-Rationale | MCGSM8K-2Options | GSM8K-Judgement TPR | GSM8K-Judgement TNR |
|---|---|---|---|---|---|---|
| *general open-source models* | | | | | | |
| LLaMA-2-13B | 28.70 | 33.51 | 34.34 | 62.62 | 36.85 | 75.26 |
| LLaMA-2-70B | 56.80 | 38.29 | 58.45 | 66.94 | 47.08 | 78.09 |
| AVG | 42.75 | 35.90 | 46.40 | 64.78 | 41.97 | 76.68 |
| *mathematical specialized models* | | | | | | |
| MetaMath-13B | 72.30 | 23.12 | 45.49 | 57.70 | 85.75 | 17.54 |
| MAmmoTH-13B | 62.00 | 35.25 | 55.34 | 58.91 | 59.21 | 48.17 |
| MetaMath-70B | 82.30 | 34.87 | 76.04 | 64.22 | 88.93 | 22.21 |
| MAmmoTH-70B | 76.90 | 44.58 | 68.69 | 81.20 | 94.01 | 23.48 |
| AVG | 73.38 | 34.46 | 61.39 | 65.51 | 81.98 | 27.85 |
| *general closed-source models* | | | | | | |
| GPT-3.5-Turbo | 80.80 | 40.56 | 79.68 | 75.06 | 88.48 | 47.69 |
| GPT-4 | 92.00 | 82.56 | 93.63 | 94.56 | 93.70 | 87.79 |
| AVG | 86.40 | 61.56 | 86.66 | 84.81 | 91.09 | 67.74 |
| AVG All | 68.98 | 41.59 | 63.96 | 70.15 | 74.25 | 50.03 |
| RC Acc | 0 | 25 | 25 | 50 | 50 | 50 |
Table 1: Comparison of testing accuracy on GSM8K and the proposed four benchmarks. To ensure equitable
evaluations, we report the scores of all models using the settings of few-shot in-context learning. RC acc is the
abbreviation of Random-Chance accuracy.
factors are associated with the model performance,
and whether the performance can be further improved through supervised fine-tuning, thus answering the question: do LLMs master the ability
to evaluate the mathematical reasoning process of
MWP?
**4.1** **Can LLMs Solve Simple Mathematical Reasoning Process Evaluation of MWP?**

To investigate this question, we propose a mathematical reasoning process evaluation benchmark, MCGSM8K (Figure 2), consisting of 1,319 samples, in which the task is to choose the correct solution from four candidates. We utilize the settings of few-shot in-context learning and CoT prompting, as shown in Figure 5 in Appendix E.
**4.1.1** **Models**
We evaluate the testing accuracy of several representative models: (i) general closed-source models GPT-4 (OpenAI, 2023) (we use gpt4-1106-preview) and GPT-3.5-Turbo (OpenAI, 2022); (ii) general open-source models: the current state-of-the-art LLaMA-2 (Touvron et al., 2023b) with two parameter sizes, 13B and 70B; and (iii) mathematical specialized models: MetaMath (Yu et al., 2023) and MAmmoTH (Yue et al., 2023) in sizes 13B and 70B, which specifically tune LLaMA-2 on mathematical reasoning datasets. MetaMath is tuned on mathematical reasoning data collected by bootstrapping mathematical questions and augmenting answers. MAmmoTH is trained on an instruction-tuning dataset compiled from 13 math datasets with a unique hybrid of CoT and PoT rationales. In all evaluation experiments, we set the temperature to zero for open-source and mathematical specialized models, following previous work (Yu et al., 2023; Yue et al., 2023), and to 0.2 for general closed-source models to generate quality answers.
**4.1.2** **Results**
In Table 1, we cite the GSM8K scores of all tested models from the MetaMath (Yu et al., 2023) and MAmmoTH (Yue et al., 2023) papers. General closed-source models and mathematical specialized models show promising performance on GSM8K; for example, the accuracies of GPT-3.5-Turbo and MetaMath-70B both exceed 80%, as shown in the GSM8K column of Table 1. These results exhibit the strong ability of most LLMs to
solve grade school math word problems.
However, all models except GPT-4 perform poorly on the simple MCGSM8K, scoring no higher than 44.58%, far below their performance on GSM8K. The accuracy of LLaMA-2-70B, MetaMath-70B, and GPT-3.5-Turbo drops from 56.80% to 38.29%, from 82.30% to 34.87%, and from 80.80% to 40.56%, respectively. To maximize accuracy, we test each model with and without CoT prompting. As illustrated in the second column of Table 4 in Appendix A, the average gap between the accuracies with and without CoT prompting is small (within 5%), underscoring the inability of most LLMs to solve problems from MCGSM8K. We have also tried different CoT prompts, as shown in Table 3 in Appendix A; specifically, we prompt the model to describe the task and explain the answer. The improvement brought by different prompts is less than 3%, and the best result is still much lower than that on GSM8K.
**Point 1: Although most LLMs can solve MWP to some degree, they have difficulty in evaluating the reasoning process of MWP.**
**4.2** **Can LLMs Solve MWP in the Form of Multiple-choice Questions?**

In this subsection, we conduct a second experiment to investigate whether LLMs struggle with the multiple-choice format itself or with evaluating the reasoning process. Specifically, for each option of MCGSM8K, we remove the reasoning process (rationale) and keep only the final answer, yielding MCGSM8K-No-Rationale (Figure 2), which consists of 1,319 samples. The model setting is the same as that in subsection 4.1.1. We report the testing accuracy under the settings of 5-shot in-context learning and CoT prompting, as shown in Figure 5 in Appendix E.
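Deriving the No-Rationale variant is a simple transformation; the sketch below assumes each option follows the "<answer>. Rationale: <steps>" format shown in Figure 5.

```python
def strip_rationale(option_text: str) -> str:
    """Keep only the final answer of an option, dropping its step-by-step rationale."""
    return option_text.split(". Rationale:", 1)[0].strip()

def to_no_rationale(mcq: dict) -> dict:
    """Turn an MCGSM8K item into its MCGSM8K-No-Rationale counterpart."""
    return {"question": mcq["question"],
            "options": {label: strip_rationale(text) for label, text in mcq["options"].items()},
            "answer": mcq["answer"]}
```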
The average CoT performance of all tested models on MCGSM8K-No-Rationale is significantly higher than that on MCGSM8K (63.96% vs. 41.59%) and close to that on GSM8K (63.96% vs. 68.98%). According to the MAmmoTH paper (Yue et al., 2023), GPT-4 and MAmmoTH-70B achieve accuracies of 72.60% and 65.00%, respectively, on the AQuA dataset (Ling et al., 2017), which consists of multiple-choice algebraic word problems. These results exhibit the ability of most LLMs to solve MWP in the form of multiple-choice questions. In addition, CoT prompting brings significant performance gains here; e.g., the improvements for LLaMA-2-70B and MetaMath-70B are 20.85% and 38.74%, respectively, as shown in the third column of Table 4 in Appendix A.
**Point 2: Most LLMs are capable of solving MWP in the form of multiple-choice questions.**
**4.3** **Can Reducing the Number of Options in MCGSM8K Improve Model Performance?**

For each sample in MCGSM8K, we remove two of the three incorrect options, leaving only one correct and one incorrect option. This yields MCGSM8K-2Options (Figure 2), which consists of 1,319 samples. The model setup is the same as that in subsection 4.1.1, and we use 8-shot in-context learning without CoT prompting.
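A sketch of this reduction is shown below; because the paper does not specify how the single retained incorrect option is chosen, the sketch picks one at random.

```python
import random

def to_two_options(mcq: dict, rng=random.Random(0)) -> dict:
    """Keep the correct option and one (randomly chosen) incorrect option, then relabel as A/B."""
    correct_text = mcq["options"][mcq["answer"]]
    incorrect_texts = [t for label, t in mcq["options"].items() if label != mcq["answer"]]
    kept = [correct_text, rng.choice(incorrect_texts)]
    rng.shuffle(kept)
    labels = "AB"
    return {"question": mcq["question"],
            "options": dict(zip(labels, kept)),
            "answer": labels[kept.index(correct_text)]}
```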
As illustrated in Table 1, the average accuracy of all tested models on the two-option benchmark MCGSM8K-2Options is merely 1.17% higher than that on the open-ended benchmark GSM8K (70.15% vs. 68.98%), even though the random-chance accuracy rises from 0% to 50%, confirming that the ability of most LLMs to evaluate the reasoning process of MWP is insufficient.
**Point 3: The ability of most LLMs to evaluate the reasoning process of MWP is insufficient.**
**4.4** **Incapable of Identifying Correct Solutions or Incorrect Solutions?**
From the previous experimental results, we observe that most LLMs perform poorly on MCGSM8K and MCGSM8K-2Options. To figure out whether the models are incapable of identifying correct solutions or incorrect solutions, we propose a true-or-false benchmark, GSM8K-Judgement (Figure 2), in which the task is to directly judge the correctness of a solution. Specifically, for each question in GSM8K, we append a solution to the end of the question. The positive sample uses the correct solution, synthesized from the ground-truth answer and its answer rationale. For an open-ended question, a model can in principle generate an unlimited number of distinct incorrect solutions. To reduce randomness, we construct three different negative samples using the three high-quality incorrect solutions from the MCGSM8K options, resulting in a total of 1,319 positive samples and 3,957 negative samples. The model setting is the same as that in subsection 4.1.1. We utilize 5-shot in-context learning and CoT prompts to maximize accuracy, as shown in Figure 5 in Appendix E.
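The sample construction can be sketched as follows, assuming the three incorrect MCGSM8K options for the same question are available; appending the solution to the question mirrors the statement format in Figure 5.

```python
def build_judgement_samples(question, correct_solution, incorrect_options):
    """One positive and three negative GSM8K-Judgement samples for a single GSM8K question."""
    positive = [{"statement": f"{question} {correct_solution}", "label": True}]
    negatives = [{"statement": f"{question} {wrong}", "label": False}
                 for wrong in incorrect_options]
    return positive + negatives
```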
| Model | GSM8K | MCGSM8K | MCGSM8K-No-Rationale | MCGSM8K-2Options | GSM8K-Judgement TPR | GSM8K-Judgement TNR |
|---|---|---|---|---|---|---|
| LLaMA-2-13B | 28.70 | 33.51 | 34.34 | 62.62 | 36.85 | 74.91 |
| SFT-GSM8K | 50.00 | - | - | - | - | - |
| SFT-MCGSM8K | 0.00 | 75.97 | 33.66 | 87.26 | 0.00 | 0.00 |
| SFT-Judgement | 0.00 | 22.21 | 27.82 | 51.60 | 69.60 | 83.24 |
| SFT-hybrid | 43.52 | 70.89 | 41.77 | 80.29 | 78.09 | 83.78 |

Table 2: Testing accuracy of LLaMA-2-13B trained on different data.
The testing accuracy on GSM8K-Judgement is illustrated in the sixth and seventh columns of Table 1. For general open-source models, the average True Negative Rate (TNR) is 76.68%, which is 34.71% higher than the average True Positive Rate (TPR) of 41.97% and 26.68% higher than the random-chance accuracy of 50%. In general, it is easier for humans to identify incorrect solutions than correct solutions, and this experimental result is consistent with that behavior. However, the mathematical specialized models, which are tuned on numerous augmented correct solutions, show the opposite pattern. Compared with general open-source models, their average TPR is 40.01% higher, but their average TNR is 48.83% lower, suggesting that models fine-tuned only on correct solutions improve at judging correct reasoning processes but become much weaker at judging incorrect ones. GPT-3.5-Turbo shows a similar pattern to the mathematical specialized models. Only GPT-4 achieves strong TPR and TNR results at the same time.
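For clarity, TPR and TNR are computed over the positive (correct-solution) and negative (incorrect-solution) statements, respectively; a minimal sketch, assuming boolean gold labels and boolean model verdicts, is:

```python
def tpr_tnr(gold_labels, predictions):
    """True Positive Rate and True Negative Rate for GSM8K-Judgement style true/false verdicts."""
    tp = sum(1 for g, p in zip(gold_labels, predictions) if g and p)
    fn = sum(1 for g, p in zip(gold_labels, predictions) if g and not p)
    tn = sum(1 for g, p in zip(gold_labels, predictions) if not g and not p)
    fp = sum(1 for g, p in zip(gold_labels, predictions) if not g and p)
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    tnr = tn / (tn + fp) if (tn + fp) else 0.0
    return tpr, tnr
```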
**Point 4: Fine-tuning with only correct solutions improves the performance in evaluating correct solutions, but leads to a huge drop in evaluating incorrect solutions.**
**4.5** **Fine-tuning**
Finally, we explore whether fine-tuning can improve the ability of LLMs to evaluate the reasoning process of MWP. We use the widely used LLaMA-2-13B as the base model. LLaMA-2-13B is fully fine-tuned on 8 NVIDIA A100 GPUs, and the training details follow Yu et al. (2023).
In Table 2, the accuracy of SFT-GSM8K, obtained by fine-tuning LLaMA-2-13B on the GSM8K training data, is taken from RFT (Yuan et al., 2023). First, we fine-tune the base model on 6,000 training samples from MCGSM8K and on 3,000 training samples from GSM8K-Judgement (1,500 positive and 1,500 negative samples), respectively. Significant performance improvements are obtained for both SFT-MCGSM8K and SFT-Judgement on the in-domain (IND) datasets, demonstrating the effectiveness of the training data in improving mathematical reasoning process evaluation ability. Meanwhile, SFT-Judgement achieves an improvement of 32.75% on TPR and 8.33% on TNR, revealing that fine-tuning on GSM8K-Judgement training data is effective for enhancing the ability to identify both correct and incorrect solutions. On out-of-domain (OOD) datasets, including GSM8K, MCGSM8K-No-Rationale, and GSM8K-Judgement, the performance decline of SFT-MCGSM8K relative to the base model can reach up to 100%, which is consistent with the conclusion drawn from MAmmoTH (Yue et al., 2023) that fine-tuning LLMs on supervised data specific to certain datasets improves in-domain performance while reducing generalization to tasks beyond the fine-tuning data. In particular, on the GSM8K benchmark, SFT-MCGSM8K and SFT-Judgement completely lose their ability to follow instructions, resulting in irrelevant answers.
To maintain the generalization ability of the model in solving mathematical problems, we collect a hybrid fine-tuning dataset by mixing the training data from GSM8K, MCGSM8K, and GSM8K-Judgement. On GSM8K, SFT-hybrid lags behind SFT-GSM8K. We speculate that the question forms of MCGSM8K and GSM8K-Judgement are completely different from that of GSM8K, so the ability to evaluate the mathematical reasoning process is not successfully transferred to solving the mathematical reasoning problems themselves. Compared with the base model, the improvements of SFT-hybrid in TPR, TNR, MCGSM8K, and GSM8K are 41.24%, 8.87%, 37.38%, and 16.62%, respectively. Moreover, on the unseen MCGSM8K-No-Rationale task, the performance of SFT-hybrid
is better than that of the base model (+7.43%). These results show that fine-tuning can improve the ability of LLMs to evaluate the mathematical reasoning process of MWP.
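A minimal sketch of assembling the hybrid SFT set is given below; the file names and the simple concatenate-and-shuffle mixing are assumptions, since the paper only states that the three training sources are mixed.

```python
import json
import random

def load_jsonl(path):
    """Read one training source stored as JSON lines."""
    with open(path) as f:
        return [json.loads(line) for line in f]

def build_hybrid_sft_data(gsm8k_path, mcgsm8k_path, judgement_path, seed=0):
    """Concatenate the three training sources and shuffle them into one hybrid SFT dataset."""
    mixed = load_jsonl(gsm8k_path) + load_jsonl(mcgsm8k_path) + load_jsonl(judgement_path)
    random.Random(seed).shuffle(mixed)
    return mixed
```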
**4.6** **Case Study**
Appendix F shows examples generated by different models when solving MCGSM8K and GSM8K-Judgement problems.
**5** **Conclusion**
In this paper, we explore the ability of LLMs to evaluate the mathematical reasoning processes of MWP. To achieve this, we utilize incorrect solutions generated by multiple advanced LLMs to curate two benchmarks. One is MCGSM8K and its two variants, a new type of multiple-choice question dataset in which each option contains a full solution to the problem. The other is GSM8K-Judgement, which asks whether a solution to a given problem is true or false. The poor performance of LLMs on MCGSM8K confirms that most LLMs are weak at evaluating mathematical reasoning processes. In particular, the performance on GSM8K-Judgement shows that it is easier to identify incorrect solutions than correct ones. However, merely fine-tuning with correct solutions improves performance in evaluating correct solutions but leads to a huge drop in evaluating incorrect solutions. Fine-tuning models on the proposed training data greatly improves their mathematical reasoning process evaluation ability. Exploring the relation between mathematical problem-solving ability and mathematical reasoning process evaluation is left to future work.
**Limitations**
Through the above experiments and analyses, we summarize the following limitations:

1) In this work, we test the mathematical reasoning process evaluation ability of LLMs on a limited set of benchmarks. In the future, we will utilize additional MWP benchmarks, e.g., MATH and AQuA, to construct more comprehensive mathematical reasoning process evaluation benchmarks.

2) All incorrect solutions in the proposed benchmarks are generated by advanced LLMs, so biases inherent in model generations are inevitable. Furthermore, the constructed benchmarks do not reflect the ability of LLMs to evaluate incorrect solutions written by humans.

3) The fine-tuned model does not exhibit a significant improvement in mathematical problem-solving. How to transfer the ability in mathematical reasoning process evaluation to mathematical problem-solving is left to future work.
**Ethics Statement**
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national
research committee and with the 1964 Helsinki
Declaration and its later amendments or comparable ethical standards. This article does not contain
any studies with animals performed by any of the
authors. Informed consent was obtained from all
individual participants included in the study.
**Acknowledgments**
This work is supported by the National Key Research and Development Program of China under Grant 2023YFF1204901, the National Natural
Science Foundation of China under Grant NSFC62076172, and the Key Research and Development Program of Sichuan Province under Grant
2023YFG0116.
**References**
Shengnan An, Zexiong Ma, Zeqi Lin, Nanning Zheng,
[Jian-Guang Lou, and Weizhu Chen. 2023. Learning](https://doi.org/10.48550/arXiv.2310.20689)
[from mistakes makes LLM better reasoner. arXiv](https://doi.org/10.48550/arXiv.2310.20689)
_preprint arXiv:2310.20689._
Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak
Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng
Chen, Eric Chu, Jonathan H. Clark, Laurent El
Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin
Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao,
Yuanzhong Xu, Yujing Zhang, Gustavo Hernández
Ábrego, Junwhan Ahn, Jacob Austin, Paul Barham,
Jan A. Botha, James Bradbury, Siddhartha Brahma,
Kevin Brooks, Michele Catasta, Yong Cheng, Colin
Cherry, Christopher A. Choquette-Choo, Aakanksha
Chowdhery, Clément Crepy, Shachi Dave, Mostafa
Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz,
Nan Du, Ethan Dyer, Vladimir Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier
Garcia, Sebastian Gehrmann, Lucas Gonzalez, and
[et al. 2023. Palm 2 technical report. arXiv preprint](https://doi.org/10.48550/arXiv.2305.10403)
_arXiv:2305.10403._
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang,
Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei
Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin,
Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu,
Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren,
Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong
Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang,
Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu,
Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang
Zhou, Jingren Zhou, Xiaohuan Zhou, and Tianhang
[Zhu. 2023. Qwen technical report. arXiv preprint](https://doi.org/10.48550/arXiv.2309.16609)
_arXiv:2309.16609._
Yuntao Bai, Saurav Kadavath, Sandipan Kundu,
Amanda Askell, Jackson Kernion, Andy Jones, Anna
Chen, Anna Goldie, Azalia Mirhoseini, Cameron
McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep
Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez,
Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua
Landau, Kamal Ndousse, Kamile Lukosiute, Liane
Lovitt, Michael Sellitto, Nelson Elhage, Nicholas
Schiefer, Noemí Mercado, Nova DasSarma, Robert
Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort,
Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei,
Nicholas Joseph, Sam McCandlish, Tom Brown, and
Jared Kaplan. 2022. Constitutional AI: harmlessness
from AI feedback. arXiv preprint arXiv:2212.08073.
Lukas Berglund, Meg Tong, Max Kaufmann, Mikita
Balesni, Asa Cooper Stickland, Tomasz Korbak, and
[Owain Evans. 2023. The reversal curse: Llms trained](https://doi.org/10.48550/arXiv.2309.12288)
[on "a is b" fail to learn "b is a".](https://doi.org/10.48550/arXiv.2309.12288) _arXiv preprint_
_arXiv:2309.12288._
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss,
Gretchen Krueger, Tom Henighan, Rewon Child,
Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu,
Clemens Winter, Christopher Hesse, Mark Chen, Eric
Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess,
Jack Clark, Christopher Berner, Sam McCandlish,
Alec Radford, Ilya Sutskever, and Dario Amodei.
2020. Language models are few-shot learners. In Ad_vances in Neural Information Processing Systems 33:_
_Annual Conference on Neural Information Process-_
_ing Systems 2020, NeurIPS 2020, December 6-12,_
_2020, virtual._
Wenhu Chen, Xueguang Ma, Xinyi Wang, and
William W. Cohen. 2022a. [Program of thoughts](https://doi.org/10.48550/arXiv.2211.12588)
[prompting: Disentangling computation from reason-](https://doi.org/10.48550/arXiv.2211.12588)
[ing for numerical reasoning tasks. arXiv preprint](https://doi.org/10.48550/arXiv.2211.12588)
_arXiv:2211.12588._
Yanda Chen, Ruiqi Zhong, Sheng Zha, George Karypis,
and He He. 2022b. Meta-learning via language
model in-context tuning. In Proceedings of the 60th
_Annual Meeting of the Association for Computational_
_Linguistics (Volume 1: Long Papers), ACL 2022,_
_Dublin, Ireland, May 22-27, 2022, pages 719–730._
Association for Computational Linguistics.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts,
Paul Barham, Hyung Won Chung, Charles Sutton,
Sebastian Gehrmann, Parker Schuh, Kensen Shi,
Sasha Tsvyashchenko, Joshua Maynez, Abhishek
Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben
Hutchinson, Reiner Pope, James Bradbury, Jacob
Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin,
Toju Duke, Anselm Levskaya, Sanjay Ghemawat,
Sunipa Dev, Henryk Michalewski, Xavier Garcia,
Vedant Misra, Kevin Robinson, Liam Fedus, Denny
Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim,
Barret Zoph, Alexander Spiridonov, Ryan Sepassi,
David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira,
Rewon Child, Oleksandr Polozov, Katherine Lee,
Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark
Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy
Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov,
and Noah Fiedel. 2023. Palm: Scaling language modeling with pathways. J. Mach. Learn. Res., 24:240:1–
240:113.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
[2021a. Training verifiers to solve math word prob-](https://arxiv.org/abs/2110.14168)
[lems. arXiv preprint arXiv:2110.14168.](https://arxiv.org/abs/2110.14168)
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
[Nakano, et al. 2021b. Training verifiers to solve math](https://arxiv.org/abs/2110.14168)
[word problems. arXiv preprint arXiv:2110.14168.](https://arxiv.org/abs/2110.14168)
Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and
Tushar Khot. 2023. Complexity-based prompting for
multi-step reasoning. In The Eleventh International
_Conference on Learning Representations, ICLR 2023,_
_Kigali, Rwanda, May 1-5, 2023. OpenReview.net._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and Ja[cob Steinhardt. 2021. Measuring mathematical prob-](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/be83ab3ecd0db773eb2dc1b0a17836a1-Abstract-round2.html)
[lem solving with the MATH dataset. In Proceedings](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/be83ab3ecd0db773eb2dc1b0a17836a1-Abstract-round2.html)
_of the Neural Information Processing Systems Track_
_on Datasets and Benchmarks 1, NeurIPS Datasets_
_and Benchmarks 2021, December 2021, virtual._
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch,
Elena Buchatskaya, Trevor Cai, Eliza Rutherford,
Diego de Las Casas, Lisa Anne Hendricks, Johannes
Welbl, Aidan Clark, Tom Hennigan, Eric Noland,
Katie Millican, George van den Driessche, Bogdan
Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and
[Laurent Sifre. 2022. Training compute-optimal large](https://doi.org/10.48550/arXiv.2203.15556)
[language models. arXiv preprint arXiv:2203.15556.](https://doi.org/10.48550/arXiv.2203.15556)
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike,
John Schulman, Ilya Sutskever, and Karl Cobbe.
2023. [Let’s verify step by step.](https://doi.org/10.48550/arXiv.2305.20050) _arXiv preprint_
_arXiv:2305.20050._
Chin-Yew Lin. 2005. ROUGE: Recall-oriented understudy for gisting evaluation, version 1.5.5.
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word
problems. In Proceedings of the 55th Annual Meet_ing of the Association for Computational Linguistics,_
_ACL 2017, Vancouver, Canada, July 30 - August 4,_
_Volume 1: Long Papers, pages 158–167. Association_
for Computational Linguistics.
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei
[Lin, Shifeng Chen, and Dongmei Zhang. 2023. Wiz-](https://doi.org/10.48550/arXiv.2308.09583)
[ardmath: Empowering mathematical reasoning for](https://doi.org/10.48550/arXiv.2308.09583)
[large language models via reinforced evol-instruct.](https://doi.org/10.48550/arXiv.2308.09583)
_arXiv preprint arXiv:2308.09583._
Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2022. Metaicl: Learning to learn
in context. In Proceedings of the 2022 Conference of
_the North American Chapter of the Association for_
_Computational Linguistics: Human Language Tech-_
_nologies, NAACL 2022, Seattle, WA, United States,_
_July 10-15, 2022, pages 2791–2809. Association for_
Computational Linguistics.
Swaroop Mishra, Arindam Mitra, Neeraj Varshney,
Bhavdeep Singh Sachdeva, Peter Clark, Chitta Baral,
and Ashwin Kalyan. 2022. Numglue: A suite of
fundamental yet challenging mathematical reasoning
tasks. In Proceedings of the 60th Annual Meeting of
_the Association for Computational Linguistics (Vol-_
_ume 1: Long Papers), ACL 2022, Dublin, Ireland,_
_May 22-27, 2022, pages 3505–3523. Association for_
Computational Linguistics.
OpenAI. 2022. GPT-3.5 technical report.
[OpenAI. 2023. GPT-4 technical report. arXiv preprint](https://doi.org/10.48550/arXiv.2303.08774)
_arXiv:2303.08774._
Guilherme Penedo, Quentin Malartic, Daniel Hesslow,
Ruxandra Cojocaru, Alessandro Cappelli, Hamza
Alobeidli, Baptiste Pannier, Ebtesam Almazrouei,
[and Julien Launay. 2023. The refinedweb dataset](https://doi.org/10.48550/arXiv.2306.01116)
[for falcon LLM: outperforming curated corpora](https://doi.org/10.48550/arXiv.2306.01116)
[with web data, and web data only. arXiv preprint](https://doi.org/10.48550/arXiv.2306.01116)
_arXiv:2306.01116._
Pouya Pezeshkpour and Estevam Hruschka. 2023.
[Large language models sensitivity to the order of](https://doi.org/10.48550/arXiv.2308.11483)
[options in multiple-choice questions. arXiv preprint](https://doi.org/10.48550/arXiv.2308.11483)
_arXiv:2308.11483._
Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie
Millican, Jordan Hoffmann, H. Francis Song, John
Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George
van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang,
Jonathan Uesato, John Mellor, Irina Higgins, Antonia
Creswell, Nat McAleese, Amy Wu, Erich Elsen, Siddhant M. Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, Laurent Sifre, Lena Martens, Xiang Lorraine
Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena
Gribovskaya, Domenic Donato, Angeliki Lazaridou,
Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, Nikolai Grigorev, Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Toby Pohlen, Zhitao Gong,
Daniel Toyama, Cyprien de Masson d’Autume, Yujia
Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin,
Aidan Clark, Diego de Las Casas, Aurelia Guy,
Chris Jones, James Bradbury, Matthew J. Johnson,
Blake A. Hechtman, Laura Weidinger, Iason Gabriel,
William Isaac, Edward Lockhart, Simon Osindero,
Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem
Ayoub, Jeff Stanway, Lorrayne Bennett, Demis Hassabis, Koray Kavukcuoglu, and Geoffrey Irving.
[2021. Scaling language models: Methods, analy-](https://arxiv.org/abs/2112.11446)
[sis & insights from training gopher. arXiv preprint](https://arxiv.org/abs/2112.11446)
_arXiv:abs/2112.11446._
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier
Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal
Azhar, Aurélien Rodriguez, Armand Joulin, Edouard
[Grave, and Guillaume Lample. 2023a. Llama: Open](https://doi.org/10.48550/arXiv.2302.13971)
[and efficient foundation language models.](https://doi.org/10.48550/arXiv.2302.13971) _arXiv_
_preprint arXiv:2302.13971._
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, Dan Bikel, Lukas Blecher, Cristian CantonFerrer, Moya Chen, Guillem Cucurull, David Esiobu,
Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan
Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa,
Isabel Kloumann, Artem Korenev, Punit Singh Koura,
Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten,
Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu,
Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas
[Scialom. 2023b. Llama 2: Open foundation and fine-](https://doi.org/10.48550/arXiv.2307.09288)
[tuned chat models. arXiv preprint arXiv:2307.09288.](https://doi.org/10.48550/arXiv.2307.09288)
Peiyi Wang, Lei Li, Liang Chen, Feifan Song, Binghuai
Lin, Yunbo Cao, Tianyu Liu, and Zhifang Sui. 2023a.
[Making large language models better reasoners with](https://doi.org/10.48550/arXiv.2309.02144)
[alignment. arXiv preprint arXiv:2309.02144.](https://doi.org/10.48550/arXiv.2309.02144)
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V.
Le, Ed H. Chi, Sharan Narang, Aakanksha Chowd
hery, and Denny Zhou. 2023b. Self-consistency
improves chain of thought reasoning in language
models. In The Eleventh International Conference
_on Learning Representations, ICLR 2023, Kigali,_
_Rwanda, May 1-5, 2023. OpenReview.net._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le,
and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In
_NeurIPS._
J. A. Hartigan and M. A. Wong. 1979. Algorithm AS 136: A k-means clustering algorithm. Journal of the Royal Statistical Society: Series C (Applied Statistics), 28(1):100–108.
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng,
Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin
Jiang. 2023. [Wizardlm: Empowering large lan-](https://doi.org/10.48550/arXiv.2304.12244)
[guage models to follow complex instructions. arXiv](https://doi.org/10.48550/arXiv.2304.12244)
_preprint arXiv:2304.12244._
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu,
Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. 2023.
[Metamath: Bootstrap your own mathematical ques-](https://doi.org/10.48550/arXiv.2309.12284)
[tions for large language models.](https://doi.org/10.48550/arXiv.2309.12284) _arXiv preprint_
_arXiv:2309.12284._
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting
[Dong, Chuanqi Tan, and Chang Zhou. 2023. Scal-](https://doi.org/10.48550/arXiv.2308.01825)
[ing relationship on learning mathematical reason-](https://doi.org/10.48550/arXiv.2308.01825)
[ing with large language models.](https://doi.org/10.48550/arXiv.2308.01825) _arXiv preprint_
_arXiv:2308.01825._
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen.
[2023. Mammoth: Building math generalist models](https://doi.org/10.48550/arXiv.2309.05653)
[through hybrid instruction tuning. arXiv preprint](https://doi.org/10.48550/arXiv.2309.05653)
_arXiv:2309.05653._
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang,
Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu,
Wendi Zheng, Xiao Xia, Weng Lam Tam, Zixuan Ma,
Yufei Xue, Jidong Zhai, Wenguang Chen, Zhiyuan
Liu, Peng Zhang, Yuxiao Dong, and Jie Tang. 2023.
GLM-130B: an open bilingual pre-trained model. In
_The Eleventh International Conference on Learning_
_Representations, ICLR 2023, Kigali, Rwanda, May_
_1-5, 2023. OpenReview.net._
Susan Zhang, Stephen Roller, Naman Goyal, Mikel
Artetxe, Moya Chen, Shuohui Chen, Christopher
Dewan, Mona T. Diab, Xian Li, Xi Victoria Lin,
Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar,
[Tianlu Wang, and Luke Zettlemoyer. 2022. OPT:](https://doi.org/10.48550/arXiv.2205.01068)
[open pre-trained transformer language models. arXiv](https://doi.org/10.48550/arXiv.2205.01068)
_preprint arXiv:2205.01068._
**A** **The Impact of Different Prompts on Model Performance**
First, we compare testing accuracy with and without Chain-of-Thought (CoT) prompting. We utilize CoT prompting to make models solve math problems through step-by-step natural language descriptions, as shown in Figure 5. On the MCGSM8K benchmark, the average accuracy of all tested models with and without CoT prompting is 41.59% and 38.46%, respectively, showing that CoT prompting brings only slight improvements. By analyzing the CoT answers illustrated in Figure 6 and Figure 7, we find that most models have difficulty identifying incorrect computational processes and logical fallacies in the reasoning steps, which leads to incorrect results. In addition to fine-tuning, some well-designed CoT prompts can also bring a certain degree of performance improvement.

| Model | Ours | P1 |
|---|---|---|
| LLaMA-2-13B | 33.51 | 33.43 |
| LLaMA-2-70B | 38.29 | 38.51 |
| MetaMath-13B | 23.12 | 23.58 |
| MAmmoTH-13B | 35.25 | 32.68 |
| MetaMath-70B | 34.87 | 35.71 |
| MAmmoTH-70B | 44.58 | 42.30 |

Table 3: Comparison of testing accuracy by different CoT prompts.
On the MCGSM8K-No-Rationale benchmark, the accuracy with CoT prompting is significantly higher than without it for all tested models; e.g., the accuracy of LLaMA-2-70B with and without CoT prompting is 58.45% and 37.60%, and the accuracy of MetaMath-70B with and without CoT prompting is 76.04% and 37.30%.

On the GSM8K-Judgement benchmark, the performance gap with and without CoT prompting is negligible for general closed-source models, while CoT prompting brings significant performance improvements for LLaMA-2-70B. We observe a huge performance decline without CoT prompting for mathematical specialized models, as they forget their instruction-following ability without CoT prompting, resulting in completely irrelevant answers.
Further, we compare the testing accuracy of different CoT prompts on MCGSM8K. We utilize two prompts: one that describes the task and explains the answer (P1), and one that explains why the other options are wrong and why the predicted option is correct before outputting the correct answer label (ours). We illustrate the results in Table 3.
The performance improvement brought by different prompts is less than 3%, and the best result so far is still much lower than that on GSM8K. We suspect this is mainly due to the model's high confidence in the incorrect solutions.
**B** **Ablation Study on Data Construction**
To keep the incorrect options in MCGSM8K confusing and diverse, we design three ways to select negative examples from the candidate solutions: random selection, similarity ranking, and PPL ranking. To explore the effect of these three ways on the testing accuracy of the benchmark, we study the following setups:

(1) PPL Ranking (P): On test data constructed with PPL ranking only, LLaMA-2-70B achieves an accuracy of 51.60%, which is 13.31% higher than its accuracy on MCGSM8K, as shown in Figure 3. We suggest that using PPL ranking alone may produce negative examples with high textual similarity to one another. Thus, the performance gain may not reflect an improved ability to evaluate the reasoning process, but rather the model successfully selecting the correct solutions based on text similarity.

(2) Random Selection + PPL Ranking (R + P): The performance of LLaMA-2-70B drops slightly from 51.60% to 49.28%. This ablation reflects that combining multiple selection methods is more likely to produce diverse negative examples, which makes the benchmark more challenging (i.e., lowers model accuracy). However, the quality of negative samples obtained by random selection cannot be guaranteed, so the accuracy drop is relatively small.

(3) Random Selection + PPL Ranking + Similarity Ranking (MCGSM8K): After adding similarity ranking, the performance of LLaMA-2-70B drops the most, reaching a minimum accuracy of 38.29%. We suggest that incorrect solutions similar to the correct solution are most likely to confuse the model.
Figure 3: Testing accuracy of LLaMA-2-70B on the data constructed by different selecting ways (PPL Ranking: 51.60%; Random Selection + PPL Ranking: 49.28%; Random Selection + PPL Ranking + Similarity Ranking, i.e., MCGSM8K: 38.29%; GSM8K for reference: 56.80%).

**C** **Principles for selecting LLMs to generate solutions**

The reasons for choosing open-source LLMs, and for including LLMs of relatively smaller sizes (i.e., 13B), to synthesize incorrect solutions are as follows:

Firstly, we need to emphasize that the stronger a model's mathematical problem-solving ability, the lower the probability that rejection sampling yields the desired incorrect answers. For example, we
found that, because some GSM8K questions are too easy for strong models such as GPT-4 and MetaMath-70B, we could not obtain the desired wrong answers even when sampling 50 times.

Secondly, we also tried using prompts to push chat models to deliberately generate wrong answers, but we found common patterns in the resulting answers. Similarly, solutions generated by the same closed-source model share similar characteristics and distributions, and these patterns are easily captured by LLMs, lowering the difficulty and quality of the benchmark.

Given the above factors, the efficiency of constructing incorrect solutions with closed-source models is low. To obtain a variety of diverse incorrect solutions for each question from GSM8K, we use multiple LLMs with sizes of 13B and 70B, including the general LLMs Qwen, LLaMA-2, and WizardLM (which has stronger instruction-following capabilities), and the mathematical specialized LLM MetaMath, to generate incorrect solutions.
**D** **MCGSM8K Analysis**
**D.1** **Statistics**
Table 5 describes the basic statistics of MCGSM8K,
which consists of a total of 1,319 multiple-choice
questions with each option containing one solution.
**D.2** **Diversity**
We conduct further analysis to examine the distinctions among the incorrect options. For each sample, we calculate the pairwise ROUGE-L overlap among the three incorrect options and illustrate the distribution of these scores in Figure 4. The results reveal significant diversity among the incorrect options.
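The overlap statistics behind Figure 4 can be reproduced roughly as follows, assuming the `rouge_score` package and that each sample stores its three incorrect options.

```python
from itertools import combinations
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def pairwise_rouge_overlaps(incorrect_options):
    """ROUGE-L F1 for every pair among the incorrect options of one sample."""
    return [scorer.score(a, b)["rougeL"].fmeasure
            for a, b in combinations(incorrect_options, 2)]

def overlap_histogram(dataset, bins=10):
    """Bucket all pairwise overlaps into 0.1-wide bins, as plotted in Figure 4."""
    counts = [0] * bins
    for sample in dataset:
        for score in pairwise_rouge_overlaps(sample["incorrect_options"]):
            counts[min(int(score * bins), bins - 1)] += 1
    return counts
```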
| Model | MCGSM8K | MCGSM8K-No-Rationale | MCGSM8K-2Options | GSM8K-Judgement TPR | GSM8K-Judgement TNR |
|---|---|---|---|---|---|
| LLaMA-2-13B | 33.51/26.99 | 34.34/28.65 | 53.68/62.62 | 36.85/69.14 | 75.26/29.03 |
| LLaMA-2-70B | 38.29/33.66 | 58.45/37.60 | 65.13/66.94 | 47.08/10.99 | 78.09/66.72 |
| AVG | 35.90/30.33 | 46.40/33.13 | 59.41/64.78 | 41.97/40.07 | 76.68/47.88 |
| MetaMath-13B | 23.12/20.55 | 45.49/24.03 | 51.86/57.70 | 85.75/0.07 | 17.54/0.07 |
| MAmmoTH-13B | 35.25/26.16 | 55.34/27.37 | 54.89/58.91 | 59.21/16.22 | 48.17/16.15 |
| MetaMath-70B | 34.87/34.27 | 76.04/37.30 | 69.14/64.22 | 88.93/1.10 | 22.21/1.00 |
| MAmmoTH-70B | 44.58/40.86 | 68.69/35.25 | 69.07/81.20 | 94.01/1.70 | 23.48/5.80 |
| AVG | 34.46/30.46 | 61.39/30.99 | 61.24/65.51 | 81.98/4.77 | 27.85/5.76 |
| GPT-3.5-Turbo | 40.56/40.33 | 79.68/29.34 | 64.59/75.06 | 88.48/89.16 | 47.69/45.94 |
| GPT-4 | 82.56/84.89 | 93.63/56.04 | 90.98/94.56 | 93.70/96.94 | 87.79/87.56 |
| AVG | 61.56/62.61 | 86.66/42.69 | 77.78/84.81 | 91.09/93.05 | 67.74/66.75 |
| AVG All | 41.59/38.46 | 63.96/34.45 | 64.92/70.15 | 74.25/35.67 | 50.03/31.53 |

Table 4: Comparison of testing accuracy with/without CoT prompting.
| Statistic | Value |
|---|---|
| # samples | 1,319 |
| avg. correct option length (in words) | 54 |
| avg. incorrect option length (in words) | 55 |

Table 5: Statistics of the samples in the benchmark MCGSM8K.
Figure 4: Distribution of the ROUGE-L scores between the incorrect options (histogram of pairwise ROUGE-L overlap; x-axis: overlap from 0.1 to 0.9, y-axis: number of samples).
**D.3** **Quality**
So far, we have demonstrated the quantity and diversity of the incorrect options, yet their quality remains uncertain. To address this concern, we randomly select 100 incorrect options from MCGSM8K. Three expert annotators (the authors of this work) are then tasked with assigning a score from 1 to 5 to each incorrect option in terms of fluency, relevance, validity, and consistency.
| Score | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Distribution | 1% | 2% | 15% | 70% | 12% |

Table 6: Average human evaluation on the data quality for 100 incorrect options from MCGSM8K. 1: very poor, 2: poor, 3: okay, 4: good, and 5: very good.
The evaluation results, presented in Table 6, indicate that the majority of incorrect options are relevant and reasonable solutions to the given question, albeit some contain a certain level of noise, such as repetition or calculation errors. We use the intraclass correlation coefficient (ICC) to measure the agreement between raters. The value calculated by ICC(C,1) is 0.92, higher than 0.9, which means the evaluations of the three raters are highly consistent.
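For reference, ICC(C,1) can be computed directly from the two-way ANOVA mean squares; the sketch below follows the McGraw and Wong (1996) consistency formula and assumes the ratings are arranged as a 100 x 3 matrix (items x raters).

```python
import numpy as np

def icc_c1(ratings) -> float:
    """ICC(C,1): consistency of single ratings in a two-way layout (rows = items, columns = raters)."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_total = ((x - grand) ** 2).sum()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # variation between items
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # variation between raters
    ms_rows = ss_rows / (n - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

# Example with placeholder scores for 100 options and 3 raters:
# scores = np.random.randint(1, 6, size=(100, 3)); print(icc_c1(scores))
```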
**E** **Prompts**
We illustrate some examples in Figure 5.
**F** **Case Study**
**F.1** **MCGSM8K Case Study**
We illustrate the result in Figure 6.
**F.2** **GSM8K-Judgement Case Study**
We illustrate the result in Figure 7.
**CoT Prompt with few-shot examples from MCGSM8K**
The following are a math word problem and four solutions. Choose the correct one from the four options (A), (B), (C), and (D).
**Question: Paddington has 40 more goats than Washington. If Washington has 140 goats, how many goats do they have in total?**
**Options: (A) 320. Rationale: If Washington has 140 goats, Paddington has 140+40 =180 goats. In total, they have 140+180 =320 goats (B)**
240. Rationale: If Washington has 140 goats, and Washington has 40 more goats than Paddington, then Paddington has 140 - 40 =100 goats.
Together, they have 100 + 140 =240 goats. (C) 400. Rationale: Washington has 140 goats + 40 goats =180 goats. Since Washington has 180
goats and Paddington has 40 more goats than Washington, Paddington has a total of 180 goats + 40 goats =220 goats. Together, they have
220 goats + 180 goats =400 goats. (D) 180. Rationale: If Washington has 140 goats, Paddington has 140+40 =180 goats. They have 180
goats in total.
**Answer: Let's think step by step. Option (B) incorrectly calculates the number of goats Paddington has by subtracting 40 from the number**
of goats Washington has, which should be adding 40 to the number of goats Washington has.
Option (C) incorrectly calculates the number of goats Washington has by adding 40 to 140. This would give Washington 140 + 40 = 180
goats. However, the problem states that Washington has 140 goats.
Option (D) correctly calculates the number of goats Paddington has (180), but it incorrectly calculates the total number of goats.
Option (A) correctly calculates the total number of goats by adding the number of goats Washington has (140) to the number of goats
Paddington has (180).
Therefore, the correct option is (A)
...
**Question: John plans to sell all his toys and use the money to buy video games. He has 13 lego sets and he sells them for $15 each. He ends**
up buying 8 video games for $20 each and has $5 left. How many lego sets does he still have?
**Options: (A) 16. Rationale: He sold 13 lego sets because 5 / 5 = 1. He spent $100 on video games because 8 x 20 = 160. He earned $130**
from the lego sets because 13 x 15 = 195. He had $235 worth of toys because 195 + 5 = 200. He had 17 lego sets because 200 / 13 = 15.385.
He has 16 left because 15 + 1 = 16. (B) 10. Rationale: He earned $200 selling legos because 13 x 15 = 195. He spent $160 on video games
because 8 x 20 = 160. He had $45 left over because 195 - 160 = 35. He bought 3 sets because 45 / 15 = 3. He now has 10 sets because 13 - 3 = 10. (C) 5. Rationale: He earned 13 * $15 = $200 from selling lego sets. He spent $200 - $10 = $190 on video games. He still has 13 - 8
= 5 lego sets. (D) 2. Rationale: He spent $160 on video games because 8 x 20 = 160. He earned $165 from his sale because 160 + 5 = 165.
He sold 11 lego sets because 11 x 15 = 165. He has 2 sets left because 13 - 11 = 2.
**Answer: Let's think step by step.**
**CoT Prompt with few-shot examples from MCGSM8K-No-Rationale**
The following are a math word problem and four options. Choose the correct one from the four options (A), (B), (C), and (D).
**Question: Paddington has 40 more goats than Washington. If Washington has 140 goats, how many goats do they have in total?**
**Options: (A) 320 (B) 240 (C) 400 (D) 180**
**Answer: Let's think step by step. If Washington has 140 goats, Paddington has 140+40 =180 goats. In total, they have 140+180 =320 goats.**
Therefore, the correct option is (A)
...
**Question: John plans to sell all his toys and use the money to buy video games. He has 13 lego sets and he sells them for $15 each. He ends**
up buying 8 video games for $20 each and has $5 left. How many lego sets does he still have?
**Options: (A) 16 (B) 10 (C) 5 (D) 2**
**Answer: Let's think step by step.**
**CoT Prompt with few-shot examples from GSM8K-Judgement**
The following is a statement about a math word problem. Please determine whether the statement is true or false.
**Statement: Paddington has 40 more goats than Washington. If Washington has 140 goats, how many goats do they have in total? 320.**
Rationale: If Washington has 140 goats, Paddington has 140+40 =180 goats. In total, they have 140+180 =320 goats.
**Answer: Let's think step by step. In this statement, it correctly calculates the total number of goats by adding the number of goats**
Washington has (140) to the number of goats Paddington has (180). The reasoning process and the final answer are correct.
Therefore, the statement is True.
...
**Statement: John plans to sell all his toys and use the money to buy video games. He has 13 lego sets and he sells them for $15 each. He**
ends up buying 8 video games for $20 each and has $5 left. How many lego sets does he still have? 2. Rationale: He spent $160 on video
games because 8 x 20 = 160. He earned $165 from his sale because 160 + 5 = 165. He sold 11 lego sets because 11 x 15 = 165. He has 2
sets left because 13 - 11 = 2.
**Answer: Let's think step by step.**
Figure 5: CoT Prompts with few-shot examples from MCGSM8K, MCGSM8K-No-Rationale, and GSM8K-Judgement
**MCGSM8K Case Study**
**Question: John plans to sell all his toys and use the money to buy video games. He has 13 lego sets and he sells them for $15 each. He ends**
up buying 8 video games for $20 each and has $5 left. How many lego sets does he still have?
**Options: (A) 16. Rationale: He sold 13 lego sets because 5 / 5 = 1. He spent $100 on video games because 8 x 20 = 160. He earned $130**
from the lego sets because 13 x 15 = 195. He had $235 worth of toys because 195 + 5 = 200. He had 17 lego sets because 200 / 13 = 15.385.
He has 16 left because 15 + 1 = 16. (B) 10. Rationale: He earned $200 selling legos because 13 x 15 = 195. He spent $160 on video games
because 8 x 20 = 160. He had $45 left over because 195 - 160 = 35. He bought 3 sets because 45 / 15 = 3. He now has 10 sets because 13 3 = 10. (C) 5. Rationale: He earned 13 * $15 = $200 from selling lego sets. He spent $200 - $10 = $190 on video games. He still has 13 - 8
= 5 lego sets. (D) 2. Rationale: He spent $160 on video games because 8 x 20 = 160. He earned $165 from his sale because 160 + 5 = 165.
He sold 11 lego sets because 11 x 15 = 165. He has 2 sets left because 13 - 11 = 2.
**Ground-truth answer: D**
**LLaMA-2-70B answer: Option (A) is incorrect. The rationale states that he sold 13 lego sets because 5 / 5 = 1. This is incorrect. Then, it**
states that he spent $100 on video games because 8 x 20 = 160. This is also incorrect. Finally, it states that he earned $130 from the lego
sets because 13 x 15 = 195. This is also incorrect.
Option (B) is incorrect. The rationale states that he earned $200 selling legos because 13 x 15 = 195. This is incorrect. Then, it states that
he spent $160 on video games because 8 x 20 = 160. This is also incorrect. Finally, it states that he had $45 left over because 195 - 160 =
35. This is also incorrect.
Option (C) is correct. The rationale states that he earned 13 * $15 = $200 from selling lego sets. This is correct. Then, it states that he spent
$200 - $10 = $190 on video games. This is also correct. Finally, it states that he still has 13 - 8 = 5 lego sets. This is also correct.
Option (D) is incorrect. The rationale states that he spent $160 on video games because 8 x 20 = 160. This is incorrect. Then, it states that
he earned $165 from his sale because 160 + 5 = 165. This is also incorrect. Finally, it states that he sold 11 lego sets because 11 x 15 = 165.
This is also incorrect.
Therefore, the correct option is (C)
**MetaMath-70B answer: Option (A) is incorrect because it states that John sold 13 lego sets, but he actually sold 11 lego sets.**
Option (B) is incorrect because it states that John spent $160 on video games, but he actually spent $160 on video games.
Option (C) is incorrect because it states that John has 5 lego sets left, but he actually has 2 lego sets left.
Option (D) is incorrect because it states that John sold 11 lego sets, but he actually sold 11 lego sets.
Therefore, the correct option is (C)
**MAmmoTH-70B answer: 13 lego sets were sold because 5 / 5 = 1. He spent $100 on video games because 8 x 20 = 160. He earned $130**
from the lego sets because 13 x 15 = 195. He had $235 worth of toys because 195 + 5 = 200. He had 17 lego sets because 200 / 13 = 15.385.
He has 16 left because 15 + 1 = 16.
Therefore, the correct option is (A)
**GPT-3.5-Turbo answer: Option (A) correctly calculates the amount of money John earned from selling the lego sets and the amount he**
spent on video games. However, it incorrectly calculates the number of lego sets he has left. The calculation should be 200/15 = 13.33,
which means he has 13 lego sets left, not 16.
Option (B) correctly calculates the amount of money John earned from selling the lego sets and the amount he spent on video games. It also
correctly calculates the number of lego sets he has left. Therefore, this option is correct.
Option (C) incorrectly calculates the amount of money John earned from selling the lego sets. It also does not provide any information
about the number of lego sets he has left.
Option (D) incorrectly calculates the amount of money John earned from selling the lego sets. It also incorrectly calculates the number of
lego sets he has left.
Therefore, the correct option is (B)
**GPT-4 answer: Option (A) incorrectly calculates the total amount John earned from selling the lego sets and the total number of lego sets**
he had.
Option (B) incorrectly calculates the amount John had left over after buying the video games and the number of lego sets he bought.
Option (C) incorrectly calculates the amount John spent on video games and the number of lego sets he still has.
Option (D) correctly calculates the amount John spent on video games and the amount he earned from selling the lego sets. It also correctly
calculates the number of lego sets he sold and the number he still has.
Therefore, the correct option is (D)
Figure 6: A comparison case on different types of models solving MCGSM8K problems.
**GSM8K-Judgement Case Study**
**Statement: John plans to sell all his toys and use the money to buy video games. He has 13 lego sets and he sells them for $15 each. He**
ends up buying 8 video games for $20 each and has $5 left. How many lego sets does he still have? 2.
Rationale: He spent $160 on video games because 8 x 20 = 160. He earned $165 from his sale because 160 + 5 = 165. He sold 11 lego sets
because 11 x 15 = 165. He has 2 sets left because 13 - 11 = 2.
**Ground-truth answer: True**
**LLaMA-2-70B answer: In the statement, it incorrectly calculates the total amount of money John spent on video games.**
The final answer and reasoning process are incorrect. Therefore, the statement is False.
**MetaMath-70B answer: In the statement, it correctly calculates the total amount of money John earned from selling his lego sets and the**
total amount of money he spent on video games. However, it incorrectly calculates the number of lego sets John still has.
The final answer and reasoning process are incorrect. Therefore, the statement is False.
**MAmmoTH-70B answer: The final answer and reasoning process are correct. Therefore, the statement is True.**
**GPT-3.5-Turbo answer: In the statement, it correctly calculates the amount John spent on video games ($160) and the amount he earned**
from selling his lego sets ($165). However, it incorrectly calculates the number of lego sets he sold (11) and therefore the number of lego
sets he still has.
The final answer and reasoning process are incorrect. Therefore, the statement is False.
**GPT-4 answer: In the statement, it correctly calculates the total amount John spent on video games, the total amount he earned from selling**
his lego sets, and the number of lego sets he sold. It also correctly calculates the number of lego sets he has left.
The final answer and reasoning process are correct. Therefore, the statement is True.
**Statement: Billy sells DVDs. He has 8 customers on Tuesday. His first 3 customers buy one DVD each. His next 2 customers buy 2 DVDs**
each. His last 3 customers don't buy any DVDs. How many DVDs did Billy sell on Tuesday? 5. Rationale: Billy sells 3 DVDs to his first
3 customers. He sells 2 DVDs to his next 2 customers. He sells 0 DVDs to his last 3 customers. Billy sells a total of 3 + 2 + 0 = 5 DVDs on
Tuesday..
**Ground-truth answer: False**
**LLaMA-2-70B answer: In the statement, it correctly calculates the number of DVDs Billy sold to his first 3 customers (3 DVDs), the**
number of DVDs Billy sold to his next 2 customers (2 DVDs), and the number of DVDs Billy sold to his last 3 customers (0 DVDs).
The final answer and reasoning process are correct. Therefore, the statement is True.
**MetaMath-70B answer: It correctly calculates the number of DVDs Billy sells to each group of customers and adds them up to find the**
total number of DVDs sold on Tuesday (5 DVDs).
The final answer and reasoning process are correct. Therefore, the statement is True.
**MAmmoTH-70B answer: 3 + 2 = 5. Therefore, the statement is True.**
#### 5.
**GPT-3.5-Turbo answer: In the statement, it correctly calculates the number of DVDs Billy sells to each group of customers and then adds**
them together to find the total number of DVDs sold (3 + 2 + 0 = 5).
The final answer and reasoning process are correct. Therefore, the statement is True.
**GPT-4 answer: In the statement, it incorrectly calculates the total number of DVDs Billy sold on Tuesday. The first 3 customers bought 1**
DVD each, making a total of 3 DVDs. The next 2 customers bought 2 DVDs each, making a total of 4 DVDs. The last 3 customers didn't
buy any DVDs. Therefore, Billy sold a total of 3 + 4 = 7 DVDs on Tuesday.
The final answer and reasoning process are incorrect. Therefore, the statement is False.
Figure 7: A comparison case on different types of models solving GSM8K-Judgement problems.
| [
"Yidan, Zhang",
"Vivek, Srikumar",
"Mingfeng, Xue",
"Zhenan, He",
"Dayiheng, Liu",
"Lun-Wei, Ku",
"Andre, Martins"
] | 2024-08-01T00:00:00 | ACL 2024 Findings | false | 0 | 0 | null | https://aclanthology.org/2024.findings-acl.524 | null | https://www.semanticscholar.org/paper/21729b0c7b9af67fb06c6997862098cdbce9e9e4 |
ReGenesis: LLMs can Grow into Reasoning Generalists via Self-Improvement | Post-training Large Language Models (LLMs) with explicit reasoning trajectories can enhance their reasoning abilities. However, acquiring such high-quality trajectory data typically demands meticulous supervision from humans or superior models, which can be either expensive or license-constrained. In this paper, we explore how far an LLM can improve its reasoning by self-synthesizing reasoning paths as training data without any additional supervision. Existing self-synthesizing methods, such as STaR, suffer from poor generalization to out-of-domain (OOD) reasoning tasks. We hypothesize it is due to that their self-synthesized reasoning paths are too task-specific, lacking general task-agnostic reasoning guidance. To address this, we propose Reasoning Generalist via Self-Improvement (ReGenesis), a method to self-synthesize reasoning paths as post-training data by progressing from abstract to concrete. More specifically, ReGenesis self-synthesizes reasoning paths by converting general reasoning guidelines into task-specific ones, generating reasoning structures, and subsequently transforming these structures into reasoning paths, without the need for human-designed task-specific examples used in existing methods. We show that ReGenesis achieves superior performance on all in-domain and OOD settings tested compared to existing methods. For six OOD tasks specifically, while previous methods exhibited an average performance decrease of approximately 4.6% after post training, ReGenesis delivers around 6.1% performance improvement. We also conduct in-depth analysis of our framework and show ReGenesis is effective across various LLMs and design choices. | ReGenesis self-synthesizes reasoning paths by converting general reasoning guidelines into task-specific ones, generating reasoning structures, and subsequently transforming these structures into reasoning paths, without the need for human-designed task-specific examples used in existing methods. | [
"Xiangyu, Peng",
"Congying, Xia",
"Xinyi, Yang",
"Chien-Sheng, Wu",
"Caiming, Xiong",
"Chen, Xing"
] | 2024-10-02T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2410.02108 | https://arxiv.org/abs/2410.02108 | https://www.semanticscholar.org/paper/45c70f76c60b582ef8a6432f19a5095643ebe902 |
|
Reasoning Graph Enhanced Exemplars Retrieval for In-Context Learning | Large language models(LLMs) have exhibited remarkable few-shot learning capabilities and unified the paradigm of NLP tasks through the in-context learning(ICL) technique. Despite the success of ICL, the quality of the exemplar demonstrations can significantly influence the LLM's performance. Existing exemplar selection methods mainly focus on the semantic similarity between queries and candidate exemplars. On the other hand, the logical connections between reasoning steps can be beneficial to depict the problem-solving process as well. In this paper, we proposes a novel method named Reasoning Graph-enhanced Exemplar Retrieval(RGER). RGER first quires LLM to generate an initial response, then expresses intermediate problem-solving steps to a graph structure. After that, it employs graph kernel to select exemplars with semantic and structural similarity. Extensive experiments demonstrate the structural relationship is helpful to the alignment of queries and candidate exemplars. The efficacy of RGER on math and logit reasoning tasks showcases its superiority over state-of-the-art retrieval-based approaches. Our code is released at https://github.com/Yukang-Lin/RGER. | A novel method named Reasoning Graph-enhanced Exemplar Retrieval (RGER), which first quires LLM to generate an initial response, then expresses intermediate problem-solving steps to a graph structure, and employs graph kernel to select exemplars with semantic and structural similarity. | [
"Yukang, Lin",
"Bingchen, Zhong",
"Shuoran, Jiang",
"Joanna, Siebert",
"Qingcai, Chen"
] | 2024-09-17T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.11147 | https://arxiv.org/abs/2409.11147 | https://www.semanticscholar.org/paper/1c77b2fb6233d237c482e6f4efabe3b927249535 |
|
Reasoning in Flux: Enhancing Large Language Models Reasoning through Uncertainty-aware Adaptive Guidance | Machine reasoning, which involves solving complex problems through step-by-step deduction and analysis, is a crucial indicator of the capabilities of Large Language Models (LLMs). However, as the complexity of tasks escalates, LLMs often encounter increasing errors in their multi-step reasoning process. This study delves into the underlying factors contributing to these reasoning errors and seeks to leverage uncertainty to refine them. Specifically, we introduce Uncertainty-aware Adaptive Guidance (UAG), a novel approach for guiding LLM reasoning onto an accurate and reliable trajectory. UAG first identifies and evaluates uncertainty signals within each step of the reasoning chain. Upon detecting a significant increase in uncertainty, UAG intervenes by retracting to a previously reliable state and then introduces certified reasoning clues for refinement. By dynamically adjusting the reasoning process, UAG offers a plug-and-play solution for improving LLMs’ performance in complex reasoning. Extensive experiments across various reasoning tasks demonstrate that UAG not only enhances the reasoning abilities of LLMs but also consistently outperforms several strong baselines with minimal computational overhead. Further analysis reveals that UAG is notably effective in identifying and diminishing reasoning errors. | Uncertainty-aware Adaptive Guidance (UAG) is introduced, a novel approach for guiding LLM reasoning onto an accurate and reliable trajectory that not only enhances the reasoning abilities of LLMs but also consistently outperforms several strong baselines with minimal computational overhead. | # Reasoning in Flux: Enhancing Large Language Models Reasoning through Uncertainty-aware Adaptive Guidance
**Zhangyue Yin[♢]** **Qiushi Sun[♡]** **Qipeng Guo[♣]** **Zhiyuan Zeng[♢]** **Xiaonan Li[♢]**
**Junqi Dai[♢]** **Qinyuan Cheng[♢]** **Xuanjing Huang[♢][†]** **Xipeng Qiu[♢][†]**
♢School of Computer Science, Fudan University
♡The University of Hong Kong ♣Shanghai AI Laboratory
{yinzy21,cengzy23,jqdai22,chengqy21}@m.fudan.edu.cn
[email protected] [email protected]
{lixn20, xjhuang, xpqiu}@fudan.edu.cn
**Abstract**
Machine reasoning, which involves solving
complex problems through step-by-step deduction and analysis, is a crucial indicator of the
capabilities of Large Language Models (LLMs).
However, as the complexity of tasks escalates,
LLMs often encounter increasing errors in their
multi-step reasoning process. This study delves
into the underlying factors contributing to these
reasoning errors and seeks to leverage uncertainty to refine them. Specifically, we introduce
Uncertainty-aware Adaptive Guidance (UAG),
a novel approach for guiding LLM reasoning
onto an accurate and reliable trajectory. UAG
first identifies and evaluates uncertainty signals within each step of the reasoning chain.
Upon detecting a significant increase in uncertainty, UAG intervenes by retracting to a previously reliable state and then introduces certified reasoning clues for refinement. By dynamically adjusting the reasoning process, UAG
offers a plug-and-play solution for improving
LLMs’ performance in complex reasoning. Extensive experiments across various reasoning
tasks demonstrate that UAG not only enhances
the reasoning abilities of LLMs but also consistently outperforms several strong baselines
with minimal computational overhead. Further
analysis reveals that UAG is notably effective
in identifying and diminishing reasoning errors.
**1** **Introduction**
The impressive advancements of Large Language
Models (LLMs) have recently brought about a
new era in machine reasoning (Brown et al., 2020;
Chowdhery et al., 2022; OpenAI, 2023; Jiang et al.,
2024; Sun et al., 2024c, inter alia). For challenging
scenarios, decomposing a problem into a series of
intermediate steps has been shown to significantly
improve the performance of LLMs (Cobbe et al.,
2021; Yu et al., 2023; Sun et al., 2024a).
† Corresponding Authors
**Question:** Janet's ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market?

**Correct Reasoning:** [1] 16 eggs per day. [2] 3 for breakfast, 4 for muffins, and the rest for sale. [3] 16 - 3 - 4 = 9 eggs for sale. [4] 9 eggs for sale at $2 each is $18. [5] So the answer is $18.

**Wrong Reasoning:** [1] 16 eggs are laid per day. [2] Janet eats 3 for breakfast every day. [3] 16 - 3 = 13 eggs are left. [4] 13 eggs are baked into muffins. [5] 13 x 4 = 52 muffins. [6] 52 muffins are sold for $2 each. [7] 52 x 2 = 104 dollars. [8] So the answer is 104 dollars.

**Correct NLL Distribution:** [token-level NLL plot]

**Wrong NLL Distribution:** [token-level NLL plot]

Figure 1: An example from the GSM8K dataset, where the model displays higher uncertainty in the tokens where the reasoning is incorrect (highlighted in red boxes). Each step is delineated by periods.
However, as the difficulty of tasks escalates, the
reasoning chains inevitably become more complex
and lengthy (Zhang et al., 2023b). This poses a
challenge for LLMs in managing the accumulation
of errors across multiple intermediate steps (Chen
et al., 2022; Chu et al., 2023).
To mitigate the aforementioned issues, existing
studies have focused on addressing the challenges
from the perspective of alleviating uncertainty. For
example, self-consistency decoding (Wang et al.,
2023) samples multiple reasoning chains and employs a majority voting mechanism to mitigate the
inherent randomness. Tree-of-thought (Yao et al.,
2023) enables the exploration and evaluation of coherent thought units that serve as intermediate steps
in problem-solving. Furthermore, Xie et al. (2023)
conceptualize reasoning as a beam search, incorporating step-wise evaluation in the decoding process.
While these methods have shown promise in enhancing reasoning performance, their manipulation
of the reasoning process is conducted after the generation of individual intermediate steps, lacking the _ability to make fine-grained, flexible adjustments with each step to steer the model effectively_. This
oversight has long been neglected in the study of
reasoning chains, yet it unveils a new pathway to
improve the reasoning capabilities of LLMs.
Delving into individual reasoning steps, our observation reveals that the model can exhibit signs of
uncertainty when faced with potential errors spontaneously. Figure 1 illustrates two distinct reasoning chains generated by LLaMA-2 (Touvron et al.,
2023b), where a gradual shift towards red hues indicates an increase in the model’s uncertainty. This
increase in uncertainty is particularly notable when
the model is in the midst of an incorrect reasoning
step. For instance, the model erroneously interprets
that 13 eggs were “baked” instead of “left” in one
flaw, which leads to a cascade of subsequent errors.
Inspired by this, we propose a novel approach:
UAG, which integrates clues throughout the reasoning process based on the model’s fine-grained
uncertainty. UAG first guides the model in an autoregressive manner to conduct reasoning. Upon
detecting a significant rise in uncertainty within a
step, the erroneous step will be removed, and targeted reasoning clues will be incorporated. Prior
research (Cao, 2023; Diao et al., 2023) indicate
that reasoning exemplars imbued with rich clues
can substantially assist models in completing complex reasoning tasks. From a Bayesian perspective,
we infer the necessity of selecting examples based
on their relevance and originality. Furthermore,
we cluster these example samples to minimize the
search space and reduce computational costs. Our
empirical studies on various datasets, including
mathematical, commonsense, and symbolic reasoning, demonstrate that UAG significantly enhances
model performance on complex reasoning tasks.
Moreover, our method is plug-and-play, applicable
to various open-source LLMs such as LLaMA (Touvron et al., 2023a) and Mistral (Jiang et al., 2023).
Our primary contributions are as follows:
- We conduct a pioneering study that explores
the underlying causes of errors in LLM reasoning, focusing on the role of uncertainty.
- We introduce the Uncertainty-aware Adaptive
Guidance (UAG), a novel technique that leverages model uncertainty to evaluate and enhance the reliability of each reasoning step.
- Our experimental evaluations, conducted
across a diverse range of reasoning tasks,
show that UAG significantly outperforms a
series of strong baselines.
**2** **Related Work**
**2.1** **Demonstration Guidance.**
Recent advancements have emphasized the significance of effective exemplar selection to guide
model reasoning (Zhang et al., 2022; Shum et al.,
2023; Paranjape et al., 2023; Su et al., 2023). Pioneering this field, Zhang et al. (2023c) introduce
AutoCoT, a method that automates the creation
of exemplars by sampling a diverse array of problems and autonomously generating corresponding
reasoning chains (Wei et al., 2022; Kojima et al.,
2022). This approach eliminates the need for manually crafting task-specific examples. Additionally,
some strategies leverage the intrinsic knowledge
of LLMs to enhance the accuracy and factuality
of reasoning processes through exemplar extraction (Wang et al., 2024).
In parallel, Diao et al. (2023) investigate the
application of active learning for selecting informative exemplars, which utilizes the model’s inherent uncertainties. Ye and Durrett (2023) focus on assessing the validity of reasoning chains
within exemplars by evaluating their log-likelihood
and performance accuracy on novel instances. Further extending the utility of LLMs in reasoning, Li
and Qiu (2023) introduce memory-of-thought, a
novel approach that retrieves pre-established, highconfidence thought processes to aid in current reasoning tasks. The potential for LLMs to utilize
their self-generated reasoning chains for continuous self-improvement has been demonstrated in
recent studies (Huang et al., 2023; Zheng et al.,
2023; Lu et al., 2024; Madaan et al., 2023).
**2.2** **Decomposition and Validation.**
Zhou et al. (2023) address the challenges that
LLMs encounter in complex reasoning tasks by advocating for the decomposition of these tasks into
simpler sub-questions. This concept aligns with
the modular decomposition strategy introduced by
Khot et al. (2023), which aims to optimize the handling of individual subtasks effectively. Building
upon these foundations, Yao et al. (2023) innovate
further by integrating a verification process within
the decomposition framework. They conceptualize
reasoning as a tree search, allowing LLMs to traverse various decision branches and perform both
forward and backward exploration at each node.
Similarly, Besta et al. (2024) propose modeling
the reasoning process as a graph structure, which
offers a more dynamic framework for synthesizing
LLM thoughts. Further advancing evaluation techniques, Yin et al. (2024) introduce a two-stage evaluation framework that incorporates local scoring
and global evaluation to enhance LLM decisionmaking. Recent developments also include collaborative efforts, where problems are distributed
among multiple LLMs for resolution (Yin et al.,
2023a). Despite the innovative nature of these
methods, they inherently increase computational
demands, a consequence of the extensive decomposition (Hao et al., 2023; Zhang et al., 2023a; Han
et al., 2023; Zhang et al., 2024), exploration (Liu
et al., 2023), and verification (Sel et al., 2023).
**2.3** **Decoding Enhancement.**
In the realm of decoding strategies, Wang et al.
(2023) introduce self-consistency decoding, a significant shift from traditional greedy decoding.
This method enhances reasoning capabilities by
generating multiple reasoning paths and selecting
the most consistent answer, effectively reducing
the randomness associated with single-sample decoding. Complementing this, Fu et al. (2023b)
advocate for guiding LLMs through more complex
reasoning processes by using exemplars that feature increased reasoning complexity.
Furthering this line, Xie et al. (2023) propose a
model akin to beam search that incorporates a selfevaluation mechanism to refine the decoding process. Building on this, Li et al. (2023a) introduce
contrastive decoding, which helps LLMs avoid basic errors common in smaller models by utilizing
model comparisons. This significantly enhances
the reasoning capabilities of LLMs, as echoed in
the work of O’Brien and Lewis (2023). Additionally, Chuang et al. (2023) explore intra-model dynamics by contrasting outputs from later versus
earlier layers, aiming to cultivate more factually
accurate reasoning within LLMs. Despite these
advancements, these methodologies primarily rely
on the internal representations within models (Sun
et al., 2023; Stechly et al., 2023), often neglecting
the integration of external reasoning cues.
**3** **Preliminary**

This section outlines the foundational concepts that underpin our UAG method. Given a problem Q, a LLM generates an answer A, constructing it token by token through a probabilistic approach as described below:

$$P(A \mid Q) = \prod_{i=1}^{|A|} P_M(a_i \mid Q, a_{<i}), \quad (1)$$

where $P_M(a_i \mid Q, a_{<i})$ represents the probability of generating the i-th token of the answer, given the problem Q and the sequence of previously generated tokens $a_{<i}$.

CoT (Wei et al., 2022) involves supplementing the problem Q with several demonstrations D that include detailed reasoning processes. This methodology guides the model to first generate the rationale R, followed by the answer A:

$$P(R, A \mid D, Q) = P(A \mid D, Q, R)\, P(R \mid D, Q), \quad (2)$$

Applying Bayesian theorem (Bayes and Price, 1763) allows us to further refine our understanding:

$$P(R, A \mid D, Q) = \frac{P(D \mid Q, R, A)\, P(R, A \mid Q)\, P(Q)}{P(D, Q)} = \frac{P(D \mid Q, R, A)\, P(R, A \mid Q)}{P(D \mid Q)}, \quad (3)$$

where a low $P(R, A \mid Q)$ indicates the model's difficulty in generating the desired rationale R and answer A without additional context. Our goal is to enhance the probability of accurately generating both the rationale and the answer by improving $P(R, A \mid D, Q)$. According to Bayes' Theorem, this requires increasing $P(D \mid Q, R, A)$ while decreasing $P(D \mid Q)$.

Given that the answer A typically comprises fewer tokens than the rationale R, our primary focus is on optimizing the impact of the reasoning process R. To this end, we define the following criteria:

- Relevance: $P(D \mid Q, R, A)$ indicates that the reasoning within D aligns closely with our expected reasoning process, emphasizing the relevance of the exemplified reasoning.

- Originality: $P(D \mid Q)$ suggests that the reasoning within D introduces novel concepts or steps unknown to the model (Yin et al., 2023b), highlighting the originality of the exemplar's reasoning process.
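Concretely, Eq. (3) follows from two applications of the product rule:

$$P(R, A \mid D, Q) = \frac{P(R, A, D \mid Q)}{P(D \mid Q)} = \frac{P(D \mid Q, R, A)\, P(R, A \mid Q)}{P(D \mid Q)},$$

which coincides with the first form of Eq. (3) after multiplying the numerator and denominator by $P(Q)$.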
[Figure 2 panels: Demonstration Clustering (candidate exemplars $D_1, \dots, D_k$ scored by relevance and originality) and Adaptive Reasoning Adjustment (backtracking and refining the current reasoning chain).]
Figure 2: An overview of the Uncertainty-aware Adaptive Guidance (UAG) method. The process begins with the
model incrementally generating reasoning based solely on the question while monitoring the uncertainty of each
token. A significant increase in uncertainty signals potential errors, prompting a reversion to the last complete
sentence. Exemplars are then selected from a curated set of examples based on their relevance and originality to
assist in refining and completing the reasoning process.
**4** **Uncertainty-aware Adaptive Guidance**
Uncertainty-aware Adaptive Guidance (UAG) aims
to enhance the reasoning capabilities of LLMs by
incorporating uncertainty awareness and adaptability. As depicted in Figure 2, our method progresses
through three interconnected phases: Uncertainty
Identification, Adaptive Reasoning Adjustment,
and Demonstration Clustering. Each phase plays
a crucial role in refining the LLM’s reasoning process, which we detail in the subsequent sections.
**4.1** **Uncertainty Identification**
In complex reasoning tasks, LLMs generate reasoning chains R sequentially, one token at a time.
This generative process can be formally described
as follows:

$$P(R \mid Q) = \prod_{t} P_M(r_t \mid Q, r_{<t}), \quad (4)$$

where $P(R \mid Q)$ represents the probability of generating the reasoning chain R given a question Q, and $P_M(r_t \mid Q, r_{<t})$ denotes the probability assigned by the model M to the t-th token, given the question and the preceding tokens $r_{<t}$.
A significant challenge in LLM reasoning is the
unreliability and inaccuracy of generated reasoning
chains, often resulting from cumulative errors in
specific reasoning steps (Xie et al., 2023; Zhang
et al., 2023b; Wang et al., 2024; Hao et al., 2023).
The uncertainty during the decoding process typically reflects the model’s confidence level or lack
thereof (Manakul et al., 2023; Iter et al., 2023).
We define the uncertainty of generating the t-th token as:

$$H(r_t) = -\log P(r_t \mid r_{<t}), \quad (5)$$

where $P(r_t \mid r_{<t})$ is the probability that the LLM assigns to the t-th token, given the previous tokens. To assess changes in uncertainty, we utilize the following difference function:

$$\Delta H(r_t) = H(r_t) - H(r_{t-1}), \quad (6)$$

where $\Delta H(r_t)$ quantifies the uncertainty gap for the t-th token relative to the (t-1)-th token. This metric is crucial for highlighting fluctuations in model confidence between decoding steps. Specifically, a positive $\Delta H(r_t)$ indicates rising uncertainty, signaling challenging reasoning steps or potential mistakes. Conversely, a negative value indicates increasing reliability in the sequence.

If the increase in $\Delta H(r_t)$ exceeds a predefined threshold $\theta$, formally expressed as:

$$\Delta H(r_t) > \theta, \quad (7)$$

this condition suggests a significant rise in uncertainty, indicating a potential need for intervention, such as introducing additional context or implementing a corrective mechanism to steer the reasoning chain toward a more reliable trajectory.
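As an illustrative sketch of Eqs. (5)–(7) (the backbone name, threshold value, and helper functions below are assumptions rather than the authors' released code), the token-level uncertainty and its step-wise gap can be read off the log-probabilities of any open-source causal LM:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: per-token uncertainty H(r_t) = -log P(r_t | prefix) and the gap
# Delta H(r_t) = H(r_t) - H(r_{t-1}), with a spike test against a threshold theta.
MODEL = "mistralai/Mistral-7B-v0.1"  # assumed backbone; any causal LM works
tok = AutoTokenizer.from_pretrained(MODEL)
lm = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.float16, device_map="auto")

@torch.no_grad()
def token_uncertainties(prompt: str, generated: str) -> list[float]:
    """Return H(r_t) for every token of `generated`, conditioned on `prompt`."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids.to(lm.device)
    gen_ids = tok(generated, add_special_tokens=False, return_tensors="pt").input_ids.to(lm.device)
    ids = torch.cat([prompt_ids, gen_ids], dim=1)
    logits = lm(ids).logits                       # (1, L, vocab)
    logp = torch.log_softmax(logits[:, :-1], -1)  # position i predicts token i+1
    start = prompt_ids.shape[1]
    targets = ids[0, start:]                      # the generated tokens
    token_logp = logp[0, start - 1 : ids.shape[1] - 1].gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return (-token_logp).tolist()                 # H(r_t)

def first_uncertainty_spike(H: list[float], theta: float = 2.0) -> int | None:
    """Index t of the first token where Delta H(r_t) > theta (theta is illustrative)."""
    for t in range(1, len(H)):
        if H[t] - H[t - 1] > theta:
            return t
    return None
```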
**4.2** **Adaptive Reasoning Adjustment**
When faced with increased uncertainty, UAG aims
to rectify errors by eliminating erroneous reasoning steps and introducing supplementary reasoning
clues, as aligned with the relevance and originality
criteria defined in Section 3.
For refinement, we first backtrack within the reasoning chain to the last coherent step, denoted as
$r_m$, where $r_{\leq m}$ represents the most recently completed reasoning step (as illustrated in Figure 2).

To mitigate the uncertainty in the reasoning process, guided by Eq 3, our objective is to carefully select a demonstration to serve as external insights. We aim to increase $P(D \mid Q, R, A)$ and decrease $P(D \mid Q)$, where D comprises $\{Q_d, R_d, A_d\}$. Given that the model has not yet completed its reasoning, we utilize the existing reasoning process $r_{\leq m}$ to calculate the relevance score $S_R$ as follows:

$$S_R = \log P(D \mid Q, r_{\leq m}) = \log P(Q_d, R_d, A_d \mid Q, r_{\leq m}) \quad (8)$$

A lower probability suggests higher originality; hence, we calculate the originality score through the negative log-likelihood of the probability:

$$S_O = -\log P(D \mid Q) = -\log P(Q_d, R_d, A_d \mid Q) \quad (9)$$

The selection score S is then computed as the weighted average of the two scores:

$$S = \lambda_1 S_R + \lambda_2 S_O, \quad (10)$$

where $\lambda_1$ and $\lambda_2$ are the weights assigned to the relevance and originality scores, respectively.

Following this, we rank the demonstrations according to the selection score S, ordered from highest to lowest as $S_1, S_2, S_3, \dots$, corresponding to $D_1, D_2, D_3, \dots$. We retain the initial reasoning process $r_{\leq m}$ and sequentially append each $D_i$ in front of the problem Q. Our objective is to identify a $D_i$ such that, upon generating up to the next step, the uncertainty remains consistently below a predefined threshold $\theta$:

$$\exists D_i : \forall k, \ \Delta H(r_{m+k}) \leq \theta, \quad (11)$$

where k is the index of the tokens generated after appending $D_i$. This ensures that the enhanced reasoning process effectively mitigates uncertainty and increases the reliability of the reasoning chain.
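A minimal sketch of the scoring in Eqs. (8)–(10) follows; the prompt concatenation, default weights, and the `logprob` helper are illustrative assumptions, not the authors' implementation:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: score a candidate demonstration D by relevance S_R = log P(D | Q, r_<=m)
# and originality S_O = -log P(D | Q), then combine as S = lam1*S_R + lam2*S_O.
MODEL = "mistralai/Mistral-7B-v0.1"  # assumed backbone
tok = AutoTokenizer.from_pretrained(MODEL)
lm = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.float16, device_map="auto")

@torch.no_grad()
def logprob(context: str, text: str) -> float:
    """Sum of log P(token | context, previous tokens) over the tokens of `text`."""
    ctx = tok(context, return_tensors="pt").input_ids.to(lm.device)
    txt = tok(text, add_special_tokens=False, return_tensors="pt").input_ids.to(lm.device)
    ids = torch.cat([ctx, txt], dim=1)
    logp = torch.log_softmax(lm(ids).logits[:, :-1], -1)
    start = ctx.shape[1]
    tgt = ids[0, start:]
    return logp[0, start - 1 : ids.shape[1] - 1].gather(-1, tgt.unsqueeze(-1)).sum().item()

def selection_score(demo: str, question: str, partial_reasoning: str,
                    lam1: float = 1.0, lam2: float = 1.0) -> float:
    s_r = logprob(question + "\n" + partial_reasoning, demo)  # Eq. (8)
    s_o = -logprob(question, demo)                            # Eq. (9)
    return lam1 * s_r + lam2 * s_o                            # Eq. (10)

# Candidates are then ranked from highest to lowest score, e.g.:
# ranked = sorted(demos, key=lambda d: selection_score(d, Q, r_prefix), reverse=True)
```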
**4.3** **Optimizing Demonstration Selection**
**Through Clustering**
Given the reasoning process for each question Q,
it becomes necessary to retrieve demonstrations D
when the uncertainty exceeds the predefined threshold θ. This could lead to a high computational
overhead due to the extensive retrieval of demonstrations. To address this, we propose a clustering
approach for organizing the demonstration set $\{D\}$, aimed at selecting representative exemplars efficiently.
Initially, we compute a vector representation for
each demonstration Di using a sophisticated text
embedding model, text-embedding-3-large.
Subsequently, these vectorized demonstrations are
grouped into k distinct clusters using the k-means
clustering algorithm. This method efficiently
groups similar reasoning processes, enhancing the
efficiency of demonstration selection.
We structure each cluster $C_j$ by sorting its demonstrations according to their proximity to the cluster's centroid, prioritizing those closest as they are most representative of the cluster's reasoning pattern:

$$C_j = [D_1^{j}, D_2^{j}, \dots], \quad C_j \in \text{K-Means}(\{D\}), \quad (12)$$

where each cluster $C_j$ comprises demonstrations $[D_1^{j}, D_2^{j}, \dots]$ that exhibit similar reasoning traits, determined by the K-means clustering of the demonstration set $\{D\}$.

This clustering-based approach not only streamlines the demonstration selection process but also
ensures that the model has access to a diverse yet
concise set of reasoning examples. It effectively
balances the need for a broad variety of reasoning
examples with computational efficiency, ensuring
that the reasoning process is both accurate and practical.
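As a rough sketch of this step (the OpenAI client usage, the number of clusters k, and the random seed are assumptions; any sentence embedder would serve the same purpose):

```python
import numpy as np
from openai import OpenAI
from sklearn.cluster import KMeans

# Sketch of Eq. (12): embed demonstrations, run k-means, and order each cluster
# so that the demonstrations closest to the centroid (most representative) come first.
client = OpenAI()  # assumes OPENAI_API_KEY is configured

def cluster_demonstrations(demos: list[str], k: int = 8) -> dict[int, list[str]]:
    resp = client.embeddings.create(model="text-embedding-3-large", input=demos)
    X = np.array([item.embedding for item in resp.data])
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    clusters: dict[int, list[str]] = {}
    for j in range(k):
        idx = np.where(km.labels_ == j)[0]
        dists = np.linalg.norm(X[idx] - km.cluster_centers_[j], axis=1)
        clusters[j] = [demos[i] for i in idx[np.argsort(dists)]]
    return clusters
```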
**5** **Experiments**
**5.1** **Experimental Setup**
In this section, we delineate and scrutinize the performance of our proposed UAG, utilizing a variety of LLM backbones across a series of reasoning benchmarks. To evaluate the efficacy of UAG,
we primarily employ accuracy as the performance
metric. Furthermore, we undertake an analysis of
https://openai.com/index/new-embedding-models-and-api-updates
| Method | GSM8K | MultiArith | SingleEQ | AddSub | SVAMP | AQuA | Average | Avg. #Tokens |
|---|---|---|---|---|---|---|---|---|
| _Single Reasoning Chain_ | | | | | | | | |
| ZS-CoT | 41.93 | 64.50 | 71.65 | 75.19 | 59.40 | 31.89 | 57.43 | 178.25 |
| CoT | 38.89 | 75.50 | 77.36 | 75.19 | 59.20 | 30.71 | 59.48 | 87.88 |
| ComplexCoT | 43.67 | 76.50 | 77.56 | 76.20 | 56.90 | 31.50 | 60.38 | 124.96 |
| UAG | **46.70** | **77.66** | **79.92** | **77.97** | **60.70** | **33.85** | **62.80** | 151.76 |
| _Multiple Reasoning Chains_ | | | | | | | | |
| ZS-CoT-SC | 52.16 | 75.33 | 77.17 | 80.25 | 67.40 | 37.01 | 64.89 | 884.79 |
| CoT-SC | 47.08 | 85.00 | 85.43 | 79.24 | 67.40 | 36.22 | 66.73 | 442.23 |
| ComplexCoT-SC | 56.63 | 85.83 | 83.86 | 79.49 | **67.90** | 35.83 | 68.26 | 629.18 |
| UAG-SC | **58.07** | **87.66** | **86.41** | **81.26** | 67.60 | **39.76** | **70.12** | 772.53 |
Table 1: Results on Arithmetic Reasoning Tasks (Accuracy in %). The best result is highlighted in bold, while
the method with the lowest computational cost is denoted in green. We utilize the Mistral-7B (Jiang et al., 2023)
backbone for all methods to ensure a fair comparison. For brevity, Zero-Shot-CoT is abbreviated as ZS-CoT. In
multiple reasoning chains scenario, the self-consistency is applied to determine the final outcome. Additionally, we
report the average number of generated tokens ( #Tokens ) to compare the computational efficiency of each method.
(a) Commonsense Reasoning: accuracy and cost of Zero-Shot CoT, CoT, and UAG on CSQA, StrategyQA, BoolQ, and ARC. (b) Symbolic Reasoning: accuracy and cost of Zero-Shot CoT, CoT, and UAG on Date, Penguin, Colored Obj., and Obj. Counting.

Figure 3: Performance comparison and cost curves on (a) commonsense reasoning and (b) symbolic reasoning. The cost and performance of different methods correspond to each other. Histograms show the accuracy, while line charts illustrate the cost.
the computational costs among different methods,
quantified by the number of tokens generated during the reasoning process. Implementation details
and configurations of the various approaches can
be found in Appendix B.
**Benchmarks.** Our evaluation encompasses three
distinct categories of reasoning tasks, ensuring a
comprehensive analysis of UAG’s versatility and
effectiveness:
- Arithmetic Reasoning: This category includes
GSM8K (Cobbe et al., 2021), MultiArith (Roy
and Roth, 2015), SingleEq (Koncel-Kedziorski
et al., 2016), AddSub (Hosseini et al., 2014),
SVAMP (Patel et al., 2021), and AQUA (Ling
et al., 2017), which encompass a range of arithmetic problem-solving tasks.
- Commonsense Reasoning: We employ StrategyQA (Geva et al., 2021), CommonsenseQA
(CSQA; Talmor et al., 2019), BoolQ (Clark et al.,
2019), and AI2 Reasoning Challenge (ARC-c;
Clark et al., 2018) to gauge the model’s ability to
understand and apply commonsense knowledge.
- Symbolic Reasoning: This involves datasets derived from BigBench (bench authors, 2023; Suzgun et al., 2023), specifically Date Understanding, Penguins in a Table, Colored Objects, and
Object Counting, which test the model’s skill in
abstract and symbolic thought processes.
Following Zhang et al. (2023c), our experiments
are conducted in a “test question only” scenario,
where we lack access to correct answers and must
independently construct exemplars. Detailed descriptions and statistics of these benchmarks are
provided in the appendix A.
**Baselines.** For an intuitive and comprehensive
performance comparison, we incorporate three
primary categories of baselines: (1) Zero-shot
CoT (Kojima et al., 2022) for reasoning without
exemplars; (2) CoT (Wei et al., 2022) for exemplar-guided chain-of-thought prompting; and (3) ComplexCoT (Fu et al., 2023b) for complexity-based
prompting. Furthermore, we also employ selfconsistency decoding (Wang et al., 2023) as a
strong baseline with multiple reasoning chains for
comparison. In our analysis, UAG’s performance
is contrasted not only with these baselines but
also with a variety of exemplar selection (Zhang
et al., 2023c) and decoding enhancement techniques (O’Brien and Lewis, 2023; Chuang et al.,
2023) in Appendix C.4. For generation, we adhere
to the few-shot exemplars of baselines and use the
number of generated tokens as a metric to assess
the computational cost of each method.
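For reference, the self-consistency baseline reduces to majority voting over independently sampled reasoning chains; a minimal sketch (answer parsing is assumed to happen upstream):

```python
from collections import Counter

def self_consistency(answers: list[str]) -> str:
    """Majority vote over final answers parsed from independently sampled chains."""
    return Counter(answers).most_common(1)[0][0]

# e.g. self_consistency(["18", "18", "104", "18"]) -> "18"
```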
**Backbones.** We derive embeddings for clustering
through text-embedding-3-large. In our evaluation, we utilize open-source models LLaMA2 (Touvron et al., 2023b) and Mistral (Jiang et al.,
2023), applying various prompting techniques such
as Zero-Shot CoT (ZS-CoT) (Kojima et al., 2022),
CoT (Wei et al., 2022), and ComplexCoT (Fu et al.,
2023b). Section 5.3 details our examination of the
scalability of our approach across various model
sizes, specifically using the 7B, 13B, and 70B parameter configurations of LLaMA-2. Additionally,
we integrate a Mixture of Experts model, Mistral8x7B (Jiang et al., 2024) to further diversify our
evaluations.
**5.2** **Main Results**
**Arithmetic Reasoning.** In Table 1, we present
the results of arithmetic reasoning tasks. Our
method manifests substantial performance enhancement across most benchmarks, in both single and
multiple chain scenarios. Notably, we observe absolute increments in accuracy of 7.81%, 2.78%,
and 3.14% on GSM8K, AddSub, and AQuA benchmarks, respectively, when compared to CoT approach (Wei et al., 2022). This disparity in performance gains can be attributed to UAG’s strategy of constraining the reasoning search space by
strategically eliminating uncertainty, as reflected in
its superior performance across tasks. This underscores the efficacy of introducing controllable randomness in the UAG decoding process to expand
the search space. A more extensive comparison
of UAG against a broader spectrum of methods is
detailed in Appendix C, where we delve deeper
into the underlying factors contributing to UAG’s
enhanced performance.
Figure 4: Performance comparison across models (LLaMA-7B, LLaMA-13B, LLaMA-70B, Mistral-7B, Mistral-8x7B; Zero-Shot CoT, CoT, Complex CoT, and UAG) on the AQuA dataset. UAG achieves consistent performance improvement across various models.

**Commonsense and Symbolic Reasoning.** As illustrated in Figure 3a and Figure 3b, UAG consistently outperforms existing approaches across a
variety of tasks. For instance, on the BoolQ dataset,
our approach achieves an accuracy rate of 62.26%,
significantly surpassing the 58.07% accuracy of the
baseline method. Particularly noteworthy is our performance on the StrategyQA and BoolQ datasets,
where we observe that the Zero-Shot CoT (Kojima et al., 2022) struggled in guiding the model to
generate precise and accurate answers, leading to
significant performance declines.
**Computational Cost.** Despite the significant improvements our approach yields across various
benchmarks, a potential concern arises regarding
the computational overhead. Upon examining Figures 3a and 3b, we note that UAG, when applied to
commonsense and symbolic reasoning tasks, does
not incur substantial performance costs in comparison to CoT. For instance, in the Date Understanding dataset, UAG demonstrates only a 20%
increase in overhead relative to CoT, yet it achieves
an enhancement of over 3% in performance. Furthermore, as illustrated in Table 1, in arithmetic
reasoning, UAG’s computational demand is comparable to ComplexCoT and is even more efficient
than the exemplar-free Zero-Shot CoT approach.
Notably, UAG’s mechanism of refining reasoning
based on finished reasoning steps ensures that the
additional computational overhead is minimal.
**5.3** **Further Analysis**
**Performance on various models.** In Figure 4,
we compare the performance of UAG against other
baselines across different LLMs. UAG consistently
outperforms these baselines. Intriguingly, CoT
does not always outperform Zero-Shot-CoT (Kojima et al., 2022), e.g., Zero-Shot-CoT outperforms
CoT on Mistral-7B, and this advantage is more
pronounced on Mistral-8x7B. Furthermore, an increased number of exemplars does not unconditionally enhance performance; in certain cases, such as
with Mistral, better results are achieved even without
exemplars (Li et al., 2023b). In this scenario, Mistral is even better without exemplar. This phenomenon underscores UAG's strategic advantage in introducing the most appropriate exemplars on-demand, which substantially contributes to the observed performance improvements of our method.

Figure 5: Ablation study on relevance and originality (UAG vs. Relevance Only, Originality Only, and Random selection on GSM8K, AQuA, CSQA, and StrategyQA). Performance impact across datasets when omitting either relevance or originality.

**Importance of Relevance and Originality.** In Section 3, we introduce the concepts of relevance and originality, with exemplar selection criteria detailed in Eq 10. An ablation study, as depicted in Figure 5, evaluates the impact of these factors. We modify the experiment by omitting the condition in Eq 11 and employing random selection for comparison. The results reveal that excluding originality leads to an average performance drop of 5.73%, as the model then relies solely on question relevance, often being misled by errors in analogous questions (Zhang et al., 2023c). Conversely, eliminating relevance also results in significant performance decline, rendering the exemplar selection process ineffective. A further analysis of the weights λ1 and λ2 is elaborated in Appendix C.1.

Figure 6: Performance comparison of UAG and AutoCoT methods on various reasoning datasets (GSM8K, AQuA, CSQA, StrategyQA).

**Comparison to Existing Demonstration Selection Method.** In Figure 6, we conduct an analysis of our UAG method against another representative demonstration selection strategy utilizing clustering, Auto-CoT (Zhang et al., 2023c), across various reasoning datasets. This comparison, under the same experimental setting, reveals a notable performance enhancement by UAG over Auto-CoT in each dataset. These findings indicate the superiority of UAG, which dynamically selects appropriate demonstrations based on uncertainty during the reasoning process, as opposed to Auto-CoT's static pre-selection approach.

**6** **Conclusion**

In this paper, we start by observing the causes behind LLM reasoning errors from the perspective of uncertainty, paving the way for nuanced and adaptive modifications in the reasoning process. Driven by this insight, we introduce the Uncertainty-aware Adaptive Guidance (UAG). UAG strategically mitigates reasoning errors by identifying and eliminating uncertainties with reasoning steps, concurrently leveraging exemplars chosen for their relevance and originality to dispel uncertainty and steer the subsequent reasoning trajectory. Comprehensive experimental evaluations across a spectrum of reasoning tasks demonstrate that UAG not only surpasses several strong baselines but also exhibits adaptability to a diverse array of models, underscoring its efficacy and wide applicability in enhancing LLM reasoning capabilities. Further analysis showcases the remarkable versatility of UAG, which can function as a plug-and-play module and efficiently identify and eliminate reasoning errors.

**Ethics Statement**

The development and deployment of Large Language Models (LLMs), such as those described in this paper, necessitate careful consideration of various ethical concerns. We outline several key areas of focus:

**Data Privacy and Security.** Our approach, involving the enhancement of LLMs for reasoning tasks, does not require the collection or processing of personal data. The prompts and methods used are devoid of personal information, aligning with privacy preservation principles.

**Impact on Workforce.** The automation capabilities of LLMs might affect employment in certain sectors. It is important to consider the broader societal implications and support the workforce in adapting to these technological changes.

**Environmental Considerations.** The computational resources required for training and running LLMs have environmental impacts. We advocate for the use of sustainable practices and the exploration of energy-efficient models.

In conducting this research, we have adhered to ethical guidelines and ensured compliance with the licensing requirements of the datasets used, as detailed in Table 2. Our commitment to ethical research extends beyond legal compliance, encompassing a broader responsibility to the societal implications of our work.

**Limitations**

While our Uncertainty-aware Adaptive Guidance (UAG) method demonstrates significant improvements in reasoning tasks, it is important to acknowledge certain limitations that point towards areas for future development:

**Applicability to Closed-Source Models.** A notable constraint of our method is its limited applicability to closed-source models. Models such as ChatGPT and Claude, which do not provide access to the probability distribution of tokens, pose a challenge to the implementation of UAG. This limitation restricts the versatility of our approach, as it cannot be directly applied to these commercially closed-source models.

**Generalization to Broader Generative Tasks.** While our research has focused predominantly on reasoning tasks, the potential of leveraging uncertainty in LLMs extends to a wider spectrum of generative applications (Manakul et al., 2023), such as code generation (Sun et al., 2024b). This includes using uncertainty as a metric for evaluating model hallucination (Ji et al., 2023), generation quality, and other aspects of generative performance. However, our current scope has not encompassed these areas, and we recognize this as an opportunity for future research to expand the applicability and utility of UAG in these domains.

**Acknowledgement**

We extend our gratitude to the members of the FudanNLP group for their insightful suggestions and thought-provoking discussions that greatly enhanced this work. We also sincerely appreciate the anonymous reviewers and area chairs for their constructive feedback, which was instrumental in advancing the quality of our study. This work was supported by the National Natural Science Foundation of China (No. 62236004). The computations in this research were performed using the CFFF platform of Fudan University.
**References**
[Mr. Bayes and Mr. Price. 1763. An essay towards solv-](http://www.jstor.org/stable/105741)
[ing a problem in the doctrine of chances. by the late](http://www.jstor.org/stable/105741)
[rev. mr. bayes, f. r. s. communicated by mr. price, in](http://www.jstor.org/stable/105741)
[a letter to john canton, a. m. f. r. s. Philosophical](http://www.jstor.org/stable/105741)
_Transactions (1683-1775), 53:370–418._
[BIG bench authors. 2023. Beyond the imitation game:](https://openreview.net/forum?id=uyTL5Bvosj)
[Quantifying and extrapolating the capabilities of lan-](https://openreview.net/forum?id=uyTL5Bvosj)
[guage models. Transactions on Machine Learning](https://openreview.net/forum?id=uyTL5Bvosj)
_Research._
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas Gianinazzi,
Joanna Gajda, Tomasz Lehmann, Hubert Niewiadom[ski, Piotr Nyczyk, and Torsten Hoefler. 2024. Graph](http://arxiv.org/abs/2308.09687)
[of thoughts: Solving elaborate problems with large](http://arxiv.org/abs/2308.09687)
[language models.](http://arxiv.org/abs/2308.09687)
Tom Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss,
Gretchen Krueger, Tom Henighan, Rewon Child,
Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens
Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack
Clark, Christopher Berner, Sam McCandlish, Alec
Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language models are few-shot learners. In NeurIPS,
volume 33, pages 1877–1901.
[Lang Cao. 2023. Enhancing reasoning capabilities of](http://arxiv.org/abs/2308.09267)
[large language models: A graph-based verification](http://arxiv.org/abs/2308.09267)
[approach.](http://arxiv.org/abs/2308.09267)
Wenhu Chen, Xueguang Ma, Xinyi Wang, and
William W. Cohen. 2022. [Program of thoughts](http://arxiv.org/abs/2211.12588)
[prompting: Disentangling computation from reason-](http://arxiv.org/abs/2211.12588)
[ing for numerical reasoning tasks.](http://arxiv.org/abs/2211.12588)
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul
Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language
modeling with pathways.
Zheng Chu, Jingchang Chen, Qianglong Chen, Weijiang
Yu, Tao He, Haotian Wang, Weihua Peng, Ming Liu,
[Bing Qin, and Ting Liu. 2023. A survey of chain of](http://arxiv.org/abs/2309.15402)
[thought reasoning: Advances, frontiers and future.](http://arxiv.org/abs/2309.15402)
Yung-Sung Chuang, Yujia Xie, Hongyin Luo, Yoon
[Kim, James Glass, and Pengcheng He. 2023. Dola:](http://arxiv.org/abs/2309.03883)
[Decoding by contrasting layers improves factuality](http://arxiv.org/abs/2309.03883)
[in large language models.](http://arxiv.org/abs/2309.03883)
Christopher Clark, Kenton Lee, Ming-Wei Chang,
Tom Kwiatkowski, Michael Collins, and Kristina
[Toutanova. 2019. BoolQ: Exploring the surprising](https://doi.org/10.18653/v1/N19-1300)
[difficulty of natural yes/no questions. In Proceedings](https://doi.org/10.18653/v1/N19-1300)
_of the 2019 Conference of the North American Chap-_
_ter of the Association for Computational Linguistics:_
_Human Language Technologies, Volume 1 (Long and_
_Short Papers), pages 2924–2936, Minneapolis, Min-_
nesota. Association for Computational Linguistics.
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot,
Ashish Sabharwal, Carissa Schoenick, and Oyvind
Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. ArXiv,
abs/1803.05457.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
[2021. Training verifiers to solve math word prob-](http://arxiv.org/abs/2110.14168)
[lems.](http://arxiv.org/abs/2110.14168)
Shizhe Diao, Pengcheng Wang, Yong Lin, and Tong
Zhang. 2023. [Active prompting with chain-of-](http://arxiv.org/abs/2302.12246)
[thought for large language models.](http://arxiv.org/abs/2302.12246)
Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei
[Liu. 2023a. Gptscore: Evaluate as you desire.](http://arxiv.org/abs/2302.04166)
Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and
[Tushar Khot. 2023b. Complexity-based prompting](https://openreview.net/forum?id=yf1icZHC-l9)
[for multi-step reasoning. In The Eleventh Interna-](https://openreview.net/forum?id=yf1icZHC-l9)
_tional Conference on Learning Representations._
Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot,
[Dan Roth, and Jonathan Berant. 2021. Did aristotle](https://doi.org/10.1162/tacl_a_00370)
[use a laptop? a question answering benchmark with](https://doi.org/10.1162/tacl_a_00370)
[implicit reasoning strategies. Transactions of the](https://doi.org/10.1162/tacl_a_00370)
_Association for Computational Linguistics, 9:346–_
361.
Chengcheng Han, Xiaowei Du, Che Zhang, Yixin Lian,
[Xiang Li, Ming Gao, and Baoyuan Wang. 2023. Di-](https://doi.org/10.18653/v1/2023.emnlp-main.501)
[alCoT meets PPO: Decomposing and exploring rea-](https://doi.org/10.18653/v1/2023.emnlp-main.501)
[soning paths in smaller language models. In Proceed-](https://doi.org/10.18653/v1/2023.emnlp-main.501)
_ings of the 2023 Conference on Empirical Methods_
_in Natural Language Processing, pages 8055–8068,_
Singapore. Association for Computational Linguistics.
Shibo Hao, Yi Gu, Haodi Ma, Joshua Hong, Zhen
Wang, Daisy Wang, and Zhiting Hu. 2023. [Rea-](https://doi.org/10.18653/v1/2023.emnlp-main.507)
[soning with language model is planning with world](https://doi.org/10.18653/v1/2023.emnlp-main.507)
[model. In Proceedings of the 2023 Conference on](https://doi.org/10.18653/v1/2023.emnlp-main.507)
_Empirical Methods in Natural Language Processing,_
pages 8154–8173, Singapore. Association for Computational Linguistics.
Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren
[Etzioni, and Nate Kushman. 2014. Learning to solve](https://doi.org/10.3115/v1/D14-1058)
[arithmetic word problems with verb categorization.](https://doi.org/10.3115/v1/D14-1058)
In Proceedings of the 2014 Conference on Empirical
_Methods in Natural Language Processing (EMNLP),_
pages 523–533. Association for Computational Linguistics.
Jiaxin Huang, Shixiang Gu, Le Hou, Yuexin Wu, Xuezhi
[Wang, Hongkun Yu, and Jiawei Han. 2023. Large](https://doi.org/10.18653/v1/2023.emnlp-main.67)
[language models can self-improve. In Proceedings](https://doi.org/10.18653/v1/2023.emnlp-main.67)
_of the 2023 Conference on Empirical Methods in Nat-_
_ural Language Processing, pages 1051–1068, Singa-_
pore. Association for Computational Linguistics.
Dan Iter, Reid Pryzant, Ruochen Xu, Shuohang Wang,
Yang Liu, Yichong Xu, and Chenguang Zhu. 2023.
[In-context demonstration selection with cross entropy](https://doi.org/10.18653/v1/2023.findings-emnlp.81)
[difference. In Findings of the Association for Com-](https://doi.org/10.18653/v1/2023.findings-emnlp.81)
_putational Linguistics: EMNLP 2023, pages 1150–_
1162, Singapore. Association for Computational Linguistics.
Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan
Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea
[Madotto, and Pascale Fung. 2023. Survey of halluci-](https://doi.org/10.1145/3571730)
[nation in natural language generation. ACM Comput-](https://doi.org/10.1145/3571730)
_ing Surveys, 55(12):1–38._
Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego
de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud,
Marie-Anne Lachaux, Pierre Stock, Teven Le Scao,
Thibaut Lavril, Thomas Wang, Timothée Lacroix,
[and William El Sayed. 2023. Mistral 7b.](http://arxiv.org/abs/2310.06825)
Albert Q. Jiang, Alexandre Sablayrolles, Antoine
Roux, Arthur Mensch, Blanche Savary, Chris
Bamford, Devendra Singh Chaplot, Diego de las
Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, MarieAnne Lachaux, Pierre Stock, Sandeep Subramanian,
Sophia Yang, Szymon Antoniak, Teven Le Scao,
Théophile Gervet, Thibaut Lavril, Thomas Wang,
[Timothée Lacroix, and William El Sayed. 2024. Mix-](http://arxiv.org/abs/2401.04088)
[tral of experts.](http://arxiv.org/abs/2401.04088)
Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao
Fu, Kyle Richardson, Peter Clark, and Ashish Sab[harwal. 2023. Decomposed prompting: A modular](https://openreview.net/forum?id=_nGgzQjzaRy)
[approach for solving complex tasks. In The Eleventh](https://openreview.net/forum?id=_nGgzQjzaRy)
_International Conference on Learning Representa-_
_tions._
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yu[taka Matsuo, and Yusuke Iwasawa. 2022. Large lan-](https://openreview.net/forum?id=e2TBb5y0yFf)
[guage models are zero-shot reasoners. In Advances](https://openreview.net/forum?id=e2TBb5y0yFf)
_in Neural Information Processing Systems._
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate
[Kushman, and Hannaneh Hajishirzi. 2016. MAWPS:](https://doi.org/10.18653/v1/N16-1136)
[A math word problem repository. In Proceedings of](https://doi.org/10.18653/v1/N16-1136)
_the 2016 Conference of the North American Chapter_
_of the Association for Computational Linguistics: Hu-_
_man Language Technologies, pages 1152–1157, San_
Diego, California. Association for Computational
Linguistics.
Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang,
Jason Eisner, Tatsunori Hashimoto, Luke Zettle[moyer, and Mike Lewis. 2023a. Contrastive decod-](https://doi.org/10.18653/v1/2023.acl-long.687)
[ing: Open-ended text generation as optimization. In](https://doi.org/10.18653/v1/2023.acl-long.687)
_Proceedings of the 61st Annual Meeting of the As-_
_sociation for Computational Linguistics (Volume 1:_
_Long Papers), pages 12286–12312, Toronto, Canada._
Association for Computational Linguistics.
[Xiaonan Li and Xipeng Qiu. 2023. MoT: Memory-of-](https://doi.org/10.18653/v1/2023.emnlp-main.392)
[thought enables ChatGPT to self-improve. In Pro-](https://doi.org/10.18653/v1/2023.emnlp-main.392)
_ceedings of the 2023 Conference on Empirical Meth-_
_ods in Natural Language Processing, pages 6354–_
6374, Singapore. Association for Computational Linguistics.
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen,
[Jian-Guang Lou, and Weizhu Chen. 2023b. Making](https://aclanthology.org/2023.acl-long.291)
[language models better reasoners with step-aware](https://aclanthology.org/2023.acl-long.291)
[verifier. In Proceedings of the 61st Annual Meet-](https://aclanthology.org/2023.acl-long.291)
_ing of the Association for Computational Linguistics_
_(Volume 1: Long Papers), pages 5315–5333, Toronto,_
Canada. Association for Computational Linguistics.
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blun[som. 2017. Program induction by rationale genera-](https://doi.org/10.18653/v1/P17-1015)
[tion: Learning to solve and explain algebraic word](https://doi.org/10.18653/v1/P17-1015)
[problems. In Proceedings of the 55th Annual Meet-](https://doi.org/10.18653/v1/P17-1015)
_ing of the Association for Computational Linguistics,_
_ACL 2017, Vancouver, Canada, July 30 - August 4,_
_Volume 1: Long Papers, pages 158–167. Association_
for Computational Linguistics.
Tengxiao Liu, Qipeng Guo, Yuqing Yang, Xiangkun
Hu, Yue Zhang, Xipeng Qiu, and Zheng Zhang. 2023.
[Plan, verify and switch: Integrated reasoning with](https://doi.org/10.18653/v1/2023.emnlp-main.169)
[diverse X-of-thoughts. In Proceedings of the 2023](https://doi.org/10.18653/v1/2023.emnlp-main.169)
_Conference on Empirical Methods in Natural Lan-_
_guage Processing, pages 2807–2822, Singapore. As-_
sociation for Computational Linguistics.
Jianqiao Lu, Wanjun Zhong, Wenyong Huang, Yufei
Wang, Qi Zhu, Fei Mi, Baojun Wang, Weichao Wang,
Xingshan Zeng, Lifeng Shang, Xin Jiang, and Qun
[Liu. 2024. Self: Self-evolution with language feed-](http://arxiv.org/abs/2310.00533)
[back.](http://arxiv.org/abs/2310.00533)
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler
Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon,
Nouha Dziri, Shrimai Prabhumoye, Yiming Yang,
Shashank Gupta, Bodhisattwa Prasad Majumder,
Katherine Hermann, Sean Welleck, Amir Yazdan[bakhsh, and Peter Clark. 2023. Self-refine: Itera-](https://proceedings.neurips.cc/paper_files/paper/2023/file/91edff07232fb1b55a505a9e9f6c0ff3-Paper-Conference.pdf)
[tive refinement with self-feedback. In Advances in](https://proceedings.neurips.cc/paper_files/paper/2023/file/91edff07232fb1b55a505a9e9f6c0ff3-Paper-Conference.pdf)
_Neural Information Processing Systems, volume 36,_
pages 46534–46594. Curran Associates, Inc.
Potsawee Manakul, Adian Liusie, and Mark Gales. 2023.
[SelfCheckGPT: Zero-resource black-box hallucina-](https://doi.org/10.18653/v1/2023.emnlp-main.557)
[tion detection for generative large language models.](https://doi.org/10.18653/v1/2023.emnlp-main.557)
In Proceedings of the 2023 Conference on Empiri_cal Methods in Natural Language Processing, pages_
9004–9017, Singapore. Association for Computational Linguistics.
[Sean O’Brien and Mike Lewis. 2023. Contrastive de-](http://arxiv.org/abs/2309.09117)
[coding improves reasoning in large language models.](http://arxiv.org/abs/2309.09117)
[OpenAI. 2023. GPT-4 technical report.](http://arxiv.org/abs/2303.08774)
Bhargavi Paranjape, Scott Lundberg, Sameer Singh,
Hannaneh Hajishirzi, Luke Zettlemoyer, and
[Marco Tulio Ribeiro. 2023. Art: Automatic multi-](http://arxiv.org/abs/2303.09014)
[step reasoning and tool-use for large language mod-](http://arxiv.org/abs/2303.09014)
[els.](http://arxiv.org/abs/2303.09014)
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
[2021. Are NLP models really able to solve simple](https://doi.org/10.18653/v1/2021.naacl-main.168)
[math word problems? In Proceedings of the 2021](https://doi.org/10.18653/v1/2021.naacl-main.168)
_Conference of the North American Chapter of the_
_Association for Computational Linguistics: Human_
_Language Technologies, pages 2080–2094, Online._
Association for Computational Linguistics.
[Subhro Roy and Dan Roth. 2015. Solving general arith-](https://doi.org/10.18653/v1/d15-1202)
[metic word problems. In Proceedings of the 2015](https://doi.org/10.18653/v1/d15-1202)
_Conference on Empirical Methods in Natural Lan-_
_guage Processing, EMNLP 2015, Lisbon, Portugal,_
_September 17-21, 2015, pages 1743–1752. The As-_
sociation for Computational Linguistics.
Bilgehan Sel, Ahmad Al-Tawaha, Vanshaj Khattar,
Ruoxi Jia, and Ming Jin. 2023. [Algorithm of](http://arxiv.org/abs/2308.10379)
[thoughts: Enhancing exploration of ideas in large](http://arxiv.org/abs/2308.10379)
[language models.](http://arxiv.org/abs/2308.10379)
KaShun Shum, Shizhe Diao, and Tong Zhang. 2023.
[Automatic prompt augmentation and selection with](http://arxiv.org/abs/2302.12822)
[chain-of-thought from labeled data.](http://arxiv.org/abs/2302.12822)
Kaya Stechly, Matthew Marquez, and Subbarao Kamb[hampati. 2023. Gpt-4 doesn’t know it’s wrong: An](http://arxiv.org/abs/2310.12397)
[analysis of iterative prompting for reasoning prob-](http://arxiv.org/abs/2310.12397)
[lems.](http://arxiv.org/abs/2310.12397)
Hongjin Su, Jungo Kasai, Chen Henry Wu, Weijia Shi,
Tianlu Wang, Jiayi Xin, Rui Zhang, Mari Ostendorf,
Luke Zettlemoyer, Noah A. Smith, and Tao Yu. 2023.
[Selective annotation makes language models better](https://openreview.net/forum?id=qY1hlv7gwg)
[few-shot learners. In The Eleventh International Con-](https://openreview.net/forum?id=qY1hlv7gwg)
_ference on Learning Representations._
Jiankai Sun, Chuanyang Zheng, Enze Xie, Zhengying
Liu, Ruihang Chu, Jianing Qiu, Jiaqi Xu, Mingyu
Ding, Hongyang Li, Mengzhe Geng, Yue Wu, Wenhai Wang, Junsong Chen, Zhangyue Yin, Xiaozhe
Ren, Jie Fu, Junxian He, Wu Yuan, Qi Liu, Xihui
Liu, Yu Li, Hao Dong, Yu Cheng, Ming Zhang,
Pheng Ann Heng, Jifeng Dai, Ping Luo, Jingdong
Wang, Ji-Rong Wen, Xipeng Qiu, Yike Guo, Hui
[Xiong, Qun Liu, and Zhenguo Li. 2024a. A survey](http://arxiv.org/abs/2312.11562)
[of reasoning with foundation models.](http://arxiv.org/abs/2312.11562)
Qiushi Sun, Zhirui Chen, Fangzhi Xu, Kanzhi
Cheng, Chang Ma, Zhangyue Yin, Jianing Wang,
Chengcheng Han, Renyu Zhu, Shuai Yuan, Qipeng
Guo, Xipeng Qiu, Pengcheng Yin, Xiaoli Li, Fei
Yuan, Lingpeng Kong, Xiang Li, and Zhiyong
[Wu. 2024b. A survey of neural code intelligence:](http://arxiv.org/abs/2403.14734)
[Paradigms, advances and beyond.](http://arxiv.org/abs/2403.14734)
Qiushi Sun, Zhangyue Yin, Xiang Li, Zhiyong Wu,
[Xipeng Qiu, and Lingpeng Kong. 2023. Corex: Push-](http://arxiv.org/abs/2310.00280)
[ing the boundaries of complex reasoning through](http://arxiv.org/abs/2310.00280)
[multi-model collaboration.](http://arxiv.org/abs/2310.00280)
Tianxiang Sun, Xiaotian Zhang, Zhengfu He, Peng Li,
Qinyuan Cheng, Xiangyang Liu, Hang Yan, Yunfan Shao, Qiong Tang, Shiduo Zhang, et al. 2024c.
Moss: An open conversational large language model.
_Machine Intelligence Research, pages 1–18._
Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung,
Aakanksha Chowdhery, Quoc Le, Ed Chi, Denny
[Zhou, and Jason Wei. 2023. Challenging BIG-bench](https://aclanthology.org/2023.findings-acl.824)
[tasks and whether chain-of-thought can solve them.](https://aclanthology.org/2023.findings-acl.824)
In Findings of the Association for Computational Lin_guistics: ACL 2023, pages 13003–13051, Toronto,_
Canada. Association for Computational Linguistics.
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and
[Jonathan Berant. 2019. CommonsenseQA: A ques-](https://doi.org/10.18653/v1/N19-1421)
[tion answering challenge targeting commonsense](https://doi.org/10.18653/v1/N19-1421)
[knowledge. In Proceedings of the 2019 Conference](https://doi.org/10.18653/v1/N19-1421)
_of the North American Chapter of the Association for_
_Computational Linguistics: Human Language Tech-_
_nologies, Volume 1 (Long and Short Papers), pages_
4149–4158, Minneapolis, Minnesota. Association for
Computational Linguistics.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier
Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal
Azhar, Aurelien Rodriguez, Armand Joulin, Edouard
[Grave, and Guillaume Lample. 2023a. Llama: Open](http://arxiv.org/abs/2302.13971)
[and efficient foundation language models.](http://arxiv.org/abs/2302.13971)
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, et al.
[2023b. Llama 2: Open foundation and fine-tuned](http://arxiv.org/abs/2307.09288)
[chat models.](http://arxiv.org/abs/2307.09288)
Jianing Wang, Qiushi Sun, Xiang Li, and Ming Gao.
2024. [Boosting language models reasoning with](http://arxiv.org/abs/2306.06427)
[chain-of-knowledge prompting.](http://arxiv.org/abs/2306.06427)
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le,
Ed H. Chi, Sharan Narang, Aakanksha Chowdhery,
[and Denny Zhou. 2023. Self-consistency improves](https://openreview.net/forum?id=1PL1NIMMrw)
[chain of thought reasoning in language models. In](https://openreview.net/forum?id=1PL1NIMMrw)
_The Eleventh International Conference on Learning_
_Representations._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le,
[and Denny Zhou. 2022. Chain of thought prompt-](https://openreview.net/forum?id=_VjQlMeSB_J)
[ing elicits reasoning in large language models. In](https://openreview.net/forum?id=_VjQlMeSB_J)
_Advances in Neural Information Processing Systems._
Yuxi Xie, Kenji Kawaguchi, Yiran Zhao, Xu Zhao,
Min-Yen Kan, Junxian He, and Qizhe Xie. 2023.
[Self-evaluation guided beam search for reasoning.](https://openreview.net/forum?id=Bw82hwg5Q3)
In Thirty-seventh Conference on Neural Information
_Processing Systems._
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran,
Thomas L. Griffiths, Yuan Cao, and Karthik
[Narasimhan. 2023. Tree of Thoughts: Deliberate](http://arxiv.org/abs/2305.10601)
[problem solving with large language models.](http://arxiv.org/abs/2305.10601)
[Xi Ye and Greg Durrett. 2023. Explanation selection](https://doi.org/10.18653/v1/2023.emnlp-main.41)
[using unlabeled data for chain-of-thought prompting.](https://doi.org/10.18653/v1/2023.emnlp-main.41)
In Proceedings of the 2023 Conference on Empiri_cal Methods in Natural Language Processing, pages_
619–637, Singapore. Association for Computational
Linguistics.
Zhangyue Yin, Qiushi Sun, Cheng Chang, Qipeng
Guo, Junqi Dai, Xuanjing Huang, and Xipeng Qiu.
[2023a. Exchange-of-thought: Enhancing large lan-](https://aclanthology.org/2023.emnlp-main.936)
[guage model capabilities through cross-model com-](https://aclanthology.org/2023.emnlp-main.936)
[munication. In Proceedings of the 2023 Conference](https://aclanthology.org/2023.emnlp-main.936)
_on Empirical Methods in Natural Language Process-_
_ing, pages 15135–15153, Singapore. Association for_
Computational Linguistics.
Zhangyue Yin, Qiushi Sun, Qipeng Guo, Jiawen Wu,
[Xipeng Qiu, and Xuanjing Huang. 2023b. Do large](https://aclanthology.org/2023.findings-acl.551)
[language models know what they don’t know? In](https://aclanthology.org/2023.findings-acl.551)
_Findings of the Association for Computational Lin-_
_guistics: ACL 2023, pages 8653–8665, Toronto,_
Canada. Association for Computational Linguistics.
Zhangyue Yin, Qiushi Sun, Qipeng Guo, Zhiyuan Zeng,
Xiaonan Li, Tianxiang Sun, Cheng Chang, Qinyuan
Cheng, Ding Wang, Xiaofeng Mou, Xipeng Qiu, and
[Xuanjing Huang. 2024. Aggregation of reasoning:](https://aclanthology.org/2024.lrec-main.53)
[A hierarchical framework for enhancing answer se-](https://aclanthology.org/2024.lrec-main.53)
[lection in large language models. In Proceedings of](https://aclanthology.org/2024.lrec-main.53)
_the 2024 Joint International Conference on Computa-_
_tional Linguistics, Language Resources and Evalua-_
_tion (LREC-COLING 2024), pages 609–625, Torino,_
Italia. ELRA and ICCL.
Fei Yu, Hongbo Zhang, Prayag Tiwari, and Benyou
[Wang. 2023. Natural language reasoning, a survey.](http://arxiv.org/abs/2303.14725)
Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021.
[BARTScore: Evaluating generated text as text gener-](https://openreview.net/forum?id=5Ya8PbvpZ9)
[ation. In Advances in Neural Information Processing](https://openreview.net/forum?id=5Ya8PbvpZ9)
_Systems._
Jiajie Zhang, Shulin Cao, Tingjian Zhang, Xin Lv,
Juanzi Li, Lei Hou, Jiaxin Shi, and Qi Tian. 2023a.
[Reasoning over hierarchical question decomposition](https://doi.org/10.18653/v1/2023.acl-long.814)
[tree for explainable question answering. In Proceed-](https://doi.org/10.18653/v1/2023.acl-long.814)
_ings of the 61st Annual Meeting of the Association for_
_Computational Linguistics (Volume 1: Long Papers),_
pages 14556–14570, Toronto, Canada. Association
for Computational Linguistics.
Kun Zhang, Jiali Zeng, Fandong Meng, Yuanzhuo
Wang, Shiqi Sun, Long Bai, Huawei Shen, and Jie
Zhou. 2024. Tree-of-reasoning question decomposition for complex question answering with large
language models. In Proceedings of the AAAI Con_ference on Artificial Intelligence, volume 38, pages_
19560–19568.
Yifan Zhang, Jingqin Yang, Yang Yuan, and Andrew
[Chi-Chih Yao. 2023b. Cumulative reasoning with](http://arxiv.org/abs/2308.04371)
[large language models.](http://arxiv.org/abs/2308.04371)
[Yiming Zhang, Shi Feng, and Chenhao Tan. 2022. Ac-](https://doi.org/10.18653/v1/2022.emnlp-main.622)
[tive example selection for in-context learning. In Pro-](https://doi.org/10.18653/v1/2022.emnlp-main.622)
_ceedings of the 2022 Conference on Empirical Meth-_
_ods in Natural Language Processing, pages 9134–_
9148, Abu Dhabi, United Arab Emirates. Association
for Computational Linguistics.
Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex
[Smola. 2023c. Automatic chain of thought prompt-](https://openreview.net/forum?id=5NTt8GFjUHkr)
[ing in large language models. In The Eleventh Inter-](https://openreview.net/forum?id=5NTt8GFjUHkr)
_national Conference on Learning Representations._
Chuanyang Zheng, Zhengying Liu, Enze Xie, Zhenguo
[Li, and Yu Li. 2023. Progressive-hint prompting](https://arxiv.org/abs/2304.09797)
[improves reasoning in large language models. ArXiv](https://arxiv.org/abs/2304.09797)
_preprint, abs/2304.09797._
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei,
Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Claire Cui, Olivier Bousquet, Quoc V Le, and Ed H.
[Chi. 2023. Least-to-most prompting enables com-](https://openreview.net/forum?id=WZH7099tgfM)
[plex reasoning in large language models. In The](https://openreview.net/forum?id=WZH7099tgfM)
_Eleventh International Conference on Learning Rep-_
_resentations._
**A** **Statistics and Details of Datasets**
For our experimental analysis, we carefully selected a diverse set of 14 datasets, encompassing
the domains of arithmetic reasoning, commonsense
reasoning, and symbolic reasoning. In Table 2, we
comprehensively detail each dataset, including its
source, type of answers it contains, the count of
Chain-of-Thought (CoT) (Wei et al., 2022) prompt
exemplars used, the size of the test sample, and the
applicable licenses.
**B** **Implementation Details**
**Baseline Implementation.** In our main experiments, we employ three baselines: Zero-Shot
CoT (Kojima et al., 2022), CoT (Wei et al., 2022),
and ComplexCoT (Fu et al., 2023b). For ZeroShot CoT, we append “Let’s think step by step”
after each question. For both CoT and ComplexCoT, we adhere to their original prompting exemplars. To maintain consistency, we standardize
the prompt format in ComplexCoT to match CoT,
replacing “Question:” and “Answer:” with “Q:”
and “A:”, respectively. The details of prompts
used are listed in Table 2. While CoT and ComplexCoT have the same number of prompts, each
example in ComplexCoT encompasses more intermediate steps. The experimental outcomes for
CD and DoLA were sourced from Chuang et al.
(2023) and O’Brien and Lewis (2023). For comparison, we use the LLaMA-1 model (Touvron et al.,
2023a) and sample 20 reasoning chains for self-consistency (Wang et al., 2023), aligning with their
settings.
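
For concreteness, the prompt formats described above can be sketched as follows; the helper functions and the toy exemplar are ours, not taken from the released code or the original exemplar sets.

```python
def zero_shot_cot_prompt(question: str) -> str:
    # Zero-Shot CoT: append the trigger phrase after the question.
    return f"Q: {question}\nA: Let's think step by step."

def few_shot_cot_prompt(question: str, exemplars: list[tuple[str, str]]) -> str:
    # CoT / ComplexCoT: worked examples, standardized to the "Q:" / "A:" format.
    blocks = [f"Q: {q}\nA: {a}" for q, a in exemplars]
    blocks.append(f"Q: {question}\nA:")
    return "\n\n".join(blocks)

toy_exemplar = [("There are 3 apples and you eat 1. How many are left?",
                 "There are 3 apples. Eating 1 leaves 3 - 1 = 2. The answer is 2.")]
print(few_shot_cot_prompt("What is 4 + 5?", toy_exemplar))
```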
Figure 7: Weight analysis of Relevance and Originality
on GSM8K and StrategyQA dataset.
**Generation Settings** In our experimental setup, we adapt the generation temperature τ for different tasks and baseline models. For the LLaMA model, Zero-Shot-CoT shows optimal performance at lower temperatures, specifically within τ ∈ [0.1, 0.5]. This contrasts with CoT and ComplexCoT, which perform better at higher temperatures (τ > 0.5), consistent with Xie et al. (2023). For the Mistral model, a temperature within τ ∈ [0.3, 0.7] yields more favorable results. This variance in optimal temperature settings can be attributed to differences in model architecture and the distribution of the training data.

In scenarios involving multiple reasoning paths, we set the sampling temperature to 0.7 and the number of reasoning chains to 5 to explore the reasoning space thoroughly. A majority voting mechanism is employed after answer generation for the final decision (Wang et al., 2023). We note that some special tokens, such as the “muffins” in Figure 1, exhibit high uncertainty, likely due to infrequent usage, but we did not consider these as reasoning errors if their uncertainty exceeded the threshold θ.

For different datasets, we follow the Auto-CoT setup (Zhang et al., 2023c) with varying cluster numbers: k = 8 for mathematical reasoning, while k = 4 was used uniformly for symbolic reasoning. In commonsense reasoning, the cluster count was set to 7 for CSQA and StrategyQA, and to 5 for BoolQ and ARC-c. To ensure precise answer extraction and mitigate evaluation errors, as discussed in Section 5.3, an exemplar was incorporated in the initial stages for the Mistral-7B and LLaMA models.

Hardware utilization included a single RTX4090 for running the LLaMA-7B and Mistral-7B models, two RTX4090s for LLaMA-13B, and two A100s for LLaMA-70B and Mistral-8x7B. In the multiple reasoning chains scenario, employing multiple sampling effectively reduces the randomness associated with a single run. The UAG method is implemented using PyTorch and Transformers, with Copilot and ChatGPT assisting in code writing and debugging.
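
Read concretely, the multiple-reasoning-paths setting above amounts to sampling several chains and majority-voting over their parsed answers. The sketch below assumes stand-in `sample_chain` and `extract_answer` functions; it is not the authors' code.

```python
from collections import Counter

def self_consistent_answer(question, sample_chain, extract_answer,
                           n_chains=5, temperature=0.7):
    """Sample n_chains reasoning chains and majority-vote on their answers."""
    answers = []
    for _ in range(n_chains):
        chain = sample_chain(question, temperature=temperature)  # one sampled CoT
        answers.append(extract_answer(chain))                    # parse the final answer
    # Majority voting (Wang et al., 2023): the most common answer wins.
    return Counter(answers).most_common(1)[0][0]
```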
|DATASET|REASONING TASK|ANSWER FORMAT|# EX.|# EVAL.|LICENSE|
|---|---|---|---|---|---|
|GSM8K (Cobbe et al., 2021)|Arithmetic|Number|8|1,319|MIT License|
|MultiArith (Roy and Roth, 2015)|Arithmetic|Number|8|600|Unspecified|
|SingleEq (Koncel-Kedziorski et al., 2016)|Arithmetic|Number|8|508|Unspecified|
|AddSub (Hosseini et al., 2014)|Arithmetic|Number|8|395|Unspecified|
|SVAMP (Patel et al., 2021)|Arithmetic|Number|8|1,000|MIT License|
|AQUA (Ling et al., 2017)|Arithmetic|Multi-choice|4|254|Apache-2.0|
|StrategyQA (Geva et al., 2021)|Commonsense|T/F|6|2,290|MIT license|
|CommonsenseQA (Talmor et al., 2019)|Commonsense|Multi-choice|7|1,221|Unspecified|
|BoolQ (Clark et al., 2019)|Commonsense|T/F|4|3,270|CC BY-SA 3.0|
|ARC-c (Clark et al., 2018)|Commonsense|Multi-choice|4|299|CC BY-SA 4.0|
|Date Understanding (Suzgun et al., 2023)|Symbolic|Multi-choice|3|250|MIT license|
|Penguins in a Table (Suzgun et al., 2023)|Symbolic|Multi-choice|3|146|MIT license|
|Colored Objects (Suzgun et al., 2023)|Symbolic|Multi-choice|3|250|MIT license|
|Object Counting (Suzgun et al., 2023)|Symbolic|Multi-choice|3|250|MIT license|
Table 2: Comprehensive statistics of datasets utilized in our experiments. # EX. indicates the number of Chain-of-Thought (CoT) (Wei et al., 2022) prompting exemplars used for few-shot prompting. # EVAL. denotes the total count of evaluation samples in each dataset.
Figure 8: Threshold Analysis on GSM8K dataset.

Figure 9: Threshold Analysis on StrategyQA dataset.
**Hyperparameters.** We set the respective weights of relevance and originality, λ1 and λ2, to 0.5 and 0.5, respectively; the ablation experiments for this setting are shown in Section 5.3, which we analyze in more detail in Appendix C.1. In the main experiment, we set the threshold θ to 16, and we analyzed the different threshold selections in Appendix C.2. In the clustering phase, we use text-embedding-3-large to obtain the corresponding embeddings. We use the NLL loss to reflect the uncertainty of the model and use each model's respective tokenizer to get the number of generated tokens. Benefiting from the uncertainty assessment, we are able to first complete inference on a batch of confident samples. These samples are used as examples to bootstrap questions that are interrupted due to high uncertainty, similar to active learning, allowing our approach to be applied in test-question-only scenarios.
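
A minimal sketch of the uncertainty check implied by this description; whether the NLL is summed or averaged over the generated tokens is not stated, so the summed form below is an assumption on our part.

```python
import math

def sequence_uncertainty(token_logprobs):
    # Negative log-likelihood of the generated tokens (assumed summed, not averaged).
    return -sum(token_logprobs)

def needs_exemplar(token_logprobs, theta=16.0):
    # Generations whose uncertainty exceeds theta are interrupted and later
    # re-answered with bootstrapped exemplars; confident ones are kept.
    return sequence_uncertainty(token_logprobs) > theta

print(needs_exemplar([math.log(0.9)] * 20))  # confident chain -> False
print(needs_exemplar([math.log(0.3)] * 20))  # uncertain chain -> True
```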
**C** **Extended Analysis**
**C.1** **Further Analysis of Relevance and Originality**
In Figure 7, we examine the influence of relevance
and originality weights on performance, utilizing
the Mistral-7B model. We maintain the constraint
λ1 + λ2 = 1, where λ1 varies from 0 to 1, denoting an increasing emphasis on relevance. Our findings
indicate that an increment in λ1 corresponds with
a gradual improvement in model performance, underscoring the positive role of relevance in enhancing model reasoning. However, beyond λ1 > 0.6,
there is a notable decline in performance, suggesting an overreliance on correlation and the detrimental impact of reasoning errors from similar
samples. The model exhibits optimal performance
when λ1 = λ2 = 0.5, striking a balance between
relevance and originality. This balanced setting is
adopted for our subsequent experiments, reflecting
its effectiveness in optimizing model performance.
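
The weighting scheme analysed here reduces to a convex combination of the two scores. In the sketch below, how relevance and originality are computed is left abstract (the appendix does not restate it), and the names are ours.

```python
def exemplar_score(relevance, originality, lam1=0.5, lam2=0.5):
    # Convex combination under the constraint lam1 + lam2 = 1.
    assert abs(lam1 + lam2 - 1.0) < 1e-9, "weights must sum to 1"
    return lam1 * relevance + lam2 * originality

# Sweep lam1 from 0 to 1 as in Figure 7 (lam2 = 1 - lam1).
for lam1 in (0.0, 0.2, 0.4, 0.5, 0.6, 0.8, 1.0):
    print(lam1, exemplar_score(0.8, 0.6, lam1=lam1, lam2=1.0 - lam1))
```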
**C.2** **Threshold θ**
In Figure 8 and Figure 9, we delve into the influence of threshold θ on the performance in the
GSM8K and StrategyQA datasets. The histograms
display the distribution of samples requiring exemplar introduction at various thresholds. The
accompanying curves illustrate the corresponding
accuracy at each threshold level. It is observed
that the model’s accuracy escalates with an in
Figure 10: Error Identification and Correction on GSM8K dataset. UAG exhibits comparable error identification capabilities to ChatGPT.
Figure 11: Performance comparison of UAG and
various Decoding Enhancement Methods on GSM8K
dataset. Using LLaMA-1 backbone (Touvron et al.,
2023a), UAG consistently enhances performance across
models of different parameter sizes.
crease in θ, peaking before a rapid decline once θ exceeds 16. To understand this phenomenon, we examine the frequency of samples meeting the criteria of Eq. 7 at varying thresholds, depicted through a histogram. This analysis reveals a decrease in the number of samples satisfying Eq. 7 as θ ascends. A lower θ entails a larger subset of samples undergoing reasoning adjustment, which, given that many samples are correctly inferred initially, could inadvertently introduce errors. Conversely, a higher θ may fail to identify samples with reasoning errors, gradually behaving like an exemplar-free Zero-Shot-CoT approach (Kojima et al., 2022) and resulting in a marked degradation in performance.
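
The histogram part of Figures 8 and 9 can be reproduced by a simple sweep; since Eq. 7 itself is not restated in this appendix, a plain "uncertainty exceeds θ" criterion stands in for it below.

```python
def samples_requiring_exemplars(uncertainties, thetas):
    # For each threshold, count how many samples would receive exemplars.
    return {theta: sum(u > theta for u in uncertainties) for theta in thetas}

thetas = [7.5, 10.0, 12.5, 15.0, 16.0, 17.5, 20.0, 22.5, 25.0]
print(samples_requiring_exemplars([4.2, 9.1, 18.3, 26.7, 15.9], thetas))
```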
**C.3** **Correlation between Reasoning Errors and Model Uncertainty**
In Figure 1, we observe a correlation between
reasoning errors and model uncertainty, a phenomenon previously substantiated by various studies (Yuan et al., 2021; Fu et al., 2023a; Manakul
et al., 2023). To delve deeper, we analyze 100
error-containing samples from the GSM8K and
StrategyQA datasets, identified through labeled correct answers, as illustrated in Figure 10 and Figure 12. These samples are processed using both ChatGPT and our UAG method for error localization and correction. Employing the reasoning process generated by Mistral-7B, we utilize gpt-3.5-turbo-1106, following the method outlined in Xie et al. (2023). ChatGPT exhibits a substantial error identification rate, correctly pinpointing 71% and 83% of errors in the datasets and successfully amending 57% and 69%, respectively. UAG demonstrates a comparable proficiency, correctly identifying 60% and 81% of the errors and effecting corrections in 43% and 55% of the cases. Notably, UAG capitalizes on the model's inherent uncertainty for judgment, obviating the need for reevaluation. This approach not only significantly reduces computational overhead but also remains versatile across models of varying parameter sizes.

Figure 12: Error Identification and Correction on StrategyQA dataset. UAG exhibits comparable error identification capabilities to ChatGPT.

Figure 13: Performance comparison of UAG and various Decoding Enhancement Methods on StrategyQA dataset. Using LLaMA-1 backbone (Touvron et al., 2023a), UAG consistently enhances performance across models of different parameter sizes.
Figure 14: Performance Comparison in the Multiple Reasoning Chains Scenario. Adhering to the configuration outlined by O'Brien and Lewis (2023), we utilize the LLaMA-65B backbone (Touvron et al., 2023a).

**C.4** **Comparison to Decoding Enhancement Methods**

In Section 2, we discuss a variety of reasoning
enhancement methods. Due to experimental constraints, it was impractical to compare our approach
with all notable methods. Consequently, we select
two representative decoding enhancement methods
for comparison: Contrastive Decoding (CD) (Li
et al., 2023a) and Decoding by Contrasting Layers (DoLA) (Chuang et al., 2023). Utilizing the
LLaMA-1 backbone (Touvron et al., 2023a), with
baseline results sourced from Chuang et al. (2023),
Figure 11 and Figure 13 illustrate that UAG consistently outperforms the DoLA method across all
model sizes on both the GSM8K and StrategyQA
datasets. O’Brien and Lewis (2023) highlight that
meticulous hyperparameter selection can notably
enhance the performance of CD methods in reasoning tasks. In Figure 14, our comparison with
these carefully tuned CD results reveals that UAG
not only demonstrates comparable performance but
also exhibits a distinct edge in commonsense reasoning tasks. This comparison underscores UAG’s
ability to achieve significant performance improvements across a broader array of tasks.
| [
"Xiaonan, Li",
"Zhangyue, Yin",
"Qipeng, Guo",
"Vivek, Srikumar",
"Qiushi, Sun",
"Zhiyuan, Zeng",
"Xipeng, Qiu",
"Junqi, Dai",
"Xuanjing, Huang",
"Qinyuan, Cheng",
"Lun-Wei, Ku",
"Andre, Martins"
] | 2024-08-01T00:00:00 | ACL 2024 Long Papers | true | 0 | 0 | null | https://aclanthology.org/2024.acl-long.131 | null | https://www.semanticscholar.org/paper/e0c7da7c849f6daa94055081d5209275c227c207 |
Reasoning or Spurious Correlations? Applying transformers to propositional logic | We experiment with a transformer model (BART) to investigate its capabilities for learning reasoning in propositional logic from data. Previous work has highlighted the pitfalls when trying to solve this with a classifier: it tends to learn spurious correlations of the dataset, not reasoning. Here, we augment the data with proof steps, and demonstrate that generative models trained fine-tuned on proofs better approximate logical reasoning, also on out-of-distribution data. | null | # Reasoning or Spurious Correlations? Applying transformers to propositional logic
Daniel Enström[1], Viktor Kjellberg[1], and Moa Johansson[2]
1 University of Gothenburg, Gothenburg, Sweden. {gusensda, guskjevia}@student.gu.se
2 Chalmers University of Technology, Gothenburg, Sweden. [email protected]
**Abstract**
We experiment with a transformer model (BART) to investigate its capabilities for
learning reasoning in propositional logic from data. Previous work has highlighted the
pitfalls when trying to solve this with a classifier: it tends to learn spurious correlations of
the dataset, not reasoning. Here, we augment the data with proof steps, and demonstrate
that generative models fine-tuned on proofs better approximate logical reasoning,
also on out-of-distribution data.
## 1 Introduction
Language models are now being applied to tasks beyond pure generation of text [1, 9]. The
ability to reason logically, and produce proofs is one such task [10]. However, it is not always
clear if this emerging functionality really corresponds to a model having learnt logical reasoning,
or if it is simply picking up on some other pattern in the data, that seemingly allows it to
solve reasoning tasks. It has for instance been shown that inducing large language models to
”reason step by step” (via the prompt or by emitting intermediate steps) improves results on
reasoning tasks [3, 5, 6, 8, 12, 11]. So, why does this step-by-step reasoning work? What are
the features used here, compared to otherwise? When can we be confident that the model
does reasoning, rather than picking up on some other relationship? Prystawski and Goodman
investigate this in the context of Bayesian inference [7]. They find that step-by-step reasoning
works when concepts not appearing close together in the training data can be linked together
by concepts that do. We are interested in studying this in the context of logic reasoning, and
take as a starting point the work by Zhang et al. [13]. They showed that the performance of a
classifier BERT model [2], trained to accurately predict if problems in propositional logic were
satisfiable (or not), turns out to have learnt spurious correlations arising from characteristics of
the problem set (e.g. the number of rules). Faced with problems from a different distribution it
failed miserably. We instead train two generative seq-to-seq BART [4] models in two different
ways: 1) producing a short proof in one go, and 2) sequentially producing the next proof step,
given the problem state. The one-go method gives mixed results, while next-rule prediction
applied in a step-by-step fashion produces near-perfect cross-distribution accuracy.
## 2 Data and Model
The SimpleLogic dataset consists of 860 000 propositional logic problems [13]. Each problem
includes a query literal, a list of facts (positive literals) and a list of rules represented as Horn
clauses with 1 - 3 premises. SimpleLogic problems are divided into three distributions based
on the strategy employed to generate them: Label Priority (LP), Rule Priority (RP), and a
balanced version of RP (RP b). RP for instance, has a potentially spurious feature where
the number of rules is higher for queries that hold, while RP b was designed to remove this
correlation. For our experiments, we augmented these datasets with proofs in order to train and
evaluate the models. Using the augmented dataset, we then trained two models starting from
a pre-trained BART model: the first simply generates a whole candidate proof string in one go,
and is called Whole Proof-BART (WP-BART), while the second is based on a neuro-symbolic
architecture, and is called Symbolic Iterative Proof-BART (SIP-BART). The SIP-BART model
was designed following the idea of chain-of-thought prompting for logical tasks [6, 12]. Here,
the BART model is trained to produce the next proof step in an iterative manner resembling
forward-chaining, which is then passed to a symbolic module that processes the generated output to create a new input state, which is passed back to the neural part. The process stops when
either the proof is complete, or the search space has been exhausted. We hypothesise that
these methods would force the models to learn more relevant features, and avoid short-cutting
reasoning by learning spurious correlations between the problem presentation and the truth
value of the query.
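
A minimal sketch of the SIP-BART control loop described above. The problem encoding (a set of facts, Horn rules as premise/conclusion pairs, and a query) only loosely follows the SimpleLogic format, `model_next_step` is a stand-in for the fine-tuned BART generator, and failing outright on an invalid step is our simplification; none of the names come from the authors' code.

```python
def sip_prove(facts, rules, query, model_next_step, max_steps=50):
    """Alternate between the neural part (propose the next rule) and the
    symbolic part (validate it and update the state), forward-chaining style."""
    facts = set(facts)
    known_rules = {(frozenset(p), c) for p, c in rules}
    for _ in range(max_steps):
        if query in facts:
            return True                       # proof complete
        step = model_next_step(facts, rules, query)
        if step == "FALSE":
            return False                      # model claims the query is unprovable
        premises, conclusion = step
        if (frozenset(premises), conclusion) not in known_rules:
            return False                      # "non-existing rule" error
        if not set(premises) <= facts:
            return False                      # "inapplicable rule" error
        facts.add(conclusion)                 # new state is passed back to the model
    return False
```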
Three versions of each model were trained, one for each of the subsets LP, RP and RP b,
and then tested on test sets from each to detect how well each model generalised. Results are
reported in relation to problem depth. The minimum depth of 0 means that no rule is required
to solve the problem. Deeper problems require a longer chain of rules that are dependent on
other rules in order to prove the truth-value of the query. In SimpleLogic, the maximum depth
of a problem is 6.
## 3 Results and Conclusions
The models are evaluated on the accuracy of the generated label (is the query true or false).
We first compare our models with that of Zhang et al. [13] to assess if training also on proofs
improve out of distribution performance. We here simply compare the models’ accuracy on
determining whether a problem is satisfiable or not, as the model in [13] does not produce
proofs. Results are shown in Tables 1-3 in Appendix A.
The mean accuracy of WP-BART did not improve from the result achieved by Zhang et al.
However, there was an improvement on out of distribution (OOD) problems requiring longer
proofs. Our results suggest that WP-BART does not pick up spurious statistical features to
the same extent. It performs somewhat better OOD, and also does show less variability going
from problems with shallow to deep proofs, and also less variability when generalizing to other
distributions, as seen in Table 2 in Appendix A. It also shows similar accuracies on RP and RP b
in contrast to the models trained by Zhang et al.
SIP-BART on the other hand, achieved a near-perfect accuracy across all distributions, and
only did marginally worse on OOD problems, as seen in Table 3 in Appendix A. The accuracy is
high even on deeper problems, with a minimum accuracy of 98.7% on depth six. This step-by-step approach seems able to generalize well to other distributions, which suggests that it is able to approximate reasoning better than the other model variants. Furthermore, the consistency of the proofs generated by SIP-BART is almost perfect. The few errors that occur can be
divided into four different types, which all relate to the fact that we are using a pre-trained
transformer model for natural language on a reasoning task:
- Non-existing Rule: The neural part produces a rule that does not exist in the problem
description. This may happen because it accidentally replaces a word for a synonym, or
misses a premise of a rule.
- Inapplicable Rule: The model has mistaken the conclusion of a rule for a fact, likely as
facts are listed immediately after the rule in the input string.
- Unexhausted Search Space: The model prematurely concludes that the query is false,
before exhausting the search space.
- Spurious Match: The model proves the wrong query by mixing it up with a synonymous
word.
To summarise, the effect of learning spurious correlations for determining the validity of a
propositional logic problem, identified by Zhang et al. [13], seems to be reduced by training the
model on not only the problem description, but also on associated proofs. Already WP-BART,
which is trained on whole proofs, seems to perform better out of distribution. For SIP-BART,
the effect of these spurious correlations all but disappears, as evident in its capability to solve
the problems of bigger depths. While we cannot fully rule out other unknown correlations,
our results are consistent with prior work on step-by-step reasoning, and the hypothesis put
forward in [7]: that adding step-wise inferences helps the model learn how to connect also distant concepts in the training data. We also identify four new types of errors that still occur in SIP-BART. All are related to using a pre-trained transformer language model to do reasoning; none would appear in a symbolic theorem prover. We believe these types of errors are worth
being aware of, as transformers are increasingly being applied to natural language tasks also
involving logical reasoning.
**Acknowledgement**
The computations and storage of data were enabled by resources provided by the National
Academic Infrastructure for Supercomputing in Sweden (NAISS) at Chalmers Centre for Computational Science and Engineering (C3SE), partially funded by the Swedish Research Council
through grant agreement no. 2022-06725.
## References
[1] Denny Britz, Anna Goldie, Minh-Thang Luong, and Quoc V Le. Massive exploration of neural
machine translation architectures. arXiv preprint arXiv:1703.03906, 2017.
[2] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep
bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of
_the North American Chapter of the Association for Computational Linguistics: Human Language_
_Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota, June_
2019. Association for Computational Linguistics.
[3] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large
language models are zero-shot reasoners. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and
Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022.
[4] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer
Levy, Veselin Stoyanov, and Luke Zettlemoyer. BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In Proceedings of the
_58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880, Online,_
July 2020. Association for Computational Linguistics.
[5] Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David
Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. Show your work: Scratchpads for intermediate computation with language models,
2021.
[6] Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving,
2020.
[7] Ben Prystawski and Noah D. Goodman. Why think step-by-step? Reasoning emerges from the
locality of experience, 2023.
[8] Markus N. Rabe, Dennis Lee, Kshitij Bansal, and Christian Szegedy. Mathematical reasoning via
self-supervised skip-tree training. arXiv: Learning, 2020.
[9] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
[10] David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematical
reasoning abilities of neural models. In International Conference on Learning Representations,
2019.
[11] Oyvind Tafjord, Bhavana Dalvi, and Peter Clark. ProofWriter: Generating implications, proofs,
and abductive statements over natural language. In Findings of the Association for Computa_tional Linguistics: ACL-IJCNLP 2021, pages 3621–3634, Online, August 2021. Association for_
Computational Linguistics.
[12] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc
Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models, 2022.
[13] Honghua Zhang, Liunian Harold Li, Tao Meng, Kai-Wei Chang, and Guy Van den Broeck. On
the paradox of learning to reason from data, 2022.
## Appendix A
|Train|Test|0|1|2|3|4|5|6|Mean|
|---|---|---|---|---|---|---|---|---|---|
|RP|RP|99.8|100.0|99.4|98.9|98.6|96.9|95.9|98.5|
|RP|LP|99.9|99.9|99.0|94.3|83.8|65.6|50.0|84.7|
|RP|RP b|99.2|99.2|98.6|98.0|96.6|93.9|89.1|96.4|
|LP|RP|97.4|92.5|64.5|60.2|67.6|72.6|69.9|75.0|
|LP|LP|99.8|99.8|99.8|99.6|98.8|97.2|95.4|98.6|
|LP|RP b|97.7|93.3|60.2|56.7|63.9|68.7|68.5|72.7|
|RP b|RP|99.8|99.9|99.5|98.9|98.6|97.9|96.9|98.8|
|RP b|LP|99.7|99.4|99.3|96.4|87.6|72.6|57.2|87.5|
|RP b|RP b|99.6|99.5|99.0|98.4|98.0|96.7|94.1|97.9|
Table 1: Accuracies from the Zhang et al. [13] BERT model. The integers refer to the depth
of the ground-truth proof. Mean is the average across all depths.
|Train|Test|0|1|2|3|4|5|6|Mean|
|---|---|---|---|---|---|---|---|---|---|
|LP|LP|100.0|100.0|92.6|90.2|89.8|91.2|93.3|93.9|
|LP|RP|100.0|99.9|83.3|65.5|67.3|72.0|76.5|80.6|
|LP|RP b|100.0|99.9|82.1|64.9|66.3|74.0|83.0|81.4|
|RP|LP|84.7|85.4|79.2|73.9|71.6|68.5|63.4|75.2|
|RP|RP|84.3|88.5|87.9|87.6|85.0|79.8|78.1|84.5|
|RP|RP b|87.7|88.3|88.8|87.7|85.4|80.7|79.7|85.5|
|RP b|LP|84.1|94.3|89.6|85.1|80.9|76.6|71.6|83.2|
|RP b|RP|84.0|94.2|94.3|92.2|89.3|84.8|82.2|88.7|
|RP b|RP b|87.2|93.6|94.1|93.3|88.6|86.8|85.7|89.9|
Table 2: Accuracies from WP-BART. The integers refer to the depth of the ground-truth proof.
Mean is the average across all depths.
|Train|Test|0|1|2|3|4|5|6|Mean|
|---|---|---|---|---|---|---|---|---|---|
|LP|LP|99.9|99.9|99.8|99.9|99.5|99.6|99.5|99.7|
|LP|RP|100.0|99.9|99.7|99.2|99.1|99.3|98.7|99.4|
|LP|RP b|100.0|99.8|99.7|99.0|99.2|99.3|99.2|99.4|
|RP|LP|100.0|99.8|99.7|99.4|98.8|98.7|98.7|99.3|
|RP|RP|100.0|100.0|99.9|99.6|99.6|99.7|99.5|99.7|
|RP|RP b|100.0|100.0|99.8|99.6|99.4|99.6|99.7|99.7|
|RP b|LP|100.0|99.8|99.7|99.4|99.0|98.9|98.7|99.4|
|RP b|RP|99.9|100.0|99.9|99.6|99.7|99.6|99.6|99.8|
|RP b|RP b|100.0|100.0|99.8|99.5|99.5|99.7|99.7|99.7|
Table 3: Accuracies from SIP-BART trained on the different distributions. The integers refer
to the depth of the ground-truth proof. Mean is the average accuracy across all depths.
-----
| [
"Moa, Johansson",
"Daniel, Enstrom",
"Viktor, Kjellberg"
] | 2023-01-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
RecT: A Recursive Transformer Architecture for Generalizable Mathematical Reasoning. | N/A | null | null | [
"Rohan, Deshpande",
"Jerry, Chen",
"Isabelle, Lee"
] | 2021-01-01T00:00:00 | null | false | 0 | 0 | null | http://ceur-ws.org/Vol-2986/paper13.pdf | null | null |
Recognizing algebraic properties from multiplication tables | We study the problem of recognizing algebraic properties (such as commutativity, associativity, latin square property and self-distributivity) from multiplication tables by means of neural networks. We achieve reasonable accuracy but the problem is highly sensitive to the structure of training data. Our analysis sheds some light on the question of what exactly are neural networks learning and on the problem of how to make neural networks learn what we want them to learn (rather than a proxy property). This is work in progress—more details will be reported at the time of the conference. | null | # Recognizing algebraic properties from multiplication tables
Sujoy Mukherjee[1], Daniel Scofield[2], and Petr Vojtěchovský[3][∗]
1 University of Denver, Denver, Colorado, U.S.A. [email protected]
2 Francis Marion University, Florence, South Carolina, U.S.A. [email protected]
3 University of Denver, Denver, Colorado, U.S.A. [email protected]
**Abstract**
We study the problem of recognizing algebraic properties (such as commutativity, associativity, latin square property and self-distributivity) from multiplication tables by means
of neural networks. We achieve reasonable accuracy but the problem is highly sensitive
to the structure of training data. Our analysis sheds some light on the question of what
exactly are neural networks learning and on the problem of how to make neural networks learn what we want them to learn (rather than a proxy property). This is work in
progress—more details will be reported at the time of the conference.
## 1 General introduction
Research into applications of machine learning and artificial intelligence has increased dramatically in the past ten years, enabled by the availability of computing power and the accessibility
of software packages, etc. Many attempts have been made to solve problems in pure mathematics using these methods [5]. Davies et al. proposed a general framework for guiding human
math intuition using AI [1].
It has been observed that machine learning models may achieve high levels of accuracy by
exploiting spurious correlations or artifacts in training data [4], or may discover “shortcut” rules
that succeed within the realm of experimental test/train data but fail to generalize to simple
examples from a slightly different distribution [2]. Thus caution is warranted when considering
results from ML experiments.
## 2 Recognizing algebraic properties
We are interested in using ML to recognize certain algebraic properties from multiplication
tables.
A given algebraic property P typically implies many other properties which, unlike P, might
be easy to glean from simple statistical tests. (For instance, groups contain an identity element
and their multiplication tables are latin squares.) If the implied properties are not carefully
controlled for during training and validation, it seems likely that ML will learn to recognize
some of the consequences of P rather than P itself.
He and Kim [3] report that ML can classify with a high degree of accuracy whether multiplication tables belong to certain finite groups. In particular, they deduce that ML can learn to
recognize associativity.
∗Supported by the Simons Foundation Mathematics and Physical Sciences Collaboration Grant for Mathematicians no. 855097
Being somewhat pessimistic about such claims, we set out to investigate the situation more
systematically on small multiplication tables. Here is a sampling of our observations for the
case of latin squares:
- As a base case, given two disjoint sets A and B of random multiplication tables (of a fixed size), ML does not seem to be able to learn membership in A.

- Training ML on multiplication tables that are either latin squares or are far from being latin squares results in a fairly accurate recognition (exceeding 95 percent) of the latin property.

- But the model trained as above has a high rate of false positives when tested against multiplication tables that are close to being latin squares (which we can think of as hard counterexamples).

- The rate of false positives improves dramatically when the training data consists only of latin squares and multiplication tables that are close to being latin squares. Interestingly, models so trained continue to perform very well on recognizing latin squares among general multiplication tables, despite never being exposed to general multiplication tables during training.
This suggests a general strategy for the recognition of a property P by ML, in which the
training data consists of multiplication tables satisfying P and of multiplication tables that are
close (in some sense) to satisfying P . We will report on the results for various choices of P, such
as commutativity, associativity and self-distributivity. We will also comment on some previously
used training techniques (such as permuting the rows, columns and symbols of multiplication
tables by three independent permutations) which are not necessarily mathematically sound,
sometimes in a subtle way.
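
To make the notion of tables that are "close to" satisfying P concrete for the latin square case, one possible construction is sketched below; the perturbation scheme is our own illustration, not necessarily the one used in these experiments.

```python
import random

def is_latin_square(table):
    # Every row and every column must be a permutation of {0, ..., n-1}.
    n = len(table)
    symbols = set(range(n))
    return (all(set(row) == symbols for row in table) and
            all({table[r][c] for r in range(n)} == symbols for c in range(n)))

def near_latin_square(table, k=1):
    # Hard negative: overwrite k entries of a latin square at random
    # (with small probability an overwrite leaves the table unchanged).
    n = len(table)
    perturbed = [row[:] for row in table]
    for _ in range(k):
        r, c = random.randrange(n), random.randrange(n)
        perturbed[r][c] = random.randrange(n)
    return perturbed

cyclic = [[(r + c) % 4 for c in range(4)] for r in range(4)]  # table of Z/4Z
print(is_latin_square(cyclic), is_latin_square(near_latin_square(cyclic, k=2)))
```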
## References
[1] Davies, A., Veličković, P., Buesing, L. et al., Advancing mathematics by guiding human intuition
_with AI, Nature 600, 70–74 (2021)._
[2] Geirhos, R., Jacobsen, JH., Michaelis, C. et al. Shortcut learning in deep neural networks, Nat Mach
Intell 2, 665–673 (2020).
[3] Yang-Hui He and Minhyong Kim, Learning algebraic structures: Preliminary investigations, International Journal of Data Science in the Mathematical Sciences 01, no. 01, 3–22 (2023)
[4] Lapuschkin, S., Wäldchen, S., Binder, A. et al., Unmasking Clever Hans predictors and assessing
_what machines really learn, Nat Commun 10, 1096 (2019)._
[5] Williamson, G., Is deep learning a useful tool for the pure mathematician?, Bulletin of the American
Mathematical Society, Volume 61, Number 2, April 2024, Pages 271–286.
-----
| [
"Sujoy, Mukherjee",
"Daniel, Scofield",
"Petr, Vojtˇechovsky"
] | 2024-09-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
Recursive Introspection: Teaching Foundation Model Agents How to Self-Improve | A central piece in enabling intelligent agentic behavior in foundation models is to make them capable of introspecting upon their behavior, to reason and correct their mistakes. Even strong proprietary large language models (LLMs) do not exhibit the ability of continually improving their responses sequentially, even in scenarios where they are explicitly told that they are making a mistake. In this paper, we develop $\textbf{RISE}$: $\textbf{R}$ecursive $\textbf{I}$ntro$\textbf{s}$p$\textbf{e}$ction, an approach for fine-tuning LLMs to introduce this ability. Our approach prescribes an iterative fine-tuning procedure, which attempts to teach the model how to alter its response after having seen previously unsuccessful attempts to solve a problem with additional environment feedback. RISE poses fine-tuning for a single-turn problem as solving a multi-turn Markov decision process (MDP), where the initial state is the prompt. Inspired by principles in online imitation learning, we derive effective strategies to dictate multi-turn data collection and training so as to imbue in an LLM the capability to recursively detect and correct its previous mistakes in subsequent iterations. Our experiments show that $\textbf{RISE}$ enables 7B Llama2 and Mistral models to improve themselves with more turns on math reasoning tasks, outperforming several single-turn strategies given an equal amount of inference-time computation. Our analysis shows that RISE makes meaningful improvements to responses to arrive at the correct solution for challenging prompts, without disrupting one-turn abilities. | null | null | null | null | NeurIPS 2024 | true | 0 | 0 | null | https://neurips.cc/virtual/2024/poster/96089 | null | null |
Reinforcement Learning for Interactive Theorem Proving | N/A | null | Eindhoven University of Technology
BACHELOR
Reinforcement Learning for Interactive Theorem Proving
Creating an Artificial Student
Cottaar, Jolijn
Award date:
2020
[Link to publication](https://research.tue.nl/en/studentTheses/17ead00d-c586-4c11-b681-dfe2e93ff6ef)
Disclaimer
This document contains a student thesis (bachelor's or master's), as authored by a student at Eindhoven University of Technology. Student
theses are made available in the TU/e repository upon obtaining the required degree. The grade received is not published on the document
as presented in the repository. The required complexity or quality of research of student theses may vary by program, and the required
minimum study period may vary in duration.
General rights
Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners
and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights
-----
Department of Mathematics and Computer Science
## Reinforcement Learning for Interactive Theorem Proving
### Creating an Artificial Student
Jolijn Cottaar
Supervisors:
dr. J.W. Portegies
dr. C. Hojny
Eindhoven, July 2020
-----
# Abstract
In this thesis we have created an artificial student, who learns to provide mathematical proofs. The
learning is based on the Reinforcement Learning algorithms Sarsa, Q-Learning and Epsilon Soft. The
artificial student uses the interactive theorem prover Coq and it is only able to prove intuitionistic
propositional logic lemmas.
We laid the theoretical basis using a Markov Decision Process for our student. Subsequently we
created a prototype for the artificial student. And finally we have compared the algorithms and various
other variables.
We have done limited testing to compare certain aspects of our program. We have seen
that the Epsilon Soft algorithm learns most quickly from all the algorithms in the first 15 episodes. Due
to runtime problems we are not able to show what happens in episodes later on. Similarly we have found
that the Name state space is best at finding equivalent states with changing proposition names. A low
epsilon seems to have the effect of increasing the average number of actions in an episode, but
also making sure more episodes end in a completed proof, while a high epsilon does the opposite. In
reward functions we have found that reward functions which give a positive reward for a completed proof
have the best effect on the learning.
-----
# Contents
**Contents** **iii**
**1** **Introduction** **1**
**2** **A Proof in Coq** **3**
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.2 Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.3 Tactics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.4 An Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
**3** **Markov Decision Process** **6**
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
3.2 The State Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
3.3 The Action Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3.4 The Reward Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3.5 Transition function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3.6 Value Functions and Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.7 An Episode of the Markov Decision Process . . . . . . . . . . . . . . . . . . . . . . . . . . 9
**4** **Reinforcement Learning Algorithms** **11**
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
4.2 Branches of Reinforcement Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
4.2.1 Epsilon Soft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
4.2.2 Sarsa and Q-learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
**5** **Implementation** **13**
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
5.2 Parts of Oasis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
**6** **State Spaces** **15**
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
6.2 Elements for States to Be Equivalent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
6.3 State Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
6.3.1 Simple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
6.3.2 Name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
6.3.3 Match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
**7** **Results** **17**
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
7.2 Comparison of the algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
7.3 Comparison of Possible Epsilons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
7.4 Comparison of State Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
7.5 Comparison of Reward Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
**8** **Discussion** **22**
**9** **Conclusions** **23**
-----
**Bibliography** **24**
-----
## Chapter 1
# Introduction
Proving lemmas and theorems is seen as the hardest part of any mathematics course. For first-year math
students it is always a shock to the system to have to learn something entirely new that is so hard. Thus
it is seen as one of the stumbling blocks of the first year of mathematics.
At the same time professors have little time to spend on education, since the pressure to publish
and gain funding is ever increasing. To help both sides of this problem we took the first step in the
development of an artificial teacher helping out with teaching students to prove. Additionally we have
researched how reinforcement learning can contribute in this area.
In this project we used the interactive theorem prover Coq [1]. In a previous mathematics bachelor
project, Beurskens [10] has made the first steps of making Coq more accessible and easier to understand
for students. Continuing on this, a group for computer science and mathematics double majors have
created the program Waterproof [4] which enables students to create readable and understandable proofs
using Coq. This program is already used in courses at the TU/e.
We focused on a subset of mathematics, namely the intuitionistic propositional logic. Intuitionistic
propositional logic differs from the more commonly used classic propositional logic on a few points.
Intuitionistic logic does not contain the Aristotelian law of excluded middle (A ∨¬A) or the classical
law of double negation elimination (¬¬A → _A)._ It does however contain the law of contradiction
((A → B) → ((A → ¬B) → ¬A)) and ex falso sequitur quodlibet (¬A → (A → B)) [16] [15].
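Both of the retained laws can be verified constructively; as a small illustration (ours, in Lean 4 rather than the Coq used in this thesis):

```lean
-- Law of contradiction, provable without classical axioms.
example (A B : Prop) : (A → B) → ((A → ¬B) → ¬A) :=
  fun hab hanb ha => hanb ha (hab ha)

-- Ex falso sequitur quodlibet.
example (A B : Prop) : ¬A → (A → B) :=
  fun hna ha => absurd ha hna
```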
The main idea of our project is to create an artificial student which uses Coq to learn how to prove
certain lemmas. To mimic the learning of a ‘real’ student most closely we look at the artificial learning
algorithms known as Reinforcement Learning [18]. Reinforcement Learning is mostly based on learning
from experience.
The overall goal is to create an artificial teacher, which can help the artificial student and ultimately a real-life student. This artificial student is the first step on that road.

Parallel to this project we have worked on a prototype artificial teacher, for which I refer to the thesis of my colleague S. McCarren [14].
**Literature Study and Earlier Work**
There has been considerable work done on using reinforcement learning in combination with interactive theorem provers.
For example, TacticToe [19] is an automated tactical prover. It is built on top of the interactive theorem prover HOL4 [5], created by Cambridge University. It uses Monte Carlo Tree Search to solve theorems concerning classical first order logic.
E Prover [7] is an automated theorem prover that uses deep neural networks to solve classical first-order theorems. Automated theorem provers do not depend on an interactive theorem prover, as automated tactical provers do and as our artificial student will.
Brian Groenke [12] in 2018 used Q-Learning, a reinforcement learning algorithm, to create an automated theorem prover that focuses on Core Logic. We will also use Q-Learning in this project.
Kusumoto, Yahata and Sakai [13] have also used Coq, to create an automated tactical prover for intuitionistic propositional logic. This research is very similar to our own; the main difference is that they use deep learning, while we use other reinforcement learning algorithms that do not require neural networks.
-----
**Overview Report**
We start by explaining how a proof is constructed in the interactive theorem prover Coq (Chapter 2). Then we proceed to defining a Markov Decision Process (Chapter 3), which is a vital first step towards being able to use reinforcement learning, after which we explain the three algorithms we use (Chapter 4).

In Chapter 5 we take the step from theory to practice. In Chapter 6 we go a bit further into one aspect of the program, namely how we define states to be equivalent.

In Chapter 7 we compare aspects of our program and end with our discussion (Chapter 8) and conclusion (Chapter 9).
-----
## Chapter 2
# A Proof in Coq
### 2.1 Introduction
Coq [2] is an interactive theorem prover and proof assistant. This means that it can verify mathematical
proofs created by an external source, for example a student. It was created about 30 years ago, and has
been constantly optimized by an active community [1].
In this chapter, we explain those aspects of Coq that contribute to the construction of a proof. The definitions of these aspects are mostly derived from the definitions and explanations found in Coq’Art by Bertot and Casteran [9].
Coq is based on type theory, instead of the more commonly used set theory. Every term that can
be encountered or worked with in Coq has a certain ‘type’, which restricts the operations that may be
performed on it. Most of the types we encounter are Propositions. Types are mentioned a few times in
this chapter, but will not be important later on [17].
The figures of proofs and intermediate steps were created with CoqIDE, an integrated development
environment for easy communication with Coq and constructing proofs [6].
### 2.2 Environments
During a proof, Coq keeps track of certain information about the proof and about the mathematical
world in which the proof is being constructed. There are three main parts in which this information is
contained:
1. Global environment
2. Local environment
3. Goal.
**Global Environment**
The global environment can be defined as follows.
**Definition 2.2.1. Global Environment [9, p. 29]**
A global environment consists of the initial environment, the imported libraries and all global declarations
and definitions made by the user. This will be denoted as E.
To properly understand the definition of the global environment we need the following definition of
a global declaration and definition.
**Definition 2.2.2. Global declarations and definitions [9, p. 29]**
A global declaration adds a certain identifier with a certain type to the global environment. A global
_definition adds a certain identifier with a certain value and a certain type to the environment._
The initial environment contains definitions of certain types and functions. The imported libraries
contain the axioms of the intuitionistic propositional logic.
Both the initial global environment and the imported libraries are static, thus the only way for the
global environment to change is if a global declaration or definition is made. Since we will not use any
global declarations or definitions, the global environment will not change.
-----
**Local Environment**
The local environment can be defined as follows.
**Definition 2.2.3. Local Environment [9, p. 19]**
A local environment consists of all local declarations and definitions made by the user. This will be
denoted as Γ.
**Definition 2.2.4. Local declarations and definitions [9, p. 19]**
A local declaration adds a certain identifier with a certain type to the local environment. A local definition
adds a certain identifier with a certain value and a certain type to the local environment.
The local environment contains all the variables we have defined and the hypotheses containing
these variables we have introduced. This thus contains all the knowledge we already have collected and
hypotheses we already know to be true.
In Figure 2.1 the local environment is everything above the horizontal line.
**Goals**
Finally the last part we look at is the goal.
**Definition 2.2.5. Goal [9, def. 8]**
A goal is a pair of a local environment Γ and a type G that is well-formed in this local environment.
As an example, Figure 2.1 shows a goal in its entirety: the local environment Γ is above the horizontal line, and the well-formed type G = Q ∧ P is beneath it.
Figure 2.1: An example of a goal
During a proof it is possible to have multiple subgoals simultaneously, each containing their individual
local environment. All of the subgoals need to be proven separately to complete the proof. As soon as
a subgoal has been completed it disappears from the list of subgoals, along with its local environment.
An example with multiple subgoals can be seen in Figure 2.2, where there are two subgoals namely
_P and Q, where Q is the active subgoal._
Figure 2.2: An example with multiple subgoals
### 2.3 Tactics
In Coq you have access to a collection of tactics, which are tools to construct a proof.
**Definition 2.3.1. Tactic [9, def. 9]**
A tactic is a command that can be applied to the active goal. The effect is to produce a new (possibly
empty) list of goals or change something in the local environment.
**Definition 2.3.2. Proposition [9, def. 4]**
Every type P whose type is the sort Prop is called a proposition.
-----
**Definition 2.3.3. Hypothesis[9, def. 5]**
A hypothesis is a local declaration of the shape H : P, with H an identifier and P a proposition.
Hypotheses are always in the local environment, and show us what is known to be true during a
proof.
**Definition 2.3.4. Theorems or Lemmas [9, def. 7]**
Global definitions of identifiers with as type a proposition are called theorems or lemmas.
### 2.4 An Example
As an example we have used CoqIDE to create a simple proof. We executed three tactics, and after each tactic a figure was made. The green colored text has already been executed.
intros takes a goal of the form A → _B, and adds A to the local environment Γ and underneath_
the line leaves B. In this case A = ∀P, Q : Prop, P ∧ _Q and thus the tactic introduces P and Q as_
propositions and the hypothesis H : P ∧ _Q. apply H takes the hypothesis named H and uses it to solve_
the goal. Qed concludes every proof and can only be done if there are no more subgoals.
For example in Figure 2.4 the local environment contains three hypotheses P, Q and H. The goal
is P . Right before the Qed tactic is executed, we can see there are no more subgoals and the local
environment is emptied.
Figure 2.3: First step of a proof in Coq
Figure 2.4: Second step of a proof in Coq
Figure 2.5: Third step of a proof in Coq
Figure 2.6: Fourth step of a proof in Coq
-----
## Chapter 3
# Markov Decision Process
### 3.1 Introduction
Most reinforcement learning algorithms are based on a Markov Decision Process. Our Markov Decision
Process is defined as an episodic process. An episodic process means that the Markov Decision process
naturally splits up into episodes, where each episode has a natural beginning and end. For example, if this were a process to teach a virtual student to play chess, each episode would be one game. In our case every episode will be a run-through of a proof.
To properly define a Markov Decision Process we need to specify the following objects
1. The State Space S, a set containing all the possible states.
2. The Action Space A, a set containing all the possible actions.
3. A Reward function.
4. A Transition function.
We also need to look at policies and value functions.
### 3.2 The State Space
The state space contains all possible states, which we have defined as follows.
**Definition 3.2.1. State**
A state is a list of goals (Definition 2.2.5), each consisting of a local environment Γ (Definition 2.2.3) and a well-formed type G. The state will be denoted by {[Γ1 ⊢ G1], ..., [Γn ⊢ Gn]} for n subgoals.
Thus the state space contains all possible combinations of local environments Γ and well-formed types
_G, within the limits of intuitionistic propositional logic._
An example of a state with one subgoal which we can encounter can be seen in Figure 3.1.
Figure 3.1: A state
**Definition 3.2.2. Initial State**
The initial state is the state where the episode starts. This will be denoted by [Γ0 ⊢ G], with Γ0 an empty local environment and G the lemma to be proven.
-----
The initial state is the first state that is sent to the artificial student at the start of a proof. The
local environment is completely empty, since the artificial student has not made any local declarations
or definitions yet. G is the lemma or theorem that has to be proven. Thus the initial state depends only on the theorem, and it will be the same every time a new proof attempt is started when doing multiple run-throughs of one lemma.
We can see the initial state in Figure 3.2, the local environment is empty, the goal is the lemma to
be proven.
Figure 3.2: The initial state
**Definition 3.2.3. Terminating State**
The terminating state is the state where the episode ends.
This will be either the state with no subgoals, denoted by [Γ0 ⊢ ∅], or a state whose action space is empty, which is explained further below. In Figure 3.3 an example of a terminating state can be found.
Figure 3.3: A terminating state after completing a proof
### 3.3 The Action Space
The action space is a set of tactics (Definition 2.3.1), which are the actions available to the artificial
student to try to change the state and reach a terminating state. We chose a certain subset of the
available tactics and added some extra tactics of our own, with the goal of making sure all possible
intuitionistic propositional logic theorems can be solved.
In order to do this, we turn to the study of contraction-free Sequent Calculi of Roy Dyckhoff [11].
Dyckhoff explains the need for exactly twelve tactics and he proves that using these twelve tactics the
proof will always terminate. Dyckhoff has shown that any provable theorem can indeed be proven and if
a theorem is not provable these actions should be able to show this. For a more extensive look at these
actions and the proof see the work of McCarren [14, chapter 2].
Thus the action space is a subset of the following twelve actions, with Γ the local environment, G an arbitrary goal, and A, B, C, D propositions:
intro : [Γ ⊢ A → B] =⇒ [A, Γ ⊢ B] (3.1)

assumption : [A, Γ ⊢ A] =⇒ [Γ0 ⊢ ∅] (3.2)

contradiction : [false, Γ ⊢ G] =⇒ [Γ0 ⊢ ∅] (3.3)

split : [Γ ⊢ A ∧ B] =⇒ {[Γ ⊢ A], [Γ ⊢ B]} (3.4)

left : [Γ ⊢ A ∨ B] =⇒ [Γ ⊢ A] (3.5)

right : [Γ ⊢ A ∨ B] =⇒ [Γ ⊢ B] (3.6)

destructand : [A ∧ B, Γ ⊢ G] =⇒ [A, B, Γ ⊢ G] (3.7)

destructor : [A ∨ B, Γ ⊢ G] =⇒ {[A, Γ ⊢ G], [B, Γ ⊢ G]} (3.8)

imply1 : [(A → B), A, Γ ⊢ G] =⇒ [B, A, Γ ⊢ G] (3.9)

imply2 : [(C ∧ D) → B, Γ ⊢ G] =⇒ [C → (D → B), Γ ⊢ G] (3.10)

imply3 : [(C ∨ D) → B, Γ ⊢ G] =⇒ [C → B, D → B, Γ ⊢ G] (3.11)

imply4 : [(C → D) → B, Γ ⊢ G] =⇒ {[D → B, Γ ⊢ C → D], [B, Γ ⊢ G]} (3.12)
-----
The first six of these necessary tactics are already implemented in Coq, where they execute the required alterations to our state. The alterations defined above are the bare necessities each action has to perform to adhere to the rules of Dyckhoff.

Since we have not defined these tactics ourselves, but used the premade tactics provided by Coq, these tactics alter more of the goal than what we showed here. For example, we know for a fact that the tactic assumption first does an intro tactic before doing the above alteration, and some of the others do similar things. Initial testing has shown that for these tactics the extra alterations do not impact the provability of the theorem.
In this process we have replaced the original tactic destruct by two tactics, destructand and destructor. We did this because the extra alterations this tactic does could be a problem. The problem is that destruct needs to be done on a hypothesis in the local environment, and if the artificial student does this action on the wrong hypothesis it is possible to get into a state where the student is stuck. We have made sure with our new destruct tactics that they always choose the right hypothesis and thus adhere to the rules of Dyckhoff.

The last four actions, imply1 through imply4, are created by ourselves to perform the alterations for which no predefined tactics exist yet. These do exactly the alterations Dyckhoff requires.
**Specific Action Spaces**
The action space depends on the state the artificial student is currently in; not all actions are applicable in each state. For example, the tactics that work on a hypothesis in the local environment need to be defined per state. Thus we will refer to the action space as As for the action space in a certain state s, or Ai for the action space at time i, where si is the current state.

We can denote a terminating state s, where the artificial student has no more actions, as As = ∅.
### 3.4 The Reward Function
The reward function R : S × S × A → ℝ assigns the rewards during an episode. Our reward functions will be based on the last state visited, the new state, and the action space of the state.

A reward for a state-action pair at a certain time step i is denoted as R(si, ai) = Ri. A reward is a real-valued number.

The first type of reward function we work with is one where each action generates either a positive or a negative reward. For example, we can use a reward function that gives a reward of +1 to every state-action pair that gives us a new state after doing the action in that particular state, and a reward of −1 to the state-action pair if it generates an error from Coq or if the state does not change.

The second type of reward function gives a reward when an episode is completed. For example, it gives a reward of +1 to the last state-action pair of an episode if that episode actually gives a complete proof. If the artificial student gets stuck in a proof, so there are no more actions that can be done to reach a completed proof, none of the state-action pairs gets a reward. Another option is to give a negative reward to episodes that do not end in a completed proof.
### 3.5 Transition function
The transition function is T : S × A → S. This function is responsible for calculating the next state, based on the previous state and the action done in that state.

The transition function produces the next state by executing the above-mentioned tactics and then returning the new goal, or possibly goals.
-----
### 3.6 Value Functions and Optimization
The artificial student chooses an action during an episode using a policy. This policy π : A × S → [0, 1] calculates, for each action in a certain state, the probability that it is chosen.
The policy used is determined by the chosen algorithm and this will define which actions the student
takes. During these run-throughs the student will learn from the feedback given after the actions. This
feedback will be in the form of a reward.
The goal of having a Markov Decision process and then running a certain reinforcement learning
algorithm is to optimize a policy.
For this we use the expected return from a certain time t onwards. So we define the sum of the rewards after time t as Gt:
_Gt := Rt+1 + Rt+2 + ... + RT_ (3.13)
with Ri the reward at time step i and T as the final time step of the episode.
An additional concept that we will use in some of the policies defined later is discounting. The contribution to the total reward is smaller when the step generating the reward is farther away in the future. So if we are in time step t, then the reward at time step t + 1 is worth more than the reward at t + 6, for example. For this we use the discount factor γ ∈ (0, 1].
We then define the discounted sum of the rewards as:
Gt := Rt+1 + γRt+2 + γ²Rt+3 + ... (3.14)

= Rt+1 + γ(Rt+2 + γRt+3 + ...) (3.15)

= Rt+1 + γGt+1. (3.16)
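As an illustration, the discounted return of Equation 3.16 can be computed by walking backwards through the rewards of an episode. This is a minimal sketch; the function name and the plain list representation of an episode are ours, not taken from Oasis.

```python
def discounted_returns(rewards, gamma=0.9):
    """Compute G_t = R_{t+1} + gamma * G_{t+1} for every time step of an episode.

    rewards[t] is the reward received after the action taken at time step t,
    so returns[t] corresponds to the discounted return G_t of Equation 3.16.
    """
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# Example: three neutral rewards followed by a terminating reward of +1.
print(discounted_returns([0, 0, 0, 1]))  # approximately [0.729, 0.81, 0.9, 1.0]
```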
The state value function for state s under policy π is the expected return when starting in state s and afterwards following policy π. It is denoted by vπ(s). We define the state value function as the expected value of the return, given that we start in state s:

vπ(s) := E[Gt | St = s] (3.17)

= E[ Σ_{k=0}^∞ γ^k Rt+k+1 | St = s ] (3.18)
Another way to look at what the best possible action could be is to look at the expected return of a certain state-action pair. We call this the state-action value function, denoted as qπ(s, a), for taking action a in state s under a certain policy π:

qπ(s, a) := E[Gt | St = s, At = a] (3.19)

= E[ Σ_{k=0}^∞ γ^k Rt+k+1 | St = s, At = a ] (3.20)
We combine the results of these calculations in a Q-matrix. This matrix contains all the values Q(s, a) of the state-action value function for state s and action a.

The goal is of course to find an optimal policy, such that every choice gives the highest possible reward. We define a policy π′ as being better than or equal to another policy π if and only if vπ(s) ≤ vπ′(s) for all s ∈ S. There is not necessarily one unique optimal policy, so we denote them all as π∗. All optimal policies share the same optimal state value function and the same optimal state-action value function, defined as follows:

vπ∗(s) := max_π vπ(s)   ∀ s ∈ S (3.21)

qπ∗(s, a) := max_π qπ(s, a)   ∀ s ∈ S, a ∈ A(s) (3.22)
### 3.7 An Episode of the Markov Decision Process
Each episode of our process consists of a certain number of cycles, as seen in Figure 3.4 for a certain policy π. In a cycle the artificial student starts in the initial state s0 ∈ S and at time 1 takes a certain action a1 ∈ A, given by the policy π. Then the environment calculates the new state T(s0, a1) = s1 ∈ S and the reward R(s0, s1, a1) = R1 ∈ ℝ and sends both to our artificial student. Subsequently the agent chooses action a2 ∈ A, again using the policy, and so forth. At some point the state sent to the artificial student will be the terminating state and this will conclude one episode.
Figure 3.4: One step in the Markov Decision Process
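The cycle of Figure 3.4 can be summarised in a short sketch. The callables and the `is_terminating` method are placeholders for the components defined in this chapter; Oasis spreads these roles over several classes, so this is an illustration rather than its actual code.

```python
def run_episode(initial_state, action_space, policy, transition, reward, max_steps=100):
    """Run one episode of the Markov Decision Process.

    `action_space(s)` returns A_s, `policy(s, A_s)` chooses an action,
    `transition(s, a)` is T(s, a) and `reward(s, s', a)` is R(s, s', a).
    """
    state = initial_state
    history = []                         # the (state, action, reward) triples of this episode
    for _ in range(max_steps):
        actions = action_space(state)    # the action space depends on the current state
        if not actions:                  # empty action space: the student is stuck
            break
        action = policy(state, actions)
        next_state = transition(state, action)
        history.append((state, action, reward(state, next_state, action)))
        state = next_state
        if state.is_terminating():       # no subgoals left: the proof is complete
            break
    return history
```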
-----
## Chapter 4
# Reinforcement Learning Algorithms
### 4.1 Introduction
In this chapter we explain the three algorithms we have implemented. Each algorithm contains a policy
to choose actions and a way to update the Q-matrix, which contains the values of the state-action pairs.
There are three main branches of Reinforcement Learning, namely Dynamic Programming, Monte Carlo methods and Temporal-Difference learning, as described in Chapter 4.2. Most of this information is gathered from the book Reinforcement Learning: An Introduction by Sutton and Barto [18].

In Chapter 4.2.1 we explain the Monte Carlo style algorithm Epsilon Soft and in Chapter 4.2.2 the Temporal-Difference algorithms Sarsa and Q-Learning.
### 4.2 Branches of Reinforcement Learning
Dynamic Programming (DP) has been in development since the ’50s [8]. It is a collection of algorithms
that can be used to find optimal policies given a perfect model of the problem. As seen in Chapter 3
we have defined our state space as an infinite space, namely all possible combinations of goals. Since
we cannot perfectly model an infinite space, we are not able to use any DP algorithms for our artificial
student.
Monte Carlo (MC) methods require only experience to be able to learn. This means it uses sample
sequences of states, actions and rewards from interaction to learn what to do. A model does not have to
be completely defined. The main idea is for the artificial student to try out actions and evaluate them
based on the feedback or rewards they receive. MC methods do not require any previous knowledge of
the environment.
Temporal-Difference (TD) learning combines ideas from Monte Carlo and Dynamic Programming. Similar to Monte Carlo, TD methods can learn from experience, without needing a model of the environment. This is why we can still use this method of reinforcement learning for our model. An advantage that TD has over Monte Carlo methods is that the implementation is more natural: after every step the return of a state-action pair is estimated, instead of having to wait until the end of an episode to be able to calculate it. Similar to Dynamic Programming, TD methods use bootstrapping, which entails that the values of states are estimated in part by the estimates of the states visited afterwards.
**4.2.1** **Epsilon Soft**
The Monte Carlo method we have implemented is named Epsilon Soft [18, p. 101]. The name refers to the policy, and thus to the way actions are chosen. The Q-values are updated using the discounted sum of rewards as explained in Chapter 3.
There is a probability ϵ ∈ [0, 1] which is decided by the environment or user of the program beforehand.
This probability dictates whether the artificial student chooses either one of the following two options:
– With probability ϵ the artificial student chooses a random action in the current state.
– With probability 1 − _ϵ the artificial student chooses one of the actions with the current highest_
_Q-value._
-----
In mathematical terms this looks as follows. The policy π(a|st) calculates the probability that a certain action is chosen in state st at time t, as previously introduced in Chapter 3.6. Let A∗ be defined as the set of actions which have the highest Q-value for state st, i.e. A∗ = arg max_{a ∈ A} Q(st, a). For all a ∈ A(st):

π(a|st) = (1 − ϵ)/|A∗| + ϵ/|A(st)|   if a ∈ A∗

π(a|st) = ϵ/|A(st)|   if a ∉ A∗
After an episode is concluded, the Q-values of the state-action pairs visited in the episode are updated. We use as a return for a certain state-action pair the discounted sum of rewards as defined in Equation 3.16. The Q-value is then updated to the average of all the discounted sums of rewards we have seen for that state-action pair, with the discount factor chosen as γ = 0.9.
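A minimal sketch of the epsilon-soft action choice and of the Monte Carlo averaging update described above. The dictionary-based Q-matrix and the incremental average are our own simplifications; they are not copied from Oasis.

```python
import random
from collections import defaultdict

Q = defaultdict(float)        # the Q-matrix as a dictionary of state-action values
visits = defaultdict(int)     # how often each state-action pair has been updated

def epsilon_soft_action(state, actions, epsilon=0.4):
    """With probability epsilon pick any action, otherwise one of the best-valued actions."""
    if random.random() < epsilon:
        return random.choice(actions)
    best = max(Q[(state, a)] for a in actions)
    return random.choice([a for a in actions if Q[(state, a)] == best])

def monte_carlo_update(episode, gamma=0.9):
    """After an episode, move Q(s, a) towards the average of the observed discounted returns."""
    g = 0.0
    for state, action, reward in reversed(episode):   # episode is a list of (s, a, R) triples
        g = reward + gamma * g                        # discounted return of Equation 3.16
        visits[(state, action)] += 1
        Q[(state, action)] += (g - Q[(state, action)]) / visits[(state, action)]
```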
**4.2.2** **Sarsa and Q-learning**
The other two algorithms we have implemented are Sarsa and Q-learning, which are TD learning algorithms. Both of these use the same policy, i.e. the same way of choosing actions, as the previously discussed Epsilon Soft algorithm. The difference lies in the way the Q-values are updated. As seen in the previous section, the Q-values of Epsilon Soft are updated after every episode. With Sarsa and Q-learning the Q-values are updated after every action taken.
Using Sarsa [18, p. 130], to calculate the new Q-value we also need to look at the next state-action pair that is visited, which is the bootstrapping aspect of the TD style algorithms. To find this state-action pair we again use the Epsilon Soft policy to choose the next action. The new Q-value for state s and action a is then calculated by the following formula, where α is a previously chosen parameter, R is the reward given for that specific action, s′ is the state visited after s, and a′ is the action taken in that state.
_Q(s, a) ⇐_ _Q(s, a) + α(R + γ · Q(s[′], a[′]) −_ _Q(s, a))_ (4.1)
This action a′ is decided using the policy π, and the algorithm also updates π at the same time. Thus Sarsa is an on-policy algorithm.
Q-learning [18, p. 131] is very similar to Sarsa. Instead of using the policy in the next visited state s′ to find the next action a′, and using that Q-value for bootstrapping, we use the maximal Q-value in that specific state s′:

Q(s, a) ⇐ Q(s, a) + α(R + γ · max_{a ∈ A} Q(s′, a) − Q(s, a)). (4.2)
Here the policy π is not used in generating the Q-values and thus Q-Learning is an off-policy method.
We use α = 0.7 as the default value; we have not further examined the impact it has on the workings of the algorithms. This is in general a high α, and thus the following state-action pair has a high impact on the value of the current state-action pair.
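The two TD updates can be written down compactly. As in the previous sketch, Q is assumed to be a mapping with a default value of 0; for a terminating next state the bootstrap term is simply 0. This is an illustrative sketch, not the Oasis implementation.

```python
def sarsa_update(Q, s, a, reward, s_next, a_next, alpha=0.7, gamma=0.9):
    """On-policy update of Equation 4.1: bootstrap on the action actually chosen in s_next."""
    Q[(s, a)] += alpha * (reward + gamma * Q[(s_next, a_next)] - Q[(s, a)])

def q_learning_update(Q, s, a, reward, s_next, next_actions, alpha=0.7, gamma=0.9):
    """Off-policy update of Equation 4.2: bootstrap on the best value available in s_next."""
    best_next = max((Q[(s_next, b)] for b in next_actions), default=0.0)
    Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
```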
-----
## Chapter 5
# Implementation
### 5.1 Introduction
In this chapter we give a short overview of Oasis, the program we created, and of the adaptations we have made to the theory discussed earlier. We have created Oasis in Python, and we communicate with Coq through Sertop, which is part of the SerAPI library [3].

For a much more detailed overview of all the classes and methods we created, see McCarren's thesis [14].
As a first step we implemented the Markov Decision Process described in Chapter 3. We have made
adaptations to certain aspects, in particular the state space. These changes are described below.
### 5.2 Parts of Oasis
**SerAPI Instance**
In this part all the communication possible with Sertop/SerAPI, and thus with Coq, is defined. Herewith
we can send statements to Coq and receive its feedback. These statements can be tactics or queries. A
query for example can be used to ask for the proof context, which is a string containing the goals.
**Student**
In this part we created the artificial student. It uses SerAPI Instance to send its actions to Coq and
then receives back the reward and new state. It incorporates everything to be able to do an episode and
later on to check if the optimal actions actually give a complete proof.
It also contains the policy we have implemented. Since all three algorithms (Epsilon Soft, Sarsa and Q-Learning) use the same policy, we have decided to put the policy, the epsilon-greedy way to choose actions, in this section of the program instead of with the algorithms.
**State Space**
This part controls everything which has to do with the state space. The agent can use this to see if a
state has already been visited previously and it remembers the reward value pairs. The different state
spaces are properly introduced and explained in Chapter 6.
We have defined our state a little differently than introduced in Chapter 3. Since our program cannot yet completely understand what the goal means, a state will initially be a string containing this information. We have improved on the information the program can get from this string (Chapter 6). The user can choose what type of state space is to be used.
**Action Space**
This section of Oasis contains all the tactics we have defined ourselves and an overview of the implemented
ones.
As explained before the action space is dependent on the state we are currently in. Everything to do
with creating this individual action space per state is implemented here.
-----
We have also implemented another form of the action space, where the student has access to a 'simplify' action, which incorporates all the actions that do not need a specific hypothesis to work on. In this report this action space will not be used, but for more information see McCarren's report [14, chapter 5].

An aspect of our action space is the ability to remove actions that give an error from the action space of a specific state. This ensures that an action space is created for each state with only the applicable actions. From this it also follows that if there is no way to complete a proof from a state, the student can notice this and abandon the episode. In our program this ability is a variable and can be chosen to be active or not. If it is not active, a user should be aware that the student can get stuck forever, so a way out should be provided.
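A sketch of how such a per-state action space could be built by simply trying every candidate tactic and dropping the ones Coq rejects. The `CoqError` exception and the `try_tactic` callable are placeholders; in Oasis the check goes through the SerAPI Instance and an attempted tactic would be undone afterwards.

```python
class CoqError(Exception):
    """Stand-in for the error Coq reports when a tactic is not applicable."""

def action_space_for(state, candidate_tactics, try_tactic, remove_errors=True):
    """Build the action space of a state, optionally removing tactics that give an error."""
    actions = []
    for tactic in candidate_tactics:
        if not remove_errors:
            actions.append(tactic)
            continue
        try:
            try_tactic(state, tactic)   # ask Coq whether the tactic applies in this state
            actions.append(tactic)
        except CoqError:
            pass                        # erroring tactics are left out of this state's action space
    return actions
```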
**Reward Space**
The reward space contains our reward functions.
The first reward function, named terminating states, gives a positive reward of +1 to the artificial
student if a state-action pair results in a complete proof. If the episode has to be abandoned due to the
student getting stuck in the proof it gets a reward of −1.
Rterm(t) =

+1 if st = [Γ0 ⊢ ∅]

−1 if the action space of st is empty and st ≠ [Γ0 ⊢ ∅]

0 otherwise
The second reward function, named standard, gives a positive reward of +1 if a state-action pair results in a different state, i.e. if the action changes the state we are in. It gives a negative reward of −1 if the state-action pair does not change the state.

Rstand(t) =

−1 if st = st−1

+1 otherwise
The third reward function, named standard & qed, combines the two previously explained reward functions: it gives a large positive reward of +100 for completing a proof, but also still gives ±1 for changing states or for states staying the same.

Rs&q(t) =

+100 if st = [Γ0 ⊢ ∅]

−1 if st = st−1

+1 otherwise
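The three reward functions translate almost directly into code. The sketch below assumes a state object with an `is_proof_complete` method for the state [Γ0 ⊢ ∅]; the reward sizes are the ones given above, everything else is illustrative rather than the actual Oasis code.

```python
def reward_terminating(state, actions_of_state):
    """Terminating States: only completing or abandoning a proof is rewarded."""
    if state.is_proof_complete():       # the state [Gamma_0 |- empty]: the proof is done
        return 1
    if not actions_of_state:            # stuck: no applicable actions are left
        return -1
    return 0

def reward_standard(state, previous_state):
    """Standard: reward an action that changes the state, penalise one that does not."""
    return -1 if state == previous_state else 1

def reward_standard_and_qed(state, previous_state):
    """Standard & qed: the Standard reward plus a large bonus for completing the proof."""
    if state.is_proof_complete():
        return 100
    return -1 if state == previous_state else 1
```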
**Algorithms**
Here we have implemented the way to update the Q-values, according to the algorithms Epsilon Soft,
Q-Learning and Sarsa introduced and explained in Chapter 4. It also contains all the methods needed
to update our Q-matrix.
-----
## Chapter 6
# State Spaces
### 6.1 Introduction
We have implemented three different types of state spaces in Oasis. In this chapter we first look into the elements that play a key role in deciding whether states are equivalent. Next we look at the three types of state spaces and explain these further.
In Chapter 3 we have given a theoretical definition of our state space. Oasis is not able to understand the content of the goal, so we changed the idea of the state. For our new states we used as a basis the pretty-printed string of the goal that Coq can provide us with. As an example this looks like
_′\nH : A\nH0 : B\nA, B : Prop\n ============================ \nB′_
which corresponds to the state in Figure 6.1.
Figure 6.1: The state corresponding to the pretty printed string
The chosen state space type determines what the state actually looks like and how we decide whether states are equivalent. For example, the state [H : A ∧ B ⊢ A] is equivalent to [H : P ∧ Q ⊢ P] for a human student, but Oasis will not be able to see this if the state is just a string and equivalence is checked by string comparison.
### 6.2 Elements for States to Be Equivalent
There are a few elements that we consider important for states to be equivalent, which we explain further here.
**Proposition names**
The names given to propositions in a lemma should not change the state. Whether a proposition is
named A or P or P123523 should not change the contents of the lemma itself. Thus we would prefer
that proposition names are arbitrary.
**Hypotheses names and order**
The names of hypotheses are chosen by Coq when they are introduced, but these names should not determine whether a state is different. Also the order of the hypotheses in the local environment should not matter.
-----
### 6.3 State Spaces
We implemented three state space options; for each we explain what the state looks like and how we define equivalence between these states.
**6.3.1** **Simple**
The Simple state space is, as the name suggests, the simplest way of looking at the state: it is defined as the string containing the goal given by Coq.

Two states are thus only equivalent if the strings are completely equal. None of the elements of equivalence are covered by this state space. When doing a few episodes with the same lemma this state space is sufficient. However, as soon as multiple lemmas need to be proven by the same student, this state space is no longer adequate.
**6.3.2** **Name**
The Name state space takes the string and uses string parsing to gather the hypothesis names, the proposition names and the relations between them. It creates binary trees containing this information. Then we try to find mappings between the binary trees already in the state space and the one constructed from the current state. If such mappings exist we can conclude equivalence.

For a more in-depth explanation see McCarren's work [14, chapter 3].
**6.3.3** **Match**
In the Match state space we used a feature in Coq called 'match goal'. This method takes a given pattern and tries to match it to the proof context we are working with [9]. The patterns have the form [H1 = . . ., . . ., Hn = . . . ⊢ G]; thus a pattern contains the active goal. If the matching between two states is positive we say the states are equivalent.

In this matching the proposition and hypothesis names and the hypothesis order are arbitrary. The states are thus stored in a form in which they can be matched against each other using the 'match goal' method. This form is created from the string used in the Simple state space, changed using regular string manipulation.
So when the artificial student changes a state during an episode, we check whether there is an equivalent state that we have already visited. We do this by creating a 'match goal' statement with all visited states whose string has the same length as the current state, taking the lengths of proposition names into account. This 'match goal' statement then either returns the equivalent previously visited state, so that we know which state we are in, or it reports that this is a new state.
As an example, consider the following possible state:

[H : A ∧ B ⊢ A] (6.1)

The state would now be defined as:

H : ?A ∧ ?B ⊢ ?A (6.2)

If we use this state in a 'match goal' statement, it matches with any similar state. The question marks in front of the propositions tell Coq that we do not care what the name is, as long as occurrences with the same placeholder carry the same name. For example, it matches with states such as [H : P ∧ Q ⊢ P], but it does not match with [H : P ∧ Q ⊢ Q].
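The translation from a goal string to a 'match goal' pattern can be sketched with a regular expression that puts a question mark in front of every proposition name while keeping hypothesis names untouched. This is a simplified illustration of the string manipulation described above, using an ASCII rendering of the connectives; it is not the actual Oasis code and it ignores the length bookkeeping.

```python
import re

def to_match_pattern(goal):
    """Turn a goal such as 'H : A /\\ B |- A' into the pattern 'H : ?A /\\ ?B |- ?A'."""
    hypotheses, _, conclusion = goal.partition("|-")

    def add_placeholders(formula):
        # prefix every proposition name with '?' so that 'match goal' treats it as arbitrary
        return re.sub(r"\b([A-Za-z]\w*)\b", r"?\1", formula)

    rewritten = []
    for hyp in hypotheses.split(","):
        name, colon, formula = hyp.partition(":")
        rewritten.append(name + colon + add_placeholders(formula) if colon else add_placeholders(hyp))
    return ",".join(rewritten) + "|-" + add_placeholders(conclusion)

print(to_match_pattern("H : A /\\ B |- A"))   # prints 'H : ?A /\ ?B |- ?A'
```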
There are still a few shaky details in this state space. The 'match goal' method is prone to match a simpler goal than the actual state. For example, if we try a 'match goal' statement of the form [⊢ ?A], it will match with pretty much any state instead of the one that is actually equivalent. This would mean there are multiple states to be matched with, and we have no way yet to see which is the correct one. We have tried to reduce how often this happens by requiring that states have the same length, taking into account that the lengths of proposition names can vary.
Another problem is bound to happen if we work with extremely large lemmas. If a state is very long, Coq does not give the entire local environment in the string. This leads to (...) being added to the string, which this state space cannot handle. For now this raises an error, and thus this state space cannot be used for such lemmas.
-----
## Chapter 7
# Results
### 7.1 Introduction
In this chapter we explain the experiments we have done. The goal of these experiments is to compare different aspects of our program, for example the different algorithms and state spaces.

For our results we use a subset of lemmas from the Intuitionistic Logic Theorem Proving (ILTP) library. This library was created to benchmark theorem provers for intuitionistic propositional logic. It also contains non-theorems, which automated theorem provers should be able to recognise as impossible to solve. Oasis is not able to do that, so we use the subset of theorems which can be proven. For a more in-depth look at this database see McCarren's report [14, chapter 7].
This library contains classes of lemmas. Each class contains a certain type of lemma, where each
lemma has a level. A higher level corresponds to a more difficult lemma of that certain type.
We will use the ability to remove actions that give errors from an action space, thus ensuring that
the program will never get into an infinite loop. This results in abandoned episodes. We still count these
episodes when evaluating the effectiveness of an algorithm.
### 7.2 Comparison of the algorithms
In our first experiment we compared our three algorithms: Epsilon Soft, Sarsa and Q-Learning. We compared them using one lemma, letting the artificial student practice it for 15 episodes and repeating this 50 times. For these 15 episodes we looked both at the average number of actions each algorithm needs and at the number of episodes which end in a completed proof.
We used the following conditions:
|Conditions for Comparing Algorithms| |
|---|---|
|Lemma|∀ A, B, C, D : Prop, (((C ∧ (B ∧ A)) ∨ (C → D) ∨ (B → D) ∨ (A → D)) → D) → D (number 43)|
|Algorithm|Epsilon Soft, Sarsa and Q-Learning|
|Epsilon|0.4|
|State Space|Simple|
|Reward Function|Terminating States (5.2)|
|Episodes|15|
|Average|500|
-----
Figure 7.1: Comparison of Epsilon Soft, Sarsa and Q-Learning for one lemma
The result of this can be seen in Figure 7.1.

This experiment implies that Epsilon Soft works best, both in the lowest number of actions and in the most completed proofs. There is a very clear curve where things improve.

There hardly seems to be any learning going on with Sarsa and Q-Learning, which interestingly behave very similarly. Both the average number of actions and the number of completed proofs are fairly constant.
### 7.3 Comparison of Possible Epsilons
In our second experiment we compared different values for the choice of epsilon.
For the algorithm we chose to only look at Epsilon Soft, since this algorithm has the best learning
curve in early episodes.
We use the following conditions:
|Conditions for Comparing Epsilons| |
|---|---|
|Lemma|∀ A, B, C : Prop, (((A ∧ B) ∨ (A → C) ∨ (B → C)) → C) → C (number 42)|
|Algorithm|Epsilon Soft|
|Epsilon|0.1, ..., 0.5|
|State Space|Simple|
|Reward Function|Terminating States (5.2)|
|Episodes|10|
|Average|500|
Again we looked both at the average number of actions needed per episode and at the number of completed proofs.
Figure 7.2: Different epsilons for Epsilon Soft
-----
In Figure 7.2 it seems that a lower epsilon means that more actions are needed per episode, but that more episodes end in a completed proof. A higher epsilon does the opposite.
### 7.4 Comparison of State Spaces
In order to compare the possible state spaces (Simple, Match, Name) we conducted two experiments.
**Experiment 1**
The first experiment has to do with the classes of lemmas explained in the introduction of this chapter. Within a class of lemmas we expected the lemmas to be very similar and, for example, to produce similar states.

We looked again at the number of episodes ending in a proof and the average number of actions needed to end an episode. But we also looked at the average size of the state space at the end of a run-through of an entire class. We hoped this would show whether there actually are equivalent states within a class.
|Conditions for Comparing State Spaces (1)| |
|---|---|
|Lemma|Class number 41 to 46|
|Algorithm|Epsilon Soft|
|Epsilon|0.4|
|State Space|Simple, Match or Name|
|Reward Function|Terminating States (5.2)|
|Episodes|1|
|Average|75|
Figure 7.3: Comparing the different state spaces for one class of lemmas
The results can be seen in Figure 7.3.
We can either conclude from this that our equivalence does not work as hoped, or that while working through a class of lemmas there are not a lot of equivalent states.
-----
**Experiment 2**
The second experiment is about the elements of equivalence explained in Chapter 6, in particular the one concerning the different proposition names.

For this we used one and the same lemma several times, but changed one or more of the proposition names, which does not change the structure of the lemma.

We looked at the average number of actions per episode, the total number of completed proofs and, most importantly, the average size of the state space per episode.
|Conditions for Comparing State Spaces (2)| |
|---|---|
|Lemma|∀ A, B, C, D, E : Prop, (((C ∧ (E ∧ (B ∧ A))) ∨ (C → D) ∨ (E → D) ∨ (B → D) ∨ (A → D)) → D) → D (number 44, different proposition names)|
|Algorithm|Epsilon Soft|
|Epsilon|0.4|
|State Space|Simple, Match or Name|
|Reward Function|Terminating States (5.2)|
|Episodes|1|
|Average|75|
Figure 7.4: Comparing the different state spaces using the same lemma with changing proposition names
The results can be found in Figure 7.4.
These results imply that both the Name and the Match state space find equivalent states.
### 7.5 Comparison of Reward Functions
For this experiment we wanted to see whether our different reward functions actually impact the way the artificial student learns. Since Epsilon Soft seems to work best, we used this algorithm for the comparison.
-----
|Conditions for Comparing Reward Functions| |
|---|---|
|Lemma|∀ A, B, C, D : Prop, (((C ∧ (B ∧ A)) ∨ (C → D) ∨ (B → D) ∨ (A → D)) → D) → D (number 43)|
|Algorithm|Epsilon Soft|
|Epsilon|0.4|
|State Space|Simple|
|Reward Function|Terminating States, Standard and Standard & qed (5.2)|
|Episodes|10|
|Average|100|
Figure 7.5: Comparing the different reward functions
From Figure 7.5 we can see that the Standard reward function has a higher average number of actions for most episodes and fewer completed proofs compared to the other two. Standard & qed seems to be the winner, but the difference with Terminating States is not big.
-----
## Chapter 8
# Discussion
**Python vs. OCaml**
At the beginning of the project we were conflicted about which programming language to choose. In the end we chose to work with Python, because we have more experience with it and more resources are available concerning reinforcement learning. However, we quickly learned that the communication with Coq, through queries to SerAPI/Sertop, takes far too long to be usable for extensive testing and real-life applications.

We would therefore recommend that future research builds directly onto Coq, rather than using an intermediate program to communicate.
**Validity of Results**
The problem of runtime, due to slow communication with SerAPI/Sertop, also forms an obstacle for drawing conclusions based on our results. We are not able to compare our algorithms for, say, 100 episodes, because Sertop shuts down after a certain number of queries.

Thus our results are interesting, but the conclusions are not definitive. To get more information we would need to change the whole communication layer of our program.
**State Spaces**
We have implemented three types of state spaces, all with their own strengths and weaknesses.
The Simple state space is very fast, since the equivalence of states is only based on a simple string
comparison. However, it is not very strong since it cannot find equivalent states between different lemmas,
for example with different proposition names.
The Name state space is very strong in its finding of equivalent states. The problem here is its long
runtime.
The Match state space is able to find equivalent states with different proposition names and hypothesis names. Due to some changes, for example only doing a 'match goal' statement with states of similar length, the runtime has been greatly reduced. But due to our limited knowledge of how the string given by Coq changes for certain lemmas, bugs still crop up now and then. This means this state space is definitely not foolproof.

We think it would be better to be able to communicate with Coq directly, and thus get more complete information about the local environment and goal and create the goal from there.
**Reward Functions and Other Variables**
We have only done a little experimentation with reward functions. We do not know the impact of the choice of reward function or of the size of the rewards.

Apart from the experiment on epsilon, we have not spent much time figuring out the impact of certain variables, namely α in Q-Learning and Sarsa and the discount factor γ in all three algorithms. These variables might have a large impact on the way the student learns.

All of this warrants more research.
-----
## Chapter 9
# Conclusions
We have created an artificial student that uses three algorithms, Epsilon Soft, Sarsa and Q-Learning, to learn to prove lemmas in intuitionistic propositional logic.
The theoretical basis for our student was defined by how the interactive theorem prover Coq works,
the Markov Decision Process and the three algorithms.
The best working algorithm we have implemented is Epsilon Soft. This is a Monte Carlo approach which uses an epsilon-greedy policy to choose actions and the average of discounted reward sums as its state-action values.

In our results we can see that epsilon does have an impact on the learning achievement. A lower epsilon causes more episodes to end in a completed proof, while a higher epsilon causes a lower average number of actions needed per episode. The best working state space representation is the Name state space, based on the binary trees. The Match state space, based on 'match goal' tactics, seems to have some potential.

We have experimented with three reward functions. The two that give a positive reward for a completed proof work better than the one that only rewards changing states.

As a basis for future research, this artificial student can be a first stepping stone towards creating an artificial teacher that can actually help students in real-life situations.
-----
# Bibliography
[1] [Official Website of Coq](https://coq.inria.fr/)

[2] [GitHub repository of Coq](https://github.com/coq/coq)

[3] [GitHub repository of SerAPI](https://github.com/ejgallego/coq-serapi)

[4] [GitHub repository of Waterproof](https://github.com/impermeable/waterproof)

[5] [HOL Interactive Theorem Prover](https://hol-theorem-prover.org/)

[6] [CoqIDE](https://opam.ocaml.org/packages/coqide/)

[7] [The E Theorem Prover, official website](https://wwwlehre.dhbw-stuttgart.de/~sschulz/E/E.html)

[8] Richard Bellman. [The Theory of Dynamic Programming](https://www.rand.org/content/dam/rand/pubs/papers/2008/P550.pdf). 1954.

[9] Yves Bertot and Pierre Castéran. [Interactive Theorem Proving and Program Development, Coq'Art: The Calculus of Inductive Constructions](https://www.springer.com/gp/book/9783540208549). 2004.

[10] T.P.J. Beurskens. [Computer Programs for Analysis](https://research.tue.nl/nl/studentTheses/computer-programs-for-analysis). Eindhoven University of Technology, Department of Mathematics and Computer Science, 2019.

[11] Roy Dyckhoff. [Contraction-Free Sequent Calculi for Intuitionistic Logic](https://www.jstor.org/stable/2275431?seq=1). The Journal of Symbolic Logic, 57(3):795–807, 1992.

[12] Brian Groenke. [Learning to Reason](https://arxiv.org/abs/1810.05315). 2018.

[13] Mitsuru Kusumoto, Keisuke Yahata, and Masahiro Sakai. [Automated Theorem Proving in Intuitionistic Propositional Logic](https://arxiv.org/pdf/1811.00796.pdf).

[14] Sean McCarren. A Student-Teacher Reinforcement Learning Model for Automated Theorem Proving. 2020.

[15] Grigori Mints. [A Short Introduction to Intuitionistic Logic](https://www.springer.com/gp/book/9780306463945). 2000.

[16] Joan Moschovakis. [Intuitionistic Logic](https://plato.stanford.edu/entries/logic-intuitionistic/).

[17] Rob Nederpelt and Herman Geuvers. [Type Theory and Formal Proof: An Introduction](https://www.amazon.com/Type-Theory-Formal-Proof-Introduction/dp/110703650X). 2014.

[18] Richard S. Sutton and Andrew G. Barto. [Reinforcement Learning: An Introduction](https://web.stanford.edu/class/psych209/Readings/SuttonBartoIPRLBook2ndEd.pdf). Second edition, 2018.

[19] Thibault Gauthier, Cezary Kaliszyk, Josef Urban, Ramana Kumar, and Michael Norrish. [TacticToe: Learning to prove with tactics](https://thibaultgauthier.fr/tactictoe_jv.pdf). 2018.
-----
| [
"Jolijn, Cottaar"
] | 2020-01-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
Reinforcement Learning for Interactive Theorem Proving in HOL4 | N/A | null | # Reinforcement Learning for Interactive Theorem Proving in HOL4
Minchao Wu[1], Michael Norrish[12], Christian Walder[12], and Amir Dezfouli[2]
1
Research School of Computer Science
Australian National University, Canberra, ACT, Australia
2
Data61, CSIRO, Canberra, ACT, Australia
We present an interface for reinforcement learning for interactive theorem proving in HOL4.
The interface supports treating HOL4 as an interactive environment for agents to learn to prove
theorems in a tactic style. We also describe in detail our reinforcement learning settings for
the task, including the design of states, rewards and policy networks. We then give preliminary
results demonstrating that theorem proving in HOL4 can be learned with our baseline approach
of reinforcement learning using the interface.
Learning systems for interactive theorem proving have started to appear in recent years.
Among them there are systems for special purposes such as premise selection [2][16] or algebraic
rewriting [11]. There are supervised learning systems designed for general proof search such
as TacticToe [8] for HOL4 [14] and CoqGym [18] for Coq [4]. There are also systems using
deep reinforcement learning for general proof search such as DeepHOL [3] for HOL Light [9].
Our system is designed for general proof search in HOL4. Unlike TacticToe, which learns from
human proof scripts without using deep learning, we use deep reinforcement learning to train
policy networks to predict tactics as well as their arguments. Our system is also different from
DeepHOL in the following aspects.
• The arguments of a tactic can be not only names of theorems, but also HOL4 terms. Like DeepHOL, predictions are made based on the embedded statements (i.e., expressions) of theorems, not their names.

• For tactics that can take more than one argument, an argument is predicted depending not only on the tactic and the context, but also on the previously predicted arguments of the same tactic application. This is because some tactics, such as simp and fs, are sensitive to such dependence.

• The system does not assume a fixed set of tokens in advance. Once the agent is trained, it should be able to handle newly introduced definitions and theorems, which are likely to contain new tokens invented by a user.
Another related implementation of deep reinforcement learning in HOL4 was recently given by Gauthier [7]. That implementation supports reinforcement learning inside HOL4 by implementing basic learning algorithms in Standard ML. On the other hand, our interface supports interaction with HOL4 from within Python and manages proofs on the Python side. The interface is designed in a way that HOL4 theorem proving could be integrated as an actual Gym environment [5]. The environment provides information that can be directly processed by popular machine learning frameworks such as PyTorch [12] or TensorFlow [1].
**Reinforcement learning formulation** A proof attempt in HOL4 can be treated as a game.
A state of the game is what we call a fringe. A fringe contains all the remaining goals of
a proof attempt, along with their corresponding local context. If one thinks of proof search
-----
as a tree with edges being tactic applications and nodes being the resulting set of goals with
their contexts, then the fringe is the union of the unexplored nodes at some stage. A game
is won if the fringe becomes empty within a fixed number of timesteps. The action space can
be arbitrarily large, as we consider a set of selected tactics as well as their arguments, which
can possibly be all the definitions and theorems available in HOL4 or those provided by a user.
During proof search, if a theorem is proved, then it is also added to the candidate pool from
which an argument is chosen. We distinguish certain resulting states of a tactic application
for reward shaping. An action is called ineffective if the tactic application does not change the
goal nor its corresponding context. For an inapplicable or ineffective action, we penalize the
agent by giving it a reward of -2. If an action times out, then the agent receives a reward of -1. If the agent manages to prove the main goal within a fixed number of timesteps, then we give it a positive reward that is sufficient to compensate for the accumulated penalties, so that the total reward of the episode ends up positive. In other situations, it receives a reward of 0.
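A small sketch of this reward shaping, with illustrative names; the completion bonus of 100 matches the value used in the preliminary experiments below, and the remaining cases follow the description above.

```python
def shaped_reward(outcome, proof_complete, completion_bonus=100.0):
    """Reward for one tactic application.

    `outcome` is one of 'inapplicable', 'ineffective', 'timeout' or 'ok';
    the completion bonus is chosen large enough to outweigh the penalties
    accumulated during the episode, so a finished proof ends with a positive return.
    """
    if proof_complete:
        return completion_bonus
    if outcome in ("inapplicable", "ineffective"):
        return -2.0
    if outcome == "timeout":
        return -1.0
    return 0.0
```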
**Policies** Actions are predicted by a combination of three policy networks – a tactic policy for
choosing a tactic, an argument policy for choosing a list of theorem names as the arguments
of the tactic and a term policy for choosing a term if the tactic expects a HOL4 term as its
argument. The tactic policy takes a state as an input, and returns a probability distribution
_πtactic over the possible tactics._ The agent then samples one tactic to apply according to
_πtactic. The argument policy takes additionally the previously predicted argument and a hidden_
variable, and returns the scores s of the candidate theorems and a hidden variable h. An
argument t is then chosen by sampling Softmax(s). Then t and h are passed to the same policy
again to predict the next argument. The hidden variables are computed by an LSTM [10]. The term policy is similar to the argument policy, but the candidates are currently restricted to the tokens occurring in the goal being handled. In our basic settings, the predicted action is applied to the first element in the fringe by default. Backtracking is also not explicitly treated as an action in the basic settings, as it can be expected that the policy networks should learn to avoid unpromising applications by themselves. However, more sophisticated approaches are always possible. For example, we can have an additional value network that scores the states for pruning unpromising actions.
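The wiring of the tactic and argument policies could look roughly as follows in PyTorch. All layer sizes, the single-layer heads and the way candidate theorems are scored are our guesses at a minimal version of the description above; the term policy, which scores tokens of the current goal, is analogous and omitted.

```python
import torch
import torch.nn as nn

class PolicyNetworks(nn.Module):
    """Minimal sketch of the tactic and argument policies (sizes and wiring are illustrative)."""

    def __init__(self, embed_dim, n_tactics, hidden_dim):
        super().__init__()
        self.tactic_head = nn.Linear(embed_dim, n_tactics)       # pi_tactic over the fixed tactic set
        self.arg_cell = nn.LSTMCell(2 * embed_dim, hidden_dim)   # consumes (state, previous argument)
        self.arg_score = nn.Linear(hidden_dim + embed_dim, 1)    # scores one candidate theorem

    def tactic_distribution(self, state_embedding):
        """Probability distribution pi_tactic given the embedded state (fringe)."""
        return torch.softmax(self.tactic_head(state_embedding), dim=-1)

    def next_argument(self, state_embedding, prev_arg_embedding, candidates, hidden=None):
        """Score candidate theorem embeddings (shape: n_candidates x embed_dim) for the next argument."""
        lstm_in = torch.cat([state_embedding, prev_arg_embedding], dim=-1).unsqueeze(0)
        h, c = self.arg_cell(lstm_in, hidden)                    # hidden state carried between arguments
        context = h.expand(candidates.size(0), -1)
        scores = self.arg_score(torch.cat([context, candidates], dim=-1)).squeeze(-1)
        return torch.softmax(scores, dim=0), (h, c)
```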
**Learning algorithms** The policies are trained by policy gradient methods [15]. In our baseline approach, the policy networks are trained jointly using the REINFORCE [17] algorithm.
We also describe the possibility of adding Monte-Carlo Tree Search [6] based on the learned
policies as a policy improvement operator [13].
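For reference, the REINFORCE objective for one episode can be written in a few lines of PyTorch; the discount factor and the plain (unbaselined) returns are assumptions of this sketch rather than details stated above.

```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """Negative of sum_t G_t * log pi(a_t | s_t), to be minimised by gradient descent.

    `log_probs` holds the log-probabilities of the sampled tactic/argument choices
    and `rewards` the shaped rewards collected during the episode.
    """
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    return -(torch.stack(log_probs) * returns).sum()
```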
**Preliminary Experiments** We implement the baseline approach in PyTorch. Preliminary
results are obtained based on the following settings. We train the agent to prove 10 theorems
from the list theory of HOL4. Tactics allowed to be used in the proofs are simp, fs, metis_tac, Induct_on, irule, and strip_tac. For tactics that take theorems as arguments, we only allow the 56 definitions in the list theory to be chosen as the arguments. The length of the
argument list is fixed to 5. Reuse of proved theorems is disabled. That is, the agent always
tries to prove a theorem from scratch. For induction, the argument can be an arbitrary variable
occurring in the goal. One iteration of training contains 10 episodes. Each episode is a proof
attempt of one of the 10 theorems. If a theorem is proved, then the agent gains a reward of
100. Otherwise, the rewards are as described in the above reinforcement learning formulation.
The timeout limit for a single tactic is set to be 0.2 seconds. The timestep limit for a single
-----
(a) Average total rewards received in each iteration. (b) Average steps to find a proof in each iteration.
Figure 1: Peformances in terms of total rewards and timesteps.
proof attempt is 20. We train the agent for 400 iterations and compare its performance against
random rollouts with the same settings. It can be seen from Table 1 and Figure 1 that the agent
is able to prove more theorems as the training goes on, and is guessing less to find a proof.
| |average rewards|average steps|successful proofs|success rate|
|---|---|---|---|---|
|Overall|56.8|6.6|2771|69.2%|
|Last 100 episodes|65.7|5.9|75|75%|
|Random|14.2|10|1533|38.3%|
Table 1: Performances of training and random rollouts on the same settings. Average steps
refer to the number of timesteps needed to find a proof.
**Improvements** In our baseline approach, each formula in the fringe is given as a sequence of a finite number of tokens in Polish notation. The tree structure of a formula is not fully reflected in the representation, which uses integer encoding, and the models are sequence-based. We plan to
replace the current representation by more sophisticated ones such as learned embeddings using
RNN as proposed in GamePad [11] or TNN for HOL4 terms as proposed in [7]. With better
and deeper networks for both representation and policies, we hope that the performance of
preliminary experiments generalizes to a larger scale. Other improvements include pre-training
the policies on easy problems to accelerate training, or learning a supervised policy in advance
[13] to help with proof exploration. We may also model backtracking by considering a proof
graph as a state and allow the agent to choose fringes to work on.
## References
[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S.
Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew
Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath
Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike
-----
Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent
Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg,
Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on
heterogeneous systems, 2015. Software available from tensorflow.org.
[2] Alexander A. Alemi, François Chollet, Niklas Een, Geoffrey Irving, Christian Szegedy, and Josef
Urban. DeepMath - deep sequence models for premise selection. In Proceedings of the 30th
_International Conference on Neural Information Processing Systems, NIPS’16, pages 2243–2251,_
USA, 2016. Curran Associates Inc.
[3] Kshitij Bansal, Sarah Loos, Markus Rabe, Christian Szegedy, and Stewart Wilcox. HOList: An
environment for machine learning of higher order logic theorem proving. In Kamalika Chaudhuri
and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine
_Learning, volume 97 of Proceedings of Machine Learning Research, pages 454–463, Long Beach,_
California, USA, 09–15 Jun 2019. PMLR.
[4] Pierre Boutillier, Stephane Glondu, Benjamin Grégoire, Hugo Herbelin, Pierre Letouzey, PierreMarie Pédrot, Yann Régis-Gianas, Matthieu Sozeau, Arnaud Spiwack, and Enrico Tassi. Coq 8.4
Reference Manual. Research report, Inria, July 2014. The Coq Development Team.
[5] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang,
and Wojciech Zaremba. OpenAI Gym. CoRR, abs/1606.01540, 2016.
[6] C. B. Browne, E. Powley, D. Whitehouse, S. M. Lucas, P. I. Cowling, P. Rohlfshagen, S. Tavener,
D. Perez, S. Samothrakis, and S. Colton. A survey of Monte Carlo tree search methods. IEEE
_Transactions on Computational Intelligence and AI in Games, 4(1):1–43, March 2012._
[7] Thibault Gauthier. Deep reinforcement learning in HOL4. CoRR, abs/1910.11797, 2019.
[8] Thibault Gauthier, Cezary Kaliszyk, and Josef Urban. Learning to reason with HOL4 tactics.
_CoRR, abs/1804.00595, 2018._
[9] John Harrison. HOL Light: An overview. In Stefan Berghofer, Tobias Nipkow, Christian Urban,
and Makarius Wenzel, editors, Proceedings of the 22nd International Conference on Theorem Prov_ing in Higher Order Logics, TPHOLs 2009, volume 5674 of Lecture Notes in Computer Science,_
pages 60–66, Munich, Germany, 2009. Springer-Verlag.
[10] Sepp Hochreiter and Jürgen Schmidhuber. Long Short-Term Memory. _Neural Computation,_
9(8):1735–1780, 1997.
[11] Daniel Huang, Prafulla Dhariwal, Dawn Song, and Ilya Sutskever. Gamepad: A learning environment for theorem proving. CoRR, abs/1806.00608, 2018.
[12] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito,
Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. In NeurIPS Autodiff Workshop, 2017.
[13] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez,
Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of GO
without human knowledge. Nature, 550(7676):354, 2017.
[14] Konrad Slind and Michael Norrish. A brief overview of HOL4. In Otmane Ait Mohamed, César
Muñoz, and Sofiène Tahar, editors, Theorem Proving in Higher Order Logics, pages 28–32, Berlin,
Heidelberg, 2008. Springer Berlin Heidelberg.
[15] Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In Proceedings of the 12th International Conference on Neural Information Processing Systems, NIPS'99, pages 1057–1063, Cambridge, MA, USA, 1999. MIT Press.
[16] Mingzhe Wang, Yihe Tang, Jian Wang, and Jia Deng. Premise selection for theorem proving by
deep graph embedding. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages
2786–2796. Curran Associates, Inc., 2017.
[17] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforce
-----
ment learning. Machine Learning, 8(3):229–256, May 1992.
[18] Kaiyu Yang and Jia Deng. Learning to prove theorems via interacting with proof assistants. In
_International Conference on Machine Learning, 2019._
-----
| [
"Minchao, Wu",
"Michael, Norrish",
"Christian, Walder",
"Amir, Dezfouli"
] | 2020-01-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
Relating the Seemingly Unrelated: Principled Understanding of Generalization for Generative Models in Arithmetic Reasoning Tasks | Large language models (LLMs) have demonstrated impressive versatility across numerous tasks, yet their generalization capabilities remain poorly understood. To investigate these behaviors, arithmetic tasks serve as important venues. In previous studies, seemingly unrelated mysteries still exist -- (1) models with appropriate positional embeddings can correctly perform longer unseen arithmetic operations such as addition, but their effectiveness varies in more complex tasks like multiplication; (2) models perform well for longer unseen cases in modular addition under specific moduli (e.g., modulo 100) but struggle under very close moduli (e.g., modulo 101), regardless of the positional encoding used. We believe previous studies have been treating the symptoms rather than addressing the root cause -- they have paid excessive attention to improving model components, while overlooking the differences in task properties that may be the real drivers. This is confirmed by our unified theoretical framework for different arithmetic scenarios. For example, unlike multiplication, the digital addition task has the property of translation invariance which naturally aligns with the relative positional encoding, and this combination leads to successful generalization of addition to unseen longer domains. The discrepancy in operations modulo 100 and 101 arises from the base. Modulo 100, unlike 101, is compatible with the decimal system (base 10), such that unseen information in digits beyond the units digit and the tens digit is actually not needed for the task. Extensive experiments with GPT-like models validate our theoretical predictions. These findings deepen our understanding of the generalization mechanisms, and facilitate more data-efficient model training and objective-oriented AI alignment. | Extensive experiments with GPT-like models validate the theoretical predictions and deepen the understanding of the generalization mechanisms, and facilitate more data-efficient model training and objective-oriented AI alignment. | [
"Xingcheng, Xu",
"Zibo, Zhao",
"Haipeng, Zhang",
"Yanqing, Yang"
] | 2024-07-25T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2407.17963v1 | https://arxiv.org/abs/2407.17963 | https://www.semanticscholar.org/paper/d57152fd687576c656fa01c160b3af566cf6789d |
|
Retrieval-Augmented Proof Step Synthesis | Automated theorem proving is relying increasingly on sophisticated language modelling approaches for synthesizing proof steps (tactic applications, rewrite rules). However, one of the most significant difficulties of proof search is finding the correct premises to be used. This raises the problem of combining premise selection with language modeling. There are two obvious avenues towards this goal: synthesizing the full theorem text to be utilized as a premise, or using a separate premise selection model that is used as an extra component to be used when referencing theorems. In this paper, we suggest a new solution based on language modelling that allows premise selection to become an organic component of the deep learning model and is not trained in separation. We compare this approach to theorem proving using a combination of pretrained premise selection and tactic synthesis on the HOList dataset. | null | # Retrieval-Augmented Proof Step Synthesis
May 2021
Christian Szegedy, Markus Rabe, and Henryk Michalewski
Google Research, Mountain View, CA, USA
```
(szegedy,mrabe,henrykm)@google.com
```
**Abstract**
Automated theorem proving is relying increasingly on sophisticated language modelling
approaches for synthesizing proof steps (tactic applications, rewrite rules). However, one
of the most significant difficulties of proof search is finding the correct premises to be used.
This raises the problem of combining premise selection with language modeling. There are
two obvious avenues towards this goal: synthesizing the full theorem text to be utilized as
a premise, or using a separate premise selection model that is used as an extra component
to be used when referencing theorems. In this paper, we suggest a new solution based
on language modelling that allows premise selection to become an organic component of
the deep learning model and is not trained in separation. We compare this approach to
theorem proving using a combination of pretrained premise selection and tactic synthesis
on the HOList dataset.
## 1 Introduction
Premise selection [9] is a central problem of theorem proving in large theories. Realistic benchmarks of large-scale automated theorem proving are based on corpora of human-formalized
mathematics and can include theories with hundreds of thousands of theorems to be utilized [10].
This suggests an evaluation setup in which the proof of each theorem is allowed to utilize only
premises that were available for the original author of the corpus. Typically, the theorems
are sorted in some topological order of dependence and only theorems preceding the current
theorem are allowed during proving. In order to avoid information leaks, the machine learning
models for premise selection have to be retrained incrementally for each theorem to be proved.
This is a realistic scenario for machine learning models that can be updated very quickly, but
has posed a challenge for deep-learning-based approaches. For this purpose, premise selection
systems using deep learning have been evaluated with a two-phase methodology, in which performance is measured on a held-out set of the theorems to be proved, but the models are first trained on all possible premises. While this approach is compatible with most modern deep-learning-based setups, it has the potential for information leaks, as the premise selection for a theorem might be based on information from theorems proved later in the database. This does not model the real-life constraints in a conservative manner. Most current deep-learning-based theorem proving systems are evaluated under the same questionable assumptions. While some results [2]
on FlySpeck suggest that the effect of training on future theorems is not too critical, this
methodology also reinforces the practice of developing systems that are not easily used in an
incremental setup and does not measure this important aspect of the system faithfully.
## 2 Related Work
Theorem proving in large theories and premise selection was pioneered by [9, 10] and later in
[5] for first order theorem proving. DeepMath [1] proposed a deep-learning-based approach
-----
for premise selection in a similar setup for the Mizar [6] corpus, however their methodology
suffers from the same issue of training on the proof of future theorems. Later, TacTicToe [3]
suggested premise selection for higher-order-logic theorem proving. HOList [7] was suggested
to combine HOL-based theorem proving with graph-neural-network-based premise-selection.
However, deep-learning-based language modeling has shown surprising effectiveness for this
purpose [12] and recently, GPT-f [8] has demonstrated the usefulness of large language models
for proof-automation for Lean.
## 3 Incremental Proof Step Synthesis
Here, we present an incremental proof step synthesis approach that relies heavily on, and integrates seamlessly with, state-of-the-art neural architectures, especially transformer
networks [11] designed for language modelling. While language modelling has been increasingly
and successfully used for synthesizing proof steps, it is typically used in a setup in which the
transformer model is given enough training steps to memorize the statements to be used. This
way, the network can produce either the full theorem text or a reference by naming the theorem.
However, as we will demonstrate, this approach results in theorem proving performance that
lags behind systems that were trained for premise selection directly via contrastive training [1].
In this talk, we present an approach that augments transformers with a retrieval-based model, similarly to [4]. Our approach differs from pure language modeling in that it allows theorems to be looked up immediately after they are proved: the embeddings of theorems are stored in a database that the transformer model consults through a side-attention mechanism into this dynamically updated database. The keys of the embeddings are updated through the standard backpropagation mechanism of that attention layer, and the theorem names can be extracted from the values associated with those premises. The advantage of this approach is that it integrates directly with the transformer architecture, and the lookup is trained incrementally using standard attention layers, which involve a large number of negative premises and therefore alleviate the need for hard negative mining. Still, the inference mechanism uses standard autoregressive decoding, and the final result can consult any premises that are appropriate in the given context. This is different from previous approaches [7] in which the premises were
preselected and the decoder did not have full control of the synthesized proof step. We present
experiments with a system that integrates this memory lookup into the transformer architecture and is trained in an end-to-end manner, and verify that it is competitive with approaches that utilize a separate premise selection model trained explicitly for this purpose. This paves the way towards simpler systems that can utilize knowledge conditioned on large knowledge bases in an incremental fashion while requiring fewer training steps.
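As a rough illustration of the side-attention lookup sketched above, the toy code below stores theorem embeddings in a growing memory and lets a decoder state attend over them to produce both a soft context vector and a hard premise reference. The dimensions, the single query projection, the example theorem names, and the way names are recovered are assumptions for exposition, not the system's actual architecture; in a trainable version the stored keys would be registered as parameters so that backpropagation can update them.
```python
# Toy sketch of a side-attention premise lookup (not the authors' model);
# dimensions, example names, and the key/value layout are assumptions.
import torch
import torch.nn as nn

class PremiseMemory(nn.Module):
    def __init__(self, d_model):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.keys, self.names = [], []

    def add_theorem(self, name, embedding):
        # A theorem becomes retrievable immediately after it is proved.
        self.names.append(name)
        self.keys.append(embedding)

    def forward(self, hidden_state):
        # hidden_state: (d_model,) decoder state at a premise position.
        keys = torch.stack(self.keys)                 # (N, d_model)
        attn = torch.softmax(keys @ self.q_proj(hidden_state), dim=0)
        context = attn @ keys                         # soft premise summary
        reference = self.names[int(attn.argmax())]    # hard premise reference
        return context, reference

memory = PremiseMemory(d_model=8)
memory.add_theorem("THEOREM_A", torch.randn(8))       # placeholder names
memory.add_theorem("THEOREM_B", torch.randn(8))
_, name = memory(torch.randn(8))
print(name)
```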
## References
[1] Alexander A Alemi, François Chollet, Geoffrey Irving, Niklas Eén, Christian Szegedy, and Josef Urban. DeepMath - deep sequence models for premise selection. In D. Lee, M. Sugiyama, U. Luxburg,
I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems, pages
2235–2243, 2016.
[2] Kshitij Bansal, Sarah M Loos, Markus N Rabe, Christian Szegedy, and Stewart Wilcox. HOList:
An environment for machine learning of higher-order theorem proving. In Kamalika Chaudhuri
and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine
_Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings_
_of Machine Learning Research, pages 454–463. PMLR, 2019._
-----
[3] Thibault Gauthier, Cezary Kaliszyk, and Josef Urban. TacticToe: Learning to reason with HOL4
tactics. In Thomas Eiter and David Sands, editors, LPAR-21, 21st International Conference on
_Logic for Programming, Artificial Intelligence and Reasoning, Maun, Botswana, May 7-12, 2017,_
volume 46 of EPiC Series in Computing, pages 125–143. EasyChair, 2017.
[4] Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. Retrieval augmented language model pre-training. In Hal Daum´e III and Aarti Singh, editors, International
_Conference on Machine Learning, pages 3929–3938. PMLR, 2020._
[5] Cezary Kaliszyk and Josef Urban. Mizar 40 for mizar 40. _Journal of Automated Reasoning,_
55(3):245–256, 2015.
[6] The Mizar Mathematical Library. Accessed: 2018/01/18.
[7] Aditya Paliwal, Sarah Loos, Markus Rabe, Kshitij Bansal, and Christian Szegedy. Graph representations for higher-order logic and theorem proving. In The Thirty-Fourth AAAI Conference on
_Artificial Intelligence, AAAI 2020, New York, NY, USA, February 7-12, 2020. AAAI Press, 2020._
[8] Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving,
2020.
[9] Josef Urban. MPTP–motivation, implementation, first experiments. Journal of Automated Rea_soning, 33(3-4):319–339, 2004._
[10] Josef Urban, Geoff Sutcliffe, Petr Pudlák, and Jiří Vyskočil. MaLARea SG1 - machine learner for automated reasoning with semantic guidance. In Alessandro Armando, Peter Baumgartner, and Gilles Dowek, editors, International Joint Conference on Automated Reasoning, pages 441–456. Springer,
2008.
[11] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez,
Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Isabelle Guyon, Ulrike von
Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman
Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008, 2017.
[12] Qingxiang Wang, Chad Brown, Cezary Kaliszyk, and Josef Urban. Exploration of neural machine
translation in autoformalization of mathematics in Mizar. In Jasmin Blanchette and C˘at˘alin
Hrit¸cu, editors, Proceedings of the 9th ACM SIGPLAN International Conference on Certified
_Programs and Proofs, pages 85–98, 2020._
-----
| [
"Markus, Rabe",
"Henryk, Michalewski",
"Christian, Szegedy"
] | 2021-01-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
RevOrder: A Novel Method for Enhanced Arithmetic in Language Models | This paper presents RevOrder, a novel technique aimed at improving arithmetic operations in large language models (LLMs) by reversing the output digits in addition, subtraction, and n-digit by 1-digit (nD by 1D) multiplication tasks. Our method significantly reduces the Count of Sequential Intermediate Digits (CSID) to $\mathcal{O}(1)$, a new metric we introduce to assess equation complexity. Through comprehensive testing, RevOrder not only achieves perfect accuracy in basic arithmetic operations but also substantially boosts LLM performance in division tasks, particularly with large numbers where traditional models struggle. Implementation of RevOrder is cost-effective for both training and inference phases. Moreover, applying RevOrder to fine-tune the LLaMA2-7B model on the GSM8K math task results in a considerable improvement, reducing equation calculation errors by 46% and increasing overall scores from 41.6 to 44.4. | RevOrder, a novel technique aimed at improving arithmetic operations in large language models (LLMs) by reversing the output digits in addition, subtraction, and n-digit by 1-digit (nD by 1D) multiplication tasks, significantly reduces the Count of Sequential Intermediate Digits to $\mathcal{O}(1)$, a new metric to assess equation complexity. | ## RevOrder: A Novel Method for Enhanced Arithmetic in Language Models
**Si Shen**
Nanjing University of Science and Technology, Nanjing, China

**Peijun Shen**
Henan University, Kaifeng, China
[email protected]

**Danhao Zhu**
Jiangsu Police Institute, Nanjing, China
[email protected]

**Abstract**
This paper presents RevOrder, a novel technique aimed at improving arithmetic operations
in large language models (LLMs) by reversing
the output digits in addition, subtraction, and
n-digit by 1-digit (nD by 1D) multiplication
tasks. Our method significantly reduces the
Count of Sequential Intermediate Digits (CSID)
to O(1), a new metric we introduce to assess
equation complexity. Through comprehensive
testing, RevOrder not only achieves perfect accuracy in basic arithmetic operations but also
substantially boosts LLM performance in division tasks, particularly with large numbers
where traditional models struggle. Implementation of RevOrder is cost-effective for both
training and inference phases. Moreover, applying RevOrder to fine-tune the LLaMA2-7B
model on the GSM8K math task results in a
considerable improvement, reducing equation
calculation errors by 46% and increasing overall scores from 41.6 to 44.4. [1 2]
Figure 1: An illustration of performing addition using
various methods. In the RevOrder method, the ’r|’ symbol indicates that the subsequent digits are presented in
reverse order.
in a chain-of-thought (COT) manner, common in
reasoning tasks (Wei et al., 2022b; Kojima et al.,
2022; Zhou et al., 2022). For instance, Nye et al.
(2021) used a ’Scratchpad’ to generate intermediate
steps, achieving high accuracy in 8D addition tasks,
as shown in Fig. 1(b). Similar methods are applied
for subtraction, multiplication, division, and other
arithmetic operations (Liu and Low, 2023; Yang
et al., 2023).
However, practical application of arithmetic reasoning in LMs faces significant challenges. Firstly,
LMs lack consistency in providing accurate results,
and there is no established theory to measure equation complexity or to determine if an equation is
within an LM’s capabilities. For example, Liu and
Low (2023) posited that addition is learnable by
LLMs, but their experiments with large-digit addition contained minor errors. Secondly, current
decomposition methods are token-intensive, making them more expensive than tool-based solutions
during inference. Even for a simple 2D addition,
the Scratchpad method (Nye et al., 2021), shown in
Fig. 1(b), is not more token-efficient than Python
**1** **Introduction**
Large language models (LLMs) have gained significant attention in recent years, excelling in natural
language understanding and generation tasks (Zhao
et al., 2023). Despite their advancements, the leading models like ChatGPT (OpenAI, 2022) and GPT4 (OpenAI, 2023) struggle with basic arithmetic,
particularly with large digits. The GPT-4 website
service[3] addresses this by switching to external
Python tools, as depicted in Fig. 1(a). This shift
not only adds a cumbersome step but also leads to
excessive token usage, significantly disrupting the
language processing flow and efficiency.
Arithmetic reasoning has long focused on solving arithmetic problems with LMs (Lu et al., 2022).
Typically, LMs generate the solutions step-by-step
1Corresponding authors: Danhao Zhu
2The data and code for this paper are available on Github.
3https://chat.openai.com/, 2024-1-26
-----
**2** **Related Works**
Arithmetic ability, a cornerstone of mathematics,
has long served as a benchmark for assessing
model capabilities, evolving from statistical methods (Hosseini et al., 2014) through machine learning techniques (Kushman et al., 2014), deep learning approaches (Wang et al., 2017) to LLM methods (Wei et al., 2022a).
While scaling laws for LLMs suggest that model
capacity increases with model size, compute resources, and training data (Kaplan et al., 2020;
Hoffmann et al., 2022), LLMs often struggle to
directly generate arithmetic results. Consequently,
step-by-step arithmetic reasoning methods have
been developed. ScratchPad (Nye et al., 2021) introduces this concept for additions, achieving nearperfect accuracy on 8D addition tasks. This idea
has since been expanded to more complex operations, such as multiplication and division (Liu and
Low, 2023; Yang et al., 2023). These complex
operations depend on the assumption that LLMs
can efficiently perform basic operations such as
addition and subtraction. Otherwise, token usage
quickly becomes unsustainable. However, these so-called basic operations often fail to achieve 100%
accuracy with large digits, making the more complex operations built upon them even more prone
to error. Our CSID theory provides a framework to
assess the complexity of equations, showing that
LLMs’ ability to perform basic operations diminishes as digit size grows. Conversely, RevOrder
introduces an efficient method to keep equations’
CSID low, ensuring their manageability within constrained token budgets.
Given the limitations and high token consumption of previous arithmetic reasoning methods,
more pragmatic solutions have emerged, such as
utilizing external tools or programming (Schick
et al., 2023; Chen et al., 2022; Gao et al., 2023).
RevOrder stands out by offering reliability and efficiency in addition and subtraction, positioning
itself as a resource-saving alternative to these methods.
**3** **Sequential Intermediate Digits in**
**Arithmetic Computation**
Arithmetic reasoning in language models (LMs) is
challenging, mainly due to the sequential prediction of digits. This complexity is exacerbated when
contextual digits required for accurate predictions
are not inferred from previous steps. For example,
tools (Fig. 1(a)).
To address these challenges, we introduce two
novel concepts. First, we propose the Count of
Sequential Digits (CSID) as an indicator to measure the difficulty of arithmetic equations. A larger
CSID suggests more omitted reasoning steps, indicating a more complex equation. We demonstrate
that the CSID complexity grows at O(n) for addition and subtraction, where n is the digit count. Empirical evidence suggests that advanced language
models struggle considerably with high-CSID problems. This indicates a notable limitation: LLMs
are unreliable in directly producing results for even
basic arithmetic tasks, such as single additions or
subtractions, when the digits involved are large.
Second, we propose RevOrder, a technique that
reduces the CSID to a constant 1 for addition, subtraction, and nD by 1D multiplication operations.
Illustrated in Fig. 1(c), RevOrder reverses the output order of addition. This approach aligns with the
natural human reasoning sequence, where higherorder digits are resolved after the lower ones. Unlike previous methods such as Scratchpad (Nye
et al., 2021), RevOrder requires virtually no additional tokens for these basic operations. Building
upon these, we can construct more complex operations with significantly reduced token usage.
RevOrder, evaluated on the Big-bench arithmetic
task (Srivastava et al., 2022) and an expanded set
with larger digits, achieved 100% accuracy in addition, subtraction, multiplication, and low-digit division tasks, and nearly 100% in large-digit division,
outperforming baseline methods. The experimental section highlights its training and inference efficiency. Finetuning LLAMA2 (Touvron et al., 2023)
with RevOrder on the GSM8K dataset (Cobbe et al.,
2021) significantly improved equation accuracy
and overall scores (from 88.9% to 94.1%, and
41.6 to 44.4, respectively). These results affirm
RevOrder’s effectiveness and token economy in a
range of arithmetic tasks, especially in addition and
subtraction.
Section 2 reviews related work, Section 3 introduces the CSID metric, Section 4 details the
RevOrder technique, Section 5 reports on experiments on arithmetic calculation, Section 6 discusses finetuning on GSM8K, and Section 7 concludes the paper.
-----
in addition, LMs may predict higher-order digits
before lower-order ones, contradicting the logical
computation order. This paper introduces a novel
metric to quantify and understand this complexity.
**3.1** **Definition of Sequential Intermediate**
**Digits (SIDs)**
A Sequential Intermediate Digit (SID) is a numeral
crucial for the accurate prediction of the next digit
in a sequence, yet not present in the preceding sequence. Within the framework of chain-of-thought
reasoning, SIDs represent indispensable steps that,
despite being missing, are vital for the computational process. Consequently, the Count of SIDs
(CSIDs) is employed as a metric to assess the complexity of a generation step, with a higher CSID
denoting a more demanding and intricate task. The
CSID of an equation is thus defined as the maximum CSID required for generating each step of the
result.
The primary types of SIDs include:
- Carry-over or borrow digits in addition and
subtraction. For example, in 123 + 179 =
302, the digit 3 in the hundreds place requires
the carry-over from the tens and units places,
resulting in a maximum CSID of 2.
- Digits from omitted reasoning steps, such as
the intermediate sum 3 in 1 + 2 + 4 = 7.
It is postulated that basic operations like 1D by
1D addition, subtraction, multiplication, division,
counting, and copying do not require SIDs, as their
straightforward nature falls within the capabilities
of modern LMs. Directly generating results for
complex operations, such as multi-digit multiplication and division, requires more SIDs due to the
omitted steps for decomposing these into multiple
basic operations.
Reducing an equation’s CSIDs, thereby lowering
its solving difficulty, can be achieved by expanding the equation step-by-step in a chain-of-thought
manner. For instance, the CSID for the calculation
1+2+4 = 3+4 = 7 is lower than for 1+2+4 = 7
because the intermediate sum 3 is included in the
reasoning process, effectively reducing the number
of SIDs.
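To make the carry-over case concrete, the small helper below counts the worst-case CSID of a conventional (non-reversed) addition as the longest run of consecutive carries; the function name and counting convention are ours, chosen to match the 123 + 179 example.
```python
# Sketch: CSID of a conventional addition a + b, counted as the longest run
# of consecutive carry-overs needed as SIDs (function name is ours).
def addition_csid(a: int, b: int) -> int:
    carry, run, worst = 0, 0, 0
    while a or b or carry:
        s = a % 10 + b % 10 + carry
        carry = s // 10
        run = run + 1 if carry else 0   # consecutive carries still pending
        worst = max(worst, run)
        a, b = a // 10, b // 10
    return worst

assert addition_csid(123, 179) == 2     # the example above: CSID of 2
assert addition_csid(11, 11) == 0       # no carries, no SIDs
```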
**3.2** **The CSIDs for Arithmetic Operations**
In our CSID analysis of standard arithmetic operations, which is akin to analyzing space or time complexity in algorithms, we focus on the worst-case scenario.

Figure 2: LLM performance on equations with varying CSIDs.

Consider two numbers $a = a_n a_{n-1} \ldots a_2 a_1$ and $b = b_m b_{m-1} \ldots b_2 b_1$, resulting in $c = c_t c_{t-1} \ldots c_2 c_1$, with $m \leq n$. When involving negative numbers, the minus sign '-' is also treated as a digit.

- In addition and subtraction, the computation $a_n a_{n-1} \ldots a_2 a_1 \pm b_m b_{m-1} \ldots b_2 b_1 = c_t c_{t-1} \ldots c_2 c_1$ depends on each $c_i$ involving $a_i$, $b_i$, and possibly $c_{i-1}$ for carry-overs or borrows. Hence, the CSID for $c_t$ includes all lower digits as SIDs, indicating a complexity of $O(n)$.

- For multiplication and division, the CSIDs are $O(n^2)$ and $O(n^2 - m^2)$ respectively, as detailed in Appendix A.
**3.3** **LLM Performance on Large CSID**
**Equations**
We trained various models on arithmetic tasks involving 15D+15D calculations, maintaining identical hyper-parameters, training data, and training
steps across all models to ensure a fair comparison.
The test equations, strictly in 15D+15D format,
were classified into various CSID levels according
to the maximum number of continuous carry-over
digits. The findings, as depicted in Fig. 2, demonstrate that:
- CSID effectively measures the complexity of
arithmetic equations, where the performance
consistently declines with increasing CSIDs.
- Larger models exhibit improved performance
on equations with higher CSIDs.
-----
- The benefit of increasing model size diminishes on high CSID equations. For instance,
a 7B model shows more significant improvement on equations with CSIDs of 4 and 5
than on those with 6-9. This trend suggests
that even advanced LLMs, like GPT-4, encounter difficulties with large digit addition
tasks. Given that CSIDs have a complexity of
at least O(n), arithmetic problems quickly surpass the capacity of LLMs when dealing with
large digits. Therefore, LLMs cannot serve
**as reliable calculators for immediate result**
**generation in complex arithmetic tasks.**
**4** **RevOrder: Reversing the Order of**
**Output Digits**
We introduce RevOrder, an innovative technique
devised to maintain low CSID in equations, thereby
ensuring their solvability by LMs. Additionally,
RevOrder is designed to minimize token usage,
enhancing overall efficiency.
**4.1** **Addition and Subtraction**
For addition and subtraction, we reverse the output
digits’ order:
$$a \pm b = r|c_1 c_2 \ldots c_t = c_t \ldots c_2 c_1$$

Here, $r|$ is a special token indicating that the digits that follow are in reversed order. To generate each $c_i$ in $r|c_1 c_2 \ldots c_t$, only $a_i$, $b_i$, and at most one SID for the carry-over or borrow from $c_{i-1}$ are required. Thus, both addition and subtraction consume at most 1 SID regardless of number length. Therefore, by applying RevOrder, the CSID complexity drops from $O(n)$ to $O(1)$.
The cost of RevOrder for addition and subtraction is quite cheap during both training and inference. In training, RevOrder simply reverses the
result digit orders. During inference, almost no
additional tokens are required since the recovery of
the result sequence can be done with rules.
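A minimal sketch of this digit-by-digit generation and the rule-based recovery is given below; the helper names and the exact string format beyond the r| prefix are assumptions.
```python
# Sketch: emit a + b in RevOrder, one digit per step. Each digit needs only
# the two operand digits and at most one carried SID, so the CSID stays at 1.
def revorder_add(a: int, b: int) -> str:
    digits, carry = [], 0
    while a or b or carry:
        s = a % 10 + b % 10 + carry
        digits.append(str(s % 10))          # least-significant digit first
        carry = s // 10
        a, b = a // 10, b // 10
    return "r|" + "".join(digits or ["0"])

def recover(rev: str) -> int:
    # Rule-based recovery of the conventional digit order at decode time.
    return int(rev.removeprefix("r|")[::-1])

assert revorder_add(123, 46) == "r|961"     # matches the '123+46=r|961' format
assert recover("r|961") == 169
```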
**4.2** **Multiplication and Division**
More complex operations like multiplication and
division can be decomposed into basic operations.
**4.2.1** **Multiplication**
Firstly, consider the simplest form of multiplication, nD by 1D, e.g., 12*7=r|48, which consistently requires only 1 SID. This efficiency originates from the definition that 1D by 1D multiplication does not incur any SIDs, so the only SID is the carry-over number in the addition.
Next, let’s examine a more general multiplication example.
12 × 4567
=12 × 4000 + 12 × 500 + 12 × 60 + 12 × 7 (1)
=r|00084 + r|0006 + r|027 + r|48 (2)
=(r|00084 + r|0006) + (r|027 + r|48) (3)
=r|00045 + r|408 (4)
=r|40845
=54804
First, decompose the multiplication as shown in
Eqn. (1), which does not require any SIDs (it requires only count and copy operations, which do not use SIDs in our definition). Second, output the results of
each sub-multiplication in reverse order, as demonstrated in Eqn. (2). The zeros in these results can be
efficiently generated through a copy operation from
previous sequences. The nD by 1D multiplication
in reverse order has a CSID of 1. Finally, iteratively combine the adjacent addition results until
the final outcome is achieved, as illustrated in Eqn.
(3) and (4). As each addition operation involves
only two numbers, the CSID remains constant at 1
throughout the process.
In conclusion, the CSID in this multiplication
process never exceeds 1, with a complexity of
_O(1)._
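The short trace below reproduces this decomposition programmatically: nD-by-1D partial products rendered in reverse order, then adjacent results combined two at a time until one value remains. The helper names are ours, and the rendering is only meant to mirror Eqns. (1)-(4).
```python
# Sketch of the multiplication decomposition above; every combination step
# adds only two numbers, so the CSID never exceeds 1. Helper names are ours.
def rev(n: int) -> str:
    return "r|" + str(n)[::-1]                      # reverse-order rendering

def revorder_mul_trace(a: int, b: int) -> list[str]:
    parts = [a * int(d) * 10 ** i for i, d in enumerate(reversed(str(b)))]
    trace = [" + ".join(rev(p) for p in reversed(parts))]
    while len(parts) > 1:                           # pairwise combination
        parts = [sum(parts[j:j + 2]) for j in range(0, len(parts), 2)]
        trace.append(" + ".join(rev(p) for p in reversed(parts)))
    return trace

for step in revorder_mul_trace(12, 4567):
    print(step)
# r|00084 + r|0006 + r|027 + r|48
# r|00045 + r|408
# r|40845
```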
**4.2.2** **Division**
Consider the division 948 ÷ 12 = 79:
948 ÷ 12
=7 Rem (948 − 12 × 70) (5)
=7 Rem (948 − _r|048)_
=7 Rem r|801
=79 Rem (r|801 − 12 ∗ 9) (6)
=79 Rem (r|801 − _r|801)_
=79 Rem (0)
=79
Utilizing traditional long division alongside
RevOrder, the CSID typically remains at 1, with the
exception of quotient estimation, as noted in Eqn.
(5) and Eqn. (6). Since the CSID analysis here
is similar to that of multiplication, we omit it for
-----
**5** **Experiments on Arithmetic Problems**
In this section, we aim to address two key research
questions (RQs):
- RQ1: Does RevOrder enable a language
model to function as a reliable calculator?
(Section 5.2)
- RQ2: Is RevOrder a cost-effective solution
for practical use? (Section 5.4)
**5.1** **Setup**
**5.1.1** **Dataset**
Our training dataset is synthetically generated
using a Python script, with each sample being an equation formatted with RevOrder, e.g.,
’123+46=r|961’. The dataset comprises positive
integers, except in subtraction where negative numbers may result. Each division equation is assigned
a probability of 0.5 to be selected for generating a
rollback version. This involves intentionally misestimating a quotient step by a number ±1, followed
by a correction through the rollback process to the
accurate estimation. Further detail on the training
data is shown in Appendix B.
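A minimal sketch of such a generation script is shown below; the digit ranges, the operator mix, and the handling of negative results are assumptions, and the multiplication, division, and rollback samples would be produced analogously.
```python
# Minimal sketch of a RevOrder training-sample generator in the spirit of
# the description above; ranges and formatting details are assumptions.
import random

def rev(n: int) -> str:
    sign = "-" if n < 0 else ""
    return sign + "r|" + str(abs(n))[::-1]

def make_add_sub_sample(max_digits: int = 16) -> str:
    op = random.choice("+-")
    a = random.randint(0, 10 ** random.randint(1, max_digits) - 1)
    b = random.randint(0, 10 ** random.randint(1, max_digits) - 1)
    result = a + b if op == "+" else a - b
    return f"{a}{op}{b}={rev(result)}"

random.seed(0)
print(make_add_sub_sample())    # '123+46=r|961'-style samples
```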
**5.1.2** **Training and evaluation protocol**
We train a model named RevOrder-1B, which has
1.1 billion parameters. This model is trained on the
TinyLLaMA 1.1B framework (Zhang et al., 2024),
utilizing their released finetuning script. Specifically, the learning rate is set to 1e-4 for first 2
epochs and 1e-5 for the last epoch. The batch size
is 500.
For evaluation, we employ the BIG-bench Arithmetic sub-task (Srivastava et al., 2022) and additional challenging tasks proposed in the GOAT-7B
paper (Liu and Low, 2023). Each task has 1000
equations. We meticulously ensure that there is
no overlap between the evaluation datasets and our
training dataset, except for unavoidable overlaps in
small digits tasks. The evaluation metric is exact
match precision.
**5.1.3** **Baselines**
As baselines, we compare against three methods:
- GOAT-7B (Liu and Low, 2023): This model,
finetuned with 1 million instruction data on
LLAMA-7B (Touvron et al., 2023), decomposes multiplication and division similarly to
our approach. However, it relies on direct
result generation for subtraction and addition.
brevity. However, it is important to note that quotient
making precise CSID measurement challenging. In
practice, we observed instances where the language
model incorrectly estimated the quotient. To address this challenge, we implemented a rollback
mechanism. If an incorrect quotient is detected, as
illustrated in Eqn. (7), we insert a symbol ’W’ after
the line. This serves as a signal to adjust the process and re-estimate the quotient, as demonstrated
in Eqn. (8). This method ensures more accurate
quotient estimations in the long division process.
In practice, a proportion of rollback scenarios is included in training to enhance the model's capability to correct such errors.
948 ÷ 12
=8 Rem (948 − 12 × 80)
=8 Rem (948 − _r|069)_
=8 Rem (−r|21)W (7)
=7 Rem (948 − 12 × 70) (8)
_..._
However, the quotient estimation in division is
inherently unpredictable, rendering the CSID of
this operation less controllable. Consequently, unlike other arithmetic operations, the CSID for division cannot be consistently maintained at O(1).
This limitation makes division with RevOrder less
robust compared to addition, subtraction, and multiplication, as will be evidenced in our experimental
results.
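The sketch below simulates this long-division procedure with a noisy quotient estimator and the 'W' rollback check: a guessed digit is accepted only if the remaining value is consistent with its place value, and otherwise the step is marked with 'W' and re-estimated. The error-injection probability and the trace format are illustrative assumptions.
```python
# Sketch of long division with the 'W' rollback signal described above;
# the noisy guessing and the trace format are illustrative assumptions.
import random

def divide_with_rollback(dividend: int, divisor: int, p_err: float = 0.3):
    trace, rem, digits = [], dividend, []
    top = len(str(dividend)) - len(str(divisor))
    for power in range(top, -1, -1):
        place = divisor * 10 ** power
        while True:
            q = rem // place                        # true digit at this place
            if random.random() < p_err:             # simulate a mis-estimate
                q = max(0, q + random.choice([-1, 1]))
            new_rem = rem - q * place
            if 0 <= new_rem < place:                # estimate is consistent
                digits.append(q)
                rem = new_rem
                break
            trace.append(f"quotient digit {q} at 10^{power} -> W (re-estimate)")
    return int("".join(map(str, digits))), rem, trace

random.seed(1)
q, r, steps = divide_with_rollback(948, 12)
assert (q, r) == (79, 0)                            # 948 / 12 = 79, as above
```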
**4.3** **Towards More Compact Forms**
To reduce token usage, we propose compact forms
while maintaining CSID unchangeable.
For the multiplication example, it can be succinctly rewritten as: ’12×4567 = 12×4000 +
12×500 + 12×60+ 12×7=r|00084 + r|0006 + r|027
+ r|48 = r|00045 + r|408 = r|40845 = 54804’.
Similarly, the division example can be condensed to: ’948÷12 = 7R - (12×70)(r|048)(r|801)
# 9R - (12×9)(r|801)(0) = 79’, where R denotes
REM and # denotes a new quotient estimation.
Two principles guide these simplifications: 1.
Maintaining CSID: No digits essential for generating subsequent tokens are removed, ensuring the
CSID remains unchanged. 2. Eliminating Redundancy: Duplicated digits are removed, but care is
taken to avoid introducing ambiguities that might
confuse the LM.
-----
- MathGLM-2B (Yang et al., 2023): Finetuned
on the GLM-2B model for various arithmetic
tasks, MATHGLM-2B claims that extensive
step-by-step training data (1m-50m instances)
enables GPT models to solve math problems
without external calculators.
- GPT-4 (OpenAI, 2023): Currently one of the
most powerful LMs, GPT-4’s results are based
on direct mathematical problem-solving, without auxiliary tools or equation decomposition.
**5.2** **Main Results (RQ1)**
The results, as presented in Table 1, demonstrate
several key findings. Firstly, RevOrder-1B proves
to be a reliable method for addition, subtraction,
multiplication, and low-digit division tasks, achieving 100% accuracy across all corresponding tasks.
In contrast, the accuracy of all baseline methods
decreases with the increase in digit size. Secondly,
while RevOrder-1B shows slight imperfections in
large-digit division tasks, it still significantly outperforms baseline models. For instance, RevOrder-1B attains 99.4% accuracy on the challenging 12D ÷ 6D task, an improvement of 10.1 percentage points over the best-performing baseline, GOAT-7B.
The major success of RevOrder in multiplication
and division can be attributed to its precise execution of basic operations, including addition, subtraction, and nD-1D multiplication. While GOAT-7B
and MathGLM-2B also decompose these operations into basic ones, minor errors in these fundamental steps are amplified in subsequent composite
operations, leading to a rapid decline in accuracy
with larger digits.
In summary, RevOrder emerges as an effective
technique, enabling language models to perform exact arithmetic calculations in addition, subtraction,
multiplication, and low-digit division tasks.
**5.3** **In-Depth Analysis on Division**
Large-digit division represents the sole operation
where RevOrder encounters notable difficulties,
warranting additional focus.
Upon examining division errors case by case, we
discovered that all errors stemmed from incorrect
quotient estimations. Fig. 3 illustrates such an
error, where RevOrder-1B erroneously estimated
the 3rd quotient as 8 (marked in red) instead of 9,
without triggering the ’W’ symbol for a rollback.
Consequently, this led to a series of nonsensical
outputs. It’s notable that when a constant CSID
Figure 3: An error example of division by RevOrder.
Figure 4: Analysis of the rollback ratio in division. (a)
Test precision vs. rollback ratio for 12D ÷ 6D division.
(b) Probability of rollbacks during testing across different digit sizes.
of 1 is maintained in all four arithmetic operations,
no errors occur. Errors only arise during quotient
estimation, where CSID is unmeasurable. These
results validate our theory regarding CSID.
We also assessed the effectiveness of the rollback
mechanism. Fig. 4(a) presents the test precision for
12D ÷ 6D division across varying rollback ratios.
A stark precision decline to 0.84 is observed with
no rollback (ratio = 0). Precision does not significantly improve when the ratio exceeds 0.4, though
this is partly due to the high baseline precision of
0.99. Fig. 4(b) illustrates the frequency of rollbacks
during testing, indicating a higher incidence of rollbacks with larger digits. This trend underscores the
importance of the rollback technique, particularly
as it compensates for the increased likelihood of
errors in quotient estimation with larger numbers.
**5.4** **The Cost of RevOrder (RQ2)**
**5.4.1** **Cost of Training**
By maintaining a low CSID, RevOrder simplifies the learning process for arithmetic problems,
thereby reducing the volume of training data required. Table 2 compares the number of training
equations needed for various methods. Despite being a smaller model, RevOrder-1B achieves perfect
-----
| ADD | 1D | 2D | 3D | 4D | 5D | 8D+8D | 16D+8D | 16D+16D |
|---|---|---|---|---|---|---|---|---|
| GPT-4 | 100 | 100 | 99.6 | 98.8 | 94.1 | 92.1 | 9.4 | 94.1 |
| GOAT-7B | 100 | 100 | 99.4 | 98.3 | 98.1 | 97.8 | 97.1 | 97.6 |
| MathGLM-2B | 100 | 100 | 100 | 100 | 99.4 | - | - | - |
| RevOrder-1B | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |

| SUB | 1D | 2D | 3D | 4D | 5D | 8D-8D | 16D-8D | 16D-16D |
|---|---|---|---|---|---|---|---|---|
| GPT-4 | 100 | 100 | 99.2 | 98.9 | 92.4 | 70.5 | 10.6 | 59.6 |
| GOAT-7B | 100 | 100 | 99.7 | 98.6 | 98.4 | 96.8 | 95.8 | 96.3 |
| MathGLM-2B | 100 | 100 | 99.9 | 99.8 | 98.9 | - | - | - |
| RevOrder-1B | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |

| MUL | 1D | 2D | 3D | 4D | 5D | 16D×1D | 8D×4D | 6D×6D |
|---|---|---|---|---|---|---|---|---|
| GPT-4 | 100 | 99.4 | 30.3 | 5.3 | 0.0 | 61.5 | 0.0 | 0.0 |
| GOAT-7B | 100 | 100 | 97.8 | 96.9 | 96.7 | 99.7 | 88.1 | 96.8 |
| MathGLM-2B | 100 | 99.9 | 98.3 | 94.9 | 89.9 | - | - | - |
| RevOrder-1B | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |

| DIV | 1D | 2D | 3D | 4D | 5D | 16D÷1D | 6D÷3D | 12D÷6D |
|---|---|---|---|---|---|---|---|---|
| GPT-4 | 100 | 100 | 94.5 | 90.9 | 53.4 | 54 | 6.4 | 0.0 |
| GOAT-7B | 100 | 100 | 99.5 | 99 | 96.5 | 99 | 94.1 | 89.3 |
| MathGLM-2B | 100 | 100 | 99.4 | 100 | 94.9 | - | - | - |
| RevOrder-1B | 100 | 100 | 100 | 100 | 100 | 99.2 | 100 | 99.4 |

Table 1: Performance comparison on various arithmetic tasks. Columns 1D-5D are the BIG-bench sub-tasks; the last three columns of each block are the extra large-digit tasks. The results of the baseline methods are taken from their original papers, while the result of GPT-4 is taken from Liu and Low (2023).
| Model | # Equations | 100% ACC |
|---|---|---|
| RevOrder-1B | 0.5m | Yes |
| MathGLM-2B | 1m-50m | No |
| GOAT-7B | 1.7m | No |
Table 2: Number of training equations for different
methods. This table reports the dataset size required
for RevOrder-1B to achieve 100% accuracy on all Bigbench arithmetic sub-tasks. # Equations denotes the
number of training equations.
precision with at most half the training equations
compared to other methods. Recent studies indicate
that larger models often require less training data
for task mastery (Hoffmann et al., 2022; Xia et al.,
2022). Consequently, the training cost advantage
of RevOrder is likely to be even more pronounced
with larger LLMs.
**5.4.2** **Cost of Inference**
The inference cost is assessed based on the number of additional tokens required for performing
arithmetic calculations with RevOrder. We make
two assumptions: 1) Each character (digit, symbol,
etc.) is counted as one token, and 2) if the final
result is output in reverse, the recovery process is
handled by the tokenizer’s decode function.
For addition and subtraction equations, only one
Figure 5: The number of extra tokens required for multiplication and division.
extra token (’r|’) is required. For multiplication
and division equations, the number of extra tokens
used is illustrated in Fig. 5. RevOrder is more
token-efficient in both types of equations. Firstly,
the compact form introduced in Section 3.3 significantly reduces the token requirement for division,
approximately halving the number of extra tokens.
Secondly, the iterative combination approach in
multiplication, as exemplified in Eqn. (3), also
notably reduces token usage in multiplication.
However, it must be acknowledged that for largedigit multiplication and division tasks, the token
-----
consumption of RevOrder increases polynomially
and may eventually exceed the cost of using external tools. LLM service providers can set a threshold of digit number to decide between RevOrder
and tool-based solutions.
**6** **Additional Experiments on Math Word**
**Problems**
In this section, we delve into finetuning scenarios
to address the research question:
- RQ3: How does applying RevOrder affect finetuning performance on mathematical
tasks?
**6.1** **Setup**
The experiment is conducted on GSM8K (Cobbe
et al., 2021), a dataset of 8.5K high quality linguistically diverse grade school math word problems created by human problem writers. Our experiments
utilize LLAMA2-7B (Touvron et al., 2023) as the
foundational model. We modified the equations
in the GSM8K training set to adopt the RevOrder
format. This adaptation involved two major updates: Firstly, we presented the outcomes for addition, subtraction, and multiplication in reverse
order. Secondly, polynomial equations were expanded and solved iteratively, in pairs. Noted that
we did not decompose multi-digit multiplications
and divisions, as these cases are infrequent in the
GSM8K dataset. To further enhance the model’s
proficiency with RevOrder, we supplemented the
training set with a small, synthetically generated
dataset using a Python script. The comprehensive
details of the dataset and the training parameters
are provided in Appendix C.
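For illustration, the sketch below rewrites a multi-term sum into the paired, reverse-order step format described above; the exact textual format is an assumption (and, as noted in Appendix C, the actual data additionally wraps reversed numbers in @@ markers for the 7B model).
```python
# Sketch of rewriting a multi-term sum into paired RevOrder steps; the
# output format is an assumption, not the exact GSM8K annotation.
def to_revorder_steps(terms: list[int]) -> str:
    steps, vals = [" + ".join(map(str, terms))], list(terms)
    while len(vals) > 1:
        head = vals[0] + vals[1]                 # solve one pair at a time
        vals = [head] + vals[2:]
        rendered = ["r|" + str(head)[::-1]] + list(map(str, vals[1:]))
        steps.append(" + ".join(rendered))
    return " = ".join(steps)

print(to_revorder_steps([12, 7, 30]))
# 12 + 7 + 30 = r|91 + 30 = r|94
```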
**6.2** **Results**
From the analysis, it is evident that RevOrder significantly reduces calculation errors, by 94% for
addition, 87% for subtraction, and 46% for overall equation errors, thereby enhancing the final
score. This improvement underscores the potential
of seamlessly integrating RevOrder into fine-tuning
processes to achieve substantial performance gains.
We also examined the remaining errors and found that most are due to insufficient training; as a result, the model cannot follow the RevOrder instructions reliably. Some examples are presented in Appendix C.
Hence, integrating RevOrder effectively into
LMs is ideally conducted during the pretraining
| | Baseline | RevOrder |
|---|---|---|
| Score | 41.6 | 44.4 (+2.8) |
| Equation Acc | 88.9 | 94.1 (+5.2) |
| Acc of + | 96.7 | 99.8 (+2.1) |
| Acc of - | 97.0 | 99.6 (+2.6) |
| Acc of * | 95.8 | 98.8 (+3) |
Table 3: Fine-tuning results on GSM8K Dataset. This table compares the performance of models fine-tuned with
the original GSM8K dataset (baseline) against those
finetuned using the RevOrder-modified GSM8K dataset.
The Score is measured by the correctness ratio of final
results.
stage rather than the fine-tuning stage. The primary rationale is that excessive fine-tuning can
lead to catastrophic forgetting, thereby impairing
the general capabilities of LMs (Luo et al., 2023;
Ramasesh et al., 2021).
**7** **Conclusion**
In this paper, we introduce the CSID as a metric to
evaluate the complexity of arithmetic equations and
demonstrate that even large-scale LLMs struggle
with high-CSID equations. We propose RevOrder,
an innovative technique that ensures accurate arithmetic calculations by minimizing CSID, thereby
enhancing precision while reducing both training
and inference costs. Our experiments confirm that
RevOrder significantly outperforms previous methods in terms of accuracy and efficiency.
For future work, we identify two possible paths:
Firstly, developing token-efficient decomposition
algorithms suitable for larger LLMs, which can
handle higher CSIDs for complex arithmetic operations. Secondly, integrating RevOrder into LLMs’
pretraining could enhance arithmetic capabilities
more fundamentally than finetuning, reducing the
risk of catastrophic forgetting and ensuring broader
model proficiency.
Ultimately, RevOrder stands out as a particularly
promising approach for arithmetic operations, especially addition and subtraction, due to its precision
and efficiency. This positions it as a competitive
alternative to existing methods in enhancing LLMs’
arithmetic reasoning.
-----
**References**
Wenhu Chen, Xueguang Ma, Xinyi Wang, and
William W Cohen. 2022. Program of thoughts
prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint
_arXiv:2211.12588._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, et al. 2021. Training verifiers to solve math
word problems. arXiv preprint arXiv:2110.14168.
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon,
Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Pal: Program-aided language
models. In International Conference on Machine
_Learning, pages 10764–10799. PMLR._
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks,
Johannes Welbl, Aidan Clark, et al. 2022. Training compute-optimal large language models. arXiv
_preprint arXiv:2203.15556._
Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren
Etzioni, and Nate Kushman. 2014. Learning to solve
arithmetic word problems with verb categorization.
In Proceedings of the 2014 Conference on Empirical
_Methods in Natural Language Processing (EMNLP),_
pages 523–533.
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B
Brown, Benjamin Chess, Rewon Child, Scott Gray,
Alec Radford, Jeffrey Wu, and Dario Amodei. 2020.
Scaling laws for neural language models. _arXiv_
_preprint arXiv:2001.08361._
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in
_neural information processing systems, 35:22199–_
22213.
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and
Regina Barzilay. 2014. Learning to automatically
solve algebra word problems. In Proceedings of the
_52nd Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), pages_
271–281.
Tiedong Liu and Bryan Kian Hsiang Low. 2023. Goat:
Fine-tuned llama outperforms gpt-4 on arithmetic
tasks. arXiv preprint arXiv:2305.14201.
Pan Lu, Liang Qiu, Wenhao Yu, Sean Welleck, and
Kai-Wei Chang. 2022. A survey of deep learning for mathematical reasoning. _arXiv preprint_
_arXiv:2212.10535._
Yun Luo, Zhen Yang, Fandong Meng, Yafu Li, Jie
Zhou, and Yue Zhang. 2023. An empirical study
of catastrophic forgetting in large language models during continual fine-tuning. _arXiv preprint_
_arXiv:2308.08747._
Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari,
Henryk Michalewski, Jacob Austin, David Bieber,
David Dohan, Aitor Lewkowycz, Maarten Bosma,
David Luan, et al. 2021. Show your work: Scratchpads for intermediate computation with language
models. arXiv preprint arXiv:2112.00114.
OpenAI. 2022. Introducing chatgpt.
_https://openai.com/blog/chatgpt._
OpenAI. 2023. Gpt-4 technical report.
_https://openai.com/research/gpt-4._
Vinay Venkatesh Ramasesh, Aitor Lewkowycz, and
Ethan Dyer. 2021. Effect of scale on catastrophic
forgetting in neural networks. In International Con_ference on Learning Representations._
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta
Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola
Cancedda, and Thomas Scialom. 2023. Toolformer:
Language models can teach themselves to use tools.
_arXiv preprint arXiv:2302.04761._
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao,
Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch,
Adam R Brown, Adam Santoro, Aditya Gupta,
Adrià Garriga-Alonso, et al. 2022. Beyond the
imitation game: Quantifying and extrapolating the
capabilities of language models. _arXiv preprint_
_arXiv:2206.04615._
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint_
_arXiv:2307.09288._
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017.
Deep neural solver for math word problems. In Pro_ceedings of the 2017 conference on empirical meth-_
_ods in natural language processing, pages 845–854._
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel,
Barret Zoph, Sebastian Borgeaud, Dani Yogatama,
Maarten Bosma, Denny Zhou, Donald Metzler, et al.
2022a. Emergent abilities of large language models.
_arXiv preprint arXiv:2206.07682._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou,
et al. 2022b. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural
_Information Processing Systems, 35:24824–24837._
Mengzhou Xia, Mikel Artetxe, Chunting Zhou, Xi Victoria Lin, Ramakanth Pasunuru, Danqi Chen, Luke
Zettlemoyer, and Ves Stoyanov. 2022. Training trajectories of language models across scales. arXiv
_preprint arXiv:2212.09803._
Zhen Yang, Ming Ding, Qingsong Lv, Zhihuan Jiang,
Zehai He, Yuyi Guo, Jinfeng Bai, and Jie Tang. 2023.
Gpt can solve mathematical problems without a calculator. arXiv preprint arXiv:2309.03241.
-----
**A** **The CSID Analysis of Multiplication**
**and Division**
This section extends the CSID analysis to nD by nD
multiplication and nD by mD division, following
the algorithmic approach outlined in Section 4.2
but excluding the RevOrder technique.
**A.1** **Multiplication**
The decomposition of an nD by nD multiplication
into n sub-multiplications, each an nD by 1D operation, serves as the initial step. This phase does not
generate SIDs, as all required digits for a × b are
immediately accessible.
Addressing these sub-multiplications yields up
to $n^2 + n \times (n+1) = 2n^2 + n$ SIDs, with $n^2$ SIDs
allocated for the sub-multiplications and n×(n+1)
SIDs dedicated to storing the outcomes.
Aggregating the results of these sub-multiplications necessitates a maximum of
$4n^2$ SIDs, with each addition consuming $4n$ SIDs,
2n for carry-overs and another 2n for storing the
results.
Consequently, directly generating an nD by nD
multiplication outcome requires a maximum of
$6n^2 + n$ SIDs, indicating a complexity of $O(n^2)$.
This substantial complexity explains the difficulty
models face with even 2D by 2D multiplications.
Decomposition methods, as applied in models
like GOAT-7B and MathGLM-2B, reduce the CSID
to $O(n)$ by omitting intermediate results from the
SID count, though carry-overs are still considered.
**A.2** **Division**
For an nD by mD division, typically $n - m$ itera-
tions are needed, each estimating a quotient digit.
Each iteration involves an nD by 1D multiplication and a subtraction, with the multiplication
incurring 2m SIDs for result and carry-over digit
storage, and the subtraction using up to 2n SIDs
for result storage and borrow digits.
Thus, the total CSID for an nD by mD division
reaches $(2m+2n)(n-m) = 2n^2 - 2m^2$, amounting to a complexity of $O(n^2 - m^2)$.
This estimation excludes the quotient estimation
step’s complexity, which could further complicate
large number divisions, potentially surpassing the
$O(n^2 - m^2)$ complexity.
In models like GOAT-7B and MathGLM-2B,
using decomposition methods keeps the CSID at
_O(n), with the subtraction’s borrow digits being_
the primary complexity factor.
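Analogously, a minimal sketch (ours) of the division bound derived above; as in the text, the quotient-estimation step itself is excluded from the count.

```python
def csid_div_upper(n: int, m: int) -> int:
    """Upper bound on SIDs for an nD-by-mD division: (2m + 2n) SIDs per
    iteration (multiplication result/carries plus subtraction result/borrows),
    over the n - m iterations, i.e. 2n^2 - 2m^2."""
    return (2 * m + 2 * n) * (n - m)

print(csid_div_upper(16, 1))   # 510
print(csid_div_upper(16, 8))   # 384
```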
Peiyuan Zhang, Guangtao Zeng, Tianduo Wang, and
[Wei Lu. 2024. Tinyllama: An open-source small](http://arxiv.org/abs/2401.02385)
[language model.](http://arxiv.org/abs/2401.02385)
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang,
Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen
Zhang, Junjie Zhang, Zican Dong, et al. 2023. A
survey of large language models. _arXiv preprint_
_arXiv:2303.18223._
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei,
Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Claire Cui, Olivier Bousquet, Quoc Le, et al. 2022.
Least-to-most prompting enables complex reasoning in large language models. _arXiv preprint_
_arXiv:2205.10625._
-----
**C.2** **Training Details**
The models were trained with a batch size of 32
and a learning rate of 5e-5, employing a warm-up
ratio of 0.08 over 3 epochs. During each epoch, the
model was exposed to both the additional datasets
and the GSM8K datasets sequentially.
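For concreteness, the hyperparameters above could be expressed as follows with the Hugging Face Trainer API; this is only a sketch under the assumption that such a framework is used (the paper does not name its training stack), and the output path is hypothetical.

```python
from transformers import TrainingArguments

# Minimal sketch mirroring the settings stated in C.2.
args = TrainingArguments(
    output_dir="revorder-finetune",   # hypothetical path, not from the paper
    per_device_train_batch_size=32,   # batch size 32
    learning_rate=5e-5,               # learning rate 5e-5
    warmup_ratio=0.08,                # warm-up ratio 0.08
    num_train_epochs=3,               # 3 epochs
)
```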
**C.3** **Equation Errors**
Fig. 9 showcases representative errors encountered
in the GSM8K test set, attributable to difficulties
in adhering to RevOrder instructions. For instance,
while the model successfully solved the second
equation in reverse order, it faltered in performing
the simple task of reversing the solution to arrive
at the final result.
Figure 6: The distribution of the equations in the training set.
**B** **Training Data for Arithmetic**
**Experiments**
The training dataset comprises 1.7 million equations. For addition and subtraction tasks, equations
involve numbers as large as 16D on both sides.
Multiplication tasks are capped at 8D by 8D, supplemented by 16D by 1D equations to enhance
generalization in the test set. Division tasks feature dividends up to 16D. Fig. 6 illustrates the
distribution of these equations. The majority of training samples are division, since the quotient estimation steps require more training samples to achieve high precision.
**C** **Settings for Math Word Experiments**
**C.1** **Training Data**
Our approach involved two types of instructional
data to train models on arithmetic tasks using
RevOrder.
Firstly, we modified the original GSM8K dataset
to reflect RevOrder formatting. An example of this
adaptation is illustrated in Fig. 7.
Secondly, to further bolster the model’s proficiency in RevOrder calculations, we compiled an
additional enhancement dataset. A sample from
this dataset is depicted in Fig. 8.
Given the limited size of the training data, the 7B
model faced challenges in mastering the use of the
reverse symbol r|. To address this, we introduced a notation where all numbers enclosed by @@ signify reverse order.
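As a purely illustrative sketch of how such a marker might be rendered when building training strings (the exact tokenization and sign handling used by the authors are not spelled out in this appendix and are assumptions here):

```python
def reversed_number_token(n: int) -> str:
    """Illustrative @@-notation: the digits between the markers are written in
    reverse order. Sign placement is an assumption made for this sketch."""
    s = str(abs(n))
    return ("-" if n < 0 else "") + "@@" + s[::-1] + "@@"

print(reversed_number_token(12345))   # @@54321@@
```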
-----
Figure 7: A data sample from the GSM8K dataset formatted in RevOrder.
Figure 8: A sample from the additional enhancement dataset for RevOrder calculations.
Figure 9: Illustrative errors from the GSM8K test set encountered by the model trained with RevOrder.
-----
| [
"Si, Shen",
"Peijun, Shen",
"Danhao, Zhu"
] | 2024-02-23T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2402.03822 | https://arxiv.org/abs/2402.03822 | https://www.semanticscholar.org/paper/dc9ffc09f51f8c4c828219fec7d5967847f09684 |
Reverse That Number! Decoding Order Matters in Arithmetic Learning | Recent advancements in pretraining have demonstrated that modern Large Language Models (LLMs) possess the capability to effectively learn arithmetic operations. However, despite acknowledging the significance of digit order in arithmetic computation, current methodologies predominantly rely on sequential, step-by-step approaches for teaching LLMs arithmetic, resulting in a conclusion where obtaining better performance involves fine-grained step-by-step. Diverging from this conventional path, our work introduces a novel strategy that not only reevaluates the digit order by prioritizing output from the least significant digit but also incorporates a step-by-step methodology to substantially reduce complexity. We have developed and applied this method in a comprehensive set of experiments. Compared to the previous state-of-the-art (SOTA) method, our findings reveal an overall improvement of in accuracy while requiring only a third of the tokens typically used during training. For the purpose of facilitating replication and further research, we have made our code and dataset publicly available at \url{https://anonymous.4open.science/r/RAIT-9FB7/}. | This work introduces a novel strategy that not only reevaluates the digit order by prioritizing output from the least significant digit but also incorporates a step-by-step methodology to substantially reduce complexity. | ## Reverse That Number! Decoding Order Matters in Arithmetic Learning
**Daniel Zhang-Li[1][∗], Nianyi Lin[1][∗], Jifan Yu[1], Zheyuan Zhang[1], Zijun Yao[1], Xiaokang Zhang[2],**
**Lei Hou[1], Jing Zhang[2], Juanzi Li[1][†]**
1Department of Computer Science and Technology, Tsinghua University
2Renmin University of China
{zlnn23,linny20}@mails.tsinghua.edu.cn
{lijuanzi}@tsinghua.edu.cn
**Abstract**
Recent advancements in pretraining have
demonstrated that modern Large Language
Models (LLMs) possess the capability to effectively learn arithmetic operations. However, despite acknowledging the significance of digit order in arithmetic computation, current methodologies predominantly rely on sequential, stepby-step approaches for teaching LLMs arithmetic, resulting in a conclusion where obtaining better performance involves fine-grained
step-by-step. Diverging from this conventional
path, our work introduces a novel strategy that
not only reevaluates the digit order by prioritizing output from the least significant digit but
also incorporates a step-by-step methodology
to substantially reduce complexity. We have
developed and applied this method in a comprehensive set of experiments. Compared to
the previous state-of-the-art (SOTA) method,
our findings reveal an overall improvement
of 11.1% in accuracy while requiring only a
third of the tokens typically used during training. For the purpose of facilitating replication and further research, we have made our
[code and dataset publicly available at https://](https://anonymous.4open.science/r/RAIT-9FB7/)
[anonymous.4open.science/r/RAIT-9FB7/.](https://anonymous.4open.science/r/RAIT-9FB7/)
Figure 1: Reversing the numbers in training enables
models to better learn to do arithmetic operations.
_∗_ Equal contribution
_† Corresponding Author_

**1** **Introduction**

Large language models (LLMs), though proficient in a range of tasks (Ouyang et al., 2022; Achiam et al., 2023; Anil et al., 2023), encounter challenges in arithmetic operations due to their inherent design limitations, such as reliance on next-token prediction and limited working memory (Bubeck et al., 2023). Despite their capability to utilize external tools for circumventing direct arithmetic computations during inference (Gao et al., 2023; Imani et al., 2023; Schick et al., 2023), efficiently and effectively incorporating arithmetic proficiency within LLMs remains an unresolved issue. However, previous studies have demonstrated that LLMs can learn arithmetic effectively through pretraining (Yang et al., 2023). This suggests that it might be feasible to efficiently teach LLMs arithmetic operations through fine-tuning alone, without the need for external tools such as calculators.

The prevailing challenge in employing Large Language Models for arithmetic tasks is intricately linked to their next-token prediction mechanism. This mechanism often leads to a reversed computation order, where more significant digits are calculated before less significant ones, a flaw attributed to LLMs' inherent limitation in forward planning (Bubeck et al., 2023). This characteristic has led to the perception that arithmetic in LLMs is akin to other complex symbolic and logical tasks, necessitating a similar approach (Nye et al., 2021). Consequently, prior research has predominantly focused on the necessity of a step-by-step methodology, breaking down arithmetic into a series of sub-steps, as a critical strategy for addressing these challenges (Wei et al., 2022; Lee et al., 2023).

Such a technique achieves significant gains in performance but introduces a trade-off between efficiency and effectiveness, necessitating a balance between the number of tokens per training case and
-----
Figure 2: Example training data for multiplication, where the task is solved using a step-by-step process. During
the ith intermediate step, the intermediate product is first computed. Then, inspired by the human process, we set
the least significant digits(Uhigh) unchanged and directly added the product to the remaining digits(Ulow) of the
cumulative sum. Finally, we pop the least significant digit from the updated Uhigh and append it into Ulow as it will
not be added with non-zero digits in later steps. During decoding, we express all numbers in Little-Endian, where
the least significant digit goes first. We convert all the numbers back to Big-Endian before printing.
the total number of training cases. To enhance both
efficiency and effectiveness without resorting to a
brute-force integration of step-by-step processes,
we adopt a novel approach termed LEFT (**Little-Endian Fine-Tuning**). Rather than incrementally integrating step-by-step mechanisms, we employ a strategy that reverses the number representation, prioritizing the computation of less significant digits. This approach utilizes the concept of **Little-Endian**, where numbers are represented with the least significant digits first, while maintaining the position of any negative signs. In contrast, the standard numeral representation is referred to as **Big-Endian**. Figure 1 demonstrates that initiating
output generation with the most significant digit
may result in carry-related errors. In contrast, employing a Little-Endian format, where the model
produces the number 100863 as 368001, simplifies
carry operations resulting in a correct solution. We
present experimental results (Sec. 5) showcasing
that LEFT not only improves accuracy by 11.1%
against the current state-of-the-art (SOTA) for large
digit inputs but also demonstrates efficiency by utilizing just 5.2% of the training tokens required by
the previous SOTA for addition and subtraction
tasks. Specifically, in multiplication, LEFT records
a 35.7% performance gain while consuming only
56.6% of the training tokens used by prior SOTA.
The key contributions of this paper include:
- We propose a novel method, LEFT, leveraging Little-Endian to reduce the complexity of learning arithmetic operations.

- We conduct a detailed evaluation and demonstrate that LEFT achieves better performance with fewer tokens used during training.

- Observations from our experiments indicate that, by reversing digit order, LLMs are capable of solving addition in a human-like manner.
**2** **Problem Formulation**
Consider the simple case where the input ($I$) consists of two numbers, $A$ and $B$, combined with an operator $op$. We denote the digits of $A$ as $A = \sum_{i=0}^{m-1} 10^i a_i$, where each $a_i$ is a single-digit integer ($0 \leq a_i \leq 9$), and $a_{m-1} \neq 0$ to ensure no leading zeros. Similarly, for $B$, we express its digits as $B = \sum_{i=0}^{n-1} 10^i b_i$, where each $b_i$ is a single-digit integer ($0 \leq b_i \leq 9$), and $b_{n-1} \neq 0$. We assume the ground truth output is a $k$-digit number, $C = \sum_{i=0}^{k-1} 10^i c_i$ (for $C < 0$, we use $c_{-1}$ to represent the negative sign). The trained LLM outputs an ordered sequence $O = \{o_1, o_2, \ldots\}$, which includes the output number $C \subseteq O$. As step-by-step designs often incorporate intermediate results, we denote the $i$-th intermediate result as $U^i$. Finally, we define the remaining output as auxiliary tokens ($X = O \setminus \{U^i \mid \forall i\} \cup \{C\}$).
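To make the digit-indexing convention concrete, the following tiny Python sketch (our own illustration, not part of the paper) decomposes a number into the digits $a_i$ used throughout this section.

```python
def digits_little_endian(x: int):
    """Return [a_0, a_1, ..., a_{m-1}] such that x = sum(10**i * a_i)."""
    a = [int(d) for d in reversed(str(abs(x)))]
    assert sum(10 ** i * ai for i, ai in enumerate(a)) == abs(x)
    return a

print(digits_little_endian(28862))   # [2, 6, 8, 8, 2]
```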
-----
**3** **Little-Endian Fine-Tuning**
In order to effectively and efficiently teach LLMs arithmetic, we need to address three crucial questions: 1. What is the complexity of standard Big-Endian training (where no step-by-step process is applied)? 2. Is there room for optimizing the standard method? 3. How do we optimize the cases where step-by-step is still required? In the remaining parts of this section, we tackle these questions one by one.
**3.1** **Learning Complexity of Arithmetic**
Autoregressive LLMs are interpreted as probabilistic models that predict output sequences by maximizing the likelihood of generating the correct output. In operations such as addition, this process of prediction can be formalized as follows:

$$\arg\max_{c_i} P(c_i \mid a_{0\sim n-1}, b_{0\sim m-1}, c_{i+1\sim k})$$

Considering the specific nature of addition, where the outcome of each digit is influenced only by digits of equal or lesser significance, the process is refined to concentrate on pertinent inputs:

$$\arg\max_{c_i} P(c_i \mid a_{0\sim i}, b_{0\sim i}) \quad (1)$$

Assuming that all numbers involved possess an identical number of digits simplifies the analysis. Under this assumption, during the generation of each digit, there exist 10 potential inputs from each of the two numbers, resulting in $10^{2i+2}$ possible input combinations. Given that the output digit can assume 10 possible values, the complexity of predicting a single digit's value transitions from $10^{2i+2}$ input conditions to 10 output conditions. The overall learning complexity is quantified by summing the probabilities of accurately predicting each digit, based on the inputs up to that digit:

$$\mathcal{L}_{Big} = -\sum_{i=0}^{k-1} \log P(c_i \mid a_{0\sim i}, b_{0\sim i}) \quad (2)$$

Accordingly, the cumulative learning complexity, denoted as $\mathcal{C}_{Big}$, is conceptualized as the aggregate of complexities across all digits, with the input variations providing a lower bound:

$$\mathcal{C}_{Big} = \sum_{i=0}^{n} 10^{2i+2} \geq 10^{2n+2} \quad (3)$$

This model illustrates the exponential increase in learning complexity with the increment of digit count $n$, presenting a significant scalability challenge in teaching arithmetic to LLMs.
**3.2** **Optimizing Complexity via Little-Endian**
In addressing the complexity of arithmetic operations, it is noted that the output token with the
greatest complexity is typically the most significant
digit. Interestingly, unlike computational models,
humans often do not consider all input digits simultaneously. Instead, they start from the least significant digit, using any carry-over to simplify the computation. Assuming the model can similarly infer
the carry from the previous digit (ai−1, bi−1, ci−1),
we can streamline the optimization target by focusing on this simplified context:
$$\arg\max_{c_i} P(c_i \mid a_i, a_{i-1}, b_i, b_{i-1}, c_{i-1})$$
Such adjustment leads to a significant reduction in input complexity, now quantified as $10^5$. By adopting this revised generating order, the task becomes markedly less challenging:

$$\mathcal{C}_{Little} = \sum_{i=0}^{n-1} 10^5 \leq n \cdot 10^5$$

For cases where $n \geq 2$, this model showcases a substantial decrease in learning complexity compared to the conventional approach ($\mathcal{C}_{Little} \leq n \cdot 10^5 < 10^{2n+2} \leq \mathcal{C}_{Big}$). Such findings illuminate the potential benefits of inverting the decoding order to mitigate complexity. Motivated
by this insight, we propose abandoning the classic,
step-by-step design prevalent in previous methodologies in favor of revising addition and subtraction
training to leverage this more efficient strategy.
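To give a rough feel for the gap between the two bounds above, the following snippet (purely our own numeric illustration) prints them for a few digit counts.

```python
# Compare the lower bound on C_Big with the upper bound on C_Little
# for a few maximum digit counts n (illustrative only).
for n in (2, 5, 12):
    c_big_lower = 10 ** (2 * n + 2)
    c_little_upper = n * 10 ** 5
    print(f"n={n:2d}  C_Big >= {c_big_lower:.1e}  C_Little <= {c_little_upper:.1e}")
```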
**Addition.** In addressing addition within LEFT,
the traditional approach of processing numbers
from the most significant digit to the least significant is reimagined. By reversing both the input
and output numbers, the calculation aligns with the
Little-Endian format, where operations commence
from the least significant digit and progress towards
the most significant. Such conversion simplifies the
decoding order, making it more intuitive and akin to
human arithmetic practices. We hypothesized that
the model can autonomously recompute the necessary carry for the subsequent significant digit. This
method eliminates the need for a step-by-step design or the introduction of auxiliary tokens, streamlining the addition process without necessitating
any extra tokens beyond the sum itself.
**Subtraction.** For subtraction, the model simplifies the process by first determining if the result
-----
will be negative, then applying the operation in
Little-Endian order. This approach, which keeps
the negative sign’s position unchanged (e.g., -256
becomes -652), enhances efficiency by eliminating the need for intermediate results that assume
a non-negative outcome. This streamlined method
contrasts with traditional digit-wise subtraction, offering a more straightforward computation strategy.
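The following minimal Python sketch (our own illustration, not the paper's training pipeline or tokenization) makes the representation concrete: digits are reversed while any sign stays in place, and addition can then be emitted digit by digit with a simple running carry.

```python
def to_little_endian(text: str) -> str:
    """Reverse the digits of a (possibly negative) integer string, keeping the
    sign in place, e.g. "-256" -> "-652", "100863" -> "368001"."""
    sign, digits = ("-", text[1:]) if text.startswith("-") else ("", text)
    return sign + digits[::-1]

def add_little_endian(a_le: str, b_le: str) -> str:
    """Digit-wise addition over Little-Endian strings: the carry only flows
    towards digits that have not been emitted yet, so each output digit can be
    produced as soon as its inputs are seen (the property LEFT relies on)."""
    out, carry = [], 0
    for i in range(max(len(a_le), len(b_le))):
        da = int(a_le[i]) if i < len(a_le) else 0
        db = int(b_le[i]) if i < len(b_le) else 0
        carry, digit = divmod(da + db + carry, 10)
        out.append(str(digit))
    if carry:
        out.append(str(carry))
    return "".join(out)

# Figure 1's example: 72001 + 28862 = 100863, i.e. "368001" in Little-Endian.
a, b = to_little_endian("72001"), to_little_endian("28862")
assert add_little_endian(a, b) == to_little_endian("100863")
```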
**3.3** **Augmenting Step-by-Step**
The application of Little-Endian formatting extends
beyond the realms of addition and subtraction, offering substantial benefits in operations that inherently require a step-by-step approach due to their
complexity. One prime example of such an operation is multiplication, where the intricacies of the
computation process are significantly amplified.
**Multiplication.** Traditional methods often involve breaking down the solving process into manageable chunks, typically computing the product of
a single digit with a multi-digit number, and then
summing these intermediate products. This conventional approach, however, often operates under
the Big-Endian framework, starting with the most
significant digits and potentially complicating the
computation of intermediate products.
In contrast, the use of Little-Endian proposes a
significant optimization. By reversing the order
of digits—starting from the least significant—this
method aligns with the natural flow of human computation, simplifying both the computation of intermediate product and subsequent sums.
**4** **Implementation**
In this section, we delve into the detailed implementation of LEFT and explore the methodologies
applied in our experiments, along with the baselines for comparison. Our discussion spans from
the step-by-step design utilized in the experiments
(Sec. 4.1) to dataset generation (Sec. 4.2) and other
settings for the experiments(Sec. 4.3).
**4.1** **Step-By-Step Design**
**Addition/Subtraction.** While our hypothesis
posits that the step-by-step process might not be essential for efficiently learning addition and subtraction, we incorporate it as a comparative measure
to validate our assumption. We adopt the step-by-step design from the chain-of-thought methodology (Wei et al., 2022), as reproduced in previous
studies (Zhou et al., 2022), for LEFT’s addition and
subtraction tasks when necessary for evaluation.
**Multiplication.** We previously outlined the key
features of the step-by-step approach for multiplication within LEFT, yet a direct implementation
was not provided. As shown in Figure 2, with the
reversal of all numbers, the task is divided into numerous substeps. Each substep iterates over the
digits of the first input number, $a_i \in A$, starting from the least significant digit. In each iteration,
the process begins by multiplying the current digit
with the second input number to generate an intermediate product. This intermediate product is
then added to the cumulative sum of products from
previous iterations. Since the lower i digits of the
product are always zero, these are not explicitly
represented; instead, the product is directly added
to the higher section of the cumulative sum. The
higher section is defined as the part of the cumulative sum obtained in the last step of the previous
iteration, which considers the lower i-digits as a
fixed result and defines the remaining digits as the
higher section of the cumulative sum.
This refined step-by-step design for multiplication highlights the efficiency and adaptability of
the Little-Endian approach in managing complex
arithmetic operations. By streamlining the integration of intermediate products into a simplified
cumulative sum, this method not only improves
the performance and clarity of the model but also
showcases the extensive utility of Little-Endian formatting in enhancing computational processes.
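To make the procedure concrete, here is a minimal Python sketch of the decomposition described above (our own illustration; the exact textual format of the paper's training traces follows Figure 2, which this sketch only approximates).

```python
def le(n: int) -> str:
    """Render a non-negative integer in Little-Endian (least significant digit first)."""
    return str(n)[::-1]

def stepwise_multiply(a: int, b: int):
    """Sketch of the step-by-step Little-Endian multiplication in Sec. 4.1.

    Iterate over the digits of `a` from least to most significant. Each step
    multiplies one digit by `b`, adds the product to the 'higher' part of the
    cumulative sum, then pops one digit from the higher part into the fixed
    'lower' part (that digit will not change in later steps)."""
    lower = ""          # already-fixed result digits (Little-Endian)
    higher = 0          # remaining, not-yet-fixed part of the cumulative sum
    steps = []
    for digit in (int(d) for d in reversed(str(a))):
        prod = digit * b                     # intermediate 1-by-n product
        higher += prod                       # add to the higher section only
        lower += str(higher % 10)            # pop least significant digit -> fixed
        higher //= 10
        steps.append(f"{digit} * {le(b)} = {le(prod)}; "
                     f"cumulative = {lower} | {le(higher)}")
    result_le = (lower + le(higher)).rstrip("0") or "0"   # drop Big-Endian leading zeros
    return steps, result_le

steps, result_le = stepwise_multiply(18082, 45788)
for s in steps:
    print(s)
print("result (Little-Endian):", result_le)            # 616839728
print("result (Big-Endian):   ", int(result_le[::-1]))  # 18082 * 45788 = 827938616
```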
**4.2** **Dataset**
The inherent characteristics of arithmetic calculations, which do not necessitate human-generated
labels, enable the automated generation of training
and testing sets in our study. Our primary objective
is to create a dataset that is fair, isolated, and balanced, facilitating a comprehensive evaluation of
the LEFT’s effectiveness and efficiency.
-----
**Fairness.** Given that different methods may operate on varied data inputs, we aim to minimize
the variance in performance attributable to different inputs as much as possible. To achieve this,
we initiate the process by generating a set of meta
data during the data generation phase. Each piece
of meta data is conceptualized as a triplet in the
form (A, op, B). This triplet serves as a unified
seed for generating training and testing data for
each method, ensuring that the same set of input
is utilized across methods. Then, each triplet is
expanded and formatted to suit the specific requirements of each method’s data format.
**Isolation.** Recognizing the critical importance
of preventing data leakage, we take meticulous
steps to ensure the uniqueness of input number sets,
denoted by {A, B}. This strategy guarantees that
the test set contains no identical input number pairs
as found in the training set, thereby also ensuring
the uniqueness of each training and testing set.
**Digit Distribution Balancing.** Echoing previous
methods that have highlighted the importance of
balanced data distribution (Lee et al., 2023), we ensure that both the training and test sets are balanced
such that the maximum quantity of any single number in each data slice falls within the digit range of
[5, 12]. Specifically, we generate a total of 15K training examples and 3K test examples, with 5K training points for each operation, accompanied by 1K test data points for each operation, to maintain this balance.
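A minimal sketch of how such meta triplets could be produced is shown below; the function and variable names, and the exact sampling of the second operand, are our assumptions, while the uniqueness check on the input pair and the uniform choice of maximum digit count mirror the isolation and balancing requirements described above.

```python
import random

def rand_with_digits(rng, d: int) -> int:
    """Random positive integer with exactly d digits."""
    return rng.randint(10 ** (d - 1), 10 ** d - 1)

def gen_meta(num_per_op=5000, ops=("+", "-", "*"), digit_range=(5, 12), seed=0):
    """Generate (A, op, B) seed triplets: isolated (unique input pairs) and
    balanced over the maximum digit count in [5, 12]."""
    rng = random.Random(seed)
    seen, meta = set(), []
    for op in ops:
        count = 0
        while count < num_per_op:
            max_d = rng.randint(*digit_range)        # balance over max digits
            a = rand_with_digits(rng, max_d)
            b = rand_with_digits(rng, rng.randint(1, max_d))
            key = frozenset((a, b))                  # isolation: unique {A, B}
            if key in seen:
                continue
            seen.add(key)
            meta.append((a, op, b))
            count += 1
    return meta
```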
**4.3** **Experiment Setup**
**Baseline.** We first include End-To-End training
used in during pretraining methods (Yang et al.,
2023) as a ground to compare performance in previous methods. We then include Scratchpad(Nye
et al., 2021), one of the early founders in using
step-by-step approaches to break down arithmetic
into multiple steps. We also include Chain-Of_Thought (Wei et al., 2022) which provided a gen-_
eral approach of breaking step-by-step to a wide
range of complex tasks. In addition, we include the
_Detailed-Scratchpad method introduced in (Zhou_
et al., 2022). (Zhou et al., 2022) also introduces
_Algorithmic-Prompting technique but as it requires_
too many auxiliary tokens making it hard to fit 12digit training into the context length. As a result,
we exclude it during our evaluation.
**Metric.** As arithmetic reasoning is strongly affected by error propagation, solutions with interme
diate errors are almost impossible to provide the
correct solution. As a result, we directly use the
accuracy (ACC) of the predicted output to evaluate the effectiveness of the methods. As the discussion for efficiency is aimed at training betterperformed models using fewer resources, we record
the amount of tokens used for training and observe
the change in accuracy as more tokens are used.
**Backbone Model.** The base checkpoint for our
experimental framework is Llama2-13B (Touvron
et al., 2023), chosen for its status as a well-regarded
and openly accessible LLM. To address the need
for processing longer sequences, the model’s context length has been extended to 4, 096 tokens.
**5** **Experiments**
We now turn to a systematic evaluation of the proposed method. Specifically, we design and conduct
a series of comprehensive analysis which seeks to
answer the following research questions:
**Q1 Is LEFT effective and efficient?(Sec. 5.1)**
**Q2 What grants LEFT the ability to effectively**
_tackles the provided task?(Sec. 5.2)_
**Q3 What can be further done on LEFT?(Sec. 5.3)**
**5.1** **Direct Evaluation Over Performance**
We began our analysis with the overall performance of LEFT against previous methods for
jointly trained and evaluated addition, subtraction,
and multiplication performance. We then conduct
operation-by-operation analysis to observe the results of training when jointly training is opt-out.
**Observation 1: LEFT Learns Faster Than Base-**
**lines.** Table 1 shows the resulting performance of
each method after training. We order the baselines
according to token used during training. LEFT
used the least amount of training token among all
the step-by-step methods, yet achieving 11.1% performance improvement over previous SOTA.
Specifically, LEFT’s accuracy on addition and
subtraction is slightly below Scratchpad-Detailed.
However, LEFT only used 160K and 161K tokens for learning addition and subtraction. But
_Scratchpad-Detailed used 2, 936K and 3, 254K_
for training. This means LEFT uses only 1/20 of
training data yet still achieves similar performance.
_LEFT also achieved 35.7% accuracy improvement_
over previous SOTA on multiplication, further highlighting LEFT’s effectiveness and efficiency.
-----
|Method|Endian|Step-By-Step|+|−|×|Overall|Token Usage|
|---|---|---|---|---|---|---|---|
|End-To-End|Big|No|63.3|32.3|00.0|31.9|494,815|
|Chain-of-Thought|Big|Yes|88.0|83.5|08.2|59.9|4,938,148|
|Scratchpad|Big|Yes|94.8|73.1|00.0|56.0|5,747,670|
|Scratchpad-Detailed|Big|Yes|99.8|97.3|52.8|83.3|10,995,191|
|LEFT (Our)|Little|Mix|98.8|95.9|88.5|94.4|3,040,616|
Table 1: Performance comparison between methods, trained with 5K data for each operation with randomly
generated data. The maximum digits of input numbers for each data are equally distributed in the range of [5, 12]
for each operation. The test set is generated in a similar manner but with only 1K data per operation. LEFT uses
Little-Endian to represent all numbers and excludes the step-by-step process for addition and subtraction.
**Observation 2: Using Little-Endian Alone Ob-**
**tains Better Efficiency On Addition/Subtraction.**
During method design(Sec. 3.2), we proposed that
Little-Endian is a better substitute than existing
methods, which leverage step-by-step to reduce
the complexity required for arithmetic. However,
we have not yet examined such a statement. This
raised two major questions: (1) Would it be better
to contain step-by-step? (2) How does step-by-step
itself perform? As a result, we apply step-by-step
for closer observation. We scale down the training
data to half and a quarter of training cases than the
joint evaluation and observe the change in performance. To omit influences caused by joint training,
we train addition and subtraction separately.
As shown in Figure 3, we observe that the use
of Little-Endian outperforms other settings in both
operations, despite the use of fewer tokens when
compared to the step-by-step settings.
Moreover, we observe that the conventional
_Chain-Of-Thought approach, which does not incor-_
porate Little-Endian formatting, also significantly
lags behind the LEFT configuration. This outcome
suggests that employing a step-by-step methodology does not invariably enhance performance. Particularly in addition, both the presence and absence
of Little-Endian in the settings lead to inferior results compared to employing Little-Endian without
a step-by-step approach. This implies that reversing
the endian inherently captures critical information,
which the step-by-step process aimed to convey in
digit generation. Consequently, not only does the
step-by-step application decrease efficiency, but it
also deteriorates model performance by introducing additional chance of error propagation.
Figure 3: Performance when integrating step-by-step. BE stands for Big-Endian and LE stands for Little-Endian. The graph on the left shows the results after training on addition; the right figure shows results for training and evaluation on subtraction.

On the other hand, taking a closer look at subtraction, we see that whether or not step-by-step is integrated, the integration of Little-Endian brings much better performance. However, the learning curve of Little-Endian without step-by-step is smoother than in addition. We believe this
could be related to the pretraining setting, where
the model is trained with Big-Endian. On addition,
when the carry is not occurring, knowing what endian is involved doesn’t have a strong effect on the
result, the model could falsely interpret the task as
aligning the numbers with the leftmost digit and
still achieve some level of performance. However,
on subtraction, the endian greatly affects the result,
as whether the result is negative is affected by the
most significant digit, which is strongly related to
the endian. Such difference resulted in poor performance in the beginning, as the model will have
a great chance of failing unless it actually understands the task. But it also brings faster learning as
the chance for the model to falsely understand the
task reduces. We believe such case highlights that
the arithmetic ability of a fine-tuned model could
be further improved with a backbone model that is
pretrained with Little-Endian representation.
**Observation 3: Little-Endian And Step-by-Step**
**Are Both Crucial For Multiplication.** We now
conduct a detailed examination for multiplication.
We re-evaluate our backbone model to examine our designs on multiplication. For better comparison,
-----
ysis for the errors in our main experiment in order to find an explanation of the performance gain
caused by changing the endian. To do so, we first
selected the place where the first error occurred as
an indication of the error of each falsely inferred
test case. This is because error propagation is critical in arithmetic. We then focused on two crucial
parts during each inference step, calculating the
intermediate 1-by-n product and the cumulative
sum. As a result, we find that among the 417 errors
that occurred during intermediate calculations in
_Scratchpad-Detailed: 1. 140 errors occurred dur-_
ing calculating the intermediate product; 2. 236
errors occurred during accumulating sum. Both
operations had much better performance in LEFT,
where only 77 errors were observed during computing the intermediate product and only 22 errors
were observed when updating the cumulative sum.
The error occurrence is decreased by a factor of 10
for summation and by a factor of 2 for the intermediate product. We believe this is because the carry is easier to compute when the less significant digits are already shown, which possibly reduces the complexity of computing the result for the current digit. The error for the intermediate sum is reduced by a greater factor because the addition training is transferable when accumulating the sum in LEFT, whereas in Scratchpad-Detailed, the addition task stands more on its own. Despite performing slightly better when evaluated on addition, it cannot transfer its ability to other tasks like multiplication.
**Finding 2: LEFT Conducts Addition Just Like**
**Humans** We now take a closer observation of
how LEFT conducts addition. By logging the attention (Vaswani et al., 2017) scores in the model, we
observe a correlation between the output digit and
related digits from the input numbers, as shown in
Figure 4. We observe that the input digits are recognized when computing the corresponding output
during generation in some attention heads. We also
observed that, in the 22th layer, shown traits suggest the fine-tuned LLM has learned to re-compute
the carry from the previous digits. Adressing our
hypothesized during the method design, this proofs
the assumption that the model can recover the carry
when it’s used (Sec. 3.2). This is a interesting indication because it suggests Little-Endian might
be conducting training in a manner similar to how
humans conduct addition without a draft paper.
|Method|Epoch 1|Epoch 2|Epoch 3|Token Usage|
|---|---|---|---|---|
|End-To-End|-|-|-|186K|
|Detailed-Scratchpad|24.9|32.6|39.3|4,805K|
|LEFT|61.1|89.1|91.6|2,719K|
|w/o Step-by-Step|-|-|-|186K|
|w/ Big-Endian|24.2|42.8|52.7|2,719K|
Table 2: Multiplication scores by different epochs and
token usage. We observe settings without step-by-step
solution failed to learn the task.
we include two additional settings other than the
standard End-To-End. We first include a similar
design as we proposed for solving addition and subtraction, where the model directly outputs the result
but the input and output are both in Little-Endian.
We then include LEFT’s step-by-step design but
convert the numbers into Big-Endian. We also
measure the different performances after different
epochs of training to observe the convergence for
the same amount of training cases.
The results are shown in Table 2. We first observe that when the use of step-by-step is removed,
it becomes impossible to learn multiplication. This
demonstrates the need for step-by-step to break
down the complexity in solving multiplication is
still needed when only 5K of training data is available. We also observe that when Little-Endian is
removed, the performance further improves over
the step-by-step setting. The model also converges
much faster, as the performance after 2 epochs of
training is already close to the performance of the
last epoch, an accuracy of 91.6%. We are amazed
that LEFT achieves better performance when the
model is trained only on multiplication, suggesting
the potential for further optimization.
We also observe the number of tokens used during LEFT’s training in multiplication is approximately half of the tokens used by Scratchpad_Detailed. In addition and subtraction training, to-_
kens are better off with a factor of 20. This shows
that LEFT with better performance achieves even
greater improvement in token efficiency.
**5.2** **Case Studies**
We now conduct a detailed study of the results obtained in the previous section, seeking to discover
findings that can help future studies.
**Finding 1: Little-Endian Reduces Step-By-Step**
**Errors.** In this section, we conduct an error anal
-----
Figure 4: Visualization of attention weights during inference, with rows representing output tokens and columns
indicating input tokens involved in generation. Attention weights are square-root transformed for enhanced visibility
of correlations. The attention on the left (layer 14) reveals that output digits are correlated with their inputs, while the attention on the right (layer 22) suggests carry information reconstruction.
the accuracy. Qian et al. recognized the challenge that LLM performance drops as repeated symbols increase. Goat (Liu and Low, 2023) classified tasks by the learnability of different operations and conducted supervised fine-tuning. Lee and Kim proposed the Recursion of Thought to divide the solving process into short contexts.

On the other hand, some works also focus on analyzing arithmetic learning. Yuan et al. proposed MATH 401 to evaluate LLMs' arithmetic ability. Jelassi et al. discussed the length generalization ability in arithmetic. Muffo et al. evaluated the ability of Transformers to perform arithmetic operations following a pipeline that decomposes numbers in decimal before performing computations, and demonstrated that this method was 60% more accurate than GPT-3 on 5-digit addition and subtraction tasks, but was inferior to GPT-3 on 2-digit multiplication tasks. Lee et al. conducted a comprehensive analysis of training strategies and discussed that reversing the output of addition can speed up the learning process.
**7** **Conclusion**
In this study, we introduced a novel approach for
teaching arithmetic to LLMs by reversing the number order to emphasize the least significant digit.
This strategy, which aligns with human arithmetic
practices, significantly reduces computational complexity and training data requirements, demonstrating an 11.1% increase in overall accuracy over previous SOTA and showcasing efficiency in token
usage during training. The success of our method
suggests the potential for broader applications in
mathematical problem-solving and in environments
|Max Digit|5|6|7|8|9|10|11|12|
|---|---|---|---|---|---|---|---|---|
|+|100.0|98.4|100.0|99.2|97.6|97.6|98.4|99.2|
|−|92.0|96.8|93.6|96.8|100.0|100.0|93.6|94.4|
|×|93.6|96.0|86.4|96.0|88.0|86.4|84.8|76.8|
Table 3: Accuracy trends with increasing max input
digits. We observe a steeper decline in multiplication’s
performance compared to other operations.
**5.3** **Additional Error Analysis**
Finally, we look at the errors that occurred in LEFT's joint experiment from the perspective of the maximum number of input digits. As shown in Table 3, LEFT performs well on lower-digit inputs, but it loses part of its performance when challenged with higher-digit inputs. This drop in performance is most significant for higher-digit multiplications, where the operated digits become much more complicated compared to addition and subtraction. This indicates that, despite performing well overall, LEFT still faces challenges on larger inputs, highlighting the need for future studies to not only focus on effectiveness and efficiency but also continue to narrow the gap between LLMs' limited ability to scale towards larger inputs and the corresponding capability in humans.
**6** **Related Works**
Previous methods that seek to teach LLMs arithmetic mainly focus on the use of step-by-step processes. Scratchpad (Nye et al., 2021) was one of the earliest works to recognize the value of step-by-step arithmetic solving. Zhou et al. focused on in-context learning and showed that a detailed version of Scratchpad could significantly improve
-----
with limited resources. We hope this study of ours
paves the way for future investigations into optimizing LLM training techniques for numerical
reasoning and arithmetic precision.
**Limitations**
Our study introduces a novel approach to arithmetic learning in LLMs but is not without limitations. Firstly, our focus on basic arithmetic operations such as addition, subtraction, and multiplication leaves unexplored territories in more complex
arithmetic and mathematical problem-solving areas.
Secondly, the generalizability of our method to domains beyond arithmetic is yet to be determined. A
critical consideration is the reliance on LLMs pretrained with standard numeral expressions; our experiments did not explore the potential benefits of
pretraining models directly with reversed numeral
expressions. Addressing these limitations could
further enhance the applicability and efficiency of
LLMs in numerical reasoning and arithmetic precision, suggesting a promising direction for future
research to broaden the scope of operations covered and to investigate the impact of pretraining
strategies.
**Ethics Statement**
Our research contributes to the field of artificial
intelligence by proposing an innovative approach
to improve the efficiency and accuracy of LLMs
in performing arithmetic operations. This advancement has the potential to positively impact areas
where numerical understanding is crucial, including but not limited to, educational technologies,
data analysis, and automated reasoning systems.
By improving the capability of LLMs to process
and understand arithmetic, our work aims to support further developments in technology that can
assist in educational settings, enhance scientific research, and provide more reliable computational
tools for industries relying on accurate numerical
data processing.
We are mindful of the importance of conducting
our research with a commitment to ethical principles, ensuring that our methodologies and results
are transparent, reproducible, and contribute constructively to the academic community and society
at large. While our work primarily focuses on the
technical aspects of improving LLMs’ arithmetic
abilities, we recognize the broader implications of
AI and machine learning advancements. Therefore,
we encourage the responsible use and continuous
ethical evaluation of AI technologies, emphasizing the importance of using such advancements to
foster positive societal outcomes.
**References**
Josh Achiam, Steven Adler, Sandhini Agarwal, Lama
Ahmad, Ilge Akkaya, Florencia Leoni Aleman,
Diogo Almeida, Janko Altenschmidt, Sam Altman,
[Shyamal Anadkat, et al. 2023. GPT-4 technical re-](https://doi.org/10.48550/arXiv.2303.08774)
[port. CoRR, abs/2303.08774.](https://doi.org/10.48550/arXiv.2303.08774)
Rohan Anil, Sebastian Borgeaud, Yonghui Wu, JeanBaptiste Alayrac, Jiahui Yu, Radu Soricut, Johan
Schalkwyk, Andrew M. Dai, Anja Hauth, Katie Millican, David Silver, Slav Petrov, Melvin Johnson,
Ioannis Antonoglou, Julian Schrittwieser, Amelia
Glaese, Jilin Chen, Emily Pitler, Timothy P. Lillicrap, Angeliki Lazaridou, Orhan Firat, James Molloy,
Michael Isard, Paul Ronald Barham, Tom Hennigan, Benjamin Lee, Fabio Viola, Malcolm Reynolds,
Yuanzhong Xu, Ryan Doherty, Eli Collins, Clemens
Meyer, Eliza Rutherford, Erica Moreira, Kareem
Ayoub, Megha Goel, George Tucker, Enrique Piqueras, Maxim Krikun, Iain Barr, Nikolay Savinov,
Ivo Danihelka, Becca Roelofs, Anaïs White, Anders
Andreassen, Tamara von Glehn, Lakshman Yagati,
Mehran Kazemi, Lucas Gonzalez, Misha Khalman,
[Jakub Sygnowski, and et al. 2023. Gemini: A fam-](https://doi.org/10.48550/arXiv.2312.11805)
[ily of highly capable multimodal models. CoRR,](https://doi.org/10.48550/arXiv.2312.11805)
abs/2312.11805.
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg,
Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro,
[and Yi Zhang. 2023. Sparks of Artificial General](https://arxiv.org/abs/2303.12712)
[Intelligence: Early experiments with GPT-4.](https://arxiv.org/abs/2303.12712)
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon,
Pengfei Liu, Yiming Yang, Jamie Callan, and Gra[ham Neubig. 2023. PAL: program-aided language](https://proceedings.mlr.press/v202/gao23f.html)
[models. In International Conference on Machine](https://proceedings.mlr.press/v202/gao23f.html)
_Learning, ICML 2023, 23-29 July 2023, Honolulu,_
_Hawaii, USA, pages 10764–10799._
Shima Imani, Liang Du, and Harsh Shrivastava. 2023.
[MathPrompter: Mathematical reasoning using large](https://aclanthology.org/2023.acl-industry.4)
[language models. In Proceedings of the 61st An-](https://aclanthology.org/2023.acl-industry.4)
_nual Meeting of the Association for Computational_
_Linguistics (Volume 5: Industry Track), pages 37–42._
Samy Jelassi, Stéphane d’Ascoli, Carles DomingoEnrich, Yuhuai Wu, Yuanzhi Li, and François Char[ton. 2023. Length generalization in arithmetic trans-](https://doi.org/10.48550/arXiv.2306.15400)
[formers. CoRR, abs/2306.15400.](https://doi.org/10.48550/arXiv.2306.15400)
Nayoung Lee, Kartik Sreenivasan, Jason D. Lee,
Kangwook Lee, and Dimitris Papailiopoulos. 2023.
[Teaching arithmetic to small transformers. CoRR,](https://doi.org/10.48550/arXiv.2307.03381)
abs/2307.03381.
-----
[Soochan Lee and Gunhee Kim. 2023. Recursion of](https://aclanthology.org/2023.findings-acl.40)
[thought: A divide-and-conquer approach to multi-](https://aclanthology.org/2023.findings-acl.40)
[context reasoning with language models. In Find-](https://aclanthology.org/2023.findings-acl.40)
_ings of the Association for Computational Linguis-_
_tics: ACL 2023, pages 623–658._
[Tiedong Liu and Bryan Kian Hsiang Low. 2023. Goat:](http://arxiv.org/abs/2305.14201)
[Fine-tuned llama outperforms gpt-4 on arithmetic](http://arxiv.org/abs/2305.14201)
[tasks.](http://arxiv.org/abs/2305.14201)
Matteo Muffo, Aldo Cocco, and Enrico Bertino. 2022.
[Evaluating transformer language models on arith-](https://aclanthology.org/2022.lrec-1.30)
[metic operations using number decomposition. In](https://aclanthology.org/2022.lrec-1.30)
_Proceedings of the Thirteenth Language Resources_
_and Evaluation Conference, pages 291–297, Mar-_
seille, France. European Language Resources Association.
Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari,
Henryk Michalewski, Jacob Austin, David Bieber,
David Dohan, Aitor Lewkowycz, Maarten Bosma,
David Luan, Charles Sutton, and Augustus Odena.
[2021. Show your work: Scratchpads for intermediate](http://arxiv.org/abs/2112.00114)
[computation with language models.](http://arxiv.org/abs/2112.00114)
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida,
Carroll Wainwright, Pamela Mishkin, Chong Zhang,
Sandhini Agarwal, Katarina Slama, Alex Ray, et al.
2022. Training language models to follow instructions with human feedback. Advances in Neural In_formation Processing Systems, pages 27730–27744._
Jing Qian, Hong Wang, Zekun Li, Shiyang Li, and
[Xifeng Yan. 2023. Limitations of language models in](https://aclanthology.org/2023.acl-long.516)
[arithmetic and symbolic induction. In Proceedings](https://aclanthology.org/2023.acl-long.516)
_of the 61st Annual Meeting of the Association for_
_Computational Linguistics (Volume 1: Long Papers),_
pages 9285–9298.
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta
Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola
[Cancedda, and Thomas Scialom. 2023. Toolformer:](https://doi.org/10.48550/arXiv.2302.04761)
[Language models can teach themselves to use tools.](https://doi.org/10.48550/arXiv.2302.04761)
_CoRR, abs/2302.04761._
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton
Ferrer, Moya Chen, Guillem Cucurull, David Esiobu,
Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan
Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa,
Isabel Kloumann, Artem Korenev, Punit Singh Koura,
Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten,
Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu,
Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas
[Scialom. 2023. Llama 2: Open Foundation and Fine-](https://arxiv.org/abs/2307.09288)
[Tuned Chat Models.](https://arxiv.org/abs/2307.09288)
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob
Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz
[Kaiser, and Illia Polosukhin. 2017. Attention is all](https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html)
[you need. In Advances in Neural Information Pro-](https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html)
_cessing Systems 30: Annual Conference on Neural_
_Information Processing Systems 2017, December 4-9,_
_2017, Long Beach, CA, USA, pages 5998–6008._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le,
[and Denny Zhou. 2022. Chain-of-thought prompting](http://papers.nips.cc/paper_files/paper/2022/hash/9d5609613524ecf4f15af0f7b31abca4-Abstract-Conference.html)
[elicits reasoning in large language models. In Ad-](http://papers.nips.cc/paper_files/paper/2022/hash/9d5609613524ecf4f15af0f7b31abca4-Abstract-Conference.html)
_vances in Neural Information Processing Systems 35:_
_Annual Conference on Neural Information Process-_
_ing Systems 2022, NeurIPS 2022, New Orleans, LA,_
_USA, November 28 - December 9, 2022._
Zhen Yang, Ming Ding, Qingsong Lv, Zhihuan Jiang,
Zehai He, Yuyi Guo, Jinfeng Bai, and Jie Tang. 2023.
[GPT can solve mathematical problems without a cal-](https://doi.org/10.48550/arXiv.2309.03241)
[culator. CoRR, abs/2309.03241.](https://doi.org/10.48550/arXiv.2309.03241)
Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang,
[and Songfang Huang. 2023. How well do large lan-](https://doi.org/10.48550/arXiv.2304.02015)
[guage models perform in arithmetic tasks? CoRR,](https://doi.org/10.48550/arXiv.2304.02015)
abs/2304.02015.
Hattie Zhou, Azade Nova, Hugo Larochelle, Aaron C.
Courville, Behnam Neyshabur, and Hanie Sedghi.
[2022. Teaching algorithmic reasoning via in-context](https://doi.org/10.48550/arXiv.2211.09066)
[learning. CoRR, abs/2211.09066.](https://doi.org/10.48550/arXiv.2211.09066)
-----
| [
"Jing, Zhang",
"Daniel, Zhang-Li",
"Nianyi, Lin",
"Xiaokang, Zhang",
"Jifan, Yu",
"Zijun, Yao",
"Zheyuan, Zhang",
"Lei, Hou",
"Juanzi, Li"
] | 2024-03-09T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2403.05845 | https://arxiv.org/abs/2403.05845 | https://www.semanticscholar.org/paper/1a41726a9648fcaea5e6d1a460b2b8c0639197e1 |
Rewriting Math Word Problems with Large Language Models. | N/A | null | null | [
"Kole, Norberg",
"Husni, Almoubayyed",
"Stephen E., Fancsali",
"Logan, De Ley",
"Kyle, Weldon",
"April, Murphy",
"Steve, Ritter"
] | 2023-01-01T00:00:00 | null | false | 0 | 0 | null | https://eric.ed.gov/?id=ED655931 | null | null |
RoMath: A Mathematical Reasoning Benchmark in Romanian | Mathematics has long been conveyed through natural language, primarily for human understanding. With the rise of mechanized mathematics and proof assistants, there is a growing need to understand informal mathematical text, yet most existing benchmarks focus solely on English, overlooking other languages. This paper introduces RoMath, a Romanian mathematical reasoning benchmark suite comprising three datasets: RoMath-Baccalaureate, RoMath-Competitions and RoMath-Synthetic, which cover a range of mathematical domains and difficulty levels, aiming to improve non-English language models and promote multilingual AI development. By focusing on Romanian, a low-resource language with unique linguistic features, RoMath addresses the limitations of Anglo-centric models and emphasizes the need for dedicated resources beyond simple automatic translation. We benchmark several open-weight language models, highlighting the importance of creating resources for underrepresented languages. We make the code and dataset available. | null | [
"Adrian, Cosma",
"Ana-Maria, Bucur",
"Emilian, Radoi"
] | 2024-09-17T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.11074 | https://arxiv.org/abs/2409.11074 | https://www.semanticscholar.org/paper/57d9f01ff509a81ac1b59b1faaac43b2f1ebb1ec |
|
Robustness Assessment of Mathematical Reasoning in the Presence of Missing and Contradictory Conditions | Large language models (LLMs) have demonstrated impressive performance on reasoning tasks, which can be further improved through few-shot prompting techniques. However, the current evaluation primarily focuses on carefully constructed benchmarks and neglects the consideration of real-world reasoning problems that present missing and contradictory conditions, known as ill-defined problems. Our observations suggest that existing few-shot prompting techniques are ineffective in such scenarios, often providing overconfident answers or hallucination. To further study this problem, we develop a benchmark called Problems with Missing and Contradictory conditions (PMC) and introduce two novel metrics to evaluate the performance of few-shot prompting methods in these scenarios. Our analysis using the PMC benchmark reveals a trade-off dilemma between the performance of mathematical reasoning for well-defined problems and the ability to recognize ill-defined problems. To address the challenges posed by PMC, we propose a novel few-shot prompting method called SMT-LIB Prompting (SLP), which utilizes the SMT-LIB language to model the problems instead of solving them directly. Subsequently, a double-check solving strategy checks the satisfiability and uniqueness of the solution and provides final feedback. Extensive experiments demonstrate the superiority of our SLP approach compared to existing few-shot prompting methods when dealing with problems with missing and contradictory conditions. We will open-source our benchmark and code to facilitate future research. | null | ## Robustness Assessment of Mathematical Reasoning in the Presence of Missing and Contradictory Conditions
**Shi-Yu Tian[12][†], Zhi Zhou[1][†], Lin-Han Jia[1], Lan-Zhe Guo[13][‡], Yu-Feng Li[12][‡]**
1National Key Laboratory for Novel Software Technology, Nanjing University
2School of Artifical Intelligence, Nanjing University
3School of Intelligence Science and Technology, Nanjing University
```
[email protected], [email protected]
```
_† Equal Contribution_ _‡ Corresponding Author_
**Abstract**
Large language models (LLMs) have demonstrated impressive performance on
reasoning tasks, which can be further improved through few-shot prompting techniques. However, the current evaluation primarily focuses on carefully constructed
benchmarks and neglects the consideration of real-world reasoning problems that
present missing and contradictory conditions, known as ill-defined problems. Our
observations suggest that existing few-shot prompting techniques are ineffective
in such scenarios, often providing overconfident answers or hallucination. To
further study this problem, we develop a benchmark called Problems with Missing
_and Contradictory conditions (PMC) and introduce two novel metrics to evaluate_
the performance of few-shot prompting methods in these scenarios. Our analysis
using the PMC benchmark reveals a trade-off dilemma between the performance
of mathematical reasoning for well-defined problems and the ability to recognize
ill-defined problems. To address the challenges posed by PMC, we propose a novel
few-shot prompting method called SMT-LIB Prompting (SLP), which utilizes the
SMT-LIB language to model the problems instead of solving them directly. Subsequently, a double-check solving strategy checks the satisfiability and uniqueness of
the solution and provides final feedback. Extensive experiments demonstrate the
superiority of our SLP approach compared to existing few-shot prompting methods
when dealing with problems with missing and contradictory conditions. We will
open-source our benchmark and code to facilitate future research.
**1** **Introduction**
Recently, large language models (LLMs) have demonstrated impressive performance on challenging reasoning tasks, including mathematical reasoning tasks [Hendrycks et al., 2021, Lewkowycz
et al., 2022]. Their capabilities can be further enhanced through the use of few-shot prompting
techniques such as chain-of-thought prompting (CoT) [Wei et al., 2022], program-aided language
prompting (PAL) [Gao et al., 2023]. However, these techniques are mainly evaluated on carefully
constructed and well-defined benchmark datasets [Cobbe et al., 2021, Hosseini et al., 2014, KoncelKedziorski et al., 2016, Patel et al., 2021], while neglecting the consideration of mathematical
reasoning problems in the real world are often with missing, surplus and even contradictory condition
issues [Puchalska and Semadeni, 1987].
For example, the application of AI-assisted models is becoming increasingly popular in the fields
of tax and legal consulting [Armour and Sako, 2020, Pavlova and Knyazeva, 2022, Roberts, 2024,
Saragih et al., 2023]. These models require complete and detailed representations of input information.
However, individuals often provide inadequate and contradictory information under pressure or when
Preprint. Under review.
-----
dealing with unfamiliar knowledge [Brown et al., 2009, Weller et al., 2008]. In these cases, LLMs
prefer to produce overconfident answers that lack a logical basis due to hallucination problems rather
than reject to answer the ill-defined problems(like Example 1), thereby substantially compromising
the robustness of LLMs for real-world applications.
Example 1: An example of ill-defined question and corresponding answer
**Ill-defined question: Two trains leave San Rafael at the same time. The next day, they travel**
northwards, covering 150 miles. What’s the distance covered by the first train in the two
days? # Missing information about the speed of each vehicle separately
**GPT-3.5 answer: 150 %**
**GPT-3.5 (CoT) answer: Since both trains leave at the same time and travel the same distance,**
each train covers 150 miles in the two days. The answer is 150 miles. %
To this end, we focus on investigating the robustness of LLMs in performing mathematical reasoning
tasks using few-shot prompting methods, particularly when dealing with problems that present
missing and contradictory conditions (i.e., ill-defined problems). We first construct an evaluating
benchmark, namely, Problems with Missing and Contradictory conditions (PMC). This benchmark
dataset introduces mutations to four commonly used mathematical reasoning datasets by incorporating
missing and contradictory problems, thereby resulting in eight distinct sub-datasets. Then, two novel
metrics are introduced to evaluate the performance of LLMs and some few-shot prompting methods
on our proposed PMC benchmark.
By evaluating on PMC, we found that existing few-shot prompting techniques typically assume a
priori that the problem is solvable, thus neglecting to model ill-defined problems. This leads to
the identification of ill-defined problems entirely relying on the capabilities of the backbone model
itself, leaving the algorithm with a trade-off dilemma of reasoning for well-defined problems and
recognizing ill-defined problems (Section 4).
Inspired by this, we propose a novel few-shot prompting method, namely, SMT-LIB Prompting (SLP),
which models each problem using SMT-LIB language [Barrett et al., 2010] rather than directly solve
them. Then, a double-check solving strategy is adopted to verify whether this problem is satisfiable
and has a unique solution with SMT solver. With the help of SMT-LIB Prompting, we can effectively
recognize and reject ill-defined problems to avoid the potential risks of overconfident but incorrect
answers. Finally, if the problem is well-defined, we can obtain the valid answer utilizing the SMT
solver.
Our contribution can be summarized as follows:
1) We investigate a novel problem related to mathematical reasoning, specifically focusing on
problems that involve missing and contradictory conditions. This investigation is significant
because of its practical applications in real-world scenarios.
2) We develop a benchmark, named PMC, and propose two novel metrics for evaluating the effectiveness of few-shot prompting methods in addressing problems with missing and contradictory
conditions.
3) To address the challenges presented by PMC, we propose a novel few-shot prompting approach
named SLP. This approach utilizes the SMT-LIB language to model the problems, allowing for the
identification of ill-defined problems before LLMs provide overconfident but incorrect answers.
4) We conduct comprehensive experiments and case analysis on our proposed SLP method as well
as existing methods. The results demonstrate that our proposal outperforms existing prompting
methods by a significant margin.
**2** **PMC Benchmark**
In this section, we present our PMC benchmark in detail, which comprises two versions, i.e.,
M (missing conditions) and C (contradictory conditions) versions, based on four common math
-----
mathematical reasoning datasets. Then, we introduce the evaluation protocol and the metrics used to
evaluate the LLMs’ performance.
**2.1** **Dataset Construction**
We choose four common mathematical reasoning datasets, i.e., GSM8k [Cobbe et al., 2021],
SVAMP [Patel et al., 2021], AddSub [Hosseini et al., 2014], and MultiArith [Koncel-Kedziorski
et al., 2016], as the seed datasets to construct our benchmark. Specifically, we mutate the testing
problems in the seed datasets to generate the M and C versions of our benchmark using the following
three strategies:
Example 2: An Example of PMC
**Statement: Gunter is trying to count the jelly beans in a jar. He asks his friends how many**
they think are in the jar. One says 80. Another says 20 more than half the first one. A third
says 25% more than the first one. What is their average guess? # Excepted Answer: 80
**M Version: Gunter is trying to count the jelly beans in a jar. He asks his friends how many**
they think are in the jar. One says 80 a certain number. Another says 20 more than half the
first one. A third says 25% more than the first one. What is their average guess?
**C Version: Gunter is trying to count the jelly beans in a jar. He asks his friends how many**
they think are in the jar. One says 80. Another says 20 more than half the first one. A third
says 25% more than the first one. What is their average guess? If the first friend’s guess is
more than 77 jelly beans (# Aline with the condition),the average guess will be more than 86
(# Contrary to the expected answer) .
**Prompt-Based Removing: We first adopt a prompt-based method to remove one condition from the**
original testing problems using a carefully designed template and GPT-4 [OpenAI, 2023] model. We
obtain an initial M version of each dataset where each problem has one missing condition, thereby
becoming an ill-defined math problem. Based on the initial M version, we implement an LLM-human
collaborative refinement strategy to further refine the constructed datasets: (1) We adopt the zero-shot
baseline method with a carefully designed prompt and the majority voting strategy, to verify each
construction. Through this approach, any problems that fail to meet the desired standards are filtered
out. (2) Filtered problems are manually inspected and subsequently revised to ensure the correctness
of mutation.
**Template-Based Removing: We adopt a straightforward template-based method to select one**
```
[number] in problem statement and replace it with some. Then, the LLM-human collaborative
```
refinement strategy is adopted to ensure the quality of constructed datasets.
**Template-Based Addition: We design a template-based method to add contradictory hints to the end**
of each problem. The template is if [variable] [more/less] than [number], the answer
```
will be [more/less] [number], where [variable] is a variable selected in the problem state
```
ment and [number] and [more/less] are automatically decided, ensuring problems unsolved.
We applied the three methods to the four seed datasets (details can be found in Appendix A) and
verified that there was not a satisfiable and unique solution solution to each mutated problem. Finally,
our PMC benchmark contains eight different datasets, i.e., GSM8k-M, GSM8k-C, SVAMP-M,
SVAMP-C, AddSub-M, AddSub-C, MultiArith-M, and MultiArith-C, which can comprehensively
evaluate the robustness of LLMs in mathematical reasoning problems with missing and contradictory
conditions. An illustration of mutated M and C versions of PMC is presented in Example 2, where
red strike-through indicates deleted sentences and blue indicates added sentences.
**2.2** **Evaluation Protocol**
To assess the robustness of few-shot prompting methods in mathematical reasoning problems involving missing and contradictory conditions, we propose two distinct evaluation metrics: the Rejection
-----
Rate (R-Rate) and the Reaction Score (R-Score). These metrics serve to evaluate the ability of
the few-shot prompting methods to handle ill-defined problems and effectively respond when both
ill-defined and well-defined problems are given.
We denote P as the textual problem space, where p ∼P is a problem and S(p) is the corresponding
solution. M (p) denotes the solution generated using the few-shot prompting method M on problem
_p, where M_ (p) ∈R indicates that the method rejects to answer this problem and R denotes the
rejection domain. Then, for an ill-defined problem setPwell ∈P, we define the R-Rate and R-Score as follows. Pill ∈P and a well-defined problem set
**Rejection Rate measures the ability of the few-shot prompting methods to identify ill-defined**
problems. We denote the R-Rate as the ratio of ill-problems rejected by each method to the total
number of ill-problems:
R-Rate = _p∈Pill_ [I][ [][M] [(][p][)][ ∈R][]] (1)
**Pill**
P _|_ _|_
**Reaction Score comprehensively measures the ability of the few-shot prompting methods to handle**
ill-defined problems as well as solve well-defined problems. For well-defined problems, the method
will gain one point for each correct answer and lose one point for each wrong answer. Rejecting to
answer well-defined problems will not affect the R-Score as the rejection will not result in the user
getting the wrong answer. For ill-defined problems, the method will gain one point for each correct
rejection. The R-Score is defined as:
R-Score =
I[S(p) = M (p)] − I[S(p) ̸= M (p) ∧ _M_ (p) /∈R]
_p∈XPwell_ (2)
+ I[M (p) ∈R]
_pX∈Pill_
For computing R-Score, this metric is closely related to the ratio of normal-defined to pathologicaldefined questions in the test dataset. We use a hyperparameter α = |Pwell|/|Pill|, to control the
proportion of the test dataset consisting of the two types of samples. Unless otherwise specified later,
we set α to 1.
**3** **Methodology**
Existing few-shot prompting methods typically assume a priori that the problem is solvable, thus
selectively model the conditions that are favorable for the problem to be solved and construct some
illusory assumptions for the missing information. This leads to the identification of ill-defined
problems entirely relying on the capabilities of the backbone model itself, leaving the algorithm with
a trade-off dilemma of reasoning for well-defined problems and recognizing ill-defined problems.
In this section, we will introduce our SLP approach to preliminarily address the challenges in our
PMC benchmark, which contains two parts: SMT-LIB Prompting and a double-check solving strategy.
The former will use the SMT-LIB language to model the problem robustly by defining variables and
adding constraints for each condition. The latter will check if the problem has a satisfiable and unique
solution and give corresponding feedback(answer/reject) to the user. An overall illustration of our
SLP approach is shown in Figure 1.
**3.1** **SMT-LIB Prompting**
The SMT-LIB(Satisfiability Modulo Theories Library) [Barrett et al., 2010] is a tool for working with
satisfiability problems. It provides a standard notation compatible input language for representing
logical formulas. And powful SMT solvers, such as Z3 [de Moura and Bjørner, 2008], extend the
classical boolean satisfiability problem (SAT problem) to enable verification of numerical arithmetic
problems, among others. The SMT solver will initially determine whether the modeled problem is
satisfiable (SAT/UNSAT). If it is satisfiable, the solver will then provide a feasible solution within the
feasible domain of the problem.
Inspired by this, we propose our SMT-LIB prompting approach, referred to as SLP, for addressing
the solving and recognition dilemma in our PMC benchmark. To the best of our knowledge, we are
-----
**Question:**
A new program had some
downloads in the first month.
The number of downloads in
the second month was three
times as many as the
downloads in the first month,
but then reduced some in the
third month. How many
downloads did the program
have total over the three
months?
**Modeling problem** **Double-**
**Check**
**Prompts based on SMT-Lib** **Solving**
Solver.add(second ······ = first * 3) **Strategy**
Solver.add(third = second - reduced)
Solver.add(total = first + second +third)
return total, solver
**SMT Solver**
**UNSAT**
**Double-Check Solving Strategy** **Reject**
- ····· **S.check()** **S.solve()** **Candidate Solution**
Solver.add(second = first * 3)
Solver.add(third = second - reduced)
Solver.add(total = first + second +third)return total, solver Add new constraint **147** **same** **Answer**
**Modeling of problem by LLM** **Satisfiability** **Uniqueness**
Figure 1: An illustration of our SLP approach.
**User**
the first to apply a similar tool in the field of robust math reasoning. We utilize LLMs to generate
SMT-LIB expressions for each problem with few-shot prompting techniques to model the question.
These expressions are then evaluated by the SMT solver to verify their satisfiability.
To elaborate on our SLP approach, let f represent the formalized SMT-LIB expression for a given
problem p. We prepare a set of example problems Pexample = {p1, p2, . . ., pk} along with their
corresponding SMT-LIB expressions Fexample = {f1, f2, . . ., fk}. The prompts, which include
in-context examples, are designed as
**prompt ≡< p1, f1 > ∥** _< p2, f2 > ∥_ _. . . ∥_ _< pk, fk > ∥ptest_
where ∥ denotes string concatenation and < ·, · > represents a pair of a problem and its corresponding
formal expression. For each problem ptest, we feed the prompt to the LLMs and obtain the
formalized SMT-LIB expression ftest from the LLMs’ response.
The formal expression begins with denoting the variables necessary to solve the problem, followed
by adding constraints to these variables. Finally, we return the variable to the SMT solver for further
verification. In our approach, we directly use Z3 as the SMT solver and instruct the LLMs to generate
Z3 formal expressions to avoid potential errors caused by converting between the standard SMT-LIB
language and the Z3 solver. Full prompts can be found in Appendix D.
**3.2** **Double-Check Solving Strategy**
After modeling the problem, we apply a double-check solving strategy to solve these problem. We
use a SMT solver to verify whether this problem is well-defined. Specifically, we verify both the
_satisfiability of the formal expression and the uniqueness of the solution. Then If the formal expression_
is either unsatisfiable or encompasses multiple solutions, the SLP approach identifies the problem as
ill-defined and rejects to answer the problem. If it passes this double-check process, we return the
verified answer to the user.
To be specific, to check the satisfiability of the formal expression, we utilize the Z3 solver. SLP
approach regards the problem as ill-defined and rejects the answer if the formal expression is
unsatisfiable(UNSAT). To assess the uniqueness of the solution, We develop this check through
a two-stage process. First, we utilize the Z3 solver to determine one solution and subsequently
incorporate this candidate solution as a constraint into the formal expression. If the formal expression
remains satisfiable, then it implies that the formal expression encompasses multiple solutions, leading
the SLP algorithm to reject the answer as it violates the uniqueness of the answer.
Following the verification of both satisfiability and uniqueness, the SLP algorithm delivers the final
answer in the form of a satisfiable and unique solution. A detailed implementation of the SLP
algorithm is presented in Algorithm 1 in Appendix B.
-----
Table 1: The comparison between SLP and comparison methods with GPT-3.5 Turbo (0710 version).
- The results are conducted using GPT-3.5 Turbo (0710 version). The best results are in bold.
|AddSub AddSub-M AddSub-C MultiArith MultiArith-M MultiArith-C Methods Accuracy R-Rate R-Score R-Rate R-Score Accuracy R-Rate R-Score R-Rate R-Score|Col2|Col3|Col4|Col5|Col6|Col7|
|---|---|---|---|---|---|---|
||Accuracy|R-Rate R-Score|R-Rate R-Score|Accuracy|R-Rate R-Score|R-Rate R-Score|
|Basic CoT PAL PHP Ours|94.43 91.63 89.87 91.39 88.86|63.54 26.64 63.03 27.91 2.28 -8.98 74.68 29.87 90.89 37.39|2.02 -4.55 15.70 1.51 0.25 -10.00 0.76 -7.08 85.06 31.89|88.46 98.00 98.17 98.50 86.50|63.33 20.65 56.17 25.83 2.33 -0.41 57.17 27.25 81.50 33.50|2.00 -9.96 10.50 3.00 1.67 -1.42 0.16 -1.25 85.00 35.25|
|SVAMP SVAMP-M SVAMP-C GSM8k GSM8k-M GSM8k-C Methods Accuracy R-Rate R-Score R-Rate R-Score Accuracy R-Rate R-Score R-Rate R-Score|||||||
||Accuracy|R-Rate R-Score|R-Rate R-Score|Accuracy|R-Rate R-Score|R-Rate R-Score|
|Basic 75.37 72.41 15.62 13.14 -14.16 69.45 56.37 -2.61 8.50 -26.57 CoT 81.58 51.10 9.83 9.40 -11.06 71.72 51.25 -1.08 30.10 -12.39 PAL 82.80 6.30 -13.70 1.00 -16.80 76.55 5.15 -18.95 0.38 -21.34 PHP 80.80 66.30 15.85 1.90 -16.35 78.71 53.85 8.57 2.04 -17.32 Ours 81.80 81.00 25.80 85.00 27.80 69.22 68.39 10.42 78.09 15.27|||||||
**4** **Experiments**
In this section, we first present the experimental results for analyzing the performance of existing
methods and demonstrating the superiority of our SLP approach. Then, a series of discussions are
conducted to investigate the challenges of our PMC benchmark.
**4.1** **Experiment Setup**
**Datasets and LLMs. As introduced in Section 2, we conduct experiments on our PMC benchmark,**
which contains eight different constructed datasets, i.e., GSM8k-M, GSM8k-C, SVAMP-M, SVAMPC, AddSub-M, AddSub-C, MultiArith-M, and MultiArith-C, as well as their original versions. For all
experiments, we choose the 0710 version of GPT-3.5 Turbo and GPT-4 as the backend LLMs, which
are the most advanced and widely used LLMs at present.
**Evaluated methods. We choose five well-performed few-shot prompting methods for evaluation,**
including one zero-shot baseline method, two language-based few-shot prompting methods, one
program-based few-shot prompting method, and our proposed SLP approach. The methods are
introduced as follows:
- Basic, which is the zero-shot baseline method, directly feeds the problem and instructions to
the LLMs without any example problem in the context.
- CoT [Wei et al., 2022], requires the model to explicitly output intermediate step-by-step
reasoning through natural language before providing the final answer.
- PAL [Gao et al., 2023], converts each step of problem-solving into a programming language
format and subsequently utilizes an external programming language interpreter for execution,
thereby obtaining the results.
- PHP [Zheng et al., 2023], involves asking a question multiple times and using the answer
obtained from the last inquiry as a hint for the next inquiry until the same answer is obtained
multiple times in a row.
- SLP (ours), utilizes SMT-LIB to model the problems, then uses an external SMT solver to
check for a feasible solution to the problem as well as obtain the ground-truth answer.
**Prompts Setting. We prepare eight contexts containing corresponding problem-solving forms for**
each method. For the context of the comparison method, we followed the settings in their original
paper. Meanwhile, for our method, we choose a combination of four normal problems, two missing
condition problems, two contradictory problems, and their corresponding SMT modeling statements
(Python Z3 format). Detailed prompts can be found in Appendix D.
-----
Table 2: The comparison between SLP and comparison methods with GPT-4 (0710 version).
- The results are conducted using GPT-4 (0710 version). The best results are in bold.
|AddSub AddSub-M AddSub-C MultiArith MultiArith-M MultiArith-C Methods Accuracy R-Rate R-Score R-Rate R-Score Accuracy R-Rate R-Score R-Rate R-Score|Col2|Col3|Col4|Col5|Col6|Col7|
|---|---|---|---|---|---|---|
||Accuracy|R-Rate R-Score|R-Rate R-Score|Accuracy|R-Rate R-Score|R-Rate R-Score|
|Basic CoT PAL PHP Ours|94.57 96.45 95.95 95.95 91.90|90.61 41.06 84.77 39.31 75.19 33.67 92.40 42.65 94.43 41.64|1.30 -3.02 2.03 -2.41 0.51 -3.67 2.53 -2.27 81.52 35.18|92.17 99.00 98.00 96.33 95.33|89.67 37.17 81.00 39.50 65.83 31.33 96.34 45.41 93.50 43.92|4.16 -5.58 3.33 2.33 3.00 -0.08 3.83 -0.83 85.00 39.67|
|SVAMP SVAMP-M SVAMP-C GSM8k GSM8k-M GSM8k-C Methods Accuracy R-Rate R-Score R-Rate R-Score Accuracy R-Rate R-Score R-Rate R-Score|||||||
||Accuracy|R-Rate R-Score|R-Rate R-Score|Accuracy|R-Rate R-Score|R-Rate R-Score|
|Basic 82.35 79.82 25.40 10.38 -9.24 57.69 75.25 -3.62 14.43 -33.83 CoT 92.14 74.30 30.72 5.30 -4.02 92.61 62.63 25.73 8.46 -1.69 PAL 95.50 61.10 26.35 0.50 -3.95 83.09 41.13 9.71 0.53 -10.72 PHP 89.40 81.60 33.45 7.30 -3.70 94.38 78.01 33.89 3.41 -3.41 Ours 94.80 83.20 37.45 86.00 38.85 88.17 81.12 32.18 80.52 31.88|||||||
**4.2** **Main Results**
The main results of our study are presented in Tables 1 and 2, with all methods using greedy decoding
(i.e. temperature = 0). We report five metrics for each dataset. They are the accuracy (×100) of the
original dataset, rejection rate (R-Rate) (×100) of ill-defined questions and reaction score (R-Score)
for the mixed dataset. We conduct the following observations based on the presented results.
**The few-shot prompting methods hinder the rejection of problems with missing conditions. For**
problems with missing conditions, the zero-shot baseline method (i.e., Basic) already exhibits good
identification capabilities. As the performance of the backend LLMs improves (from GPT-3.5 Turbo
to GPT-4), this capability is further enhanced. However, current few-shot techniques hurt LLMs’
ability to identify ill-defined problems. Particularly for the PAL method, the rejection rate of problems
with missing conditions using GPT-3.5 significantly degrades compared to the zero-shot baseline
method. When using GPT-4, the rejection rate of PAL also decreases by at least 15% compared to the
zero-shot baseline method. Upon examining error examples, we discovered the reason that, during
the process of writing the programming language, PAL defaults the missing value to zero or another
random number, leading to incorrect results.
**The few-shot prompting methods cannot handle problems with contradictory conditions. All**
comparison methods exhibit poor performance in identifying problems with contradictory conditions,
where they can only identify no more than 15.7% of ill-defined problems with contradictory conditions. Upon analyzing error samples, it was observed that many methods choose to disregard the
contradictory conditions and instead, focus on directly modeling the preceding problem part. When
facing two contradictory conditions, both zero-shot baseline and few-shot prompting methods fail to
recognize the conflicts, resulting in choosing randomly to believe in one condition while ignoring the
other. Such responses pose significant security risks in practical scenarios, as users do not receive
feedback indicating that the query is ill-defined.
**SLP improves the LLMs’ ability to handle both well-defined and ill-defined problems simulta-**
**neously. As shown in Tables 1 and 2, our SLP approach enhances the LLMs’ ability to address both**
well-defined and ill-defined problems simultaneously. For the ill-defined problem, involving either
missing conditions or contradictory conditions, SLP achieved a recognition rate of over 80% in nearly
all datasets. Particularly for contradictory conditions, SLP demonstrates a significant performance advantage over other comparative methods. In the case of Reaction Score, SLP consistently outperforms
other methods across major datasets, demonstrating its robustness in handling real-world complexities
and diverse scenarios. PHP surpasses SLP in one dataset, but it requires multiple inference processes,
incurring substantial additional resources and time consumption. In contrast, our method only needs
to call the LLM once to achieve performance that is nearly on par with PHP.
-----
CoT
CoT CoT-2 CoT-4 CoT-6 CoT-8
|PAL|Col2|Col3|
|---|---|---|
||||
||Accuracy R-Rate (M) R-Rate (C)||
||||
||||
||||
||||
PAL PAL-2 PAL-4 PAL-6 PAL-8
100
80
60
40
20
Accuracy
R-Rate (M)
R-Rate (C)
Accuracy R-Rate (M) R-Rate (C)
Figure 2: The performance comparison of CoT, PAL and their variants. The results indicate that
simply adjusting the few-shot examples in the context presents a trade-off between effectively
addressing both well-defined and ill-defined problems.
**4.3** **More Discussion**
In this part, we present a series of discussions to investigate the challenges of our PMC benchmark
and the SLP approach. Due to page limitations, most of the results and analyses are detailed in the
Appendix C. We primarily highlight the trade-off dilemma of existing methods and explain why our
approach can effectively resolve ill-defined problems through case analysis.
**4.3.1** **Trade-off dilemma**
Contextual prompts play a crucial role in LLM reasoning. All the algorithms we compared employ
8 in-context examples to guide the LLMs in generating reasoning paths or programs. A natural
conjecture is that the reason for the limited success of existing few-shot methods on the M and C
versions lies in the insufficiency of the provided few-shot examples to enable the LLMs to recognize
problems with missing and contradictory conditions. To validate this hypothesis, we introduce two
sets of comparison methods on CoT and PAL, denoted as CoT-X and PAL-X, which replace X
well-defined examples with X ill-defined examples in the context, aiming to instruct the LLMs to
recognize the ill-defined problems. The corresponding results are presented in Figure 2.
We observed that, in the CoT method, the accuracy of original problems tends to decrease as the
number of ill-defined examples increases from 0 to 8. Concurrently, the rejecting rate of ill-defined
problems increases. These findings suggest that the performance of LLMs in mathematical reasoning
tasks and rejecting tasks is significantly impacted by few-shot prompting. Furthermore, there exists
a trade-off between the performance of these two tasks. This phenomenon gives rise to a dilemma
in our PMC benchmark, where altering the few-shot examples fails to address the issue of LLMs
rejecting ill-defined problems fundamentally. Instead, it only allows for a compromise between
reasoning and rejecting.
In the case of the PAL method, we observed that the accuracy and rejecting rate remain stable as the
number of ill-defined examples increases from 0 to 6 until all examples are replaced with ill-defined
problems. The stability in accuracy is attributed to the PAL method’s reliance on external tools for
completing mathematical reasoning tasks. However, the consistently low rejecting rate of ill-defined
problems suggests that sequentially executed programs are not suitable for effectively identifying
and rejecting ill-defined problems. In contrast to the previous two methods, our algorithm naturally
avoids such issues. We model each condition and ascertain whether a satisfiable and unique solution
can be obtained with the assistance of the SMT solver.
**4.3.2** **Case analysis**
In this section, we present concrete case studies to show why our approach can solve ill-defined
problems. As shown in Figure 3, we take two specific M-type and C-type problems as examples. For
Problem 1, crucial information regarding the students attending the jazz class is missing. Traditional
-----
**Ill-defined problem 1:(M-type)** **Core step by Slp**
In a dance class of 20 students, 20% enrolled in jazz_students = Int('jazz_students’)
contemporary dance, and the rest enrolled in solver.add(hip_hop_students + jazz_students == \
jazz or hip-hop dance. What percentage of the total_students - contemporary_students)
**# lack the information of jazz student** return hip_hop_percentage, solver
**Core step by Slp**
jazz_students = Int('jazz_students’)
solver.add(hip_hop_students + jazz_students == \
total_students - contemporary_students)
solver.add(hip_hop_percentage == hip_hop_students * 100/total_students)
return hip_hop_percentage, solver
A robe takes 2 bolts of blue fiber and half that much solver.add(white_fiber == blue_fiber / 2)
the number of bolts of blue fiber is less than 6, the solver.add(Implies(blue_fiber < 6, total_bolts > 5))
**Ill-defined problem 2:(C-type)**
A robe takes 2 bolts of blue fiber and half that much
white fiber. How many bolts in total does it take? If
the number of bolts of blue fiber is less than 6, the
answer will be more than 5. **#Contradictory hint**
Figure 3: Example of how SLP method models ill-defined problem.
methods typically default this missing value to 0, resulting in an incorrect answer. In contrast, our
method only imposes a “int” constraint (highlighted in red) on this missing variable. This way, the
question will be rejected as ill-defined during the uniqueness test because it returns two different
solutions.
For Problem 2, most comparison methods ignore our contradiction hints and model only the preceding
conditions. Some methods disregard the preceding valid conditions and model only the ill-defined
hints. Both approaches yield incorrect answers by selectively modeling parts of the information.
In contrast, our approach models all the conditions and then refuses to answer the question by
identifying it as paradoxically defined through a satisfiability check. Detailed answers to the individual
comparison methods for these two problems can be found in the Appendix C.
**5** **Related work**
Our paper is related to three branches of studies, that is, few-shot prompting methods, mathematical
reasoning for LLMs and natural language benchmarks for LLM robustness.
**Few-shot prompting methods. Few-shot technique has gained a lot of momentum due to its time**
and power-saving properties. CoT-type methods [Wei et al., 2022, Zhang et al., 2022, Zhou et al.,
2022, Zheng et al., 2023] strengthen model inference by displaying the intermediate inference process
of the model, improve performance on multiple review benchmarks, and perform very well in the
zero-shot setting [Kojima et al., 2022]. Program-type methods [Chen et al., 2022, Gao et al., 2023,
Chowdhery et al., 2023]transform natural language into an easily processable programming language,
which is then executed by an external program interpreter. More external engines have also been
explored, such as Calculators, search engines, translators [Schick et al., 2023, Lu et al., 2023a].
Ensemble-optimized [Wang et al., 2022, Li et al., 2022] approach further improve performance by
integrating and calibrating multiple inference paths and fusing existing methods.
**Mathematical Reasoning for LLMs. Mathematical reasoning is a crucial aspect in evaluating model**
reasoning skills, and there are currently two predominant lines for enhancing these skills. One line
involves leveraging the existing few-shot prompt tool, as detailed in the preceding section. The other
is centered around fine-tuning strategy. Metamath [Yu et al., 2023] bootstraps mathematical questions
by rewriting the question from multiple perspectives, and finetune the LLaMA-2 [Touvron et al.,
2023] models on this dataset. WizardMath [Luo et al., 2023] combine supervised fine-tuning and
PPO [Schulman et al., 2017] training to enhance math reasoning abilities by a reinforced evol-instruct
method. Mugglemath [Li et al., 2023] explores in detail the relationship between enhanced data
strategies and improved model inference after fine-tuning. Mathvista [Lu et al., 2023b] focuses
on the multimodal domain, combines challenges from diverse mathematical and visual tasks, and
systematically investigates LLM’s ability to reason mathematically in visual contexts.
**Natural language benchmarks for LLM robustness. Previous work can be broadly categorized**
into two types, perturbations to model inputs and prompting with noisy ground truth. Adversarial
example generation [Jia and Liang, 2017, Morris et al., 2020, Wang et al., 2021] and irrelevant
context [Sinha et al., 2019, Clark et al., 2020, Han et al., 2022] are the two main types of perturbations
to the input. [Shi et al., 2023] have found that adding a single irrelevant sentence into the problem
description significantly degrades the performance. A line of work [Weston et al., 2015, Yoo et al.,
2022, Madaan and Yazdanbakhsh, 2022] studies the model performance with incorrect prompting
exemplars, i.e., the example problems are paired with wrong answers. One intriguing observation is
-----
that the alignment between the label and the question does not always serve as a conclusive factor in
determining performance.
**6** **Conclusion**
In this paper, we study the mathematical reasoning problems with missing and contradictory conditions and propose a novel PMC benchmark to evaluate the robustness of LLMs and few-shot
prompting methods. Our observations reveal a dilemma between the performance of mathematical
reasoning for well-defined problems and the ability to recognize and reject ill-defined problems. To
solve this trade-off, we propose a novel few-shot prompting method called SLP, which utilizes the
SMT-LIB language to model the problems instead of solving them directly. Then, a double-check
strategy is applied to verify the problem and thereby provide the final feedback. Extensive experiments demonstrate the superiority of our SLP approach compared to existing few-shot prompting
methods. We hope our benchmark and proposed SLP approach can facilitate future research about
the robustness of mathematical reasoning tasks and few-shot prompting methods.
**References**
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and
Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. In Advances in Neural
_Information Processing Systems Track on Datasets and Benchmarks, 2021._
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh,
Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving quantitative reasoning problems
with language models. Advances in Neural Information Processing Systems, pages 3843–3857, 2022.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed Chi, Quoc V Le, and
Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural
_Information Processing Systems, pages 24824–24837, 2022._
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham
Neubig. Pal: Program-aided language models. In Proceedings of the 40th International Conference on
_Machine Learning, pages 10764–10799, 2023._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert,
Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv
_preprint arXiv:2110.14168, 2021._
Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. Learning to solve arithmetic
word problems with verb categorization. In Proceedings of the 2014 Conference on Empirical Methods in
_Natural Language Processing, pages 523–533, 2014._
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. Mawps: A math
word problem repository. In Proceedings of the 2016 conference of the north american chapter of the
_association for computational linguistics, pages 1152–1157, 2016._
Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are nlp models really able to solve simple math word
problems? arXiv preprint arXiv:2103.07191, 2021.
Ewa Puchalska and Zbigniew Semadeni. Children’s reactions to verbal arithmetical problems with missing,
surplus or contradictory data. For the learning of mathematics, 7(3):9–16, 1987.
John Armour and Mari Sako. Ai-enabled business models in legal services: from traditional law firms to
next-generation law companies? Journal of Professions and Organization, 7(1):27–46, 2020.
KS Pavlova and NV Knyazeva. Artificial intelligence technologies in tax consulting and forensic tax expertise.
_Digital Technologies in the New Socio-Economic Reality, pages 291–300, 2022._
Taylor Roberts. Utilizing generative artificial intelligence for tax and legal consultancy: Design science approach.
2024.
Arfah Habib Saragih, Qaumy Reyhani, Milla Sepliana Setyowati, and Adang Hendrawan. The potential of an
artificial intelligence (ai) application for the tax administration system’s modernization: the case of indonesia.
_Artificial Intelligence and Law, 31(3):491–514, 2023._
-----
Rhonda Brown, Stewart Dunn, Karen Byrnes, Richard Morris, Paul Heinrich, and Joanne Shaw. Doctors’ stress
responses and poor communication performance in simulated bad-news consultations. Academic Medicine,
84(11):1595–1602, 2009.
Susan C Weller, Roberta D Baer, Javier Garcia de Alba Garcia, and Ana L Salcedo Rocha. Susto and nervios:
Expressions for stress and depression. Culture, Medicine, and Psychiatry, 32:406–420, 2008.
Clark Barrett, Aaron Stump, Cesare Tinelli, et al. The smt-lib standard: Version 2.0. In Proceedings of the 8th
_international workshop on satisfiability modulo theories, volume 13, page 14, 2010._
OpenAI. Gpt-4. Technical report, 2023.
Leonardo Mendonça de Moura and Nikolaj S. Bjørner. Z3: an efficient SMT solver. In C. R. Ramakrishnan
and Jakob Rehof, editors, Procddings of the 14th Tools and Algorithms for the Construction and Analysis of
_Systems International Conference, volume 4963 of Lecture Notes in Computer Science, pages 337–340, 2008._
Chuanyang Zheng, Zhengying Liu, Enze Xie, Zhenguo Li, and Yu Li. Progressive-hint prompting improves
reasoning in large language models. arXiv preprint arXiv:2304.09797, 2023.
Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. Automatic chain of thought prompting in large
language models. arXiv preprint arXiv:2210.03493, 2022.
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire
Cui, Olivier Bousquet, Quoc Le, et al. Least-to-most prompting enables complex reasoning in large language
models. arXiv preprint arXiv:2205.10625, 2022.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language
models are zero-shot reasoners. In Advances in neural information processing systems, pages 22199–22213,
2022.
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompting: Disentangling
computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588, 2022.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul
Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling
with pathways. Journal of Machine Learning Research, 24(240):1–113, 2023.
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola
Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. arXiv
_preprint arXiv:2302.04761, 2023._
Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, and Jianfeng
Gao. Chameleon: Plug-and-play compositional reasoning with large language models. arXiv preprint
_arXiv:2304.09842, 2023a._
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and
Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint
_arXiv:2203.11171, 2022._
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. On the advance of
making language models better reasoners. arXiv preprint arXiv:2206.02336, 2022.
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li,
Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions for large language
models. arXiv preprint arXiv:2309.12284, 2023.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov,
Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat
models. arXiv preprint arXiv:2307.09288, 2023.
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin,
Shifeng Chen, and Dongmei Zhang. Wizardmath: Empowering mathematical reasoning for large language
models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583, 2023.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization
algorithms. arXiv preprint arXiv:1707.06347, 2017.
Chengpeng Li, Zheng Yuan, Guanting Dong, Keming Lu, Jiancan Wu, Chuanqi Tan, Xiang Wang, and Chang
Zhou. Query and response augmentation cannot help out-of-domain math reasoning generalization. arXiv
_preprint arXiv:2310.05506, 2023._
-----
Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang,
Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundation models in
visual contexts. arXiv preprint arXiv:2310.02255, 2023b.
Robin Jia and Percy Liang. Adversarial examples for evaluating reading comprehension systems. arXiv preprint
_arXiv:1707.07328, 2017._
John X Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. Textattack: A framework for
adversarial attacks, data augmentation, and adversarial training in nlp. arXiv preprint arXiv:2005.05909,
2020.
Boxin Wang, Chejian Xu, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah,
and Bo Li. Adversarial glue: A multi-task benchmark for robustness evaluation of language models. arXiv
_preprint arXiv:2111.02840, 2021._
Koustuv Sinha, Shagun Sodhani, Jin Dong, Joelle Pineau, and William L Hamilton. Clutrr: A diagnostic
benchmark for inductive reasoning from text. arXiv preprint arXiv:1908.06177, 2019.
Peter Clark, Oyvind Tafjord, and Kyle Richardson. Transformers as soft reasoners over language. arXiv preprint
_arXiv:2002.05867, 2020._
Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting Qi, Martin Riddell, Luke Benson, Lucy Sun, Ekaterina
Zubova, Yujie Qiao, Matthew Burtell, et al. Folio: Natural language reasoning with first-order logic. arXiv
_preprint arXiv:2209.00840, 2022._
Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed H Chi, Nathanael Schärli, and
Denny Zhou. Large language models can be easily distracted by irrelevant context. In Proceedings of the
_40th International Conference on Machine Learning, pages 31210–31227, 2023._
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart Van Merriënboer, Armand Joulin, and
Tomas Mikolov. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint
_arXiv:1502.05698, 2015._
Kang Min Yoo, Junyeob Kim, Hyuhng Joon Kim, Hyunsoo Cho, Hwiyeol Jo, Sang-Woo Lee, Sang-goo Lee,
and Taeuk Kim. Ground-truth labels matter: A deeper look into input-label demonstrations. arXiv preprint
_arXiv:2205.12685, 2022._
Aman Madaan and Amir Yazdanbakhsh. Text and patterns: For effective chain of thought, it takes two to tango.
_arXiv preprint arXiv:2209.07686, 2022._
-----
**A** **Dataset Details**
We construct our impossible dataset on the basis of four basic mathematical datasets by adding
irrelevant conditions, adding contradictory conditions, and reducing necessary conditions.
**A.1** **Origin dataset detatils**
1. AddSub. [Hosseini et al., 2014] A dataset of addition and subtraction arithmetic word
problems with 395 examples.
2. MultiArith. [Koncel-Kedziorski et al., 2016] A dataset consists of math word problems
requiring multiple reasoning steps and operations with 600 examples.
3. SVAMP. [Patel et al., 2021] A benchmark consists of one unknown arithmetic word problem
for up-to-4 grade level students by making simple changes to a set of problems from another
existing dataset. It has 1000 examples.
4. GSM8K. [Cobbe et al., 2021] This dataset consists of 8.5K high-quality grade school math
problems created by human problem writers. It is divided into 7.5K training problems
and 1K test problems. These problems take between 2 and 8 steps to solve, and solutions
primarily involve performing a sequence of elementary calculations using basic arithmetic
operations (+ - / *) to reach the final answer. We simply use its test problems in our
work(1318 examples).
**A.2** **PMC Construction**
For the C version of each dataset, we adopt our template-based addition method to add contradictory
hints to the end of each problem. For the M version of SVAMP, AddSub, and MultiArith datasets, we
apply our template-based removing method to each dataset. For the M version of GSM8k, we apply
our prompt-based removing method to each problem because GSM8k is more complex and cannot be
simply mutated using a template-based method.
**B** **Algorithm details**
**B.1** **Pseudo code**
**Algorithm 1 Solver for SLP Appraoch**
**Input: Query variable V, Z3 SMT solver S**
**Output: The response of SLP approach ans**
{Check the Satisfiability.}
1: sat ← S.check()
2: if sat = UNSAT then
3: _ans ←_ _Reject {No solution.}_
4: **return ans**
5: end if
{Check the Uniqueness.}
6: model = S.model()
7: ans ← _model[V ]_
8: Add constraint V ̸= ans to SMT solver S
9: sat ← S.check()
10: if sat = SAT then
11: _ans ←_ _Reject {Multiple solutions.}_
12: **return ans**
13: end if
{Solve the Problem.}
14: return ans
-----
Table 3: Comparison of “Unsolvable" prompts used by CoT or not with GPT-3.5. We report the
accuracy (×100) for the original dataset and the rejection rate (×100) for both M and C versions.
|Base Dataset Additional Prompt|Accuracy|R-Rate (M) R-Rate (C)|
|---|---|---|
|Base Dataset Additional Prompt Accuracy ✗ 91.63|R-Rate (M) R-Rate (C) 63.03 15.70|
|---|---|
|✗ 91.63 AddSub ✓ 89.36 ∆|63.03 15.70 90.89 25.56 27.86 9.86 ↑ ↑|
|✗ 98.00 MultiArith ✓ 97.83 ∆|56.17 10.50 81.33 11.17 25.16 0.67 ↑ ↑|
|✗ 81.58 SVAMP ✓ 74.90 ∆|51.10 9.40 84.20 33.40 33.10 24.00 ↑ ↑|
|✗ 71.72 GSM8k ✓ 61.41 ∆|51.25 30.10 78.24 44.35 26.99 14.25 ↑ ↑|
**C** **Experiment details**
**C.1** **Effect of Prompt.**
We conduct experiments to investigate whether modifying prompts can enhance the few-shot prompting methods’ resilience when handling ill-defined problems. To this end, we add “ If you cannot get
_a definite answer, please write ‘Unsolvable’ ” to the prompt and evaluate the performance of the CoT_
method on our PMC benchmark. The result is shown in Table 3, demonstrating that although this
modification effectively enhances the model’s capacity to identify ill-defined problems, it decreases
performance for mathematical reasoning. This observation also corroborates our previous claim
that there is a trade-off dilemma between the mathematical reasoning capability and the ability to
recognize ill-defined problems. Our SLP approach does not encounter this dilemma naturally, as it
focuses solely on modeling the problems without the necessity to solve them. By employing an SMT
solver, the solvability of the problem can be assessed, offering a definitive solution.
**C.2** **Effect of Self-consistency.**
Existing studies [Wang et al., 2022] demonstrate that self-consistency is a crucial factor in enhancing
the robustness of a model. Therefore, we investigate the impact of increasing the number of examples
for majority voting on various metrics and visualize the results in Figure 4. Our observation indicates
that self-consistency cannot effectively improve the robustness of the few-shot prompting methods
and LLMs when dealing with challenges in our PMC benchmark. As the number of voting times
increases, the performance of the few-shot prompting methods only improves slightly for problems
with missing conditions and even decreases for problems with contradictory conditions.
**C.3** **Prompting Methods Induce Preferences and Hallucination.**
Previous experiments have shown that the few-shot prompting methods can adversely affect the
performance of LLMs when face problems with missing conditions or contradictory conditions.
One possible explanation for this phenomenon is that these few-shot prompting methods induce
preferences and hallucinations in the LLMs. To further support our hypothesis, we conduct an
ablation analysis to investigate the LLMs and few-shot prompting methods. We construct two types
of variant datasets, namely, MC-V1 and MC-V2 versions, based on AddSub, MultiArith, SVAMP,
and GSM8k datasets, In these variant datasets, we introduced irrelevant conditions by appending
them to the original problem statements (MC-V1) or by prepending them (MC-V2). The results
are reported in Table 4, using the average accuracy as the performance metric. Although existing
methods demonstrate good performance on the original dataset, they all experience varying degrees of
decline in performance on the MC-V1 and MC-V2 versions. Among these methods, the Basic method
exhibits the least impact and demonstrates robust performance compared to other existing methods.
The PAL method achieves the highest average performance on the MC-V1 version of the datasets
but performs worse on the MC-V2 version. Our analysis of error samples indicates that the PAL
-----
#### 100
80
#### 60
40
#### 20
#### 0 5 10 20 50
Voting Times
#### Accuracy (M) R-Rate (C) R-Rate
Figure 4: Results: change in performance as the number of votes increases on PMC with CoT. We
report the average results for the four subdatasets.
method tends to generate hypothetical problems based solely on the initial conditions while neglecting
subsequent information. The CoT method follows a similar trend, although it performs worse than
the PAL method. The PHP method performs the worst on both the MC-V1 and MC-V2 variants, even
though it performs exceptionally well on the original dataset. This suggests that repeatedly querying
the LLMs with hints actually exacerbates the harm caused by irrelevant conditions. It is worth noting
that our findings are consistent with previous work [Shi et al., 2023], further highlighting the fact that
different few-shot prompting methods induce various preferences and hallucinations in LLMs.
Table 4: Detailed experimental results on GPT-3.5 with two multi-condition dataset. We report the
average results for the four subdatasets. The best performance is in bold.
|Methods|Origin MC-V1 MC-V2|
|---|---|
|Basic CoT PAL PHP|81.20% 61.42% 70.15% 87.23% 64.24% 33.46% 86.87% 71.78% 60.46% 87.34% 41.54% 53.45%|
|---|---|
**C.4** **Case analysis**
Questions in the case analysis and detailed responses to each of the comparative methods.
-----
Example 3: Detailed responses for Question 1 in case analysis
**Question 1: In a dance class of 20 students, 20% enrolled in contemporary dance, and the**
rest enrolled in jazz or hip-hop dance. What percentage of the entire students enrolled in
hip-hop dance? # lack the information of jazz student
**GPT-3.5 answer: 80%**
**GPT-3.5 (CoT) answer: In a dance class of 20 students, 20% enrolled in contemporary**
dance. So, 20% of 20 is (20/100) * 20 = 4 students enrolled in contemporary dance. The rest
of the students, which is 20 - 4 = 16 students, enrolled in jazz or hip-hop dance. Therefore,
the percentage of students enrolled in hip-hop dance is (16/20) * 100 = 80%. The answer is
80%.
**GPT-3.5 (PAL) answer:**
Def solution():
total-students=20
contemporary-students = int(total-students * 0.2)
hip-hop-students=-studentscontemporary-students
percentage-hip-hop = (hip-hop-students / total-students) * 100
result = percentage-hip-hop
return result
**GPT-3.5 (SLp) answer:**
def solution():
solver = Solver()
total_students = Int(’total_students’)
contemporary_students = Int(’contemporary_students’)
hip_hop_students = Int(’hip_hop_students’)
jazz_students = Int(’jazz_students’)
hip_hop_percentage = Real(’hip_hop_percentage’)
solver.add(total_students == 20)
solver.add(contemporary_students == total_students * 20 / 100)
solver.add(hip_hop_students + jazz_students == total_students - contemporary_students)
solver.add(hip_hop_percentage == hip_hop_students * 100 / total_students)
return hip_hop_percentage, solver
**D** **Full prompts in experiments**
**D.1** **Prompts of SLP.**
Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?
# solution in Python:
from z3 import *
def solution():
solver = Solver()
“‘Olivia has $23. She bought five bagels for $3 each. How much money does she have left?”’
initial_money = Real(’initial_money’)
total_cost = Real(’total_cost’)
money_left = Real(’money_left’)
bagels = Int(’bagels’)
bagel_cost = Real(’bagel_cost’)
“‘Olivia has $23. She bought five bagels for $3 each. How much money does she have left?”’
solver.add(bagels == 5)
-----
Example 4: Detailed responses for Question 2 in case analysis
**Question 2: A robe takes 2 bolts of blue fiber and half that much white fiber. How many**
bolts in total does it take? If the number of bolts of blue fiber is less than 6, the answer will
be more than 5.
# Contradictory hint
**GPT-3.5 answer: 3**
**GPT-3.5 (CoT) answer: A robe takes 2 bolts of blue fiber and half that much white fiber. So**
if the blue fiber is 2 bolts, then the white fiber is 2/2 = 1 bolt. In total, it takes 2 + 1 = 3 bolts.
The answer is 3.
**GPT-3.5 (PAL) answer:**
def solution():
blue_fiber_bolts = 2
white_fiber_bolts = blue_fiber_bolts / 2
total_bolts = blue_fiber_bolts + white_fiber_bolts
result = total_bolts
return result"
**GPT-3.5 (SLp) answer:**
def solution():
solver = Solver()
blue_fiber = Int(’blue_fiber’)
white_fiber = Int(’white_fiber’)
total_bolts = Int(’total_bolts’)
solver.add(blue_fiber >= 0) solver.add(white_fiber == blue_fiber / 2)
solver.add(total_bolts == blue_fiber + white_fiber)
solver.add(Implies(blue_fiber < 6, total_bolts > 5))
return total_bolts, solver
solver.add(bagel_cost == 3)
solver.add(initial_money == 23)
solver.add(total_cost == bagels * bagel_cost)
solver.add(money_left == initial_money - total_cost)
“‘How much money does she have left?”’
return money_left, solver
Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How
many golf balls did he have at the end of wednesday?
# solution in Python:
from z3 import *
def solution():
solver = Solver()
“‘Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How
many golf balls did he have at the end of wednesday?”’
initial_golf_balls = Int(’initial_golf_balls’)
lost_on_tuesday = Int(’lost_on_tuesday’)
lost_on_wednesday = Int(’lost_on_wednesday’)
total_golf_balls = Int(’total_golf_balls’)
“‘Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How
many golf balls did he have at the end of wednesday?”’
-----
solver.add(initial_golf_balls == 58)
solver.add(lost_on_tuesday == 23)
solver.add(lost_on_wednesday == 2)
solver.add(total_golf_balls == initial_golf_balls - lost_on_tuesday - lost_on_wednesday)
“‘How many golf balls did he have at the end of wednesday?”’
return total_golf_balls, solver
Q: There were nine computers in the server room. Five more computers were installed each day, from
monday to thursday. How many computers are now in the server room?
# solution in Python:
from z3 import *
def solution():
solver = Solver()
“‘There were nine computers in the server room. Some more computers were installed each day, from
monday to thursday. How many computers are now in the server room?”’
initial_computers = Int(’initial_computers’)
daily_installed = Int(’daily_installed’)
total_days = Int(’total_days’)
total_computers = Int(’total_computers’)
“‘There were nine computers in the server room. Some more computers were installed each day, from
monday to thursday. How many computers are now in the server room?”’
solver.add(initial_computers == 9)
solver.add(daily_installed >= 0) # no detailed information, give a loose constraint
solver.add(total_days == 0) # from Monday to Thursday
solver.add(total_computers == initial_computers + daily_installed * total_days)
“‘How many computers are now in the server room¿“ return total_computers, solver
Q: Shawn has five toys. For Christmas, he got some toys each from his mom and dad. How many
toys does he have now?
# solution in Python:
from z3 import *
def solution():
    solver = Solver()
    """Shawn has five toys. For Christmas, he got some toys each from his mom and dad. How many toys does he have now?"""
    initial_toys = Int('initial_toys')
    toys_from_mom = Int('toys_from_mom')
    toys_from_dad = Int('toys_from_dad')
    total_toys = Int('total_toys')
    """Shawn has five toys. For Christmas, he got some toys each from his mom and dad. How many toys does he have now?"""
    solver.add(initial_toys == 5)
    solver.add(toys_from_mom >= 0) # no detailed information, give a loose constraint
    solver.add(toys_from_dad >= 0) # no detailed information, give a loose constraint
    solver.add(total_toys == initial_toys + toys_from_mom + toys_from_dad)
    """How many toys does he have now?"""
    return total_toys, solver
Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many
lollipops did Jason give to Denny?
# solution in Python:
from z3 import *
def solution():
    solver = Solver()
    """Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?"""
    initial_lollipops = Int('initial_lollipops')
    lollipops_left = Int('lollipops_left')
    lollipops_given = Int('lollipops_given')
    """Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?"""
    solver.add(initial_lollipops == 20)
    solver.add(lollipops_left == 12)
    solver.add(lollipops_given == initial_lollipops - lollipops_left)
    """How many lollipops did Jason give to Denny?"""
    return lollipops_given, solver
Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in
total?
# solution in Python:
from z3 import *
def solution():
    solver = Solver()
    """Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?"""
    chocolates_leah = Int('chocolates_leah')
    chocolates_sister = Int('chocolates_sister')
    chocolates_eaten = Int('chocolates_eaten')
    chocolates_left = Int('chocolates_left')
    """Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?"""
    solver.add(chocolates_leah == 32)
    solver.add(chocolates_sister == 42)
    solver.add(chocolates_eaten == 35)
    solver.add(chocolates_left == chocolates_leah + chocolates_sister - chocolates_eaten)
    """How many pieces do they have left in total?"""
    return chocolates_left, solver
Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?
If there are less than 6 cars initially, the answer will be less than 1.
# solution in Python:
from z3 import *
def solution():
    solver = Solver()
    """If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? If there are less than 6 cars initially, the answer will be less than 1."""
    initial_cars = Int('initial_cars')
    arriving_cars = Int('arriving_cars')
    total_cars = Int('total_cars')
    """If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? If there are less than 6 cars initially, the answer will be less than 1."""
    solver.add(initial_cars == 3)
    solver.add(arriving_cars == 2)
    solver.add(total_cars == initial_cars + arriving_cars)
    solver.add(Implies(initial_cars < 6, total_cars < 1))
    """How many cars are in the parking lot?"""
    return total_cars, solver
Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are
done, there will be 21 trees. How many trees did the grove workers plant today? If there are more
than 10 trees initially, the answer will be more than 10.
# solution in Python:
from z3 import *
def solution():
    solver = Solver()
    """There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? If there are more than 10 trees initially, the answer will be more than 10."""
    initial_trees = Int('initial_trees')
    final_trees = Int('final_trees')
    trees_planted = Int('trees_planted')
    """There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? If there are more than 10 trees initially, the answer will be more than 10."""
    solver.add(initial_trees == 15)
    solver.add(final_trees == 21)
    solver.add(trees_planted == final_trees - initial_trees)
    solver.add(Implies(initial_trees > 10, trees_planted > 10))
    """How many trees did the grove workers plant today?"""
    return trees_planted, solver
Q: {Your question is here}
# solution in Python:
from z3 import *
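For reference, a `(variable, solver)` pair produced by a completed `solution()` of the kind shown above can be evaluated with Z3 as in the minimal sketch below. This harness is illustrative rather than taken from the paper, and it reuses the lollipop example from the prompt.

```python
from z3 import *

def solution():
    """Jason had 20 lollipops. He gave Denny some lollipops. Now Jason
    has 12 lollipops. How many lollipops did Jason give to Denny?"""
    solver = Solver()
    initial_lollipops = Int('initial_lollipops')
    lollipops_left = Int('lollipops_left')
    lollipops_given = Int('lollipops_given')
    solver.add(initial_lollipops == 20)
    solver.add(lollipops_left == 12)
    solver.add(lollipops_given == initial_lollipops - lollipops_left)
    return lollipops_given, solver

target, solver = solution()
if solver.check() == sat:
    # The satisfying model assigns concrete values to the declared integers;
    # reading off the target variable gives the predicted answer.
    print(solver.model()[target])  # prints 8
else:
    print("The constraints are unsatisfiable.")
```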
**D.2** **Prompts of CoT.**
Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are
done, there will be 21 trees. How many trees did the grove workers plant today?
A: There are 15 trees originally. Then there were 21 trees after the Grove workers planted some more.
So there must have been 21 - 15 = 6 trees that were planted. The answer is 6.
Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?
A: There are originally 3 cars. Then 2 more cars arrive. Now 3 + 2 = 5 cars are in the parking lot.
The answer is 5.
Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in
total?
A: Originally, Leah had 32 chocolates and her sister had 42. So in total they had 32 + 42 = 74. After
eating 35, they had 74 - 35 = 39 pieces left in total. The answer is 39.
Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many
lollipops did Jason give to Denny?
A: Jason had 20 lollipops originally. Then he had 12 after giving some to Denny. So he gave Denny
20 - 12 = 8 lollipops. The answer is 8.
Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys
does he have now?
A: Shawn started with 5 toys. He then got 2 toys each from his mom and dad. So he got 2 * 2 = 4
more toys. Now he has 5 + 4 = 9 toys. The answer is 9.
Q: There were nine computers in the server room. Five more computers were installed each day, from
monday to thursday. How many computers are now in the server room?
A: There were originally 9 computers. For each day from monday to thursday, 5 more computers
were installed. So 4 * 5 = 20 computers were added. Now 9 + 20 = 29 computers are now in the
server room. The answer is 29.
Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How
many golf balls did he have at the end of wednesday?
A: Michael started with 58 golf balls. He lost 23 on Tuesday, and lost 2 more on wednesday. So he
had 58 - 23 = 35 at the end of Tuesday, and 35 - 2 = 33 at the end of wednesday. The answer is 33.
Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?
A: Olivia had 23 dollars. She bought 5 bagels for 3 dollars each. So she spent 5 * 3 = 15 dollars.
Now she has 23 - 15 = 8 dollars left. The answer is 8.
Q: {Your question is here}
A:
**D.3** **Prompts of PAL.**
Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?
# solution in Python:
def solution():
    """Olivia has $23. She bought five bagels for $3 each. How much money does she have left?"""
    money_initial = 23
    bagels = 5
    bagel_cost = 3
    money_spent = bagels * bagel_cost
    money_left = money_initial - money_spent
    result = money_left
    return result
Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How
many golf balls did he have at the end of wednesday?
# solution in Python:
def solution():
    """Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?"""
    golf_balls_initial = 58
    golf_balls_lost_tuesday = 23
    golf_balls_lost_wednesday = 2
    golf_balls_left = golf_balls_initial - golf_balls_lost_tuesday - golf_balls_lost_wednesday
    result = golf_balls_left
    return result
Q: There were nine computers in the server room. Five more computers were installed each day, from
monday to thursday. How many computers are now in the server room?
# solution in Python:
def solution():
    """There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?"""
    computers_initial = 9
    computers_per_day = 5
    num_days = 4 # 4 days between monday and thursday
    computers_added = computers_per_day * num_days
    computers_total = computers_initial + computers_added
    result = computers_total
    return result
-----
Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys
does he have now?
# solution in Python:
def solution():
    """Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?"""
    toys_initial = 5
    mom_toys = 2
    dad_toys = 2
    total_received = mom_toys + dad_toys
    total_toys = toys_initial + total_received
    result = total_toys
    return result
Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many
lollipops did Jason give to Denny?
# solution in Python:
def solution():
    """Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?"""
    jason_lollipops_initial = 20
    jason_lollipops_after = 12
    denny_lollipops = jason_lollipops_initial - jason_lollipops_after
    result = denny_lollipops
    return result
Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in
total?
# solution in Python:
def solution():
    """Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?"""
    leah_chocolates = 32
    sister_chocolates = 42
    total_chocolates = leah_chocolates + sister_chocolates
    chocolates_eaten = 35
    chocolates_left = total_chocolates - chocolates_eaten
    result = chocolates_left
    return result
Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?
# solution in Python:
def solution():
    """If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?"""
    cars_initial = 3
    cars_arrived = 2
    total_cars = cars_initial + cars_arrived
    result = total_cars
    return result
Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are
done, there will be 21 trees. How many trees did the grove workers plant today?
# solution in Python:
def solution():
    """There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?"""
    trees_initial = 15
    trees_after = 21
    trees_added = trees_after - trees_initial
    result = trees_added
    return result
Q: {Your question is here}
# solution in Python:
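For reference, a PAL-style completion is executed as ordinary Python and the return value of `solution()` is taken as the prediction. The snippet below is a minimal sketch of such an execution harness, not the paper's code, reusing the Olivia example above.

```python
generated_code = '''
def solution():
    """Olivia has $23. She bought five bagels for $3 each. How much money does she have left?"""
    money_initial = 23
    bagels = 5
    bagel_cost = 3
    money_spent = bagels * bagel_cost
    money_left = money_initial - money_spent
    return money_left
'''

# Execute the generated program in an isolated namespace and call solution().
# In practice, model-generated code should be run in a sandbox with a timeout.
namespace = {}
exec(generated_code, namespace)
print(namespace["solution"]())  # prints 8
```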
**D.4** **Prompts of PHP.**
Question: Angelo and Melanie want to plan how many hours over the next week they should study
together for their test next week. They have 2 chapters of their textbook to study and 4 worksheets to
memorize. They figure out that they should dedicate 3 hours to each chapter of their textbook and
1.5 hours for each worksheet. If they plan to study no more than 4 hours each day, how many days
should they plan to study total over the next week if they take a 10-minute break every hour, include 3
10-minute snack breaks each day, and 30 minutes for lunch each day? (Hint: The answer is near to 4).
A: We know the Answer Hints: 4. With the Answer Hints: 4, we will answer the question. Let’s
think step by step. Angelo and Melanie think they should dedicate 3 hours to each of the 2 chapters,
3 hours x 2 chapters = 6 hours total. For the worksheets they plan to dedicate 1.5 hours for each
worksheet, 1.5 hours x 4 worksheets = 6 hours total. Angelo and Melanie need to start with planning
12 hours to study, at 4 hours a day, 12 / 4 = 3 days. However, they need to include time for breaks
and lunch. Every hour they want to include a 10-minute break, so 12 total hours x 10 minutes = 120
extra minutes for breaks. They also want to include 3 10-minute snack breaks, 3 x 10 minutes = 30
minutes. And they want to include 30 minutes for lunch each day, so 120 minutes for breaks + 30
minutes for snack breaks + 30 minutes for lunch = 180 minutes, or 180 / 60 minutes per hour = 3
extra hours. So Angelo and Melanie want to plan 12 hours to study + 3 hours of breaks = 15 hours
total. They want to study no more than 4 hours each day, 15 hours / 4 hours each day = 3.75 They
will need to plan to study 4 days to allow for all the time they need. The answer is 4
Question: Mark’s basketball team scores 25 2 pointers, 8 3 pointers and 10 free throws. Their
opponents score double the 2 pointers but half the 3 pointers and free throws. What’s the total number
of points scored by both teams added together?(Hint: The answer is near to 201).
A: We know the Answer Hints: 201. With the Answer Hints: 201, we will answer the question.
Let’s think step by step Mark’s team scores 25 2 pointers, meaning they scored 25*2= 50 points
in 2 pointers. His team also scores 8 3 pointers, meaning they scored 8*3= 24 points in 3 pointers
They scored 10 free throws, and free throws count as one point so they scored 10*1=10 points in
free throws. All together his team scored 50+24+10= 84 points Mark’s opponents scored double his
team’s number of 2 pointers, meaning they scored 50*2=100 points in 2 pointers. His opponents
scored half his team’s number of 3 pointers, meaning they scored 24/2= 12 points in 3 pointers. They
also scored half Mark’s team’s points in free throws, meaning they scored 10/2=5 points in free
throws. All together Mark’s opponents scored 100+12+5=117 points The total score for the game is
both team’s scores added together, so it is 84+117=201 points The answer is 201
Question: Bella has two times as many marbles as frisbees. She also has 20 more frisbees than deck
cards. If she buys 2/5 times more of each item, what would be the total number of the items she will
have if she currently has 60 marbles? (Hint: The answer is near to 160,145).
A: We know the Answer Hints: 160, 145. With the Answer Hints: 160, 145, we will answer the
question. Let’s think step by step When Bella buys 2/5 times more marbles, she’ll have increased
the number of marbles by 2/5*60 = 24 The total number of marbles she’ll have is 60+24 = 84 If
Bella currently has 60 marbles, and she has two times as many marbles as frisbees, she has 60/2 =
30 frisbees. If Bella buys 2/5 times more frisbees, she’ll have 2/5*30 = 12 more frisbees. The total
number of frisbees she’ll have will increase to 30+12 = 42 Bella also has 20 more frisbees than deck
cards, meaning she has 30-20 = 10 deck cards If she buys 2/5 times more deck cards, she’ll have
2/5*10 = 4 more deck cards. The total number of deck cards she’ll have is 10+4 = 14 Together, Bella
will have a total of 14+42+84 = 140 items The answer is 140
Question: A group of 4 fruit baskets contains 9 apples, 15 oranges, and 14 bananas in the first three
baskets and 2 less of each fruit in the fourth basket. How many fruits are there? (Hint: The answer is
near to 93, 141).
A: We know the Answer Hints: 93, 141. With the Answer Hints: 93, 141, we will answer the question.
Let’s think step by step For the first three baskets, the number of apples and oranges in one basket is
9+15=24 In total, together with bananas, the number of fruits in one basket is 24+14=38 for the first
three baskets. Since there are three baskets each having 38 fruits, there are 3*38=114 fruits in the first
three baskets. The number of apples in the fourth basket is 9-2=7 There are also 15-2=13 oranges in
the fourth basket The combined number of oranges and apples in the fourth basket is 13+7=20 The
fourth basket also contains 14-2=12 bananas. In total, the fourth basket has 20+12=32 fruits. The
four baskets together have 32+114=146 fruits. The answer is 146
Question: You can buy 4 apples or 1 watermelon for the same price. You bought 36 fruits evenly
split between oranges, apples and watermelons, and the price of 1 orange is $0.50. How much does 1
apple cost if your total bill was $66? (Hint: The answer is near to 1, 5).
A: We know the Answer Hints: 1, 5. With the Answer Hints: 1, 5, we will answer the question. Let’s
think step by step If 36 fruits were evenly split between 3 types of fruits, then I bought 36/3 = 12
units of each fruit If 1 orange costs $0.50 then 12 oranges will cost $0.50 * 12 = $6 If my total bill
was $66 and I spent $6 on oranges then I spent $66 - $6 = $60 on the other 2 fruit types. Assuming
the price of watermelon is W, and knowing that you can buy 4 apples for the same price and that the
price of one apple is A, then 1W=4A If we know we bought 12 watermelons and 12 apples for $60,
then we know that $60 = 12W + 12A Knowing that 1W=4A, then we can convert the above to $60 =
12(4A) + 12A $60 = 48A + 12A $60 = 60A Then we know the price of one apple (A) is $60/60= $1
The answer is 1
Question: Susy goes to a large school with 800 students, while Sarah goes to a smaller school with
only 300 students. At the start of the school year, Susy had 100 social media followers. She gained
40 new followers in the first week of the school year, half that in the second week, and half of that in
the third week. Sarah only had 50 social media followers at the start of the year, but she gained 90
new followers the first week, a third of that in the second week, and a third of that in the third week.
After three weeks, how many social media followers did the girl with the most total followers have?
(Hint: The answer is near to 180, 160).
A: We know the Answer Hints: 180, 160. With the Answer Hints: 180, 160, we will answer the
question. Let’s think step by step After one week, Susy has 100+40 = 140 followers. In the second
week, Susy gains 40/2 = 20 new followers. In the third week, Susy gains 20/2 = 10 new followers. In
total, Susy finishes the three weeks with 140+20+10 = 170 total followers. After one week, Sarah has
50+90 = 140 followers. After the second week, Sarah gains 90/3 = 30 followers. After the third week,
Sarah gains 30/3 = 10 followers. So, Sarah finishes the three weeks with 140+30+10 = 180 total
followers. Thus, Sarah is the girl with the most total followers with a total of 180. The answer is 180
Question: Sam bought a dozen boxes, each with 30 highlighter pens inside, for $10 each box. He
rearranged five of these boxes into packages of six highlighters each and sold them for $3 per package.
He sold the rest of the highlighters separately at the rate of three pens for $2. How much profit did he
make in total, in dollars? (Hint: The answer is near to 120).
A: We know the Answer Hints: 120. With the Answer Hints: 120, we will answer the question. Let’s
think step by step Sam bought 12 boxes x $10 = $120 worth of highlighters. He bought 12 * 30 = 360
highlighters in total. Sam then took 5 boxes × 6 highlighters/box = 30 highlighters. He sold these
boxes for 5 * $3 = $15 After selling these 5 boxes there were 360 - 30 = 330 highlighters remaining.
These form 330 / 3 = 110 groups of three pens. He sold each of these groups for $2 each, so made
110 * 2 = $220 from them. In total, then, he earned $220 + $15 = $235. Since his original cost was
$120, he earned $235 - $120 = $115 in profit. The answer is 115
Question: In a certain school, 2/3 of the male students like to play basketball, but only 1/5 of the
female students like to play basketball. What percent of the population of the school do not like to
play basketball if the ratio of the male to female students is 3:2 and there are 1000 students? (Hint:
The answer is near to 45).
A: We know the Answer Hints: 45. With the Answer Hints: 45, we will answer the question. Let’s
think step by step The students are divided into 3 + 2 = 5 parts where 3 parts are for males and 2 parts
are for females. Each part represents 1000/5 = 200 students. So, there are 3 x 200 = 600 males. And
there are 2 x 200 = 400 females. Hence, 600 x 2/3 = 400 males play basketball. And 400 x 1/5 = 80
females play basketball. A total of 400 + 80 = 480 students play basketball. Therefore, 1000 - 480 =
520 do not like to play basketball. The percentage of the school that do not like to play basketball is
520/1000 * 100 = 52 The answer is 52.
Question: {Your question is here}
A:
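The PHP prompt above is used iteratively: the answer from one round is fed back as the hint for the next, until the prediction stops changing. The loop below is a minimal sketch of that interaction; `query_llm` is a hypothetical stand-in for the underlying model call and is not part of the paper's code.

```python
def progressive_hint_answer(question, query_llm, max_rounds=5):
    """Sketch of progressive-hint querying: re-ask the question with the
    previously predicted answers appended as hints until the answer stabilizes.

    query_llm(prompt) -> str is a hypothetical function that returns the
    model's final numeric answer as a string.
    """
    hints = []
    previous = None
    for _ in range(max_rounds):
        hint = f" (Hint: The answer is near to {', '.join(hints)})." if hints else ""
        answer = query_llm(f"Question: {question}{hint}\nA:").strip()
        if answer == previous:  # two consecutive identical answers: stop
            break
        hints.append(answer)
        previous = answer
    return previous
```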
**D.5** **Prompt of CoT-8.**
Q: There are some trees in the grove. Grove workers will plant trees in the grove today. After they
are done, there will be 21 trees. How many trees did the grove workers plant today? A: The question
don’t give enough information, I can’t get a definite answer.
Q: If there are few cars in the parking lot and 2 more cars arrive, how many cars are in the parking
lot? A: The question don’t give enough information, I can’t get a definite answer.
Q: Leah had some chocolates and her sister had 42. If they ate 35, how many pieces do they have left
in total? A: The question don’t give enough information, I can’t get a definite answer.
Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has some other lollipops. How
many lollipops did Jason give to Denny? A: The question don’t give enough information, I can’t get
a definite answer.
Q: Shawn has some toys. For Christmas, he got two toys each from his mom and dad. How many
toys does he have now? A: The question don’t give enough information, I can’t get a definite answer.
Q: There were nine computers in the server room. Some more computers were installed each day,
from Monday to Thursday. How many computers are now in the server room? A: The question don’t
give enough information, I can’t get a definite answer.
Q: Michael had 58 golf balls. On Tuesday, he lost 23 golf balls. On Wednesday, he lost some more.
How many golf balls did he have at the end of Wednesday? A: The question don’t give enough
information, I can’t get a definite answer.
Q: Olivia has $23. She bought few bagels for $3 each. How much money does she have left? A: The
question don’t give enough information, I can’t get a definite answer.
**D.6** **Prompt of PAL-8.**
Q: Olivia has $23. She bought five bagels. How much money does she have left?
# solution in Python:
def solution():
    """Olivia has $23. She bought five bagels. How much money does she have left?"""
    return None
Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost some. How
many golf balls did he have at the end of wednesday?
# solution in Python:
def solution():
    """Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost some. How many golf balls did he have at the end of wednesday?"""
    return None
Q: There were nine computers in the server room. Some more computers were installed each day,
from monday to thursday. How many computers are now in the server room?
# solution in Python:
def solution():
    """There were nine computers in the server room. Some more computers were installed each day, from monday to thursday. How many computers are now in the server room?"""
    return None
Q: Shawn has some toys. For Christmas, he got two toys each from his mom and dad. How many
toys does he have now?
# solution in Python:
def solution():
    """Shawn has some toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?"""
    return None
Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many
lollipops did Jason give to Denny?
# solution in Python:
def solution():
    """Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?"""
    return None
Q: Leah had 32 chocolates and her sister had 42. How many pieces do they have left in total?
# solution in Python:
def solution():
    """Leah had 32 chocolates and her sister had 42. How many pieces do they have left in total?"""
    return None
Q: If there are 3 cars in the parking lot and some more cars arrive, how many cars are in the parking
lot?
# solution in Python:
def solution():
    """If there are 3 cars in the parking lot and some more cars arrive, how many cars are in the parking lot?"""
    return None
Q: There are some trees in the grove. Grove workers will plant trees in the grove today. After they
are done, there will be 21 trees. How many trees did the grove workers plant today?
# solution in Python:
def solution():
    """There are some trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?"""
    return None
Q: {Your question is here}
# solution in Python:
**E** **Limitation**
The limitation of our work is that we have only considered the simplest mathematical applications; we would like to experiment with more varied problem types in the future, such as
more realistic business requirements or more complex forms of mathematical problems. In addition,
our proposed method depends heavily on the performance of the backbone model, which is a
common problem of all in-context learning methods.
-----
| [
"Shi-Yu, Tian",
"Zhi, Zhou",
"Lin-Han, Jia",
"Yu-Feng, Li",
"Lan-Zhe, Guo"
] | 2024-06-07T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2406.05055 | https://arxiv.org/abs/2406.05055 | https://www.semanticscholar.org/paper/08df216c9d99583c623efdf72f9590595b49293e |
S$^3$c-Math: Spontaneous Step-level Self-correction Makes Large Language Models Better Mathematical Reasoners | Self-correction is a novel method that can stimulate the potential reasoning abilities of large language models (LLMs). It involves detecting and correcting errors during the inference process when LLMs solve reasoning problems. However, recent works do not regard self-correction as a spontaneous and intrinsic capability of LLMs. Instead, such correction is achieved through post-hoc generation, external knowledge introduction, multi-model collaboration, and similar techniques. In this paper, we propose a series of mathematical LLMs called S$^3$c-Math, which are able to perform Spontaneous Step-level Self-correction for Mathematical reasoning. This capability helps LLMs to recognize whether their ongoing inference tends to contain errors and simultaneously correct these errors to produce a more reliable response. We proposed a method, which employs a step-level sampling approach to construct step-wise self-correction data for achieving such ability. Additionally, we implement a training strategy that uses above constructed data to equip LLMs with spontaneous step-level self-correction capacities. Our data and methods have been demonstrated to be effective across various foundation LLMs, consistently showing significant progress in evaluations on GSM8K, MATH, and other mathematical benchmarks. To the best of our knowledge, we are the first to introduce the spontaneous step-level self-correction ability of LLMs in mathematical reasoning. | null | [
"Yuchen, Yan",
"Yang, Liu",
"Jin, Jiang",
"Xin, Xu",
"Mengdi, zhang",
"Xunliang, Cai",
"Yixin, Cao",
"Jian, Shao"
] | 2024-09-02T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.01524 | https://arxiv.org/abs/2409.01524 | null |
|
SAAS: Solving Ability Amplification Strategy for Enhanced Mathematical Reasoning in Large Language Models | This study presents a novel learning approach designed to enhance both mathematical reasoning and problem-solving abilities of Large Language Models (LLMs). We focus on integrating the Chain-of-Thought (CoT) and the Program-of-Thought (PoT) learning, hypothesizing that prioritizing the learning of mathematical reasoning ability is helpful for the amplification of problem-solving ability. Thus, the initial learning with CoT is essential for solving challenging mathematical problems. To this end, we propose a sequential learning approach, named SAAS (Solving Ability Amplification Strategy), which strategically transitions from CoT learning to PoT learning. Our empirical study, involving an extensive performance comparison using several benchmarks, demonstrates that our SAAS achieves state-of-the-art (SOTA) performance. The results underscore the effectiveness of our sequential learning approach, marking a significant advancement in the field of mathematical reasoning in LLMs. | This study proposes a sequential learning approach, named SAAS (Solving Ability Amplification Strategy), which strategically transitions from CoT learning to PoT learning, and demonstrates that the SAAS achieves state-of-the-art (SOTA) performance. | ### SAAS: Solving Ability Amplification Strategy for Enhanced Mathematical Reasoning in Large Language Models
**Hyeonwoo Kim[1][∗], Gyoungjin Gim[1][∗], Yungi Kim[1][∗]**
**Jihoo Kim[1], Byungju Kim[2], Wonseok Lee[2],**
**Chanjun Park[1][†]**
1 Upstage AI, 2 Mathpresso Inc.
{choco_9966, gyoungjin.gim, eddie, jerry, chanjun.park}@upstage.ai
{peyton.kim, jack.lee}@mathpresso.com
**Abstract**
This study presents a novel learning approach
designed to enhance both mathematical reasoning and problem-solving abilities of Large
Language Models (LLMs). We focus on integrating the Chain-of-Thought (CoT) and the
Program-of-Thought (PoT) learning, hypothesizing that prioritizing the learning of _mathematical reasoning ability is helpful for the amplification of problem-solving ability. Thus, the initial learning with CoT is essential for solving challenging mathematical problems._ To
this end, we propose a sequential learning approach, named SAAS (Solving Ability Amplification Strategy), which strategically transitions from CoT learning to PoT learning. Our
empirical study, involving an extensive performance comparison using several benchmarks,
demonstrates that our SAAS achieves state-of-the-art (SOTA) performance. The results underscore the effectiveness of our sequential learning approach, marking a significant advancement in the field of mathematical reasoning in
LLMs.
thinking, problem-solving, and complex decision-making, which are essential for understanding and generating human-like responses in different situations (Lu et al., 2022b; Meadows and Freitas, 2022; Thawani et al., 2021). In other words,
mathematical reasoning in LLMs is essential for
a comprehensive understanding and manipulation
of language in numerous scientific and practical
applications. However, the current ability of LLMs in mathematical reasoning hinders their potential in fields where numerical and logical comprehension are paramount, such as coding. Thus, it is a critical challenge to enhance the ability of LLMs in mathematical reasoning.
In this study, we explore a learning approach
for enhancing both mathematical reasoning ability and problem-solving ability in LLMs, focusing on learning with both the Chain-of-Thought
(CoT) (Wei et al., 2022b) and the Program-of-Thought (PoT) (Chen et al., 2022; Gao et al.,
2023a). The CoT rationale (Figure 1-(a)) consists
of a series of intermediate reasoning steps. Although it enhances the reasoning ability of LLMs, it
leads to arithmetic calculation errors when dealing
with large numbers (Chen et al., 2022), resulting in low problem-solving ability. To address this issue, Chen et al. (2022) proposed the PoT rationale
(Figure 1-(b)), which expresses the reasoning steps
as code and delegates computation steps to a code
interpreter. It requires the reasoning steps to be expressed accurately as code. Therefore, we hypothesize that prioritizing the learning of mathematical
_reasoning ability is helpful for the amplification of problem-solving ability. In other words, the initial learning with CoT is essential for solving challenging mathematical problems_, since it improves
the mathematical reasoning ability (Magister et al.,
2022; Shridhar et al., 2023; Jie et al., 2023; Liang
et al., 2023).
Our research is motivated by an analysis of existing models (Gou et al., 2023; Yue et al., 2023).
**1** **Introduction**
The advent of Large Language Models (LLMs)
has marked a significant breakthrough in various
domains. However, despite their remarkable performance across these domains, a notable challenge persists in the realm of mathematical reasoning (Zhao et al., 2023; Lu et al., 2022b; Meadows
and Freitas, 2022; Qian et al., 2022; Zhou et al.,
2022; Lightman et al., 2023; Drori et al., 2021;
Zhang et al., 2019). The ability of LLMs to comprehend, interpret, and manipulate mathematical
concepts is not yet on par with their linguistic capabilities.
The significance of mathematical reasoning in
LLMs involves more than just crunching numbers.
It also encompasses the ability to engage in logical
_∗Equal Contribution_ _†Corresponding author_
-----
ToRA (Gou et al., 2023) tried to learn reasoning
ability as well as PoT by adding reasoning steps
into the PoT rationale. Similarly, MAmmoTH (Yue
et al., 2023) tried to learn both CoT and PoT by
using both CoT rationale and PoT rationale as training data simultaneously. However, we conjecture
that they do not fully utilize the advantages of learning with both CoT and PoT. This is because they
did not consider the sequence of CoT learning and
PoT learning, resulting in less effective learning.
In this work, we introduce a sequential learning
approach, named SAAS (Solving Ability Amplification Strategy), to effectively utilize the strengths
of CoT learning and PoT learning. This approach
transitions from CoT learning to PoT learning, focusing on enhancing problem-solving ability in PoT
learning based on logical skills established in CoT learning. This pedagogical strategy ensures that the
competencies developed during CoT learning positively influence the PoT learning phase, leading
to an overall improvement in solving challenging
mathematical problems.
We validate the rationality and effectiveness of
our SAAS via extensive experiments on the reputable benchmarks (Cobbe et al., 2021; Hendrycks
et al., 2021; Gao et al., 2023b; Patel et al., 2021;
Miao et al., 2021; Lu et al., 2022a; Koncel-Kedziorski et al., 2016). Most importantly, SAAS achieved state-of-the-art performance. Through this, we present a
_novel and effective perspective (i.e., our hypothesis)_
within the field of mathematics.
**2** **Related Work and Background**
The field of Large Language Models (LLMs) has
witnessed substantial advancements, yet the integration of mathematical reasoning within these
models remains a challenging frontier. Existing
researches in LLMs primarily focus on the natural
language understanding and generation (Wei et al.,
2022a; Yang et al., 2023), with limited exploration
in mathematical problem-solving. The complexity of mathematical problems, which requires not
only numerical computation but also logical inference and the understanding of abstract concepts,
still remains a notable challenge for LLMs (Zhao
et al., 2023; Lu et al., 2022b; Meadows and Freitas,
2022; Qian et al., 2022; Zhou et al., 2022; Lightman et al., 2023; Drori et al., 2021; Zhang et al.,
2019). To address this challenge, many researches
are being conducted via the following approaches:
1) prompting approach, 2) fine-tuning approach,
and 3) continued pretraining approach.
**Prompting Approach** Recent studies are based
on the prompting methods for mathematical reasoning without additional training. Recently, the
concepts of Chain of Thoughts (CoT) (Wei et al.,
2022b) and Program of Thoughts (PoT) (Chen
et al., 2022; Gao et al., 2023a) have emerged as
promising approaches to enhance mathematical
reasoning in LLMs. The CoT involves breaking
down complex reasoning problems into a series of
intermediate reasoning steps. This approach has
shown promise in improving the accuracy and reliability of LLMs in mathematical problem-solving,
by mimicking the human thought process of step-by-step reasoning. However, it is not ideal for solving complex mathematical problems (Chen et al.,
2022). To address this issue, the PoT introduces
a more algorithmic perspective. Specifically, it expresses the reasoning steps as code and delegates computation steps to a code interpreter. This approach allows the LLMs to effectively deal with
problems that require a combination of mathematical operations and logical reasoning, by structuring
the problem-solving process in a programmatic
manner.
**Fine-tuning Approach** More recently, many
works (Luo et al., 2023; Yue et al., 2023; Yu et al.,
2023; Gou et al., 2023) focus on the fine-tuning
LLMs for mathematical reasoning tasks. WizardMath (Luo et al., 2023) proposed Reinforcement
Learning from Evol-Instruct Feedback (RLEIF),
which integrates supervised fine-tuning (SFT) and
proximal policy optimization (PPO) for mathematical reasoning. MAmmoTH (Yue et al., 2023) introduces a new hybrid instruction-tuning dataset
called MathInstruct[1], which consists of CoT rationale and PoT rationale. MetaMath (Yu et al.,
2023) proposed a new instruction-tuning dataset
named MetaMathQA[2], which is augmented by
question bootstrapping methods. ToRA (Gou et al.,
2023) suggested a series of tool-integrated reasoning agents, which is fine-tuned on the tool-use
trajectories (PoT rationale) datasets generated by
prompting GPT-4.
[1https://huggingface.co/datasets/](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
[TIGER-Lab/MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
[2https://huggingface.co/datasets/](https://huggingface.co/datasets/meta-math/MetaMathQA)
[meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA)
-----
[Figure 1 content: an example question about Patrick's age, shown with (a) its CoT rationale, (b) its PoT rationale as Python code, and (c) the SAAS training flow from CoT data to PoT data.]
Figure 1: Overview of SAAS (Solving Ability Amplification Strategy) with two core strategies: i) sequential
learning strategy; ii) cognitive retention strategy.
**Continued Pretraining Approach** Some researches (Lewkowycz et al., 2022; Azerbayev et al., 2023) continually pretrain a base model to specialize in the mathematical reasoning. Minerva (Lewkowycz et al., 2022) is a large language model pretrained on general natural language data and further trained on the scientific and mathematical data. Llemma (Azerbayev et al., 2023) was also obtained through continued pretraining Code Llama (Roziere et al., 2023) on their own collected data named Proof-Pile-2[3].
In this paper, we focus on the fine-tuning approach by integrating the CoT and PoT learning.
Motivated by Dong et al. (2023), which showed that
the abilities of LLMs can be improved depending
on the SFT strategy, we analyze how much performance can be improved depending on the SFT
strategy from the perspective of solving challenging mathematical problems.
**3** **SAAS: Solving Ability Amplification**
**Strategy**
In this paper, we hypothesize that learning about
the problem-solving ability is more effective after logical skills are well established. To explore
this, we propose the sequential learning approach,
named SAAS (Solving Ability Amplification Strategy), which transitions from CoT learning to PoT
learning as shown in Figure 1. Our SAAS is
motivated by the pedagogical strategy of humans, who first learn logical skills and then develop problem-solving abilities by solving numerous problems (Glaser, 1984). In the following subsections, we describe CoT learning and PoT learning in detail.
[3https://huggingface.co/datasets/](https://huggingface.co/datasets/EleutherAI/proof-pile-2)
[EleutherAI/proof-pile-2](https://huggingface.co/datasets/EleutherAI/proof-pile-2)
**3.1** **Chain-of-Thought Learning**

It has been shown in various domains that CoT
learning, which trains LLMs with data composed
of CoT rationales, improves reasoning ability (Jie
et al., 2023; Liang et al., 2023). Thus, we first fine-tune the LLM via CoT learning to improve its mathematical reasoning ability. The primary objective
in this phase is to optimize the model parameters
for logically interpreting and responding to mathematical problems.
To achieve this, we employ a widely used optimization approach (Yu et al., 2023; Gou et al.,
2023) that seeks to find the optimal parameters,
denoted as θcot[∗], which minimize the negative log-likelihood. This is expressed mathematically as:
$$\theta_{cot}^{*} = \arg\min_{\theta}\ \frac{1}{|D_{cot}|}\sum_{(x_{cot},\,y_{cot})\in D_{cot}} -\log p_{\theta}(y_{cot}\mid x_{cot}) \tag{1}$$
where θ represents the learnable parameters of the
LLM. The dataset Dcot consists of (xcot, ycot) pairs,
where xcot denotes a mathematical question, and
_ycot is the desired CoT rationale for that question._
This optimization process is designed to ensure
that the model learns to generate CoT rationales
that are logically consistent throughout the reasoning process. This is particularly important in
the field of mathematics, since the rationale behind each step is as critical as the final answer.
By minimizing the negative log-likelihood, we effectively guide the model to generate step-by-step
explanations that mirror human problem-solving
approaches, thus enhancing its overall reasoning
capability.
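As an illustration only, Eq. (1) corresponds to standard causal-LM fine-tuning on (question, CoT rationale) pairs. The sketch below is not the authors' training code; the model name, the batching, and the fact that question tokens are left unmasked are assumptions made for the example.

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "codellama/CodeLlama-7b-hf"  # assumption: any causal LM checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def collate(batch):
    # Concatenate question x_cot and rationale y_cot; the LM loss below then
    # approximates -log p_theta(y_cot | x_cot) (question tokens are not masked here).
    texts = [q + "\n" + r + tokenizer.eos_token for q, r in batch]
    enc = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    labels = enc["input_ids"].clone()
    labels[enc["attention_mask"] == 0] = -100  # ignore padding in the loss
    enc["labels"] = labels
    return enc

def train_cot(cot_pairs, epochs=1, batch_size=4):
    loader = DataLoader(cot_pairs, batch_size=batch_size, shuffle=True, collate_fn=collate)
    model.train()
    for _ in range(epochs):
        for batch in loader:
            loss = model(**batch).loss  # mean token-level negative log-likelihood
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
```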
This phase sets the foundation for the subsequent
PoT learning phase, where the model’s enhanced
-----
[Figure 2 content: a seed question-answer pair is augmented with CoT and PoT prompts by LLMs, followed by post-processing (validation of answer, near deduplication).]
Figure 2: Overall procedure of the synthetic data generation.
reasoning ability, developed through CoT training,
is further refined and applied to more complex
problem-solving scenarios.
**3.2** **Program-of-Thought Learning**
Although the LLM optimized with parameters θcot[∗]
demonstrates improved logical skills, it still exhibits limitations in problem-solving ability, particularly in computational accuracy (Chen et al.,
2022), which will be empirically validated in Section 4.2.4. To amplify this problem-solving ability,
building upon the mathematical reasoning established in the CoT learning phase, we further fine-tune the LLM, with θcot[∗] as its starting point, using
data composed of PoT rationales.
To accomplish this, we construct a dataset
_Dpot+cot that consists of both PoT and CoT ratio-_
nales. Notably, we integrate CoT rationales alongside PoT rationales in this dataset. This is because
we observed that focusing exclusively on PoT rationales during this phase leads to a deterioration in
mathematical reasoning ability in our experiments,
as detailed in Table 3. To mitigate this cognitive forgetting, we introduce a _cognitive retention strategy_. This strategy involves randomly sampling CoT rationales and incorporating them into the PoT learning phase. Such a mixed approach (i.e., cognitive
retention strategy) ensures that the LLM retains its
previously acquired reasoning skills while adapting
to the new learning format.
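A minimal sketch of how the mixed dataset Dpot+cot could be assembled under this cognitive retention strategy is given below; the sampling ratio is an illustrative assumption, not a value reported in the paper.

```python
import random

def build_pot_plus_cot(pot_examples, cot_examples, retention_ratio=0.2, seed=0):
    """Mix all PoT rationales with a random subset of CoT rationales.

    retention_ratio is an assumed hyperparameter controlling how many CoT
    samples are carried over from the first (CoT) learning phase.
    """
    rng = random.Random(seed)
    n_keep = int(len(cot_examples) * retention_ratio)
    retained_cot = rng.sample(cot_examples, n_keep)
    mixed = list(pot_examples) + retained_cot
    rng.shuffle(mixed)
    return mixed
```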
The objective in this phase is to find the final
optimal parameters θ[∗] of the LLM, which involves
minimizing the following negative log-likelihood:
|Seed Dataset|Rationale|Models|Size|
|---|---|---|---|
|MetaMathQA|CoT|GPT, WizardMath|465K|
|MATH, GSM8K|CoT|WizardMath|300K|
|QANDA|CoT|WizardMath|120K|
|MetaMathQA|PoT|ToRA|60K|
|MATH, GSM8K|PoT|ToRA|226K|
|MathInstruct|PoT|ToRA|38K|
|QANDA|PoT|ToRA|12K|

Table 1: Summary of synthetic datasets
**4** **Experiments**
In this section, we conduct extensive experiments
to answer the following key research questions
(RQs):
- RQ1: Does SAAS quantitatively outperform its
competitors for solving challenging mathematical problems?
- RQ2: Are two core strategies of SAAS (sequential learning, cognitive retention strategy) effective in improving the accuracy?
- RQ3: Is SAAS effective in solving not only basic
but also challenging mathematical problems?
- RQ4: Does sequential learning that transitions
from CoT learning to PoT learning help improve
both the mathematical reasoning and computational accuracy?
**4.1** **Experimental Settings**
**4.1.1** **Dataset Details**
In this paper, we synthesize GSM8K (Cobbe et al.,
2021), MATH (Hendrycks et al., 2021), MetaMathQA (Yu et al., 2023), MathInstruct (Yue et al.,
2023), and QANDA. The QANDA dataset was
gathered manually through direct interaction with
the application[4]. The overall procedure of synthetic
data generation is illustrated in Figure 2.
4https://mathpresso.com/en
$$\theta^{*} = \arg\min_{\theta_{cot}^{*}}\ \frac{1}{|D_{pot+cot}|}\sum_{(x,\,y)\in D_{pot+cot}} -\log p_{\theta_{cot}^{*}}(y\mid x) \tag{2}$$
where x represents a mathematical question, and y
is the desired output, which could be either a PoT
rationale or a CoT rationale, for the given question
_x. This approach aims to harmonize the strengths_
of both CoT and PoT learning, thereby equipping
the LLM with enhanced computational accuracy and problem-solving abilities while maintaining its proficiency in logical reasoning.
-----
|Model|Size|GSM8K MATH GSM-Hard SVAMP TabMWP ASDiv MAWPS|Avg.|
|---|---|---|---|
|Col1|Col2|General Models|Col4|
|---|---|---|---|
|GPT-4 GPT-4 (PAL) ChatGPT ChatGPT (PAL) Claude-2 PaLM-2|- - - - - 540B|92.0 45.2 64.7 93.1 67.1 91.3 97.6 94.2 51.8 77.6 94.8 95.9 92.6 97.7 80.8 35.5 55.9 83.0 69.1 87.3 94.6 78.6 38.7 67.6 77.8 79.9 81.0 89.4 85.2 32.5 - - - - - 80.7 34.3 - - - - -|78.3 86.4 72.3 73.3 - -|
|LLaMa-2 Platypus-2 CodeLLaMa (PAL)|7B 7B 7B|13.3 4.1 7.8 38.0 31.1 50.7 60.9 14.4 5.4 8.6 36.7 26.5 47.9 58.4 34.0 16.6 33.6 59.0 47.3 61.4 79.6|29.4 28.3 47.4|
|SOLAR-1 LLaMa-2 Platypus-2 CodeLLaMa (PAL)|10.7B 13B 13B 13B|25.8 8.0 17.1 59.3 33.6 55.1 68.4 24.3 6.3 13.6 43.1 39.5 56.3 70.4 23.7 7.1 14.3 50.7 45.3 55.1 69.6 39.9 19.9 39.0 62.4 59.5 65.3 86.0|38.1 36.2 38.0 53.1|
|CodeLLaMa (PAL)|34B|53.3 23.9 49.4 71.0 63.1 72.4 91.5|60.7|
|LLaMa-2 Platypus-2|70B 70B|57.8 14.4 36.0 73.6 57.5 76.0 92.4 45.9 15.0 24.6 74.3 47.3 72.7 91.1|58.2 53.0|
|Col1|Col2|Mathematics Domain-Specific Models|Col4|
|---|---|---|---|
|WizardMath MetaMath MuggleMATH Toolformer MathCoder MathCoder-CODE MAmmoTH MAmmoTH-CODE ToRA SAAS ToRA-CODE SAAS-CODE|7B 7B 7B 7B 7B 7B 7B 7B 7B 7B 7B 7B|54.9 10.7 20.6 57.3 38.1 59.1 73.7 66.5 19.8 - - - - - 68.4 - - - - - - - - - 29.4 - 40.4 44.0 64.2 23.3 - - - - - 67.8 30.2 - - - - - 53.6 31.5 - - - - - 59.4 33.4 - - - - - 68.8 40.1 54.6 68.2 42.4 73.9 88.8 74.3 43.2 58.3 74.3 49.6 77.3 93.6 72.6 44.6 56.0 70.4 51.6 78.7 91.3 74.8 45.2 58.1 73.6 64.0 80.4 93.8|44.9 - - - - - - - 62.4 67.2 66.5 70.0|
|SAAS WizardMath MetaMath MuggleMATH MathCoder MathCoder-CODE MAmmoTH MAmmoTH-CODE ToRA SAAS ToRA-CODE SAAS-CODE|10.7B 13B 13B 13B 13B 13B 13B 13B 13B 13B 13B 13B|82.0 50.1 64.9 85.0 72.5 87.5 95.7 63.9 14.0 28.4 64.3 46.7 65.8 79.7 72.3 22.4 - - - - - 74.0 - - - - - - 72.6 29.9 - - - - - 74.1 35.9 - - - - - 62.0 34.2 - - - - - 64.7 36.3 - - - - - 72.7 43.0 57.3 72.9 47.2 77.2 91.3 76.6 46.2 61.6 77.8 58.2 80.5 94.3 75.8 48.1 60.5 75.7 65.4 81.4 92.5 79.4 50.6 61.6 80.6 68.2 84.5 95.4|76.8 51.8 - - - - - - 65.9 70.7 71.3 74.3|
|MathCoder-CODE MAmmoTH-CODE ToRA-CODE SAAS-CODE SAAS-LLEMA|34B 34B 34B 34B 34B|81.7 45.2 - - - - - 72.7 43.6 - - - - - 80.7 50.8 63.7 80.5 70.5 84.2 93.3 82.9 52.3 64.1 82.8 73.9 85.4 95.2 85.4 54.7 67.0 85.2 80.2 87.6 96.6|- - 74.8 76.6 79.5|
|WizardMath MetaMath MuggleMATH MathCoder ToRA|70B 70B 70B 70B 70B|81.6 22.7 50.3 80.0 49.8 76.2 86.2 82.3 26.6 - - - - - 82.3 - - - - - - 83.9 45.1 - - - - - 84.3 49.7 67.2 82.7 74.0 86.8 93.8|63.8 - - - 76.9|
Table 2: Accuracies of competitors and our SAAS on the mathematical benchmark datasets. Our SAAS models are
shown in purple color.
Specifically, we synthesize these datasets into
Chain-of-Thought (CoT) and Program-of-Thought
(PoT) rationales via various models (GPT, WizardMath (Luo et al., 2023), ToRA (Gou et al., 2023)).
To generate diverse synthetic data, we adjust some
hyperparameters such as temperature and top_p.
Then, we select only the correct responses and eliminate similar ones among these correct responses as
in Wang et al. (2022). The detailed descriptions of
seed datasets are described in Appendix A. Table 1
provides the summary of our synthetic datasets for
fine-tuning.
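The answer validation and near-deduplication described above might look like the following sketch; the string-similarity measure and threshold are illustrative assumptions rather than the paper's exact filtering procedure.

```python
from difflib import SequenceMatcher

def filter_rationales(candidates, gold_answer, sim_threshold=0.9):
    """Keep sampled rationales whose extracted answer matches the gold answer,
    then drop near-duplicates among the surviving rationales."""
    correct = [c for c in candidates if c["answer"] == gold_answer]
    kept = []
    for cand in correct:
        is_near_duplicate = any(
            SequenceMatcher(None, cand["rationale"], k["rationale"]).ratio() >= sim_threshold
            for k in kept
        )
        if not is_near_duplicate:
            kept.append(cand)
    return kept
```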
**4.1.2** **Training Details**
We used the CodeLLaMA 13B model (Roziere
et al., 2023) as our base model and fine-tuned it
with our synthetic datasets by setting the batch
size to 128. We set the learning rate to 2e−5 and use a cosine scheduler with a warm-up period (1 epoch).
For efficient model training, we used DeepSpeed
ZeRO Stage3 (Rajbhandari et al., 2020).
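The reported configuration roughly corresponds to the Hugging Face `TrainingArguments` sketch below; the number of epochs, the per-device batch split, the warm-up specification, and the DeepSpeed config path are assumptions for illustration.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="saas-codellama-13b",        # assumed output path
    per_device_train_batch_size=8,          # assumed split of the global batch size of 128
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.33,                      # stands in for the 1-epoch warm-up (3 epochs assumed)
    num_train_epochs=3,                     # assumed
    bf16=True,
    deepspeed="ds_zero3_config.json",       # assumed path to a DeepSpeed ZeRO Stage 3 config
)
```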
**4.1.3** **Model Details**
To evaluate the effectiveness of our SAAS in RQ1,
we compared it with several state-of-the-art competitors. They can be divided into the following
two groups:
- General models: GPT-4 (Achiam et al.,
2023), ChatGPT (gpt-3.5-turbo) (OpenAI,
2023), Claude-2 (Anthropic, 2023), PaLM2 (Anil et al., 2023), LLaMA-2 (Touvron et al.,
2023), Platypus-2 (Lee et al., 2023), CodeLLaMA (Roziere et al., 2023), SOLAR-1 (Kim
et al., 2023).
- Mathematics domain-specific models: WizardMath (Luo et al., 2023), MetaMath (Yu
et al., 2023), MuggleMath (Li et al., 2023a),
Toolformer (Schick et al., 2023), MathCoder (Wang et al., 2023), MammoTH (Yue
et al., 2023), ToRA (Gou et al., 2023).
As in Gou et al. (2023), we report CoT prompting results by default, and include PAL (Gao
et al., 2023a) prompting results for selected models. Within the category of mathematics domain-specific models, WizardMath, MetaMath, and MuggleMath exclusively employ CoT learning for fine-tuning. Conversely, ToRA utilizes solely PoT learning, whereas MathCoder and MAmmoTH integrate
a combination of CoT and PoT learning methodologies for fine-tuning. Also, Toolformer is trained
to utilize calculators.
**4.1.4** **Evaluation Details**
We evaluated the model’s performance and its ability to generalize mathematical reasoning using
both in-domain and out-of-domain data. For in-domain evaluation, we use the test set of MATH
and GSM8K dataset. For out-of-domain evaluation,
we utilized the following various datasets, which
are used in the previous studies (Gou et al., 2023;
Yue et al., 2023) and publicly available: GSMHard (Gao et al., 2023b), SVAMP (Patel et al.,
2021), ASDIV (Miao et al., 2021), TabMWP (Lu
et al., 2022a), and MAWPS (Koncel-Kedziorski
et al., 2016) that consists of SingleEQ, SingleOP,
AddSub, and MultiArith. These datasets ensure a
comprehensive analysis of the model’s applicability
across various mathematical contexts.
**4.2** **Results and Analysis**
We highlight the best and the second-best results in
each column (i.e., dataset) of the following tables
in bold and underline, respectively.
**4.2.1** **RQ1: Comparison with Competitors**
To demonstrate the superiority of our SAAS over
competitors, we compare the accuracies of all competitors and SAAS. In this experiment, we utilize
LLaMA-2 7B, CodeLLaMA 7B, SOLAR-1 10.7B,
LLaMA-2 13B, CodeLLaMA 13B, CodeLLaMA
34B, and Llemma-34B as our base models.[5]
Table 2 shows the results. We summarize our
empirical findings as follows. First, we observed
that mathematics domain-specific models outperform general models of similar size in almost all cases. This indicates a requisite for domain-specific models to address complex mathematical problems effectively. Second, among mathematics domain-specific competitors, ToRA, which utilizes solely
PoT learning, consistently outperforms all others
with similar size, including MathCoder and MammoTH, which integrate a combination of CoT learning and PoT learning methodologies. This implies
that simply combining CoT and PoT learning does
not effectively solve complex mathematical problems. Therefore, a strategic and careful approach
is imperative in the combination of CoT and PoT
learning. Third and most importantly, our SAAS
_consistently and significantly outperforms all competitors with similar size._ Specifically, on ∼7B
size, 7B∼13B size, 13B∼34B size, and 34B∼70B
size, SAAS outperforms the best competitors (i.e.,
ToRA-CODE and ToRA) by up to 5.26%, 7.71%,
and 6.28% in terms of average score. Note that although we could not fine-tune a 70B model, SAAS with 10.7B showed performance similar to ToRA with 70B. Furthermore, SAAS-LLEMA demonstrated performance superior to ToRA with 70B.
5We could not run experiments on the 70B model due to hardware constraints.
-----
|Strategy|GSM8K|MATH|
|---|---|---|
|Chain-of-Thought (CoT)|69.7|26.9|
|Program-of-Thought (PoT)|76.8|47.7|
|Combination of CoT and PoT|79.0|49.2|
|**SAAS**|**79.4**|**50.6**|
|without cognitive retention strategy|79.0|49.6|
|Reverse SAAS|76.8|47.1|
|without cognitive retention strategy|69.4|27.6|

Table 3: Accuracies of different learning strategies. All improvements are statistically significant with p-value ≤ 0.001.
This remarkable performance of SAAS underscores
the effectiveness of our sequential learning approach.
**4.2.2** **RQ2: Effectiveness of Sequential**
**Learning and Cognitive Retention**
**Strategy**
To further explore what factors contribute to the
improvement of our SAAS, we conduct comparative experiments on diverse learning strategies, as
shown in Table 3. Specifically, we compare CoT
learning, PoT learning, CoT+PoT learning, SAAS
that transitions from CoT learning to PoT learning, and reverse SAAS that transitions from PoT learning to CoT learning. In addition, we compare (reverse) SAAS without cognitive retention strategy
to validate the effectiveness of this strategy. From
Table 3, our empirical findings are summarized as
follows:
i) Effectiveness of the hybrid learning: Combining CoT and PoT learning significantly outperforms both CoT learning and PoT learning. This
is because CoT learning, which enhances mathematical reasoning ability, and PoT learning,
which improves problem-solving ability, play
a complementary role;
ii) Effectiveness of the sequential learning: Our
**SAAS without cognitive retention strategy**
_slightly outperforms the combination of CoT and PoT learning_ on MATH only. We conjecture that the
absence of significant improvement, despite sequential learning, can be attributed to the deterioration of mathematical reasoning abilities
during the PoT learning phase (i.e., cognitive
forgetting). Furthermore, reverse SAAS without cognitive retention strategy shows a lower
accuracy than the combination of CoT and PoT learning. This result indicates that the order of the
learning sequences in sequential learning is vital for mathematical reasoning and problem-solving abilities;

Figure 3: Accuracies on GSM8K with respect to the number of required reasoning steps.
iii) Effectiveness of the cognitive retention strategy: To mitigate the cognitive forgetting, in Sec-
tion 3.2, we proposed the cognitive retention
strategy, which includes some data samples from
the first phase in the second phase. (Reverse) SAAS
outperforms (reverse) SAAS without cognitive
retention strategy, verifying the effectiveness of
the cognitive retention strategy.
**4.2.3** **RQ3: Further Analysis of the**
**Capabilities of SAAS**
To analyze the capabilities of SAAS depending
on the difficulty of mathematical problems, we quantitatively assess the break-down accuracies for
problems with respect to the reasoning steps as
in Shi et al. (2023). Specifically, we segmented
the GSM8K dataset into 4 categories based on the
number of reasoning steps required to arrive at
an answer. Then, we quantified accuracies of CoT
learning, PoT learning, and SAAS across each designated category.
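One simple way to perform this segmentation is to count the calculation steps annotated in each GSM8K reference solution; the counting rule and the category cut-offs below are assumptions for illustration, since the paper only states that problems were grouped by required reasoning steps.

```python
def num_reasoning_steps(reference_solution: str) -> int:
    """Approximate the number of reasoning steps by counting the lines of a
    GSM8K reference solution that contain a calculator annotation (<<...>>)."""
    return sum("<<" in line for line in reference_solution.splitlines())

def step_category(n_steps: int) -> str:
    """Assign one of four assumed difficulty buckets used for the breakdown."""
    if n_steps <= 2:
        return "<=2 steps"
    if n_steps == 3:
        return "3 steps"
    if n_steps == 4:
        return "4 steps"
    return ">=5 steps"
```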
As illustrated in Figure 3, SAAS outperforms
both CoT learning and PoT learning for all categories. Moreover, as the reasoning steps in a
mathematical problem extend (i.e., the difficulty
increases), especially at step 5 or above, the difference between our SAAS and other strategies becomes more pronounced. This result supports our
hypothesis that prioritizing the learning of mathematical reasoning ability via CoT learning is helpful for the amplification of challenging problem-solving ability.
-----
Question: Benjamin is trying to break a combination lock. This particular type of lock has 5 digits from 0 to 9, and Benjamin just happens to know that there can be no repeated digits in the code for this type of lock. How many valid codes are there?

Answer: 30240

**CoT Learning**

To find the number of valid codes, we need to consider the number of choices for each digit.
For the first digit, there are 9 choices (0-9, excluding 0).
For the second digit, there are 8 choices (0-9, excluding the digit already chosen).
For the third digit, there are 7 choices (0-9, excluding the digits already chosen).
For the fourth digit, there are 6 choices (0-9, excluding the digits already chosen).
For the fifth digit, there are 5 choices (0-9, excluding the digits already chosen).
Therefore, the total number of valid codes is 9 * 8 * 7 * 6 * 5 = 1260.
The answer is: 1260

**PoT Learning**

```python
from sympy import factorial

def valid_codes():
    """Benjamin is trying to break a combination lock. This particular type of lock
    has 5 digits from 0 to 9, and Benjamin just happens to know that there can be no
    repeated digits in the code for this type of lock. How many valid codes are there?"""
    # There are 10 choices for each of the 5 digits
    num_codes = 10**5
    return num_codes

result = valid_codes()
print(result)
```
```output
100000
```
The number of valid codes is 100000.

**SAAS**

```python
from sympy import factorial

def valid_codes():
    """Benjamin is trying to break a combination lock. This particular type of lock
    has 5 digits from 0 to 9, and Benjamin just happens to know that there can be no
    repeated digits in the code for this type of lock. How many valid codes are there?"""
    # There are 10 choices for each of the 5 digits, and the choices are independent,
    # so we use the product of the number of choices for each digit
    valid_codes = factorial(10) / factorial(10 - 5)
    return valid_codes

result = valid_codes()
print(result)
```
```output
30240
```
The number of valid codes is 30240.

Figure 4: Responses of different learning approaches for a given question-answer pair.
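As a quick sanity check of the reference answer shown in Figure 4, the number of valid codes is the count of ordered selections of 5 distinct digits from 10, i.e., the permutation count P(10, 5). The one-liner below is an editorial verification, not part of the paper's artifacts.

```python
import math

# Codes are ordered selections of 5 distinct digits from 0-9: P(10, 5).
assert math.perm(10, 5) == math.factorial(10) // math.factorial(5) == 30240
print(math.perm(10, 5))   # 30240, matching the reference answer
print(10 ** 5)            # 100000, the value obtained when repeated digits are (wrongly) allowed
```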
**4.2.4** **RQ4: Case Study**

To demonstrate that our SAAS is effective in terms of both mathematical reasoning and computational accuracy, we conduct a case study showing the responses of CoT learning, PoT learning, and SAAS for a given question-answer pair. Figure 4 shows the visualization results, where the colored words in the original figure indicate incorrect responses and the words with no color mark indicate correct responses.

As depicted in Figure 4, the CoT learning approach exhibited inaccuracies in arithmetic computations as well as deficiencies in mathematical reasoning. Conversely, the PoT approach demonstrated precise calculations yet exhibited a critical deficiency in mathematical reasoning. As we expected, our SAAS exhibited precise computational accuracy along with enhanced mathematical reasoning capabilities (see its more detailed comments compared with those of PoT learning). Through this case study, we demonstrated the following three observations: i) the CoT-only learning approach leads to arithmetic calculation errors; ii) the PoT-only learning approach may result in a deficit of mathematical reasoning; iii) sequential learning that transitions from CoT learning to PoT learning helps improve computational accuracy as well as mathematical reasoning.

**5** **Conclusion**

In this paper, we demonstrated the following two important points for solving challenging mathematical problems: (1) prioritizing the learning of mathematical reasoning ability via Chain-of-Thought (CoT) learning is helpful for the amplification of problem-solving ability during Program-of-Thought (PoT) learning; (2) for effective sequential learning, it is necessary to employ a cognitive retention strategy that incorporates some data samples from the initial phase into the subsequent phase. In light of this, we proposed a novel sequential learning approach, named SAAS (Solving Ability Amplification Strategy), which progresses from CoT learning to PoT learning with a cognitive retention strategy. Through extensive experiments on reputable benchmarks, we demonstrated that SAAS consistently and significantly outperforms all competitors, marking a significant advancement in the field of mathematical reasoning in LLMs.
-----
**Acknowledgements**
This work was supported by the "2023 KT ICT AI2XL Laboratory R&D Fund" project funded by KT.
**Limitations**
This study, while advancing the field of computational linguistics through the use of Large Language Models (LLMs), encounters several limitations that are important to acknowledge.
Firstly, the intricate nature of LLMs can sometimes lead to unpredictability in their outputs. This
unpredictability can be particularly challenging
when dealing with mathematical reasoning, where
precision and accuracy are paramount, making it
difficult to utilize LLMs in applications in the field
of mathematics.
Furthermore, despite advancements via our
study, LLMs still have limitations in their understanding and application of advanced mathematical
concepts. While they can perform well on structured problems, their ability to handle abstract and
complex mathematical reasoning is still an area of
ongoing research and development.
Additionally, the reliance on synthetic data for
training these models also presents a limitation.
While synthetic datasets are useful for mitigating the scarcity of real-world data, they may not always accurately capture real-world scenarios, leading to
potential gaps in the model’s performance when
applied to practical, real-world tasks.
Finally, ethical considerations, particularly
around the potential misuse of AI, remain a concern. Ensuring that LLMs are used responsibly and
do not perpetuate biases is an ongoing challenge in
the field.
In summary, while our study leverages the capabilities of LLMs to enhance mathematical reasoning in computational linguistics, it is important to
recognize the limitations related to unpredictability
of LLMs, understanding of advanced mathematical concepts, reliance on synthetic data, and ethical
considerations. These limitations highlight the need
for continued research and development in the field
to address these challenges effectively.
**Ethics Statement**
In this research, we have diligently adhered to the
highest ethical standards of scientific inquiry and
data management, ensuring the integrity and reliability of our findings. The design and execution of
our experiments were grounded in fairness and objectivity, without favoring any particular outcome.
This commitment was reflected in our meticulous
planning and consistent application of methodologies across various datasets.
We also placed a strong emphasis on data privacy
and security, handling all data, especially synthetic
data generated for our models, in compliance with
relevant data protection laws and guidelines. We
confirmed that all the data used in our experiments
were free of licensing issues. Our approach to data
was characterized by strict anonymization protocols and its use was confined strictly to research
purposes. We have strived for transparency in our
research process, documenting all methodologies,
data sources, and analysis techniques clearly, which
underpins our commitment to the reproducibility
of scientific research. This allows other researchers
to verify our results and build upon our work, contributing to the collective knowledge in the field.
Recognizing the broader impacts of AI and
LLMs on society, our research was conducted with
a profound sense of responsibility. We were mindful of the ethical implications of AI development
and aimed to create models that are effective yet
ethically aligned, avoiding any form of biased, discriminatory, or harmful applications of these technologies. We believe our research makes a positive
contribution to the field of computational linguistics and AI, particularly in enhancing the mathematical reasoning capabilities of Large Language
Models in a manner that is ethically sound and
socially responsible.
Our work underscores our commitment to conducting scientifically rigorous and ethically responsible research, maintaining the highest standards of
integrity in AI and computational linguistics.
**References**
Josh Achiam, Steven Adler, Sandhini Agarwal, Lama
Ahmad, Ilge Akkaya, Florencia Leoni Aleman,
Diogo Almeida, Janko Altenschmidt, Sam Altman,
Shyamal Anadkat, et al. 2023. Gpt-4 technical report.
_arXiv preprint arXiv:2303.08774._
Aida Amini, Saadia Gabriel, Peter Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi.
2019. Mathqa: Towards interpretable math word
problem solving with operation-based formalisms.
_arXiv preprint arXiv:1905.13319._
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak
Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng
-----
Chen, et al. 2023. Palm 2 technical report. arXiv
_preprint arXiv:2305.10403._
Anthropic. 2023. Model card and evaluations for
claude models. [URL https://www-files.](https://www-files.anthropic.com/production/images/Model-Card-Claude-2.pdf)
[anthropic.com/production/images/](https://www-files.anthropic.com/production/images/Model-Card-Claude-2.pdf)
[Model-Card-Claude-2.pdf.](https://www-files.anthropic.com/production/images/Model-Card-Claude-2.pdf)
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster,
Marco Dos Santos, Stephen McAleer, Albert Q Jiang,
Jia Deng, Stella Biderman, and Sean Welleck. 2023.
Llemma: An open language model for mathematics.
_arXiv preprint arXiv:2310.10631._
Wenhu Chen, Xueguang Ma, Xinyi Wang, and
William W Cohen. 2022. Program of thoughts
prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint
_arXiv:2211.12588._
Wenhu Chen, Ming Yin, Max Ku, Pan Lu, Yixin Wan,
Xueguang Ma, Jianyu Xu, Xinyi Wang, and Tony
Xia. 2023. Theoremqa: A theorem-driven question
answering dataset. arXiv preprint arXiv:2305.12524.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, et al. 2021. Training verifiers to solve math
word problems. arXiv preprint arXiv:2110.14168.
Guanting Dong, Hongyi Yuan, Keming Lu, Chengpeng Li, Mingfeng Xue, Dayiheng Liu, Wei Wang,
Zheng Yuan, Chang Zhou, and Jingren Zhou. 2023.
How abilities in large language models are affected
by supervised fine-tuning data composition. arXiv
_preprint arXiv:2310.05492._
Iddo Drori, Sunny Tran, Roman Wang, Newman Cheng,
Kevin Liu, Leonard Tang, Elizabeth Ke, Nikhil
Singh, Taylor L Patti, Jayson Lynch, et al. 2021.
A neural network solves and generates mathematics problems by program synthesis: Calculus, differential equations, linear algebra, and more. CoRR,
_abs/2112.15594._
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon,
Pengfei Liu, Yiming Yang, Jamie Callan, and Graham
Neubig. 2023a. Pal: Program-aided language models.
In International Conference on Machine Learning.
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon,
Pengfei Liu, Yiming Yang, Jamie Callan, and Graham
Neubig. 2023b. Pal: Program-aided language models.
In International Conference on Machine Learning,
pages 10764–10799. PMLR.
Robert Glaser. 1984. Education and thinking: The role
of knowledge. American psychologist.
Zhibin Gou, Zhihong Shao, Yeyun Gong, Yujiu Yang,
Minlie Huang, Nan Duan, Weizhu Chen, et al.
2023. Tora: A tool-integrated reasoning agent
for mathematical problem solving. arXiv preprint
_arXiv:2309.17452._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the math dataset. arXiv preprint
_arXiv:2103.03874._
Weisen Jiang, Han Shi, Longhui Yu, Zhengying Liu,
Yu Zhang, Zhenguo Li, and James T Kwok. 2023.
Forward-backward reasoning in large language models for mathematical verification. _arXiv preprint_
_arXiv:2308.07758._
Zhanming Jie, Trung Quoc Luong, Xinbo Zhang, Xiaoran Jin, and Hang Li. 2023. Design of chain-of-thought in math problem solving. arXiv preprint
_arXiv:2309.11054._
Dahyun Kim, Chanjun Park, Sanghoon Kim, Wonsung
Lee, Wonho Song, Yunsu Kim, Hyeonwoo Kim,
Yungi Kim, Hyeonju Lee, Jihoo Kim, et al. 2023.
Solar 10.7B: Scaling large language models with
simple yet effective depth up-scaling. arXiv preprint
_arXiv:2312.15166._
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate
Kushman, and Hannaneh Hajishirzi. 2016. Mawps:
A math word problem repository. In Proceedings of
_the 2016 conference of the north american chapter of_
_the association for computational linguistics: human_
_language technologies, pages 1152–1157._
Ariel N Lee, Cole J Hunter, and Nataniel Ruiz. 2023.
Platypus: Quick, cheap, and powerful refinement of
llms. arXiv preprint arXiv:2308.07317.
Aitor Lewkowycz, Anders Andreassen, David Dohan,
Ethan Dyer, Henryk Michalewski, Vinay Ramasesh,
Ambrose Slone, Cem Anil, Imanol Schlag, Theo
Gutman-Solo, et al. 2022. Solving quantitative reasoning problems with language models. Advances
_in Neural Information Processing Systems, 35:3843–_
3857.
Chengpeng Li, Zheng Yuan, Guanting Dong, Keming
Lu, Jiancan Wu, Chuanqi Tan, Xiang Wang, and
Chang Zhou. 2023a. Query and response augmentation cannot help out-of-domain math reasoning generalization. arXiv preprint arXiv:2310.05506.
Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani,
Dmitrii Khizbullin, and Bernard Ghanem. 2023b.
Camel: Communicative agents for "mind" exploration of large scale language model society. arXiv
_preprint arXiv:2303.17760._
Zhenwen Liang, Dian Yu, Xiaoman Pan, Wenlin Yao,
Qingkai Zeng, Xiangliang Zhang, and Dong Yu. 2023.
Mint: Boosting generalization in mathematical reasoning via multi-view fine-tuning. arXiv preprint
_arXiv:2307.07951._
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri
Edwards, Bowen Baker, Teddy Lee, Jan Leike,
John Schulman, Ilya Sutskever, and Karl Cobbe.
2023. Let’s verify step by step. _arXiv preprint_
_arXiv:2305.20050._
-----
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word
problems. arXiv preprint arXiv:1705.04146.
Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu,
Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark,
and Ashwin Kalyan. 2022a. Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning. arXiv preprint arXiv:2209.14610.
Pan Lu, Liang Qiu, Wenhao Yu, Sean Welleck, and
Kai-Wei Chang. 2022b. A survey of deep learning for mathematical reasoning. _arXiv preprint_
_arXiv:2212.10535._
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei
Lin, Shifeng Chen, and Dongmei Zhang. 2023. Wizardmath: Empowering mathematical reasoning for
large language models via reinforced evol-instruct.
_arXiv preprint arXiv:2308.09583._
Lucie Charlotte Magister, Jonathan Mallinson, Jakub
Adamek, Eric Malmi, and Aliaksei Severyn. 2022.
Teaching small language models to reason. arXiv
_preprint arXiv:2212.08410._
Jordan Meadows and André Freitas. 2022. A survey in
mathematical language processing. arXiv preprint
_arXiv:2205.15231._
Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su.
2021. A diverse corpus for evaluating and developing
english math word problem solvers. arXiv preprint
_arXiv:2106.15772._
Swaroop Mishra, Arindam Mitra, Neeraj Varshney,
Bhavdeep Sachdeva, Peter Clark, Chitta Baral, and
Ashwin Kalyan. 2022. Numglue: A suite of fundamental yet challenging mathematical reasoning tasks.
_arXiv preprint arXiv:2204.05660._
[OpenAI. 2023. Chat-gpt. URL https://openai.](https://openai.com/blog/chatgpt)
[com/blog/chatgpt.](https://openai.com/blog/chatgpt)
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
2021. Are nlp models really able to solve
simple math word problems? _arXiv preprint_
_arXiv:2103.07191._
Jing Qian, Hong Wang, Zekun Li, Shiyang Li, and
Xifeng Yan. 2022. Limitations of language models
in arithmetic and symbolic induction. arXiv preprint
_arXiv:2208.05051._
Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase,
and Yuxiong He. 2020. Zero: Memory optimizations
toward training trillion parameter models. In SC20:
_International Conference for High Performance Com-_
_puting, Networking, Storage and Analysis._
Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten
Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi,
Jingyu Liu, Tal Remez, Jérémy Rapin, et al. 2023.
Code llama: Open foundation models for code. arXiv
_preprint arXiv:2308.12950._
Timo Schick, Jane Dwivedi-Yu, R Dessı, Roberta
Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola
Cancedda, and Thomas Scialom. 2023. Toolformer:
Language models can teach themselves to use tools
(2023). arXiv preprint arXiv:2302.04761.
Freda Shi, Xinyun Chen, Kanishka Misra, Nathan
Scales, David Dohan, Ed H Chi, Nathanael Schärli,
and Denny Zhou. 2023. Large language models can
be easily distracted by irrelevant context. In Inter_national Conference on Machine Learning, pages_
31210–31227. PMLR.
Kumar Shridhar, Alessandro Stolfo, and Mrinmaya
Sachan. 2023. Distilling reasoning capabilities into
smaller language models. In Findings of the Associa_tion for Computational Linguistics: ACL 2023, pages_
7059–7073.
Avijit Thawani, Jay Pujara, Pedro A Szekely, and Filip
Ilievski. 2021. Representing numbers in nlp: a survey
and a vision. arXiv preprint arXiv:2103.13136.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint_
_arXiv:2307.09288._
Ke Wang, Houxing Ren, Aojun Zhou, Zimu Lu, Sichun
Luo, Weikang Shi, Renrui Zhang, Linqi Song,
Mingjie Zhan, and Hongsheng Li. 2023. Mathcoder: Seamless code integration in llms for enhanced mathematical reasoning. _arXiv preprint_
_arXiv:2310.03731._
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2022. Self-instruct: Aligning language model with self generated instructions. arXiv
_preprint arXiv:2212.10560._
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel,
Barret Zoph, Sebastian Borgeaud, Dani Yogatama,
Maarten Bosma, Denny Zhou, Donald Metzler, et al.
2022a. Emergent abilities of large language models.
_arXiv preprint arXiv:2206.07682._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou,
et al. 2022b. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural
_Information Processing Systems._
Jingfeng Yang, Hongye Jin, Ruixiang Tang, Xiaotian
Han, Qizhang Feng, Haoming Jiang, Bing Yin, and
Xia Hu. 2023. Harnessing the power of llms in practice: A survey on chatgpt and beyond. arXiv preprint
_arXiv:2304.13712._
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu,
Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. 2023.
Metamath: Bootstrap your own mathematical questions for large language models. _arXiv preprint_
_arXiv:2309.12284._
-----
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting
Dong, Chuanqi Tan, and Chang Zhou. 2023. Scaling relationship on learning mathematical reasoning with large language models. _arXiv preprint_
_arXiv:2308.01825._
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen.
2023. Mammoth: Building math generalist models
through hybrid instruction tuning. arXiv preprint
_arXiv:2309.05653._
Dongxiang Zhang, Lei Wang, Luming Zhang, Bing Tian
Dai, and Heng Tao Shen. 2019. The gap of semantic
parsing: A survey on automatic math word problem
solvers. IEEE transactions on pattern analysis and
_machine intelligence._
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang,
Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen
Zhang, Junjie Zhang, Zican Dong, et al. 2023. A
survey of large language models. _arXiv preprint_
_arXiv:2303.18223._
Hattie Zhou, Azade Nova, Hugo Larochelle, Aaron
Courville, Behnam Neyshabur, and Hanie Sedghi.
2022. Teaching algorithmic reasoning via in-context
learning. arXiv preprint arXiv:2211.09066.
-----
**A** **Detailed Descriptions of Seed Datasets**
The detailed description of each seed dataset is as follows:
i) GSM8K (Cobbe et al., 2021): It focuses on elementary-level math problems to evaluate abilities that
handle logical reasoning and parse and interpret math questions presented in natural language;
ii) MATH (Hendrycks et al., 2021): It includes a wide range of math problems, ranging from elementary arithmetic to advanced topics such as algebra, calculus, and geometry, which are more challenging than those in GSM8K;

iii) MetaMathQA (Yu et al., 2023): It is a dataset augmented through question rephrasing, forward-backward reasoning (Jiang et al., 2023), self-verification, and answer augmentation based on GSM8K and MATH;

iv) MathInstruct (Yue et al., 2023): It consists of a mix of 13 types of CoT and PoT mathematical rationales from various mathematical fields. Specifically, the CoT-type data consist of GSM8K, GSM8K-RFT (Yuan et al., 2023), AQuA-RAT (Ling et al., 2017), MATH, TheoremQA (Chen et al., 2023), Camel-Math (Li et al., 2023b), and College-Math. The PoT-type data consist of GSM8K, AQuA-RAT, TheoremQA, MathQA (Amini et al., 2019), and NumGLUE (Mishra et al., 2022);
v) QANDA: It consists of a diverse collection of real-world mathematical questions and detailed solutions,
catering to a broad spectrum of mathematical concepts and difficulty levels.
-----
| [
"Hyeonwoo, Kim",
"Gyoungjin, Gim",
"Yungi, Kim",
"Jihoo, Kim",
"Byungju, Kim",
"Wonseok, Lee",
"Chanjun, Park"
] | 2024-04-24T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2404.03887 | https://arxiv.org/abs/2404.03887 | https://www.semanticscholar.org/paper/53cd612f5046901ca454f3c72dcad45a84f4f31d |
SEGO: Sequential Subgoal Optimization for Mathematical Problem-Solving

Large Language Models (LLMs) have driven substantial progress in artificial intelligence in recent years, exhibiting impressive capabilities across a wide range of tasks, including mathematical problem-solving. Inspired by the success of subgoal-based methods, we propose a novel framework called SEquential subGoal Optimization (SEGO) to enhance LLMs' ability to solve mathematical problems. By establishing a connection between the subgoal breakdown process and the probability of solving problems, SEGO aims to identify better subgoals with theoretical guarantees. Addressing the challenge of identifying suitable subgoals in a large solution space, our framework generates problem-specific subgoals and adjusts them according to carefully designed criteria. Incorporating these optimized subgoals into the policy model training leads to significant improvements in problem-solving performance. We validate SEGO's efficacy through experiments on two benchmarks, GSM8K and MATH, where our approach outperforms existing methods, highlighting the potential of SEGO in AI-driven mathematical problem-solving.

A novel framework called SEGO is proposed to enhance LLMs' ability to solve mathematical problems by establishing a connection between the subgoal breakdown process and the probability of solving problems, and SEGO aims to identify better subgoals with theoretical guarantees.

# SEGO: Sequential Subgoal Optimization for Mathematical Problem-Solving
**Xueliang Zhao[♠][∗]** **Xinting Huang[⋆]** **Wei Bi[⋆]** **Lingpeng Kong[♠]**
_♠The University of Hong Kong_
⋆Tencent AI Lab
[email protected]
**Abstract**
Large Language Models (LLMs) have driven
substantial progress in artificial intelligence in
recent years, exhibiting impressive capabilities
across a wide range of tasks, including mathematical problem-solving. Inspired by the success of subgoal-based methods, we propose a
novel framework called SEquential subGoal
**Optimization (SEGO) to enhance LLMs’ abil-**
ity to solve mathematical problems. By establishing a connection between the subgoal
breakdown process and the probability of solving problems, SEGO aims to identify better
subgoals with theoretical guarantees. Addressing the challenge of identifying suitable subgoals in a large solution space, our framework
generates problem-specific subgoals and adjusts them according to carefully designed criteria. Incorporating these optimized subgoals
into the policy model training leads to significant improvements in problem-solving performance. We validate SEGO’s efficacy through
experiments on two benchmarks, GSM8K and
MATH, where our approach outperforms existing methods, highlighting the potential of
SEGO in AI-driven mathematical problem-solving.
**1** **Introduction**
In recent years, the emergence of Large Language
Models (LLMs) has marked a significant milestone
in the field of artificial intelligence. Models such as
ChatGPT and LLaMA have demonstrated remarkable capabilities across diverse tasks. Within this
context, addressing mathematical problems has attracted considerable interest from researchers, as
it serves as a prominent showcase of the reasoning
capabilities inherent in LLMs. Reasoning involves
a multitude of aspects, among which the ability to
decompose the overall problem into smaller, more
manageable subproblems (i.e., subgoals) is particularly essential for effective problem-solving.
_∗_ This work was done during an internship at Tencent AI
Lab.
In this paper, we draw inspiration from the successful application of subgoal-based methods in
both RL and LLMs (Zhang et al., 2020; Zhao et al.,
2023) and introduce a novel framework called
SEGO (SEquential subGoal Optimization). Intuitively, a good subgoal should serve as a bridge
to solving a bigger problem, such that breaking
down the problem into these subgoals makes the
subproblems easier to solve, thereby increasing the
likelihood of solving the entire problem. SEGO
quantifies this intuition by establishing a theoretical connection between the subgoal breakdown
process and the probability of solving the problem
(Eq. 6). Concretely, we construct a lower bound
on the probability of solving the complete problem using a proposal distribution considering a
specific subgoal. We then employ a method inspired by annealed importance sampling (Neal,
2001) to efficiently navigate through vast search
spaces, seeking the subgoal corresponding to the
theoretically optimal proposal distribution, while
ensuring the process doesn’t get trapped in suboptimal subgoals (§3.2). By incorporating these
sequentially optimized subgoals into the training of
the policy model, we achieve significant improvements in solving mathematical problems.
To empirically validate the efficacy of SEGO,
we conducted experiments on two primary benchmarks: GSM8K (Cobbe et al., 2021) and
MATH (Hendrycks et al., 2021). Our approach
demonstrated marked superiority against existing
methods with comparable model sizes, highlighting the potential of SEGO in advancing the field
of AI-driven mathematical problem-solving. We
hope that our findings can open up new avenues
for future research on the applicability of LLMs to
complex tasks in diverse domains (Yao et al., 2022;
Liu et al., 2023).
-----
**2** **Preliminaries**
**2.1** **Problem Formulation**
This study focuses on a goal-conditioned reinforcement learning (RL) framework, consisting of goal
space (G), state space (S), action space (A), transition probability (P), and reward function (R). The
transition probability P(s′ | s, a) indicates the probability of transitioning from a current state s to a new state s′ after an action a. The reward function
_R(s, g) gives a reward of 1 if the goal g is reached_
at state s, and 0 otherwise. The policy π(a|s, g)
maps state-goal pairs to actions in A.
Building on the goal-conditioned RL framework,
in mathematical problem-solving, an action denotes a step in the solution process, while the state
comprises cumulative actions. The (sub-)goal is
the specific problem targeted for resolution. The
transition probability, P(s′ | s, a), uniquely assigns a probability of 1 to the state s′ = [s; a], where [ ; ] represents sequence concatenation, and 0 to all others. The reward function R(s, g) evaluates whether
state s correctly solves the goal g. In this work,
we employ a program-aided approach (Gao et al.,
2023; Chen et al., 2022; Drori et al., 2022) to form
the solution. For illustration, consider the goal g as
“Calculate sin(30°)”. Here, a state s could be “import math; def solve(): angle = math.radians(30);”,
and an action a, “return math.sin(angle)”. This
work aims to create a policy network that predicts
trajectories for new goals. It uses a demonstration
dataset D = {τ : (g; s_0, a_0, . . ., s_ℓ, a_ℓ)}, where τ represents a trajectory of length ℓ with states s_t and actions a_t at every timestep. The special state, ŝ, consists solely of essential imports and function
definitions. The task is to predict a trajectory, starting from ˆs and aligned with a given goal g. This is
formulated as:
$$p(\tau \mid \hat{s}, g) = \prod_{t=0}^{\ell} \pi(a_t \mid s_t, g) \cdot \mathcal{P}(s_{t+1} \mid a_t, s_t), \quad (1)$$

with s_0 = ŝ.

**2.2** **Subgoal-based Reinforcement Learning**

The main idea behind subgoal-based RL involves decomposing a challenging task into two more manageable sub-tasks, each of which can be addressed by the existing policy (Li et al., 2022). In subgoal-based RL, a typical approach consists of three phases: subgoal collection, trajectory sampling, and training, which together form a cyclical process.

The subgoal collection phase is central to this framework and follows a “generate-select” pipeline. Specifically, for a challenging goal g, the process generates a variety of potential subgoals, each paired with its respective state. A suitable subgoal, g_w, and its state, s_w[1], are then selected based on criteria that vary among different algorithms (Li et al., 2022; Zhang et al., 2021; Chane-Sane et al., 2021). These criteria typically ensure that the chosen subgoal is attainable from the initial state and facilitates achieving the final goal. For example, the subgoal might be “Calculate the radian value of 30°” with the state “import math; def solve(): angle = math.radians(30);”. In the trajectory sampling phase, trajectories τ1 and τ2 are drawn from the distributions p(τ | ŝ, g_w) and p(τ | s_w, g) respectively. The final training phase utilizes these trajectories to optimize the policy network, thereby enhancing the ability to achieve both subgoals and the ultimate goal.
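To make the three-phase cycle concrete, the following schematic sketch writes it as a plain Python loop; every callable passed in (generate_subgoals, select, rollout, update) is a placeholder standing in for the corresponding learned component, not an implementation from the paper.

```python
def subgoal_rl_cycle(goal, s_hat, policy, generate_subgoals, select, rollout, update, iters=3):
    """Generic 'generate-select' subgoal loop: pick a subgoal (s_w, g_w),
    sample a trajectory from s_hat to g_w and from s_w to goal, then train on both."""
    for _ in range(iters):
        candidates = generate_subgoals(s_hat, goal)          # phase 1: subgoal collection
        s_w, g_w = select(candidates, s_hat, goal)           # choose a reachable, useful subgoal
        tau1 = rollout(policy, start=s_hat, target=g_w)      # phase 2: trajectory sampling
        tau2 = rollout(policy, start=s_w, target=goal)
        policy = update(policy, [tau1, tau2], goal)          # phase 3: training
    return policy
```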
**3** **Method**
This work addresses challenges in subgoal-based
RL, focusing on the suboptimality of generated subgoals and their selection process’s lack of theoretical guarantees. We introduce the SEGO framework,
which innovates beyond the traditional “generateselect” pipeline. SEGO employs a “generate(sequentially) optimize-select” approach (Figure 1),
encompassing a policy network, subgoal generator,
subgoal optimizer, reward network, and value network. Notably, only the policy network is used in
the testing phase.
The “generate-(sequentially) optimize-select”
pipeline starts with initial subgoal generation, followed by sequential optimizations. In each iteration, a new subgoal is proposed and evaluated for
its increased likelihood of achieving the goal. Improved subgoals are retained for further refinement.
This results in a collection of refined subgoals, from
which the most suitable are selected based on specific criteria.
SEGO presents substantial advantages: (1) Its
sequential optimization aligns generated subgoals
more closely with an optimal subgoal distribution.
(2) It accurately calculates subgoal weights based
on an unbiased estimate of the probability of reaching a goal from a given state.
1In this work, the subscript “w” denotes “waypoint”, which
is used interchangeably with “subgoal”.
-----
Figure 1: An overview of the “generate-(sequentially) optimize-select” pipeline. Within this pipeline, the symbols f, h, r, and v^π correspond to the subgoal generator, subgoal optimizer, reward network, and value network, respectively. The terms g, s_w, and g_w denote the intended goal, the subgoal state, and the subgoal. The pipeline initiates by generating a diverse set of subgoals. Each subgoal is then optimized in sequence. The process ends with the selection of the most appropriate subgoal.
**Road Map.** We start by discussing the initialization fine-tuning in §3.1, which includes setting up
key components and preparing initial training data.
Next, we detail subgoal-based fine-tuning in §3.2,
the core of our framework. This section explains
the “generate-(sequentially) optimize-select” process and how the resultant data updates various
component parameters. The overall algorithm is
outlined in §3.3.
**3.1** **Initialization Fine-tuning**
The SEGO framework employs the following key
components: policy network, subgoal generator,
subgoal optimizer, reward network, and value network, all of which are implemented using large
language models (LLMs) (Touvron et al., 2023a,b;
Rozière et al., 2023). We defer the training details of these components after an overview of their
implementation. More details about these components are provided in Appendix B.
**Policy Network.** The policy network π(a | s, g)
processes the current state and goal, represented as
token sequences, to predict actions. This network
employs standard decoding methods like greedy
search or top-k sampling (Holtzman et al., 2019).
**Subgoal Generator and Subgoal Optimizer.** The subgoal generator f, represented as s_w, g_w = f(s, g), breaks complex tasks into simpler subtasks, transforming the current state and goal into a subgoal and its associated state. This method ensures manageable progression towards the ultimate goal. The subgoal optimizer h, denoted as s′_w, g′_w = h(s_w, g_w, s, g), refines these subgoals and states to improve goal decomposition efficiency.
**Reward Network and Value Network.** The reward network r(s, g) evaluates if the current state
achieves the goal, acting as a surrogate for the
actual reward function R. The value network v^π(s, g), a regression model, assesses the success probability from a given state under policy π.
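Viewed as code, the five components amount to the following callable signatures; these are type stubs mirroring the notation above, written here for orientation, not a released API.

```python
from typing import Callable, Tuple

State, Goal, Action = str, str, str

# The five learned components of SEGO, written as plain callables so the
# notation in this subsection has a concrete shape; these are type stubs only.
Policy = Callable[[State, Goal], Action]                              # pi(a | s, g)
SubgoalGenerator = Callable[[State, Goal], Tuple[State, Goal]]        # f(s, g) -> (s_w, g_w)
SubgoalOptimizer = Callable[[State, Goal, State, Goal], Tuple[State, Goal]]  # h(s_w, g_w, s, g)
RewardNet = Callable[[State, Goal], float]                            # r(s, g): ~1 if g is solved at s
ValueNet = Callable[[State, Goal], float]                             # v^pi(s, g): success probability
```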
Training of the SEGO components begins with a
goal collection, D_g = {g}, and a trajectory dataset, D = {τ : (g; s_0, a_0, s_1, a_1, . . .)}, created using
GPT-3.5-turbo.[2] This dataset comprises various
mathematical problem-solving trajectories. The
policy network is trained with triplets (g, si, ai)
from D to predict action ai for a given state si and
goal g. For each trajectory, a state st is randomly
chosen, and GPT-3.5-turbo predicts the intermediate subgoal gw and its state sw. This step trains
the subgoal generator to predict subgoals and their
states from (st, g). GPT-3.5-turbo also introduces
slight modifications to gw and sw, producing ˜gw
and ˜sw. The subgoal optimizer is trained to restore
(gw, sw) from these corrupted versions, considering the current state st and goal g.
After initializing the policy network, it gener
2Further details regarding the trajectory dataset are elaborated in Appendix C.
-----
**Algorithm 1 SEGO: Sequential Subgoal Optimization**
**Requires:** _π, f_, h, r, v[π]: policy network, subgoal generator, subgoal optimizer, reward network, value network, respectively.
_Kmax:_ maximum iterations for subgoal-based fine-tuning.
_Dg:_ a collection of goals (or mathematical problems).
1: Construct the trajectory dataset D using GPT-3.5-turbo and Dg.
2: Initialize and fine-tune π, f, h, r, and v[π] with Dg and D.
3: k ← 0.
4: while k < Kmax do
5: Dp ←∅, Dv ←∅. _▷_ Prepare datasets Dp for policy and Dv for value network training.
6: for τ ∈D do _▷_ Each τ is a trajectory of the form (g; s0, a0, . . ., st, at, . . .).
7: Generate a diverse set of subgoals via Eq.2.
8: Optimize each subgoal using Eq.3 and Eq.4.
9: Select a subgoal via Eq.5.
10: Sample new trajectories τ1 and τ2 utilizing the selected subgoal.
11: Calculate ¯α and form a triplet (g, st, ¯α). _▷_ _α¯ estimates the probability of achieving g from st under π._
12: Update Dp ←Dp ∪{τ1, τ2}, Dv ←Dv ∪{(g, st, ¯α)}.
13: Train policy network π with Dp and value network v[π] with Dv.
14: _k ←_ _k + 1._
ates trajectories for each goal in D_g, with goals linked to human-annotated answers. Trajectories leading to correct answers are positive examples, {τ : (g; s_0, a_0, . . ., s_ℓ, a_ℓ)}, and those missing the correct answers are negative examples, {τ : (g; s̃_0, ã_0, . . ., s̃_ℓ, ã_ℓ)}. The reward network is trained to classify the final state-goal pair (s_ℓ, g) as positive and (s̃_ℓ, g) as negative. Simultaneously, the value network is trained to regress towards 1 for (s_t, g) and 0 for (s̃_t, g), where s_t and s̃_t are randomly selected from their respective sets.
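A compact sketch of how these positive/negative training pairs could be assembled is given below; the data-structure layout is an assumption for illustration.

```python
import random

def build_reward_value_data(goals, trajectories, answers, seed=0):
    """trajectories[g] -> list of (states, final_answer) rollouts for goal g.
    Returns (reward_examples, value_examples) with binary labels."""
    rng = random.Random(seed)
    reward_examples, value_examples = [], []
    for g in goals:
        for states, final_answer in trajectories[g]:
            label = int(final_answer == answers[g])
            # reward network: classify the final state-goal pair as positive/negative
            reward_examples.append((states[-1], g, label))
            # value network: regress a randomly chosen intermediate state to 1 or 0
            value_examples.append((rng.choice(states), g, float(label)))
    return reward_examples, value_examples
```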
**3.2** **Subgoal-based Fine-tuning**
The policy network, when only fine-tuned at the
initialization phase, struggles with complex problems (Luo et al., 2023). Inspired by recent advancements in subgoal-based RL (Li et al., 2022; Zhang
et al., 2021; Chane-Sane et al., 2021) and annealed
importance sampling (Neal, 2001), we introduce
a fine-tuning stage that emphasizes decomposing
tasks into subgoals. Additionally, it evolves from
the traditional “generate-select” pipeline to a more
advanced “generate-(sequentially)optimize-select”
approach.
**Subgoal Collection.** For each trajectory τ : (g; s_0, a_0, s_1, a_1, . . .) ∈ D, this phase aims to generate a subgoal pair (s_w, g_w) that decomposes g into more manageable subtasks. The procedure starts by generating N independent pairs of initial subgoals, denoted as {(s_w^(i,1), g_w^(i,1))}_{i=1}^{N}. Subsequently, each pair (s_w^(i,1), g_w^(i,1)) proceeds through a sequential optimization process, which yields a sequence of subgoal pairs {(s_w^(i,1), g_w^(i,1)), . . ., (s_w^(i,η), g_w^(i,η))}, where η represents the maximum number of iterations within the sequential optimization.
Within each trajectory τ, a state s_t is randomly selected from the set {s_0, s_1, . . .}. Subsequently, the subgoal generator is tasked with producing a series of subgoals, defined as follows:

$$s_w^{(i,1)}, g_w^{(i,1)} = f(s_t, g), \quad \text{for } i = 1, \ldots, N \quad (2)$$
To ensure the generation of diverse subgoals, a
top-k sampling strategy (Holtzman et al., 2019) is
implemented.
The pipeline then progresses to a sequential optimization process. At the j-th iteration, the subgoal
optimizer proposes a potentially improved subgoal
pair, which is defined as:
$$s_w^{(i,j)}, g_w^{(i,j)} = h(s_w^{(i,j-1)}, g_w^{(i,j-1)}, s_t, g), \quad \text{for } i = 1, \ldots, N \quad (3)$$

To ensure that the new subgoal pair (s_w^(i,j), g_w^(i,j)) improves over its predecessor (s_w^(i,j−1), g_w^(i,j−1)), it is necessary to establish a rigorous criterion for evaluation. This criterion is derived from a theoretical perspective to guarantee an unbiased estimation, as detailed in Proposition 4.3. Formally, the criterion is defined as:
$$\begin{aligned}
\Delta =\;& \beta_{j-1}\,\log\frac{p(s_w^{(i,j)}, g_w^{(i,j)} \mid s_t, g; f)}{p(s_w^{(i,j-1)}, g_w^{(i,j-1)} \mid s_t, g; f)} \\
&+ (1-\beta_{j-1})\,\log\!\left[\frac{v^{\pi}(s_w^{(i,j)}, g)}{v^{\pi}(s_w^{(i,j-1)}, g)} \times \frac{v^{\pi}(s_t, g_w^{(i,j)})}{v^{\pi}(s_t, g_w^{(i,j-1)})} \times \frac{\exp\!\big(r(s_w^{(i,j)}, g_w^{(i,j)})\big)}{\exp\!\big(r(s_w^{(i,j-1)}, g_w^{(i,j-1)})\big)}\right] \\
&+ \log\frac{p(s_w^{(i,j-1)}, g_w^{(i,j-1)} \mid s_w^{(i,j)}, g_w^{(i,j)}, s_t, g; h)}{p(s_w^{(i,j)}, g_w^{(i,j)} \mid s_w^{(i,j-1)}, g_w^{(i,j-1)}, s_t, g; h)} \quad (4)
\end{aligned}$$
-----
where the sequence of weights β_j satisfies 1 = β_0 > β_1 > . . . > β_η = 0. If ∆ ≤ 0, the subgoal pair at the j-th step is redefined as (s_w^(i,j−1), g_w^(i,j−1)); otherwise, (s_w^(i,j), g_w^(i,j)) is maintained. Intuitively, as the coefficient β_j approaches 0, the criterion increasingly emphasizes the comparison between the values of the two subgoals within the optimal distribution (Proposition 4.2), represented in logarithmic form. Specifically, v^π(s_w, g) and v^π(s, g_w) serve as proxies for p^π(g | s_w) and p^π(g_w | s), respectively. This comparison favors the subgoal that better aligns with the optimal distribution, thus incrementally steering the subgoal optimization towards more theoretically effective choices. The final term in ∆ acts as a regularization factor.
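The accept/reject decision in Eq. 4 can be sketched as follows; log_p_f and log_p_h stand for the (log-)likelihoods under the subgoal generator f and optimizer h, and v and r are the value and reward surrogates. All of them are assumed callables, so this is an illustrative transcription rather than the authors' code.

```python
import math

def accept_or_revert(prev, prop, s_t, g, beta_prev, log_p_f, log_p_h, v, r):
    """prev/prop are (s_w, g_w) pairs; return the pair kept at step j (Eq. 4)."""
    (s_p, g_p), (s_q, g_q) = prev, prop
    # first term: likelihood ratio under the subgoal generator f
    delta = beta_prev * (log_p_f(s_q, g_q, s_t, g) - log_p_f(s_p, g_p, s_t, g))
    # second term: value and reward comparison within the optimal distribution
    delta += (1.0 - beta_prev) * (
        math.log(v(s_q, g) / v(s_p, g))
        + math.log(v(s_t, g_q) / v(s_t, g_p))
        + (r(s_q, g_q) - r(s_p, g_p))
    )
    # regularization term: forward/backward proposal ratio under the optimizer h
    delta += log_p_h(s_p, g_p, s_q, g_q, s_t, g) - log_p_h(s_q, g_q, s_p, g_p, s_t, g)
    return prop if delta > 0 else prev
```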
The weight α^(i) associated with each subgoal pair (s_w^(i,η), g_w^(i,η)) is defined as follows:

$$\log \alpha^{(i)} = \sum_{j=1}^{\eta} \Big[(\beta_j - \beta_{j-1})\,\log p(s_w^{(i,j)}, g_w^{(i,j)} \mid s_t, g; f) + (\beta_{j-1} - \beta_j)\big(\log v^{\pi}(s_w^{(i,j)}, g) + \log v^{\pi}(s_t, g_w^{(i,j)}) + r(s_w^{(i,j)}, g_w^{(i,j)})\big)\Big] \quad (5)$$
Subsequently, the subgoal pair (s_w, g_w) is selected based on a softmax distribution over these weights, i.e., (s_w, g_w) ∼ Softmax(log α^(i)).
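A sketch of the weight accumulation in Eq. 5 and the final softmax draw is shown below, again with the same assumed surrogate callables; betas is the annealing schedule 1 = β_0 > . . . > β_η = 0.

```python
import math
import random

def accumulate_log_alpha(chain, s_t, g, betas, log_p_f, v, r):
    """chain: [(s_w^(i,1), g_w^(i,1)), ..., (s_w^(i,eta), g_w^(i,eta))] for one candidate i."""
    log_alpha = 0.0
    for j in range(1, len(betas)):            # j = 1, ..., eta
        s_w, g_w = chain[j - 1]
        log_alpha += (betas[j] - betas[j - 1]) * log_p_f(s_w, g_w, s_t, g)
        log_alpha += (betas[j - 1] - betas[j]) * (
            math.log(v(s_w, g)) + math.log(v(s_t, g_w)) + r(s_w, g_w)
        )
    return log_alpha

def select_subgoal(chains, log_alphas, seed=0):
    """Draw one refined subgoal pair with probability Softmax(log_alpha)."""
    m = max(log_alphas)
    weights = [math.exp(a - m) for a in log_alphas]   # numerically stable softmax weights
    return random.Random(seed).choices([c[-1] for c in chains], weights=weights, k=1)[0]
```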
**Trajectory Sampling and Component Training.** Upon obtaining a trajectory τ and the predicted subgoal pair (s_w, g_w), the subsequent procedure involves generating two new trajectories through the policy network. These trajectories, denoted as τ1 and τ2, are sampled from p(τ | s_t, g_w) and p(τ | s_w, g) (defined in Eq. 1), respectively. Within these trajectories, for each triplet (g, s_i, a_i), the policy network is trained to predict the action a_i given the state s_i and the goal g.

As a byproduct of this sequential optimization process, the average coefficient ᾱ = (1/N) Σ_{i=1}^{N} α^(i) acts as an unbiased estimator that correlates with the probability of successfully achieving the goal g from the state s_t when guided by the policy network π (see Proposition 4.3). Leveraging this byproduct, the value network is further trained to regress towards ᾱ, using the state s_t and the goal g as inputs.
**3.3** **SEGO: Sequential Subgoal Optimization**

After completing the initialization phase, our approach involves repeated cycles of subgoal collection, trajectory sampling, and component training. This procedure leads to the development of our final framework, SEGO, detailed in Algorithm 1.

**Remarks.** In this work, we concentrate on mathematical problem-solving, yet our proposed methodology serves as a universal framework for tackling a wide range of complex tasks that can be modeled as goal-conditioned reinforcement learning problems (see §2.1), including code generation (Chen et al., 2021) and commonsense reasoning (Clark et al., 2018). To do that, one only needs to customize the goal, action, and state space definitions to suit the task specifics and adjust prompts for trajectory generation and subgoal prediction using GPT-3.5-turbo, aligning them with the specific requirements of the task.

**4** **Theoretical Analysis**

We begin by constructing a lower bound on the probability of successfully solving the complete problem. This is done through the consideration of a proposal distribution focused on a specific subgoal. Letting p^{π(·|·,g)}(g | s) represent the probability of achieving a goal g from a state s under policy π(· | ·, g), we have the following proposition:

**Proposition 4.1.** *The objective defined below constitutes a lower bound on the probability of reaching the goal g from state s:*

$$\log p^{\pi(\cdot\mid\cdot,g)}(g \mid s) \;\geq\; \mathbb{E}_{q(g_w, s_w \mid g, s)}\Big[\log p^{\pi(\cdot\mid\cdot,g)}(g \mid s_w) + \log p^{\pi(\cdot\mid\cdot,g_w)}(g_w \mid s) + r(s_w, g_w) - \log q(s_w, g_w \mid s, g)\Big]. \quad (6)$$
We provide the proof in Appendix A.1. Next, we
derive the analytical solution for the optimal subgoal distribution and obtain the following proposition.
**Proposition 4.2. The optimal subgoal distribution**
_satisfies the following condition:_
$$q^{\star}(s_w, g_w \mid s, g) = \frac{p^{\pi(\cdot\mid\cdot,g)}(g \mid s_w)\, p^{\pi(\cdot\mid\cdot,g_w)}(g_w \mid s)\, \exp\!\big(r(s_w, g_w)\big)}{Z}, \quad (7)$$

where $Z = \iint p^{\pi(\cdot\mid\cdot,g)}(g \mid s'_w)\, p^{\pi(\cdot\mid\cdot,g'_w)}(g'_w \mid s)\, \exp\!\big(r(s'_w, g'_w)\big)\, \mathrm{d}s'_w\, \mathrm{d}g'_w$.
We provide the proof in Appendix A.2. Proposition 4.2 reveals that the optimal subgoal should not
only be reachable from the starting point but also
aid in ultimately reaching the final goal. We further investigate the ability of SEGO to provide an
unbiased estimate of Z. Inspired by annealed
-----
| Model | Base | Prompt | Params | GSM8K | MATH |
|---|---|---|---|---|---|
| GPT-4 (OpenAI, 2023) | - | CoT | - | 92.0 | 42.5 |
| PaLM-2 (Anil et al., 2023) | PaLM | CoT | 540B | 80.7 | 34.3 |
| Minerva (Lewkowycz et al., 2022) | PaLM | CoT | 540B | 58.8 | 33.6 |
| LLaMA2 (Touvron et al., 2023b) | LLaMA2 | CoT | 7B | 14.6 | 2.5 |
| LLaMA2 (Touvron et al., 2023b) | LLaMA2 | CoT | 13B | 28.7 | 3.9 |
| WizardMATH (Luo et al., 2023) | LLaMA2 | CoT | 7B | 54.9 | 10.7 |
| WizardMATH (Luo et al., 2023) | LLaMA2 | CoT | 13B | 63.9 | 14.0 |
| MetaMath (Yu et al., 2023) | LLaMA2 | CoT | 7B | 66.5 | 19.8 |
| MetaMath (Yu et al., 2023) | LLaMA2 | CoT | 13B | 72.3 | 22.4 |
| CodeLLaMA (Rozière et al., 2023) | CodeLLaMA | PoT | 7B | 25.2 | 14.2 |
| CodeLLaMA (Rozière et al., 2023) | CodeLLaMA | PoT | 13B | 36.1 | 18.1 |
| MAmmoTH-Coder (Yue et al., 2023) | CodeLLaMA | PoT | 7B | 59.4 | 33.4 |
| MAmmoTH-Coder (Yue et al., 2023) | CodeLLaMA | PoT | 13B | 64.7 | 36.3 |
| **SEGO (ours)** | CodeLLaMA | PoT | **7B** | **68.7** | **36.8** |
| **SEGO (ours)** | CodeLLaMA | PoT | **13B** | **72.5** | **40.0** |

Table 1: Evaluation results on GSM8K and MATH. “CoT” and “PoT” represent chain-of-thoughts (Wei et al., 2023) and program-of-thoughts (Chen et al., 2022), respectively.
importance sampling (Neal, 2001), we arrive at the
following proposition:
**Proposition 4.3.** *Let ᾱ be defined as ᾱ = (1/N) Σ_{i=1}^{N} α^(i), wherein each α^(i) adheres to the definition in Eq. 5. It follows that ᾱ constitutes an unbiased estimator of Z.*
We provide the full proof of the unbiasedness
in Appendix A.3. Proposition 4.3 reveals that the
training objective for the value network can be approximated as a proportional estimate of the probability of attaining the goal g from state s following
the current policy π.
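To build intuition for why an annealed construction can give an unbiased estimate of a normalizing constant, here is a tiny, self-contained 1-D demo of annealed importance sampling (Neal, 2001); it is a standalone illustration on a toy Gaussian target, not the estimator used inside SEGO.

```python
import math
import random

def ais_estimate_Z(n_chains=4000, eta=20, sigma=2.0, seed=0):
    """Proposal f0 = N(0, 1) (normalized, Z0 = 1); unnormalized target
    fT(x) = exp(-x^2 / (2*sigma^2)) with true normalizer sigma * sqrt(2*pi).
    The mean of the accumulated importance weights is an unbiased estimate of Z."""
    rng = random.Random(seed)
    betas = [j / eta for j in range(eta + 1)]          # 0 = beta_0 < ... < beta_eta = 1

    def log_f0(x):   # log density of the normalized proposal N(0, 1)
        return -0.5 * x * x - 0.5 * math.log(2 * math.pi)

    def log_fT(x):   # log of the unnormalized target
        return -0.5 * x * x / (sigma * sigma)

    def log_fj(x, b):  # geometric interpolation between proposal and target
        return (1 - b) * log_f0(x) + b * log_fT(x)

    total = 0.0
    for _ in range(n_chains):
        x = rng.gauss(0.0, 1.0)                        # exact sample from f_0
        log_w = 0.0
        for j in range(1, eta + 1):
            log_w += log_fj(x, betas[j]) - log_fj(x, betas[j - 1])
            # one Metropolis step leaving the intermediate distribution f_j invariant
            y = x + rng.gauss(0.0, 1.0)
            if math.log(rng.random() + 1e-12) < log_fj(y, betas[j]) - log_fj(x, betas[j]):
                x = y
        total += math.exp(log_w)
    return total / n_chains

print(ais_estimate_Z())   # should come out close to sigma * sqrt(2*pi) ~= 5.01
```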
**5** **Experiments**
**5.1** **Dataset and Evaluation**
**Evaluation and Training Data.** Our model is
evaluated using two datasets: GSM8K (Cobbe
et al., 2021) and MATH (Hendrycks et al., 2021).
GSM8K contains 8,792 math word problems for elementary students, with 1,319 reserved for testing. MATH, with 12,500 problems (including 5,000
for testing), focuses on advanced mathematics, featuring questions from competitions like the AMC
and AIME. Data preprocessing follows the methodologies in the original papers to ensure consistent
evaluation. We provide the details of the training
data in Appendix D.
**Evaluation Metric.** We evaluate by comparing
the results of the solution generated by the policy
network in SEGO to the provided correct answers
within the datasets. For evaluation, we report the
accuracy, specifically focusing on the rate at which
the policy network correctly solves the problems
on the first attempt.
**5.2** **Baselines**
Due to space constraints, details on the baselines
are available in Appendix D.
**5.3** **Main Results**
As indicated in Table 1, our key findings include:
(1) SEGO’s performance on the GSM8K and
MATH datasets is notable. SEGO (7B) achieves
68.7% accuracy on GSM8K and 36.8% on MATH,
while SEGO (13B) reaches 72.5% and 40.0%, respectively. These results surpass those of comparable models, underscoring SEGO’s effectiveness
in mathematical problem-solving; and (2) The integration of finetuning and the Program of Thought
(PoT) approach substantially enhances model performance, particularly in complex tasks. This is
evident in SEGO and MetaMath, where finetuning
aligns models with task specifics, and in comparisons involving CodeLLaMA and LLaMA2 on the
MATH dataset, showcasing PoT’s efficiency. Additionally, incorporating Sequential Subgoal Optimization into SEGO underlines the significance
of strategic planning in complex mathematical
problem-solving, resulting in notably improved accuracy.
-----
Figure 2: The balance between the number of sequences (N) and the length of sequences (η) on the test sets of GSM8K and MATH.
| Models | GSM8K | MATH |
|---|---|---|
| Ours | 68.7 | 36.8 |
| -Sequential | 61.3 | 34.9 |
| -Sequential & Subgoal | 57.1 | 32.6 |
| -Sequential & Subgoal & FT | 25.2 | 14.2 |

Table 2: Ablation study results on GSM8K and MATH datasets.

**6** **Analysis**

**6.1** **Ablation Study**

In our study, we conducted ablation experiments on 7B CodeLLaMA using SEGO and three variants to assess each component's impact: (1) **-Sequential**: the sequential subgoal optimization is omitted. (2) **-Sequential & Subgoal**: the subgoal-based finetuning is omitted. (3) **-Sequential & Subgoal & FT**: both subgoal-based finetuning and initialization fine-tuning are omitted. Results in Table 2 show the crucial role of sequential subgoal optimization in SEGO, with its absence in the -Sequential variant leading to reduced accuracy. The significant performance drop in the -Sequential & Subgoal & FT variant, comparable to the base 7B CodeLLaMA, highlights the collective value of all components in enhancing SEGO's mathematical problem-solving capabilities.

**6.2** **Analysis of Hyperparameters**

In this section, we conduct a detailed examination of the hyperparameters N and η, where N represents the number of sequences and η denotes the length of each sequence, as defined in Proposition 4.3. All the experiments in this section are anchored on the 7B CodeLLaMA to ensure consistency in the results.

**The balance between N and η.** We begin by exploring various combinations of N and η, illustrated in Figure 2, to comprehend the synergistic effects of these parameters on the model's performance. The results on GSM8K and MATH reveal that incrementing both N and η typically enhances the model's accuracy, achieving 68.7% on GSM8K and 36.8% on MATH at N = 2 and η = 3. However, the enhancements appear to stabilize beyond certain thresholds, indicating optimal points for these parameters.

**In-depth analysis of Hyperparameters N and η.** We further conduct an in-depth analysis of the hyperparameters N and η, examining each one's individual impact by holding one constant and varying the other. The results are illustrated in Figure 3. From the results, it is clear that when N = 2, the model achieves peak accuracy at η = 3 for both GSM8K and MATH, with no significant gains beyond this point. Similarly, with η = 3, optimal accuracy is reached at N = 2, remaining stable thereafter.

**6.3** **Analysis of Subgoal Evolution**

**Validity and Progression of Subgoals.** To deepen our understanding of subgoals during the Reinforcement Learning phase, we analyze the evolution of subgoal validity and its correlation with the performance on the test set. A subgoal (i.e., g_w and s_w) is deemed valid if both τ1 and τ2, sampled with policies π(· | s_w, g) and π(· | s, g_w), yield correct solutions for goals g and g_w respectively. Our findings, illustrated in Figure 4 (Left), reveal a positive correlation between the progression of training steps and the percentage of valid subgoals. This increase in valid subgoals is paralleled by improvements in accuracy on both GSM8K and MATH
-----
Figure 3: Analysis of model accuracy for variations of N and η. Left: Fixed N = 2 and various η; Right: Fixed η = 3 and various N.
Figure 4: Left: Changes in the percentage of valid subgoals during the RL training. Right: Changes in hardness of problems yielding valid subgoals.
datasets, suggesting that the validity of subgoals is a crucial factor in enhancing the model's problem-solving capabilities.
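A sketch of this validity check is given below; rollout and is_correct are assumed helpers for sampling a solution trajectory and grading it against a goal.

```python
def subgoal_is_valid(policy, s, g, s_w, g_w, rollout, is_correct):
    """A subgoal (s_w, g_w) is counted as valid iff both legs succeed:
    reaching g_w from the original state s, and reaching g from the subgoal state s_w."""
    tau1 = rollout(policy, start=s, target=g_w)
    tau2 = rollout(policy, start=s_w, target=g)
    return is_correct(tau1, g_w) and is_correct(tau2, g)
```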
**Hardness of Problems Yielding Valid Subgoals.**
To further our understanding of subgoals, we delve
into the relationship between the hardness of problems and the emergence of valid subgoals. This
analysis aims to reveal any trends in the difficulty
of problems that tend to yield valid subgoals, providing insights into the learning progression. The
hardness of each problem is labeled by ChatGPT,
with more details available in Appendix E. The
results, shown in Figure 4 (Right), reveal a correlation between training progression and the model’s
ability to formulate valid subgoals for increasingly
intricate problems, underscoring its evolving sophistication and adaptability in problem-solving.
**7** **Related Works**
**Mathematical Reasoning with LLMs.** Large Language Models' (LLMs) advancement in mathematical reasoning is largely driven by datasets like GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021), with additional resources like MAWPS (Koncel-Kedziorski et al., 2016) and MWPToolkit (Lan et al., 2022) enhancing the field. Research focuses on two main areas: prompting strategies, involving techniques like Chain-of-Thought (Wei et al., 2023), Progressive-Hint Prompting (Zheng et al., 2023), and bi-modal behavioral alignment (Zhao et al., 2024), and learning with verifications, using methods like outcome-based verifiers (Cobbe et al., 2021). Our approach, orthogonal to these methods, emphasizes adaptive curricula with subgoals to improve LLMs' mathematical reasoning. Concurrently, MAmmoTH (Yue et al., 2023) explores instruction finetuning in LLMs for math problem-solving, a concept related to our strategy. This can be considered as an implementation of the instruction finetuning stage within our framework.

**Subgoal-based RL.** In reinforcement learning, Subgoal Search is crucial for navigating complex tasks, offering insights into subgoal benefits (Zhai et al., 2022), hierarchical structures (Wen et al., 2020), option selection (Jinnai et al., 2019a), and temporal abstraction (Fruit et al., 2017). Research focuses on exploring efficient strategies (Jinnai et al., 2019b; Hartikainen et al., 2019; Pitis et al., 2020; OpenAI et al., 2021) and enhancing planning through various algorithms (Eysenbach et al., 2019; Parascandolo et al., 2020; Li et al., 2022; Moro et al., 2022; Chane-Sane et al., 2021). It also develops curricula for complex subgoals (Zhang et al., 2020, 2021).
-----
Our work addresses subgoal learning in mathematical problem-solving, exploring optimal subgoal identification within expansive state spaces. Owing to space constraints, a detailed discussion of related works is provided in Appendix F.
**8** **Conclusion**
In conclusion, this work presents SEGO, an innovative framework aimed at improving LLMs’
mathematical problem-solving abilities. Drawing
inspiration from subgoal-based RL, SEGO establishes a theoretical link between subgoal decomposition and the probability of solving problems. It
enhances LLMs’ performance by generating and
refining problem-specific subgoals using theoretically defined criteria. Empirical evaluations on
benchmark datasets GSM8K and MATH demonstrate SEGO’s ability to outperform existing approaches of comparable model sizes.
**Ethical Considerations**
In accordance with the established Code of Ethics,
this research exclusively utilizes data and information that is publicly accessible, thereby ensuring
that no private or confidential resources are engaged.
**Limitations**
While SEGO represents a significant advancement
in the realm of mathematical problem-solving, several limitations need further investigation to fully
harness its potential. These limitations include aspects such as the efficiency of SEGO, the scope
of problem difficulty it addresses, and potential
framework extensions:
(1) While SEGO demonstrates enhanced efficacy
in identifying subgoals compared to non-sequential
methods, there is room for improvement in efficiency. This can be addressed through dynamic
resource allocation, such as adjusting the annealing
schedule in response to performance metrics or the
complexity of the problem at hand, alongside the
deployment of more sophisticated proposal distribution mechanisms that more accurately mirror the
target distribution.
(2) Our evaluation benchmarks predominantly
include elementary to middle school-level problems. Exploring more complex problems, such as
those at the undergraduate level, is a promising
future direction.
(3) In the current SEGO framework, only the
policy network is retained during inference. An
intriguing future direction involves integrating the
subgoal generator/optimizer and the value network
to recursively decompose complex problems into
simpler subgoals.
**Acknowledgements**
We would like to thank the HKU NLP group and
the anonymous reviewers for their helpful suggestions, which greatly improved this work. We especially appreciated the valuable discussions with
Shansan Gong. This work is partially supported by
the joint research scheme of the National Natural
Science Foundation of China (NSFC) and the Research Grants Council (RGC) under grant number
N_HKU714/21.
**References**
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak
Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng
Chen, et al. 2023. Palm 2 technical report. arXiv
_preprint arXiv:2305.10403._
Elliot Chane-Sane, Cordelia Schmid, and Ivan Laptev.
2021. Goal-conditioned reinforcement learning with
imagined subgoals. In International Conference on
_Machine Learning, pages 1430–1440. PMLR._
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming
Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph,
Greg Brockman, et al. 2021. Evaluating large
language models trained on code. arXiv preprint
_arXiv:2107.03374._
Wenhu Chen, Xueguang Ma, Xinyi Wang, and
William W Cohen. 2022. Program of thoughts
prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint
_arXiv:2211.12588._
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot,
Ashish Sabharwal, Carissa Schoenick, and Oyvind
Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv
_preprint arXiv:1803.05457._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, et al. 2021. Training verifiers to solve math
word problems. arXiv preprint arXiv:2110.14168.
Iddo Drori, Sarah Zhang, Reece Shuttleworth, Leonard
Tang, Albert Lu, Elizabeth Ke, Kevin Liu, Linda
Chen, Sunny Tran, Newman Cheng, et al. 2022. A
neural network solves, explains, and generates university math problems by program synthesis and fewshot learning at human level. Proceedings of the Na_tional Academy of Sciences, 119(32):e2123433119._
Ben Eysenbach, Russ R Salakhutdinov, and Sergey
Levine. 2019. Search on the replay buffer: Bridging
planning and reinforcement learning. Advances in
_Neural Information Processing Systems, 32._
Ronan Fruit, Matteo Pirotta, Alessandro Lazaric, and
Emma Brunskill. 2017. Regret minimization in mdps
with options without prior knowledge. Advances in
_Neural Information Processing Systems, 30._
Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and
[Tushar Khot. 2023. Complexity-based prompting for](http://arxiv.org/abs/2210.00720)
[multi-step reasoning.](http://arxiv.org/abs/2210.00720)
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon,
Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Pal: Program-aided language
models. In International Conference on Machine
_Learning, pages 10764–10799. PMLR._
Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot,
[Dan Roth, and Jonathan Berant. 2021. Did aristotle](https://doi.org/10.1162/tacl_a_00370)
[use a laptop? a question answering benchmark with](https://doi.org/10.1162/tacl_a_00370)
[implicit reasoning strategies. Transactions of the](https://doi.org/10.1162/tacl_a_00370)
_Association for Computational Linguistics, 9:346–_
361.
Kristian Hartikainen, Xinyang Geng, Tuomas Haarnoja,
and Sergey Levine. 2019. Dynamical distance learning for semi-supervised and unsupervised skill discovery. arXiv preprint arXiv:1907.08225.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the math dataset. arXiv preprint
_arXiv:2103.03874._
Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and
Yejin Choi. 2019. The curious case of neural text
degeneration. arXiv preprint arXiv:1904.09751.
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan
Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang,
and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. _arXiv preprint_
_arXiv:2106.09685._
Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego
de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral
7b. arXiv preprint arXiv:2310.06825.
Yuu Jinnai, David Abel, David Hershkowitz, Michael
Littman, and George Konidaris. 2019a. Finding options that minimize planning time. In International
_Conference on Machine Learning, pages 3120–3129._
PMLR.
Yuu Jinnai, Jee Won Park, David Abel, and George
Konidaris. 2019b. Discovering options for exploration by minimizing cover time. In International
_Conference on Machine Learning, pages 3130–3139._
PMLR.
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate
[Kushman, and Hannaneh Hajishirzi. 2016. MAWPS:](https://doi.org/10.18653/v1/N16-1136)
[A math word problem repository. In Proceedings of](https://doi.org/10.18653/v1/N16-1136)
_the 2016 Conference of the North American Chapter_
_of the Association for Computational Linguistics: Hu-_
_man Language Technologies, pages 1152–1157, San_
Diego, California. Association for Computational
Linguistics.
Yihuai Lan, Lei Wang, Qiyuan Zhang, Yunshi Lan,
Bing Tian Dai, Yan Wang, Dongxiang Zhang, and
Ee-Peng Lim. 2022. Mwptoolkit: an open-source
framework for deep learning-based math word problem solvers. In Proceedings of the AAAI Conference
_on Artificial Intelligence, volume 36, pages 13188–_
13190.
Aitor Lewkowycz, Anders Andreassen, David Dohan,
Ethan Dyer, Henryk Michalewski, Vinay Ramasesh,
Ambrose Slone, Cem Anil, Imanol Schlag, Theo
Gutman-Solo, et al. 2022. Solving quantitative
reasoning problems with language models. arXiv
_preprint arXiv:2206.14858._
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen,
[Jian-Guang Lou, and Weizhu Chen. 2023. Making](http://arxiv.org/abs/2206.02336)
[large language models better reasoners with step-](http://arxiv.org/abs/2206.02336)
[aware verifier.](http://arxiv.org/abs/2206.02336)
Yunfei Li, Tian Gao, Jiaqi Yang, Huazhe Xu, and
Yi Wu. 2022. Phasic self-imitative reduction for
sparse-reward goal-conditioned reinforcement learning. In International Conference on Machine Learn_ing, pages 12765–12781. PMLR._
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri
Edwards, Bowen Baker, Teddy Lee, Jan Leike,
John Schulman, Ilya Sutskever, and Karl Cobbe.
2023. Let’s verify step by step. _arXiv preprint_
_arXiv:2305.20050._
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word
problems. ACL.
Bo Liu, Yuqian Jiang, Xiaohan Zhang, Qiang Liu,
Shiqi Zhang, Joydeep Biswas, and Peter Stone.
2023. Llm+ p: Empowering large language models with optimal planning proficiency. arXiv preprint
_arXiv:2304.11477._
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei
Lin, Shifeng Chen, and Dongmei Zhang. 2023. Wizardmath: Empowering mathematical reasoning for
large language models via reinforced evol-instruct.
_arXiv preprint arXiv:2308.09583._
Lorenzo Moro, Amarildo Likmeta, Enrico Prati, Marcello Restelli, et al. 2022. Goal-directed planning via
hindsight experience replay. In International Confer_ence on Learning Representations, pages 1–16._
Radford M Neal. 2001. Annealed importance sampling.
_Statistics and computing, 11:125–139._
Ansong Ni, Jeevana Priya Inala, Chenglong Wang, Alex
Polozov, Christopher Meek, Dragomir Radev, and
Jianfeng Gao. 2023. Learning math reasoning from
self-sampled correct and partially-correct solutions.
In The Eleventh International Conference on Learn_ing Representations._
[OpenAI. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.](http://arxiv.org/abs/2303.08774)
OpenAI OpenAI, Matthias Plappert, Raul Sampedro,
Tao Xu, Ilge Akkaya, Vineet Kosaraju, Peter Welinder, Ruben D’Sa, Arthur Petron, Henrique P d O
Pinto, et al. 2021. Asymmetric self-play for automatic goal discovery in robotic manipulation. arXiv
_preprint arXiv:2101.04882._
Giambattista Parascandolo, Lars Buesing, Josh Merel,
Leonard Hasenclever, John Aslanides, Jessica B
Hamrick, Nicolas Heess, Alexander Neitz, and Theophane Weber. 2020. Divide-and-conquer monte carlo
tree search for goal-directed planning. arXiv preprint
_arXiv:2004.11410._
Silviu Pitis, Harris Chan, Stephen Zhao, Bradly Stadie,
and Jimmy Ba. 2020. Maximum entropy gain exploration for long horizon multi-goal reinforcement
learning. In International Conference on Machine
_Learning, pages 7750–7761. PMLR._
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten
Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi,
Jingyu Liu, Tal Remez, Jérémy Rapin, et al. 2023.
Code llama: Open foundation models for code. arXiv
_preprint arXiv:2308.12950._
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and
Jonathan Berant. 2018. Commonsenseqa: A question
answering challenge targeting commonsense knowledge. arXiv preprint arXiv:1811.00937.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier
Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal
Azhar, et al. 2023a. Llama: Open and efficient foundation language models. arXiv preprint
_arXiv:2302.13971._
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint_
_arXiv:2307.09288._
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le,
Ed Chi, Sharan Narang, Aakanksha Chowdhery, and
Denny Zhou. 2022. Self-consistency improves chain
of thought reasoning in language models. _arXiv_
_preprint arXiv:2203.11171._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and
[Denny Zhou. 2023. Chain-of-thought prompting elic-](http://arxiv.org/abs/2201.11903)
[its reasoning in large language models.](http://arxiv.org/abs/2201.11903)
Zheng Wen, Doina Precup, Morteza Ibrahimi, Andre
Barreto, Benjamin Van Roy, and Satinder Singh.
2020. On efficiency in hierarchical reinforcement
learning. Advances in Neural Information Process_ing Systems, 33:6708–6718._
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak
Shafran, Karthik Narasimhan, and Yuan Cao. 2022.
React: Synergizing reasoning and acting in language
models. arXiv preprint arXiv:2210.03629.
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu,
Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. 2023.
Metamath: Bootstrap your own mathematical questions for large language models. _arXiv preprint_
_arXiv:2309.12284._
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen.
2023. Mammoth: Building math generalist models
through hybrid instruction tuning. arXiv preprint
_arXiv:2309.05653._
Yuexiang Zhai, Christina Baek, Zhengyuan Zhou,
Jiantao Jiao, and Yi Ma. 2022. Computational benefits of intermediate rewards for goal-reaching policy
learning. Journal of Artificial Intelligence Research,
73:847–896.
Tianjun Zhang, Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine, and Joseph E Gonzalez.
2021. C-planning: An automatic curriculum
for learning goal-reaching tasks. _arXiv preprint_
_arXiv:2110.12080._
Yunzhi Zhang, Pieter Abbeel, and Lerrel Pinto. 2020.
Automatic curriculum learning through value disagreement. Advances in Neural Information Pro_cessing Systems, 33:7648–7659._
Xueliang Zhao, Xinting Huang, Tingchen Fu, Qintong
Li, Shansan Gong, Lemao Liu, Wei Bi, and Lingpeng
Kong. 2024. Bba: Bi-modal behavioral alignment for
reasoning with large vision-language models. arXiv
_preprint arXiv:2402.13577._
Xueliang Zhao, Wenda Li, and Lingpeng Kong. 2023.
Decomposing the enigma: Subgoal-based demonstration learning for formal theorem proving. arXiv
_preprint arXiv:2305.16366._
Chuanyang Zheng, Zhengying Liu, Enze Xie, Zhenguo
Li, and Yu Li. 2023. Progressive-hint prompting
improves reasoning in large language models. arXiv
_preprint arXiv:2304.09797._
Denny Zhou, Nathanael Scharli, Le Hou, Jason Wei,
Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Olivier Bousquet, Quoc Le, and Ed Huai hsin
[Chi. 2022. Least-to-most prompting enables com-](https://api.semanticscholar.org/CorpusID:248986239)
[plex reasoning in large language models.](https://api.semanticscholar.org/CorpusID:248986239) _ArXiv,_
abs/2205.10625.
**A** **Proofs**
**A.1** **Proof of proposition 4.1**
In this subsection, we establish the proof of Proposition 4.1.

*Proof.* We start by considering the joint distribution $p(g, s_w, g_w \mid s)$, which can be factorized as

$$p^{\pi(\cdot\mid\cdot,g)}(g \mid s_w)\, p^{\pi(\cdot\mid\cdot,g_w)}(g_w \mid s)\, p(s_w \mid g_w).$$

The log-likelihood of reaching the goal $g$ from $s$ can be expressed as:

$$\log p^{\pi(\cdot\mid\cdot,g)}(g \mid s) = \log \mathbb{E}_{q(g_w, s_w \mid g, s)}\left[\frac{p(g, s_w, g_w \mid s)}{q(g_w, s_w \mid g, s)}\right].$$

Expanding the expectation, we get:

$$\log p^{\pi(\cdot\mid\cdot,g)}(g \mid s) = \log \iint q(g_w, s_w \mid g, s)\, \frac{p(g, s_w, g_w \mid s)}{q(g_w, s_w \mid g, s)}\, dg_w\, ds_w.$$

Utilizing Jensen's inequality, we establish a lower bound for the log-likelihood as follows:

$$\log p^{\pi(\cdot\mid\cdot,g)}(g \mid s) \geq \mathbb{E}_{q(g_w, s_w \mid g, s)}\Big[\log p^{\pi(\cdot\mid\cdot,g)}(g \mid s_w) + \log p^{\pi(\cdot\mid\cdot,g_w)}(g_w \mid s) + \log p(s_w \mid g_w) - \log q(g_w, s_w \mid g, s)\Big].$$

Given that $\log p(s_w \mid g_w) = r(s_w, g_w) - \log \sum_{s'_w} \exp(r(s'_w, g_w))$ and that $\log \sum_{s'_w} \exp(r(s'_w, g_w))$ can be absorbed into the lower bound as a constant term, which does not affect the optimization process, the lower bound $\mathcal{L}$ can be written as:

$$\mathcal{L} = \mathbb{E}_{q(g_w, s_w \mid g, s)}\Big[\log p^{\pi(\cdot\mid\cdot,g)}(g \mid s_w) + \log p^{\pi(\cdot\mid\cdot,g_w)}(g_w \mid s) + r(g_w, s_w) - \log q(g_w, s_w \mid g, s)\Big] \qquad (8)$$

This completes the proof of Proposition 4.1. The underlying premise of this approach is predicated on the assumption that the ratio of exponentiated rewards, $\frac{\exp(r(s, g))}{\exp(r(s', g))}$, is equivalent to the ratio of the probabilities $\frac{p(s \mid g)}{p(s' \mid g)}$. In essence, this implies that the reward function $r(s, g)$ is directly proportional to the conditional probability $p(s \mid g)$.
**A.2** **Proof of proposition 4.2**
In this subsection, we establish the proof of Proposition 4.2.

*Proof.* The optimization objective for finding $q(g_w, s_w \mid g, s)$ is:

$$\mathbb{E}_{q(g_w, s_w \mid g, s)}\Big[\log p^{\pi(\cdot\mid\cdot,g)}(g \mid s_w) + \log p^{\pi(\cdot\mid\cdot,g_w)}(g_w \mid s) + r(g_w, s_w) - \log q(g_w, s_w \mid g, s)\Big].$$

Introducing a Lagrange multiplier $\lambda$, the Lagrangian $\mathcal{J}$ is constructed as:

$$\mathcal{J} = \mathbb{E}_{q(g_w, s_w \mid g, s)}\Big[\log p^{\pi(\cdot\mid\cdot,g)}(g \mid s_w) + \log p^{\pi(\cdot\mid\cdot,g_w)}(g_w \mid s) + r(g_w, s_w) - \log q(g_w, s_w \mid g, s)\Big] + \lambda\left(\int q(g_w, s_w \mid g, s)\, dg_w\, ds_w - 1\right).$$

Differentiating $\mathcal{J}$ with respect to $q(g_w, s_w \mid g, s)$ and setting it to zero yields:

$$\log p^{\pi(\cdot\mid\cdot,g)}(g \mid s_w) + \log p^{\pi(\cdot\mid\cdot,g_w)}(g_w \mid s) + r(g_w, s_w) - \log q(g_w, s_w \mid g, s) - 1 + \lambda = 0.$$

Simplifying, we get:

$$q(g_w, s_w \mid g, s) = \exp(\lambda - 1)\, p^{\pi(\cdot\mid\cdot,g)}(g \mid s_w)\, p^{\pi(\cdot\mid\cdot,g_w)}(g_w \mid s)\, \exp(r(g_w, s_w)).$$

To ensure $q(g_w, s_w \mid g, s)$ is a valid probability distribution, it is normalized as:

$$q^{\star}(g_w, s_w \mid g, s) = \frac{p^{\pi(\cdot\mid\cdot,g)}(g \mid s_w)\, p^{\pi(\cdot\mid\cdot,g_w)}(g_w \mid s)\, \exp(r(g_w, s_w))}{\displaystyle\iint p^{\pi(\cdot\mid\cdot,g)}(g \mid s'_w)\, p^{\pi(\cdot\mid\cdot,g'_w)}(g'_w \mid s)\, \exp(r(g'_w, s'_w))\, dg'_w\, ds'_w}.$$

The denominator serves as the normalizing constant, ensuring that $q^{\star}(g_w, s_w \mid g, s)$ sums to one over its domain, thereby satisfying the properties of a probability distribution.

This concludes the proof.
**A.3** **Proof of proposition 4.3**
For the sake of clarity, we define $\omega$ as the tuple $(g_w, s_w)$ and use $q^{\star}(\omega)$ as shorthand for $q^{\star}(g_w, s_w \mid g, s_0)$. To rigorously prove this proposition, we define a series of functions and transition operators:

**Definition 1.** We introduce $f_j(\cdot)$ for $j \in \{0, \ldots, \eta\}$ as a weighted blend of $f_\eta(\cdot)$ and $p(\cdot \mid s, g; f)$, given by $f_j(\omega) = f_\eta(\omega)^{1-\beta_j}\, p(\omega \mid s, g; f)^{\beta_j}$. The sequence of weights $\beta_j$ satisfies $1 = \beta_0 > \beta_1 > \ldots > \beta_\eta = 0$. Specifically, $f_\eta(\omega)$ satisfies $\frac{f_\eta(\omega)}{Z_f} = q^{\star}(\omega)$, where $Z_f$ is the normalizing constant.

**Definition 2.** Let $T_j(\omega, \omega')$ for $j \in \{1, \ldots, \eta - 1\}$ denote a transition operator, formulated as

$$T_j(\omega, \omega') = p(\omega' \mid \omega, s, g; h)\, \min\left\{1, \frac{f_j(\omega')\, p(\omega \mid \omega', s, g; h)}{f_j(\omega)\, p(\omega' \mid \omega, s, g; h)}\right\}.$$

Then the process of sequentially sampling subgoals is defined as follows:

**Definition 3.** Let the process start with the sampling of $\omega_1$ from $f_0(\cdot)$. Sequentially, $\omega_2$ is derived from $\omega_1$ via the transition operator $T_1$, perpetuating this mechanism until $\omega_\eta$ is obtained from $\omega_{\eta-1}$ through $T_{\eta-1}$. The joint distribution probability is articulated as $\frac{g(\omega_1, \ldots, \omega_\eta)}{Z_g}$, wherein $g(\omega_1, \ldots, \omega_\eta) = f_0(\omega_1)\, T_1(\omega_1, \omega_2) \cdots T_{\eta-1}(\omega_{\eta-1}, \omega_\eta)$ and $Z_g$ is the normalization constant.

Finally, the weight $\alpha$ for each sequence is given by $\alpha = \prod_{j=1}^{\eta} \frac{f_j(\omega_j)}{f_{j-1}(\omega_j)}$.
To establish the validity of the proposition, we begin by proving the essential lemmas:
**Lemma 1.** Let $f_j(\omega)$ and $T_j(\omega, \omega')$ be as specified in Definition 2. Define $p_j(\omega)$ as

$$p_j(\omega) = \frac{f_j(\omega)}{\int f_j(\omega')\, d\omega'}.$$

Then, the following detailed balance condition holds:

$$p_j(\omega)\, T_j(\omega, \omega') = p_j(\omega')\, T_j(\omega', \omega).$$

*Proof.* The proof can be divided into two cases:

**Case 1:** $p_j(\omega')\, p(\omega \mid \omega', s, g; h) > p_j(\omega)\, p(\omega' \mid \omega, s, g; h)$. Starting with $p_j(\omega')\, T_j(\omega', \omega)$, we have:

$$p_j(\omega')\, T_j(\omega', \omega) = p_j(\omega')\, p(\omega \mid \omega', s, g; h)\, \frac{p_j(\omega)\, p(\omega' \mid \omega, s, g; h)}{p_j(\omega')\, p(\omega \mid \omega', s, g; h)} = p_j(\omega)\, p(\omega' \mid \omega, s, g; h) = p_j(\omega)\, T_j(\omega, \omega').$$

**Case 2:** $p_j(\omega')\, p(\omega \mid \omega', s, g; h) \leq p_j(\omega)\, p(\omega' \mid \omega, s, g; h)$. Starting with $p_j(\omega)\, T_j(\omega, \omega')$, we have:

$$p_j(\omega)\, T_j(\omega, \omega') = p_j(\omega)\, p(\omega' \mid \omega, s, g; h)\, \frac{p_j(\omega')\, p(\omega \mid \omega', s, g; h)}{p_j(\omega)\, p(\omega' \mid \omega, s, g; h)} = p_j(\omega')\, p(\omega \mid \omega', s, g; h) = p_j(\omega')\, T_j(\omega', \omega).$$

In both cases, we find that $p_j(\omega)\, T_j(\omega, \omega') = p_j(\omega')\, T_j(\omega', \omega)$, thereby proving the lemma.
**Lemma 2.** Let $f_j(\omega)$ and $T_j(\omega, \omega')$ be as defined in Definition 2. Define the normalized distribution $p_j(\omega)$ as

$$p_j(\omega) = \frac{f_j(\omega)}{\int f_j(\omega')\, d\omega'}.$$

Then, $T_j(\omega, \omega')$ preserves the invariance of $p_j(\omega)$, formally defined as

$$\int T_j(\omega', \omega)\, p_j(\omega')\, d\omega' = p_j(\omega).$$

*Proof.* We proceed by leveraging the results from Lemma 1. Specifically, we have:

$$\int T_j(\omega', \omega)\, p_j(\omega')\, d\omega' = \int T_j(\omega, \omega')\, p_j(\omega)\, d\omega' = p_j(\omega) \int T_j(\omega, \omega')\, d\omega'.$$

Given that $\int T_j(\omega, \omega')\, d\omega' = 1$, we have $\int T_j(\omega', \omega)\, p_j(\omega')\, d\omega' = p_j(\omega)$. This confirms that $T_j(\omega, \omega')$ preserves the invariance of $p_j(\omega)$, thereby proving Lemma 2.
Now we give the proof of Proposition 4.3.

*Proof.* We first define the function $f$ as follows:

$$f(\omega_1, \ldots, \omega_\eta) = \frac{f_\eta(\omega_\eta)}{f_{\eta-1}(\omega_\eta)}\, T_{\eta-1}(\omega_{\eta-1}, \omega_\eta) \cdots \frac{f_2(\omega_2)}{f_1(\omega_2)}\, T_1(\omega_1, \omega_2)\, f_1(\omega_1).$$

Given the definition of $Z_f$, we have

$$Z_f = \int f_\eta(\omega)\, d\omega.$$

By Lemma 2, we have:

$$\int T_j(\omega_j, \omega_{j+1})\, f_j(\omega_j)\, d\omega_j = f_j(\omega_{j+1}).$$

Thus, we can write:

$$\int \cdots \int \frac{f(\omega_1, \ldots, \omega_\eta)}{Z_f}\, d\omega_1 \cdots d\omega_\eta = \int \frac{f_\eta(\omega_\eta)}{Z_f}\, d\omega_\eta \int \frac{T_{\eta-1}(\omega_{\eta-1}, \omega_\eta)\, f_{\eta-1}(\omega_{\eta-1})}{f_{\eta-1}(\omega_\eta)}\, d\omega_{\eta-1} \cdots \int \frac{T_1(\omega_1, \omega_2)\, f_1(\omega_1)}{f_1(\omega_2)}\, d\omega_1 = \int \frac{f_\eta(\omega_\eta)}{Z_f}\, d\omega_\eta = 1.$$

This implies that $Z_f$ is also the normalizing constant of $f(\omega_1, \ldots, \omega_\eta)$. Since $f_0(\cdot)$ is a distribution, it is evident that $Z_g = 1$.

We have:

$$\mathbb{E}_{g(\omega_1, \ldots, \omega_\eta)}\left[\frac{1}{N}\sum_{i=1}^{N}\alpha^{(i)}\right] = \mathbb{E}_{g(\omega_1, \ldots, \omega_\eta)}\left[\frac{f(\omega_1, \ldots, \omega_\eta)}{g(\omega_1, \ldots, \omega_\eta)}\right] = Z_f \int \cdots \int \frac{f(\omega_1, \ldots, \omega_\eta)}{Z_f}\, d\omega_1 \cdots d\omega_\eta = Z_f.$$

This concludes the proof of Proposition 4.3.
**B** **More Implementation Details for Each Module**
The framework of SEGO is composed of five components, each serving a distinct purpose to enhance the
system’s overall efficacy.
**B.1** **Policy Network**
The policy network π(a | s, g) takes as input the current state and intended goal and returns an action.
Since the goal and the state can both be expressed as token sequences, we first concatenate these sequences
before feeding them into the policy network. This network is tasked with predicting the subsequent
action, also framed as a token sequence, utilizing standard decoding techniques like greedy search or
top-k sampling (Holtzman et al., 2019).
The training of the policy network is conducted through instruction finetuning, utilizing the following
instruction template:
Construct a Python script to address the given problem:
{problem}
### Response:
{solution}
In this template, problem and solution represent the goal g and the trajectory respectively. The
base model for this process is CodeLLaMA, and it undergoes full parameter finetuning to optimize its
performance. As the sequential subgoal optimization process progresses, the model is further trained by
utilizing self-generated successful trajectories. This prompt template is also employed to generate the
trajectory dataset using gpt-3.5-turbo-0613.
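For concreteness, the following minimal sketch shows how this instruction template can be filled in and decoded with greedy search or top-k sampling via the Hugging Face transformers API; the checkpoint name is a placeholder rather than the exact fine-tuned policy used here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "codellama/CodeLlama-7b-hf"  # placeholder, not the exact fine-tuned policy

PROMPT_TEMPLATE = (
    "Construct a Python script to address the given problem:\n"
    "{problem}\n"
    "### Response:\n"
)

def generate_trajectory(problem: str, greedy: bool = True) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
    inputs = tokenizer(PROMPT_TEMPLATE.format(problem=problem), return_tensors="pt")

    gen_kwargs = {"max_new_tokens": 512}
    if greedy:
        gen_kwargs["do_sample"] = False              # greedy search
    else:
        gen_kwargs.update(do_sample=True, top_k=50)  # top-k sampling

    outputs = model.generate(**inputs, **gen_kwargs)
    completion = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(completion, skip_special_tokens=True)
```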
**B.2** **Subgoal Generator**
The subgoal generator, represented as f, aims to decompose a complex task into two more manageable
sub-tasks. It works by taking the current state s and goal g, and outputting a pair consisting of a subgoal
and its corresponding state: sw, gw = f (s, g). This approach ensures that both the journey from the
current state to the subgoal and from the subgoal state to the intended goal become more tractable
sub-tasks. Crucially, the subgoal state sw is a valid solution of the subgoal gw, adhering to the premise
that a state is an aggregation of actions, each representing a step in the solution process.
The subgoal generator is trained through instruction finetuning, utilizing data collected from
gpt-3.5-turbo-0613. The instruction template is defined as:
Break down the given problem into a smaller task (a subproblem)
and devise a method to solve it, considering a provided partial
solution to the original problem as a starting point.
### Input:
{problem}
{partial solution}
### Output:
{subproblem}{solution}[EOS]
This module, fundamentally built on the architecture of CodeLLaMA (Rozière et al., 2023), leverages
the capabilities of LoRA (Hu et al., 2021) for efficient finetuning. The primary objective is to accurately
predict {subproblem}{solution}[EOS] from its preceding context, realized through causal
language modeling. This prompt template is also utilized to predict both the subgoal and the corresponding
state using gpt-3.5-turbo-0613.
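Since only the {subproblem}{solution}[EOS] continuation is predicted from its preceding context, one standard way to realize this causal-LM objective is to mask the prompt tokens out of the labels. The sketch below is an illustration under common Hugging Face conventions (the -100 ignore index), not the authors' exact training code:

```python
import torch

def build_training_example(tokenizer, prompt: str, target: str) -> dict:
    """Concatenate prompt and target, masking prompt tokens so the causal-LM
    loss is computed only on the target (subproblem + solution) span."""
    prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    # Assumes the tokenizer defines an EOS token corresponding to [EOS].
    target_ids = tokenizer(target + tokenizer.eos_token, add_special_tokens=False)["input_ids"]

    input_ids = prompt_ids + target_ids
    labels = [-100] * len(prompt_ids) + target_ids  # -100 is ignored by the loss

    return {"input_ids": torch.tensor(input_ids), "labels": torch.tensor(labels)}
```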
**B.3** **Subgoal Optimizer**
The subgoal optimizer, denoted as h, is designed to refine subgoal gw and its corresponding state sw. Its objective is to yield an improved subgoal g′w and state s′w that more effectively contribute to decomposing the overall intended goal: s′w, g′w = h(sw, gw, s, g). This component incorporates both the current state s and the intended goal g as inputs, providing insights into the complexity of the intended goal and the current status in the problem-solving process.
The subgoal optimizer is also trained through instruction finetuning, drawing upon data from
gpt-3.5-turbo-0613. The instruction template for this module is as follows:
Optimize the given subproblem to make it more manageable. Then,
develop a method to solve it, considering a provided partial solution
to the original problem as a starting point.
### Input:
{problem}
{partial solution}
{subproblem}{solution}
### Output:
{optimized subproblem}{optimized solution}[EOS]
This module, also built on CodeLLaMA, utilizes LoRA for efficient parameter finetuning. The aim here
is to accurately predict {optimized subproblem}{optimized solution}[EOS] from the
provided context, ensuring the outputs are coherent and contextually aligned.
**B.4** **Reward Network**
The reward network, formulated as r(s, g), accepts a state s and a goal g as inputs and produces a score
to evaluate whether the goal has been achieved in the current state. In mathematical problem-solving, this
essentially translates to determining if the state s—which is an aggregate of executed actions, with each
action representing a step towards the solution—is a valid solution for the problem posed by g. Given that
the actual reward function, R (described in §2.1), is applicable only to problems where a ground-truth
answer is available, the reward network serves as a surrogate that is crucial for evaluating sub-problems
encountered during the algorithm’s execution.
This model is built on the architecture of CodeLLaMA and employs LoRA to achieve efficient finetuning.
The reward model is trained through instruction finetuning, utilizing the following instruction template:
Does the provided solution accurately address the given problem?
{problem} {solution} {Y/N}.
**B.5** **Value Network**
The value network, represented as v^π(s, g), is a regression model that takes a state s and an intended goal g as inputs, and outputs a score representing the likelihood of successfully achieving the goal from the state under the policy network π.
This model is trained to approximate the estimated ˆα, utilizing instruction finetuning. The instruction
template is defined as:
Determine the probability of resolving the problem, starting from
the partial solution: {problem} {partial solution}.
This model, built on the CodeLLaMA architecture, is finetuned using LoRA. It is noted that, during
each iteration of the sequential subgoal optimization process, a unique set of LoRA parameters is used
to avoid any potential discrepancies between iterations. This approach ensures that the value network
accurately reflects the real-time capabilities of the policy network.
**C** **Details about Trajectory Dataset Creation**
To construct the goal collection Dg in Alg. 1, we incorporate mathematical problems sourced from the
training subsets of three distinct datasets: GSM8k, MATH, and AQuA. For the generation of solutions
corresponding to each problem, we apply a prompt as follows:
### Instruction
Construct a Python script to address the given problem:
{problem}
### Response:
{solution}
In this format, “solution” is completed by GPT-3.5-turbo. The solution is subsequently broken down
into steps, with the i-th state in a trajectory comprising the first i steps, and the i-th action defined as the
(i + 1)-th step.
In a trajectory, all states except s0, which includes essential imports and function definitions (e.g., “import math; def solve():”), are considered intermediate states.
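As an illustration of this construction, the sketch below splits a solution into line-level steps and derives (state, action) pairs, where each state aggregates the preceding steps; treating each non-empty line as one step (and the first line as s0) is a simplification for illustration.

```python
def build_state_action_pairs(solution: str):
    """Derive (state, action) pairs from a code solution.
    Each non-empty line is treated as one step: a state aggregates the steps
    executed so far, and the action is the next step (a simplification)."""
    steps = [line for line in solution.splitlines() if line.strip()]
    pairs = []
    for i in range(len(steps) - 1):
        state = "\n".join(steps[: i + 1])  # states are aggregates of executed steps
        action = steps[i + 1]              # the next step in the solution
        pairs.append((state, action))
    return pairs

example = "import math\ndef solve():\n    x = 2 + 3\n    return x"
for state, action in build_state_action_pairs(example):
    print(repr(state), "->", repr(action))
```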
**D** **Details about Experimental Setup**
**D.1** **Training Data**
For training SEGO, we use GSM8K, MATH, and AQuA (Ling et al., 2017) datasets. After filtering for correct answers, the resulting training set includes 10,374 samples from GSM8K, 10,981 from MATH, and 35,355 from AQuA. These problems form the goal collection Dg in Alg. 1.
**D.2** **Baselines**
**Closed-Source Models.** (1) GPT-4: A model that sets a standard in various academic domains, including
those that require intricate mathematical reasoning (OpenAI, 2023). (2) PaLM-2: A model that excels at
logical reasoning and multilingual tasks, demonstrating advanced capabilities in reasoning and solving
complex mathematical problems in multiple languages (Anil et al., 2023). (3) Minerva: A model
that specializes in quantitative reasoning, providing precise and comprehensive solutions to advanced
mathematical, scientific, and engineering problems (Lewkowycz et al., 2022).
**Open-Source Models.** (1) LLaMA2: A model trained on 2 trillion tokens of publicly accessible data that exhibits outstanding capabilities in mathematical reasoning (Touvron et al., 2023b). (2) WizardMATH: A model that enhances the mathematical reasoning capabilities of LLaMA2 by curating more complex and diverse supervised finetuning data (Luo et al., 2023). (3) MetaMath: This model employs
a question bootstrapping technique, facilitating the generation of questions through both forward and
backward reasoning paths. It further enhances its capabilities by incorporating Large Language Models
(LLMs) to refine the phrasing of the question text (Yu et al., 2023). (4) CodeLLaMA: A model that
excels in code-related tasks with implications in mathematical programming and algorithm synthesis,
demonstrating superior infilling capabilities and support for extensive input contexts in programming
tasks (Rozière et al., 2023).[3] (5) MAmmoTH-Coder: This model leverages a training dataset that
incorporates both chain-of-thought (CoT) and program-of-thought (PoT) rationales, thereby not only
facilitating the utilization of various tools but also accommodating diverse thought processes for solving
distinct mathematical problems (Yue et al., 2023).
**D.3** **Implementation Details**
We maintain model consistency by employing CodeLLaMA as the base model for both the policy network and the auxiliary modules, including the subgoal generator, subgoal optimizer, reward network, and value network. Efficient finetuning of the auxiliary modules is achieved through the utilization of LoRA (Hu et al., 2021), configured with parameters r = 16, lora_alpha = 32, and lora_dropout = 0.05, targeting the “q_proj” and “k_proj” modules. The learning rates are set at 1e-5 and 1e-4 for the policy and auxiliary modules, respectively, with a uniform batch size of 32. When collecting data from gpt-3.5-turbo-0613, we set temperature and top_p as 0.8 and 1.0 respectively. All models go through an initial training phase of 4,800 steps. Subsequently, a sequential optimization process is conducted, with the number (N) and length (η) of sequences set as 2 and 3 respectively, and the temperature and top_p for the subgoal generator/optimizer and the policy network configured at 0.2 and 0.95 respectively. This optimization is performed three times, each lasting 1,200 steps, and when η = 3, the parameters β1 and β2 are precisely set at 0.33 and 0.66 respectively. Rigorous contamination checking, as delineated by OpenAI (2023), is executed to verify the purity of our test sets for GSM8K and MATH. During the test phase, a greedy search strategy is employed.
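For reference, the LoRA configuration described above maps directly onto the peft library; the sketch below mirrors the stated hyperparameters (r = 16, lora_alpha = 32, lora_dropout = 0.05, targeting q_proj and k_proj), while the base checkpoint name is only a placeholder.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj"],
)

base_model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")  # placeholder
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()
```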
**E** **The Annotation of Problem Hardness**
We employ the following prompt to automatically annotate the difficulty with gpt-3.5-turbo-0613:
Please assign a score between 1 and 5 to the following question,
indicating its level of difficulty and complexity. A higher score
should be given to denote greater difficulty and complexity.
Please provide only the score, without any additional explanations
or reasons.
### Input:
{question}
### Output:
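A hedged sketch of scripting this annotation against the OpenAI chat API is shown below; the client usage follows the generic modern interface, and the temperature value is an assumption rather than a reported setting.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Please assign a score between 1 and 5 to the following question, "
    "indicating its level of difficulty and complexity. A higher score "
    "should be given to denote greater difficulty and complexity. "
    "Please provide only the score, without any additional explanations or reasons.\n"
    "### Input:\n{question}\n### Output:"
)

def annotate_difficulty(question: str) -> int:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-0613",
        messages=[{"role": "user", "content": PROMPT.format(question=question)}],
        temperature=0.0,  # assumption; the annotation temperature is not reported
    )
    return int(response.choices[0].message.content.strip())
```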
**F** **More Discussions about Related Works**
**Mathematical Reasoning with LLMs.** The exploration of mathematical reasoning in Large Language
Models (LLMs) has been significantly influenced by the development of datasets such as GSM8K (Cobbe
et al., 2021) and MATH (Hendrycks et al., 2021), serving as crucial benchmarks for assessing machine
learning models in mathematical domains. GSM8K encompasses a variety of grade school math problems,
while MATH compiles challenging competition mathematics problems. The introduction of extensive
datasets (Koncel-Kedziorski et al., 2016; Ling et al., 2017; Talmor et al., 2018; Geva et al., 2021) and
platforms like MWPToolkit (Lan et al., 2022) has enriched the field. This exploration is systematically
categorized into two main domains: prompting strategies and learning with verifications. In the realm of
prompting strategies, a variety of methods have been conceptualized to enhance the reasoning capabilities
of LLMs. Techniques such as Chain-of-Thought Prompting (Wei et al., 2023; Wang et al., 2022),
Progressive-Hint Prompting (Zheng et al., 2023), Least-to-Most Prompting (Zhou et al., 2022), and
[3] (Footnote referenced in Appendix D.2.) For CodeLLaMA, we ensure consistency with our models by employing identical decoding methods and prompts during implementation, while for the other models, we refer to the results reported in their respective papers.
bi-modal behavioral alignment (Zhao et al., 2024) have been instrumental in progressively guiding
LLMs to accurate conclusions and facilitating the generation of intermediate reasoning steps. Moreover,
methodologies like Complexity-Based Prompting (Fu et al., 2023) and Self-Consistency(Wang et al.,
2022) exploit higher reasoning complexity and diverse reasoning paths, respectively, to realize significant
advancements in multi-step reasoning tasks. Within learning with verifications, the emphasis is on
optimizing the mathematical proficiencies of LLMs through the integration of verifiers. Strategies like
outcome-based verifiers (Cobbe et al., 2021), step-aware verifiers (Li et al., 2023; Lightman et al., 2023),
and learning from partially-correct solutions (Ni et al., 2023) have been deployed to bolster reliability
and precision in mathematical reasoning. While the aforementioned domains have significantly advanced
mathematical reasoning within LLMs, our approach is orthogonal to these categories. We concentrate on
the formulation of adaptive curricula, emphasizing the incorporation of subgoals, to facilitate nuanced
learning pathways and enhance the model’s mathematical reasoning capabilities. A parallel and notably
concurrent work, MAmmoTH (Yue et al., 2023), investigates the impact of instruction finetuning to
empower large language models with mathematical problem-solving capabilities. This can be considered
as an implementation of the instruction finetuning stage within our framework.
**Subgoal-based RL.** Subgoal Search is a central component in reinforcement learning, essential for
empowering AI systems to navigate through complex, extensive tasks effectively. This concept has
played a vital role in uncovering important aspects such as the benefits of recognizing and rewarding
subgoals (Zhai et al., 2022), the proper structuring of Markov decision processes for hierarchical reinforcement learning (Wen et al., 2020), the difficulties in selecting the most suitable options for planning (Jinnai
et al., 2019a), and the incorporation of temporal abstraction in RL (Fruit et al., 2017). The practical
research in this field mainly focuses on exploring and creating subgoals for planning and developing
learning curricula for subgoals. Exploration is aimed at finding the best or most efficient strategies, using
diverse approaches like reducing cover time (Jinnai et al., 2019b), understanding dynamical distances (Hartikainen et al., 2019), increasing entropy (Pitis et al., 2020), and applying asymmetric self-play (OpenAI
et al., 2021). In the area of subgoal planning, a variety of algorithms have been developed to refine
decision-making processes. For example, SoRB (Eysenbach et al., 2019) utilizes RL to develop a graph
for subgoal sequences, DC-MCTS (Parascandolo et al., 2020) employs learned subgoal proposals to
divide tasks, PAIR (Li et al., 2022) combines online RL with offline supervised learning, and (Moro et al.,
2022) improve MCTS with Hindsight Experience Replay for goal-oriented planning. Moreover, the work
by (Chane-Sane et al., 2021) provides concise insights into improving goal-conditioned reinforcement
learning by conceptualizing imagined subgoals, adding a fresh viewpoint to the field. Research in curriculum learning has developed innovative methods to construct curricula that systematically escalate the
complexity of subgoals, thereby improving the speed and quality of learning (Zhang et al., 2020, 2021).
The exploration of subgoal learning in the realm of complex mathematical problem-solving represents a
largely unexplored field. Our work delves into the inherent challenges of applying subgoal learning in
mathematical contexts, specifically, the difficulty in identifying the optimal subgoal within expansive state
spaces, and introduces a theoretical framework to navigate these challenges.
**G** **Details about Time-complexity**
This section presents the analysis of time-complexity for the sequential subgoal optimization process (see
§3.2). For each example, the frequency of module invocation is shown in Table 3.
We acknowledge that the primary computational cost in our method stems from the decoding process
conducted by the subgoal generator or optimizer. This process indeed requires significantly more
time compared to the computation involved in calculating scores. Notably, as shown in §6.2, our
method outperforms other approaches that produce more subgoals without sequential optimization, while
maintaining a comparable computational budget. This result indicates the effectiveness of SEGO in
identifying vital subgoals.
| Modules | Times |
|---|---|
| Value Network (score calculation) | 2 × N × η |
| Reward Network (score calculation) | N × η |
| Subgoal Generator (score calculation) | N × η |
| Subgoal Optimizer (score calculation) | 2 × N × η |
| Subgoal Generator (decoding) | N |
| Subgoal Optimizer (decoding) | N × (η − 1) |
Table 3: The frequency of module invocation in the sequential subgoal optimization process.
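Plugging in the configuration from Appendix D.3 (N = 2 sequences of length η = 3), the counts in Table 3 can be tallied with a trivial helper (purely illustrative):

```python
def invocation_counts(N: int, eta: int) -> dict:
    """Per-example module invocation counts from Table 3."""
    return {
        "value_network_score": 2 * N * eta,
        "reward_network_score": N * eta,
        "subgoal_generator_score": N * eta,
        "subgoal_optimizer_score": 2 * N * eta,
        "subgoal_generator_decode": N,
        "subgoal_optimizer_decode": N * (eta - 1),
    }

print(invocation_counts(N=2, eta=3))
# {'value_network_score': 12, 'reward_network_score': 6, 'subgoal_generator_score': 6,
#  'subgoal_optimizer_score': 12, 'subgoal_generator_decode': 2, 'subgoal_optimizer_decode': 4}
```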
**H** **Analysis on Whether GPT-3.5-turbo Serves as an Upper Bound**
This study seeks to explore the hypothesis that GPT-3.5-turbo represents a performance ceiling for SEGO.
Moreover, it examines the applicability of SEGO when paired with more advanced foundational models,
specifically employing Mistral, a language model with 7 billion parameters noted for its exceptional
performance and efficiency (Jiang et al., 2023). In our experimental setup, both SEGO and GPT-3.5-turbo
leverage a program-of-thought (Chen et al., 2022) rationale to ensure a fair comparison. The results are
presented in Table 4.
| Models | GSM8K | MATH |
|---|---|---|
| GPT-3.5-turbo | 77.2 | 37.5 |
| SEGO (with CodeLLaMA-13b) | 72.5 | 40.0 |
| SEGO (with Mistral-7b) | 77.9 | 40.3 |
Table 4: Comparison of model performance across GSM8K and MATH benchmarks.
The results suggest that SEGO's performance potential is not limited by the upper limits of GPT-3.5-turbo. This point is particularly supported by the results in the MATH benchmark, where SEGO
configurations utilizing Mistral-7b and CodeLLaMA-13b models significantly surpass the performance of
GPT-3.5-turbo. This performance differential is predominantly attributed to the subgoal-based fine-tuning
phase within SEGO, which enables the policy network to generate novel solutions that exceed the upper
limits of GPT-3.5-turbo.
**I** **Performance of Various Components**
This section delves into the performance evaluation of key components within the SEGO framework.
**Reward Network.** The efficacy of the reward network was gauged through its performance on a binary
classification task, aimed at determining the feasibility of achieving a goal state from a given state, as
inferred from the reward scores. The classification accuracy achieved by the reward network is 62.8%.
**Value Network.** The performance of the value network was evaluated based on the metric recall1@10,
which reflects the network’s ability to accurately identify viable subgoals from a set of ten candidates.
The criteria for subgoal validity are detailed in §6.3. The results of this evaluation are presented in Table
5, illustrating the value network's performance improvements after subgoal-based fine-tuning.
| Models | recall1@10 |
|---|---|
| Value network (after initial fine-tuning) | 36.1% |
| Value network (after subgoal-based fine-tuning) | 52.7% |
Table 5: Performance of the value network
**Subgoal Generator and Optimizer.** The assessment extended to the subgoal generator and the integrated approach combining subgoal generation with optimization. This combined method, denoted as
“subgoal generator + optimizer”, involves initially generating a subgoal followed by its refinement via
the subgoal optimizer. The efficacy of these approaches, particularly in generating valid subgoals, is
summarized in Table 6.
| Models | Percentage of valid subgoals |
|---|---|
| subgoal generator | 27.4% |
| subgoal generator + optimizer | 29.5% |
Table 6: Performance of the subgoal generator and optimizer.
**J** **Case Study**
**Problem: A four-digit code can consist of the digits 0 ~ 9 except that it does not contain the digits 1, 2, and 4 at all. If repeated**
digits are allowed, what is the probability that it has at least one even digit?
**Sub-problem: A four-digit code can consist of the digits 0 ~ 9 except that it does not contain the digits 1, 2, and 4 at all. If repeated**
digits are allowed, what is the probability that it has at least one even digit?
**Solution:**
```python
def solve() -> float:
    total_codes = 0  # Total number of possible codes
    codes_with_even_digit = 0  # Number of codes with at least one even digit
    # Iterate through all four-digit codes
    for i in range(10000):
        code = str(i).zfill(4)  # Convert the code to a four-digit string
        # Check if the code contains only digits 0, 3, 5, 6, 7, 8, and 9
        if all(digit in ['0', '3', '5', '6', '7', '8', '9'] for digit in code):
            total_codes += 1
    return total_codes
```
**Full solution:**
```python
def solve() -> float:
    total_codes = 0  # Total number of possible codes
    codes_with_even_digit = 0  # Number of codes with at least one even digit
    # Iterate through all four-digit codes
    for i in range(10000):
        code = str(i).zfill(4)  # Convert the code to a four-digit string
        # Check if the code contains only digits 0, 3, 5, 6, 7, 8, and 9
        if all(digit in ['0', '3', '5', '6', '7', '8', '9'] for digit in code):
            total_codes += 1
            # Check if the code has at least one even digit
            if any(digit in ['0', '6', '8'] for digit in code):
                codes_with_even_digit += 1
    # Calculate the probability as the ratio of codes with even digit to total codes
    probability = codes_with_even_digit / total_codes
    return probability
```
Figure 5: A case from the training data.
In this section, we delve into a specific example to illustrate the efficacy of our model, depicted
in Figure 5. In this figure, the elements labeled as the problem, sub-problem, and solution (of the
sub-problem) correspond to the final goal, intermediate goal, and intermediate state, respectively. The
sub-problem showcased is derived through the sequential subgoal optimization process. Additionally, we
provide the full solution, which is derived from the solution of the sub-problem. This case study indicates
the model’s capability to search for a suitable sub-problem that ultimately facilitates the derivation of the
accurate solution to the final goal.
| [
"Xueliang, Zhao",
"Vivek, Srikumar",
"Xinting, Huang",
"Lingpeng, Kong",
"Wei, Bi",
"Lun-Wei, Ku",
"Andre, Martins"
] | 2024-08-01T00:00:00 | ACL 2024 Long Papers | true | 0 | 0 | null | https://aclanthology.org/2024.acl-long.407 | https://arxiv.org/abs/2310.12960 | https://www.semanticscholar.org/paper/aac8cdd40b2bfd1b967f0e5ea6c01e93385169e7 |
SIaM: Self-Improving Code-Assisted Mathematical Reasoning of Large Language Models | There is a growing trend of teaching large language models (LLMs) to solve mathematical problems through coding. Existing studies primarily focus on prompting powerful, closed-source models to generate seed training data followed by in-domain data augmentation, equipping LLMs with considerable capabilities for code-aided mathematical reasoning. However, continually training these models on augmented data derived from a few datasets such as GSM8K may impair their generalization abilities and restrict their effectiveness to a narrow range of question types. Conversely, the potential of improving such LLMs by leveraging large-scale, expert-written, diverse math question-answer pairs remains unexplored. To utilize these resources and tackle unique challenges such as code response assessment, we propose a novel paradigm that uses a code-based critic model to guide steps including question-code data construction, quality control, and complementary evaluation. We also explore different alignment algorithms with self-generated instruction/preference data to foster continuous improvement. Experiments across both in-domain (up to +5.7%) and out-of-domain (+4.4%) benchmarks in English and Chinese demonstrate the effectiveness of the proposed paradigm. | This work proposes a novel paradigm that uses a code-based critic model to guide steps including question-code data construction, quality control, and complementary evaluation and explores different alignment algorithms with self-generated instruction/preference data to foster continuous improvement. | ## SIAM: SELF-IMPROVING CODE-ASSISTED MATHEMAT### ICAL REASONING OF LARGE LANGUAGE MODELS
**Dian Yu, Baolin Peng[∗], Ye Tian, Linfeng Song, Haitao Mi, and Dong Yu**
Tencent AI Lab
Bellevue, WA, USA
{yudian,lfsong,haitaomi,dyu}@global.tencent.com
ABSTRACT
There is a growing trend of teaching large language models (LLMs) to solve mathematical problems through coding. Existing studies primarily focus on prompting
powerful, closed-source models to generate seed training data followed by in-domain data augmentation, equipping LLMs with considerable capabilities for
code-aided mathematical reasoning. However, continually training these models
on augmented data derived from a few datasets such as GSM8K may impair their
generalization abilities and restrict their effectiveness to a narrow range of question
types. Conversely, the potential of improving such LLMs by leveraging large-scale,
expert-written, diverse math question-answer pairs remains unexplored. To utilize
these resources and tackle unique challenges such as code response assessment,
we propose a novel paradigm that uses a code-based critic model to guide steps
including question-code data construction, quality control, and complementary
evaluation. We also explore different alignment algorithms with self-generated
instruction/preference data to foster continuous improvement. Experiments across
both in-domain (up to +5.7%) and out-of-domain (+4.4%) benchmarks in English
and Chinese demonstrate the effectiveness of the proposed paradigm. Models and
[code will be released at https://github.com/tencent-ailab/siam.](https://github.com/tencent-ailab/siam)
1 INTRODUCTION
Though large language models (LLMs) have demonstrated strong performance on mathematical
benchmarks, they still struggle with achieving accurate computation and reasoning, especially in
out-of-domain scenarios. For example, even powerful closed-source LLMs achieve only 2% accuracy
on 5-digit multiplication (Chen et al., 2023) with step-by-step reasoning (or Chain-of-Thought,
CoT) (Wei et al., 2022). To alleviate the computational burden on LLMs, particularly those of
smaller sizes, there is a growing trend of utilizing code and code interpreters to enhance precise
computation and reasoning of LLMs in solving mathematical problems (Chen et al., 2022; Gao
et al., 2023b; Zhou et al., 2023). An effective method involves prompting closed-source LLMs to
generate code-based solutions for given questions. However, previous studies demonstrated that
closed-source models often struggle with real-world high school and college-level math exams (Liu
et al., 2024). Solving advanced problems through coding demands not only mathematical expertise
but also interdisciplinary knowledge and skills, including programming and natural language, making
it a more formidable challenge even for closed-source LLMs. Previous code-assisted studies primarily
focus on using closed-source LLMs such as GPT-4 to label a few small-scale, representative datasets
such as GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021), verifying the correctness of
the solutions via pattern-based answer matching, and training models on the verified data for further
**in-domain data augmentation through sampling, code execution, and answer validation (Wang et al.,**
2023; Liu et al., 2023; Gou et al., 2024; Lu et al., 2024). However, continually learning from these
datasets or their augmented versions, regardless of the use of code, is evidently less effective for
improving the generalization of LLMs due to the limited diversity.
On the other hand, large-scale, expert-written, mathematical question-answer (QA) pairs from
educational web resources remain under-studied to improve code-assisted math reasoning abilities
_∗Work done at Tencent AI Lab._
*(Figure 1 depicts the pipeline: a seed model trained on seed (question, solution/code) data samples code for seed questions and for questions from unseen web corpora; a code interpreter executes the samples and a critic model judges them (YES/NO) against the reference answers; valid (question, code) pairs are used for SFT/DPO, producing the policy for the next iteration.)*
Figure 1: Overview of our self-improving code-assisted paradigm using large-scale web QA data.
of LLMs. These resources span educational levels from primary school to college and include
various question types and answer formats, such as multiple-choice, application, proof, and cloze.
To use these resources to self-improve code-assisted LLMs, instead of further extensively distilling
closed-source models, one natural solution is to use a fine-tuned model to generate code samples for
each problem and use the valid data to (iteratively) improve this LLM, similar to self-improved CoT
reasoners (Zelikman et al., 2022; Yuan et al., 2023; Xu et al., 2024; Hosseini et al., 2024) over data
with reference answers. However, the key challenge is to determine whether the self-generated
**code responses align with reference answers in diverse formats. Fortunately, with the aid of an**
external code interpreter, we are less concerned about potential step-wise computation errors that may
occur in CoT reasoning. We assume a code solution is more likely to be correct if its execution result
matches the reference answers, thus shifting the focus from the step-by-step comparison to comparing
the reference answers with the code execution results. Based on our analysis (Section 3.1), we
observe that most cases primarily require format conversion between plain text and code syntax (e.g.,
“(x-5)(xˆ2-4x+7)” vs. “(x-5)*(x**2-4*x+7)” and “(1, -2, 2, -3)” vs. “{A:1, B:-2, C:2, D:-3}”) and
relatively simple numerical calculations, which do not require advanced logical reasoning abilities or
in-depth language-specific knowledge (Section 3.5).
These observations and task simplification motivate us to design a critic model to evaluate the
correctness of the code execution result against the reference answer by predicting YES or NO (see
examples in Table 1). As illustrated in Figure 1, this critic model is used to guide multiple steps
during self-improvement. We first train a model with seed question-code data following previous
code-assisted studies and consider it as the initial policy model. In each iteration, we use the current
policy model to generate code samples for new questions and keep the highest-scoring valid code
responses rated by the critic model for supervised fine-tuning (SFT) in the subsequent iteration. To
foster continuous improvement, we also explore different preference learning algorithms such as
DPO (Rafailov et al., 2024) and ORPO (Hong et al., 2024) with self-generated preference data, where
the preference labels are provided by the critic model.
We perform experiments on various model families, such as Llama3-8B (AI@Meta, 2024) and
DeepSeek-Coder-7B (Guo et al., 2024), and Qwen2-7B (Yang et al., 2024). Experimental results
across both in-domain (up to +5.7%) and out-of-domain (OOD) (+4.4%) benchmarks in English and
Chinese show the effectiveness of self-improving LLMs using our proposed paradigm with large-scale
web QA pairs. The resulting 7-8B models can outperform state-of-the-art 70B code-assisted math
LLMs (Gou et al., 2024) by 11.9% in OOD scenarios. Notably, we observe a strong correlation
between the traditional heuristic-based evaluation method and the critic model (Section 3.5), with the
latter reducing the additional human effort needed to design rules for new mathematical benchmarks.
Additionally, introducing SFT loss into the DPO training is surprisingly effective in controlling the
code response length. To summarize the contributions of this work:
- To the best of our knowledge, this is the first attempt to leverage large-scale web QA pairs
to improve the code-assisted mathematical reasoning abilities of LLMs.
- To better leverage these large-scale, diverse web QA pairs, we propose a novel iterative
self-improving paradigm that employs a new critic model to guide various steps such as data
construction and filtering. This critic model can also serve as a complementary evaluation
scorer, reducing the reliance on heuristic design for new evaluation tasks.
- Extensive experiments on both English and Chinese tasks demonstrate the effectiveness of
our paradigm, and our comprehensive analysis of the key factors in achieving continuous
improvement at different stages may shed light on future studies.
**System Prompt** Your goal is to evaluate whether the candidate answer provided by the model for a math problem
matches the reference answer. Here are the steps to complete the task:
– First, carefully read the given math problem.
– Next, review the reference answer for the math problem.
– Then, examine the candidate answer provided by the model, which may include a program and the
result of running that program.
– Finally, summarize whether the candidate answer matches the reference answer or can be made to
match through simple calculations/conversions.
– The response format should be Yes or No.
**Instruction** ### Question\n\n Given f(1-2x)=3x+1,find f(-3)= __ .
### Reference Answer\n \n 7 \n \n
### Candidate Answer\n \n <code>from sympy import symbols, Eq, solve\n# Define the variable\n
x = symbols(’x’)\n# Establish the equation based on the problem conditions\n equation = Eq(1 - 2*x,
-3)\n# Solve the equation\n solution = solve(equation, x, dict=True)\n solution_value = solution[0][x]\n
# Calculate f(-3)\n f_neg_3 = 3*solution_value + 1\n f_neg_3</code><solution>7</solution>\n\n
### Assessment
**Output** Yes
**Instruction** ### Question\n\n Given the sequence {a<sub>n</sub>} with the sum of the first n terms
S<sub>n</sub>=2n-3, what is a<sub>3</sub>? A: 3B: 2C: 1D: 0
### Reference Answer\n \n B \n \n
### Candidate Answer\n \n <code># Define a function to calculate the nth term\n def calculate_nth_term(n):\n # Based on the derived formula\n an = 2 ** n - 2 ** (n - 1)\n return an\n # Calculate
the value of the 3rd term \n a3 = calculate_nth_term(3) # a3 </code><solution>4</solution>\n\n
### Assessment
**Output** No
Table 1: Example instances of the code-based critic task ((question, reference answer) pairs come
from the web QA data, and Chinese texts are translated into English).
2 METHOD
2.1 TRAINING AN INITIAL MODEL
One key factor for an iterative paradigm is to have a reasonably strong initial model. To train such a
model, we first use high-quality seed data to fine-tune an LLM, resulting in model Mseed. We use
Mseed to generate code samples and keep up to four predictions per question wherein the execution result of the code matches the reference answer, and we combine the seed data and the self-distilled data to train M0, which is further used as the initial model for later stages. We will introduce more details
about the seed data construction in the experiment section.
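A minimal sketch of this self-distillation filter, assuming each question comes with sampled code solutions, their execution results, and a reference answer (exact string matching is used here purely for illustration):

```python
def self_distill(samples: dict, max_keep: int = 4) -> dict:
    """Keep up to `max_keep` code predictions per question whose execution
    result matches the reference answer.

    `samples` maps a question to {'answer': str, 'codes': [(code, exec_result), ...]}.
    """
    kept = {}
    for question, item in samples.items():
        valid = [
            code
            for code, exec_result in item["codes"]
            if str(exec_result).strip() == str(item["answer"]).strip()
        ]
        if valid:
            kept[question] = valid[:max_keep]
    return kept
```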
2.2 BUILDING A MULTI-USE CODE-BASED CRITIC MODEL
To improve LLMs with large-scale, diverse-format math QA data without code annotations, several
challenges arise in data utilization, filtering, and evaluation. First, previous studies primarily use
pattern-based methods to compare predictions and ground truth answers during validation and
evaluation. This works well for GSM-style datasets, where answers are single numbers and well-formatted (e.g., “72” in “...72 clips altogether in April and May.\n #### 72”). However, pattern-based
methods face inherent challenges in handling diverse answer types and formats and bridging the
gap between natural language and programming language. For example, with the MATH dataset,
comparing CoT predictions with reference answers in LaTeX-like format already requires human-written patterns and answer conversion (Yue et al., 2023). This complexity is compounded when
predictions are presented in code syntax, even when the task is simplified to compare the reference
answer with the code execution result.
To address the above challenges, we propose building a code-based critic model optimized by the
following objective:
$$\mathcal{L}(r_\phi) = -\log r_\phi(y \mid q, a, c, e), \qquad (1)$$
where q denotes a question, a is the reference answer to q, c represents the code response to q, and e
is the execution result of code c. To simplify the task, we let y be either “YES” or “NO”. Examples
are shown in Table 1. We leave other formulations, such as training a scalar critic model (Ouyang
et al., 2022), to future work.
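To make the scoring concrete, the sketch below shows one plausible way to obtain the critic's confidence from a causal LM by comparing next-token probabilities of “Yes” and “No” after a Table 1 style prompt. The checkpoint path, `build_critic_prompt`, and the single-token treatment of “Yes”/“No” are illustrative assumptions rather than the exact released setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/critic")          # hypothetical checkpoint
model = AutoModelForCausalLM.from_pretrained("path/to/critic",
                                             torch_dtype=torch.bfloat16)
model.eval()

def build_critic_prompt(question, reference, code, result):
    # Mirrors the Table 1 layout: question, reference answer, candidate code + execution result.
    return (f"### Question\n{question}\n\n"
            f"### Reference Answer\n{reference}\n\n"
            f"### Candidate Answer\n<code>{code}</code><solution>{result}</solution>\n\n"
            f"### Assessment\n")

@torch.no_grad()
def critic_confidence(question, reference, code, result):
    """Return p(YES), i.e., the critic's confidence that the code answer is correct."""
    prompt = build_critic_prompt(question, reference, code, result)
    inputs = tokenizer(prompt, return_tensors="pt")
    next_token_logits = model(**inputs).logits[0, -1]
    # Assumes "Yes"/"No" start with distinct single tokens; real tokenizers may differ.
    yes_id = tokenizer("Yes", add_special_tokens=False).input_ids[0]
    no_id = tokenizer("No", add_special_tokens=False).input_ids[0]
    probs = torch.softmax(next_token_logits[[yes_id, no_id]], dim=-1)
    return probs[0].item()
```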
2.3 CODE DATA GENERATION
As mentioned previously, our goal is to leverage web math QA data to continuously self-improve the
code-assisted mathematical reasoning ability of LLMs. For well-formatted, web-collected data such
as APE (Zhao et al., 2020) and CM (Qin et al., 2021), where most answers are one or two numerical
values (see examples in Table 15), it is efficient and effective to compare the reference answer and the
execution result of the code using scripts released by previous studies (Section 3.2). For real-world
math data involving multiple types of questions, such as multiple-choice, multiple-question, fill-in-the-blank, application, and proof, using a critic model introduced in the previous section is more
flexible and saves the intensive effort of writing task-specific patterns, which is time-consuming and
may suffer from relatively low recall. Note that for all questions, we only use their reference answers
to verify the correctness of code execution results instead of directly training on these answers, and
we only use benchmarks’ training sets.
In the (k + 1)-th iteration, for each new question, we use the current policy model πθk to generate
five code samples and execute them to obtain the results. For questions in the diverse-format web
data, the critic model is then used to predict YES or NO for each response (ai, cij, eij) given qi.
We use the probability of YES or NO as the confidence value for the critic model’s judgment. A
higher probability score indicates a greater confidence in the code response, either agreeing with or
disagreeing with the reference answer.
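A minimal sketch of this labeling loop is given below; `policy_sample`, `execute`, and `critic_confidence` are stand-ins for the policy's sampler, the sandboxed executor, and the critic scorer (e.g., the function sketched in Section 2.2), not the actual pipeline code.

```python
def label_web_responses(question, reference_answer, policy_sample, execute,
                        critic_confidence, num_samples=5):
    """Generate code samples for one diverse-format web question and attach the
    critic's judgement and p(YES) to each (a_i, c_ij, e_ij) triple."""
    labeled = []
    for code in policy_sample(question, n=num_samples):
        result = execute(code)                    # e_ij: execution result (None on failure)
        p_yes = critic_confidence(question, reference_answer, code, result)
        labeled.append({
            "question": question,
            "code": code,
            "result": result,
            "judgement": "YES" if p_yes >= 0.5 else "NO",
            "p_yes": p_yes,                       # used later as the confidence score
        })
    return labeled
```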
2.4 SELF-IMPROVEMENT WITH UNSEEN DATA
One natural choice is to perform supervised fine-tuning (SFT) on πθk using DSFT:
$$\mathcal{L}_{\text{SFT}}(\pi_{\theta_{k+1}}) = -\log \pi_{\theta_{k+1}}(c \mid q) \quad (2)$$
$$\mathcal{D}_{\text{SFT}} = \{(q_i, c_{ij}) \mid r_\phi(y = \text{YES} \mid q_i, a_i, c_{ij}, e_{ij})\} \quad (3)$$
As critiques may contain errors, we explore using the probability of each judgement as a confidence
score to filter out noise. Besides, we introduce extra constraints: for each question, we only retain
the highest-scoring positive instance tij = {qi, ai, cij, eij} with tij ∈ Ti, similar to rejection sampling (Bai et al., 2022). To encourage models to learn from more challenging problems, if all instances of the same question qi in Ti are labeled as YES, we discard this question and its corresponding generated code from consideration.
$$\mathcal{D}_{\text{SFT,H}} = \Big\{(q_i, c_{ij}) \;\Big|\; r_\phi(y = \text{YES} \mid t_{ij}),\; p_{r_\phi}(y = \text{YES} \mid t_{ij}) > \lambda_1,\; t_{ij} = \arg\max_{t_{ij} \in T_i} p_{r_\phi}(y = \text{YES} \mid t_{ij}),\; \sum_{j=1}^{|T_i|} \mathbf{1}\{r_\phi(y = \text{NO} \mid t_{ij})\} \ge \lambda_2 \Big\} \quad (4)$$
where λ1, λ2 represent thresholds for filtering and difficulty control.
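Assuming each sample carries the `judgement` and `p_yes` fields produced by the labeling sketch above, Equation 4 can be approximated as follows (λ1 = 0.8 and λ2 = 3 in Section 3.2); this is an illustrative reading of the filter, not the exact released code.

```python
def build_sft_h(grouped_samples, lambda1=0.8, lambda2=3):
    """grouped_samples: dict mapping a question id to the list of labeled samples
    for that question (each with 'question', 'code', 'judgement', 'p_yes')."""
    dataset = []
    for samples in grouped_samples.values():
        num_no = sum(s["judgement"] == "NO" for s in samples)
        if num_no < lambda2:      # difficulty control: skip questions that are too easy
            continue
        positives = [s for s in samples if s["judgement"] == "YES"]
        if not positives:
            continue
        best = max(positives, key=lambda s: s["p_yes"])   # rejection-sampling-style pick
        if best["p_yes"] > lambda1:                       # confidence filter against noisy critiques
            dataset.append({"question": best["question"], "code": best["code"]})
    return dataset
```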
In addition to supervised fine-tuning a policy model on self-generated SFT data (DSFT, H or DSFT),
we further leverage negative instances by optimizing the policy model on preference data using
algorithms such as DPO (Rafailov et al., 2024) and ORPO (Hong et al., 2024). Compared to SFT,
these preference learning algorithms additionally decrease the probability of losing responses. We
mainly focus on DPO and leave other options for future studies, and we jointly train the policy with
the SFT objective to alleviate overfitting to the preference data and ensure a stable update (Hong et al.,
-----
2024). See more discussions on the impact of the SFT objective, especially its role in controlling the
response length, in Section 3.4.
$$\mathcal{L}_{\text{DPO}}(\pi_{\theta_{k+1}}) = -\log \sigma\!\left(\beta \log \frac{\pi_{\theta_{k+1}}(y_w \mid x)}{\pi_{\theta_k}(y_w \mid x)} - \beta \log \frac{\pi_{\theta_{k+1}}(y_l \mid x)}{\pi_{\theta_k}(y_l \mid x)}\right) - \lambda \cdot \log \pi_{\theta_{k+1}}(y_w \mid x) \quad (5)$$
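For reference, a schematic PyTorch version of Equation 5 computed from summed sequence log-probabilities is shown below; how the log-probabilities are gathered from the policy and reference models is omitted, and the function is a sketch rather than the training code used in the paper.

```python
import torch
import torch.nn.functional as F

def dpo_with_sft_loss(policy_chosen_logps, policy_rejected_logps,
                      ref_chosen_logps, ref_rejected_logps,
                      beta=0.1, sft_weight=1.0):
    """Equation 5: DPO loss plus an SFT term on the winning responses.
    All inputs are per-sequence log-probabilities of shape (batch,)."""
    chosen_logratio = policy_chosen_logps - ref_chosen_logps        # log pi_{k+1}/pi_k for y_w
    rejected_logratio = policy_rejected_logps - ref_rejected_logps  # same for y_l
    dpo_term = -F.logsigmoid(beta * (chosen_logratio - rejected_logratio))
    sft_term = -policy_chosen_logps                                 # -log pi_{k+1}(y_w | x)
    return (dpo_term + sft_weight * sft_term).mean()
```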
We can easily leverage our critic model to build preference (cw, cl) pairs, where cw represents the
winning code and cl represents the losing code. For each question, we use the highest-scoring YES
response and the highest-scoring NO response to form a preference pair, aiming to maximize the
difference between them. See preference data examples in Section A.6.
$$\mathcal{D}_{\text{DPO}} = \Big\{(q_i, c_{ij}, c_{ik}) \;\Big|\; r_\phi(y = \text{YES} \mid t_{ij}),\; r_\phi(y = \text{NO} \mid t_{ik}),\; t_{ij} = \arg\max_{t_{ij} \in T_i} p_{r_\phi}(y = \text{YES} \mid t_{ij}),\; t_{ik} = \arg\max_{t_{ik} \in T_i} p_{r_\phi}(y = \text{NO} \mid t_{ik}) \Big\} \quad (6)$$
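Continuing the earlier sketches, building these pairs reduces to selecting, per question, the highest-confidence YES response and the highest-confidence NO response; field names follow the hypothetical labeling sketch in Section 2.3.

```python
def build_dpo_pairs(grouped_samples):
    """grouped_samples: dict mapping a question id to its labeled samples
    (each with 'question', 'code', 'judgement', 'p_yes')."""
    pairs = []
    for samples in grouped_samples.values():
        positives = [s for s in samples if s["judgement"] == "YES"]
        negatives = [s for s in samples if s["judgement"] == "NO"]
        if not positives or not negatives:
            continue
        winner = max(positives, key=lambda s: s["p_yes"])        # arg max p(YES)
        loser = max(negatives, key=lambda s: 1.0 - s["p_yes"])   # arg max p(NO)
        pairs.append({"question": winner["question"],
                      "chosen": winner["code"],
                      "rejected": loser["code"]})
    return pairs
```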
3 EXPERIMENTS
3.1 DATA
We summarize the statistics of data used for self-improvement in Table 2. Due to limited space, see
statistics of evaluation benchmarks in Table 14 and more examples in Appendix.
**Data/Subset** **QA Source** **Size**
D0 (zh) web 76K
D0 (en) GSM8K, MATH 44K
D1 APE, CM 211K
D2,in-house (SFT) educational websites 893K
D2,in-house (SFT, H) educational websites 273K
D2,in-house (DPO) educational websites 465K
D2,WebInstruct (DPO) pre-training corpora 447K
Table 2: Statistics of training data used in our three-stage paradigm (D1 and D2,in-house are Chinese resources; D2,WebInstruct is English-dominant).
**Seed Data D0:** To generate the seed data for English, following previous work, we use GPT-4-0613 to generate Python code in an iterative fashion: we repeatedly sample the remaining questions that do not have correct code (i.e., code whose execution results match the reference answer) for up to three iterations. We use questions from the training sets of GSM8K (7.5K) and MATH (7.5K) as the seed questions for imitation learning. For datasets such as GSM8K in which the answers are mostly single numbers, it is easier to compare answer and code execution results. After two iterations, we can annotate 98.5% of questions in GSM8K. For datasets such as MATH wherein the answers
are diverse in formats, we simply keep the code that can be successfully executed without errors. For
seed questions for Chinese, we randomly sample 20K from (1.13M in total) collected or purchased
from educational web resources (Section A.6) and follow the same procedure using GPT-4-0613 for
code generation to construct the Chinese subset of D0.
**Value-Style D1: We utilize the initial policy M0 to generate code samples to questions in training**
sets of two open-source word math problem datasets APE (200.5K) (Zhao et al., 2020) and CM
(13.6K) (Qin et al., 2021), both collected from educational websites covering elementary and middle-school levels. Since all the answers are one or two numerical values, for efficiency, we use heuristics
with Python to compare the code execution results with reference answers for validation. We keep up
to four valid code samples for each question.
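The sketch below illustrates the kind of heuristic comparison involved for value-style answers; the normalization rules and tolerance are illustrative guesses, not the exact scripts released by previous studies.

```python
from fractions import Fraction

def values_match(prediction, reference, rel_tol=1e-4):
    """Loosely compare a code execution result with a numeric reference answer,
    tolerating forms such as '3/4', '75%', or '0.75'."""
    def to_float(x):
        s = str(x).strip()
        is_percent = s.endswith("%")
        try:
            value = float(Fraction(s.rstrip("%")))   # handles '3/4' as well as '0.75'
        except (ValueError, ZeroDivisionError):
            return None
        return value / 100 if is_percent else value

    p, r = to_float(prediction), to_float(reference)
    if p is None or r is None:
        return str(prediction).strip() == str(reference).strip()
    return abs(p - r) <= rel_tol * max(1.0, abs(r))
```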
**Diverse-Format Data D2 and Critic Data: To increase the diversity of our training data, we further**
consider large-scale mathematical QA pairs (excluding those used for seed data) mentioned previously.
For each question, we retain only one positive code and one negative code (if any exists) judged
by the critic. To evaluate the generalization and robustness of our paradigm, we also use a recently
released large-scale QA dataset named WebInstruct (Yue et al., 2024) to construct a similar-scale D2,
containing 447K preference pairs (see examples in Section A.6). Compared to our in-house web QA
data, WebInstruct is mostly in English and is extracted from the pre-training corpora. Therefore, the
answers are not guaranteed to be written by educational experts, as is the case for our Chinese web data.
-----
To better understand this web data and the critic task, we analyze the reference answers for 50
instances. Only 14% of them are single numerical values, while 50% involve format conversion (e.g.,
syntax or structure) when the answers are expressions, equations, coordinates, sets, etc. Another
difference between real-world data and well-formatted benchmarks is the inconsistency in the
format of reference answers. Specifically, half of them are mixed with CoT-style (Wei et al., 2022)
explanations and/or irrelevant contents such as tags and URLs. This makes it difficult to parse
short-form answers for easier matching with a few patterns, as done for clean benchmarks (e.g.,
answer indicators “###” for GSM8K and “BOX” for MATH). For multiple-choice or multi-part
questions (8% in total), we additionally require the question context for mapping option labels and
their contents, as well as question decomposition. These observations reflect the diversity of question
types in our web QA data. See more examples and statistics in the Appendix.
To build the training data for the critic model, we use M0 to generate code samples for randomly
sampled questions from D2 and execute these code samples. We then prompt GPT-4-0613 with the
input (question, code, code result, reference answer) following the template in Table 1. After filtering,
we retain 16.8K training instances, of which 48.6% are judged as YES.
3.2 IMPLEMENTATION
We use LLAMAFACTORY (Zheng et al., 2024) for efficient fine-tuning built upon DeepSpeed
(ZeRO-3). Our experiments are conducted using 8×A100 40GB GPUs. We train LLMs with BF16
mixed-precision. The training for the self-improving paradigm takes approximately 96 hours. With
80 workers in multi-processing mode on a CPU machine, we can execute about 9,003 code samples
per minute. Each model at each stage is trained for two epochs with a learning rate of 1e-5 for SFT
and 1e-6 for preference learning. We set the SFT loss coefficient (λ in Equation 5) to 1.0. The
maximum sequence length is set to 1024, and the batch size is set to 64. We set λ1 to 0.8 and λ2 to 3.
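As an illustration of the execution side, the sketch below runs generated snippets in separate interpreter processes with a timeout; the worker count mirrors the setup above, but the command line, timeout, and result capture (the paper's snippets end in a bare expression, which a real harness would need to print) are assumptions.

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor

def run_one(code, timeout=10):
    """Execute one generated Python snippet in a fresh interpreter and return
    its stdout, or None on error/timeout."""
    try:
        proc = subprocess.run(["python", "-c", code],
                              capture_output=True, text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return None
    return proc.stdout.strip() if proc.returncode == 0 else None

def run_many(codes, workers=80):
    # Multi-process execution, roughly matching the throughput setup described above.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_one, codes))

if __name__ == "__main__":
    print(run_many(["print(1 + 1)", "print(sum(range(10)))"], workers=2))
```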
We experiment with various LLMs to select backbone models such as CodeLlama-7B-Python (Roziere
et al., 2023), Llama3instruct (AI@Meta, 2024), CodeQwen1.5-7B-Chat (Team, 2024), QWEN2 (Yang
et al., 2024), and Deepseek-Coder-7B-instruct-v1.5 (Guo et al., 2024), which demonstrate strong
coding capabilities on code-related benchmarks. Due to limited computational resources, we use
their 7-8B versions with their default templates and leave the model scaling up for future work.
We primarily follow the evaluation scripts from previous studies (Liang et al., 2024) for Chinese
benchmarks and FastEval[1] for English benchmarks GSM8K and MATH, which often use Python for
numerical comparison. We also make adjustments to these scripts, as our predicted answers are in
code syntax. CodeLlama-7B-Python is used as the backbone model to train the code-based critic
model for three epochs with the maximum sequence length 4096.
3.3 THE PERFORMANCE OF THE INITIAL POLICY AND SELF-IMPROVED LLMS
As shown in Table 3, we experiment with three backbone models for self-improvement —
DeepSeekcode, Llama3instruct, QWEN2Mathinstruct — that show superior average performance across
math datasets in both Chinese (APE, CM, and CMATH (Wei et al., 2023)) and English (GSM8K
and MATH) than other investigated models when trained with seed data (see complete results of
initial policy models based on eight LLMs in Table 9). Therefore, we consider them as initial policy
models (i.e., M0) for self-improvement. After two additional iterations on the unseen data D1 and D2
constructed with the help of our code-based critic model, the resulting models (i.e., M2) consistently
outperform M0 by a large margin on Chinese benchmarks.
We observe that self-improving the initial policy model with Chinese-only data, D1 and D2, does not
hurt the accuracy of M2 on English tasks. In fact, it may be beneficial (e.g., +1.5% on both MATH
and GSM8K datasets using DeepSeekcode). Conversely, adding English seed data (36.7% of D0)
consistently improves M0’s average performance on Chinese benchmarks (D0 vs. D0,zh in Table 4).
To some extent, we may interpret code as a universal language for solving mathematical problems
across different languages. The language-specific parts are mainly in the code comments, which
are relatively indirect for problem-solving via code execution. Thus, our paradigm may reduce the
burden of preparing large-scale, language-specific math data for each language. We observe similar
trends on DeepSeekcode and QWEN2Mathinstruct; due to space constraints, see detailed results in Table 10.
1github.com/FastEval/FastEval/.
-----
We list several general-purpose/math-specified multi-lingual/English LLMs for reference. Note that
direct comparisons are challenging due to differences in architectures, pre-training corpora, alignment
algorithms, model size, the use of tools, and labeled data. For example, code-assisted methods ToRA,
MathCoder, and MathGenieLM are trained on 69K, 80K, and 170K English-only data, respectively,
augmented based on GSM8K and MATH. In contrast, our experiments use 44K English seed data
and explore the use of large-scale Chinese QA pairs. Moreover, the evaluation scripts, originally
designed for plain-text answers instead of code outputs, may cause an underestimation of our methods’
performance on datasets such as MATH, where answers involve more expressions and structures
beyond numerical values. This also highlights the need for a more flexible evaluation method.
**Chinese Tasks** **English Tasks**
**Model** **Size (B)** **Tool** **CM** **APE** **CMATH** **GSM8K** **MATH**
GPT-4-1106-Preview – _×_ – 84.2 89.3 93.6 53.6
Qwen-Chat (Bai et al., 2023) 72 _×_ – 77.1 88.1 76.4 31.8
ChatGLM-Math (Xu et al., 2024) 32 _×_ – 89.4 85.6 82.6 40.6
Skywork-Math (Yang et al., 2023) 13 _×_ – 74.4 77.3 72.3 17.0
Math-InternLM2 (Team, 2023) 20 _×_ – 75.2 78.5 82.6 37.7
MetaMath (Yu et al., 2023a) 70 _×_ – – – 82.3 26.6
MathCoder (Wang et al., 2023) 34 ✓ – – – 81.7 45.2
ToRA (Gou et al., 2024) 70 ✓ – – – 84.3 49.7
7 ✓ – – – 72.6 44.6
MathGenieLM (Lu et al., 2024) 70 ✓ – – – 88.4 51.2
MinT (Liang et al., 2024) 7 ✓ 77.6 76.0 – 40.8 –
**Initial Model Baselines (M0)**
QWEN2Mathinstruct 7 ✓ 84.9 83.4 87.3 79.5 48.0
DeepSeekcode 7 ✓ 82.7 81.2 87.0 77.4 44.4
Llama3instruct 8 ✓ 83.3 83.2 87.2 76.8 41.8
**Self-Improvement with Chinese Diverse-Format Web Data (M2)**
SIaM(QWEN2Mathinstruct) 7 ✓ 90.1 (+5.2) 88.1 (+4.7) 93.2 (+5.9) 81.5 (+2.0) 50.0 (+2.0)
SIaM(DeepSeekcode) 7 ✓ 87.3 (+4.6) 85.9 (+4.7) 91.2 (+4.2) 78.9 (+1.5) 45.9 (+1.5)
SIaM(Llama3instruct) 8 ✓ 89.0 (+5.7) 86.8 (+3.6) 90.8 (+3.6) 80.5 (+3.7) 41.9 (+0.1)
Table 3: Accuracy across the development sets of math datasets. All Chinese datasets are OOD for
M0. CMATH is OOD for M2 as CM and CMATH are later used for distant supervision.
**Model** **Stages** **Data** **CM** **APE** **CMATH** **GSM8K** **MATH** **Average**
Llama3instruct SFT _D0,en_ – – – 75.1 37.2 –
SFT _D0,zh_ 82.5 83.3 85.5 – – –
SFT _D0_ 83.3 83.2 87.2 76.8 41.8 74.4
SFT _D0 + D1_ 87.6 85.0 89.0 76.6 41.8 76.0
SFT → DPO _D0 + D1; D2,WebInstruct_ 87.5 86.1 88.7 80.2 **42.1** 76.9
SFT → DPO _D0 + D1; D2,in-house_ **89.0** **86.8** **90.8** **80.5** 41.9 **77.8**
Table 4: Impacts of different stages and data selection on the development sets of datasets.
3.4 THE COMPARISON OF DIFFERENT CHOICES OF DATA AND ALIGNMENT METHODS
**Diversity: Based on the experimental results, given D0 and D1, we observe that two-stage SFT**
(first on D0 for two epochs and then on D1 for two epochs) under-performs one-stage SFT (over the
concatenation of D0 and D1 for two epochs) (B vs. C in Table 5). However, incorporating D2 using
either strategy achieves similar performance (E vs. F in Table 5). One possible reason may be that
the questions in D1 are from two web-collected value-style benchmarks, resulting in less diversity
compared with D2, which has a broader range of question types (Section 3.1). Ensuring the diversity
of data in each stage may help the model generalize better across various types of math questions,
similar to the observations seen when training general-purpose LLMs (e.g., (Shen et al., 2023)).
**Denoised SFT Data: As mentioned previously, we use the code-based critic model to construct**
SFT data. Since the process will inevitably introduce false positive data, we further consider several
constraints for filtering (Equation 4 in Section 2.4). Experimental results show that we can achieve
similar average accuracy using either D2,SFT,H or the D2,SFT (D vs. E in Table 5). However, D2,SFT,H
is only 30.6% of the latter’s size, indicating the usefulness of the filtering.
-----
**DPO or SFT: Based on a reasonably good model M1 (trained with D0 and D1, such as C in Table 5),**
we can either self-improve it via SFT or DPO as described in Section 2.4. We compare using the
positive (question, code) pairs in the DPO data for another round of SFT, which results in a 1.8% drop
in accuracy on downstream tasks (G vs. I in Table 5). Since we do not impose strict constraints on
the positive data in DPO, D2,DPO, positive is 1.7 times the size of D2,SFT,H. Still, using the filtered SFT
data D2,SFT,H achieves slightly better performance (F vs. G), showing the effectiveness of filtering.
**ID** **Alignment** **Data** **Average Accuracy**
A SFT _D0_ 74.4
B SFT → SFT _D0 ; D1_ 75.4
C SFT _D0 + D1_ 76.0
D SFT _D0 + D1 + D2,SFT_ 76.1
E SFT _D0 + D1 + D2,SFT,H_ 76.1
F SFT → SFT _D0 + D1; D2,SFT,H_ 76.2
G SFT → SFT _D0 + D1; D2,DPO, positive_ 76.0
H SFT → ORPO _D0 + D1; D2,DPO_ 77.0
I SFT → DPO _D0 + D1; D2,DPO_ 77.8
Table 5: The self-improving average accuracy of Llama3instruct on the development sets of different
datasets with various training strategies and data. Refer to Section A.5 for the accuracy on each task.
**λ** **GSM8K ACC** **GSM8K L** **GSM8K L/L0** **CMATH ACC** **CMATH L** **CMATH L/L0**
– (reference model) 76.6 323 1.0 89.0 136 1.0
0.0 73.4 1834 5.7 57.5 3160 23.2
0.5 78.8 532 1.6 90.7 201 1.5
1.0 80.5 352 1.1 90.8 136 1.0
1.5 79.0 328 1.0 90.7 135 1.0
2.0 79.8 326 1.0 90.7 134 1.0
Table 6: The impact of the weight of the SFT loss in DPO training on the average accuracy and average response length in words on GSM8K and CMATH (L0: response length of reference policy).
**DPO with SFT:** Our experiments indicate that DPO training is relatively insensitive to the weight (λ in Equation 5) of the SFT loss. We tested with λ = 1.0 and λ = 2.0, both of which resulted in similarly good performance (77.8%). However, as shown in Table 6, removing the SFT loss (i.e., λ = 0) from DPO training leads to a dramatic increase in response length, especially for Chinese tasks such as CMATH, and yields worse results than the reference policy model (C in Table 5). This observation aligns with discussions on the length exploitation issue of the original DPO loss (Park et al., 2024). One possible reason for the length control achieved by adding the SFT loss could be that the positive
responses used for the SFT loss are generated
by the reference policy model. By setting a larger weight to SFT, we control the deviation from the
reference policy, which alleviates a substantial increase in response length. We also experiment with
using ORPO (Hong et al., 2024), which removes the need for a reference model and jointly trains
with the SFT loss. However, this method is not as effective as jointly training DPO and SFT in our
experiments (H vs. I in Table 5).
**Dataset** **ACC (EM)** **ACC (critic)** **Correlation (Kendall)**
CM 89.0 84.6 0.66
APE 86.8 86.5 0.76
CMATH 90.8 91.8 0.77
GSM8K 80.5 80.6 0.97
MATH 41.9 48.2 0.79
average 77.8 78.3 0.79
Table 7: Correlation of two evaluation methods: heuristics-based EM and the critic model. ACC represents the accuracy of our best-performing M2 on downstream tasks rated by the two methods.
**Other Diverse-Format Resources:** We also experiment with constructing similar-scale preference data using the diverse-format D2 based on WebInstruct (Section 3.1). However, the resulting improvement in average accuracy is less substantial compared to that achieved with the Chinese diverse-format D2 (+0.9% vs. +1.8% on Llama3instruct in Table 4; +0.6% vs. +2.5% on QWEN2Mathinstruct in Table 10). One possible reason for this difference could be that the QA pairs extracted from pre-training corpora, despite being of similar scale, provide weaker
supervision compared to those sourced from educational websites, where answers are typically
written by experts. Although we have filtered out QA pairs where reference answers contain no
numbers, we observe that some questions still do not require any calculations, such as “How is the
_interquartile range (IQR) connected to percentiles?” or related to other subjects such as “What_
_is the most prevalent state of matter in the universe ...?”. As a result, these QA pairs may be less_
-----
effective in improving performance on benchmarks that primarily require numerical computation.
Nevertheless, these results demonstrate the robustness of our paradigm.
3.5 USING THE CRITIC MODEL AS AN EVALUATOR
We have shown the effectiveness of using the critic model to construct SFT and preference data. All
scores are computed by comparing predictions with ground truth answers, using heuristics-based
exact match (EM) following previous studies for fair comparisons. To explore the potential of
using the critic model as a complementary evaluator, we examine the correlation between the two
evaluation methods on the previously used benchmarks. We use the original ground truth answers
(final-step answers if answers are COT-style) (e.g., “3750”, “[12, 18]”, and “\\frac{1}{2}”) in
these benchmarks. Since all scores are either 0 (NO) or 1 (YES), we report the Kendall’s τ between
the two methods. As shown in Table 7, there is a very strong correlation (0.79) (compared to the
very-strong-cutoff value 0.71 and strong-cutoff value 0.49 (Schober et al., 2018)) between the scores
computed by the two evaluators. The strong associations in English tasks are surprising, given that
the critic model is trained on Chinese-only data. This may be due to (i) the backbone model being
a well-instructed model focused on English, and (ii) comparing answers to mathematical questions
relying less on language-specific knowledge.
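For completeness, the correlation reported in Table 7 can be computed from per-instance 0/1 scores with scipy; the scores below are made-up toy values for illustration only.

```python
from scipy.stats import kendalltau

# 0/1 correctness judgements of the same predictions by the two evaluators (toy data)
em_scores     = [1, 0, 1, 1, 0, 1, 0, 1]   # heuristics-based exact match
critic_scores = [1, 0, 1, 1, 1, 1, 0, 1]   # code-based critic model

tau, p_value = kendalltau(em_scores, critic_scores)
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.3f})")
```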
3.6 THE PERFORMANCE OF SELF-IMPROVED LLMS ON MORE OUT-OF-DOMAIN TASKS
Considering the above results in Section 3.5, we are now more confident in
using the critic model to evaluate models’ performance on additional out-of-domain benchmarks, without the need
to write extensive heuristics for different
tasks. Besides CMATH, we evaluate the
out-of-domain performance of our models using MathBench (Liu et al., 2024),
a newly released benchmark supporting
evaluation in both Chinese and English.
The open-ended or multiple-choice questions in MathBench span various educational stages, from primary school to
college levels. We report scores on its
two subsets: MathBench-A, which evaluates practical problem-solving skills, and
MathBench-T, which assesses theoretical understanding.
**model** **Tool** **Subset-A** **Subset-T** **ACCaverage**
GPT-4-0125-Preview _×_ 58.8[†] 78.4[†] 68.6[†]
GLM4 _×_ 51.3[†] 73.1[†] 62.2[†]
Qwen-Chat-72B _×_ 49.7[†] 77.2[†] 63.5[†]
Math-InternLM2-20B _×_ 41.9[†] 64.3[†] 53.1[†]
Llama3instruct-8B _×_ 36.7[†] 52.1[†] 44.4[†]
MathCoder-7B ✓ 32.6[⋆] 27.4[⋆] 30.0[⋆]
MathCoder-34B ✓ 50.1[⋆] 49.3[⋆] 49.7[⋆]
ToRA-7B ✓ 31.0[⋆] 28.4[⋆] 29.7[⋆]
ToRA-70B ✓ 54.3[⋆] 54.4[⋆] 54.3[⋆]
**Language-specific system prompt:**
SIaM(Llama3instruct)0-8B ✓ 62.5[⋆] 57.9[⋆] 60.2[⋆]
SIaM(Llama3instruct)2-8B ✓ 66.7[⋆] 62.6[⋆] 64.6[⋆]
**Chinese-only system prompt:**
SIaM(Llama3instruct)0-8B ✓ 64.0[⋆] 64.4[⋆] 64.2[⋆]
SIaM(Llama3instruct)2-8B ✓ 69.5[⋆] 65.8[⋆] 67.6[⋆]
Table 8: OOD accuracy on MathBench (⋆: scored by the
critic model; †: based on the numbers reported by (Liu
et al., 2024); A: application; T: theoretical).
As shown in Table 8, the self-improved
models demonstrate substantial gains on both subsets, with an accuracy improvement of 4.4%. On
both subsets, the self-improved model consistently outperforms the initial one across all educational
levels and subjects with notable improvements particularly in middle school tasks and English
theoretical tasks. See sub-category performance in Tables 11 and 12. Note that we provide the scores
of other models for reference, as they are judged by a different scorer. We compare our method with
ToRA, one of the SOTA code-aided math LLMs, rated by the same critic model. Though trained
on English-only data, ToRA exhibits a surprising performance on Chinese tasks. Nevertheless, our
8B model also outperforms ToRA 70B on the English subset of MathBench by 14.5% and 9.2%,
respectively, on A and T (Table 13). See more discussions and selection of system prompts in
Section A.3. Compared to practical application questions, it seems that using CoT, LLMs are much
better at handling theoretical knowledge questions. In contrast, solving all questions via coding
shows balanced and reasonable performance. This shows the advantage of using tools to aid in
computation, but also indicates the limitations of relying solely on code to address questions that may
not require actual computation. It remains an open question whether, and how, code can be used to
assist advanced theoretical reasoning–a topic beyond the scope of this paper (Liu et al., 2024).
-----
4 RELATED WORK
For automatic math evaluation on well-formatted benchmarks, previous studies mostly use heuristics
and external tools (e.g., the Python EVAL() function) to compare answers and predictions (Fourrier
et al., 2023; Gao et al., 2023a), which works quite well for single numerical value answers, as seen
in datasets such as GSM8K (Cobbe et al., 2021), ASDiv (Miao et al., 2020), and SVAMP (Patel
et al., 2021). However, since answers from web resources are diverse in format and differ syntactically between natural language and code, carefully designed task-specific heuristics become infeasible for
comparing answers and code execution results. For datasets that are beyond value-style answers
such as MATH (Hendrycks et al., 2021), closed source LLMs are also used for evaluation such as
OpenAI-Evals, which however is not cost-effective for rating large-scale code samples.
Several approaches (Li et al., 2023; Yu et al., 2023b; Lu et al., 2023; Yuan et al., 2024; Hu et al., 2024)
use the LLM itself or a separate critic model (Ouyang et al., 2022; Xu et al., 2024) for scoring or
filtering natural-language responses. We focus on tool-assisted assessment of code responses to math
questions. Compared to the previously mentioned self-improving studies that use a single LLM to
provide feedback on its own generations, we interpret “self” in contrast to distilling knowledge from
larger closed-source or open-source LLMs for continuous improvements.
Though recent studies have shown that in-domain CoT-based or code-based data augmentation and
alignment lead LLMs to achieve strong performance on in-domain math datasets (Luo et al.,
2023; Yu et al., 2023a; An et al., 2023; Li et al., 2024), we leave data augmentation on web data (CoT
or code) for future work. We only use GPT-4 to annotate seed data and the critic data, and using
closed-source LLMs to annotate large-scale web questions is beyond the scope of this paper.
5 CONCLUSIONS AND FUTURE WORK
We introduce a novel paradigm for improving LLMs, which employs a code-based critic model to
guide stages such as the creation and filtering of question-code data as well as complementary evaluation. We also investigate various alignment algorithms using self-generated instruction/preference
data for further improvement. Results show the effectiveness of self-improving LLMs with this
proposed paradigm. Future research includes studying post-training on code-only data to enhance the
computational capabilities of LLMs and improvement of the critic model.
REFERENCES
[AI@Meta. Introducing meta llama 3: The most capable openly available llm to date. https:](https://ai.meta.com/blog/meta-llama-3/)
[//ai.meta.com/blog/meta-llama-3/, 2024.](https://ai.meta.com/blog/meta-llama-3/)
Shengnan An, Zexiong Ma, Zeqi Lin, Nanning Zheng, Jian-Guang Lou, and Weizhu Chen. Learning
from mistakes makes llm better reasoner. arXiv preprint arXiv:2310.20689, 2023.
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge,
Yu Han, Fei Huang, et al. Qwen technical report. arXiv preprint arXiv:2309.16609, 2023.
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna
Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness
from ai feedback. arXiv preprint arXiv:2212.08073, 2022.
Jiaao Chen, Xiaoman Pan, Dian Yu, Kaiqiang Song, Xiaoyang Wang, Dong Yu, and Jianshu Chen.
Skills-in-context prompting: Unlocking compositionality in large language models. arXiv preprint
_arXiv:2308.00304, 2023._
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint
_arXiv:2211.12588, 2022._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve
math word problems. arXiv preprint arXiv:2110.14168, 2021.
-----
Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Wei Zhu, Yuan Ni, Guotong Xie, Zhiyuan Liu,
and Maosong Sun. Ultrafeedback: Boosting language models with high-quality feedback. arXiv
_preprint arXiv:2310.01377, 2023._
Clémentine Fourrier, Nathan Habib, Thomas Wolf, and Lewis Tunstall. Lighteval: A lightweight
framework for llm evaluation, 2023. [URL https://github.com/huggingface/](https://github.com/huggingface/lighteval)
[lighteval.](https://github.com/huggingface/lighteval)
Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster,
Laurence Golding, Jeffrey Hsu, Alain Le Noac’h, Haonan Li, Kyle McDonell, Niklas Muennighoff,
Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika,
Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot
[language model evaluation, 12 2023a. URL https://zenodo.org/records/10256836.](https://zenodo.org/records/10256836)
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and
Graham Neubig. Pal: Program-aided language models. In International Conference on Machine
_Learning, pp. 10764–10799. PMLR, 2023b._
Zhibin Gou, Zhihong Shao, Yeyun Gong, yelong shen, Yujiu Yang, Minlie Huang, Nan Duan,
and Weizhu Chen. ToRA: A tool-integrated reasoning agent for mathematical problem solving.
[In The Twelfth International Conference on Learning Representations, 2024. URL https:](https://openreview.net/forum?id=Ep0TtjVoap)
[//openreview.net/forum?id=Ep0TtjVoap.](https://openreview.net/forum?id=Ep0TtjVoap)
Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao
Bi, Y. Wu, Y.K. Li, Fuli Luo, Yingfei Xiong, and Wenfeng Liang. Deepseek-coder: When the
[large language model meets programming – the rise of code intelligence, 2024. URL https:](https://arxiv.org/abs/2401.14196)
[//arxiv.org/abs/2401.14196.](https://arxiv.org/abs/2401.14196)
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,
and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset, 2021.
Jiwoo Hong, Noah Lee, and James Thorne. Orpo: Monolithic preference optimization without
reference model. arXiv preprint arXiv:2403.07691, 2(4):5, 2024.
Arian Hosseini, Xingdi Yuan, Nikolay Malkin, Aaron Courville, Alessandro Sordoni, and Rishabh
Agarwal. V-star: Training verifiers for self-taught reasoners. arXiv preprint arXiv:2402.06457,
2024.
Chi Hu, Yimin Hu, Hang Cao, Tong Xiao, and Jingbo Zhu. Teaching language models to self-improve
by learning from language feedback. arXiv e-prints, pp. arXiv–2406, 2024.
Chen Li, Weiqi Wang, Jingcheng Hu, Yixuan Wei, Nanning Zheng, Han Hu, Zheng Zhang, and
Houwen Peng. Common 7b language models already possess strong math capabilities. arXiv
_preprint arXiv:2403.04706, 2024._
Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Luke Zettlemoyer, Omer Levy, Jason Weston, and
Mike Lewis. Self-alignment with instruction backtranslation. arXiv preprint arXiv:2308.06259,
2023.
Zhenwen Liang, Dian Yu, Xiaoman Pan, Wenlin Yao, Qingkai Zeng, Xiangliang Zhang, and Dong Yu.
MinT: Boosting generalization in mathematical reasoning via multi-view fine-tuning. In Nicoletta
Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, and Nianwen Xue
(eds.), Proceedings of the 2024 Joint International Conference on Computational Linguistics,
_Language Resources and Evaluation (LREC-COLING 2024), pp. 11307–11318, Torino, Italia, May_
[2024. ELRA and ICCL. URL https://aclanthology.org/2024.lrec-main.988.](https://aclanthology.org/2024.lrec-main.988)
Bingbin Liu, Sebastien Bubeck, Ronen Eldan, Janardhan Kulkarni, Yuanzhi Li, Anh Nguyen, Rachel
Ward, and Yi Zhang. Tinygsm: achieving >80% on gsm8k with small language models. arXiv
_preprint arXiv:2312.09241, 2023._
Hongwei Liu, Zilong Zheng, Yuxuan Qiao, Haodong Duan, Zhiwei Fei, Fengzhe Zhou, Wenwei
Zhang, Songyang Zhang, Dahua Lin, and Kai Chen. Mathbench: Evaluating the theory and
application proficiency of llms with a hierarchical mathematics benchmark, 2024.
-----
Jianqiao Lu, Wanjun Zhong, Wenyong Huang, Yufei Wang, Fei Mi, Baojun Wang, Weichao Wang,
Lifeng Shang, and Qun Liu. Self: Language-driven self-evolution for large language model. arXiv
_preprint arXiv:2310.00533, 2023._
Zimu Lu, Aojun Zhou, Houxing Ren, Ke Wang, Weikang Shi, Junting Pan, Mingjie Zhan, and
Hongsheng Li. Mathgenie: Generating synthetic data with question back-translation for enhancing
mathematical reasoning of llms. arXiv preprint arXiv:2402.16352, 2024.
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng,
Qingwei Lin, Shifeng Chen, and Dongmei Zhang. Wizardmath: Empowering mathematical
reasoning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583,
2023.
Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su. A diverse corpus for evaluating and developing
English math word problem solvers. In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel
Tetreault (eds.), Proceedings of the 58th Annual Meeting of the Association for Computational
_Linguistics, pp. 975–984, Online, July 2020. Association for Computational Linguistics. doi: 10._
[18653/v1/2020.acl-main.92. URL https://aclanthology.org/2020.acl-main.92.](https://aclanthology.org/2020.acl-main.92)
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong
Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow
instructions with human feedback. Advances in neural information processing systems, 35:27730–
27744, 2022.
Ryan Park, Rafael Rafailov, Stefano Ermon, and Chelsea Finn. Disentangling length from quality in
direct preference optimization. arXiv preprint arXiv:2403.19159, 2024.
Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are NLP models really able to solve simple
math word problems? In Proceedings of the 2021 Conference of the North American Chapter of
_the Association for Computational Linguistics: Human Language Technologies, pp. 2080–2094,_
Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.
[168. URL https://aclanthology.org/2021.naacl-main.168.](https://aclanthology.org/2021.naacl-main.168)
Jinghui Qin, Xiaodan Liang, Yining Hong, Jianheng Tang, and Liang Lin. Neural-symbolic solver
for math word problems with auxiliary tasks. In ACL, pp. 5870–5881, 2021.
Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea
Finn. Direct preference optimization: Your language model is secretly a reward model. Advances
_in Neural Information Processing Systems, 36, 2024._
Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi
Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. Code llama: Open foundation models for code.
_arXiv preprint arXiv:2308.12950, 2023._
Patrick Schober, Christa Boer, and Lothar A Schwarte. Correlation coefficients: appropriate use and
interpretation. Anesthesia & analgesia, 126(5):1763–1768, 2018.
Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, YK Li, Y Wu,
and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in open language
models. arXiv preprint arXiv:2402.03300, 2024.
Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Joel Hestness, Natalia Vassilieva, Daria
Soboleva, and Eric Xing. Slimpajama-dc: Understanding data combinations for llm training. arXiv
_preprint arXiv:2309.10818, 2023._
InternLM Team. Internlm: A multilingual language model with progressively enhanced capabilities,
2023.
[Qwen Team. Code with codeqwen1.5, April 2024. URL https://qwenlm.github.io/](https://qwenlm.github.io/blog/codeqwen1.5/)
[blog/codeqwen1.5/.](https://qwenlm.github.io/blog/codeqwen1.5/)
Ke Wang, Houxing Ren, Aojun Zhou, Zimu Lu, Sichun Luo, Weikang Shi, Renrui Zhang, Linqi Song,
Mingjie Zhan, and Hongsheng Li. Mathcoder: Seamless code integration in llms for enhanced
mathematical reasoning. arXiv preprint arXiv:2310.03731, 2023.
-----
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny
Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in
_neural information processing systems, 35:24824–24837, 2022._
Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, and Bin Wang. Cmath: can your language model
pass chinese elementary school math test? arXiv preprint arXiv:2306.16636, 2023.
Martin Weyssow, Aton Kamanda, and Houari Sahraoui. Codeultrafeedback: An llm-as-a-judge
dataset for aligning large language models to coding preferences, 2024.
Yifan Xu, Xiao Liu, Xinghan Liu, Zhenyu Hou, Yueyan Li, Xiaohan Zhang, Zihan Wang, Aohan
Zeng, Zhengxiao Du, Wenyi Zhao, et al. Chatglm-math: Improving math problem-solving in large
language models with a self-critique pipeline. arXiv preprint arXiv:2404.02893, 2024.
An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li,
Chengyuan Li, Dayiheng Liu, Fei Huang, et al. Qwen2 technical report. _arXiv preprint_
_arXiv:2407.10671, 2024._
Liu Yang, Haihua Yang, Wenjun Cheng, Lei Lin, Chenxia Li, Yifu Chen, Lunan Liu, Jianfei Pan,
Tianwen Wei, Biye Li, et al. Skymath: Technical report. arXiv preprint arXiv:2310.16713, 2023.
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo
Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions for
large language models. arXiv preprint arXiv:2309.12284, 2023a.
Xiao Yu, Baolin Peng, Michel Galley, Jianfeng Gao, and Zhou Yu. Teaching language models to
self-improve through interactive demonstrations. arXiv preprint arXiv:2310.13522, 2023b.
Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Sainbayar Sukhbaatar, Jing Xu, and Jason
Weston. Self-rewarding language models. arXiv preprint arXiv:2401.10020, 2024.
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Keming Lu, Chuanqi Tan, Chang Zhou,
and Jingren Zhou. Scaling relationship on learning mathematical reasoning with large language
models. arXiv preprint arXiv:2308.01825, 2023.
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen.
Mammoth: Building math generalist models through hybrid instruction tuning. arXiv preprint
_arXiv:2309.05653, 2023._
Xiang Yue, Tuney Zheng, Ge Zhang, and Wenhu Chen. Mammoth2: Scaling instructions from the
web. arXiv preprint arXiv:2405.03548, 2024.
Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. Star: Bootstrapping reasoning with
reasoning. Advances in Neural Information Processing Systems, 35:15476–15488, 2022.
Wei Zhao, Mingyue Shang, Yang Liu, Liang Wang, and Jingming Liu. Ape210k: A large-scale and
template-rich dataset of math word problems. arXiv preprint arXiv:2009.11506, 2020.
Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, and Zheyan Luo. Llamafactory: Unified
efficient fine-tuning of 100+ language models. arXiv preprint arXiv:2403.13372, 2024.
Aojun Zhou, Ke Wang, Zimu Lu, Weikang Shi, Sichun Luo, Zipeng Qin, Shaoqing Lu, Anya Jia,
Linqi Song, Mingjie Zhan, et al. Solving challenging math word problems using gpt-4 code
interpreter with code-based self-verification. arXiv preprint arXiv:2308.07921, 2023.
A APPENDICES
A.1 BACKBONE COMPARISONS FOR INITIAL MODEL SELECTION
Although QWEN2 also demonstrates strong performance, we use its math-specific variant to ensure
the diversity of selected backbone models. For the same reason, and given the marginal performance difference between Llama3instruct and Llama3base when both are fine-tuned on D0, we only
use Llama3instruct for our experiments.
-----
**Chinese Tasks** **English Tasks** **Average**
**Model** **Size (B)** **Tool** **CM** **APE** **CMATH** **GSM8K** **MATH**
CodeLlama 7 ✓ 77.7 78.0 84.5 69.7 37.6 69.5
QWENcode 7 ✓ 81.9 81.5 86.0 71.9 41.4 72.6
Llama3.1instruct 8 ✓ 82.4 82.1 86.2 76.5 41.1 73.6
Llama3base 8 ✓ 83.9 82.6 86.8 76.8 41.9 74.4
Llama3instruct 8 ✓ 83.3 83.2 87.2 76.8 41.8 74.4
DeepSeekcode 7 ✓ 82.7 81.2 87.0 77.4 44.4 74.5
QWEN2 7 ✓ 83.9 82.8 87.3 77.7 44.4 75.2
QWEN2Mathinstruct 7 ✓ 84.9 83.4 87.3 79.5 48.0 76.6
Table 9: Accuracy across the development sets of math datasets of initial policy models based on
different backbone models.
A.2 IMPACTS OF STAGES AND DATA SELECTION
**Model** **Stages** **Data** **CM** **APE** **CMATH** **GSM8K** **MATH** **Average**
DeepSeekcode SFT _D0,en_ – – – 74.6 43.8 –
SFT _D0,zh_ 81.0 82.4 86.8 – – –
SFT _D0_ 82.7 81.2 87.0 77.4 44.4 74.5
SFT _D0 + D1_ 87.0 84.3 88.0 77.6 44.6 76.3
SFT → DPO _D0 + D1; D2,WebInstruct_ 87.0 84.4 88.2 78.2 44.4 76.5
SFT → DPO _D0 + D1; D2,in-house_ **87.3** **85.9** **91.2** **78.9** **45.9** **77.8**
QWEN2Mathinstruct SFT _D0,en_ – – – 78.5 47.7 –
SFT _D0,zh_ 83.9 83.8 87.0 – – –
SFT _D0_ 84.9 83.4 87.3 79.5 48.0 76.6
SFT _D0 + D1_ 87.8 85.9 88.3 79.2 49.5 78.1
SFT → DPO _D0 + D1; D2,WebInstruct_ 87.8 86.0 88.5 82.4 48.7 78.7
SFT → DPO _D0 + D1; D2,in-house_ **90.1** **88.1** **93.2** **81.5** **50.0** **80.6**
Table 10: Impacts of the stages and data selection on the development sets of datasets. The best
performance for each model family is highlighted in bold.
A.3 SUB-TYPE PERFORMANCE ON MATHBENCH
The data presented in the tables clearly shows the advantage of SIaM(Llama3instruct)2 over
SIaM(Llama3instruct)0 across various educational levels and subjects. For both the MathBench-A
and MathBench-T datasets, SIaM(Llama3instruct)2 consistently outperforms SIaM(Llama3instruct)0.
In the MathBench-A dataset, improvements are seen in all levels from Primary to College, with
notable jumps in Middle and High school levels (6.7% and 7.0% improvement, respectively, in
Table 11). Similarly, the MathBench-T dataset shows improvement across all levels, particularly in
the Middle school and English categories, which demonstrate 8.1% and 10.5% increases, respectively.
These results indicate that SIaM(Llama3instruct)2 provides enhanced accuracy in out-of-distribution
scenarios, making it a more reliable choice for varied educational levels.
In the seed data D0, we use a language-specific system prompt for each English instance: “Please
_write a python code to solve the following questions”. For the Chinese subset of D0 and all instances_
in D1 and D2 — which are exclusively Chinese data — we use a consistent Chinese system prompt
“请用python代码解决以下问题” (“Please write a python code to solve the following questions”).
When evaluating our self-improved model on MathBench, we observe that it performs better when
the Chinese system prompt is applied to solve English questions (Table 12). This may be due to the
fact that our training data primarily consists of Chinese data with Chinese system prompts.
We compare with SOTA code-assisted models trained on augmented MATH and GSM8K datasets
ToRA and MathCoder. Before detailed comparisons, we first review the background of the use of code
for mathematical reasoning. Code can be used either directly (Chen et al., 2022; Gao et al., 2023b)
(code-only) or interactively (Wang et al., 2023) during problem-solving. The latter approaches such as
ToRA and MathCoder jointly solve problems using CoT explanation and code. One advantage of these
interactive methods over code-only methods is that the final step of their solution is usually written
in CoT, allowing the easy use of existing scripts designed for CoT-style benchmarks for evaluation.
-----
However, this does not allow for robust comparison on unseen, diverse-format data. In
addition, the role of using tools multiple times to address a single math problem is unclear based
on the performance difference of interactive methods (Table 3). For example, ToRA needs 1.02
tool interaction rounds per question while MathCoder requires 2.05 for MATH and GSM8K. This
work focuses on the direct usage of code as a case study to avoid multi-step inference and leave the
interactive setting for future studies.
For ToRA 7B[2] and 70B[3] models, we use their official inference scripts.[4] On MathBench, ToRA needs
an average of 1.00 and 1.01 tool interaction rounds per question. It seems its final CoT reasoning
primarily focuses on adjusting answer formatting to fully leverage existing CoT evaluation scripts.
We use ToRA’s generated code and execution result, keeping the rest of the inputs for the critic model
the same. We also experiment with replacing the execution results with the CoT outputs, but this does
not result in significant changes. Our self-improved 8B model outperforms one SOTA code-assisted
model, ToRA-70B, across all subcategories on this OOD dataset (Table 13).
For MathCoder, we evaluate its best-performing 34B model[5] and 7B model[6], which need 1.53 and
2.13 tool interaction rounds per question, respectively. We also use their released inference scripts[7]
and follow the data format.
**MathBench-A** **MathBench-T**
**Level** SIaM(Llama3instruct)0 SIaM(Llama3instruct)2 SIaM(Llama3instruct)0 SIaM(Llama3instruct)2
**Arith** 98.0 **99.0** – –
**Primary** 75.7 **80.7** 66.6 **67.5**
**Middle** 56.3 **63.0** 60.1 **68.2**
**High** 50.3 **57.3** 59.1 **60.6**
**College** 32.0 **33.3** 50.2 **57.9**
**Chinese** 56.8 **63.5** 62.7 **63.6**
**English** 66.2 **68.8** 50.6 **61.1**
Table 11: Fine-grained OOD accuracy on the MathBench dataset scored by the critic model using
language-specific system prompts.
**MathBench-A** **MathBench-T**
**Level** SIaM(Llama3instruct)0 SIaM(Llama3instruct)2 SIaM(Llama3instruct)0 SIaM(Llama3instruct)2
**Arith** 97.3 **98.3** – –
**Primary** 71.0 **79.0** **71.6** 71.3
**Middle** 60.0 **69.0** 70.3 **71.5**
**High** 54.0 **59.7** 61.8 **62.3**
**College** 37.7 **41.3** 59.2 **62.6**
**Chinese** 57.3 **63.7** **63.8** 63.4
**English** 68.4 **73.3** 65.4 **69.5**
Table 12: Fine-grained OOD accuracy on the MathBench dataset scored by the critic model using a
Chinese-only system prompt.
A.4 DATA STATISTICS
A.5 OTHER ALIGNMENT ALGORITHMS
As shown in Table 16, DPO demonstrates superior performance compared to ORPO, both with the
SFT loss. We leave the exploration of more length-regularized alignment algorithms and the role of
the reference policy model in preference optimization to future studies.
2https://huggingface.co/llm-agents/tora-code-7b-v1.0.
3https://huggingface.co/llm-agents/tora-70b-v1.0.
4https://github.com/microsoft/ToRA/tree/main.
5https://huggingface.co/MathLLMs/MathCoder-CL-34B.
6https://huggingface.co/MathLLMs/MathCoder-CL-7B.
7https://github.com/mathllm/MathCoder.
-----
**MathBench-A** **MathBench-T**
**Level** T-7B T-70B M-7B M-34B T-7B T-70B M-7B M-34B
**Arith** 39.3 **82.7** 40.7 66.3 – – – –
**Primary** 40.3 **77.7** 43.3 70.0 30.9 **53.9** 26.2 47.0
**Middle** 24.3 39.7 28.7 **45.3** 31.0 **57.6** 25.0 46.8
**High** 30.0 **39.7** 29.7 39.0 28.0 **51.9** 28.2 47.8
**College** 21.0 **31.7** 20.7 30.0 25.5 **55.1** 29.0 54.3
**Chinese** 28.2 **47.5** 23.0 41.5 26.5 **50.5** 25.4 43.8
**English** 32.9 **58.8** 39.0 55.9 31.2 **60.3** 30.4 57.5
Table 13: Fine-grained OOD accuracy of ToRA (70B and 7B) and MathCoder (34B and 7B) on the
MathBench dataset scored by the critic model (T: ToRA; M: MathCoder).
**Dataset** **Language** **Answer Type** **Level** **Training** **Validation**
APE (Zhao et al., 2020) zh numerical value elementary 200,488 5,000
CM (Qin et al., 2021) zh numerical value(s) grades 6—12 13,628 1,703
CMATH (Wei et al., 2023) zh numerical value elementary – 600
MathBench (Liu et al., 2024) en, zh mixed from primary to college – 3,709
MATH (Hendrycks et al., 2021) en mixed college 7,500 5,000
GSM8K (Cobbe et al., 2021) en numerical value elementary 7,473 1,319
Table 14: Statistics of evaluation benchmarks. Note that in our experiments, we do not use any
rationale in these datasets as we focus on solving problems via coding. We only use the questions and
short-form answers from the training set of MATH and GSM8K for constructing the seed data, and
we use the questions and short-form answer from the training set of APE and CM for constructing
the data for self-improvement.
**Question:** Given: Apples cost 6 yuan for 4 kilograms, and oranges cost 11 yuan for 5 kilograms. Uncle Wang
buys 16 kilograms of apples and 20 kilograms of oranges. How much should he pay in total?
**Answer:** 68
**Rationale[⋆]:** x=6/4*16+11/5*20
Table 15: An example instance of the APE dataset (Zhao et al., 2020) (we translate the question into
English; ⋆: we do not use this rationale in our paradigm).
**Model** **Alignment** **Data** **CM** **APE** **CMATH** **GSM8K** **MATH** **ACCaverage**
DeepSeekcode SFT _D0 + D1_ 87.0 84.3 88.0 77.6 44.6 76.3
SFT → ORPO _D0 + D1; D2_ **87.7** 85.5 **91.2** 76.5 44.5 77.1
SFT → DPO _D0 + D1; D2_ 87.3 **85.9** **91.2** **78.9** **45.9** 77.8
Llama3instruct SFT _D0 + D1_ 87.6 85.0 89.0 76.6 41.8 76.0
SFT → ORPO _D0 + D1; D2_ 88.0 86.4 91.8 76.4 42.1 77.0
SFT → DPO _D0 + D1; D2_ **89.0** **86.8** **90.8** **80.5** **41.9** 77.8
Table 16: The self-improving performance in different stages on the development sets of different
datasets. The best open-sourced performance for each backbone model is highlighted in bold.
$$\mathcal{L}_{\text{ORPO}}(\pi_{\theta_{k+1}}) = -\lambda \cdot \log \sigma\!\left(\log \frac{\pi_{\theta_{k+1}}(y_w \mid x)}{1 - \pi_{\theta_{k+1}}(y_w \mid x)} - \log \frac{\pi_{\theta_{k+1}}(y_l \mid x)}{1 - \pi_{\theta_{k+1}}(y_l \mid x)}\right) - \log \pi_{\theta_{k+1}}(y_w \mid x) \quad (7)$$
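Analogous to the DPO sketch in Section 2.4, a schematic PyTorch version of Equation 7 is shown below; it assumes (length-normalized) per-sequence log-probabilities and is a sketch rather than the ORPO implementation used in the experiments.

```python
import torch
import torch.nn.functional as F

def orpo_loss(chosen_logps, rejected_logps, lam=1.0):
    """Equation 7: odds-ratio preference term plus the SFT term on y_w.
    Inputs are per-sequence (e.g., length-normalized) log-probabilities, shape (batch,)."""
    def log_odds(logp):
        # log(p / (1 - p)) computed from log p
        return logp - torch.log1p(-torch.exp(logp))
    ratio_term = -lam * F.logsigmoid(log_odds(chosen_logps) - log_odds(rejected_logps))
    sft_term = -chosen_logps
    return (ratio_term + sft_term).mean()
```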
A.6 SELF-GENERATED PREFERENCE DATA
Existing preference datasets (Cui et al., 2023; Weyssow et al., 2024) do not include many of these kinds
of inter-disciplinary annotations. We use the current policy to generate preference data. Examples are
provided in Table 17 and Table 18.
-----
**question** The perimeter of an isosceles triangle is 20, and the ratio of the lengths of the
two equal sides is 1:2. What is the length of the base? A.10 B.4 C.4 or 10 D.5 or
8
**reference answer** B
**winning code (w)**
from sympy import symbols, Eq, solve
# Define the unknown
a, b = symbols(’a b’)
# Establish the equation
# The hypotenuse is twice the base
equation1 = Eq(a, 2*b)
# The perimeter is 20
equation2 = Eq(2*a + b, 20)
# Solve the equation
solution = solve((equation1, equation2), (a, b), dict=True)
solution
**code result of w** [a: 8, b: 4]
**losing code (l)**
from sympy import symbols, Eq, solve
# Define the unknown variable
a = symbols(’a’)
# Establish the equation
equation = Eq(2 * a + 2 * (2 * a), 20)
# Solve the equation
solution = solve(equation, dict=True)
solution
**code result of l** [a: 10/3]
Table 17: Examples of self-generated preference pairs based on in-house web QA data (I). Texts have
been translated into English.
LIMITATIONS
**Language Diversity of Resources:** In this paper, we focus on large-scale question-answer pairs
from educational websites for Chinese, and accordingly, our critic model used for guiding self-improvement is trained on Chinese data. While considering resources in other languages such as
English could enhance the generalizability of LLMs, it would require extensive human efforts for
data collection and cleaning, which is beyond the scope of this work. On the other hand, since the
backbone LLMs are pre-trained and aligned on multi-lingual data and our seed data includes English
instruction data, the initial policy already exhibits reasonable performance on in-domain (Section 3.3)
and out-of-domain benchmarks (Section 3.6). Self-improving this initial policy model on Chinese data
may even improve its performance on English tasks. Finally, experiments show that the critic model
is as effective at rating English responses as rating Chinese ones (Section 3.5).
**Copyright of Resources: The large-scale question-answer pairs (excluding APE and CM) are either**
collected or purchased from educational websites. We will not release the full-scale resources. Instead,
we will provide question-answer samples, along with all scripts, seed data in English, initial and
self-improved policy models, self-generated preference data, and the critic model to facilitate future
studies.
**LLM Scalings: Due to limited computational resources, our experiments focus on 7-8B LLMs.**
Generally, improving the math reasoning abilities of relatively small LLMs requires a large amount
of training data and knowledge distillation (Li et al., 2024; Shao et al., 2024), which may not be
necessary for larger LLMs.
-----
**question** Given that a and b are opposites, c and d are reciprocals, and the square of x
equals 4, try to find the value of the expression:
_x[2]_ _−_ (a + b + cd)x + (a + b)[2009] + (−cd)[2008].
**reference answer** 3 or 7
**winning code (w)**
# Based on the analysis, first solve for the two possible values of
x
# Since x squared equals 4, solve for x
x1 = 2
x2 = -2
# Next, calculate the value of the expression based on the value of
x
# After simplification, the expression is x^2 - x + 1
# Calculate the value of the expression for x1 and x2
result1 = x1**2 - x1 + 1
result2 = x2**2 - x2 + 1
# Return results
{’x1 result’: result1, ’x2 result’: result2}
**code result of w** {‘x1’: 3, ‘x2’: 7}
**losing code (l)**
# Based on analysis, we have obtained the simplified form of the
expression
# Define variables
# The value of a + b, since a and b are opposites, take a = 1, then
b = -1
a_b_sum = 1
# The value of cd, since c and d are reciprocals, take c = 1, then
d = 1
cd_product = 1
# The square of x
x_squared = 4
# Calculate the value of the expression
# x^2 - (a+b+cd)x + (a+b)^2009 + (-cd)^2008
result = x_squared - (a_b_sum + cd_product) * 2 + a_b_sum**2009 +
(-cd_product)**2008
# Return result
{"result": result}
**code result of l** {2}
Table 18: Examples of self-generated preference pairs based on in-house web QA data (II). Texts
have been translated into English.
-----
**question** How do you find the indefinite integral of [5][x][3](x[−]−[42]5)[x][2][2]([+73]x[2]+4)[x][−][27] ?
**reference answer**
... (117 words omitted) The answer is 2ln|x-5| + 3/(x-5) + 3/2ln(x
^2+4) + 1/2arctan(x/2) + C
**winning code (w)**
from sympy import symbols, integrate, factor
# Define the variable
x = symbols(’x’)
# Expressions for the numerator and denominator
numerator = 5*x**3 - 42*x**2 + 73*x - 27
denominator = (x - 5)**2 * (x**2 + 4)
# Factor the denominator
factored_denominator = factor(denominator)
# Calculate the definite integral
integral_result = integrate(numerator / factored_denominator, x)
integral_result
**code result of w** 2*log(x - 5) + 3*log(x**2 + 4)/2 + atan(x/2)/2 + 3/(x - 5)
**losing code (l)**
from sympy import symbols, integrate
# Define the variable x
x = symbols(’x’)
# Define the numerator and denominator
numerator = 5*x**3 - 42*x**2 + 73*x - 27
denominator = (x - 5)**2 * (x**2 + 4)
# Calculate the derivatives of the numerator and denominator
numerator_derivative = numerator.diff(x)
denominator_derivative = denominator.diff(x)
# Calculate the derivative of the original function
F_prime = numerator_derivative / denominator_derivative
# Use the Newton-Leibniz formula to compute the original function
F = integrate(F_prime, x)
# Return the result
{"F": F}
**code result of l** {’F’: 14*log(x - 5)/29 + 379*log(x**2 - 5*x/2 + 2)/232 +
291*sqrt(7)*atan(4*sqrt(7)*x/7 - 5*sqrt(7)/7)/812}
Table 19: Examples of self-generated preference pairs based on WebInstruct (I). Texts have been
translated into English.
-----
**question** What is the temperature in Fahrenheit if it is 25°C?
**reference answer**
(table comparing Fahrenheit and Celsius: boiling point 212°F / 100°C, freezing point 32°F / 0°C, difference 180 F° / 100 C°)
So 180 F° = 100 C°, i.e. 1 C° = 180/100 F°.
25°C is 0°C + 25°C = 25 C° above freezing.
25 C° = 25 × 180/100 F° = 45 F°.
45 F° above freezing is 32°F + 45°F = 77°F.
**winning code (w)**
from sympy import symbols, Eq, solve
# Define the unknown variable
f = symbols('f')
# Given temperature in Celsius
c = 25
# Conversion formula
equation = Eq(f, c * 9/5 + 32)
# Solve the equation
solution = solve(equation, f, dict=True)
solution
**code result of w** [{f: 77.0000000000000}]
**losing code (l)**
# Define a function to convert degrees to minutes
def degrees_to_minutes(degrees):
return degrees * 60
# Given degrees
degrees_celsius = 25
# Convert degrees to minutes
minutes = degrees_to_minutes(degrees_celsius)
# Calculate the corresponding temperature (Fahrenheit), since 1 degree = 60 minutes, divide by 60
temperature_fahrenheit = minutes / 60
# Return the result
result = {'temperature_fahrenheit': temperature_fahrenheit}
result
**code result of l** {'temperature_fahrenheit': 25.0}
Table 20: Examples of self-generated preference pairs based on WebInstruct (II). Texts have been
translated into English.
-----
| [
"Dian, Yu",
"Baolin, Peng",
"Linfeng, Song",
"Ye, Tian",
"Haitao, Mi",
"Dong, Yu"
] | 2024-08-28T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2408.15565v1 | https://arxiv.org/abs/2408.15565 | https://www.semanticscholar.org/paper/3cb2497a363dc78bae3804b6654436b074f0b4e5 |
Scheherazade: Evaluating Chain-of-Thought Math Reasoning in LLMs with Chain-of-Problems | Benchmarks are critical for measuring progress of math reasoning abilities of Large Language Models (LLMs). However, existing widely-used benchmarks such as GSM8K have been rendered less useful as multiple cutting-edge LLMs achieve over 94% accuracy. While harder benchmarks have been proposed, their creation is often manual and expensive. We present Scheherazade, an automated approach for producing challenging mathematical reasoning benchmarks by logically chaining mathematical reasoning problems. We propose two different chaining methods, forward chaining and backward chaining, which require reasoning forward and backward through the chain respectively. We apply Scheherazade on GSM8K to create GSM8K-Scheherazade and evaluate 3 frontier LLMs and OpenAI's o1-preview on it. We show that while frontier models' performance declines precipitously at only a few questions chained, a preliminary evaluation suggests o1-preview performance persists up to 5 questions chained backwards. In addition, while all other models perform worse when problems are chained backwards, o1-preview performs better on backward-chained benchmarks. We will release the dataset and code publicly. | null | ## Scheherazade: Evaluating Chain-of-Thought Math Reasoning in LLMs with Chain-of-Problems
**Stephen Miner[1]** **Yoshiki Takashima[1]** **Simeng Han[1]** **Ferhat Erata[1]**
**Timos Antonopoulos[1]** **Ruzica Piskac[1]** **Scott J Shapiro[1]**
1Yale University
**Abstract**
Benchmarks are critical for measuring progress of math reasoning abilities of Large Language
Models (LLMs). However, existing widely-used benchmarks such as GSM8K have been rendered
less useful as multiple cutting-edge LLMs achieve over 94% accuracy. While harder benchmarks
have been proposed, their creation is often manual and expensive. We present Scheherazade, an
automated approach for producing challenging mathematical reasoning benchmarks by logically
chaining mathematical reasoning problems. We propose two different chaining methods, forward
chaining and backward chaining, which require reasoning forward and backward through the chain
respectively. We apply Scheherazade on GSM8K to create GSM8K-Scheherazade and evaluate 3
frontier LLMs and OpenAI’s o1-preview on it. We show that while frontier models’ performance declines precipitously at only a few questions chained, a preliminary evaluation suggests o1-preview’s performance persists up to 5 questions chained backwards. In addition, while all
other models perform worse when problems are chained backwards, o1-preview performs better
on backward-chained benchmarks. We will release the dataset and code publicly.
**Introduction**
_"No problem can be solved from the same level of consciousness that created it." - Albert Einstein_
Benchmarks are the crux of evaluating LLM reasoning capabilities. Ranging from grade-school math
problems to advanced math olympiads and beyond, they enable measurement and apples-to-apples
comparisons of LLMs that are black-box, proprietary, or often both. These benchmarks play a pivotal
role in both development of LLMs and claims made about their capabilities [1–3].
Yet the current benchmark ecosystem is becoming unsustainable as the math reasoning capabilities of
LLMs improve rapidly [4, 1–3]. In addition, existing benchmarks are widely used for training and
finetuning LLMs, leading to serious data contamination issues [5, 6]. GSM8K in particular has been
rendered less useful as multiple advanced LLMs surpass 94% accuracy, and competitive performance
has been achieved on MATH [4, 1–3, 7, 8]. Despite the rapid consumption and depreciation of
benchmarks, novel, high-quality benchmark sets are limited, and generating new data often involves
costly manual labeling. While synthetic benchmark creation methods have been proposed, their
scope is limited. Existing approaches shuffle sentences [9], leverage templates [10, 11] and mutate
constants [12], limiting the complexity and diversity of the generated benchmarks.
We introduce Scheherazade, a technique for logically chaining multiple existing benchmarks together
to create larger benchmark problems that test Chain-of-Thought (CoT) mathematical and logical
reasoning abilities of models. We illustrate our approach with the following simple example: consider
the statement, “If it rains, I will wear a raincoat.” Now, if we modify the statement, for example, to
“If 2+3 = 5 and it rains, I will wear a raincoat,” we, as humans, can immediately see that this statement
is equivalent to the previous one. However, we could easily add more and more such statements to
create a chain of expressions. Such a chain may sound highly artificial to us, but we would still be
able to reason by ignoring all the irrelevant statements. In this paper, we show that such chains are a
great way to test the reasoning capacities of existing LLMs.
38th Conference on Neural Information Processing Systems (NeurIPS 2024).
-----
Our approach chains benchmarks so that necessary information to solve the next question is derived
by solving the previous question in the logical chain. Our tool leverages conditional branches and
randomness to ensure the LLM cannot simply memorize the format. We propose two methods of
syntactically chaining questions, forward chaining and backward chaining. In forward chaining,
problems are connected using implication such that the resulting chained problem can be solved in
the order it is written. In contrast, backward chaining requires that problems earlier in the chain
require information from all problems later in the chain in order to be solvable. While logically
equivalent, backward chaining forces the model under test to look backwards at every question. Both
techniques generalize to chains of any length and ordering of their component problems.
We benchmark the mathematical reasoning abilities of four frontier models, OpenAI o1-preview, GPT-4o,
Meta Llama 3.1 70B, and Anthropic Claude 3.5 Sonnet, using a benchmark set created by applying
_Scheherazade to GSM8K and report our findings in Section 3. Running the models on our benchmark_
shows that, despite high reported scores on original GSM8K problems, the performance declines
rapidly to less than half of the original GSM8K performance when multiple problems are chained
together. No model performs above 50% at 6 questions chained or above 30% at 10 questions chained.
A preliminary evaluation with OpenAI o1-preview shows it outperforms current frontier models at
longer questions. At backward chain length of 5, o1-preview solves 23 out of 25 questions while
no other model performs above 50%.
**2** **Approach**
At the heart of our technique is chaining problems together. We introduce two techniques to create
_n-length chains of GSM8K problems, where n is the number of problems used in the chain. These_
problems are chained together to create a single, composite problem. The problems we create contain
branching paths and use randomness to prevent the LLM from simply being able to memorize which
path through the branches is the correct one. The first technique is forward chaining. In forward
chaining, problems are chained together such that the resulting chain of problems can be solved in
the order it is written. The other technique, backward chaining, requires that at any problem in the
chain, information from a future problem is necessary to solve the current problem. Both of these
techniques can generalize to any chain length, and the problems can be chained in any order.
To explain how we chain problems, we first introduce some notation. For a math reasoning question
$Q$, let $Q_1$ be the first logical premise of $Q$. For example, if $Q$ = "Alice has 3 apples. Bob has 2
apples. How many apples do Alice and Bob have in total?", then $Q_1$ = "Alice has 3 apples." We use
$Q_p$ to denote the remaining premises of the problem, in our example $Q_p$ = "Bob has 2 apples.", but
there could be many sentences in $Q_p$ or possibly none. We let $Q_q$ be the question or statement
asking for the solution to the problem. In our example, $Q_q$ = "How many apples do Alice and Bob
have in total?". $Q_c$ denotes the conclusion of the problem, written in natural language. For example,
$Q_c$ = "Alice and Bob have 5 apples in total." We additionally use $Q'_c$ and $Q'_1$ for each question, a
wrong conclusion and an alternate first premise respectively.
We chain problems together in two ways, forward chaining and backward chaining. At any point in
the chain, to chain two problems together we create a branching "if then else" statement. For forward
chaining, take n = 2 as an example, and let A and B be the two problems. Forward chaining A and
$B$ results in one of the following, selected at random:

$$A_1 + A_p + (A_c \Rightarrow B_1 \land \lnot A_c \Rightarrow B'_1) + B_p + B_q$$

$$A_1 + A_p + (A'_c \Rightarrow B'_1 \land \lnot A'_c \Rightarrow B_1) + B_p + B_q$$

Here, the $+$ symbol is string concatenation, and $A_c \Rightarrow B_1 \land \lnot A_c \Rightarrow B'_1$ denotes "If [$A_c$] is true, then: [$B_1$] is true, otherwise: [$B'_1$] is true." Importantly, for any question $Q$, $Q'_1$ has the property
that $Q'_1 \nRightarrow Q_c$, meaning if the wrong branch is taken, the corresponding premise of that branch will
lead to an incorrect conclusion. Figure 1 shows how forward chaining generalizes, and provides two
example problems.
Backward chaining also branches in a similar way, but unlike forward chaining, backward chaining
requires information from future problems in order to solve the current problem in the chain. For
example, the result of backward chaining problems A and B results in one of the following, selected
at random:
$$(B_c \Rightarrow A_1 \land \lnot B_c \Rightarrow A'_1) + A_p + B_1 + B_p + A_q$$
-----
Figure 1: Forward chaining generalization and example.
$$(B'_c \Rightarrow A'_1 \land \lnot B'_c \Rightarrow A_1) + A_p + B_1 + B_p + A_q$$
Notice that in order to get the first premise of A, problem B must be solved. However, the premises
of problem B do not appear until after problem A. Importantly, notice that in backward chaining
the final question is Aq, meaning all intermediate questions must be solved in order to solve the
final question. Backward chaining also generalizes to any length. For the sake of showing the
generalization simply, if we remove the randomness then backward chaining generalizes as follows:
$$(Q2_c \Rightarrow Q1_1 \land \lnot Q2_c \Rightarrow Q1'_1) + Q1_p + (Q3_c \Rightarrow Q2_1 \land \lnot Q3_c \Rightarrow Q2'_1) + Q2_p + \ldots + Q1_q$$
This generalization shows that as the chain length increases, the reasoning required to solve the
problem becomes increasingly nested. That is, information from Q2 is required to determine the first
premise of Q1, information from Q3 is required to determine the first premise of Q2, and so on. In
our results, we will show that LLMs struggle with this kind of reasoning, performing much better on
forward reasoning than on backward reasoning.
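To make the construction concrete, the sketch below chains two decomposed problems in both directions. This is a minimal illustration with our own field names and a uniform random choice between the two branch variants; it is not the authors' released implementation.

```python
import random
from dataclasses import dataclass

@dataclass
class Problem:
    first_premise: str      # Q_1
    other_premises: str     # Q_p
    question: str           # Q_q
    conclusion: str         # Q_c, the correct conclusion in natural language
    wrong_conclusion: str   # Q'_c
    alt_first_premise: str  # Q'_1, chosen so that it does not imply Q_c

def branch(cond: str, if_true: str, if_false: str) -> str:
    # Renders (cond => if_true /\ ~cond => if_false) as natural language.
    return f'If [{cond}] is true, then: [{if_true}] is true, otherwise: [{if_false}] is true.'

def forward_chain(a: Problem, b: Problem) -> str:
    # A_1 + A_p + (A_c => B_1 /\ ~A_c => B'_1) + B_p + B_q, or the primed variant.
    if random.random() < 0.5:
        link = branch(a.conclusion, b.first_premise, b.alt_first_premise)
    else:
        link = branch(a.wrong_conclusion, b.alt_first_premise, b.first_premise)
    return " ".join([a.first_premise, a.other_premises, link, b.other_premises, b.question])

def backward_chain(a: Problem, b: Problem) -> str:
    # (B_c => A_1 /\ ~B_c => A'_1) + A_p + B_1 + B_p + A_q, or the primed variant.
    if random.random() < 0.5:
        link = branch(b.conclusion, a.first_premise, a.alt_first_premise)
    else:
        link = branch(b.wrong_conclusion, a.alt_first_premise, a.first_premise)
    return " ".join([link, a.other_premises, b.first_premise, b.other_premises, a.question])
```

Longer chains repeat the same linking step, so that (for backward chaining) the first premise of each problem depends on the conclusion of the next one.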
**3** **Evaluation**
Applying Scheherazade to GSM8K, we create GSM8K-Scheherazade: we generate 1,000
examples for each chain length from 2 to 10 with both forward and backward chaining methods, resulting
in a total of 18,000 new examples. We evaluate 4 state-of-the-art LLMs: OpenAI’s GPT-4o (Aug.
6th 2024), o1-preview, Anthropic Claude 3.5 Sonnet, and Meta Llama 3.1 70B. For all models
except o1-preview, we run this evaluation on the entire GSM8K-Scheherazade. Because access to
o1-preview is limited, we run a preliminary evaluation of 25 samples at chain lengths 1 to 7 for
backward chaining only. These lengths are picked because o1-preview’s accuracy declines most
dramatically over these lengths.
Table 1: Raw accuracy numbers up to length 10. o1-preview is run up to length 8. Despite
near-perfect performance by frontier models at length 1 (original GSM8K problems), the performance
rapidly declines.
|Length|1|2|3|4|5|6|7|8|9|10|
|---|---|---|---|---|---|---|---|---|---|---|
|Forward: Claude 3.5|0.986|0.280|0.302|0.274|0.240|0.236|0.197|0.177|0.173|0.156|
|Forward: gpt-4o|0.971|0.438|0.365|0.347|0.333|0.291|0.253|0.214|0.195|0.167|
|Forward: Llama3.1 70B|0.971|0.268|0.187|0.124|0.067|0.044|0.015|0.011|0.007|0.005|
|Backward: Claude 3.5|0.986|0.879|0.599|0.319|0.179|0.102|0.074|0.056|0.045|0.032|
|Backward: gpt-4o|0.971|0.932|0.645|0.393|0.231|0.147|0.097|0.080|0.052|0.001|
|Backward: Llama3.1 70B|0.971|0.477|0.265|0.113|0.064|0.035|0.015|0.014|0.002|0.001|
|Backward: o1-preview|1.000|0.880|0.800|0.800|0.920|0.520|0.600|0.562|-|-|
The results of the evaluation are shown in Fig. 2 and Table 1. For each chain length we give the raw accuracy
numbers between 0 and 1. The top half provides accuracy numbers for forward chaining and lower
half for backward chaining. Fig. 2 shows the performance normalized to accuracy on problems of
-----
[Figure 2 plot: Correctness (self-normalized to L = 1) on the vertical axis vs. Chain Length (L = 2–10) on the horizontal axis, with forward and backward (BW) curves for GPT4o (Aug. 6 2024), Llama 3.1 70B, and Claude 3.5 Sonnet, plus o1 BW (N=25, L<=8 only).]
Figure 2: Accuracy of LLMs declines when the chains become longer. With the exception of o1-preview, LLMs find backward chains harder than forward chains at longer lengths. The *Agent* solves L GSM8K questions independently with the accuracy measured at L = 1. If the accuracy at L = 1 is $a$, then the agent is computed to perform at length L with accuracy $a^L$.
length 1, which are identical to GSM8K problems. The horizontal axis denotes the number of chained
questions, while the vertical axis represents normalized accuracy.
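For reference, the agent baseline from the Figure 2 caption and the self-normalization of the curves can be computed as follows. This is a small sketch of our own; the function names are illustrative, and the example numbers are taken from Table 1.

```python
def agent_accuracy(a: float, L: int) -> float:
    # An agent that answers the L chained questions independently, each with
    # single-question accuracy a, succeeds on the whole chain with probability a**L.
    return a ** L

def self_normalize(acc_at_L: float, acc_at_1: float) -> float:
    # Normalize a model's accuracy at chain length L by its length-1 (GSM8K) accuracy.
    return acc_at_L / acc_at_1

print(agent_accuracy(0.97, 5))        # ~0.86
print(self_normalize(0.231, 0.971))   # gpt-4o backward, L = 5 -> ~0.24
```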
Looking at the raw accuracy presented in Table 1, we see that accuracy quickly declines for every
model except o1-preview at 2 to 3 chains. In general, models other than o1-preview perform
slightly better with backward chaining on shorter questions, but their accuracy on backward-chained
questions declines more quickly than on forward-chained questions, with the latter overtaking the former at
around length 6.
Normalized to the original GSM8K performance in Fig. 2, we see that the accuracy declines rapidly
for every model except o1-preview. With four problems chained, no model other than o1-preview
performs above 60% of its original GSM8K accuracy. Likewise, for every
model other than o1-preview, the accuracy at longer lengths is worse with backward chaining. We
posit the reason all models struggled with backward chaining is that backward chaining requires the
LLMs to reason in reverse of traditional CoT. Manually analyzing the 41 questions o1-preview got
incorrect, it did not answer every question in the chain for 12 of them. This ratio increases to 4 out of
10 for chains of length 7 and up, the longest chain we evaluated o1-preview on.
**4** **Conclusion and Future Work**
The results of running GSM8K-Scheherazade on frontier models suggests several avenues for future
work. First, it would be useful to run Scheherazade with benchmarks other than GSM8K. Other, more
difficult math reasoning benchmarks exist, and Scheherazade may slow their depreciation. Second,
logical operators other than if-then-else should be explored. It is possible to combine problems
with conjunctions or disjunctions in addition to implications. When the benchmarks are numerical,
numerical operators such as taking sums of solutions are also relevant. Third, given that o1-preview
performs better with backward chaining, combining Scheherazade with more fine-grained reorderings
of the questions remains to be explored. While we presented purely backward and purely forward
chaining, hybrid combinations of both forward and backward chaining may allow us to figure out the
scope of o1-preview’s reasoning abilities.
Benchmarks are the foundation upon which the current language model ecosystem stands. Their
rapidly eroding value is a cause for concern. We presented Scheherazade, a technique for generating
new, larger benchmarks by logically chaining existing benchmarks. Using Scheherazade on GSM8K,
we created a benchmark set that defeats existing frontier models and exposes surprising reasoning
behavior in o1-preview, which performs better at backward reasoning than forward.
-----
**References**
[1] OpenAI, J. Achiam, and Others, “Gpt-4 technical report,” 2024.
[2] A. Dubey, A. Jauhri, and Others, “The llama 3 herd of models,” 2024.
[3] G. Team, T. Mesnard, and Others, “Gemma: Open models based on gemini research and
technology,” 2024.
[4] OpenAI, “Learning to reason with llms (openai o1),” Sept. 2024. Accessed: 2024-09-25.
[5] H. Zhang, J. Da, D. Lee, V. Robinson, C. Wu, W. Song, T. Zhao, P. Raja, D. Slack, Q. Lyu,
S. Hendryx, R. Kaplan, M. Lunati, and S. Yue, “A careful examination of large language model
performance on grade school arithmetic,” 2024.
[6] A. Matton, T. Sherborne, D. Aumiller, E. Tommasone, M. Alizadeh, J. He, R. Ma, M. Voisin,
E. Gilsenan-McMahon, and M. Gallé, “On leakage of code generation evaluation datasets,”
2024.
[7] K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek,
J. Hilton, R. Nakano, C. Hesse, and J. Schulman, “Training verifiers to solve math word
problems,” CoRR, vol. abs/2110.14168, 2021.
[8] D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Steinhardt,
“Measuring mathematical problem solving with the MATH dataset,” in Thirty-fifth Conference
_on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021._
[9] X. Chen, R. A. Chi, X. Wang, and D. Zhou, “Premise order matters in reasoning with large
language models,” 2024.
[10] Y. Zhang, Y. Luo, Y. Yuan, and A. C.-C. Yao, “Training language models with syntactic data
generation,” 2024.
[11] Z. Li, B. Jasani, P. Tang, and S. Ghadar, “Synthesize step-by-step: Tools, templates and llms as
data generators for reasoning-based chart vqa,” in 2024 IEEE/CVF Conference on Computer
_Vision and Pattern Recognition (CVPR), (Los Alamitos, CA, USA), pp. 13613–13623, IEEE_
Computer Society, jun 2024.
[12] L. Gao, A. Madaan, S. Zhou, U. Alon, P. Liu, Y. Yang, J. Callan, and G. Neubig, “PAL:
Program-aided language models,” in Proceedings of the 40th International Conference on
_Machine Learning (A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, and J. Scarlett,_
eds.), vol. 202 of Proceedings of Machine Learning Research, pp. 10764–10799, PMLR, 23–29
Jul 2023.
-----
| [
"Stephen, Miner",
"Simeng, Han",
"Yoshiki, Takashima",
"Ferhat, Erata",
"Timos, Antonopoulos",
"Ruzica, Piskac",
"Scott J., Shapiro"
] | 2024-09-30T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2410.00151 | https://arxiv.org/abs/2410.00151 | https://www.semanticscholar.org/paper/a3eacbe7f70240a5c8830ec729ad786d018b922e |
SciDFM: A Large Language Model with Mixture-of-Experts for Science | Recently, there has been a significant upsurge of interest in leveraging large language models (LLMs) to assist scientific discovery. However, most LLMs only focus on general science, while they lack domain-specific knowledge, such as chemical molecules and amino acid sequences. To bridge these gaps, we introduce SciDFM, a mixture-of-experts LLM, which is trained from scratch and is able to conduct college-level scientific reasoning and understand molecules and amino acid sequences. We collect a large-scale training corpus containing numerous scientific papers and books from different disciplines as well as data from domain-specific databases. We further fine-tune the pre-trained model on lots of instruction data to improve performances on downstream benchmarks. From experiment results, we show that SciDFM achieves strong performance on general scientific benchmarks such as SciEval and SciQ, and it reaches a SOTA performance on domain-specific benchmarks among models of similar size. We further analyze the expert layers and show that the results of expert selection vary with data from different disciplines. To benefit the broader research community, we open-source SciDFM at https://huggingface.co/OpenDFM/SciDFM-MoE-A5.6B-v1.0. | SciDFM, a mixture-of-experts LLM, is introduced, which is trained from scratch and is able to conduct college-level scientific reasoning and understand molecules and amino acid sequences and reaches a SOTA performance on domain-specific benchmarks among models of similar size. | ## SciDFM: A Large Language Model with Mixture-of-Experts for Science
**Liangtai Sun[1]** **Danyu Luo[1]** **Da Ma[1]** **Zihan Zhao[1]** **Baocai Chen[1]**
**Zhennan Shen[1]** **Su Zhu[3]** **Lu Chen[1,2]** **Xin Chen[1,2]** **Kai Yu[1,2]**
1X-LANCE Lab, Department of Computer Science and Engineering
MoE Key Lab of Artificial Intelligence, SJTU AI Institute
Shanghai Jiao Tong University, Shanghai, China
2Suzhou Laboratory, Suzhou, China
3AI Speech Co., Ltd., Suzhou, China
{slt19990817, chenlusz, kai.yu}@sjtu.edu.cn
**Abstract**
Recently, there has been a significant upsurge of interest in leveraging large language models (LLMs) to assist scientific discovery. However, most LLMs only
focus on general science, while they lack domain-specific knowledge, such as
chemical molecules and amino acid sequences. To bridge these gaps, we introduce
SciDFM, a mixture-of-experts LLM, which is trained from scratch and is able to
conduct college-level scientific reasoning and understand molecules and amino acid
sequences. We collect a large-scale training corpus containing numerous scientific
papers and books from different disciplines as well as data from domain-specific
databases. We further fine-tune the pre-trained model on lots of instruction data to
improve performances on downstream benchmarks. From experiment results, we
show that SciDFM achieves strong performance on general scientific benchmarks
such as SciEval and SciQ, and it reaches a SOTA performance on domain-specific
benchmarks among models of similar size. We further analyze the expert layers
and show that the results of expert selection vary with data from different disciplines. To benefit the broader research community, we open-source SciDFM at
[https://huggingface.co/OpenDFM/SciDFM-MoE-A5.6B-v1.0.](https://huggingface.co/OpenDFM/SciDFM-MoE-A5.6B-v1.0)
**1** **Introduction**
The advent of Large Language Models (LLMs) [1, 2, 3] has ignited a revolution in the realm of
artificial intelligence and has pushed the field of AI for Science (AI4S) to an unprecedented new height.
LLMs have demonstrated promising performances in assisting and accelerating scientific discovery [4,
5, 6], such as protein design [7], weather forecasting [8], and geoscience [9]. Despite remarkable
achievements in science, LLMs primarily focus on general scientific knowledge represented in text
form, ignoring domain-specific contents such as molecules in chemistry and proteins in biology,
which are fundamental to advances in these fields.
To overcome this limitation and fully exploit the potential of LLMs for scientific discovery, we
introduce SciDFM, a mixture-of-experts Large Language Model trained from scratch with 18.2
billion parameters in total and 5.6 billion parameters activated. SciDFM integrates a mixture-of-experts (MoE) [10, 11, 12] architecture into a transformer-based [13] framework, aiming at enhancing
Preprint. Under review.
-----
its sophisticated scientific reasoning and understanding capabilities and better modeling similarities
and differences across different disciplines and modalities, i.e., text, molecules, and proteins. In this paper,
we detail the pretraining and instruction tuning process of SciDFM, including training corpus and
settings. SciDFM leverages a carefully curated corpus of scientific literature and domain-specific
databases for pretraining to capture vast scientific knowledge, and is also trained on a large general
corpus to retain general knowledge, consuming about 1.1T tokens in total. We meticulously finetune SciDFM using a set of instruction-following data containing about 9.3M samples, including
interpreting molecular structures and amino acid sequences, thereby improving the performance on
downstream benchmarks and bridging the gap in domain-specific knowledge.
To illustrate the prowess of SciDFM, we conduct extensive experiments on several scientific benchmarks. Empirical evaluations affirm the efficacy of SciDFM, demonstrating its superiority on
both general scientific benchmarks like SciEval [14] and SciQ [15], as well as achieving state-of-the-art (SOTA) performance in domain-specific tasks among models of similar size, such as
Mol-Instructions [16]. Our analysis further delves into the results of the expert layer selection,
revealing their adaptability to different scientific disciplines, and demonstrating the effectiveness of
the MoE model in multidisciplinary scenarios. To benefit the broader research community, we will
make SciDFM openly accessible.
**2** **SciDFM**
In this section, we introduce the pretraining and instruction tuning details of SciDFM, including the
training data construction, model architecture and infrastructure.
**2.1** **Pretraining**
**2.1.1** **Model Architecture**
SciDFM is based on a transformer architecture [13], and
follows modifications of Llama [2], i.e. RMSNorm, RoPE
and SwiGLU. SciDFM uses the same hyper-parameters as
OpenLLaMa-3B [17]; the details are shown in Table 1. And
in order to better model knowledge of different disciplines, we
replace the feed-forward block with Mixture-of-Expert (MoE)
layers [10, 11].
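As a minimal sketch of the kind of top-2 MoE block that replaces the dense feed-forward layer (hyper-parameters taken from Table 1; the expert MLP is simplified to a two-layer SiLU network rather than SciDFM's SwiGLU, and the auxiliary load-balancing loss and capacity factor from Section 2.1.3 are omitted):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoEFFN(nn.Module):
    def __init__(self, dim=3200, ffn_hidden_dim=8640, num_experts=8, topk=2):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts, bias=False)   # gate weights W_g
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, ffn_hidden_dim), nn.SiLU(),
                          nn.Linear(ffn_hidden_dim, dim))
            for _ in range(num_experts)
        ])
        self.topk = topk

    def forward(self, x):                        # x: (tokens, dim)
        logits = self.gate(x)                    # (tokens, num_experts)
        weights, idx = torch.topk(logits, self.topk, dim=-1)
        weights = F.softmax(weights, dim=-1)     # renormalize over the top-k experts
        out = torch.zeros_like(x)
        for k in range(self.topk):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e            # tokens routed to expert e at rank k
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out
```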
We also use the tokenizer of OpenLLaMa-3B, which is trained
from scratch using the Byte-Pair Encoding (BPE) method. To
better encode the molecules and amino acid sequences and distinguish them from normal characters for better modeling, we
treat each chemical atom and amino acid character as a single
token and add them into the vocabulary, with special identifiers
wrapped [4]. For example, molecules C(C(=O)O)N will be encoded as C,(,C,(,=,O,),O,),N, and amino acid sequences MIRLGAPQTL will be encoded as M,I,R,L,G,A,P,Q,T,L, where the
special identifiers are omitted.
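A minimal sketch of this atom/residue-level splitting is shown below; the `<mol>`/`<prot>` identifier tokens and the handling of multi-character atoms are our own assumptions, since the exact special identifiers are not specified in the paper.

```python
def tokenize_molecule(smiles: str) -> list[str]:
    # Character-level split of a SMILES string, wrapped in (hypothetical)
    # special identifier tokens so the model can tell molecules from plain text.
    return ["<mol>"] + list(smiles) + ["</mol>"]

def tokenize_protein(sequence: str) -> list[str]:
    # One token per amino acid character, again wrapped in special identifiers.
    return ["<prot>"] + list(sequence) + ["</prot>"]

print(tokenize_molecule("C(C(=O)O)N"))  # ['<mol>', 'C', '(', 'C', '(', '=', 'O', ')', 'O', ')', 'N', '</mol>']
print(tokenize_protein("MIRLGAPQTL"))   # ['<prot>', 'M', 'I', 'R', 'L', 'G', 'A', 'P', 'Q', 'T', 'L', '</prot>']
```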
|Parameter|Value|
|---|---|
|dim|3200|
|n_layers|26|
|head_dim|100|
|ffn_hidden_dim|8640|
|n_heads|32|
|n_kv_heads|32|
|context_len|8192|
|vocab_size|32192|
|num_experts|8|
|topk_experts|2|

Table 1: Hyper-parameters of SciDFM.

**2.1.2** **Data Construction**
To enhance the understanding and reasoning abilities of SciDFM on science domain, we collect a
large-scale training corpus containing a large number of open-access scientific papers and books of
different disciplines. And to acquire domain-specific knowledge, we also include data from some
databases. Furthermore, in order to maintain the generic capabilities of SciDFM, we use data from
some open-source generic corpora. The details of our pretraining corpus are shown in Table 2. Our
pretraining corpus contains about 300B science-domain tokens and 270B general-domain tokens,
with 570B tokens in total. We train SciDFM for two epochs, and for the second epoch, we re-sample
data from the C4, CC, and GitHub subsets of SlimPajama [23, 24]. And for the-stack dataset, we only
use data from programming languages that are relevant to scientific computing, such as Matlab and
Scilab.
-----
|Data Source|Domain|# Tokens (B)|
|---|---|---|
|AMPS [18]|Math|1.07|
|OEIS|Math|0.08|
|Proof-Pile-2 [19]|Math|25.32|
|MathGLM-dataset [20]|Math|9.66|
|MathPile [21]|Math|0.21|
|PubChem Compound|Chemistry|18.34|
|USPTO|Chemistry|0.13|
|Uniprot|Biology|0.52|
|BioRxiv|Biology|2.77|
|MedRxiv|Biology|0.31|
|RefSeq Genome|Biology|1.38|
|Uniref|Biology|16.85|
|GeoNames|Geography|0.82|
|the-stack [22]|Science Code|9.65|
|Scientific Papers|General Science|182.65|
|Scientific Books|General Science|7.00|
|SlimPajama-Arxiv [23]|General Science|29.78|
|LibreText|General Science|0.05|
|Total Science|-|306.59|
|WikiBooks|General|0.14|
|En-Zh-Trans|General|3.10|
|Wikipedia-Zh|General|0.56|
|Baike|General|23.16|
|SlimPajama (w/o Arxiv) [23]|General|241.64|
|Total General|-|268.60|
Table 2: The training corpus details of SciDFM.
**2.1.3** **Training Details**
Following Llama [2], SciDFM is trained from scratch using the AdamW optimizer [43] with β1 =
0.9, β2 = 0.95. We use a cosine learning rate schedule, such that the final learning rate is equal to
10% of the initial learning rate. We use a weight decay of 0.1 and gradient clipping of 1.0, and we set
the macro batch size to 4M tokens and sequence length to 8192 tokens. For MoE layers, we set the
auxiliary loss factor to 0.02 and set the expert capacity factor to 1.0. In total, we train SciDFM for
two epochs, resulting in about 1.1T tokens fed. We use a learning rate of 3e − 4 for the first epoch
and 3e − 5 for the second epoch, while the other settings remain the same as the above. We train
SciDFM on a cluster of 16 nodes, with 8 A800 GPUs on each node for about two months.
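A sketch of this optimization setup in PyTorch terms (assuming a per-step cosine decay to 10% of the peak rate; `total_steps`, the model interface, and the loss function are placeholders, not details from the paper):

```python
import math
import torch

def build_optimizer(model, peak_lr=3e-4, total_steps=100_000):
    # AdamW with beta1=0.9, beta2=0.95 and weight decay 0.1, as described above.
    opt = torch.optim.AdamW(model.parameters(), lr=peak_lr,
                            betas=(0.9, 0.95), weight_decay=0.1)
    # Cosine schedule decaying from peak_lr down to 10% of peak_lr.
    sched = torch.optim.lr_scheduler.LambdaLR(
        opt, lambda step: 0.1 + 0.45 * (1.0 + math.cos(math.pi * min(step, total_steps) / total_steps)))
    return opt, sched

def train_step(model, batch, opt, sched, loss_fn):
    opt.zero_grad()
    loss = loss_fn(model(batch["input_ids"]), batch["labels"])
    loss.backward()
    # Gradient clipping of 1.0, as described above.
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
    opt.step()
    sched.step()
    return loss.item()
```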
**2.2** **Instruction Tuning**
To improve the performance of SciDFM on downstream benchmarks, we collect
instruction-tuning data from a number of open-source datasets. The details are shown in Table 3. We fine-tune the
pre-trained SciDFM for 5 epochs in total. During fine-tuning, we use a learning rate of 2e-5, and we
set the sequence length to 2048 and macro batch size to 32. The other settings are the same as the
pretraining stage.
**3** **Evaluation**
In this section, we show the performance of SciDFM on some general and domain-specific benchmarks, and analyze the results of expert selection in different domains.
-----
|Data|Domain|# Samples|
|---|---|---|
|Arxivphy|Physics|30231|
|MATH [18]|Math|7500|
|MetaMathQA [25]|Math|395000|
|MathInstruct [26]|Math|262040|
|OrcaMath [27]|Math|200035|
|DeepLoc2 [28]|Biology|22642|
|BioASQ [29]|Biology|8021|
|MedMCQA [30]|Biology|182822|
|MedQA [31]|Biology|10178|
|PubMedQA [32]|Biology|500|
|Mol-Instructions [16]|Chemistry & Biology|1863630|
|ChemDFM-sft [5]|Chemistry|1818154|
|SciQ [15]|General Science|11679|
|Camel [33]|General Science|110000|
|SciInstruct [6]|General Science|82640|
|WebInstructSub [34]|General Science|2327291|
|MMLU [35]|General|99842|
|LIMA [36]|General|1000|
|ROPES [37]|General|10924|
|QASC [38]|General|8134|
|OpenBookQA [39]|General|4957|
|Dolly [40]|General|15011|
|SlimOrca-Dedup [41]|General|363491|
|GPT4All [42]|General|806199|
|Other|General|994896|
|Total (after dedup)|-|9325310|
Table 3: Details of instruction tuning dataset.
|Model|SciEval|SciQ|ARC (C/E)|GSM8K|MATH|MedQA|MedMCQA|PubMedQA|Avg.|
|---|---|---|---|---|---|---|---|---|---|
|Llama2-7B|27.06|57.00|36.43/46.59|3.94|3.96|26.32|29.84|66.80|32.95|
|Galactica-6.7B|46.28|74.20|44.28/61.83|2.80|6.32|30.48|36.46|48.80|38.91|
|Llama2-13B|33.88|78.10|56.66/72.35|22.82|3.90|32.68|34.28|77.80|45.45|
|ChatGLM2-6B|54.25|75.80|57.08/73.57|25.09|7.18|27.42|34.21|60.40|45.94|
|Galactica-30B|54.24|83.10|57.85/75.04|13.65|8.66|37.71|48.43|58.80|48.35|
|Llama3-8B|59.70|90.00|71.16/84.05|5.91|7.00|48.78|52.74|26.60|49.59|
|ChatGLM3-6B|51.13|77.60|60.84/75.97|60.27|23.52|24.59|31.39|51.80|50.53|
|SciGLM-6B|61.22|88.70|77.47/86.57|42.23|16.40|42.81|44.94|73.60|59.12|
|ChatGLM3-6B-base|60.34|89.00|78.58/87.37|59.82|22.64|42.73|45.14|74.40|61.96|
|Llama3-8B-Instruct|64.91|91.60|76.45/87.33|76.57|26.26|56.48|59.31|72.00|67.44|
|SciDFM (ours)|62.48|88.00|64.76/81.48|59.14|27.28|44.54|53.10|78.00|61.56|

Table 4: Main results on 8 general scientific language understanding and reasoning tasks. ARC results are shown in the form ARC_C/ARC_E.
**3.1** **Evaluation Setup**
**Evaluation Tasks** A summary of the evaluation tasks is shown in Table 5. These evaluation
datasets cover a wide range of subjects, including math, chemistry, physics, biology, proteins and
molecules.
- SciEval [14] is a comprehensive and multidisciplinary benchmark designed to assess the scientific
research capabilities of Large Language Models.
-----
|Dataset|# Samples|Subject|
|---|---|---|
|SciEval|15,901|♢, △, □, ♡|
|SciQ|1,000|♢, △, □, ♡|
|ARC_C|1,172|♢, △, □, ♡|
|ARC_E|2,376|♢, △, □, ♡|
|GSM8K|1,319|□|
|MATH|5,000|□|
|MedQA|1,273|△|
|MedMCQA|4,183|△|
|PubMedQA|500|△|
|Mol-Instructions|24,324|♢, △|
|MoleculeNet|11,767|♢|
Table 5: Overview of evaluation benchmarks. (♢: Chemistry, △: Biology, □: Math, ♡: Physics)
|Model|Avg.G|Avg.M|Avg.B|
|---|---|---|---|
|LLaMa2-7B|41.77|3.95|40.99|
|Galactica-6.7B|56.65|4.56|38.58|
|LLaMa2-13B|60.25|13.36|48.25|
|ChatGLM2-6B|65.18|16.14|40.68|
|Galactica-30B|67.56|11.16|48.31|
|LLaMa3-8B|76.23|6.46|42.71|
|ChatGLM3-6B|66.39|41.90|35.93|
|SciGLM-6B|78.49|29.32|53.78|
|ChatGLM3-6B-base|78.82|41.23|54.09|
|Llama3-8B-Instruct|80.07|51.42|62.60|
|SciDFM (ours)|74.18|43.21|58.55|
Table 6: Average Results of General Science, Math and Biology tasks on scientific language understanding and reasoning tasks. Avg.G, Avg.M and Avg.B stand for average performance on general
science, math and biology tasks respectively.
- SciQ [15] comprises high-quality, domain-targeted multiple-choice science exam questions, created
through a novel method that combines crowd-sourcing with AI-assisted document and answer
option selection.
- ARC [44] presents a sophisticated question set, designed to advance AI research in complex
question answering within the context of grade-school science, composed of an easy subset and a
challenging subset.
- GSM8K [45] is composed of diverse and linguistically rich grade school math word problems,
designed to benchmark and improve the multi-step mathematical reasoning abilities of LLMs,
revealing their limitations in handling complex reasoning tasks.
- MATH [18] contains challenging competition mathematics problems, each with detailed solutions,
designed to assess and enhance mathematical problem-solving and reasoning capabilities of LLMs,
accompanied by an auxiliary pretraining dataset to bolster fundamental math understanding.
- MedQA [31] is a pioneering multiple-choice dataset for open-domain question answering in the
medical field, encompassing a number of questions sourced from professional medical board
exams.
- MedMCQA [30] is a large-scale medical multiple-choice question-answering dataset, spanning
2,400 healthcare topics and 21 subjects, designed to challenge models with diverse reasoning skills
across various medical domains.
- PubMedQA [32] is a biomedical question-answering dataset based on PubMed abstracts, requiring
quantitative reasoning over research texts.
- Mol-Instructions [16] is a specialized, meticulously curated dataset containing diverse biomolecular
instructions, designed to enhance LLMs’ understanding and prediction capabilities within the
realms of molecular, protein, and broader biomolecular texts.
-----
- MoleculeNet [46] is a comprehensive benchmark dataset for molecular machine learning, featuring
curated public datasets, standardized evaluation metrics, and open-source implementation of
various molecular featurization and learning methods.
In our experiments, we take Mol-Instructions and MoleculeNet as domain-specific tasks, and take the
remaining benchmarks as general scientific language understanding and reasoning tasks.
|Model|bace|bbbp|ClinTox|HIV|Tox21|Avg|
|---|---|---|---|---|---|---|
|LLaMa2-13B-chat|26.0|60.3|45.7|29.0|51.7|42.54|
|GPT4o(0513)|62.5|61.5|51.6|65.9|55.2|59.34|
|Galactica-30B|72.7|59.6|82.2|75.9|68.5|71.78|
|ChemDFM-13B|78.4|66.7|89.9|73.6|79.8|77.68|
|SciDFM(ours)|76.4|64.8|98.5|71.5|75.6|77.36|
Table 7: The Results of molecular property prediction tasks on MoleculeNet in AUC-ROC scores.
**Evaluation Methods** Since SciDFM is an instruction-following model by default, we conduct all
experiments in a zero-shot setting. Most of the models we select for comparison are also able to
follow instructions:
- Galactica [4] is a large language model specifically designed to store, combine, and reason about
vast amounts of scientific knowledge, outperforming existing models on various scientific tasks
and aiming to serve as a new, advanced interface for scientific research. We select Galactica-30B
and Galactica-6.7B for comparison.
- Llama [2] is a series of open-source powerful language models, ranging from 7 billion to 70 billion
parameters, trained on massive public datasets, and outperforms many of the available open-source
models on common benchmarks. We select Llama3-8B, Llama3-8B-Instruct, Llama2-7B and
Llama2-13B for comparison.
- ChatGLM [47, 48] is a series of advanced language models, excel in various metrics and tasks,
rivaling or surpassing counterparts like GPT-4, thanks to their extensive training on multilingual
data, specialized alignment techniques, and the ability to integrate diverse tools dynamically. We
select ChatGLM2-6B, ChatGLM3-6B and ChatGLM3-6B-base for comparison.
- SciGLM [6] is a suite of scientific language models that enhance college-level scientific reasoning
through a self-reflective instruction annotation framework, addressing data scarcity in the science
domain, and improving upon ChatGLM in handling complex scientific and mathematical problems
without compromising language understanding. We select SciGLM-6B for comparison.
- ChemDFM [5] is specifically trained for Chemistry, combining knowledge from chemical literature
and general domains to excel in understanding, reasoning, and applying chemical information,
outperforming generic LLMs and even GPT-4 on chemical tasks. We select ChemDFM-13B for
comparison.
**3.2** **Main Results**
**General Scientific Benchmark** Table 4 presents the evaluation results on eight general scientific
language understanding and reasoning tasks. The results show that SciDFM reaches a better performance on average than Galactica-series models, Llama-series models except Llama3-8B-Instruct
and ChatGLM-series models except ChatGLM3-6B-base. In Table 6, we also present the average
performance on general science, math and biology tasks over the above eight benchmarks, in which
SciEval, SciQ and ARC belong to the general science task, GSM8K and MATH to the math task,
and MedQA, MedMCQA and PubMedQA to the biology task. We find that SciDFM outperforms
all models except Llama3-8B-Instruct in the math and biology domains, while it is weaker on general
science tasks. In conclusion, SciDFM reaches performance similar to top-tier models trained with a similar
amount of compute, while it is weaker than models that are larger and trained on more
data.
**Domain-specific Scientific Benchmark** Table 7 presents the performance of molecular property
prediction tasks on MoleculeNet. From the results shown in AUC-ROC scores, we find that SciDFM
-----
_Description Guided Molecule Design_

|Model|Exact ↑|Levenshtein ↓|RDK FTS ↑|MACCS FTS ↑|Morgan FTS ↑|Validity ↑|
|---|---|---|---|---|---|---|
|ChatGLM3-6B|0.001|199.47|0.103|0.174|0.059|0.236|
|Galactica-6.7B|0.000|44.152|0.134|0.248|0.088|0.992|
|Text+Chem T5|0.097|41.819|0.352|0.474|0.353|0.721|
|Mol-Instructions|0.002|41.367|0.231|0.412|0.147|1.000|
|SciDFM(ours)|0.084|43.414|0.586|0.704|0.443|0.994|

_Reagent Prediction_

|Model|Exact ↑|Levenshtein ↓|RDK FTS ↑|MACCS FTS ↑|Morgan FTS ↑|Validity ↑|
|---|---|---|---|---|---|---|
|ChatGLM3-6B|0.000|154.73|0.044|0.144|0.046|0.275|
|Galactica-6.7B|0.000|35.021|0.156|0.257|0.097|0.946|
|Text+Chem T5|0.239|20.413|0.705|0.789|0.652|0.762|
|Mol-Instructions|0.045|27.262|0.313|0.509|0.262|1.000|
|SciDFM(ours)|0.192|17.527|0.476|0.576|0.452|0.999|

_Retrosynthesis_

|Model|Exact ↑|Levenshtein ↓|RDK FTS ↑|MACCS FTS ↑|Morgan FTS ↑|Validity ↑|
|---|---|---|---|---|---|---|
|ChatGLM3-6B|0.000|59.062|0.636|0.695|0.570|0.492|
|Galactica-6.7B|0.000|30.760|0.036|0.127|0.051|0.995|
|Text+Chem T5|0.000|49.323|0.039|0.186|0.052|0.313|
|Mol-Instructions|0.044|23.167|0.237|0.364|0.213|1.000|
|SciDFM(ours)|0.665|6.45|0.916|0.937|0.888|0.998|
Table 8: The Results of molecular generation tasks on Mol-Instructions. Exact stands for exact
matches, and validity stands for valid molecules. RDK, MACCS and Morgan are three kinds of
molecular fingerprints.
|Model|Protein Function|Functional Description|Catalytic Activity|Domain/Motif|
|---|---|---|---|---|
|GPT4o(0513)|0.06|0.05|0.07|0.06|
|ChatGLM|0.15|0.14|0.13|0.10|
|Mol-Instruction|0.43|0.44|0.52|0.46|
|SciDFM(ours)|0.60|0.72|0.76|0.55|
Table 9: The Results of protein understanding tasks on Mol-Instructions. All tasks are evaluated in
ROUGE-L score.
outperforms most LLMs except ChemDFM-13B. To further evaluate the model's ability on domain-specific
benchmarks, we present the performance of molecule and protein understanding tasks on Mol-Instructions in Table 8 and Table 9. We find that SciDFM can reach a SOTA performance on molecule
and protein understanding tasks.
**3.3** **Expert Choices Analysis**
In this subsection, we conduct analysis on expert choice results on data from different domains.
Formally, we denote the output of the $i$-th attention layer as $h_i \in \mathbb{R}^{l \times d}$, where $l$ is the sequence length and $d$ is the hidden dimension, and we denote the weight of the $i$-th gate network in the corresponding MoE layer as $W_g \in \mathbb{R}^{d \times e}$, where $e$ represents the number of experts. Then we have $g_i = h_i \cdot W_g \in \mathbb{R}^{l \times e}$, representing the probability of each token being assigned to each expert. Suppose the number of hidden layers is $N$; for a given text $T$, we define the expert choice results as:

$$e_i = \mathrm{Softmax}\Big(\sum_{j=1}^{l} g_i[j, :]\Big) \in \mathbb{R}^{e}, \qquad (1)$$

$$E_T = \mathrm{Concat}([e_1, e_2, \ldots, e_N]) \in \mathbb{R}^{Ne}. \qquad (2)$$
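A small NumPy sketch of Eqs. (1)–(2), assuming the per-layer gate logits $g_i$ have already been collected during a forward pass (26 layers and 8 experts follow Table 1; the rest is illustrative):

```python
import numpy as np

def expert_choice_vector(gate_logits):
    # gate_logits: list of N arrays g_i of shape (l, e), one per MoE layer.
    per_layer = []
    for g in gate_logits:
        s = g.sum(axis=0)                  # sum over the l tokens -> shape (e,)
        e_i = np.exp(s - s.max())          # numerically stable softmax, Eq. (1)
        per_layer.append(e_i / e_i.sum())
    return np.concatenate(per_layer)       # E_T in R^{N*e}, Eq. (2)

# Example: 26 layers, a sequence of 128 tokens, 8 experts.
E_T = expert_choice_vector([np.random.randn(128, 8) for _ in range(26)])
print(E_T.shape)   # (208,)
```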
We randomly select 100 research papers each in the fields of math, chemistry, biology, and physics,
and also select 100 chemical molecules and 100 amino acid sequences as analysis data. For each text
-----
Figure 1: Visualization of t-SNE results using data from different domains.
$T$, we calculate $E_T$ using the above formulas. Then, we use the t-SNE [49] algorithm to reduce them
to three dimensions and visualize them.
The visualization result is shown in Figure 1. It can be found that research papers in mathematics,
chemistry, physics, and biology demonstrate a clear clustering pattern, indicative of discipline-specific
language characteristics, while chemical molecules and amino acid sequences do not exhibit such
a clustering phenomenon. In addition, the separation of molecular and protein data from other
categories is stark, likely due to the unique vocabularies inherent to these domains, which are not
shared with the remainder of the dataset.
Furthermore, the spatial proximity observed in the visualization adds another layer of insight: the
clusters of mathematics and physics are in close proximity, as are those of chemistry and biology,
with chemistry also showing affinity towards physics. This aligns well with the interrelationships and
overlapping nature of knowledge between these scientific disciplines, as reflected in their linguistic
characteristics.
**4** **Related Works**
The success of pretraining language models like BERT [50] and GPT [51] makes researchers wonder
whether the language model can bring about improved performance in the field of Science.
**Domain-Specific Language Model for Science** BioGPT [52] is a domain-specific generative
language model pre-trained on large-scale biomedical literature, which outperforms previous models
on six biomedical-related tasks. Based on case studies, the researchers further demonstrated the
advantages of BioGPT in generating fluent descriptions for biomedical terms in biomedical literature.
ProGen2 [53] is a protein language model pre-trained on a corpus of more than one billion protein
sequences including genome, metagenome, and immune library databases. ProGen2 shows optimal
performance in capturing observed evolutionary sequence distributions, generating new protein
sequences, and predicting protein fitness without additional fine-tuning. Med-PaLM [54] is a large
language model (LLM) designed to provide high-quality answers to medical questions, which is
an instruction prompt-tuned version of Flan-PaLM [55] specialized for the medical domain. They
reveal limitations of Flan-PaLM in scientific grounding, harm, and bias through evaluation, while
Med-PaLM significantly reduces the gap (or even compares favorably) to clinicians on several of
these axes, according to both clinicians and lay users. MTL-BERT [56] proposes to use large-scale
pre-training, multi-task learning, and SMILES enumeration to alleviate the data sparsity problem. It
mines the rich contextual information in SMILES strings through self-supervised pre-training, and
then fine-tunes the pre-trained model simultaneously using multiple downstream tasks. At the same
time, it combines SMILES enumeration as a data augmentation strategy to increase data diversity.
Experimental results show that MTL-BERT can achieve optimal performance on molecular datasets.
ChemDFM [5] is the first LLM towards Chemical General Intelligence (CGI), which is trained on
-----
34B tokens from chemical literature, textbooks, and instructions as well as various data from the
general domain. It can store, understand, and reason over chemical knowledge and languages while
still possessing advanced free-form language comprehension capabilities. Extensive quantitative
evaluation shows that ChemDFM can significantly outperform the representative open-sourced LLMs.
**General-domain Language Model for Science** SciBERT [57] is a pre-trained language model
based on the BERT model architecture, which aims to address the lack of high-quality, large-scale labeled scientific data. SciBERT uses a large multi-domain scientific publication corpus for
pre-training to improve the performance of downstream scientific benchmarks and has achieved
state-of-the-art performance on multiple tasks. Galactica [4] is a large language model that can
store, combine and reason about scientific knowledge, which is trained on a large scientific corpus of
papers, reference material, knowledge bases and many other sources. Galactica outperforms previous
models on a range of scientific tasks and sets a new state-of-the-art on downstream tasks such as
PubMedQA and MedMCQA. SciGLM [6] is a suite of scientific language models designed to enhance
college-level scientific reasoning. It utilizes a self-reflective instruction annotation framework to
address data scarcity in the science domain. SciGLM significantly improves upon ChatGLM by
effectively handling complex scientific and mathematical problems, all while maintaining strong
language understanding capabilities.
Compared to prior works, SciDFM either can achieve a better performance, or is more generalized.
With the utilization of Mixture-of-Experts architecture, SciDFM can better model similarities and
differences across different disciplines and modalities and have stronger sophisticated scientific
reasoning and understanding capabilities.
**5** **Conclusion**
In this paper, we introduce SciDFM, a mixture-of-experts LLM able to conduct college-level scientific
reasoning and understand molecules and amino acid sequences. We show the pretraining and
instruction-tuning process of SciDFM in detail, including data, architecture and hyper-parameters.
We conduct evaluation on eight general scientific language understanding and reasoning tasks and
two domain-specific tasks. From the results, we show that SciDFM achieves strong performance
on general scientific benchmarks such as SciEval and SciQ, and it reaches a SOTA performance on
domain-specific benchmarks among models of similar size. We further analyze the expert choices of
MoE layers and show that the results of expert selection vary with data from different disciplines and
exhibit clustering phenomena related to their relationships.
**References**
[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni
Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4
[technical report. arXiv preprint arXiv:2303.08774, 2023.](http://arxiv.org/abs/2303.08774)
[2] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open
[and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.](http://arxiv.org/abs/2302.13971)
[3] Machel Reid, Nikolay Savinov, Denis Teplyashin, Dmitry Lepikhin, Timothy Lillicrap, Jean-Baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, et al.
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv
_[preprint arXiv:2403.05530, 2024.](http://arxiv.org/abs/2403.05530)_
[4] Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis
Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. Galactica: A large language
[model for science. arXiv preprint arXiv:2211.09085, 2022.](http://arxiv.org/abs/2211.09085)
[5] Zihan Zhao, Da Ma, Lu Chen, Liangtai Sun, Zihao Li, Hongshen Xu, Zichen Zhu, Su Zhu,
Shuai Fan, Guodong Shen, et al. Chemdfm: Dialogue foundation model for chemistry. arXiv
_[preprint arXiv:2401.14818, 2024.](http://arxiv.org/abs/2401.14818)_
[6] Dan Zhang, Ziniu Hu, Sining Zhoubian, Zhengxiao Du, Kaiyu Yang, Zihan Wang, Yisong Yue,
Yuxiao Dong, and Jie Tang. Sciglm: Training scientific language models with self-reflective
instruction annotation and tuning, 2024.
-----
[7] Qizhi Pei, Lijun Wu, Kaiyuan Gao, Jinhua Zhu, Yue Wang, Zun Wang, Tao Qin, and Rui Yan.
Leveraging biomolecule and natural language through multi-modal learning: A survey. arXiv
_[preprint arXiv:2403.01528, 2024.](http://arxiv.org/abs/2403.01528)_
[8] Kaifeng Bi, Lingxi Xie, Hengheng Zhang, Xin Chen, Xiaotao Gu, and Qi Tian. Pangu-weather:
A 3d high-resolution model for fast and accurate global weather forecast. arXiv preprint
_[arXiv:2211.02556, 2022.](http://arxiv.org/abs/2211.02556)_
[9] Zhouhan Lin, Cheng Deng, Le Zhou, Tianhang Zhang, Yi Xu, Yutong Xu, Zhongmou He,
Yuanyuan Shi, Beiya Dai, Yunchong Song, et al. Geogalactica: A scientific large language
[model in geoscience. arXiv preprint arXiv:2401.00434, 2023.](http://arxiv.org/abs/2401.00434)
[10] Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang,
Maxim Krikun, Noam Shazeer, and Zhifeng Chen. Gshard: Scaling giant models with condi[tional computation and automatic sharding. arXiv preprint arXiv:2006.16668, 2020.](http://arxiv.org/abs/2006.16668)
[11] Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris
Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand,
[et al. Mixtral of experts. arXiv preprint arXiv:2401.04088, 2024.](http://arxiv.org/abs/2401.04088)
[12] Samyam Rajbhandari, Conglong Li, Zhewei Yao, Minjia Zhang, Reza Yazdani Aminabadi,
Ammar Ahmad Awan, Jeff Rasley, and Yuxiong He. Deepspeed-moe: Advancing mixture-ofexperts inference and training to power next-generation ai scale. In International conference on
_machine learning, pages 18332–18346. PMLR, 2022._
[13] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,
Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information
_processing systems, 30, 2017._
[14] Liangtai Sun, Yang Han, Zihan Zhao, Da Ma, Zhennan Shen, Baocai Chen, Lu Chen, and Kai
Yu. Scieval: A multi-level large language model evaluation benchmark for scientific research. In
_Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 19053–19061,_
2024.
[15] Johannes Welbl, Nelson F Liu, and Matt Gardner. Crowdsourcing multiple choice science
[questions. arXiv preprint arXiv:1707.06209, 2017.](http://arxiv.org/abs/1707.06209)
[16] Yin Fang, Xiaozhuan Liang, Ningyu Zhang, Kangwei Liu, Rui Huang, Zhuo Chen, Xiaohui
Fan, and Huajun Chen. Mol-instructions: A large-scale biomolecular instruction dataset for
large language models. In ICLR. OpenReview.net, 2024.
[17] Xinyang Geng and Hao Liu. Openllama: An open reproduction of llama, May 2023.
[18] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn
Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset.
_[arXiv preprint arXiv:2103.03874, 2021.](http://arxiv.org/abs/2103.03874)_
[19] Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer,
| [
"Liangtai, Sun",
"Danyu, Luo",
"Da, Ma",
"Zihan, Zhao",
"Su, Zhu",
"Baocai, Chen",
"Lu, Chen",
"Kai, Yu",
"Zhennan, Shen",
"Xin, Chen"
] | 2024-09-26T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.18412 | https://arxiv.org/abs/2409.18412 | https://www.semanticscholar.org/paper/8f53bd416d1868015b72ee8275f81cdbb8877fcb |
SciInstruct: a Self-Reflective Instruction Annotated Dataset for Training Scientific Language Models | Large Language Models (LLMs) have shown promise in assisting scientific discovery. However, such applications are currently limited by LLMs' deficiencies in understanding intricate scientific concepts, deriving symbolic equations, and solving advanced numerical calculations. To bridge these gaps, we introduce SciInstruct, a suite of scientific instructions for training scientific language models capable of college-level scientific reasoning. Central to our approach is a novel self-reflective instruction annotation framework to address the data scarcity challenge in the science domain. This framework leverages existing LLMs to generate step-by-step reasoning for unlabelled scientific questions, followed by a process of self-reflective critic-and-revise. Applying this framework, we curated a diverse and high-quality dataset encompassing physics, chemistry, math, and formal proofs. We analyze the curated SciInstruct from multiple interesting perspectives (e.g., domain, scale, source, question type, answer length, etc.). To verify the effectiveness of SciInstruct, we fine-tuned different language models with SciInstruct, i.e., ChatGLM3 (6B and 32B), Llama3-8b-Instruct, and Mistral-7B, enhancing their scientific and mathematical reasoning capabilities, without sacrificing the language understanding capabilities of the base model. We release code and SciInstruct at https://github.com/THUDM/SciGLM. | null | null | null | null | NeurIPS 2024 | true | 0 | 0 | null | https://neurips.cc/virtual/2024/poster/97744 | null | null |
Self-Demos: Eliciting Out-of-Demonstration Generalizability in Large Language Models | Large language models (LLMs) have shown promising abilities of in-context learning (ICL), adapting swiftly to new tasks with only few-shot demonstrations. However, current few-shot methods heavily depend on high-quality, query-specific demos, which are often lacking. When faced with out-of-demonstration (OOD) queries, methods that rely on hand-crafted demos or external retrievers might fail. To bridge the gap between limited demos and OOD queries, we propose Self-Demos, a novel prompting method that elicits the inherent generalizability in LLMs by query-aware demo generation. The generated demos strategically interpolate between existing demos and the given query, transforming the query from OOD to ID. To evaluate the effectiveness of our approach, we manually constructed OOD-Toolset, a dataset in the tool-using scenario with over 300 real-world APIs and 1000 instances, each consisting of three tool-use cases as demos and an OOD query. Thorough experiments on our dataset and two public math benchmarks have shown that our method can outperform state-of-the-art baselines in the OOD setting. Moreover, we conduct a range of analyses to validate Self-Demos’s generalization and provide more insights. | This work proposes Self-Demos, a novel prompting method that elicits the inherent generalizability in LLMs by query-aware demo generation that can outperform state-of-the-art baselines in the OOD setting. | # SELF-DEMOS: Eliciting Out-of-Demonstration Generalizability in Large Language Models
**Wei He[1], Shichun Liu[1], Jun Zhao[1], Yiwen Ding[1], Yi Lu[1],**
**Zhiheng Xi[1], Tao Gui[2][*], Qi Zhang[1][*], Xuanjing Huang[1]**
1 School of Computer Science, Fudan University
2 Institute of Modern Languages and Linguistics, Fudan University
[email protected], {tgui,qz}@fudan.edu.cn
**Abstract**
Large language models (LLMs) have shown promising abilities of in-context learning (ICL), adapting swiftly to new tasks with only few-shot demonstrations. However, current few-shot methods heavily depend on high-quality, query-specific demos, which are often lacking. When faced with out-of-demonstration (OOD[1]) queries, methods that rely on hand-crafted demos or external retrievers might fail. To bridge the gap between limited demos and OOD queries, we propose SELF-DEMOS, a novel prompting method that elicits the inherent generalizability in LLMs by query-aware demo generation. The generated demos strategically interpolate between existing demos and the given query, transforming the query from OOD to ID. To evaluate the effectiveness of our approach, we manually constructed **OOD-Toolset**, a dataset in the tool-using scenario with over 300 real-world APIs and 1000 instances, each consisting of three tool-use cases as demos and an OOD query. Thorough experiments on our dataset and two public math benchmarks have shown that our method can outperform state-of-the-art baselines in the OOD setting. Moreover, we conduct a range of analyses to validate SELF-DEMOS’s generalization and provide more insights.[2]
**1** **Introduction**
Large language models (LLMs) have achieved impressive performance across a wide range of tasks,
ranging from mathematical reasoning to tool using
(Brown et al., 2020a; Kojima et al., 2022; Qin et al.,
2023; Xi et al., 2023). The models learn to perform
unseen downstream tasks simply by conditioning
on a prompt containing input-output pairs (i.e., _few-shot demonstrations_, Brown et al., 2020a). This
* Corresponding authors.
[1] OOD refers to “Out-of-Demonstration” in this paper, not the commonly understood “Out-of-Distribution”. Similarly, ID stands for “In-Demonstration”.
[2] Code & Data: [https://github.com/hewei2001/Self-Demos](https://github.com/hewei2001/Self-Demos)
Figure 1: An example of how query-aware demo generation works. In the tool-using scenario, there is a gap between the user query and the available tool-use cases in the original scope since they require different APIs. This can lead to errors if the LLM is unfamiliar with the ROUTE API. After interpolating new demos between the existing ones and the OOD query, LLMs can perform better in the extended scope.
paradigm, also known as in-context learning (ICL),
has been found to have its effectiveness considerably influenced by the quality and relevance of the demos
provided (Liu et al., 2022; Dong et al., 2023). Thus,
how to provide high-quality demos becomes an essential challenge in LLM applications.
The leading few-shot techniques typically hinge
on hand-crafted task-specific demos or extensive
demo libraries (Wei et al., 2022c; Liu et al., 2022;
Rubin et al., 2022). However, crafting demos for
each unique query is impractical, and the demo
libraries are also unable to cover all the potential
queries. The issue arises when faced with out-of-demonstration (OOD) queries, resulting in poorer
performance due to the gap between existing demos
and new queries.
An alternative strategy is prompting the LLMs to
self-generate relevant demos, thereby guiding themselves toward resolving the query (Kim et al., 2022;
Chen et al., 2023b; Yasunaga et al., 2023). However, these works often overlook a critical point:
instead of blindly recalling relevant demos based
on queries, we can perform interpolation between
existing demos and queries, as depicted in Figure 1.
By strategically interpolating, we can derive more
relevant and accurate demos from existing ones,
which have proven helpful for the final response
(Liu et al., 2022; Halawi et al., 2023). Specifically,
we introduce SELF-DEMOS, a novel prompting
method that may fully elicit the model’s potential
out-of-demonstration generalizability. Unlike previous works, we developed a complete workflow incorporating pre- and post-processing steps around
the demo generation. Before the demos are generated, we first prompt the model to “give a general
_understanding of the user query_”, thereby simplifying the complexity of the analysis in subsequent
steps. Then, we generate query-aware demos and
select the most high-quality ones through Best-of-N
_sampling (Nakano et al., 2021). These selected de-_
mos will be used for the final response along with
the initial available demos.
To evaluate our approach’s efficacy in the OOD
context, we manually construct OOD-Toolset, a
dataset tailored for tool-using scenarios as delineated by Tang et al. (2023). Our dataset includes over 300 real-world APIs and 1000 instances, each consisting of three tool-use cases as
demos and an OOD query. Moreover, we benchmarked our method with two public mathematical
datasets, GSM8K (Cobbe et al., 2021) and MATH
(Hendrycks et al., 2021), to validate its adaptability in different scenarios. The primary experimental findings reveal that SELF-DEMOS outperforms
state-of-the-art baselines in solving OOD queries.
We also conducted ablation studies and other extensive experiments to gain more insights into our
method. Collectively, our analyses show that we
have found a more efficient way to elicit the potential OOD generalizability in LLMs.
Our contributions are summarized as follows:
1. We proposed SELF-DEMOS, a novel prompting method to elicit the out-of-demonstration
(OOD) generalizability in LLMs.
2. We manually constructed OOD-Toolset, a toolusing dataset for better verifying the potential
OOD generalizability in LLMs.
3. We conducted extensive experiments to validate
SELF-DEMOS’s effectiveness and generalization under different settings.
**2** **Related Work**
**2.1** **In-Context Learning**
The rise of LLMs such as ChatGPT (OpenAI, 2022)
and LLaMA (Touvron et al., 2023) has revolutionized the field. With the model size scaling, LLMs
demonstrate remarkable capabilities of ICL (Brown
et al., 2020b; Wei et al., 2022b), which learns to
perform tasks by specific instructions and demonstrations. Additionally, insights from scaling laws
(Wei et al., 2022b) also highlight the LLMs’ potential for out-of-distribution generalization. It refers
to the challenge where model inputs deviate from
their training distribution (Wang et al., 2023a). If
stimulated effectively, this generalization capability can empower LLMs to address queries outside
the training corpus (Collins et al., 2022), enhancing
utility in dynamic and open-ended scenarios.
**2.2** **Optimizing Demonstrations for ICL**
The performance of LLMs may be influenced by
the quantity, relevance, diversity, and truthfulness
of demonstrations (Chen et al., 2023a; Levy et al.,
2023; Min et al., 2022; Halawi et al., 2023). There
are two primary paradigms to optimize demonstrations and steer models towards generalization.
**Demo Retrieval for ICL.** LLMs are sensitive
to the choice of demonstrations. Therefore, researchers have focused on using retrieval modules to find the most representative demos for ICL.
One effective strategy is leveraging existing retrievers based on semantic similarity metrics between
the available demos and queries (Liu et al., 2022;
Agrawal et al., 2023; Gao et al., 2023; Luo et al.,
2023). Another method employs ranking scores
derived from fine-tuned language models (Rubin
et al., 2022; Shi et al., 2022).
**Demo Generation for ICL.** Rather than extracting existing demos, demo generation aims to self-generate exemplars that closely align with the input. Kim et al. (2022) initially employed language
models to produce demos from pre-defined labels.
Subsequent works adopted a two-stage approach
of generating and selecting demos (Li et al., 2022;
Zhang et al., 2023; Shao et al., 2023). In contrast, our work leverages the intrinsic capabilities
of LLMs to identify superior demos via best-of-N
sampling.
Besides, there are approaches akin to ours. Chen
et al. (2023b) adopt multi-steps to construct demonstration pairs, while Yasunaga et al. (2023) prompt
LLMs to recall relevant demos before answering.
However, our method stands out by combining preand post-processing steps around demo generation
to guarantee the high quality of generated demos.
**2.3** **Eliciting LLMs’ Power with Prompts**
Efforts to enhance LLMs include finetuning with
specific instructions (Wei et al., 2022a) and employing prompting strategies like Chain-of-Thought
(CoT, Wei et al., 2022c). Our approach adopts
the prompt-based strategy and draws inspiration
from studies of the “self” series (Madaan et al.,
2023; Wang et al., 2023b; Chen et al., 2023b). The
essence of “self” is to leverage the model’s inherent power, without external modules. Our method
positions the LLM itself as an analyzer, generator,
and selector, aiming to elicit its intrinsic generalizability to resolve OOD queries.
**3** **Methodology**
In this section, we first introduce the construction
process of OOD-Toolset. Next, we provide a detailed description of the SELF-DEMOS method,
which is illustrated in Figure 2.
**3.1** **OOD-Toolset Construction**
Recent works are evaluated on benchmarks such as
BIG-Bench (Srivastava et al., 2022) and GSM8K
(Cobbe et al., 2021). However, since these datasets
may have been inadvertently included in the training data of LLMs, there is a risk of overestimating
their ability to generalize to OOD query (Zhou
et al., 2023). To mitigate this, we chose the toolusing scenarios that are less likely to occur during
model training for assessment. Specifically, we
constructed the dataset following the two steps:
**Data Collection.** Our original data derives from
the tool-use corpus created by ToolAlpaca (Tang
et al., 2023). It was composed of a wide range of
real-world APIs complete with API descriptions,
usage specifications, and multiple simulated tooluse cases. However, despite the dataset’s comprehensiveness, we noted that the initial AI-generated
tool-use cases contain some errors, such as ambiguous queries and incorrect API calls in response.
These minor errors may prevent accurate judgment
in our evaluation. Therefore, we engaged human
annotators to manually refine the corpus, producing
a high-quality version for more reliable assessment.
Additional details and an example of OOD-Toolset
are provided in Appendix B.
**OOD Setting.** We retained the user’s queries and
corresponding API calls from tool-use cases as
input-output pairs for the evaluation. In addition,
we kept the API descriptions and usage specifications from the refined corpus as context for LLMs.
For each test instance, we provided three cases
from the same API as initial available few-shot
demos (also referred to as seed demos, or Dseed).
Notably, in the OOD setting, the sub-APIs in seed
demos differ from those needed in the final query.
Take the MAP tool for example, which contains
three sub-APIs: DISTANCE, ROUTE, and SEARCH
API. For instance, if the DISTANCE and SEARCH
APIs serve as seed demos, the user’s query might
pertain to the ROUTE API. This design tests the
model’s ability to understand and apply tool-using
patterns across different functions, allowing us to
explore the OOD generalizability in LLMs.
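For concreteness, this construction can be sketched as follows. This is a minimal illustration rather than our released preprocessing script, and the layout of a `tool` record (`sub_api`, `query`, `calls` fields) is an assumption made only for exposition.

```python
import random

def make_ood_instance(tool, rng=random.Random(0)):
    """Build one OOD-style test instance from a tool's simulated cases.

    `tool` is assumed to look like:
      {"spec": "...", "cases": [{"sub_api": "ROUTE", "query": ..., "calls": ...}, ...]}
    Seed demos are drawn only from sub-APIs that differ from the one the
    held-out query needs, mirroring the OOD setting described above.
    """
    target = rng.choice(tool["cases"])
    pool = [c for c in tool["cases"] if c["sub_api"] != target["sub_api"]]
    if len(pool) < 3:
        return None  # not enough cases from other sub-APIs to form three seed demos
    seed = rng.sample(pool, k=3)
    return {
        "context": tool["spec"],                         # API descriptions and usage specs
        "seed_demos": [(c["query"], c["calls"]) for c in seed],
        "query": target["query"],
        "gold_calls": target["calls"],                   # used only for evaluation
    }
```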
**3.2** **SELF-DEMOS**
We executed the whole workflow by prompting the
model itself. The prompt template for each step is
illustrated in Appendix C.
**Query Understanding.** The first step involves
comprehensive query understanding. Given the
model M and a query q, we employ a zero-shot
method:
u = M(p1 || q),  (1)
where p1 is the prompt for query understanding,
_|| denotes concatenation, and u is the generated_
understanding. During this pre-processing step, we
aim to reduce the disparity between the initial seed
demonstrations and the ultimate target query. As
shown in Figure 2, when given a query that involves
MAP API, we guide the model to generate an understanding focused on the more specific ROUTE sub-API.
Figure 2: An overview of the proposed SELF-DEMOS prompting method in the tool-using scenario.
Furthermore, this step resembles a chain-of-thought process (Wei et al., 2022c), which may
reduce the cognitive load in subsequent steps. This
is helpful to enhance the relevance and accuracy of
the generated demos.
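A minimal sketch of this step is shown below, assuming an OpenAI-style chat endpoint; the prompt wording is illustrative only (the actual templates are given in Appendix C).

```python
# Minimal sketch of Step #1 (query understanding), u = M(p1 || q).
from openai import OpenAI

client = OpenAI()

def chat(prompt: str, temperature: float = 0.0) -> str:
    """Thin wrapper around a chat LLM; model name is a placeholder."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return resp.choices[0].message.content

P1 = ("Give a general understanding of the user query below: what type of "
      "problem is it, and what would be needed to solve this type of query?\n"
      "Query: {query}")

def understand(query: str) -> str:
    return chat(P1.format(query=query))  # u = M(p1 || q)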
**Query-aware Demo Generation.** Based on the
distilled understanding u and seed demos Dseed,
we generate query-aware demos as:
Dgen = {d1, d2, ..., dN} = M(p2 || q, u, Dseed),  (2)
where p2 is the prompt for demo generation, Dgen
is the set of generated demos, and N is the number
of demos to be generated. The seed demos, while
not directly linked to the specific query, showcase
potential tool-using patterns of MAP API, offering
guidance for the generation. We call the model N
times to generate N demos separately, alleviating
the difficulty of a single try and avoiding the model
falling into consecutive errors in one response. In
this phase, we extend the original scope of the
demos to a broader boundary.
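The generation step of Eq. (2) can be sketched as below, assuming a `chat(prompt, temperature)` helper like the one in the previous sketch; the prompt text is again illustrative rather than our exact template.

```python
from typing import Callable, List, Tuple

Demo = Tuple[str, str]  # (question, answer) pair

P2 = ("You are given an understanding of a user query and a few seed demos of "
      "the same tool. Write ONE new demo (a question and its answer) that is "
      "closer to the user query than the seed demos.\n"
      "Query: {query}\nUnderstanding: {u}\nSeed demos:\n{seed}\nNew demo:")

def generate_demos(chat: Callable[[str, float], str], query: str, u: str,
                   seed_demos: List[Demo], n: int = 5) -> List[str]:
    """Dgen = {d1, ..., dN} = M(p2 || q, u, Dseed); one sampled call per demo."""
    seed_text = "\n".join(f"Q: {q_}\nA: {a_}" for q_, a_ in seed_demos)
    prompt = P2.format(query=query, u=u, seed=seed_text)
    # temperature 0.7 introduces diversity across the N samples (see Sec. 4.1)
    return [chat(prompt, 0.7) for _ in range(n)]
```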
**Best-of-N Sampling.** It has been argued that
LLMs are unlikely to self-critique their outputs
without an external validator (Stechly et al., 2023;
Valmeekam et al., 2023). Consequently, we assume
that while models might not calibrate and refine
outputs, they could still discern the superior output
from a variety. Therefore, we employ a Best-of-N
sampling strategy, where the model is prompted
to select the best K demos from the N generated
demos based on special criteria:
_DtopK = M(p3 || Dgen, C, K),_ (3)
where p3 is the prompt for sampling, DtopK is the
subset of K demos sampled from the generated
ones, conditioned on criteria C.
This process is inspired by preference learning,
where multiple samples are generated and the one
with the highest reward model score is chosen
(Nakano et al., 2021). It is worth noting that our criteria, which include the demos’ accuracy, relevance,
and potential helpfulness for the final response, are
given to the model via prompts. Our sampling criteria are more nuanced and do not rely on an external
retriever. This is where SELF-DEMOS differs from
methods such as Synthetic Prompting (Shao et al.,
2023), which also selects demos after generation.
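A sketch of this selection step follows, under the same assumptions as the previous sketches (a generic `chat` helper and illustrative prompt wording).

```python
from typing import Callable, List

P3 = ("Below are {n} candidate demos for solving the user query. Using the "
      "criteria of accuracy, relevance to the query, and potential helpfulness "
      "for the final response, return the indices of the best {k} demos, "
      "comma-separated.\nQuery: {query}\nCandidates:\n{cands}\nBest indices:")

def select_topk(chat: Callable[[str, float], str], query: str,
                candidates: List[str], k: int = 2) -> List[str]:
    """DtopK = M(p3 || Dgen, C, K): the model itself acts as the selector."""
    cands = "\n".join(f"[{i}] {c}" for i, c in enumerate(candidates))
    raw = chat(P3.format(n=len(candidates), k=k, query=query, cands=cands), 0.0)
    picked = [int(tok) for tok in raw.replace(",", " ").split() if tok.isdigit()]
    picked = [i for i in picked if 0 <= i < len(candidates)][:k]
    # fall back to the first k candidates if the selection cannot be parsed
    return [candidates[i] for i in picked] or candidates[:k]
```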
**Response Generation.** Finally, we leverage the
sampled demos DtopK and the initial seed demos
_Dseed to generate the final response:_
r = M(p4 || Dseed ∪ DtopK, q),  (4)
where p4 is the prompt for response generation, ∪
denotes the concatenation of two sets and the r is
the final response. The concatenation ensures that
the model benefits from the query-specific demos
in DtopK, while also incorporating the beneficial
diversity and quality of Dseed (Levy et al., 2023;
Halawi et al., 2023).
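Putting Eqs. (1)-(4) together, the whole workflow can be sketched as a single function; the step helpers mirror the previous sketches and are injected as arguments so this block stays independent of them.

```python
from typing import Callable, List, Tuple

Demo = Tuple[str, str]  # (question, answer)

P4 = ("Solve the final query, following the format of the demos.\n"
      "Demos:\n{demos}\nQuery: {query}\nAnswer:")

def respond(chat: Callable[[str, float], str], query: str,
            seed_demos: List[Demo], selected_demos: List[str]) -> str:
    """r = M(p4 || Dseed ∪ DtopK, q): answer with seed and selected demos together."""
    seed_text = "\n".join(f"Q: {q_}\nA: {a_}" for q_, a_ in seed_demos)
    demos = seed_text + "\n" + "\n".join(selected_demos)
    return chat(P4.format(demos=demos, query=query), 0.0)

def self_demos(chat, understand, generate_demos, select_topk,
               query: str, seed_demos: List[Demo], n: int = 5, k: int = 2) -> str:
    """End-to-end composition of Eqs. (1)-(4) under the N=5, K=2 setting."""
    u = understand(query)                                     # Eq. (1)
    d_gen = generate_demos(chat, query, u, seed_demos, n=n)   # Eq. (2)
    d_topk = select_topk(chat, query, d_gen, k=k)             # Eq. (3)
    return respond(chat, query, seed_demos, d_topk)           # Eq. (4)
```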
**4** **Experiments**
To evaluate the effectiveness of SELF-DEMOS, we
conduct extensive experiments for comparison and
analysis.
**4.1** **Experimental Setups**
**Foundation Models.** We use GPT-3.5 (the
gpt-3.5-turbo-0613 version) for most of our experiments, with only one additional experiment
using the Llama-2-Chat model family, to validate
the generalization of SELF-DEMOS across different
model sizes. For all LLMs, we set the parameter
_temperature = 0 for stable responses except for_
the sampling step, where we set temperature =
0.7 to introduce diversity.
**Tasks & Datasets.** We evaluate the proposed
method in two reasoning-intensive tasks: toolusing and mathematical problem-solving.
In the tool task, we developed the OOD-Toolset
for evaluation. Details of the construction process
are described in section 3.1. In the math task, we
employed two public datasets: GSM8K (Cobbe
et al., 2021), featuring elementary math word problems, and MATH (Hendrycks et al., 2021), containing complex problems from high school competitions. We evaluate the entire GSM8K testing set
and a randomly selected subset from the MATH
testing set. Distinct OOD settings are designed for
math tasks. For GSM8K, we manually created several outlier samples, ensuring that the testing set
did not contain problems with similar contexts. For
MATH, since the problems were categorized into
seven subjects and five difficulty levels, we used
problems from different subjects but the same level
to meet the OOD condition. The dataset statistics
are presented in Table 1.
**Evaluation Metric.** In the report for the math
tasks, we present the exact match accuracy for each
problem. For the tool task, which may require multiple API calls in one case, we assess accuracy using both exact and partial matches. Partial matches
are awarded half the score if the model’s response
includes only part of the required API calls.
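One plausible implementation of this scoring rule is sketched below; it treats the gold and predicted API calls as sets, which is an assumption about the exact matching procedure.

```python
def tool_accuracy(predicted_calls: set, gold_calls: set) -> float:
    """1.0 for an exact match of the required API calls, 0.5 when only part of
    them is recovered, 0 otherwise (one reading of the metric described above)."""
    if predicted_calls == gold_calls:
        return 1.0
    if predicted_calls & gold_calls:
        return 0.5
    return 0.0

# Example: a query needing two ROUTE calls, of which only one was produced -> 0.5
print(tool_accuracy({'ROUTE(start="Big Ben", target="Tower Bridge")'},
                    {'ROUTE(start="Big Ben", target="Tower Bridge")',
                     'ROUTE(start="Tower Bridge", target="London Eye")'}))
```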
**4.2** **Baselines**
We compare SELF-DEMOS with the following
baselines, including two methods that are designed
for demo generation:
**Zero-shot and Zero-shot + CoT (Brown et al.,**
**2020a; Kojima et al., 2022).** Prompt the model
with the task description, test input, and no demonstration. Besides, the CoT method integrates a
trigger prompt “let’s think step by step”.
**Few-shot (Wei et al., 2022c).** Employ a fixed set
of seed demos we constructed for each OOD query.
For the GSM8K and MATH datasets, which include solutions with labeled reasoning steps, the demos also feature CoT steps to enhance the model’s
problem-solving capabilities.
**Self-ICL (Chen et al., 2023b).** A multi-step
framework for zero-shot in-context learning by
prompting the LLM itself to generate pseudoinputs and labels. Unlike our method, they generate
inputs and labels separately and then merge them
into demos, with no other pre- and post-processing
steps. We have also adapted it into a few-shot variant to make it comparable.
**Analogical Prompting (Yasunaga et al., 2023).**
A single-step prompting method that guides LLM
to recall relevant demos and knowledge before solving a given problem. Here we let it generate demos
for the vanilla version and our few-shot variant.
The vanilla Self-ICL and Analogical Prompting
methods initially generate three demos each. However, in the few-shot variant, we adjust this to two
demos to better align with our approach.
**4.3** **Main Results**
Table 2 shows the performance of each method on three datasets.
| Dataset Name | Size | Demo Source | Avg. #tokens of Query | Avg. #tokens of Demo | Avg. #tokens of Context (Few-shot) |
|---|---|---|---|---|---|
| **OOD-Toolset** | 1,057 | Same tool, different sub-APIs | 35.5 | 53.8 | 496.0 |
| **GSM8K** | 1,319 | Manually created outliers | 59.0 | 136.8 | 526.1 |
| **MATH** | 1,000 | Same level, different subjects | 69.1 | 291.9 | 1002.1 |

Table 1: Statistics of three datasets in the OOD setting.
| Prompting Method | OOD-Toolset Exact Acc | OOD-Toolset Part Acc | GSM8K Acc | MATH Acc | Average |
|---|---|---|---|---|---|
| Zero-shot | 64.5 | 68.4 | 75.0* | 33.0* | 60.2 |
| Zero-shot + CoT | 66.1 | 70.9 | 75.8* | 33.9* | 61.7 |
| Few-shot | 71.9 | 76.6 | 76.2 | 35.1 | 65.0 |
| Self-ICL (Zero-shot) | 67.0 | 71.1 | 76.6 | 34.6 | 62.3 |
| Self-ICL (Few-shot) | 71.5 | 76.0 | 78.0 | **37.9** | 65.9 |
| Analogical Prompting (Zero-shot) | 67.8 | 72.0 | 77.8* | 37.3* | 63.7 |
| Analogical Prompting (Few-shot) | 71.1 | 75.4 | 75.7 | 36.3 | 64.6 |
| SELF-DEMOS (ours) | **75.1** | **79.4** | **78.2** | **37.9** | **67.7** |

Table 2: Main results of different prompting methods on three datasets. All the results are with GPT-3.5-Turbo. The best performance for each task is in bold. The (*) indicates that results are from Yasunaga et al. (2023).
We can find that: (1) The better performance of few-shot over zero-shot (+ CoT)
shows the LLM’s capacity to discern and apply underlying patterns from seed demos to OOD queries,
indicating a degree of inherent generalizability. Furthermore, the OOD-Toolset measures this ability
more accurately than the two public math datasets,
validating the necessity of creating unseen scenarios and OOD structures of instances. (2) Only a
few-shot method does not fully unlock the model’s
capability. In contrast, the methods with demo generation, especially SELF-DEMOS, present superior
performance, underscoring their potential to serve
as a reliable prompting strategy in OOD scenarios. (3) Self-ICL, which generates Q&A separately,
serves a similar purpose to our Best-of-N Sampling
step by enhancing the accuracy of generated demos.
Thus, it yields performance that is closest to our
method. However, this framework may also lead to
mismatches of Q&A pairs, i.e., the model fails to
answer the questions it generates, which may affect
subsequent responses. (4) Seed demos bring little
benefit to the Analogical Prompting method and
may even be harmful. This could be because the
additional demos are irrelevant to the instructions
of analogical reasoning, which require the model to
do multiple tasks. The seed demos fail to guide the
model in different tasks and may distract the model
from the whole process. Overall, SELF-DEMOS
outperforms all baselines in solving OOD queries.
| Pre- & Post-processing Method | OOD-Toolset |
|---|---|
| w/o Pre-processing | 72.9 / 77.5 |
| + Directly Answering | 72.3 / 77.0 |
| + Query Understanding | **75.1 / 79.4** |
| w/o Post-processing | 74.1 / 78.7 |
| + Self-Critique | 74.3 / 78.8 |
| + Best-of-N Sampling | **75.1 / 79.4** |
| + Best-of-N Sampling & Self-Critique | 74.6 / 79.0 |

Table 3: Ablation study of pre- & post-processing methods on OOD-Toolset. The upper rows show the impact of different pre-processing steps, with the other steps remaining consistent with the original. The following rows show the impact of post-processing steps, again keeping all other steps consistent with the original.
**4.4** **Ablation Study**
Table 3 presents the results of our ablation study.
We compare a range of pre- and post-processing
methods and their influence.
**Pre-processing Methods.** We compared the following settings: no pre-processing before generating demos, directly answering the query before generating demos, and query understanding before generating demos.
The result shows that either no pre-processing or
directly answering will compromise performance.
Notably, the absence of pre-processing tends to
yield homogeneous outputs despite our introduction of randomness, potentially due to the model’s
challenge in reconciling the demanded relevance
and diversity. Direct answer generation also diminishes performance, as initial errors propagate,
leading to more erroneous or ambiguous answers in
subsequent steps. Hence, a robust pre-processing
strategy enhances model performance by ensuring
diverse and correct initial responses.
**Post-processing Methods.** We compared the following settings: no post-processing after generating two demos, self-critique after generating two,
sampling the best two demos after generating five,
and self-critique after sampling. In the self-critique
step, we prompt the model to verify and refine the
_Dgen or DTopK according to the same criteria C._
However, the result indicates that LLMs are no
better at verifying their own outputs, echoing the
findings of Stechly et al. (2023). This also discourages us from constantly improving the quality of
demos through iterative verification.
**5** **Discussion**
**5.1** **Consistency when Model Scaling**
Figure 3 presents the results on varying sizes of
the foundation model, ranging from Llama-2-7B-Chat to Llama-2-70B-Chat. According to the results, analogical reasoning did not work on smaller
models, likely due to their limited capacity to follow hard instructions. The Self-ICL method encountered similar issues, with the small models’
inability to provide accurate demos compromising
their effectiveness. In contrast, our method, which
incorporates extra processing steps around demo
generation and lowers the task difficulty, proved
more adaptable even when the model is weaker
(∼10B parameters). It suggests that our approach
is highly adaptable and can be more effective for
resource-limited or mobile scenarios.
**5.2** **Effectiveness Toward Complex Tasks**
In the main results, we can observe that both Self-ICL and SELF-DEMOS have shown a considerable improvement on the most challenging MATH dataset. This may suggest that the methods of
generating demos in advance are more effective for
complex tasks, as we will detail here.
Table 4 presents the full results on the MATH
dataset across different complexity levels. Analogical Prompting, as a single-step prompting method,
is most effective for simple problems, showing an
entirely different trend from the other methods.
This aligns with our previous analysis that high
model ability is required for analogical reasoning.

| Level | FS | Self-ICL + FS | Analog + FS | SELF-DEMOS |
|---|---|---|---|---|
| 1 | 70.2 | 71.3 (↑ 1.6) | **80.9 (↑ 15.2)** | 74.5 (↑ 6.1) |
| 2 | 58.9 | 61.9 (↑ 5.1) | **63.1 (↑ 7.1)** | 58.3 (↓ 1.0) |
| 3 | 37.4 | 38.7 (↑ 3.5) | **39.9 (↑ 6.7)** | 39.1 (↑ 4.5) |
| 4 | 28.0 | **34.7 (↑ 23.9)** | 24.0 (↓ 14.3) | **34.7 (↑ 23.9)** |
| 5 | 12.4 | 13.8 (↑ 11.3) | 11.6 (↓ 6.4) | **14.6 (↑ 17.7)** |

Table 4: Evaluating prompting methods on the MATH dataset at different complexity levels. The Level corresponds to problem complexity, with higher values indicating greater difficulty. The percentage of performance improvements / declines compared to the few-shot method (FS) is denoted by (↑) / (↓).
In contrast, Self-ICL and our method significantly
gain in more complex problems. With its greater
focus on the relevance and correctness of demos,
SELF-DEMOS outperforms others in solving the
most difficult level 5 problems.
**5.3** **Comparing with Demo Retrieval**
A key motivation for our idea is to provide relevant
demos for problem-solving, without using an external retriever or demo library. So, is our approach
comparable enough to retrieval-based solutions?
To answer this question, we created two baselines
that retrieve exemplars relevant to the given query
from external data (i.e., the training set of GSM8K
and MATH, which includes labeled Q&A pairs).
Table 5 shows the results of these methods on two
math datasets.
Undoubtedly, the retrieval-based methods perform well, with the dense retriever achieving the
highest scores due to its effective representation
of latent semantics (Karpukhin et al., 2020). Besides, SELF-DEMOS also shows competitive performance, especially on the MATH dataset. This
could be due to more complex questions in the
MATH dataset, resulting in intricate semantic connections that cannot be easily captured by a statistical algorithm like BM25 (Robertson et al., 2009).
In contrast, the GSM8K dataset has more uniform
and centrally distributed questions, making it more
suitable for retrieval-based approaches.
Overall, SELF-DEMOS can still be a good option when resources are limited and retrieval is
less feasible. Moreover, it’s worth noting that the
techniques of demo generation and retrieval are
not mutually exclusive. Our method is particularly
well-suited for a “cold start” and once a certain
amount of demos is accumulated, we can then employ a complementary retrieval strategy to improve efficiency and reduce incremental costs.
Figure 3: Performance comparison on the Llama-2-Chat model family. SELF-DEMOS consistently improves performance across multiple model sizes from 7B, 13B to 70B parameters.
| Demonstrating Method | GSM8K | MATH |
|---|---|---|
| Demo Retrieval (Sparse) | 79.5 | 37.0 |
| Demo Retrieval (Dense) | **79.7** | **38.1** |
| Demo Generation (SELF-DEMOS) | 78.2 | 37.9 |

Table 5: Comparison with demo retrieval methods on the GSM8K and MATH datasets. (Sparse) means sparse retrieval using the BM25 algorithm, and (Dense) means dense retrieval using the text-embedding-ada-002 API to generate sentence embeddings and apply cosine similarity. Both baselines retrieve the Top 5 similar samples from the training set as demonstrations.
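The two retrieval baselines in Table 5 can be approximated as in the sketch below; it assumes a list of training questions and, for the dense variant, precomputed sentence embeddings, and it is not the exact evaluation code.

```python
import numpy as np
from rank_bm25 import BM25Okapi

def sparse_top5(train_questions, query):
    """BM25 retrieval of the 5 most similar training questions."""
    bm25 = BM25Okapi([q.lower().split() for q in train_questions])
    return bm25.get_top_n(query.lower().split(), train_questions, n=5)

def dense_top5(train_embeddings: np.ndarray, query_embedding: np.ndarray, train_questions):
    """Cosine-similarity retrieval over precomputed sentence embeddings
    (e.g., from the text-embedding-ada-002 API, as in Table 5)."""
    a = train_embeddings / np.linalg.norm(train_embeddings, axis=1, keepdims=True)
    b = query_embedding / np.linalg.norm(query_embedding)
    idx = np.argsort(-(a @ b))[:5]
    return [train_questions[i] for i in idx]
```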
Figure 4: (a) Comparison of SELF-DEMOS with varying numbers of self-generated demonstrations (N) and selected training exemplars (K). (b) Error distribution of different methods. Demos yielding incorrect answers can be categorized into three types based on relevance and accuracy. Both results are on the OOD-Toolset.
**5.4** **Number of Demonstrations Matters**

We examine the impact of varying the number of self-generated demos (N) and selected demos (K) in the tool-using task. The details are shown in Figure 4a. Notably, the model performs better when selecting two demos. We suspect that a single demo is insufficient to grasp all usage patterns of an API, while additional samples (K = 3) may introduce noise and instability and hinder model learning. Our configuration (K = 2, N = 5) not only maximizes accuracy but also ensures efficiency in computational costs. In our experiments, we further observed a tendency for the model to preferentially select demos positioned towards the front, indicating the phenomenon of position bias (Ko et al., 2020; Nori et al., 2023).

**5.5** **Error Analysis**

Furthermore, we manually analyze the errors of SELF-DEMOS, comparing with the two baselines of demo generation in Figure 4b. Errors were categorized into three distinct types: (1) **Irrelevant demos**: these exemplars are generated in a similar distribution to the seed demos and fail to interpolate between seed demos and given queries. (2) **Relevant but incorrect demos**: this category includes syntactical errors and redundant or inaccurate parameters; these issues contribute to false information propagation and interfere with the final output. (3) **Relevant and correct demos**: even with correct demonstrations, errors can occur due to the model's inherent limitations and the generalization gap. Based on Figure 4b, all three methods have similar results in Category 3, with approximately 140 errors each. However, SELF-DEMOS stands out by greatly lowering the errors in the first two categories. This suggests that SELF-DEMOS is better at generating relevant exemplars, which improves generalization across novel and unseen tasks.

**5.6** **Computational Overhead Analysis**

Our method, based on a multi-step framework, naturally leads to additional computational overhead. In Table 6, we detail this overhead for each method
and present another computationally demanding baseline, Self-Consistency, which samples various reasoning paths and generates a consistent answer using a majority vote strategy (Wang et al., 2023b). Complete calculation specifics can be found in Appendix D. Statistically, the standard SELF-DEMOS incurs a higher overhead compared to other approaches, primarily due to the demo generation phase, which involves repeating the input N times to generate N demos. This leads to numerous redundant computations (i.e., KV vectors), a drawback that can be alleviated through caching and reuse (Pope et al., 2022). It can be achieved by specifying the parameter n = N upon API invocation[4]. The trick cuts overhead by approximately 41%, reaching computational efficiency on par with Self-ICL and Self-Consistency (5 Paths). However, despite Self-ICL's step 2 necessitating multiple calls to the model, its distinct query for each input prevents KV cache reuse (Chen et al., 2023b).

| Prompting Method | Cost | OOD-Toolset |
|---|---|---|
| Few-shot | 0.54 | 71.9 / 76.6 |
| Few-shot + SC (5 Paths) | 2.71 | 72.5 / 77.2 |
| Few-shot + SC (10 Paths) | 5.41 | 72.2 / 77.0 |
| Self-ICL (Few-shot) | 2.37 | 71.5 / 76.0 |
| Analogical Prompting (Few-shot) | 1.21 | 71.1 / 75.4 |
| Self-Demos (Standard) | 4.81 | **75.1 / 79.4** |
| Self-Demos (KV Cache Reuse) | 2.84 | **75.1 / 79.4** |

Table 6: Comparison of computational costs on **OOD-Toolset**. The cost is calculated according to the OpenAI price list[3], measured in dollars per thousand uses. The methods with similar costs are underlined.

Moreover, SELF-DEMOS offers substantial **long-term cost efficiency**. When demos are limited, the use of our method does result in a higher computational overhead initially. But over time, the high-quality demos that we generate can be preserved, and when a certain amount of them is accumulated, we can apply complementary demo selection methods to reduce the incremental cost and flatten the cost curve. Refer to Appendix A for details.

[3] [API Pricing - OpenAI API](https://openai.com/pricing)
[4] [API Reference - OpenAI API](https://platform.openai.com/docs/api-reference/chat/create#chat-create-n)
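The KV-cache-reuse trick mentioned above amounts to requesting all N candidates in a single call by setting n = N; a minimal sketch with the OpenAI client is shown below (model name and prompt are placeholders).

```python
from openai import OpenAI

client = OpenAI()

def sample_candidates(prompt: str, n: int = 5) -> list[str]:
    """Draw N candidate demos from one request (n=N) instead of N separate
    calls, so the shared prompt prefix is processed only once."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        n=n,              # N completions for a single input
        temperature=0.7,  # keep sampling diversity across the candidates
    )
    return [choice.message.content for choice in resp.choices]
```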
**6** **Conclusion**

This paper focuses on addressing the challenge of out-of-demonstration (OOD) queries in the few-shot learning scenario. We present a novel prompting method, SELF-DEMOS, which elicits the OOD generalizability in LLMs by generating query-aware demos. Our method strategically interpolates between existing demonstrations and the OOD queries, effectively transforming them into in-demonstration (ID) queries. In an OOD setting, SELF-DEMOS achieved state-of-the-art results on the proposed OOD-Toolset and two public mathematical benchmarks. For future work, we aim to explore the scalability of the SELF-DEMOS method across diverse domains and to integrate unsupervised learning techniques to further refine the quality of generated demos.

**Limitations**

We summarize the limitations of our method as follows: (1) SELF-DEMOS is designed to resolve out-of-demonstration queries, which can steadily improve downstream task performance, but the process involves additional costs. In Section 5.6, we explore the computational overhead, allowing users to make informed trade-offs depending on their specific task scenarios. (2) Our method necessitates certain capabilities of the model. Although we have done empirical experiments and demonstrated that our approach works for weaker models compared to other baselines, it still requires the models to have a certain degree of instruction-following ability.

**Ethics Statement**

In this paper, we have followed ethical standards and principles to ensure the accuracy and validity of our research. The dataset was manually cleansed to ensure the removal of any sensitive or personal information. The human-annotated data is collected and used in compliance with relevant ethical guidelines. During the data construction process, we followed ToolAlpaca's terms under the Apache License 2.0 (Tang et al., 2023).

**Acknowledgment**

The authors wish to thank the anonymous reviewers for their helpful comments. This work was partially funded by the National Natural Science Foundation of China (No. 62206057, 61976056, 62076069), Shanghai Rising-Star Program (23QA1400200), Natural Science Foundation of Shanghai (23ZR1403500), Program of Shanghai Academic Research Leader under grant 22XD1401100, the CCF-Baidu Open Fund, and the CCF-Baichuan Fund.
**References**
Sweta Agrawal, Chunting Zhou, Mike Lewis, Luke
[Zettlemoyer, and Marjan Ghazvininejad. 2023. In-](https://doi.org/10.18653/V1/2023.FINDINGS-ACL.564)
[context examples selection for machine translation.](https://doi.org/10.18653/V1/2023.FINDINGS-ACL.564)
In Findings of the Association for Computational
_Linguistics: ACL 2023, Toronto, Canada, July 9-14,_
_2023, pages 8857–8873. Association for Computa-_
tional Linguistics.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss,
Gretchen Krueger, Tom Henighan, Rewon Child,
Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens
Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack
Clark, Christopher Berner, Sam McCandlish, Alec
Radford, Ilya Sutskever, and Dario Amodei. 2020a.
[Language models are few-shot learners.](https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf) In Ad_vances in Neural Information Processing Systems,_
volume 33, pages 1877–1901. Curran Associates,
Inc.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss,
Gretchen Krueger, Tom Henighan, Rewon Child,
Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu,
Clemens Winter, Christopher Hesse, Mark Chen, Eric
Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess,
Jack Clark, Christopher Berner, Sam McCandlish,
Alec Radford, Ilya Sutskever, and Dario Amodei.
[2020b. Language models are few-shot learners. In](https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html)
_Advances in Neural Information Processing Systems_
_33: Annual Conference on Neural Information Pro-_
_cessing Systems 2020, NeurIPS 2020, December 6-_
_12, 2020, virtual._
Jiuhai Chen, Lichang Chen, Chen Zhu, and Tianyi Zhou.
[2023a. How many demonstrations do you need for](http://arxiv.org/abs/2303.08119)
[in-context learning?](http://arxiv.org/abs/2303.08119)
Wei-Lin Chen, Cheng-Kuang Wu, and Hsin-Hsi
[Chen. 2023b. Self-icl: Zero-shot in-context learn-](https://doi.org/10.48550/ARXIV.2305.15035)
[ing with self-generated demonstrations.](https://doi.org/10.48550/ARXIV.2305.15035) _CoRR,_
abs/2305.15035.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
[2021. Training verifiers to solve math word prob-](http://arxiv.org/abs/2110.14168)
[lems. CoRR, abs/2110.14168.](http://arxiv.org/abs/2110.14168)
Katherine M. Collins, Catherine Wong, Jiahai Feng,
[Megan Wei, and Joshua B. Tenenbaum. 2022. Struc-](https://doi.org/10.48550/ARXIV.2205.05718)
[tured, flexible, and robust: benchmarking and im-](https://doi.org/10.48550/ARXIV.2205.05718)
[proving large language models towards more human-](https://doi.org/10.48550/ARXIV.2205.05718)
[like behavior in out-of-distribution reasoning tasks.](https://doi.org/10.48550/ARXIV.2205.05718)
_CoRR, abs/2205.05718._
Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong
Wu, Baobao Chang, Xu Sun, Jingjing Xu, Lei Li, and
[Zhifang Sui. 2023. A survey on in-context learning.](http://arxiv.org/abs/2301.00234)
Lingyu Gao, Aditi Chaudhary, Krishna Srinivasan,
Kazuma Hashimoto, Karthik Raman, and Michael
Bendersky. 2023. [Ambiguity-aware in-context](https://doi.org/10.48550/ARXIV.2309.07900)
[learning with large language models.](https://doi.org/10.48550/ARXIV.2309.07900) _CoRR,_
abs/2309.07900.
Danny Halawi, Jean-Stanislas Denain, and Jacob Stein[hardt. 2023. Overthinking the truth: Understanding](https://doi.org/10.48550/ARXIV.2307.09476)
[how language models process false demonstrations.](https://doi.org/10.48550/ARXIV.2307.09476)
_CoRR, abs/2307.09476._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and Ja[cob Steinhardt. 2021. Measuring mathematical prob-](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/be83ab3ecd0db773eb2dc1b0a17836a1-Abstract-round2.html)
[lem solving with the MATH dataset. In Proceedings](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/be83ab3ecd0db773eb2dc1b0a17836a1-Abstract-round2.html)
_of the Neural Information Processing Systems Track_
_on Datasets and Benchmarks 1, NeurIPS Datasets_
_and Benchmarks 2021, December 2021, virtual._
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick
S. H. Lewis, Ledell Wu, Sergey Edunov, Danqi Chen,
[and Wen-tau Yih. 2020. Dense passage retrieval for](https://doi.org/10.18653/V1/2020.EMNLP-MAIN.550)
[open-domain question answering. In Proceedings of](https://doi.org/10.18653/V1/2020.EMNLP-MAIN.550)
_the 2020 Conference on Empirical Methods in Nat-_
_ural Language Processing, EMNLP 2020, Online,_
_November 16-20, 2020, pages 6769–6781. Associa-_
tion for Computational Linguistics.
Hyuhng Joon Kim, Hyunsoo Cho, Junyeob Kim, Taeuk
Kim, Kang Min Yoo, and Sang-goo Lee. 2022.
[Self-generated in-context learning: Leveraging auto-](https://doi.org/10.48550/ARXIV.2206.08082)
[regressive language models as a demonstration gen-](https://doi.org/10.48550/ARXIV.2206.08082)
[erator. CoRR, abs/2206.08082.](https://doi.org/10.48550/ARXIV.2206.08082)
Miyoung Ko, Jinhyuk Lee, Hyunjae Kim, Gangwoo
Kim, and Jaewoo Kang. 2020. [Look at the first](https://doi.org/10.18653/V1/2020.EMNLP-MAIN.84)
[sentence: Position bias in question answering. In](https://doi.org/10.18653/V1/2020.EMNLP-MAIN.84)
_Proceedings of the 2020 Conference on Empirical_
_Methods in Natural Language Processing, EMNLP_
_2020, Online, November 16-20, 2020, pages 1109–_
1121. Association for Computational Linguistics.
Takeshi Kojima, Shixiang (Shane) Gu, Machel Reid, Yu[taka Matsuo, and Yusuke Iwasawa. 2022. Large lan-](https://proceedings.neurips.cc/paper_files/paper/2022/file/8bb0d291acd4acf06ef112099c16f326-Paper-Conference.pdf)
[guage models are zero-shot reasoners. In Advances in](https://proceedings.neurips.cc/paper_files/paper/2022/file/8bb0d291acd4acf06ef112099c16f326-Paper-Conference.pdf)
_Neural Information Processing Systems, volume 35,_
pages 22199–22213. Curran Associates, Inc.
[Itay Levy, Ben Bogin, and Jonathan Berant. 2023. Di-](https://doi.org/10.18653/V1/2023.ACL-LONG.78)
[verse demonstrations improve in-context composi-](https://doi.org/10.18653/V1/2023.ACL-LONG.78)
[tional generalization. In Proceedings of the 61st An-](https://doi.org/10.18653/V1/2023.ACL-LONG.78)
_nual Meeting of the Association for Computational_
_Linguistics (Volume 1: Long Papers), ACL 2023,_
_Toronto, Canada, July 9-14, 2023, pages 1401–1422._
Association for Computational Linguistics.
Junlong Li, Zhuosheng Zhang, and Hai Zhao. 2022.
[Self-prompting large language models for open-](https://doi.org/10.48550/ARXIV.2212.08635)
[domain QA. CoRR, abs/2212.08635.](https://doi.org/10.48550/ARXIV.2212.08635)
Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan,
Lawrence Carin, and Weizhu Chen. 2022. [What](https://doi.org/10.18653/V1/2022.DEELIO-1.10)
[makes good in-context examples for gpt-3? In Pro-](https://doi.org/10.18653/V1/2022.DEELIO-1.10)
_ceedings of Deep Learning Inside Out: The 3rd Work-_
_shop on Knowledge Extraction and Integration for_
_Deep Learning Architectures, DeeLIO@ACL 2022,_
_Dublin, Ireland and Online, May 27, 2022, pages_
100–114. Association for Computational Linguistics.
Man Luo, Xin Xu, Zhuyun Dai, Panupong Pasupat, Seyed Mehran Kazemi, Chitta Baral, Vaiva
Imbrasaite, and Vincent Y. Zhao. 2023. [Dr.icl:](https://doi.org/10.48550/ARXIV.2305.14128)
[Demonstration-retrieved in-context learning. CoRR,](https://doi.org/10.48550/ARXIV.2305.14128)
abs/2305.14128.
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler
Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon,
Nouha Dziri, Shrimai Prabhumoye, Yiming Yang,
Sean Welleck, Bodhisattwa Prasad Majumder,
Shashank Gupta, Amir Yazdanbakhsh, and Peter
[Clark. 2023. Self-refine: Iterative refinement with](https://doi.org/10.48550/ARXIV.2303.17651)
[self-feedback. CoRR, abs/2303.17651.](https://doi.org/10.48550/ARXIV.2303.17651)
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe,
Mike Lewis, Hannaneh Hajishirzi, and Luke Zettle[moyer. 2022. Rethinking the role of demonstrations:](https://doi.org/10.18653/V1/2022.EMNLP-MAIN.759)
[What makes in-context learning work? In Proceed-](https://doi.org/10.18653/V1/2022.EMNLP-MAIN.759)
_ings of the 2022 Conference on Empirical Methods_
_in Natural Language Processing, EMNLP 2022, Abu_
_Dhabi, United Arab Emirates, December 7-11, 2022,_
pages 11048–11064. Association for Computational
Linguistics.
Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu,
Long Ouyang, Christina Kim, Christopher Hesse,
Shantanu Jain, Vineet Kosaraju, William Saunders,
Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen
Krueger, Kevin Button, Matthew Knight, Benjamin
[Chess, and John Schulman. 2021. Webgpt: Browser-](http://arxiv.org/abs/2112.09332)
[assisted question-answering with human feedback.](http://arxiv.org/abs/2112.09332)
_CoRR, abs/2112.09332._
Harsha Nori, Yin Tat Lee, Sheng Zhang, Dean Carignan,
Richard Edgar, Nicolò Fusi, Nicholas King, Jonathan
Larson, Yuanzhi Li, Weishung Liu, Renqian Luo,
Scott Mayer McKinney, Robert Osazuwa Ness, Hoifung Poon, Tao Qin, Naoto Usuyama, Chris White,
[and Eric Horvitz. 2023. Can generalist foundation](https://doi.org/10.48550/ARXIV.2311.16452)
[models outcompete special-purpose tuning? case](https://doi.org/10.48550/ARXIV.2311.16452)
[study in medicine. CoRR, abs/2311.16452.](https://doi.org/10.48550/ARXIV.2311.16452)
OpenAI. 2022. Openai: Introducing chatgpt. Website.
[https://openai.com/blog/chatgpt.](https://openai.com/blog/chatgpt)
Reiner Pope, Sholto Douglas, Aakanksha Chowdhery,
Jacob Devlin, James Bradbury, Anselm Levskaya,
Jonathan Heek, Kefan Xiao, Shivani Agrawal, and
[Jeff Dean. 2022. Efficiently scaling transformer in-](https://doi.org/10.48550/ARXIV.2211.05102)
[ference. CoRR, abs/2211.05102.](https://doi.org/10.48550/ARXIV.2211.05102)
Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen,
Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang,
Chaojun Xiao, Chi Han, Yi Ren Fung, Yusheng Su,
Huadong Wang, Cheng Qian, Runchu Tian, Kunlun
Zhu, Shihao Liang, Xingyu Shen, Bokai Xu, Zhen
Zhang, Yining Ye, Bowen Li, Ziwei Tang, Jing Yi,
Yuzhang Zhu, Zhenning Dai, Lan Yan, Xin Cong,
Yaxi Lu, Weilin Zhao, Yuxiang Huang, Junxi Yan,
Xu Han, Xian Sun, Dahai Li, Jason Phang, Cheng
Yang, Tongshuang Wu, Heng Ji, Zhiyuan Liu, and
[Maosong Sun. 2023. Tool learning with foundation](http://arxiv.org/abs/2304.08354)
[models.](http://arxiv.org/abs/2304.08354)
Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: BM25 and beyond. _Foundations and Trends® in Information Retrieval_, 3(4):333–389.
Ohad Rubin, Jonathan Herzig, and Jonathan Berant.
[2022. Learning to retrieve prompts for in-context](https://doi.org/10.18653/V1/2022.NAACL-MAIN.191)
[learning. In Proceedings of the 2022 Conference of](https://doi.org/10.18653/V1/2022.NAACL-MAIN.191)
_the North American Chapter of the Association for_
_Computational Linguistics: Human Language Tech-_
_nologies, NAACL 2022, Seattle, WA, United States,_
_July 10-15, 2022, pages 2655–2671. Association for_
Computational Linguistics.
Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, and Weizhu Chen. 2023.
[Synthetic prompting: Generating chain-of-thought](https://proceedings.mlr.press/v202/shao23a.html)
[demonstrations for large language models. In Pro-](https://proceedings.mlr.press/v202/shao23a.html)
_ceedings of the 40th International Conference on_
_Machine Learning, volume 202 of Proceedings of_
_Machine Learning Research, pages 30706–30775._
PMLR.
Peng Shi, Rui Zhang, He Bai, and Jimmy Lin. 2022.
[XRICL: cross-lingual retrieval-augmented in-context](https://doi.org/10.18653/V1/2022.FINDINGS-EMNLP.384)
[learning for cross-lingual text-to-sql semantic pars-](https://doi.org/10.18653/V1/2022.FINDINGS-EMNLP.384)
[ing. In Findings of the Association for Computa-](https://doi.org/10.18653/V1/2022.FINDINGS-EMNLP.384)
_tional Linguistics: EMNLP 2022, Abu Dhabi, United_
_Arab Emirates, December 7-11, 2022, pages 5248–_
5259. Association for Computational Linguistics.
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao,
Abu Awal Md Shoeb, Abubakar Abid, Adam
Fisch, Adam R. Brown, Adam Santoro, Aditya
Gupta, Adrià Garriga-Alonso, Agnieszka Kluska,
Aitor Lewkowycz, Akshat Agarwal, Alethea Power,
Alex Ray, Alex Warstadt, Alexander W. Kocurek,
Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell,
Amanda Dsouza, Ameet Rahane, Anantharaman S.
Iyer, Anders Andreassen, Andrea Santilli, Andreas
Stuhlmüller, Andrew M. Dai, Andrew La, Andrew K.
Lampinen, Andy Zou, Angela Jiang, Angelica Chen,
Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi,
Arfa Tabassum, Arul Menezes, Arun Kirubarajan,
Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakas, and
[et al. 2022. Beyond the imitation game: Quantifying](https://doi.org/10.48550/ARXIV.2206.04615)
[and extrapolating the capabilities of language models.](https://doi.org/10.48550/ARXIV.2206.04615)
_CoRR, abs/2206.04615._
Kaya Stechly, Matthew Marquez, and Subbarao Kamb[hampati. 2023. GPT-4 doesn’t know it’s wrong: An](https://doi.org/10.48550/ARXIV.2310.12397)
[analysis of iterative prompting for reasoning prob-](https://doi.org/10.48550/ARXIV.2310.12397)
[lems. CoRR, abs/2310.12397.](https://doi.org/10.48550/ARXIV.2310.12397)
Qiaoyu Tang, Ziliang Deng, Hongyu Lin, Xianpei Han,
[Qiao Liang, and Le Sun. 2023. Toolalpaca: Gener-](https://doi.org/10.48550/ARXIV.2306.05301)
[alized tool learning for language models with 3000](https://doi.org/10.48550/ARXIV.2306.05301)
[simulated cases. CoRR, abs/2306.05301.](https://doi.org/10.48550/ARXIV.2306.05301)
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, Dan Bikel, Lukas Blecher, Cristian CantonFerrer, Moya Chen, Guillem Cucurull, David Esiobu,
Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan
Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa,
Isabel Kloumann, Artem Korenev, Punit Singh Koura,
Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten,
Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu,
Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas
[Scialom. 2023. Llama 2: Open foundation and fine-](https://doi.org/10.48550/ARXIV.2307.09288)
[tuned chat models. CoRR, abs/2307.09288.](https://doi.org/10.48550/ARXIV.2307.09288)
Karthik Valmeekam, Matthew Marquez, and Subbarao
[Kambhampati. 2023. Can large language models](https://doi.org/10.48550/ARXIV.2310.08118)
[really improve by self-critiquing their own plans?](https://doi.org/10.48550/ARXIV.2310.08118)
_CoRR, abs/2310.08118._
Jindong Wang, Cuiling Lan, Chang Liu, Yidong
Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wen[jun Zeng, and Philip S. Yu. 2023a. Generalizing to](https://doi.org/10.1109/TKDE.2022.3178128)
[unseen domains: A survey on domain generalization.](https://doi.org/10.1109/TKDE.2022.3178128)
_IEEE Trans. Knowl. Data Eng., 35(8):8052–8072._
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V.
Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023b. [Self-consistency](https://openreview.net/pdf?id=1PL1NIMMrw)
[improves chain of thought reasoning in language](https://openreview.net/pdf?id=1PL1NIMMrw)
[models. In The Eleventh International Conference](https://openreview.net/pdf?id=1PL1NIMMrw)
_on Learning Representations, ICLR 2023, Kigali,_
_Rwanda, May 1-5, 2023. OpenReview.net._
Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin
Guu, Adams Wei Yu, Brian Lester, Nan Du, An[drew M. Dai, and Quoc V. Le. 2022a. Finetuned](https://openreview.net/forum?id=gEZrGCozdqR)
[language models are zero-shot learners. In The Tenth](https://openreview.net/forum?id=gEZrGCozdqR)
_International Conference on Learning Representa-_
_tions, ICLR 2022, Virtual Event, April 25-29, 2022._
OpenReview.net.
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel,
Barret Zoph, Sebastian Borgeaud, Dani Yogatama,
Maarten Bosma, Denny Zhou, Donald Metzler, Ed H.
Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy
[Liang, Jeff Dean, and William Fedus. 2022b. Emer-](https://openreview.net/forum?id=yzkSU5zdwD)
[gent abilities of large language models. Trans. Mach.](https://openreview.net/forum?id=yzkSU5zdwD)
_Learn. Res., 2022._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, brian ichter, Fei Xia, Ed Chi, Quoc V Le,
[and Denny Zhou. 2022c. Chain-of-thought prompt-](https://proceedings.neurips.cc/paper_files/paper/2022/file/9d5609613524ecf4f15af0f7b31abca4-Paper-Conference.pdf)
[ing elicits reasoning in large language models. In](https://proceedings.neurips.cc/paper_files/paper/2022/file/9d5609613524ecf4f15af0f7b31abca4-Paper-Conference.pdf)
_Advances in Neural Information Processing Systems,_
volume 35, pages 24824–24837. Curran Associates,
Inc.
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen
Ding, Boyang Hong, Ming Zhang, Junzhe Wang,
Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan,
Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran
Wang, Changhao Jiang, Yicheng Zou, Xiangyang
Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng,
Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan
Zheng, Xipeng Qiu, Xuanjing Huan, and Tao Gui.
[2023. The rise and potential of large language model](https://doi.org/10.48550/ARXIV.2309.07864)
[based agents: A survey. CoRR, abs/2309.07864.](https://doi.org/10.48550/ARXIV.2309.07864)
Michihiro Yasunaga, Xinyun Chen, Yujia Li, Panupong
Pasupat, Jure Leskovec, Percy Liang, Ed H. Chi,
[and Denny Zhou. 2023. Large language models as](https://doi.org/10.48550/ARXIV.2310.01714)
[analogical reasoners. CoRR, abs/2310.01714.](https://doi.org/10.48550/ARXIV.2310.01714)
Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex
[Smola. 2023. Automatic chain of thought prompting](https://openreview.net/pdf?id=5NTt8GFjUHkr)
[in large language models. In The Eleventh Inter-](https://openreview.net/pdf?id=5NTt8GFjUHkr)
_national Conference on Learning Representations,_
_ICLR 2023, Kigali, Rwanda, May 1-5, 2023. Open-_
Review.net.
Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen,
Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen,
[and Jiawei Han. 2023. Don’t make your LLM an eval-](https://doi.org/10.48550/ARXIV.2311.01964)
[uation benchmark cheater. CoRR, abs/2311.01964.](https://doi.org/10.48550/ARXIV.2311.01964)
**Appendix**
**A** **Supplementary Experiments on GPT-4**
| Model | OOD-Toolset | Cost |
|---|---|---|
| **Few-shot** | | |
| GPT-4 | 76.50 / 79.75 | ∼ 1.12 |
| **SELF-DEMOS** | | |
| GPT-4 in all steps | 80.50 / 83.50 | ∼ 4.95 |
| GPT-3.5 in all steps | 75.50 / 79.50 | ∼ 0.57 |
| GPT-3.5 reuse GPT-4 demos in step 4 | 76.50 / 79.75 | ∼ 0.13 |

Table 7: Comparison of performance and overhead on more powerful models (i.e., GPT-4). The cost is calculated according to the OpenAI price list, measured by total dollars spent on 200 instances.
We conducted GPT-4 tests on 200 random OOD-Toolset instances and used its generated demos as inputs for GPT-3.5 in SELF-DEMOS step 4, as detailed in Table 7.

Based on the results, we observe that: (1) GPT-4's advanced capabilities allow it to match the performance of GPT-3.5 using SELF-DEMOS with simply a few-shot approach. However, given the model's enhanced capabilities, it comes with a higher cost. (2) GPT-4 still benefits from our proposed method, and the high-quality demos it generates remain effective for weaker models. This shows the reusability of demos and paves the way for SELF-DEMOS to reduce long-term costs.
**B** **Details of OOD-Toolset**
The raw data from ToolAlpaca (Tang et al., 2023), including the training and testing sets, comprises 468 tool APIs and 4,369 tool-use cases. Due to a lack of validation of the content generated by GPT-3.5, the dataset may contain specific errors, such as ambiguous queries due to outdated or insufficient information and incorrect API calls due to null or wrong values being passed. To address these issues, we implemented a data cleansing process in the following steps:
**Rule-based Cleaning.** We structured each tool
API in the raw data into a dictionary with keys
for API Name, Description, Usage Specification,
and Tool-use Cases. The API name identifies the
tool, and the description outlines its purpose. The
usage specification clarifies the API call format and
required parameters. The tool-use cases consist of
user queries and corresponding function call lists.
The rule-based cleaning process involved:
- We removed entries with missing keys and formatting errors, particularly those that did not follow the JSON format in the function calls.
- We removed user queries that required more than
three function calls to be resolved due to their
complexity.
- We removed parameters not directly related to
the core functionality of tools, such as API keys
and sensitive user information.
- We removed tools with fewer than 3 instances
or fewer than 3 functions to ensure that OOD
scenarios could be built.
After the first cleaning round, a total of 322 tools
and 2,788 instances remained.
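A minimal Python sketch of this rule-based pass is shown below. The field names (`api_name`, `usage_spec`, `cases`, `functions`) and the exact way cases are stored are our assumptions based on the description above, not the authors' released code.

```python
from typing import Optional

MAX_CALLS = 3      # queries that need more than three function calls are dropped
MIN_CASES = 3      # a tool must keep at least three use cases ...
MIN_FUNCS = 3      # ... and expose at least three sub-functions for OOD splits

def keep_case(case: dict) -> bool:
    """Keep a tool-use case only if its call list is well-formed and short enough."""
    calls = case.get("function_calls")
    return isinstance(calls, list) and 0 < len(calls) <= MAX_CALLS

def clean_tool(tool: dict) -> Optional[dict]:
    """Apply the rule-based filters to one raw tool entry; return None to drop it."""
    required = ("api_name", "description", "usage_spec", "cases", "functions")
    if any(key not in tool for key in required):   # missing keys -> drop the entry
        return None
    cases = [c for c in tool["cases"] if keep_case(c)]
    if len(cases) < MIN_CASES or len(tool["functions"]) < MIN_FUNCS:
        return None                                # too small to build OOD scenarios
    return {**tool, "cases": cases}
```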
**Manual Data Cleaning.** In manual data cleaning,
we emphasize the solvability of given queries. The
manual data cleaning process involved:
- We strive to minimize dependencies between function calls, avoiding scenarios where a subsequent function call relies on the results returned by preceding ones. This ensures that these queries can be answered in a single round of dialogue.
- While we avoided the exposure of sensitive user information, some necessary parameters within function calls, such as the email address in the email API, are obfuscated with a placeholder, for instance, [email protected].
- Time and location information should be explicitly mentioned in the queries, avoiding the use
of ambiguous pronouns such as ‘today’, ‘tomorrow’, and ‘my home’.
- We confirmed the consistency of parameter values with their data types as defined in the usage
specifications.
After the second cleaning round, the dataset
comprised 321 tools and 2,625 instances. Table
8 presents an illustrative example of the cleaned
dataset.
**Query and Demonstration Construction.** After
two rounds of data cleaning, the correctness and
solvability of the data have been ensured. Then,
we proceeded to select instances from the tool-use
cases and construct corresponding demonstrations.
During the selection process, we tended to choose
longer instances as queries, considering them to be
more challenging. Following that, we randomly
sampled three other instances from the remaining
use cases of the same tool as demos. Note that
the sub-APIs to be called for the demos should be
different from those required for the chosen queries
to fulfill the OOD settings.
Finally, we obtained a set of 1,057 queries, forming our testing set. Table 9 presents an instance of
OOD-Toolset.
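The selection step described above can be pictured with a short sketch like the following. The helper `called_functions`, the field names, and the tie-breaking choices are hypothetical and only illustrate the disjoint sub-API constraint; they are not the authors' implementation.

```python
import random
import re
from typing import Optional

def called_functions(case: dict) -> set:
    """Extract sub-API names from call strings like 'ROUTE(start=..., target=...)'."""
    # Assumes calls are well-formed after the two cleaning rounds.
    return {re.match(r"(\w+)\(", call).group(1) for call in case["function_calls"]}

def build_instance(tool: dict, n_demos: int = 3, seed: int = 0) -> Optional[dict]:
    """Pick the longest case as the query; sample demos that use different sub-APIs."""
    rng = random.Random(seed)
    cases = sorted(tool["cases"], key=lambda c: len(c["function_calls"]), reverse=True)
    query, pool = cases[0], cases[1:]
    query_funcs = called_functions(query)
    # OOD setting: seed demos must not call any sub-API needed by the query.
    candidates = [c for c in pool if called_functions(c).isdisjoint(query_funcs)]
    if len(candidates) < n_demos:
        return None
    return {"query": query, "seed_demos": rng.sample(candidates, n_demos)}
```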
**C** **Prompt Templates**
The prompt templates of SELF-DEMOS for each step in tool-using tasks are presented in Tables 10, 11, 12, and 13. Similarly, the prompt templates for mathematical problem-solving tasks are presented in Tables 14, 15, 16, and 17.
**D** **Details of Computational Overhead**
The details of the computational overhead of each method are shown in Table 18.
**E** **Case Study**
Even though SELF-DEMOS performs better than all other methods, there are instances where it failed while others succeeded. We have picked 3 representative cases for further analysis: (1) SELF-DEMOS succeeded while few-shot / Self-ICL failed, (2) few-shot succeeded while SELF-DEMOS failed, and (3) both failed. Due to space constraints, we put the full case study in our GitHub repository.
**API Name: MAP**
**Description: MAP API is used for calculating distances, planning routes, and locating points.**
**Usage Specifications:**
DISTANCE: Calculate the distance between two points.
Parameters: {“start”: “Required. String. The starting point for the distance calculation.”, “target”:
“Required. String. The destination point for the distance calculation.”}
ROUTE: Generate a travel route between two points.
Parameters: {“start”: “Required. String. The starting point for the route.”, “target”: “Required. String.
The destination point for the route.”}
SEARCH: Locate nearby points within a set distance.
Parameters: {“target”: “Required. String. The target point to search around.”, “position”: “Required.
String. The current position of the user.”, “distance”: “Required. Integer. The search radius in kilometers.”}
**Tool-use Cases:**
Query: How far is Beijing to Shanghai?
Function calls: [DISTANCE(start=“Beijing”, target=“Shanghai”)]
Query: How many shops are around Times Square in 3km?
Function calls: [SEARCH(target=“shop”, position=“Times Square”, distance=3)]
Query: Show me the route from Los Angeles to San Francisco.
Function calls: [ROUTE(start=“Los Angeles”, target=“San Francisco”)]
Query: Are there any bookstores within 5km of Central Park?
Function calls: [SEARCH(target=“bookstore”, position=“Central Park”, distance=5)]
Query: How do I drive from Big Ben to Tower Bridge, and then to the London Eye?
Function calls: [ROUTE(start=“Big Ben”, target=“Tower Bridge”),
ROUTE(start=“Tower Bridge”, target=“London Eye”)]
Query: What’s the distance from my home at 123 Main St to the grocery store at 456 Oak St, and from
there to my office at 789 Pine St?
Function calls: [DISTANCE(start=“123 Main St”, target=“456 Oak St”),
DISTANCE(start=“456 Oak St”, target=“789 Pine St”)]
Table 8: An illustrative example of the cleaned dataset, composed of four parts: API Name, Description, Usage
**Specifications, and Tool-use Cases. Among them, the tool-use cases are stored as lists.**
**Seed Demos:**
Query: How far is Beijing to Shanghai?
Function calls: [DISTANCE(start=“Beijing”, target=“Shanghai”)]
Query: How many shops are around Times Square in 3km?
Function calls: [SEARCH(target=“shop”, position=“Times Square”, distance=3)]
Query: Are there any bookstores within 5km of Central Park?
Function calls: [SEARCH(target=“bookstore”, position=“Central Park”, distance=5)]
**Query: How do I drive from Big Ben to Tower Bridge, and then to the London Eye?**
Table 9: An instance of OOD-Toolset corresponds to the tool in Table 8, where the function required for the Query
is ROUTE. Consequently, tool-use cases related to this sub-API should not be included in the Seed Demos.
The {tool_name} API is used for {description}. In this task, you need to give a general understanding
of the user query and determine which function should be called to solve the query.
**# Tool Specification:**
{specification}
**# User Query:**
{query}
**# Instruction:**
Generate a general understanding here. In particular, you need to explicitly indicate the name of the
function that should be called.
Table 10: Prompt template for Query Understanding (Step 1) on the OOD-Toolset.
The {tool_name} API is used for {description}. In this task, you need to give an example of when to
use the API based on the specification.
**# Tool Specification:**
{specification}
**# Demonstration:**
{seed_demos}
**# Instruction:**
Generate an example of how to use the {function_mentioned_in_step1} function.
- After "Query: ", describe the user query.
- After "Function Calls: ", give the function calls in the format of ["function_name(parameter=value)"].
Table 11: Prompt template for Query-aware Demo Generation (Step 2) on the OOD-Toolset.
The {tool_name} API is used for {description}. Here are some examples of how to use the API. In
this task, you must check the examples for correctness and select one or two best examples to keep.
**# Tool Specification:**
{specification}
**# Check List:**
- Syntax errors: the function calls should conform to the format like "function_name(parameter=value)".
- Redundant parameters: the function calls must conform to the parameter list in the tool specification.
- Value passing errors: the values of parameters should be of the correct type and reasonable.
- Unsolvable errors: the query should be solvable with the given function.
**# Examples to be Checked:**
{generated_demos}
**# Instruction:**
Select one or two best examples to keep. If there are not enough correct examples, just keep one.
For your output:
- After "Selection: ", give the serial numbers of your choice in the format of <x>, <y>.
- After "Explanation: ", give the reason why you keep the examples.
Table 12: Prompt template for Best-of-N Sampling (Step 3) on the OOD-Toolset.
The {tool_name} API is used for {description}. In this task, you must generate the function calls for
a given query.
**# Tool Specification:**
{specification}
**# Demonstration:**
{seed_demos}
{selected_demos}
**# Instruction:**
Solve the following user query.
Query: {query}
Function calls: Give your answer in the format of ["function_name(parameter=value)"].
Table 13: Prompt template for Response Generation (Step 4) on the OOD-Toolset.
In this task, you need to give a general understanding of mathematical problems, which can be applied to
all similar questions in the same scenario.
**# Problem:**
{question}
**# Instruction:**
Give a general understanding of this problem in one line. Highlight the general solution methodologies to
solve this type of problem. Focus on the problem-solving approach without delving into specific numerical
values or answers.
You can refer to this template for your understanding: This problem involves...To solve this type of
problem...
Table 14: Prompt template for Query Understanding (Step 1) on the GSM8K and MATH datasets.
In this task, you need to recall mathematical problems. When presented with a math problem, recall
another relevant problem as an example. The example should help answer the initial problem.
**# Problem:**
**## The initial problem:**
{question}
**## The understanding you can refer to:**
{understanding}
**# Demonstration:**
{seed_demos}
**# Instruction:**
Recall one example of a math problem relevant to the initial problem. The example should be distinct
from the initial problem (e.g., involving different numbers and names).
- After "Question: ", describe the problem you generate in one line.
- After "Answer: ", explain the step-by-step solution and enclose the ultimate answer in \boxed{}.
Table 15: Prompt template for Query-aware Demo Generation (Step 2) on the GSM8K and MATH datasets.
In this task, you need to check the correctness of these math Q&A pairs and select one or two best
examples to keep for answering the initial problem.
**# The initial problem:**
{Question}
**# Check List:**
- The calculation process in the solution must be correct and without ambiguity.
- The examples should be relevant and helpful in solving the initial problem.
**# Examples to be checked:**
{generated_demos}
**# Instruction:**
Select one or two best examples to keep. If there are not enough correct and helpful examples, just keep
one.
For your answer:
- After "Selection: ", give the serial numbers of your choice in the format of <x>, <y>.
- After "Explanation: ", give the reason why you keep this example.
Table 16: Prompt template for Best-of-N Sampling (Step 3) on the GSM8K and MATH datasets.
Your task is to tackle mathematical problems step by step. You can refer to these demonstrations to give
your reasoning process.
**# Demonstration:**
{seed_demos}
{selected_demos}
**# Instruction:**
Solve the following problem step by step.
Question: {Question}
Answer: Explain the step-by-step solution and enclose the ultimate answer in \boxed{} here.
Table 17: Prompt template for Response Generation (Step 4) on the GSM8K and MATH datasets.
| Prompting Method | Avg. #Tokens of Input | Avg. #Tokens of Output | Cost | OOD-Toolset |
|---|---|---|---|---|
| Few-shot | 496.0 | 22.6 | 0.54 | 71.9 / 76.6 |
| Few-shot + SC (5 Paths) | 496.0 × 5 = 2480.0 | 22.6 × 5 = 113.0 | 2.71 | 72.5 / 77.2 |
| Few-shot + SC (10 Paths) | 496.0 × 10 = 4960.0 | 22.6 × 10 = 226.0 | 5.41 | 72.2 / 77.0 |
| Self-ICL (Few-shot) | 456.4 + 498.4 × 2 + 625.1 = 2078.3 | 78.7 + 23.6 × 2 + 22.2 = 148.1 | 2.37 | 71.5 / 76.0 |
| Analogical Prompting (Few-shot) | 598.0 | 304.5 | 1.21 | 71.1 / 75.4 |
| Self-Demos (Standard) | 323.6 + 490.8 × 5 + 776.4 + 606.4 = 4160.4 | 3.4 + 58.0 × 5 + 7.7 + 22.5 = 323.6 | 4.81 | **75.1 / 79.4** |
| Self-Demos (KV Cache Reuse) | 323.6 + 490.8 + 776.4 + 606.4 = 2197.2 | 3.4 + 58.0 × 5 + 7.7 + 22.5 = 323.6 | 2.84 | **75.1 / 79.4** |

Table 18: Average number of input and output tokens of different methods on OOD-Toolset. In the equation, each term being added represents the average number of tokens per step (used only within a multi-step framework), while each multiplier indicates the number of times that step is called.
#### SELF-EXPLORE to Avoid the PIT: Improving the Reasoning Capabilities of Language Models with Fine-grained Rewards
**Hyeonbin Hwang[1]** **Doyoung Kim[1]** **Seungone Kim[1,2]** **Seonghyeon Ye[1]** **Minjoon Seo[1]**
KAIST AI[1] Carnegie Mellon University[2]
{hbin0701, doyoungkim, seonghyeon.ye, minjoon}@kaist.ac.kr [email protected]
**Abstract**
Training on large amounts of rationales (i.e.,
CoT Fine-tuning) is effective at improving the
reasoning capabilities of large language models
(LLMs). However, acquiring human-authored
rationales or augmenting rationales from proprietary models is costly and not scalable. In
this paper, we study the problem of whether
LLMs could self-improve their reasoning capabilities. To this end, we propose SELFEXPLORE, where the LLM is tasked to explore
the first wrong step (i.e., the first pit) within the
rationale and use such signals as fine-grained rewards for further improvement. On the GSM8K
and MATH test set, SELF-EXPLORE achieves
11.57% and 2.89% improvement on average across three LLMs compared to supervised fine-tuning (SFT). Our code is available [here](https://github.com/hbin0701/Self-Explore).
Hence, such strategies are limited in advancing
frontier models (Stanton et al., 2021; Gudibande
et al., 2023). One potential solution to address this
issue is to enhance the reasoning capabilities of
LLMs through self-training (Gulcehre et al., 2023;
Chen et al., 2024; Yuan et al., 2024).
Inspired by prior works that focus on aligning
LLMs to user preferences through self-training, we
propose SELF-EXPLORE, a training method designed to self-improve the reasoning capabilities
of LLMs by extracting granular learning signals
from its own generated rationales. Specifically,
a target model conducts step-level exploration to
identify the first wrong step (i.e., first pit) within
each rationale by sampling multiple continuations.
Then, we construct a pair-wise dataset by sorting
the rationales into positive and negative samples at
a step level. Finally, by applying an arbitrary preference learning objective (e.g., Direct Preference
Optimization (DPO) (Rafailov et al., 2023)) on a
step-level, we increase the probability of generating positive rationales and lower the probability of
generating negative ones in a fine-grained manner.
Through experiments, we find that SELFEXPLORE constantly improves the performance
across different base models (Mistral-7B (Jiang
et al., 2023), Llemma-7B (Azerbayev et al., 2023),
and Deepseek-Math 7B (Shao et al., 2024)) without any distillation from proprietary models. For
each model, we observe a 13.19%, 10.23%, and
11.30% improvement on GSM8K (Cobbe et al.,
2021) and a 1.98%, 3.16%, and 3.54% improvement on MATH (Hendrycks et al., 2021) compared
to supervised fine-tuning (SFT). Also, we find that
constructing a pair-wise dataset in a granular, step-by-step manner (i.e., identifying the first pit) outperforms a naive approach of constructing pairs based on the correctness of the final prediction, leading to a 3.64% and 2.76% margin on the GSM8K and MATH datasets, respectively.
**1** **Introduction**
Recent works have shown that large language models (LLMs) can solve complex reasoning tasks
with Chain-of-Thought (CoT) Prompting, which involves generating a rationale before its final prediction (Wei et al., 2023; Kojima et al., 2023; OpenAI
et al., 2023; Team et al., 2023). In contrast, relatively smaller models show limited performance
improvements, and thus prior works have focused
on augmenting rationales from proprietary LLMs
and distilling them to smaller models (Li et al.,
2022; Fu et al., 2023; Kim et al., 2023b; Mukherjee et al., 2023; Yu et al., 2023b; Liu et al., 2023a;
Mitra et al., 2023, 2024a; Li et al., 2024).
However, acquiring high-quality rationales remains challenging. For humans, hand-crafting detailed step-by-step rationale annotations is time-consuming and costly (Kim et al., 2023a). On the other hand, using closed-source models through APIs incurs high expenses, and distillation-based methods are inherently limited by the performance of their teacher model, which acts as the upper bound (Gudibande et al., 2023; Ye et al., 2023).
Figure 1: Overview of SELF-EXPLORE. From a pairwise dataset (Dpair) made through outcome supervision, we use the incorrect rationales and make the target model generate multiple completions starting from each step. If none of the completions reach the answer, we mark that step as the first pit. Then, with the identified first pit, we reorganize Dpair into a granular preference dataset (Dg-pair) which provides better learning signal during training.
**2** **Related Works**
**2.1** **Mathematical Reasoning of LLMs**
To make a stronger math-reasoning model, previous works have either continually pre-trained the base model on large math corpora (Lewkowycz et al., 2022; Azerbayev et al., 2023) or used supervised fine-tuning with large amounts of synthetic data distilled from frontier models (Luo et al., 2023; Yu et al., 2023b; Liu et al., 2023a; Mitra et al., 2024b; Shao et al., 2024; Toshniwal et al., 2024).
There is also a growing number of works focusing
on increasing test-time compute, namely generating multiple rationales then marginalizing over various reasoning paths (Wang et al., 2023), developing
either an outcome-level or process-level separate
verifier that could rank the rationales (Cobbe et al.,
2021; Lightman et al., 2023; Liu et al., 2023a; Hosseini et al., 2024) or decoding under the guidance
of a value-model (Xie et al., 2023; Liu et al., 2023b;
Yu et al., 2023a). Our approach instead focuses on
enhancing the model’s top-1 performance which
reduces test-time computational burden.
**2.2** **Step-level Supervision**
Many studies have suggested the advantages of
step-level guidance (Cobbe et al., 2021; Lightman
et al., 2023), yet acquiring such labels is expensive. Thus, concurrent works rely on pseudo labels,
evaluating whether the model can reach the correct
answer when provided up to each successive step
as input (Wang et al., 2024a,b; Jiao et al., 2024;
Havrilla et al., 2024b). However, most of these works leverage the acquired labels to train a verifier model, which is used either for PPO (Schulman et al., 2017) or for inference-time re-ranking. Our approach does not require any separate module, greatly simplifying the overall framework.
**2.3** **Self-Training for Mathematical Reasoning**
Another line of work focuses on self-training methods that compensate for the scarcity of high-quality training data. This includes utilizing self-generated
correct rationales for training (Zelikman et al.,
2022; Yuan et al., 2023; Ni et al., 2023), and also
self-generated incorrect rationales (Havrilla et al.,
2024b; Hosseini et al., 2024) - which can together
form a pairwise dataset that can be trained with
preference learning techniques, such as Direct Preference Optimization (DPO) (Rafailov et al., 2023).
These strategies are particularly effective for
many Math Word Problem (MWP) tasks, where
models demonstrate a much higher performance
when multiple attempts are allowed (pass@k)
rather than just one (pass@1) (Havrilla et al.,
2024a). This indicates that the model indeed has
the potential to reach the correct answer, yet its answer distribution is misaligned. Our work aims to
more precisely steer this distribution towards more
optimal policy with fine-grained supervision.
**3** **Preliminaries**
Given a language model πθ and a dataset D, _self-training_ algorithms comprise two stages: (1) dataset growth, where the dataset D is augmented with πθ's generations, and (2) policy improvement, where the pre-trained model improves human-alignment through preference learning followed by supervised fine-tuning (Gulcehre et al., 2023; Yuan et al., 2024; Chen et al., 2024). Here, we describe two relevant methods that are employed in our framework for self-training.
**3.1** **Rejection Sampling Fine-Tuning**
RFT (Yuan et al., 2023) is a training method where
the pre-trained model MPT is allowed to fine-tune
on its own correct generations. To do so, we first
need a base generator with zero-shot reasoning
ability, which is obtained by training MPT on the
initial dataset D with the MLE objective:
Figure 2: Example of a rejected sample from GSM8K: in the First Pit, 0.04 was mistakenly added instead of being subtracted.
**4** **Method**
**4.1** **Self-Training with Outcome Supervision**
To achieve autonomous improvement for multi-step
reasoning, we follow the general offline preference
recipe from recent models (SFT + DPO) (Tunstall
et al., 2023; Ivison et al., 2023). Yet we only utilize the initial human-curated dataset D and the
training model’s self-generated data. In this light,
we initialize the reference policy πref by applying
Rejection Sampling Fine-Tuning to the pre-trained
model MPT to obtain MRFT.
To construct the pairwise dataset Dpair, we start by adopting the conventional approach of designating a correct solution as a favorable sample ($y^+$) and an incorrect solution as an unfavorable sample ($y^-$) for a given problem x, using outcome supervision to determine correctness (Yu et al., 2023a; Hosseini et al., 2024). We pair each correct solution $\hat{y}_{i,j}$ in DRFT with the incorrect solution $\hat{y}_{i,k}$ from DGEN that has the maximum edit distance in between, in light of Pal et al. (2024). Overall, we utilize each solution only once and continue this pairing process until no further pairs can be formed. For additional details on pair formation, please see Appendix C.
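One possible realization of this pairing step is sketched below. The word-level tokenization, the exact edit-distance routine, and the greedy matching order are our assumptions, since the paper only specifies maximum edit distance and single use of each solution.

```python
def edit_distance(a: list, b: list) -> int:
    """Word-level Levenshtein distance between two tokenized rationales."""
    prev = list(range(len(b) + 1))
    for i, tok_a in enumerate(a, 1):
        curr = [i]
        for j, tok_b in enumerate(b, 1):
            curr.append(min(prev[j] + 1,            # deletion
                            curr[j - 1] + 1,        # insertion
                            prev[j - 1] + (tok_a != tok_b)))  # substitution
        prev = curr
    return prev[-1]

def build_pairs(correct: list, incorrect: list) -> list:
    """Greedily pair each correct rationale with the most dissimilar unused incorrect one."""
    pairs, unused = [], list(incorrect)
    for y_pos in correct:
        if not unused:
            break
        y_neg = max(unused, key=lambda y: edit_distance(y_pos.split(), y.split()))
        unused.remove(y_neg)                        # each solution is used at most once
        pairs.append((y_pos, y_neg))
    return pairs
```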
After forming the pairwise dataset Dpair, we train the model MRFT using the objective specified in eq. 2. This approach guides the model on a holistic level to favor policies that generate solutions leading to the correct answer, relative to those that result in an incorrect answer. In the following sections, references to DPO specifically denote this outcome-supervised preference learning approach, which we employ as a baseline method for our experiments.
$$\mathcal{L}_{\mathrm{MLE}} = -\sum_{i=1}^{|\mathcal{D}|} \log p_\theta(y_i \mid x_i) \tag{1}$$
With the resulting model MSFT, we sample N candidate rationales $\hat{y}_i$ for each question with a nonzero temperature T to form $\mathcal{D}_{\mathrm{GEN}} = \{(x_i, \hat{y}_{i,j})_{j=1}^{N} \mid x_i \in Q\}$. After removing duplicate rationales using heuristics, each solution $\hat{y}_{i,j}$ is labeled as correct or incorrect by extracting its predicted final answer with the extractor function F and comparing it to the actual answer $a_i$. The set of correct rationales forms DRFT, and MPT is trained on this dataset with the objective in eq. 1. Note that DRFT does not include the initial dataset D in our setting.
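The data-growth stage above can be summarized with a small sketch. Here `sample_fn` and `extract_answer` are caller-supplied placeholders (e.g., a vLLM sampling wrapper and an answer parser), not functions from the paper's codebase.

```python
def build_rft_dataset(sample_fn, extract_answer, dataset, n_samples=100, temperature=0.7):
    """Rejection sampling fine-tuning data: keep self-generated rationales whose
    extracted final answer matches the gold answer (the extractor F in the paper).

    sample_fn(question, n, temperature) -> list[str]; extract_answer(rationale) -> str.
    """
    d_gen, d_rft = [], []
    for question, gold in dataset:                  # dataset of (x_i, a_i) pairs
        rationales = sample_fn(question, n_samples, temperature)
        for y in dict.fromkeys(rationales):         # drop exact duplicates, keep order
            d_gen.append({"question": question, "rationale": y})
            if extract_answer(y) == gold:           # F(ŷ) == a_i
                d_rft.append({"question": question, "rationale": y})
    return d_rft, d_gen                             # M_PT is then fine-tuned on d_rft (eq. 1)
```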
**3.2** **Direct Preference Optimization**
DPO (Rafailov et al., 2023) training requires a pairwise dataset consisting of a chosen completion $y^+$ and a rejected completion $y^-$ for a given input x.
Its objective effectively increases the log-likelihood
of the chosen completion over the rejected one:
$$\mathcal{L}_{\mathrm{DPO}} = -\mathbb{E}\left[\log \sigma\left(\hat{r}_\theta(x, y^{+}) - \hat{r}_\theta(x, y^{-})\right)\right], \qquad \hat{r}_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)} \tag{2}$$
Here, reference model πref is generally initialized with supervised fine-tuning (SFT) with preferred completions for a single epoch to minimize
distribution shift from the true reference distribution.
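For reference, eq. 2 corresponds to only a few lines of PyTorch once the summed sequence log-probabilities are precomputed. This is a generic DPO-loss sketch, not the authors' implementation, and β = 0.1 is an assumed default.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Eq. 2: -log sigmoid of the difference between chosen and rejected implicit rewards.

    Each argument is a tensor of summed token log-probabilities log pi(y | x)
    for a batch of (x, y+, y-) triples.
    """
    chosen_reward = beta * (policy_chosen_logps - ref_chosen_logps)        # r̂_θ(x, y+)
    rejected_reward = beta * (policy_rejected_logps - ref_rejected_logps)  # r̂_θ(x, y-)
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()
```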
**4.2** **Multi-Step Preference Learning**
Consider a multi-step problem setting given agent
_πθ, input x, and target sequence y comprised of_
steps {y[1], ..., y[n]}. Then, we can naively define
the agent reward by evaluating the predicted final
answer at the terminal state y[n] against the ground
truth answer a, using extractor function F.
initial wrong step $y^w$ corresponding to the first pit. If $w \neq 1$, the reward of the preceding steps $r(y^i \mid x, y^1, \ldots, y^{i-1})$, such that $i \leq w - 1$, should not be penalized.

**(2) Steps after the first pit** For the steps subsequent to $y^w$, while it is clear that $y^w$ is flawed, decreasing the likelihood of $\sum_{i=w+1}^{n} P(y^i \mid x, y^1, \ldots, y^{i-1})$ could adversely impact the coherency of the model. This concern arises because the error in $y^w$ may be due to a minor computation error or wrong formula construction, whereas the subsequent reasoning and steps could still be logically sound (Figure 2).
**4.3** **Self-Explore**
In this light, we apply the reward design from eq. 4
to transform Dpair into step-level granular pairwise dataset Dg-pair. This requires modifying each
_rejected sample within Dpair so that we only reduce_
the likelihood of the first pit y[w]. To find such a step,
we employ our model as a self-guided explorer.
We assess whether the target model can reach
the correct answer by sampling k completions with
a non-zero temperature T from each step. If none
of the completions yield the correct answer, we
label that step as y[w]. This indicates that the step
has low Q-value or potential, suggesting that the
step is either incorrect or is beyond the model’s
capability to utilize it effectively. On the other, if
we do not find y[w] until the end, we discard that
sample. This is because the absence of y[w] suggests
that the sample, produced by the base generator
(MSFT), may not actually be infeasible from the
perspective of the explorer (MRFT).
To form a Dg-pair instance, we set the first pit
_s[w]_ as the new rejected sample. The new input
is then created by concatenating the original input
(question) with all steps prior to the first pit. For the
new chosen sample, we randomly select one correct
completion from s[w][−][1], which matches this new
input. We intentionally use the whole completion
from the explorer to maximize the expected reward,
or the likelihood of deriving the correct answer. In
a similar manner, if w = 1, we simply use the
original chosen sample. Then, we apply preference
learning objective in eq. 2 on our reference model
_MRFT using the new dataset Dg-pair._
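A compact sketch of this exploration loop is given below. The prompt formatting, the helper names, and the choice of the first correct continuation are assumptions made for illustration only; they are not the released implementation.

```python
def find_first_pit(complete_fn, extract_answer, question, steps, gold, k=4, temperature=0.7):
    """Step-level exploration for one incorrect rationale (a sketch, not the released code).

    complete_fn(prompt, k, temperature) -> list[str] and extract_answer(text) -> str are
    caller-supplied stand-ins for the target model and the answer extractor F.
    Returns (w, chosen) where steps[w] is the first pit and `chosen` is a correct
    continuation sampled from the prefix ending at step w-1 (None when w == 0, in which
    case the original chosen sample is reused), or None when no pit is found and the
    rationale is discarded.
    """
    prev_correct = None
    for w in range(len(steps)):
        prompt = question + "\n" + "\n".join(steps[: w + 1])   # prefix that includes step w
        continuations = complete_fn(prompt, k, temperature)
        correct = [c for c in continuations if extract_answer(c) == gold]
        if not correct:
            return w, prev_correct                             # steps[w] is the first pit
        prev_correct = correct[0]                              # correct continuation after step w
    return None                                                # explorer never fails: discard
```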
Our step-level annotation strategy builds on the
framework first introduced in Wang et al. (2024a).
However, unlike Wang’s approach which utilizes
different models for each role (i.e. completer, tar
$$r(x, (y^1, \ldots, y^n)) = \begin{cases} 1, & \text{if } F(y^n) = a \\ -1, & \text{if } F(y^n) \neq a \end{cases} \tag{3}$$
If the answer space is large enough, we can
safely assume that a match between these two indicates that all the prior steps are also correct. On
the other hand, if the terminal state yn reached an
incorrect answer, this suggests that the sequence
generation has encountered at least one "pit" - an
irreversible error in its prior steps $\{y^1, \ldots, y^{i-1}\}$ that
caused the agent to deviate from the correct path.
Yet once we identify the first pit, we can consider
all subsequent steps as non-relevant, given that
they are already compromised by the preceding
pit. Then, we can re-design the reward for generating each step y[i] in the multi-step problem setting
as follows:
$$r(x, (y^1, \ldots, y^i)) = \begin{cases} -1, & \text{if } y^i \text{ is a first pit} \\ 1, & \text{if } i = n \text{ and } F(y^i) = a \\ 0, & \text{otherwise} \end{cases} \tag{4}$$
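Written as code, this reward amounts to a simple case split; the function below is only an illustrative restatement of eq. 4 and does not appear in the paper.

```python
def step_reward(is_first_pit, is_final_step, predicted_answer, gold):
    """Sketch of the fine-grained step-level reward in eq. 4."""
    if is_first_pit:
        return -1          # penalize only the first wrong step
    if is_final_step and predicted_answer == gold:
        return 1           # reaching the correct terminal answer
    return 0               # all other steps are left untouched
```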
Meanwhile, the challenge posed by multi-step
tasks is that it is hard to avoid the pit and reach the
right terminal state, especially when the problem
requires many steps to solve. For simplicity, if
we assume there is a constant probability of ϵ to
fall into the pit in each stage, then the expected
reward after generating t steps becomes $(1 - \epsilon)^t$,
which exponentially decreases as t gets larger. In
order to minimize this risk, previous works have
utilized DPO to enable the original model as a
reward model, steering away from the episodes that
the model fell into the pit. However, DPO objective
shown in eq. 2 relatively decreases the likelihood
of all tokens in the rejected solution y[−]. In light
of eq. 4, we claim that only the step corresponding
to the first pit should be discouraged. To elucidate,
we consider the following two cases.
**(1) Steps before the first pit** For a rejected solution $y^- = \{y^1, \ldots, y^n\}$, there always exists an
Figure 3: Result of three models trained with diverse learning methods. SELF-EXPLORE shows consistent superiority over other training methods in both GSM8K and MATH benchmarks. For 4-Shot, we report the best performance achieved across three distinct prompts.
get model, and reward model), our method forms
a preference pair using this label which allows the
integration of these distinct systems into a single
model, much simplifying the overall training process.
**4.4** **Experiments**
**Datasets and Models We conduct our experiments**
on two widely used MWP datasets GSM8K (Cobbe
et al., 2021) and MATH (Hendrycks et al., 2021).
GSM8K dataset consists of 7,473 training and
1,319 test problems, while the MATH dataset
contains 7,500 training and 5,000 test problems.
We test SELF-EXPLORE across 3 different models: Mistral-7B (Jiang et al., 2023), Llemma-7B (Azerbayev et al., 2023), and Deepseek-Math-7B-Base (Shao et al., 2024).[1]
**Hyperparameters For the base generator**
_MSFT, we only train for 2 epochs, yet report the_
performance of the best checkpoint over 5 training
epochs to ensure a fair comparison. Similarly, For
_MRFT, we train the model for one epoch, yet re-_
port the best performance achieved over the course
of 5 epochs. For all supervised fine-tuning, we use
overall batch size of 64 and conduct learning rate
search between {1e−6, 1e−5} for all models. To construct DRFT, we use N = 100, with T = 0.7.
For step-level exploration, we also use temperature
of 0.7, and generate k = 4 at each step. All our generations were carried out using vllm (Kwon et al.,
2023). For DPO training, we use overall batch
1All datasets and models are under MIT license, except for
Mistral-7B which is under Apache 2.0. We use these solely
for research purposes.
size of 32, conduct a learning rate search among {1e−6, 5e−6, 1e−7}, and train for 3 epochs to report the best performance.
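For convenience, the reported hyperparameters can be collected into a single configuration sketch; the key names are ours, and anything not stated in this section (e.g., optimizer settings) is deliberately omitted.

```python
# Compact restatement of the hyperparameters reported in Section 4.4 (assumed key names).
SELF_EXPLORE_CONFIG = {
    "rft_sampling": {"num_samples_N": 100, "temperature": 0.7},
    "step_exploration": {"completions_per_step_k": 4, "temperature": 0.7},
    "sft": {"batch_size": 64, "lr_search": [1e-6, 1e-5], "epochs_reported_up_to": 5},
    "dpo": {"batch_size": 32, "lr_search": [1e-6, 5e-6, 1e-7], "epochs": 3},
}
```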
**5** **Results**
**5.1** **Main Results**
As shown in Figure 3, SELF-EXPLORE shows
the highest performance in MATH and GSM8K
compared to other methods. In particular, our method shows a 13.19%, 10.23%, and 11.30% increase on GSM8K and a 1.98%, 3.16%, and 3.54% increase on MATH compared to Supervised Fine-Tuning (SFT) for each model. Also, it consistently performs better than training DPO with outcome-supervised rewards from Dpair, which shows the strength of our step-level reward.
Meanwhile, DPO performs worse than RFT in
the MATH dataset for Llemma and DeepSeek-Math.
Note that this does not mean that DPO brought performance degradation, but rather RFT (1 epoch) +
DPO achieved less performance than the optimal
checkpoint achieved by RFT alone. For instance,
when DPO was applied to the one-epoch RFT
checkpoint, the performance showed a marginal
increase from 34.82 to 34.92, whereas applying
SELF-EXPLORE to the same checkpoint achieved
37.68. Unlike the granular supervision provided by
SELF-EXPLORE, we hypothesize that outcome supervision offers a significantly weaker training signal. This weaker signal is more challenging for the
model to interpret and utilize effectively, making it
harder to guide the model towards a successful policy. This may rather lead to reward exploitation or
| Data Type | GSM8K | MATH |
|---|---|---|
| Pairwise | 74.83 | 34.92 |
| Granular Pairwise | **78.47** | **37.68** |
| – Choose only First Step | 75.74 | 35.76 |
| – Reject All | 75.89 | 36.82 |

Table 1: DeepSeek-Math's GSM8K and MATH test set accuracy when trained with DPO on various types of preference data.
undesired penalization of correct steps that may not
necessarily improve its general reasoning ability.
We also note that the performance gain in MATH
is much lower when compared to GSM8K, which
is primarily due to its difficulty. Not only the task
itself is more inherently challenging, but also the
training dataset is limited in size, which is then
tested against a large pool of test problems. We
hypothesize that low performance of MRFT as
both generator and completer prevents an effective
exploration process when conducting both overall generation and step-level search. In fact, for
the MATH dataset, we observe number of unique
question-level samples in DRFT resulting significantly less. For more details about the dataset
statistics, please refer to Appendix D.
**5.2** **Step-Level Reward Design**
To better justify our design for the step-level fine-grained reward, we conducted tests on DeepSeek-Math using two additional settings from our current
dataset, Dg-pair. 1) Choose Only First Step: For the
new chosen sample, we take only the first correct
_step, rather than the entire completion. This ap-_
proach aligns with the new rejected sample, where
we only minimize the likelihood of the first pit
alone. 2) Reject All: For the new rejected sample,
we reject the first pit along with its all subsequent
steps. We no longer regard the steps after the first
pit as irrelevant; instead, we aim to reduce their
likelihood as well.
As shown in Table 1, we observe that training
with our fine-grained reward yields the best performance on both datasets. While the two other settings perform better than training with the outcome-supervised pairwise dataset, they both result in suboptimal performance. This again highlights the
idea that the learning signal becomes the most effective when maximally utilizing the whole correct
solution while decreasing only the first pit, which
is in line with the eq. 4.
| Dataset | k = 4 | k = 8 | k = 16 | k = 32 |
|---|---|---|---|---|
| GSM8K | **70.96** | 69.9 | 70.81 | 70.05 |
| MATH | **17.48** | 17.4 | 17.44 | 17.10 |

Table 2: Performance of Mistral-7B for GSM8K and MATH datasets, with varying exploration size k.
**6** **Analysis**
**6.1** **Ablation Studies**
**Effect of Exploration Space We further analyze**
whether larger exploration space leads to a better performance. Specifically, we aim to analyze
whether steps in the rejected sample which have **low, non-zero total expected reward** (i.e., a low probability of reaching the correct answer) should not be discouraged. These could be found by exploring more paths with a larger k. On the other hand, one
could argue that it is better to prevent the model
from going through such path from the outset by
rigorously evaluating each step against a strict standard of smaller k. Therefore, we test Mistral-7B
with varying step-level exploration size k among
_{4, 8, 16, 32}, with which we accordingly build_
each Dg-pair and train the target model with the
DPO objective.
As shown in Table 2, we see that increasing the exploration size does not lead to a performance increase, but rather often leads to degradation. First-pit detection indeed occurs in later stages when using a larger exploration space - for instance, for the MATH dataset, the mean index of $s^w$ becomes 1.86 → 2.19 → 2.61 → 3.13 with increasing k values. However, this does not necessarily extend to a better resulting model performance.
We believe that while it may be technically _feasible_ to reach an answer through a certain step, it does not necessarily mean that it is favorable. For instance, if a model has a high probability ϵ of falling into the pit after a given correct step (i.e., it tends to associate post-sequences that are logically incorrect), sometimes it may be more effective to
avoid such step from the beginning, if there are
other correct alternatives that can lead to the correct answer with less future risk. In this manner,
we hypothesize it is favorable to optimize the steps
with high total expected rewards, or otherwise it
may introduce unnecessary noise.
**Effect of Explorer We also investigate the poten-**
tial of enhancing model performance by adopting a
different explorer (or supervisor). Current labeling
| Method | Acc. |
|---|---|
| RFT | 63.68 |
| DPO | 66.64 |
| SELF-EXPLORE: Completers | |
| MistralSFT | 67.70 |
| MistralRFT (Ours) | 68.46 |
| DeepSeekRFT | 66.79 |
| GPT-4 | **69.14** |

Table 3: GSM8K test set accuracy of Mistral-7B when trained with DPO on 5.8K instances supervised by different completers.
method guarantees a fairly reasonable step-level
accuracy (Wang et al., 2024a), yet as Dg-pair data
quality heavily depends on the explorer’s capability,
we hypothesize that our final model performance
may be bottlenecked by the explorer’s limitations.
To this end, we train the DPO objective on Mistral-7B MRFT with Dg-pair completed by a
range of models, i.e. MistralSFT, MistralRFT,
Deepseek-MathRFT, and GPT-4 (OpenAI et al.,
2023). We use the same step-level exploration
approach in SELF-EXPLORE except for GPT-4,
which showed a tendency to identify the wrong step instead of completing from the given steps even when provided with explicit instructions. Therefore, we directly prompted GPT-4 to pinpoint the first wrong step and to generate a correct sequence from there while ensuring it maintains the original style of the preceding steps. To leverage GPT-4
as the oracle completer, we curated a specialized
subset of Dpair to start with. We first chose one
sample for each unique problem xi ∈ Dpair, and only included samples where GPT-4 successfully arrived at the correct conclusion, resulting in a total
of 5.8K samples.
As shown in Table 3, we see that applying DPO with either Dpair or Dg-pair results in lower performance due to the dataset's smaller size. Yet, we
observe that SELF-EXPLORE still performs better
than outcome-supervised DPO in small-scale. Also,
while DeepSeekRFT itself performs better as a _generator_ than MistralRFT (i.e., 71.42 vs. 63.68), as a _completer_ for MistralRFT it yields lower efficacy than MistralRFT itself. We deduce this may be due to the fact that DPO generally works better when the training data, especially the chosen completions, are closer to its distribution, which is also suggested
by the common practice of training SFT for one
epoch prior to DPO (Rafailov et al., 2023; Yuan
Figure 4: Mistral-7B performance when trained with different preference learning objectives using outcome-level supervision (Dpair) or SELF-EXPLORE (Dg-pair).
et al., 2024).
Finally, we observe that using the oracle completer GPT-4 results in a better final model performance than using the same model's MRFT. We believe that as the completions generated by GPT-4 do not fully represent the target model's distribution, if the completions were generated by a hypothetical oracle MRFT of the same model, the performance would have been even higher. We believe this suggests that our method could be further improved with more robust exploration methods.
**Effect of Objective Function We also ana-**
lyze whether the effectiveness of our fine-grained
data can be extended to other preference learning
objectives, such as IPO (Azar et al., 2023) and
KTO (Ethayarajh et al., 2024). With other settings
equal, we train Mistral-7B’s MRFT using Dpair and
_Dg-pair, for 1 epoch and τ = 0.01 for IPO._
In Figure 4, we see that for both datasets, using fine-grained supervision consistently results in better model performance than using outcome-supervised pairwise data. This shows the robustness of SELF-EXPLORE across various objectives, highlighting the general effectiveness of our fine-grained data. We have also experimented with high values of τ for IPO and ORPO (Hong et al., 2024); however, they showed degraded performance for both types of supervision.[2]
**6.2** **Qualitative Analysis**
We also qualitatively analyze whether the numerical performance gains also translate into improved
solution quality. To do so, we randomly select 100
questions from the GSM8K test set[3] and generate re-
2We posit that the efficacy of self-training hinges on the
introduction of a strong distinct positive signal for the chosen
examples and negative signal for the rejected ones.
3We use GSM8K to guarantee a robust evaluation performance of GPT-4.
sponse from DeepSeek-Math models trained with
RFT, DPO, and SELF-EXPLORE. Then, we use
GPT-4 as our evaluator using FLASK (Ye et al.,
2023), effectively assessing the given solution’s
logical robustness, efficiency, and correctness on a scale of 1-5 against the ground truth solution.

As shown in Table 4, we see that SELF-EXPLORE scores the best result in all criteria. Also,
the general trend in the table implies that increased
numerical performance does indicate a better quality in terms of correctness, robustness, and efficiency. We hypothesize that our method guides
the model to better utilize its available knowledge,
leading to the generation of solutions that are both
more efficient and robust. For additional details
and examples on FLASK evaluation, please see
Appendix G.
**7** **Conclusion**
In this paper, we propose SELF-EXPLORE, where LLMs can self-improve from a given initial human-curated dataset using fine-grained supervision.
By utilizing automatic self-exploratory annotation,
SELF-EXPLORE effectively integrates the roles of
the annotator, target, and reward models into a single system. On mathematical reasoning datasets
GSM8K and MATH, our method outperforms the traditional supervised fine-tuning (SFT) method by 11.57% and 2.89% on average across three different models, respectively.
that could more robustly generalize to a broader
reasoning space across various domains, with ends
of advancing the frontier of LLM reasoning.
**Limitations**
We propose a method on how to better exploit the _solution space_ to provide better fine-grained supervision for self-improving reasoning capabilities. Yet given a limited number of questions, which is a quite common scenario, preference learning with self-generated samples may be prone to exploitation (or overfitting) and thus increase top-1 performance at the expense of diminished test-time exploration robustness (see Appendix A). We leave it as future work to explore the potential of integrating a collection of diverse datasets as in Longpre et al. (2023) to mitigate this issue, thereby enabling the model to generalize across a broader question space.
Also, our work is currently conducted with 7B
| Model | Robustness | Correctness | Efficiency |
|---|---|---|---|
| RFT | 3.87 | 3.86 | 4.07 |
| DPO | 4.19 | 4.15 | 4.35 |
| Self-Explore | **4.27** | **4.28** | **4.44** |

Table 4: Comparison of FLASK logical metrics across different training methods, using DeepSeek-Math on GSM8K.
pre-trained models and does not consider extensively fine-tuned CoT models or larger-scale architectures that have shown stronger reasoning capabilities (Yu et al., 2023b; Mitra et al., 2024b; Shao et al., 2024). We believe that for practical self-training applications, it is crucial to explore continuous training processes on these sophisticated models. We encourage future work to investigate how these open-sourced, advanced reasoning models can further enhance their self-improvement processes in a robust and effective manner.
**References**
Mohammad Gheshlaghi Azar, Mark Rowland, Bilal
Piot, Daniel Guo, Daniele Calandriello, Michal
Valko, and Rémi Munos. 2023. A general theoretical paradigm to understand learning from human
preferences. arXiv preprint arXiv:2310.12036.
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster,
Marco Dos Santos, Stephen McAleer, Albert Q.
Jiang, Jia Deng, Stella Biderman, and Sean Welleck.
2023. Llemma: An open language model for mathematics. arXiv preprint arXiv:2310.10631.
Zixiang Chen, Yihe Deng, Huizhuo Yuan, Kaixuan Ji,
and Quanquan Gu. 2024. Self-play fine-tuning converts weak language models to strong language models. arXiv preprint arXiv:2401.01335.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2019. Bert: Pre-training of deep
bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff,
Dan Jurafsky, and Douwe Kiela. 2024. Kto: Model
alignment as prospect theoretic optimization. arXiv
_preprint arXiv:2402.01306._
Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and
Tushar Khot. 2023. Specializing smaller language
models towards multi-step reasoning. arXiv preprint
_arXiv:2301.12726._
Arnav Gudibande, Eric Wallace, Charlie Snell, Xinyang
Geng, Hao Liu, Pieter Abbeel, Sergey Levine, and
Dawn Song. 2023. The false promise of imitating
proprietary llms. arXiv preprint arXiv:2305.15717.
Caglar Gulcehre, Tom Le Paine, Srivatsan Srinivasan, Ksenia Konyushkova, Lotte Weerts, Abhishek
Sharma, Aditya Siddhant, Alex Ahern, Miaosen
Wang, Chenjie Gu, Wolfgang Macherey, Arnaud
Doucet, Orhan Firat, and Nando de Freitas. 2023.
Reinforced self-training (rest) for language modeling.
_arXiv preprint arXiv:2308.08998._
Alex Havrilla, Yuqing Du, Sharath Chandra Raparthy,
Christoforos Nalmpantis, Jane Dwivedi-Yu, Maksym
Zhuravinskyi, Eric Hambro, Sainbayar Sukhbaatar,
and Roberta Raileanu. 2024a. Teaching large language models to reason with reinforcement learning.
_arXiv preprint arXiv:2403.04642._
Alex Havrilla, Sharath Raparthy, Christoforus Nalmpantis, Jane Dwivedi-Yu, Maksym Zhuravinskyi,
Eric Hambro, and Roberta Railneau. 2024b. Glore:
When, where, and how to improve llm reasoning
via global and local refinements. _arXiv preprint_
_arXiv:2402.10963._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the math dataset. arXiv preprint
_arXiv:2103.03874._
Jiwoo Hong, Noah Lee, and James Thorne. 2024. Orpo:
Monolithic preference optimization without reference model. arXiv preprint arXiv:2403.07691.
Arian Hosseini, Xingdi Yuan, Nikolay Malkin, Aaron
Courville, Alessandro Sordoni, and Rishabh Agarwal. 2024. V-star: Training verifiers for self-taught
reasoners. arXiv preprint arXiv:2402.06457.
Hamish Ivison, Yizhong Wang, Valentina Pyatkin,
Nathan Lambert, Matthew Peters, Pradeep Dasigi,
Joel Jang, David Wadden, Noah A. Smith, Iz Beltagy, and Hannaneh Hajishirzi. 2023. Camels in a
changing climate: Enhancing lm adaptation with tulu
2. arXiv preprint arXiv:2311.10702.
Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego
de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud,
Marie-Anne Lachaux, Pierre Stock, Teven Le Scao,
Thibaut Lavril, Thomas Wang, Timothée Lacroix,
and William El Sayed. 2023. Mistral 7b. _arXiv_
_preprint arXiv:2310.06825._
Fangkai Jiao, Chengwei Qin, Zhengyuan Liu, Nancy F.
Chen, and Shafiq Joty. 2024. Learning planningbased reasoning by trajectories collection and
process reward synthesizing. _arXiv preprint_
_arXiv:2402.00658._
Seungone Kim, Se June Joo, Yul Jang, Hyungjoo Chae,
and Jinyoung Yeo. 2023a. Cotever: Chain of thought
prompting annotation toolkit for explanation verification. arXiv preprint arXiv:2303.03628.
Seungone Kim, Se June Joo, Doyoung Kim, Joel
Jang, Seonghyeon Ye, Jamin Shin, and Minjoon
Seo. 2023b. The cot collection: Improving zeroshot and few-shot learning of language models
via chain-of-thought fine-tuning. _arXiv preprint_
_arXiv:2305.14045._
Robert Kirk, Ishita Mediratta, Christoforos Nalmpantis,
Jelena Luketina, Eric Hambro, Edward Grefenstette,
and Roberta Raileanu. 2024. Understanding the effects of rlhf on llm generalisation and diversity. arXiv
_preprint arXiv:2310.06452._
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2023. Large language models are zero-shot reasoners. arXiv preprint
_arXiv:2205.11916._
Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying
Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E.
Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language
model serving with pagedattention. arXiv preprint
_arXiv:2309.06180._
Aitor Lewkowycz, Anders Andreassen, David Dohan,
Ethan Dyer, Henryk Michalewski, Vinay Ramasesh,
Ambrose Slone, Cem Anil, Imanol Schlag, Theo
Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy
Gur-Ari, and Vedant Misra. 2022. Solving quantitative reasoning problems with language models. arXiv
_preprint arXiv:2206.14858._
Chen Li, Weiqi Wang, Jingcheng Hu, Yixuan Wei, Nanning Zheng, Han Hu, Zheng Zhang, and Houwen
Peng. 2024. Common 7b language models already
possess strong math capabilities. _arXiv preprint_
_arXiv:2403.04706._
Shiyang Li, Jianshu Chen, Yelong Shen, Zhiyu Chen,
Xinlu Zhang, Zekun Li, Hong Wang, Jing Qian,
Baolin Peng, Yi Mao, Wenhu Chen, and Xifeng
Yan. 2022. Explanations from large language models make small reasoners better. _arXiv preprint_
_arXiv:2210.06726._
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri
Edwards, Bowen Baker, Teddy Lee, Jan Leike,
John Schulman, Ilya Sutskever, and Karl Cobbe.
2023. Let’s verify step by step. _arXiv preprint_
_arXiv:2305.20050._
Bingbin Liu, Sebastien Bubeck, Ronen Eldan, Janardhan Kulkarni, Yuanzhi Li, Anh Nguyen, Rachel Ward,
and Yi Zhang. 2023a. Tinygsm: achieving >80% on
gsm8k with small language models. arXiv preprint
_arXiv:2312.09241._
Jiacheng Liu, Andrew Cohen, Ramakanth Pasunuru,
Yejin Choi, Hannaneh Hajishirzi, and Asli Celikyilmaz. 2023b. Don’t throw away your value
model! making ppo even better via value-guided
monte-carlo tree search decoding. arXiv preprint
_arXiv:2309.15028._
Shayne Longpre, Le Hou, Tu Vu, Albert Webson,
Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V.
Le, Barret Zoph, Jason Wei, and Adam Roberts.
2023. The flan collection: Designing data and methods for effective instruction tuning. arXiv preprint
_arXiv:2301.13688._
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei
Lin, Shifeng Chen, and Dongmei Zhang. 2023. Wizardmath: Empowering mathematical reasoning for
large language models via reinforced evol-instruct.
_arXiv preprint arXiv:2308.09583._
Arindam Mitra, Luciano Del Corro, Shweti Mahajan,
Andres Codas, Clarisse Simoes, Sahaj Agarwal, Xuxi
Chen, Anastasia Razdaibiedina, Erik Jones, Kriti Aggarwal, Hamid Palangi, Guoqing Zheng, Corby Rosset, Hamed Khanpour, and Ahmed Awadallah. 2023.
Orca 2: Teaching small language models how to reason. arXiv preprint arXiv:2311.11045.
Arindam Mitra, Hamed Khanpour, Corby Rosset, and
Ahmed Awadallah. 2024a. Orca-math: Unlocking
the potential of slms in grade school math. arXiv
_preprint arXiv:2402.14830._
Arindam Mitra, Hamed Khanpour, Corby Rosset, and
Ahmed Awadallah. 2024b. Orca-math: Unlocking
the potential of slms in grade school math. arXiv
_preprint arXiv:2402.14830._
Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, and Ahmed
Awadallah. 2023. Orca: Progressive learning from
complex explanation traces of gpt-4. arXiv preprint
_arXiv:2306.02707._
Ansong Ni, Jeevana Priya Inala, Chenglong Wang, Oleksandr Polozov, Christopher Meek, Dragomir Radev,
and Jianfeng Gao. 2023. Learning math reasoning
from self-sampled correct and partially-correct solutions. arXiv preprint arXiv:2205.14318.
OpenAI, :, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, Igor Babuschkin,
Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mo Bavarian, Jeff Belgum, Irwan Bello,
Jake Berdine, Gabriel Bernadett-Shapiro, Christopher Berner, Lenny Bogdonoff, Oleg Boiko, Madelaine Boyd, Anna-Luisa Brakman, Greg Brockman,
Tim Brooks, Miles Brundage, Kevin Button, Trevor
Cai, Rosie Campbell, Andrew Cann, Brittany Carey,
Chelsea Carlson, Rory Carmichael, Brooke Chan,
Che Chang, Fotis Chantzis, Derek Chen, Sully Chen,
Ruby Chen, Jason Chen, Mark Chen, Ben Chess,
Chester Cho, Casey Chu, Hyung Won Chung, Dave
Cummings, Jeremiah Currier, Yunxing Dai, Cory
Decareaux, Thomas Degry, Noah Deutsch, Damien
Deville, Arka Dhar, David Dohan, Steve Dowling, Sheila Dunning, Adrien Ecoffet, Atty Eleti,
Tyna Eloundou, David Farhi, Liam Fedus, Niko
Felix, Simón Posada Fishman, Juston Forte, Isabella Fulford, Leo Gao, Elie Georges, Christian
Gibson, Vik Goel, Tarun Gogineni, Gabriel Goh,
Rapha Gontijo-Lopes, Jonathan Gordon, Morgan
Grafstein, Scott Gray, Ryan Greene, Joshua Gross,
Shixiang Shane Gu, Yufei Guo, Chris Hallacy, Jesse
Han, Jeff Harris, Yuchen He, Mike Heaton, Johannes Heidecke, Chris Hesse, Alan Hickey, Wade
Hickey, Peter Hoeschele, Brandon Houghton, Kenny
Hsu, Shengli Hu, Xin Hu, Joost Huizinga, Shantanu
Jain, Shawn Jain, Joanne Jang, Angela Jiang, Roger
Jiang, Haozhun Jin, Denny Jin, Shino Jomoto, Billie
Jonn, Heewoo Jun, Tomer Kaftan, Łukasz Kaiser,
Ali Kamali, Ingmar Kanitscheider, Nitish Shirish
Keskar, Tabarak Khan, Logan Kilpatrick, Jong Wook
Kim, Christina Kim, Yongjik Kim, Hendrik Kirchner, Jamie Kiros, Matt Knight, Daniel Kokotajlo,
Łukasz Kondraciuk, Andrew Kondrich, Aris Konstantinidis, Kyle Kosic, Gretchen Krueger, Vishal
Kuo, Michael Lampe, Ikai Lan, Teddy Lee, Jan
Leike, Jade Leung, Daniel Levy, Chak Ming Li,
Rachel Lim, Molly Lin, Stephanie Lin, Mateusz
Litwin, Theresa Lopez, Ryan Lowe, Patricia Lue,
Anna Makanju, Kim Malfacini, Sam Manning, Todor
Markov, Yaniv Markovski, Bianca Martin, Katie
Mayer, Andrew Mayne, Bob McGrew, Scott Mayer
McKinney, Christine McLeavey, Paul McMillan,
Jake McNeil, David Medina, Aalok Mehta, Jacob
Menick, Luke Metz, Andrey Mishchenko, Pamela
Mishkin, Vinnie Monaco, Evan Morikawa, Daniel
Mossing, Tong Mu, Mira Murati, Oleg Murk, David
Mély, Ashvin Nair, Reiichiro Nakano, Rajeev Nayak,
Arvind Neelakantan, Richard Ngo, Hyeonwoo Noh,
Long Ouyang, Cullen O’Keefe, Jakub Pachocki, Alex
Paino, Joe Palermo, Ashley Pantuliano, Giambattista Parascandolo, Joel Parish, Emy Parparita, Alex
Passos, Mikhail Pavlov, Andrew Peng, Adam Perelman, Filipe de Avila Belbute Peres, Michael Petrov,
Henrique Ponde de Oliveira Pinto, Michael, Pokorny, Michelle Pokrass, Vitchyr Pong, Tolly Powell, Alethea Power, Boris Power, Elizabeth Proehl,
Raul Puri, Alec Radford, Jack Rae, Aditya Ramesh,
Cameron Raymond, Francis Real, Kendra Rimbach,
Carl Ross, Bob Rotsted, Henri Roussez, Nick Ryder, Mario Saltarelli, Ted Sanders, Shibani Santurkar,
Girish Sastry, Heather Schmidt, David Schnurr, John
Schulman, Daniel Selsam, Kyla Sheppard, Toki
Sherbakov, Jessica Shieh, Sarah Shoker, Pranav
Shyam, Szymon Sidor, Eric Sigler, Maddie Simens,
Jordan Sitkin, Katarina Slama, Ian Sohl, Benjamin
Sokolowsky, Yang Song, Natalie Staudacher, Felipe Petroski Such, Natalie Summers, Ilya Sutskever,
Jie Tang, Nikolas Tezak, Madeleine Thompson, Phil
Tillet, Amin Tootoonchian, Elizabeth Tseng, Preston Tuggle, Nick Turley, Jerry Tworek, Juan Felipe Cerón Uribe, Andrea Vallone, Arun Vijayvergiya,
Chelsea Voss, Carroll Wainwright, Justin Jay Wang,
Alvin Wang, Ben Wang, Jonathan Ward, Jason Wei,
CJ Weinmann, Akila Welihinda, Peter Welinder, Jiayi Weng, Lilian Weng, Matt Wiethoff, Dave Willner,
Clemens Winter, Samuel Wolrich, Hannah Wong,
Lauren Workman, Sherwin Wu, Jeff Wu, Michael
Wu, Kai Xiao, Tao Xu, Sarah Yoo, Kevin Yu, Qiming Yuan, Wojciech Zaremba, Rowan Zellers, Chong
Zhang, Marvin Zhang, Shengjia Zhao, Tianhao
Zheng, Juntang Zhuang, William Zhuk, and Barret
Zoph. 2023. Gpt-4 technical report. arXiv preprint
_arXiv:2303.08774._
Arka Pal, Deep Karkhanis, Samuel Dooley, Manley Roberts, Siddartha Naidu, and Colin White.
2024. Smaug: Fixing failure modes of preference optimisation with dpo-positive. arXiv preprint
_arXiv:2402.13228._
Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano
Ermon, Christopher D. Manning, and Chelsea Finn.
2023. Direct preference optimization: Your language
model is secretly a reward model. arXiv preprint
_arXiv:2305.18290._
John Schulman, Filip Wolski, Prafulla Dhariwal,
Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint
_arXiv:1707.06347._
Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu,
Junxiao Song, Mingchuan Zhang, Y. K. Li, Y. Wu,
and Daya Guo. 2024. Deepseekmath: Pushing the
limits of mathematical reasoning in open language
models. arXiv preprint arXiv:2402.03300.
Samuel Stanton, Pavel Izmailov, Polina Kirichenko,
Alexander A. Alemi, and Andrew Gordon Wilson.
2021. Does knowledge distillation really work?
_arXiv preprint arXiv:2106.05945._
Gemini Team, Rohan Anil, Sebastian Borgeaud,
Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu
Soricut, Johan Schalkwyk, Andrew M. Dai, Anja
Hauth, Katie Millican, David Silver, Slav Petrov,
Melvin Johnson, Ioannis Antonoglou, Julian Schrittwieser, Amelia Glaese, Jilin Chen, Emily Pitler,
Timothy Lillicrap, Angeliki Lazaridou, Orhan Firat, James Molloy, Michael Isard, Paul R. Barham,
Tom Hennigan, Benjamin Lee, Fabio Viola, Malcolm
Reynolds, Yuanzhong Xu, Ryan Doherty, Eli Collins,
Clemens Meyer, Eliza Rutherford, Erica Moreira,
Kareem Ayoub, Megha Goel, George Tucker, Enrique Piqueras, Maxim Krikun, Iain Barr, Nikolay
Savinov, Ivo Danihelka, Becca Roelofs, Anaïs White,
Anders Andreassen, Tamara von Glehn, Lakshman
Yagati, Mehran Kazemi, Lucas Gonzalez, Misha
Khalman, Jakub Sygnowski, Alexandre Frechette,
Charlotte Smith, Laura Culp, Lev Proleev, Yi Luan,
Xi Chen, James Lottes, Nathan Schucher, Federico
Lebron, Alban Rrustemi, Natalie Clay, Phil Crone,
Tomas Kocisky, Jeffrey Zhao, Bartek Perz, Dian Yu,
Heidi Howard, Adam Bloniarz, Jack W. Rae, Han
Lu, Laurent Sifre, Marcello Maggioni, Fred Alcober,
Dan Garrette, Megan Barnes, Shantanu Thakoor, Jacob Austin, Gabriel Barth-Maron, William Wong,
Rishabh Joshi, Rahma Chaabouni, Deeni Fatiha,
Arun Ahuja, Ruibo Liu, Yunxuan Li, Sarah Cogan,
Jeremy Chen, Chao Jia, Chenjie Gu, Qiao Zhang,
Jordan Grimstad, Ale Jakse Hartman, Martin Chadwick, Gaurav Singh Tomar, Xavier Garcia, Evan
Senter, Emanuel Taropa, Thanumalayan Sankaranarayana Pillai, Jacob Devlin, Michael Laskin, Diego
de Las Casas, Dasha Valter, Connie Tao, Lorenzo
Blanco, Adrià Puigdomènech Badia, David Reitter,
Mianna Chen, Jenny Brennan, Clara Rivera, Sergey
Brin, Shariq Iqbal, Gabriela Surita, Jane Labanowski,
Abhi Rao, Stephanie Winkler, Emilio Parisotto, Yiming Gu, Kate Olszewska, Yujing Zhang, Ravi Addanki, Antoine Miech, Annie Louis, Laurent El
Shafey, Denis Teplyashin, Geoff Brown, Elliot Catt,
Nithya Attaluri, Jan Balaguer, Jackie Xiang, Pidong Wang, Zoe Ashwood, Anton Briukhov, Albert Webson, Sanjay Ganapathy, Smit Sanghavi,
Ajay Kannan, Ming-Wei Chang, Axel Stjerngren,
Josip Djolonga, Yuting Sun, Ankur Bapna, Matthew
Aitchison, Pedram Pejman, Henryk Michalewski,
Tianhe Yu, Cindy Wang, Juliette Love, Junwhan Ahn,
Dawn Bloxwich, Kehang Han, Peter Humphreys,
Thibault Sellam, James Bradbury, Varun Godbole,
Sina Samangooei, Bogdan Damoc, Alex Kaskasoli,
Sébastien M. R. Arnold, Vijay Vasudevan, Shubham
Agrawal, Jason Riesa, Dmitry Lepikhin, Richard Tanburn, Srivatsan Srinivasan, Hyeontaek Lim, Sarah
Hodkinson, Pranav Shyam, Johan Ferret, Steven
Hand, Ankush Garg, Tom Le Paine, Jian Li, Yujia Li, Minh Giang, Alexander Neitz, Zaheer Abbas,
Sarah York, Machel Reid, Elizabeth Cole, Aakanksha
Chowdhery, Dipanjan Das, Dominika Rogozi´nska,
Vitaly Nikolaev, Pablo Sprechmann, Zachary Nado,
Lukas Zilka, Flavien Prost, Luheng He, Marianne
Monteiro, Gaurav Mishra, Chris Welty, Josh Newlan,
Dawei Jia, Miltiadis Allamanis, Clara Huiyi Hu,
Raoul de Liedekerke, Justin Gilmer, Carl Saroufim,
Shruti Rijhwani, Shaobo Hou, Disha Shrivastava,
Anirudh Baddepudi, Alex Goldin, Adnan Ozturel,
Albin Cassirer, Yunhan Xu, Daniel Sohn, Devendra Sachan, Reinald Kim Amplayo, Craig Swanson, Dessie Petrova, Shashi Narayan, Arthur Guez,
Siddhartha Brahma, Jessica Landon, Miteyan Patel,
Ruizhe Zhao, Kevin Villela, Luyu Wang, Wenhao
Jia, Matthew Rahtz, Mai Giménez, Legg Yeung,
Hanzhao Lin, James Keeling, Petko Georgiev, Diana Mincu, Boxi Wu, Salem Haykal, Rachel Saputro, Kiran Vodrahalli, James Qin, Zeynep Cankara,
Abhanshu Sharma, Nick Fernando, Will Hawkins,
Behnam Neyshabur, Solomon Kim, Adrian Hutter, Priyanka Agrawal, Alex Castro-Ros, George
van den Driessche, Tao Wang, Fan Yang, Shuo yiin
Chang, Paul Komarek, Ross McIlroy, Mario Luˇci´c,
Guodong Zhang, Wael Farhan, Michael Sharman,
Paul Natsev, Paul Michel, Yong Cheng, Yamini
Bansal, Siyuan Qiao, Kris Cao, Siamak Shakeri,
Christina Butterfield, Justin Chung, Paul Kishan
Rubenstein, Shivani Agrawal, Arthur Mensch, Kedar
Soparkar, Karel Lenc, Timothy Chung, Aedan Pope,
Loren Maggiore, Jackie Kay, Priya Jhakra, Shibo
Wang, Joshua Maynez, Mary Phuong, Taylor Tobin,
Andrea Tacchetti, Maja Trebacz, Kevin Robinson,
Yash Katariya, Sebastian Riedel, Paige Bailey, Kefan Xiao, Nimesh Ghelani, Lora Aroyo, Ambrose
Slone, Neil Houlsby, Xuehan Xiong, Zhen Yang,
Elena Gribovskaya, Jonas Adler, Mateo Wirth, Lisa
Lee, Music Li, Thais Kagohara, Jay Pavagadhi, Sophie Bridgers, Anna Bortsova, Sanjay Ghemawat,
Zafarali Ahmed, Tianqi Liu, Richard Powell, Vijay
Bolina, Mariko Iinuma, Polina Zablotskaia, James
Besley, Da-Woon Chung, Timothy Dozat, Ramona
Comanescu, Xiance Si, Jeremy Greer, Guolong Su,
Martin Polacek, Raphaël Lopez Kaufman, Simon
Tokumine, Hexiang Hu, Elena Buchatskaya, Yingjie
Miao, Mohamed Elhawaty, Aditya Siddhant, Nenad
Tomasev, Jinwei Xing, Christina Greer, Helen Miller,
Shereen Ashraf, Aurko Roy, Zizhao Zhang, Ada Ma,
Angelos Filos, Milos Besta, Rory Blevins, Ted Klimenko, Chih-Kuan Yeh, Soravit Changpinyo, Jiaqi
Mu, Oscar Chang, Mantas Pajarskas, Carrie Muir,
Vered Cohen, Charline Le Lan, Krishna Haridasan,
Amit Marathe, Steven Hansen, Sholto Douglas, Rajkumar Samuel, Mingqiu Wang, Sophia Austin,
Chang Lan, Jiepu Jiang, Justin Chiu, Jaime Alonso
Lorenzo, Lars Lowe Sjösund, Sébastien Cevey,
Zach Gleicher, Thi Avrahami, Anudhyan Boral,
Hansa Srinivasan, Vittorio Selo, Rhys May, Konstantinos Aisopos, Léonard Hussenot, Livio Baldini
Soares, Kate Baumli, Michael B. Chang, Adrià Recasens, Ben Caine, Alexander Pritzel, Filip Pavetic,
Fabio Pardo, Anita Gergely, Justin Frye, Vinay
Ramasesh, Dan Horgan, Kartikeya Badola, Nora
Kassner, Subhrajit Roy, Ethan Dyer, Víctor Campos, Alex Tomala, Yunhao Tang, Dalia El Badawy,
Elspeth White, Basil Mustafa, Oran Lang, Abhishek Jindal, Sharad Vikram, Zhitao Gong, Sergi
Caelles, Ross Hemsley, Gregory Thornton, Fangxiaoyu Feng, Wojciech Stokowiec, Ce Zheng, Phoebe
Thacker, Ça˘glar Ünlü, Zhishuai Zhang, Mohammad Saleh, James Svensson, Max Bileschi, Piyush
Patil, Ankesh Anand, Roman Ring, Katerina Tsihlas,
Arpi Vezer, Marco Selvi, Toby Shevlane, Mikel Rodriguez, Tom Kwiatkowski, Samira Daruki, Keran
Rong, Allan Dafoe, Nicholas FitzGerald, Keren
Gu-Lemberg, Mina Khan, Lisa Anne Hendricks,
Marie Pellat, Vladimir Feinberg, James CobonKerr, Tara Sainath, Maribeth Rauh, Sayed Hadi
Hashemi, Richard Ives, Yana Hasson, YaGuang
Li, Eric Noland, Yuan Cao, Nathan Byrd, Le Hou,
Qingze Wang, Thibault Sottiaux, Michela Paganini,
Jean-Baptiste Lespiau, Alexandre Moufarek, Samer
Hassan, Kaushik Shivakumar, Joost van Amersfoort, Amol Mandhane, Pratik Joshi, Anirudh
Goyal, Matthew Tung, Andrew Brock, Hannah Sheahan, Vedant Misra, Cheng Li, Nemanja Raki´cevi´c,
Mostafa Dehghani, Fangyu Liu, Sid Mittal, Junhyuk
Oh, Seb Noury, Eren Sezener, Fantine Huot, Matthew
Lamm, Nicola De Cao, Charlie Chen, Gamaleldin
Elsayed, Ed Chi, Mahdis Mahdieh, Ian Tenney, Nan
Hua, Ivan Petrychenko, Patrick Kane, Dylan Scandinaro, Rishub Jain, Jonathan Uesato, Romina Datta,
Adam Sadovsky, Oskar Bunyan, Dominik Rabiej,
Shimu Wu, John Zhang, Gautam Vasudevan, Edouard
Leurent, Mahmoud Alnahlawi, Ionut Georgescu, Nan
Wei, Ivy Zheng, Betty Chan, Pam G Rabinovitch,
Piotr Stanczyk, Ye Zhang, David Steiner, Subhajit
Naskar, Michael Azzam, Matthew Johnson, Adam
Paszke, Chung-Cheng Chiu, Jaume Sanchez Elias,
Afroz Mohiuddin, Faizan Muhammad, Jin Miao,
Andrew Lee, Nino Vieillard, Sahitya Potluri, Jane
Park, Elnaz Davoodi, Jiageng Zhang, Jeff Stanway,
Drew Garmon, Abhijit Karmarkar, Zhe Dong, Jong
Lee, Aviral Kumar, Luowei Zhou, Jonathan Evens,
William Isaac, Zhe Chen, Johnson Jia, Anselm
Levskaya, Zhenkai Zhu, Chris Gorgolewski, Peter
Grabowski, Yu Mao, Alberto Magni, Kaisheng Yao,
Javier Snaider, Norman Casagrande, Paul Suganthan, Evan Palmer, Geoffrey Irving, Edward Loper,
Manaal Faruqui, Isha Arkatkar, Nanxin Chen, Izhak
Shafran, Michael Fink, Alfonso Castaño, Irene Giannoumis, Wooyeol Kim, Mikołaj Rybi´nski, Ashwin
Sreevatsa, Jennifer Prendki, David Soergel, Adrian
Goedeckemeyer, Willi Gierke, Mohsen Jafari, Meenu
Gaba, Jeremy Wiesner, Diana Gage Wright, Yawen
Wei, Harsha Vashisht, Yana Kulizhskaya, Jay Hoover,
Maigo Le, Lu Li, Chimezie Iwuanyanwu, Lu Liu,
Kevin Ramirez, Andrey Khorlin, Albert Cui, Tian
LIN, Marin Georgiev, Marcus Wu, Ricardo Aguilar,
Keith Pallo, Abhishek Chakladar, Alena Repina, Xihui Wu, Tom van der Weide, Priya Ponnapalli, Caroline Kaplan, Jiri Simsa, Shuangfeng Li, Olivier
Dousse, Fan Yang, Jeff Piper, Nathan Ie, Minnie
Lui, Rama Pasumarthi, Nathan Lintz, Anitha Vijayakumar, Lam Nguyen Thiet, Daniel Andor, Pedro
Valenzuela, Cosmin Paduraru, Daiyi Peng, Katherine Lee, Shuyuan Zhang, Somer Greene, Duc Dung
Nguyen, Paula Kurylowicz, Sarmishta Velury, Sebastian Krause, Cassidy Hardin, Lucas Dixon, Lili
Janzer, Kiam Choo, Ziqiang Feng, Biao Zhang,
Achintya Singhal, Tejasi Latkar, Mingyang Zhang,
Quoc Le, Elena Allica Abellan, Dayou Du, Dan McKinnon, Natasha Antropova, Tolga Bolukbasi, Orgad
Keller, David Reid, Daniel Finchelstein, Maria Abi
Raad, Remi Crocker, Peter Hawkins, Robert Dadashi,
Colin Gaffney, Sid Lall, Ken Franko, Egor Filonov,
Anna Bulanova, Rémi Leblond, Vikas Yadav, Shirley
Chung, Harry Askham, Luis C. Cobo, Kelvin Xu,
Felix Fischer, Jun Xu, Christina Sorokin, Chris Alberti, Chu-Cheng Lin, Colin Evans, Hao Zhou, Alek
Dimitriev, Hannah Forbes, Dylan Banarse, Zora
Tung, Jeremiah Liu, Mark Omernick, Colton Bishop,
Chintu Kumar, Rachel Sterneck, Ryan Foley, Rohan
Jain, Swaroop Mishra, Jiawei Xia, Taylor Bos, Geoffrey Cideron, Ehsan Amid, Francesco Piccinno,
Xingyu Wang, Praseem Banzal, Petru Gurita, Hila
Noga, Premal Shah, Daniel J. Mankowitz, Alex
Polozov, Nate Kushman, Victoria Krakovna, Sasha
Brown, MohammadHossein Bateni, Dennis Duan,
Vlad Firoiu, Meghana Thotakuri, Tom Natan, Anhad Mohananey, Matthieu Geist, Sidharth Mudgal,
Sertan Girgin, Hui Li, Jiayu Ye, Ofir Roval, Reiko
Tojo, Michael Kwong, James Lee-Thorp, Christopher Yew, Quan Yuan, Sumit Bagri, Danila Sinopalnikov, Sabela Ramos, John Mellor, Abhishek Sharma,
Aliaksei Severyn, Jonathan Lai, Kathy Wu, HengTze Cheng, David Miller, Nicolas Sonnerat, Denis
Vnukov, Rory Greig, Jennifer Beattie, Emily Caveness, Libin Bai, Julian Eisenschlos, Alex Korchemniy, Tomy Tsai, Mimi Jasarevic, Weize Kong, Phuong
Dao, Zeyu Zheng, Frederick Liu, Fan Yang, Rui
Zhu, Mark Geller, Tian Huey Teh, Jason Sanmiya,
Evgeny Gladchenko, Nejc Trdin, Andrei Sozanschi,
Daniel Toyama, Evan Rosen, Sasan Tavakkol, Linting Xue, Chen Elkind, Oliver Woodman, John Carpenter, George Papamakarios, Rupert Kemp, Sushant
Kafle, Tanya Grunina, Rishika Sinha, Alice Talbert, Abhimanyu Goyal, Diane Wu, Denese OwusuAfriyie, Cosmo Du, Chloe Thornton, Jordi PontTuset, Pradyumna Narayana, Jing Li, Sabaer Fatehi,
John Wieting, Omar Ajmeri, Benigno Uria, Tao Zhu,
Yeongil Ko, Laura Knight, Amélie Héliou, Ning
Niu, Shane Gu, Chenxi Pang, Dustin Tran, Yeqing
Li, Nir Levine, Ariel Stolovich, Norbert Kalb, Rebeca Santamaria-Fernandez, Sonam Goenka, Wenny
Yustalim, Robin Strudel, Ali Elqursh, Balaji Lakshminarayanan, Charlie Deck, Shyam Upadhyay, Hyo
Lee, Mike Dusenberry, Zonglin Li, Xuezhi Wang,
Kyle Levin, Raphael Hoffmann, Dan HoltmannRice, Olivier Bachem, Summer Yue, Sho Arora,
Eric Malmi, Daniil Mirylenka, Qijun Tan, Christy
Koh, Soheil Hassas Yeganeh, Siim Põder, Steven
Zheng, Francesco Pongetti, Mukarram Tariq, Yanhua Sun, Lucian Ionita, Mojtaba Seyedhosseini,
Pouya Tafti, Ragha Kotikalapudi, Zhiyu Liu, Anmol Gulati, Jasmine Liu, Xinyu Ye, Bart Chrzaszcz,
Lily Wang, Nikhil Sethi, Tianrun Li, Ben Brown,
Shreya Singh, Wei Fan, Aaron Parisi, Joe Stanton,
Chenkai Kuang, Vinod Koverkathu, Christopher A.
Choquette-Choo, Yunjie Li, TJ Lu, Abe Ittycheriah,
Prakash Shroff, Pei Sun, Mani Varadarajan, Sanaz Bahargam, Rob Willoughby, David Gaddy, Ishita Dasgupta, Guillaume Desjardins, Marco Cornero, Brona
Robenek, Bhavishya Mittal, Ben Albrecht, Ashish
Shenoy, Fedor Moiseev, Henrik Jacobsson, Alireza
Ghaffarkhah, Morgane Rivière, Alanna Walton, Clément Crepy, Alicia Parrish, Yuan Liu, Zongwei
Zhou, Clement Farabet, Carey Radebaugh, Praveen
Srinivasan, Claudia van der Salm, Andreas Fidjeland, Salvatore Scellato, Eri Latorre-Chimoto, Hanna
Klimczak-Pluci´nska, David Bridson, Dario de Cesare, Tom Hudson, Piermaria Mendolicchio, Lexi
Walker, Alex Morris, Ivo Penchev, Matthew Mauger,
Alexey Guseynov, Alison Reid, Seth Odoom, Lucia
Loher, Victor Cotruta, Madhavi Yenugula, Dominik
Grewe, Anastasia Petrushkina, Tom Duerig, Antonio
Sanchez, Steve Yadlowsky, Amy Shen, Amir Globerson, Adam Kurzrok, Lynette Webb, Sahil Dua, Dong
Li, Preethi Lahoti, Surya Bhupatiraju, Dan Hurt, Haroon Qureshi, Ananth Agarwal, Tomer Shani, Matan
Eyal, Anuj Khare, Shreyas Rammohan Belle, Lei
Wang, Chetan Tekur, Mihir Sanjay Kale, Jinliang
Wei, Ruoxin Sang, Brennan Saeta, Tyler Liechty,
Yi Sun, Yao Zhao, Stephan Lee, Pandu Nayak, Doug
Fritz, Manish Reddy Vuyyuru, John Aslanides, Nidhi
Vyas, Martin Wicke, Xiao Ma, Taylan Bilal, Evgenii Eltyshev, Daniel Balle, Nina Martin, Hardie
Cate, James Manyika, Keyvan Amiri, Yelin Kim,
Xi Xiong, Kai Kang, Florian Luisier, Nilesh Tripuraneni, David Madras, Mandy Guo, Austin Waters,
Oliver Wang, Joshua Ainslie, Jason Baldridge, Han
Zhang, Garima Pruthi, Jakob Bauer, Feng Yang, Riham Mansour, Jason Gelman, Yang Xu, George
Polovets, Ji Liu, Honglong Cai, Warren Chen, XiangHai Sheng, Emily Xue, Sherjil Ozair, Adams Yu,
Christof Angermueller, Xiaowei Li, Weiren Wang, Julia Wiesinger, Emmanouil Koukoumidis, Yuan Tian,
Anand Iyer, Madhu Gurumurthy, Mark Goldenson,
Parashar Shah, MK Blake, Hongkun Yu, Anthony
Urbanowicz, Jennimaria Palomaki, Chrisantha Fernando, Kevin Brooks, Ken Durden, Harsh Mehta,
Nikola Momchev, Elahe Rahimtoroghi, Maria Georgaki, Amit Raul, Sebastian Ruder, Morgan Redshaw, Jinhyuk Lee, Komal Jalan, Dinghua Li, Ginger Perng, Blake Hechtman, Parker Schuh, Milad
Nasr, Mia Chen, Kieran Milan, Vladimir Mikulik, Trevor Strohman, Juliana Franco, Tim Green,
Demis Hassabis, Koray Kavukcuoglu, Jeffrey Dean,
and Oriol Vinyals. 2023. Gemini: A family of
highly capable multimodal models. arXiv preprint
_arXiv:2312.11805._
Shubham Toshniwal, Ivan Moshkov, Sean Narenthiran, Daria Gitman, Fei Jia, and Igor Gitman. 2024.
Openmathinstruct-1: A 1.8 million math instruction
tuning dataset. arXiv preprint arXiv:2402.10176.
Lewis Tunstall, Edward Beeching, Nathan Lambert,
Nazneen Rajani, Kashif Rasul, Younes Belkada,
Shengyi Huang, Leandro von Werra, Clémentine
Fourrier, Nathan Habib, Nathan Sarrazin, Omar Sanseviero, Alexander M. Rush, and Thomas Wolf. 2023.
Zephyr: Direct distillation of lm alignment. arXiv
_preprint arXiv:2310.16944._
Peiyi Wang, Lei Li, Zhihong Shao, R. X. Xu, Damai Dai,
Yifei Li, Deli Chen, Y. Wu, and Zhifang Sui. 2024a.
Math-shepherd: Verify and reinforce llms step-bystep without human annotations. _arXiv preprint_
_arXiv:2312.08935._
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le,
Ed Chi, Sharan Narang, Aakanksha Chowdhery, and
Denny Zhou. 2023. Self-consistency improves chain
of thought reasoning in language models. _arXiv_
_preprint arXiv:2203.11171._
Zihan Wang, Yunxuan Li, Yuexin Wu, Liangchen Luo,
Le Hou, Hongkun Yu, and Jingbo Shang. 2024b.
Multi-step problem solving through a verifier: An
empirical analysis on model-induced process supervision. arXiv preprint arXiv:2402.02658.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and
Denny Zhou. 2023. Chain-of-thought prompting
elicits reasoning in large language models. arXiv
_preprint arXiv:2201.11903._
Yuxi Xie, Kenji Kawaguchi, Yiran Zhao, Xu Zhao, MinYen Kan, Junxian He, and Qizhe Xie. 2023. Selfevaluation guided beam search for reasoning. arXiv
_preprint arXiv:2305.00633._
Seonghyeon Ye, Doyoung Kim, Sungdong Kim,
Hyeonbin Hwang, Seungone Kim, Yongrae Jo,
James Thorne, Juho Kim, and Minjoon Seo. 2023.
Flask: Fine-grained language model evaluation
based on alignment skill sets. _arXiv preprint_
_arXiv:2307.10928._
Fei Yu, Anningzhe Gao, and Benyou Wang. 2023a. Outcome-supervised verifiers for planning in mathematical reasoning. arXiv preprint arXiv:2311.09724.
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. 2023b. Metamath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284.
Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Xian Li, Sainbayar Sukhbaatar, Jing Xu, and Jason Weston. 2024. Self-rewarding language models. arXiv preprint arXiv:2401.10020.
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Keming Lu, Chuanqi Tan, Chang Zhou, and Jingren Zhou. 2023. Scaling relationship on learning mathematical reasoning with large language models. arXiv preprint arXiv:2308.01825.
Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah D. Goodman. 2022. Star: Bootstrapping reasoning with reasoning. arXiv preprint arXiv:2203.14465.
-----
**A** **Post-Training Distribution**
Here we analyze how the model's distribution changes after applying different training methods, including RFT, DPO, and SELF-EXPLORE. Specifically, we use our best-performing model, DeepSeek-Math, with a special focus on the MATH dataset, to explore potential directions for how LLMs could better self-improve in more advanced reasoning capabilities.

While we previously used greedy decoding to report the top-1 performance, here we sample 100 predictions per problem from the test set with a temperature of 0.7 and sort the generations by their overall sequence likelihood in descending order. Then, we report the performance using three different metrics, total accuracy, self-consistency (maj@k), and pass@k, in Figure 5.
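To make these metrics concrete, the sketch below shows one way to compute total accuracy, maj@k (self-consistency), and pass@k over the top-k highest-likelihood samples; the `problems` layout and the `extract_answer` helper are illustrative assumptions, not the authors' evaluation code.

```python
from collections import Counter

def extract_answer(solution: str) -> str:
    # Hypothetical helper: pull the final answer out of a generated solution,
    # e.g., the text following "The answer is".
    return solution.rsplit("The answer is", 1)[-1].strip().rstrip(".")

def evaluate_at_k(problems, k):
    """`problems` is a list of dicts with keys 'gold' (gold answer) and
    'samples' (generated solutions sorted by sequence likelihood)."""
    total_correct, maj_correct, pass_correct, n_sampled = 0, 0, 0, 0
    for p in problems:
        topk = [extract_answer(s) for s in p["samples"][:k]]
        correct = [a == p["gold"] for a in topk]
        # Total accuracy: fraction of all retained samples that are correct.
        total_correct += sum(correct)
        n_sampled += len(topk)
        # maj@k (self-consistency): the majority-voted answer is correct.
        majority_answer, _ = Counter(topk).most_common(1)[0]
        maj_correct += int(majority_answer == p["gold"])
        # pass@k: at least one of the top-k solutions is correct.
        pass_correct += int(any(correct))
    n = len(problems)
    return total_correct / n_sampled, maj_correct / n, pass_correct / n
```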
For the total accuracy, we observe a general trend of curves decreasing with the inclusion of samples with lower overall likelihood. Yet, DPO and SELF-EXPLORE display smaller reductions. Numerically, as k goes from 1 to 100, SELF-EXPLORE moves from 0.388 → 0.367, DPO from 0.367 → 0.337, and RFT from 0.376 → 0.336. We believe preference learning with self-generated samples minimizes this risk, as even generations sampled with comparatively lower likelihood eventually lead to the correct answer.
However, this comes at the cost of reduced sample diversity, a phenomenon that preference learning (i.e., RLHF) has previously been reported to promote (Kirk et al., 2024). We believe this effect intensifies further as we are training with our own generated data. To support this, we leverage BERT (Devlin et al., 2019) to extract embeddings of model generations and express solution diversity as the average per-input pairwise distance of embeddings, i.e., for the $i$-th sample, this is given as:
$$d_i = \frac{\sum_{j=1}^{N-1} \sum_{k=j+1}^{N} d(h_{i,j}, h_{i,k})}{N \cdot (N-1)}$$
where $h$ is the embedding and $N = 100$ in this case.
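As a reference, the sketch below computes this diversity score for a single input from its precomputed generation embeddings; the use of Euclidean distance is our assumption, since the distance function $d$ is not specified here.

```python
import numpy as np

def average_pairwise_distance(embeddings: np.ndarray) -> float:
    """`embeddings` has shape (N, d): one vector h_{i,j} per generation
    for a single input i (here N = 100)."""
    n = embeddings.shape[0]
    total = 0.0
    for j in range(n - 1):
        # Euclidean distance between generation j and every later generation k.
        diffs = embeddings[j + 1:] - embeddings[j]
        total += np.linalg.norm(diffs, axis=1).sum()
    # Normalize by N * (N - 1), following the formula above.
    return total / (n * (n - 1))
```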
We plot the distribution of $d_i$ for each training method in the boxplot shown in Figure 6. We observe a general decrease in embedding distances from left to right. In particular, DPO and SELF-EXPLORE display lower embedding distances than SFT and RFT, hinting at relatively reduced diversity. This phenomenon also explains why Pass@K for SFT and RFT is higher compared to the models trained with the preference learning objective, as SFT and RFT may engage in more exploration at test time.

[Figure 5 plots: accuracy over top-K predictions (from higher- to lower-probability samples) on GSM8K and MATH]

Figure 5: Performance of the DeepSeek-Math model on different datasets when trained with diverse training methods; we report three metrics: total accuracy, maj@k (i.e., self-consistency), and pass@k.
In addition, it is important to recognize that a policy characterized by reduced diversity may exhibit limited generalization capabilities, which could be seen as a drawback. Note that for Self-Consistency (maj@k), RFT and SFT surpass SELF-EXPLORE at K = 6 and K = 15, respectively. We find that the reason for this phenomenon is the concentration of the answer space stemming from the lack of solution diversity, as demonstrated in Appendix B. Our models trained with preference learning tend to heavily favor what they identify as an optimal answer. Specifically, the reward accuracy when training these models quickly converges to 1, as illustrated in Appendix E, indicating a potential reward exploitation that may limit the model's ability to generalize. We hypothesize that this stems from self-training focused on exploring a confined solution space, which may not effectively extend to a broader question space. Consequently, the solution distribution becomes skewed, leading to the emergence of overly confident peaks (modes) that may accurately represent the training data but fail to generalize to new, unseen questions during testing, as shown by the reduced diversity. In contrast, models trained with SFT or RFT adopt a more uniform distribution across potential answers, where marginalizing over answers allows slightly more pronounced peaks to emerge (i.e., Self-Consistency). Overall, these benefits appear diminished when training with a preference learning objective on self-generated data.
In fact, we also observe a similar pattern for the solution distribution in GSM8K. There is also less reduction in total accuracy with increasing k for models trained with the preference learning objective. This again can be explained as risk-minimization behavior. Regarding the other two metrics, we see that SFT and RFT models exhibit lower performance at lower k values, but they eventually converge to a similar level (maj@k) or even surpass it (pass@k) with increasing k. We hypothesize this trend again reflects the reduction in diversity within the model's predictions for DPO and SELF-
-----
**C.2** **Excluding Conclusion**
When we first ran DPO training, we observed model performance significantly degrading when the conclusion part was included within the rejected sample (Figure 2). In such cases, our trained model frequently presented self-contradictory statements in the conclusion, yielding random answers that were unrelated to the reasoning presented in the preceding steps. We believe this is due to the concurrent presence of definitive statements like "The answer is X" in the chosen sample and "The answer is Y" in the rejected sample, which confuses the model during training. Therefore, we decided to omit the conclusion section (+ eos token) from all rejected samples.
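A minimal sketch of this truncation step is shown below; the exact answer-marker string and eos token are assumptions for illustration and depend on the tokenizer and prompt format.

```python
EOS_TOKEN = "</s>"  # assumed eos token; depends on the tokenizer

def strip_conclusion(rejected_solution: str) -> str:
    """Remove the trailing 'The answer is ...' statement and the eos token
    from a rejected sample so it does not conflict with the chosen one."""
    text = rejected_solution.replace(EOS_TOKEN, "")
    marker = "The answer is"
    if marker in text:
        text = text[: text.rindex(marker)]
    return text.rstrip()
```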
**D** **Training Dataset Size**
In this section, we discuss the dataset size utilized for each training method and model. Despite the seemingly comparable number of training samples (Table 5), we highlight several observations based on the proportion of question-level unique instances in each dataset, as shown in Tables 6 and 7:

**1. Few incorrect samples for GSM8K.** Transitioning from the RFT to the paired dataset, there is a notable reduction in the number of unique questions for GSM8K compared to MATH. This occurred because in several instances the model generated all 100 solutions correctly, or there were fewer than four incorrect solutions. This overall hints at the scarcity of generated incorrect samples when training with the GSM8K dataset.

**2. Few correct samples for MATH.** Despite the model achieving high pass@k rates on the training set (over 90% for GSM8K and over 70% for MATH), the actual number of instances that pass is notably small for the MATH dataset. In particular, there is a large decline in Table 7 when considering the number of unique questions with at least four instances. This suggests that for many questions, the models barely reach the correct answer within 100 generations.
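For reference, the proportions reported in Tables 6 and 7 can be computed with a small helper like the one below, where `instance_counts` maps each unique training question to its number of retained instances for a given method; the data structure is a hypothetical illustration.

```python
def proportion_with_at_least(instance_counts: dict, n: int) -> float:
    """Fraction of unique training questions that have at least `n` instances."""
    kept = sum(1 for count in instance_counts.values() if count >= n)
    return kept / len(instance_counts)

# e.g., proportion_with_at_least(rft_counts, 1) and proportion_with_at_least(rft_counts, 4)
```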
**E** **DPO Training**
While in the original paper (Rafailov et al., 2023) DPO training displayed a win rate of the chosen completion over the rejected completion of around 60-70%, we observe in Figure 8 that the reward of the chosen sample quickly surpasses that of the rejected one in the early stages of our training, with the win rate converging to 1 in both
Figure 6: Average Per-Input Pairwise Distance of Embeddings of DeepSeek-Math, when trained with different methods.
EXPLORE. At the same time, we see that for DPO, the performance rather increases with the inclusion of lower-ranked predictions. This indicates a potential misalignment, which explains the need for granular supervision during training to obtain a better learning signal.
**B** **Answer Distribution**
On the left side of Figure 7, we see that the number of unique answers decreases in the order of SFT, RFT, DPO, and then SELF-EXPLORE. Meanwhile, on the right, we see that DPO and SELF-EXPLORE show the highest proportion of the dominant answer, suggesting a concentrated or skewed distribution of the answer space. These observations support the hypothesis that the model may exhibit overconfidence in its 'optimal' answers when preference learning is applied with self-generated solutions. Such confidence, without sufficient generalization power, could indicate potential overfitting to the training data.
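For illustration, the two statistics in Figure 7 (number of unique answers and proportion of the dominant answer) can be computed as follows; the input layout is hypothetical, not the authors' code.

```python
from collections import Counter

def answer_distribution_stats(answers_per_problem, k):
    """`answers_per_problem`: list of lists of final answers, one inner list
    per test problem, sorted by sequence likelihood."""
    unique_counts, dominant_props = [], []
    for answers in answers_per_problem:
        topk = answers[:k]
        counts = Counter(topk)
        unique_counts.append(len(counts))  # number of unique final answers
        # Share of the most common (dominant) answer among the top-k predictions.
        dominant_props.append(counts.most_common(1)[0][1] / len(topk))
    n = len(answers_per_problem)
    return sum(unique_counts) / n, sum(dominant_props) / n
```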
**C** **Pairwise Dataset Formation**
**C.1** **Maximum Pair Constraints**
We initially set no upper limit on the number of response pairs per problem in our dataset. However, preliminary analysis suggested that problems with nearly balanced correct and incorrect responses could potentially generate disproportionately many pairs, risking data overfitting. Thus, we decided to adopt a maximum threshold of eight pairs (N = 8) for each problem $x_i$. While such cases were not frequent, we adopted this strategy to ensure a more equitable distribution across different questions.
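As an illustration of this constraint, the sketch below forms (chosen, rejected) pairs for one problem and caps them at eight; the random subsampling used when the cap is exceeded is our assumption, since the exact selection rule is not specified here.

```python
import random
from itertools import product

MAX_PAIRS = 8  # maximum threshold N = 8 per problem

def build_pairs(correct_solutions, incorrect_solutions, seed=0):
    """Return at most MAX_PAIRS (chosen, rejected) pairs for one problem x_i."""
    all_pairs = list(product(correct_solutions, incorrect_solutions))
    if len(all_pairs) <= MAX_PAIRS:
        return all_pairs
    rng = random.Random(seed)
    return rng.sample(all_pairs, MAX_PAIRS)
```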
-----
Figure 7: Final answer diversity and the proportion of the dominant (most common) answer within top-k predictions
of DeepSeek-Math on MATH Test dataset.
datasets. We hypothesize that this occurs for two reasons. 1) The chosen completion is generated by $M_{\text{RFT}}$, which is closer to the target model's distribution, while the rejected one is generated by $M_{\text{SFT}}$. 2) Models can also quickly learn to distinguish the preference within the limited number of questions, which may nonetheless lead to overfitting.
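For reference, the reward accuracy (win rate) plotted in Figure 8 can be tracked as in the sketch below, using the DPO implicit reward $r(x, y) = \beta\,(\log \pi_\theta(y \mid x) - \log \pi_{\text{ref}}(y \mid x))$ from Rafailov et al. (2023); the default $\beta = 0.1$ and the assumption that per-sequence log-probabilities are precomputed are ours.

```python
import torch

def dpo_reward_accuracy(policy_chosen_logps, policy_rejected_logps,
                        ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """All inputs are 1-D tensors of summed per-sequence log-probabilities.
    Returns the fraction of pairs whose chosen reward exceeds the rejected reward."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    return (chosen_rewards > rejected_rewards).float().mean().item()
```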
**F** **Examples**
In Figure 9, we see that while the DPO model
concludes prematurely after the 5th step, falling
into a pit, the SELF-EXPLORE model continues to
generate subsequent steps robustly, ultimately arriving at the correct answer. This sample effectively
illustrates how our method achieves step-level robustness through targeted step-level supervision.
**G** **FLASK Prompt**
In Figure 10, we present the prompt used for the
GPT-4 FLASK evaluation, which assesses three
key logical skills: robustness, correctness, and efficiency. These skills are evaluated against the
ground truth (GT) solution using a deterministic
rubric for each criterion.
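A sketch of how the Figure 10 prompt can be sent to a GPT-4 judge and the skill scores recovered is shown below; the model name, placeholder substitution, and dictionary parsing are illustrative assumptions rather than the authors' evaluation script.

```python
import ast
import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def flask_scores(prompt_template: str, question: str, gt_answer: str, answer: str,
                 model: str = "gpt-4"):
    """Fill the FLASK rubric prompt, query the judge model, and parse the
    trailing Python dictionary of skill scores from the feedback."""
    prompt = (prompt_template
              .replace("{question}", question)
              .replace("{ground truth answer}", gt_answer)
              .replace("{answer}", answer))
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    feedback = response.choices[0].message.content
    # The prompt asks the judge to end with a Python dict of skill scores.
    dicts = re.findall(r"\{[^{}]*\}", feedback, flags=re.S)
    scores = ast.literal_eval(dicts[-1]) if dicts else None
    return feedback, scores
```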
When evaluating the example responses presented in Figure 9, we see that the DPO model receives scores of 2, 2, and 3, while SELF-EXPLORE receives a full score of 5, 5, and 5 for logical robustness, correctness, and efficiency, respectively, as shown in Figure 11. GPT-4's coherent explanation of these scores adds credibility to the overall FLASK evaluation result in Table 4, underscoring the superior quality of the responses generated by the SELF-EXPLORE model.
-----
**Methods** **Mistral** **Llemma** **DeepSeek**
**Dataset: GSM8K**
FT 7,473 7,473 7,473
RFT 67,755* 38,989 52,005
pair 56,443* 37,058 38,872
g-pair 56,283* 36,812 38,618
**Dataset: MATH**
FT 7,500 7,500 7,500
RFT 31,839 34,419 40,654
pair 31,527 34,124 39,769
g-pair 31,248 33,960 39,496
Table 5: Dataset size used for each training method, by each model.
* denotes no maximum pair formation constraint.
**Mistral** **Llemma** **DeepSeek**
FT 1.0 1.0 1.0
**NUMBER OF SAMPLES ≥** 1
RFT 0.9830 0.9252 0.9917
pair 0.9213 0.9113 0.8955
g-pair 0.9212 0.9098 0.8947
**NUMBER OF SAMPLES ≥** 4
RFT 0.7376 0.7281 0.8616
pair 0.6204 0.5739 0.6063
g-pair 0.6195 0.5700 0.6024
Table 6: Proportion of questions in GSM8K with at least N instances for each training method, by each model.
**Mistral** **Llemma** **DeepSeek**
FT 1.0 1.0 1.0
**NUMBER OF SAMPLES ≥** 1
RFT 0.7345 0.7587 0.8240
pair 0.7345 0.7356 0.8225
g-pair 0.7345 0.7353 0.7971
**NUMBER OF SAMPLES ≥** 4
RFT 0.4904 0.5375 0.6479
pair 0.4844 0.5320 0.6309
g-pair 0.4819 0.5309 0.6292
Table 7: Proportion of questions in MATH with at least N instances for each training method, by each model.
-----
[Figure 8 panels: GSM8K (DPO), GSM8K (Self-Explore), MATH (DPO), MATH (Self-Explore); reward accuracy curves during training]
Figure 8: Reward accuracy (i.e. winrate of chosen over rejected samples) of DPO and SELF-EXPLORE during
training of DeepSeek-Math. For both methods, the accuracy quickly converges to 1 regardless of the supervision
type.
-----
**Question**
Randy has 9 oatmeal cookies, 4 chocolate chip cookies, and 5 sugar cookies. He ate 3 cookies for an
early day snack, one of each flavor. He ate 2 oatmeal cookies for lunch. He gives 2 sugar cookies
to his friends. Then, he bakes 4 of each flavor for dinner. How many cookies does he have now?
**DPO**

He has 600 movies and a third are in series so that's 600/3=<<600/3=200>>200 movies
He has 600 movies and 200 are in series so that means he has 600-200=<<600-200=400>>400 movies left
He has 400 movies and 40% are older movies so that's 400*.4=<<400*.4=160>>160 movies
He has 400 movies and 160 are older movies so that means he has 400-160=<<400-160=240>>240 movies left
He has 240 movies that are not in series or older and they cost $10 each so that's 240*$10=$<<240*10=2400>>2400
**The answer is 2400**

**Self-Explore**

He has 600 movies and a third are in series so that's 600/3=<<600/3=200>>200 movies
He has 600 movies and 200 are in series so that's 600-200=<<600-200=400>>400 movies
He has 400 movies and 40% are older movies so that's 400*.4=<<400*.4=160>>160 movies
He has 400 movies and 160 are older movies so that's 400-160=<<400-160=240>>240 movies
He has 240 movies that aren't in series or older and they cost $10 each so that's 240*10=$<<240*10=2400>>2400
**He has 200 movies in series that cost $6 each so that's 200*6=$<<200*6=1200>>1200**
He has 160 older movies that cost $5 each so that's 160*5=$<<160*5=800>>800
He has 2400+1200+800=$<<2400+1200+800=4400>>4400 The answer is 4400
Figure 9: Example solutions generated by DPO and Self-Explore, respectively.
-----
We would like to request your feedback on the performance of the response of the assistant to the user
instruction displayed below. In the feedback, I want you to rate the quality of the response in these 3
categories according to each score rubric:
[Skill 1. Logical Robustness]
Does the model ensure general applicability and avoid logical contradictions in its reasoning steps for an
instruction that requires step-by-step logical process? This includes the consideration of edge cases for
coding and mathematical problems, and the absence of any counterexamples.
Score 1: The logic of the model’s response is completely incoherent.
Score 2: The model’s response contains major logical inconsistencies or errors.
Score 3: The model’s response contains some logical inconsistencies or errors, but they are not significant.
Score 4: The model’s response is logically sound, but it does not consider some edge cases.
Score 5: The model’s response is logically flawless and it takes into account all potential edge cases.
[Skill 2. Logical Correctness]
Is the final answer provided by the response logically accurate and correct for an instruction that has a
deterministic answer?
Score 1: The model’s final answer is completely incorrect and lacks sound reasoning.
Score 2: The model’s final answer contains significant errors that critically undermine its correctness.
Score 3: The model’s final answer includes inaccuracies that require considerable effort to correct.
Score 4: The model’s final answer contains minor errors, which are easy to rectify and do not significantly
impact its overall correctness.
Score 5: The model’s final answer is completely accurate and sound.
[Skill 3. Logical Efficiency]
Is the response logically efficient? The logic behind the response should have no redundant step, remaining
simple and efficient. For tasks involving coding, the proposed solution should also consider time complexity.
Score 1: The logic behind the response is significantly inefficient and redundant, necessitating a complete
reorganization of logic for clarity and efficiency.
Score 2: The logic of the response lacks efficiency and conciseness, requiring a substantial reorganization for
better optimization.
Score 3: The logic of the response is not efficient enough, necessitating major edits for improved optimization.
Score 4: The logic of the response is largely efficient, but it still has some redundant steps. It could be
handled from minor edits for better optimization.
Score 5: The logic of the response is optimally efficient, requiring no further optimization.
[Instruction]
{question}
[Ground truth Answer]
{ground truth answer}
[Assistant’s Response]
{answer}
[The End of Assistant’s Response]
Please give feedback on the assistant’s responses. Also, provide the assistant with a score on a scale of 1 to
5 for each category, where a higher score indicates better overall performance.
Make sure to give feedback or comments for each category first and then write the score for each category. Only
write the feedback corresponding to the score rubric for each category. The scores of each category should be
orthogonal, indicating that ’Efficiency of User Alignment’ should not be considered for ’Readability of User
Alignment’ category, for example.
Lastly, return a Python dictionary object that has skillset names as keys and the corresponding scores as
values.
Figure 10: GPT-4 prompt used for FLASK evaluation.
-----
**DPO**

**Logical Robustness:** The assistant's response demonstrates a logical process in breaking down the number of movies into categories and calculating the cost for each category. However, it fails to include the calculations for the series movies and the older movies in the final summation, leading to an incomplete and therefore logically flawed conclusion. This oversight represents a significant logical inconsistency because it omits crucial steps in the problem-solving process, affecting the coherence of the solution. Therefore, the response's logic is not entirely coherent, as it neglects to account for all components of the problem statement. However, because the assistant correctly applies mathematical operations within each considered category, it does not fall to the lowest rating.
**Score: 2**

**Logical Correctness:** The assistant's final answer is incorrect because it only includes the cost for the 240 movies that are neither in series nor old. It neglects the cost for the 200 series movies and the 160 older movies, which were both explicitly mentioned in the instructions. Due to this significant oversight, the final answer provided is not just minorly inaccurate; it critically undermines the correctness of the response, as it only accounts for a portion of the total cost. Therefore, the provided answer significantly deviates from the accurate total.
**Score: 2**

**Logical Efficiency:** In the steps it does take, the assistant's logic is straightforward and direct, calculating the costs associated with each category of movies without unnecessary steps. However, since it fails to include all necessary calculations for a comprehensive answer, the efficiency of its logic is inherently compromised. The assistant's response is not burdened by redundant steps in the calculations it does perform, but the omission of critical steps leads to an incomplete solution. This oversight does not align with the highest standards of efficiency, as the most efficient response would include and correctly sum all relevant costs. Despite these issues, since the assistant does not introduce unnecessary complexity in the steps it takes, it does not warrant the lowest score in efficiency.
**Score: 3**

```python
{
    "Logical Robustness": 2,
    "Logical Correctness": 2,
    "Logical Efficiency": 3
}
```

**Self-Explore**

**Logical Robustness:** The assistant's response demonstrates a high degree of logical robustness. It accurately breaks down the problem into manageable steps, ensuring a clear and systematic approach to solving the problem. By calculating the costs for each category of movies (series, older, and normal) separately and then summing these to find the total cost, the response effectively addresses the problem without overlooking any details or introducing logical inconsistencies. It considers all potential categories of movies and their respective costs, thereby ensuring a comprehensive solution. Therefore, the logic employed in the assistant's response can be considered logically flawless in the context of the given instruction.
**Score: 5**

**Logical Correctness:** The assistant's final answer is completely accurate and logically sound. It correctly identifies the number of movies in each category (series, older, and neither), accurately calculates the cost for each category based on the given prices, and correctly sums these costs to arrive at the total cost of replacing the movies. The mathematical operations and the reasoning leading to the final answer are correct, making the response logically accurate in accordance with the requirements of the instruction.
**Score: 5**

**Logical Efficiency:** The response is logically efficient, following a straightforward and direct approach to solve the problem without introducing any unnecessary steps or complications. By sequentially addressing each movie category and calculating its cost before summing these costs, the assistant employs an efficient logical progression. This method not only simplifies the problem-solving process but also avoids any redundancy, making the logic behind the response optimally efficient for the task at hand.
**Score: 5**

```python
{
    "Logical Robustness": 5,
    "Logical Correctness": 5,
    "Logical Efficiency": 5
}
```
Figure 11: Results of GPT-4 FLASK evaluation for the generated solutions shown in Figure 9.
-----
| [
"Minjoon, Seo",
"Hyeonbin, Hwang",
"Doyoung, Kim",
"Seungone, Kim",
"Seonghyeon, Ye"
] | 2024-05-16T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2404.10346 | https://arxiv.org/abs/2404.10346 | null |
Self-Harmonized Chain of Thought | Chain-of-Thought (CoT) prompting reveals that large language models are capable of performing complex reasoning via intermediate steps. CoT prompting is primarily categorized into three approaches. The first approach utilizes straightforward prompts like ``Let's think step by step'' to generate a sequential thought process before yielding an answer. The second approach makes use of human-crafted, step-by-step demonstrations to guide the model's reasoning process. The third automates the generation of reasoned demonstrations with the 'Let's think step by step'.This approach sometimes leads to reasoning errors, highlighting the need to diversify demonstrations to mitigate its misleading effects. However, diverse demonstrations pose challenges for effective representations. In this work, we propose ECHO, a self-harmonized chain-of-thought prompting method. It consolidates diverse solution paths into a uniform and effective solution pattern.ECHO demonstrates the best overall performance across three reasoning domains. | ECHO is proposed, a self-harmonized chain-of-thought prompting method that consolidates diverse solution paths into a uniform and effective solution pattern and demonstrates the best overall performance across three reasoning domains. | [
"Ziqi, Jin",
"Wei, Lu"
] | 2024-09-06T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.04057 | https://arxiv.org/abs/2409.04057 | https://www.semanticscholar.org/paper/46648c014df998c2c57bc6971db36c6a5830afdf |
|
Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning | Teaching small-scale language models to perform math reasoning is a valuable yet challenging task. Besides obtaining labeled data from human experts, one of the most common ways to collect high-quality data is by sampling from a larger and more powerful language model. Although previous works have demonstrated the effectiveness of this method, such a knowledge distillation paradigm can be costly and unstable, especially considering that many large language models, such as GPT-4, are closed-sourced, proprietary, and their behaviors are unpredictable. In this work, to avoid relying on outputs from large models, we demonstrate that the reasoning abilities of small-scale language models can be enhanced through self-training, which involves training models with their own outputs. We also show that the vanilla self-training can be further augmented by an alignment algorithm, direct preference optimization (DPO). We empirically found that models trained with the DPO objective are capable of making better generations that largely benefit multi-turn self-training. The experiments show our models outperform the state-of-the-art models with comparable sizes on a series of downstream math reasoning tasks with minimal resource requirements. | This work demonstrates that the reasoning abilities of small-scale LMs can be enhanced through self-training, a process where models learn from their own outputs, and shows that the conventional self-training can be further augmented by a preference learning algorithm called Direct Preference Optimization (DPO). | # Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning
**Tianduo Wang[†]**, Shichen Li[‡], Wei Lu[†]
_†StatNLP Research Group, Singapore University of Technology and Design_
_‡Soochow University_
{tianduo_wang,luwei}@sutd.edu.sg, [email protected]
**Abstract**
Teaching small-scale language models to perform math reasoning is a valuable yet challenging task. Besides obtaining labeled data
from human experts, one of the most common
ways to collect high-quality data is by sampling from a larger and more powerful language
model. Although previous works have demonstrated the effectiveness of this method, such a
knowledge distillation paradigm can be costly
and unstable, especially considering that many
large language models, such as GPT-4 (OpenAI, 2023), are closed-source, proprietary, and
their behaviors are unpredictable. In this work,
to avoid relying on outputs from large models, we demonstrate that the reasoning abilities of small-scale language models can be enhanced through self-training, which involves
training models with their own outputs. We
also show that the conventional self-training
can be further augmented by an alignment algorithm called Direct Preference Optimization
(DPO) (Rafailov et al., 2023). We empirically
found that models trained with the DPO objective are capable of making better generations
that largely benefit multi-turn self-training. The
experimental results show our models outperform the existing models with comparable sizes
on the GSM8K benchmark with minimal resource requirements.[1]
[Figure 1 plot: GSM8K accuracy (15-40) versus compute cost in FLOPs (1e+18 to 3e+19) for our model, Calcformer, and models distilled from PaLM and Codex]
Figure 1: GSM8K performance v.s. computational
cost. Our approach outperforms baseline models with
comparable sizes while minimizing the required compute. Comparisons include two knowledge distillation
techniques, i.e., Codex distillation (Fu et al., 2023), and
PaLM distillation (Magister et al., 2023), and a data augmentation method, Calcformer (Kadlˇcík et al., 2023).
**1** **Introduction**

Making language models (LMs) perform mathematical reasoning is a valuable, yet challenging research objective (Hendrycks et al., 2021; Cobbe et al., 2021). Recent works focus on enhancing large-scale LMs' reasoning abilities, e.g., chain-of-thought prompting (Wei et al., 2022b; Kojima et al., 2022), continual pretraining (Azerbayev et al., 2024), or adding external verifiers (Li et al., 2023b). However, the research question of how to improve LMs with smaller sizes (e.g., less than 1 billion parameters) is still under-explored.

Recent studies (Fu et al., 2023; Ho et al., 2023; Magister et al., 2023) demonstrate that the reasoning capabilities of smaller LMs can be significantly enhanced through learning from the outputs of larger and more advanced LMs, such as Codex (Chen et al., 2021) and PaLM (Chowdhery et al., 2022). Despite the relative ease of implementation, the associated costs can be substantial. The computational demand, measured in _floating-point operations_ (FLOPs), is considerably higher for large LMs. Additionally, the use of proprietary and closed-source large LMs for data annotation can incur significant economic costs. Ho et al. (2023) demonstrated that annotating multiple reasoning chains for a single question with large LMs can markedly enhance the performance of smaller models. Hence, a trade-off between cost and performance exists.

[1Our code and data are released at https://github.com/tianduowang/dpo-st.](https://github.com/tianduowang/dpo-st)
Another line of work focuses on making improvements via self-training (Zelikman et al., 2022; Gulcehre et al., 2023; Singh et al., 2023). Instead of using data generated by larger models, the essence of self-training methods is to make small models learn from their own generations. While these methods demonstrate the effectiveness of utilizing self-generated data, their success largely depends upon the pre-existing abilities of the models. For example, Zelikman et al. (2022) started by few-shot prompting a large model, i.e., GPT-J (Wang and Komatsuzaki, 2021), to self-generate rationales, which is an emergent ability that only comes with sufficiently large models (Wei et al., 2022a). However, it is still unclear whether small-scale LMs can benefit from self-training.

Recently, we have witnessed that _reinforcement learning from human feedback_ (RLHF) has emerged as a prominent method to precisely modify LMs' behavior towards human preference (Ouyang et al., 2022; Casper et al., 2023). In this work, we propose to augment the self-training algorithm with an RLHF training process, i.e., Direct Preference Optimization (Rafailov et al., 2023), for its better performance and stability. Our experimental results demonstrate that the proposed method can significantly improve LMs' reasoning capabilities while minimizing the computational costs. We visualize the relationship between the downstream task performance and computational cost over a series of specialized models in Figure 1. The computational costs for each model are estimated following previous practice (Kaplan et al., 2020; Yuan et al., 2023). It can be observed that our method not only achieves the highest accuracy, but also minimizes the computational demand by learning from its own generations. Overall, the main contributions of this work can be summarized as follows:

- We propose a novel extension to the classic self-training framework with Direct Preference Optimization, and we show its effectiveness through standard math problem-solving tasks.
- We demonstrate that this novel extension improves the reasoning abilities of LMs with minimal computational resource requirements.
- We propose an efficient method for integrating LMs with external tools, significantly improving performance without sacrificing much inference speed.

**Algorithm 1 Self-training for CoT reasoning tasks**

**Input:** pre-trained language model $f_\theta$
**Input:** labeled dataset $\mathcal{L} = \{(x^i, y^i, a^i)\}_{i=1}^{l}$
**Input:** unlabeled dataset $\mathcal{U} = \{(x^i, a^i)\}_{i=1}^{u}$
**Output:** fine-tuned model $f_{\theta'}$

1: Fine-tune $f_\theta$ on $\mathcal{L}$ to get $f_{\theta'}$
2: **repeat**
3: Build pseudo-labeled dataset $\mathcal{S} = \{(x^i, \hat{y}^i, \hat{a}^i)\}_{i=1}^{s}$, where $x^i \sim \mathcal{U}$ and $\hat{y}^i, \hat{a}^i \sim f_{\theta'}(\cdot \mid x^i)$
4: Select $\mathcal{S}^\alpha \subset \mathcal{S}$ where $\hat{a}^i = a^i$
5: Update $\mathcal{L} \leftarrow \mathcal{S}^\alpha \cup \mathcal{L}$
6: Train $f_\theta$ on $\mathcal{L}$ to get a new $f_{\theta'}$
7: **until** convergence or max iteration is reached
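To make Algorithm 1 concrete, the sketch below illustrates a single self-training round for math word problems. It is an illustration rather than the authors' released code: `sample_rationales` is an assumed helper that samples chain-of-thought outputs from the current fine-tuned model, the unlabeled set is simplified to (question, gold answer) pairs, and the returned training set to (question, rationale) pairs.

```python
import re

def extract_answer(rationale: str):
    """Pull the final answer out of a GSM8K-style rationale ('#### 624')."""
    match = re.search(r"####\s*(-?[\d,\.]+)", rationale)
    return match.group(1).replace(",", "") if match else None

def self_training_round(sample_rationales, unlabeled, labeled, k=3):
    """One iteration of Algorithm 1: pseudo-label, filter by answer, extend the labeled set.

    `sample_rationales(question, k)` is an assumed helper returning k sampled
    chain-of-thought rationales from the current fine-tuned model.
    """
    pseudo = []
    for question, gold_answer in unlabeled:  # unlabeled: (question, gold answer) pairs
        for rationale in sample_rationales(question, k):
            # Keep only rationales whose extracted answer matches the gold answer.
            if extract_answer(rationale) == str(gold_answer):
                pseudo.append((question, rationale))
    # Drop exact duplicates before merging with the human-written data.
    pseudo = list(dict.fromkeys(pseudo))
    return labeled + pseudo  # training set for the next supervised fine-tuning run
```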
**2** **Background**
**Math word problem solving** The math word
problem solving task can be formulated as a
sequence-to-sequence task where the input x is a
question asking for an unknown value and the output y is a rationale leading to the answer a (Cobbe
et al., 2021). Normally, the answers can be extracted from the rationales via some rule-based
methods, e.g., regular expressions. A generated
rationale ˆy is regarded as correct if the extracted
answer ˆa matches the gold answer a. Formally, the
labeled dataset for a math word problem solving
task with l instances can be represented as:
$$\mathcal{L} = \{(x^i, y^i, a^i)\}_{i=1}^{l}. \quad (1)$$

A common way for specializing a LM fθ towards math reasoning with the labeled dataset L is _supervised fine-tuning_ (SFT). It optimizes fθ by minimizing the negative log-likelihood loss LSFT(θ):

$$\mathcal{L}_{\text{SFT}}(\theta) = -\,\mathbb{E}_{(x,y)\sim\mathcal{L}} \left[ \sum_{t=1}^{T} \log f_\theta\!\left(y_t \mid x,\, y_{1:t-1}\right) \right], \quad (2)$$

where T is the length of the rationale y and we use y_t to represent the t-th token in y.
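Concretely, the loss in Eq. (2) is the standard cross-entropy over rationale tokens with teacher forcing. A minimal sketch using the Transformers library is shown below; the checkpoint name is only an example, and this is an illustration rather than the paper's training code.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Illustrative model choice; the paper's experiments use Flan-T5 models.
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

def sft_loss(question: str, rationale: str) -> torch.Tensor:
    """Token-level negative log-likelihood of the rationale given the question (Eq. 2)."""
    inputs = tokenizer(question, return_tensors="pt")
    labels = tokenizer(rationale, return_tensors="pt").input_ids
    # When `labels` are provided, the seq2seq model returns the cross-entropy
    # loss computed with teacher forcing.
    return model(**inputs, labels=labels).loss
```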
**Self-training** Self-training is one of the earliest
approaches in semi-supervised learning (Scudder,
1965; Fralick, 1967) that has risen in popularity
recently (He et al., 2019; Amini et al., 2022). This
method first regards a base model trained with a
labeled dataset L as teacher, and uses it to build
a pseudo-labeled dataset S by annotating an unlabeled dataset U. Then, a student model is trained
with the combination of L and S that are expected
to outperform the teacher model. Such a framework has been shown effective in a wide range of natural language processing tasks, e.g., natural language understanding (Vu et al., 2021) and generation (He et al., 2019). A formal description of a self-training algorithm for CoT reasoning tasks is provided in Algorithm 1.

Previous studies have demonstrated that the quality of the pseudo-labels largely impacts the overall performance of the self-training algorithm (He et al., 2019; Amini et al., 2022). For example, Gulcehre et al. (2023) proposed to select high-quality pseudo-labels with a learned reward function. Zelikman et al. (2022) filtered the generated rationales to include only the ones that lead to correct answers. Although many methods have been proposed to select pseudo-labels, few works discuss how to improve the fine-tuned model fθ′ so that more high-quality pseudo-labels can be generated. In this paper, we present a method to enhance fθ′ in each iteration so that higher-quality pseudo-labeled data can be generated.

**Algorithm 2 DPO-augmented self-training**

**Input:** pre-trained language model $f_\theta$
**Input:** labeled dataset $\mathcal{L} = \{(x^i, y^i, a^i)\}_{i=1}^{l}$
**Input:** unlabeled dataset $\mathcal{U} = \{(x^i, a^i)\}_{i=1}^{u}$
**Output:** fine-tuned model $f_{\theta'}$

_# Warm-up stage_
1: Fine-tune $f_\theta$ on $\mathcal{L}$ to get $f_{\theta'}$
2: **repeat**
_# DPO step_
3: Generate DPO dataset $\mathcal{D} = \{(x^i, y_w^i, y_l^i)\}_{i=1}^{N}$, where $x^i \sim \mathcal{U}$ and $y_w^i, y_l^i \sim f_{\theta'}(\cdot \mid x^i)$
4: Tune $f_{\theta'}$ with $\mathcal{L}_{\text{DPO}}$ on $\mathcal{D}$ to get $f_{\theta_d}$
_# SFT step_
5: Build pseudo-labeled dataset $\mathcal{S} = \{(x^i, \hat{y}^i, \hat{a}^i)\}_{i=1}^{s}$, where $x^i \sim \mathcal{U}$ and $\hat{y}^i, \hat{a}^i \sim f_{\theta_d}(\cdot \mid x^i)$
6: Select $\mathcal{S}^\alpha \subset \mathcal{S}$ where $\hat{a}^i = a^i$
7: Update $\mathcal{L} \leftarrow \mathcal{S}^\alpha \cup \mathcal{L}$
8: Train $f_\theta$ on $\mathcal{L}$ to get a new $f_{\theta'}$
9: **until** convergence or max iteration is reached
**Direct Preference Optimization** The Reinforcement Learning from Human Feedback (RLHF) methods align LMs with human preference (Ouyang et al., 2022; Bai et al., 2022). The standard pipeline of RLHF requires first training a reward model from human preference data. Then, the reward model is used to fine-tune language models via a reinforcement learning objective, e.g., Proximal Policy Optimization (Schulman et al., 2017). A recent study proposes Direct Preference Optimization (DPO) (Rafailov et al., 2023) to avoid explicitly training a reward model, so that language models can be directly tuned with human preference data.

The DPO pipeline can be described as follows. First, given some prompt x, sample several completions from the reference model πref (normally it is the model after supervised fine-tuning):

$$y_1, y_2 \sim \pi_{\text{ref}}(\cdot \mid x). \quad (3)$$

Next, construct the DPO dataset $\mathcal{D}$ from the completions based on the human preference:

$$\mathcal{D} = \{(x^i, y_w^i, y_l^i)\}_{i=1}^{N}, \quad (4)$$

where $y_w^i$ and $y_l^i$ represent the winning and losing completions respectively. Then, we optimize the language model πθ to minimize $\mathcal{L}_{\text{DPO}}(\pi_\theta; \pi_{\text{ref}})$, which can be defined as follows:

$$\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}} \left[ -\log \sigma\big(r(y_w \mid x) - r(y_l \mid x)\big) \right], \quad (5)$$

where $r(\cdot \mid x) = \beta \log \frac{\pi_\theta(\cdot \mid x)}{\pi_{\text{ref}}(\cdot \mid x)}$ and $\beta$ is a coefficient that controls πθ's deviation from πref.

**3** **Method**

In this section, we first describe the proposed approach. Then, we demonstrate how we integrate an external calculator into the model's decoding process, which significantly improves LMs' performance on the downstream tasks.

**3.1** **DPO-Augmented Self-Training**

Our approach starts with a warm-up stage, which is then followed by an iterative process, where each iteration is composed of two sub-steps: a DPO step and an SFT step. The iterative process ends when the model performance converges or reaches the maximum iteration. A formal description of the proposed method is illustrated in Algorithm 2. An illustration of our method is presented in Figure 2.
Figure 2: An illustration of the proposed DPO-augmented Self-Training framework. The conventional Self-Training method uses the SFT model to generate the pseudo-labels for the next iteration. In contrast, our method first optimizes the SFT model with Direct Preference Optimization (DPO), and uses the DPO model to produce the pseudo-labels.
Q: James writes a 3-page letter to 2
different friends twice a week. How many
pages does he write a year?
A: He writes each friend
3*2=<<3*2=6>>6 pages a week.
So he writes
6*2=<<6*2=12>>12 pages every week.
That means he writes
12*52=<<12*52=624>>624 pages a year.
#### 624
Figure 3: An example from the GSM8K dataset. The
calculation annotations are highlighted in blue. All
calculation steps are wrapped within special tokens
<<...>>. During decoding, the calculator will be triggered when such patterns exist and the model’s output
tokens will be overridden by the calculator results. Following Cobbe et al. (2021), the calculation is performed
with the python eval() function.
**Warm-up stage** Like classic self-training, we start by fine-tuning the base model fθ to optimize LSFT(θ) on the labeled data L to get a new model fθ′. After this stage, we assume that fθ′ is capable of solving certain math problems. Specifically, given a math question x, fθ′ will generate a rationale ŷ with answer â.

**Iterative step 1: DPO step** In this step, we first sample rationales ŷ from the fine-tuned model fθ′ given some questions x from U. For each question x, we generate multiple rationales to build the DPO training dataset D. As mentioned, for math problem solving tasks, it is easy to know whether a generated rationale ŷ can be considered correct. We label rationales with correct answers as winning completions, while considering rationales with incorrect answers as losing completions. Then, we train fθ′ on D to optimize the objective function LDPO and get a DPO model fθd in the end.
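For reference, the DPO objective optimized in this step (Eq. 5) can be written compactly. The following PyTorch sketch is an illustration rather than the authors' implementation; it assumes the sequence-level log-probabilities of the winning and losing rationales have already been computed under both the policy and the frozen reference model, and β = 0.1 is purely an example value.

```python
import torch.nn.functional as F

def dpo_loss(policy_logp_w, policy_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO objective (Eq. 5): -log sigmoid of the implicit reward margin.

    Each argument is a tensor of summed log-probabilities of whole rationales
    (winning `w` or losing `l`) under the policy or the frozen reference model.
    """
    reward_w = beta * (policy_logp_w - ref_logp_w)  # implicit reward of the winner
    reward_l = beta * (policy_logp_l - ref_logp_l)  # implicit reward of the loser
    return -F.logsigmoid(reward_w - reward_l).mean()
```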
**Iterative step 2: SFT step** After obtaining fθd, we use it to generate a new pseudo-labeled dataset S for the next-round supervised fine-tuning:

$$\mathcal{S} = \{(x, \hat{y}) \mid x \sim \mathcal{U},\ \hat{y} \sim f_{\theta_d}(\cdot \mid x)\}. \quad (6)$$

After generation, we clean S by eliminating rationales with incorrect answers and removing duplicates. Therefore, the pseudo-labeled dataset we obtain in the end is a subset of the original one, i.e., $\mathcal{S}^\alpha \subset \mathcal{S}$. The final training dataset is the combination of the original labeled dataset L and the newly-generated pseudo-labeled dataset $\mathcal{S}^\alpha$.

Notice that during this process, once we collect a new dataset, we train from the original base model fθ instead of continually fine-tuning fθ′ to avoid overfitting, following previous practice (Zelikman et al., 2022; Singh et al., 2023).

**3.2** **Batch Decoding with Calculator**

We empirically observed that, in contrast to large LMs which excel at basic arithmetic calculations (Brown et al., 2020), smaller LMs like Flan-T5-Large exhibit poor performance on similar arithmetic tasks. This deficiency significantly impacts their ability to handle math reasoning tasks. To address this, various studies (Parisi et al., 2022; Schick et al., 2023; Kadlčík et al., 2023) have proposed augmenting these smaller models with an external calculator to enhance their math reasoning capabilities. However, many of these existing methods are limited to a batch size of one during decoding. This constraint substantially reduces the inference speed, which hinders their widespread adoption.
| Dataset | Split | # Data |
|---|---|---|
| GSM8K (Cobbe et al., 2021) | Train | 6,705 |
| | Validation | 768 |
| | Test | 1,319 |
| MultiArith (Roy and Roth, 2015) | Test | 600 |
| ASDiv (Miao et al., 2020) | Test | 2,096 |
| SVAMP (Patel et al., 2021) | Test | 1,000 |

Table 1: Statistics of the datasets used in our experiments. The original GSM8K dataset only contains train and test splits. We randomly select 768 training examples to construct the validation dataset in our experiments.
Figure 4: Inference speed comparison between
our methods (w/ and w/o calculator) and Calcformer (Kadlčík et al., 2023) with varying batch sizes.
The results are measured on a single NVIDIA A40 GPU.
To alleviate this issue, we propose a simple
yet efficient method that enables the use of a large
batch size during inference with an external calculator. We adopt the calculator annotation provided in the original GSM8K dataset (Cobbe et al.,
2021). Figure 3 demonstrates an example of this
annotation and describes how such annotations
can be used during decoding. Our models are
built with the Transformers library (Wolf et al.,
2020). During generation, we adopt a customized
LogitsProcessor[2] to override the model’s generation. LogitsProcessor provides an interface to
modify the language model’s output tokens during
generation.
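As an illustration of how such an override can be wired into generation, the sketch below implements a simplified calculator LogitsProcessor. This is not the authors' released code: the class name and regular expression are assumptions, and the per-batch-index state only works for greedy or sampling decoding without beam reordering.

```python
import re
import torch
from transformers import LogitsProcessor

class CalculatorLogitsProcessor(LogitsProcessor):
    """Whenever a generated sequence ends with a pattern like `<<12*52=`,
    force the tokens of the evaluated result (`624>>`) into the output
    instead of letting the model guess the arithmetic."""

    def __init__(self, tokenizer):
        self.tokenizer = tokenizer
        self.forced = {}  # batch index -> queue of token ids that must come next

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        for i, seq in enumerate(input_ids):
            queue = self.forced.get(i, [])
            if not queue:
                text = self.tokenizer.decode(seq, skip_special_tokens=True)
                match = re.search(r"<<([0-9+\-*/. ()]+)=$", text)
                if match:
                    try:
                        result = eval(match.group(1))  # as in the GSM8K annotations
                    except Exception:
                        continue
                    queue = self.tokenizer.encode(f"{result}>>", add_special_tokens=False)
            if queue:
                next_id = queue.pop(0)
                self.forced[i] = queue
                scores[i, :] = float("-inf")  # rule out every other token
                scores[i, next_id] = 0.0      # force the calculator output
        return scores
```

Such a processor could then be supplied to generation, e.g., `model.generate(..., logits_processor=LogitsProcessorList([CalculatorLogitsProcessor(tokenizer)]))`, where `LogitsProcessorList` also comes from the Transformers library.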
To demonstrate the efficiency of the proposed
solution, we compare the inference speed of our methods (w/ and w/o calculator) based on Flan-T5-Large against an open-source tool-using method, Calcformer (Kadlčík et al., 2023) based on T5-Large, in Figure 4. We find that when the batch size
equals 1, all three methods have a similar inference
speed of around 40 tokens per second. However, as
the inference batch size increases, the speedup of
our methods increases significantly.
**4** **Experiments**

In this section, we first describe our experimental setup. Next, we present the performance of our models across a series of math word problem solving tasks, comparing them against a selection of competitive baselines. Finally, we empirically analyze what makes the proposed method effective.

**4.1** **Setup**

**Base models** We employ Flan-T5 models (Chung et al., 2024) as the base models in our experiments. Specifically, we consider two models of different sizes from the Flan-T5 model family: Flan-T5-Base (250M) and Flan-T5-Large (780M). We select Flan-T5 over the original T5 models (Raffel et al., 2019) as our backbone models based on the evidence from previous research (Chung et al., 2024; Fu et al., 2023), which demonstrated that instruction-tuned models, i.e., Flan-T5, outperform their pre-trained counterparts in math reasoning tasks. Besides Flan-T5 models, we also consider the Llama models (Touvron et al., 2023a,b; Meta, 2024) as our base models.

**Datasets** The labeled dataset L used in our experiments comes from the training split of the GSM8K dataset. Our unlabeled dataset U is also built upon GSM8K's training data by removing its annotated rationales y. For evaluation, we consider three other commonly used math reasoning tasks besides GSM8K: MultiArith (Roy and Roth, 2015), ASDiv (Miao et al., 2020), and SVAMP (Patel et al., 2021). Table 1 shows the statistics of each dataset. Following previous practice (Fu et al., 2023), we only fine-tune the base models on the GSM8K training data while utilizing the other three datasets to evaluate our models' out-of-domain performance, as they do not have an official in-domain training split.

**Evaluation metrics** We use the accuracy of the greedy decoding results as the main evaluation metric. The questions in these datasets ask about the values of unknown variables. The answers to these questions are real numbers that can be extracted from the model-generated rationales.
[2https://huggingface.co/docs/transformers/](https://huggingface.co/docs/transformers/internal/generation_utils#logitsprocessor)
[internal/generation_utils#logitsprocessor](https://huggingface.co/docs/transformers/internal/generation_utils#logitsprocessor)
| Method | Base Model | GSM8K | MultiArith | ASDiv | SVAMP |
|---|---|---|---|---|---|
| Supervised Fine-Tuning | Flan-T5-Base | 18.1 | 54.2 | 26.2 | 19.5 |
| Self-Training | Flan-T5-Base | 25.9 | 73.8 | 28.2 | **24.2** |
| DPO-aug Self-Training (Ours) | Flan-T5-Base | **27.2** | **74.3** | **29.2** | 22.6 |
| Supervised Fine-Tuning | Flan-T5-Large | 30.8 | 77.2 | 38.1 | 33.6 |
| Self-Training | Flan-T5-Large | 35.6 | 86.2 | 42.5 | 34.8 |
| DPO-aug Self-Training (Ours) | Flan-T5-Large | **37.4** | **89.0** | **42.8** | **36.8** |

Table 2: Overall accuracies (%) over four math word problem solving tasks. Inspired by the previous practice (Fu et al., 2023), all the models in this table are only trained with the GSM8K training set (Cobbe et al., 2021). Hence, we report the in-distribution performance for GSM8K, while reporting the out-of-distribution performance for the other three datasets, i.e., MultiArith, ASDiv, and SVAMP.
**Implementation details** In every DPO step, we
sample rationales from the SFT model fθ′ to build
the DPO training data. We sample 5 rationales
from fθ′ per question with a temperature of 0.7.
We regard generated rationales ŷ as winning ones yw if they contain the correct answer, while regarding the rest as the losing ones yl. For the SFT
steps, we generate 3 rationales per question from
the DPO-tuned model fθd also with a temperature
of 0.7. Only the correct generated rationales ˆy will
be selected to build the pseudo-labeled dataset S.
For both generated DPO data and SFT data, we
make simple deduplication based on the Jaccard
similarity scores with a threshold of 0.7.
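The following sketch illustrates one plausible reading of this data construction; it is not the released code, and in particular how the Jaccard-based deduplication interacts with the pairing is an assumption. `extract_answer` stands for a helper that pulls the final answer out of a rationale, as sketched earlier.

```python
def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two rationales."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def dedup(rationales, threshold=0.7):
    """Keep a rationale only if it is not too similar to one already kept."""
    kept = []
    for r in rationales:
        if all(jaccard(r, k) < threshold for k in kept):
            kept.append(r)
    return kept

def build_dpo_pairs(question, gold_answer, sampled_rationales, extract_answer):
    """Pair correct (winning) and incorrect (losing) rationales for one question."""
    rationales = dedup(sampled_rationales)
    winners = [r for r in rationales if extract_answer(r) == str(gold_answer)]
    losers = [r for r in rationales if extract_answer(r) != str(gold_answer)]
    return [(question, w, l) for w in winners for l in losers]
```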
**4.2** **Main Results**
**Baselines** We mainly consider two baseline methods to compare with our method: Supervised Fine-Tuning (SFT) and Self-Training (ST). SFT baselines are trained on the original GSM8K annotations with the LSFT(θ) objective. The Self-Training
baseline adheres to the procedure outlined in Algorithm 1. Consequently, this makes it the most
directly comparable and relevant baseline for evaluating the effectiveness of our approach.
**Comparison with baselines** Table 2 shows the
performance of our method compared with the
baselines using two base models over four datasets.
It can be observed that both the Self-Training baseline and our proposed method outperform the supervised fine-tuning baseline by a large margin, indicating the effectiveness of the general self-training
framework. Although the Self-Training baselines
make significant improvements over the SFT baselines, the proposed DPO-augmented Self-Training
models consistently outperform them on both in-domain and out-of-domain tasks.
[Figure 5 plot: GSM8K accuracy per self-training iteration for the ST baseline and our method, with Flan-T5-Base (top) and Flan-T5-Large (bottom).]

Figure 5: Per-iteration performance (accuracy) of the Self-Training (ST) baseline and our method over three iterations; "iter 0" represents the SFT baseline.

In Figure 5, we show how the two base models, Flan-T5-Base and Flan-T5-Large, improve over multiple self-training iterations. Both the ST baseline and our method start with an SFT step using the original GSM8K annotations at iteration 0. We observe that our method consistently outperforms ST, which suggests that the DPO step offers a substantial improvement over the standard self-training cycles. We also notice that the per-iteration improvement gradually approaches zero as the iterative process proceeds, suggesting that the model has converged in the last iteration.
| Method | Base Model | # Annotations | Annotator | Tools | Acc. |
|---|---|---|---|---|---|
| _Supervised fine-tuning_ | | | | | |
| CoT (Shridhar et al., 2023) | GPT-2-Large | 7K | Human | ✗ | 14.1 |
| Self-consistency (Khalifa et al., 2023) | Flan-T5-Large | 7K | Human | ✓ | 33.3 |
| GRACE (Khalifa et al., 2023) | Flan-T5-Large | 7K | Human | ✓ | 36.3 |
| Calcformer (Kadlčík et al., 2023) | T5-Large | 30K | Human | ✓ | 34.2 |
| _Knowledge Distillation_ | | | | | |
| Socratic CoT (Shridhar et al., 2023) | GPT-2-Large | 7K | GPT-3 175B | ✗ | 21.1 |
| CoT from CodeX (Fu et al., 2023) | Flan-T5-Large | 100K | CodeX | ✗ | 20.2 |
| CoT from PaLM (Magister et al., 2023) | T5-Large | 6K | PaLM 540B | ✓ | 22.2 |
| _Ours_ | | | | | |
| DPO-aug Self-Training (K=3) | Flan-T5-Large | 7K | Human | ✓ | 37.4 |
| DPO-aug Self-Training (K=5) | Flan-T5-Large | 7K | Human | ✓ | 39.1 |
| DPO-aug Self-Training (K=10) | Flan-T5-Large | 7K | Human | ✓ | **40.0** |

Table 3: Detailed comparison among existing methods with comparable model sizes on the GSM8K test set. The "Annotator" column indicates how the rationales of the training data are generated. In this column, "Human" refers to the labels from the original GSM8K dataset (Cobbe et al., 2021) that are written by human annotators. The "Tools" column indicates whether external tools, e.g., a calculator or code interpreter, are applied during inference.
**4.3** **Comparison with Existing Methods**
In this section, we compare our methods with existing approaches. Additionally, we scale up the
proposed method by increasing the number of sampled pseudo-labels per question, denoted by the
hyperparameter K as in Yuan et al. (2023).
Table 3 presents a detailed comparison between our method and existing methods using a similar base model size. The base models we
considered include GPT-2-Large (Radford et al.,
2019), T5-Large (Raffel et al., 2019), and FlanT5-Large (Chung et al., 2024). All these models
have approximately 770 million parameters. As
shown in Table 3 our approach not only outperforms other methods on the GSM8K benchmark,
but also demonstrates remarkable label efficiency
by utilizing only the annotations from the original
GSM8K dataset.
In Table 4, we further verify the effectiveness of
the proposed method with the Llama model family (Touvron et al., 2023a,b; Meta, 2024), comparing them with several state-of-the-art closed-source
models as well as open-source models with similar
sizes. We observe a substantial performance gap between proprietary and open-source models. Among
the open-source models, those utilizing knowledge
distillation consistently outperform their counterparts without such enhancement. Notably, our models using Llama-1-7b (Touvron et al., 2023a) and
Llama-2-7b (Touvron et al., 2023b) base models
surpass other open-source alternatives that do not
employ knowledge distillation, achieving accura
cies of 44.7% and 54.7% respectively. Furthermore,
our model employing the latest Llama-3-8b (Meta,
2024) matches or exceeds the performance of earlier models with knowledge distillation, demonstrating a significant accuracy of 68.8%.
| Method | Base Model | Acc. |
|---|---|---|
| _Closed-source models_ | | |
| Claude-3-Opus (Anthropic, 2024) | - | 95.0 |
| Claude-2 (Anthropic, 2023) | - | 88.0 |
| GPT-4 (OpenAI, 2023) | - | 92.0 |
| Flan-PaLM-2 (Anil et al., 2023) | - | 84.7 |
| PaLM-2 (Anil et al., 2023) | - | 80.7 |
| _Models w/ knowledge distillation_ | | |
| RFT-U13B (Yuan et al., 2023) | Llama-1-7b | 49.3 |
| RFT-U33B (Yuan et al., 2023) | Llama-2-7b | 51.2 |
| MAmmoTH-CoT (Yue et al., 2023) | Llama-2-7b | 50.5 |
| LEMA (An et al., 2023) | Llama-2-7b | 54.1 |
| WizardMath (Luo et al., 2023) | Llama-2-7b | 54.9 |
| MetaMath (Yu et al., 2024) | Llama-2-7b | 66.5 |
| MuggleMath (Li et al., 2023a) | Llama-2-7b | 68.4 |
| ToRA (Gou et al., 2024) | Llama-2-7b | 68.8 |
| _Models w/o knowledge distillation_ | | |
| SFT (Yuan et al., 2023) | Llama-1-7b | 35.9 |
| RFT (K=12) (Yuan et al., 2023) | Llama-1-7b | 41.6 |
| RFT (K=100) (Yuan et al., 2023) | Llama-1-7b | 41.7 |
| SFT (Yuan et al., 2023) | Llama-2-7b | 41.6 |
| RFT (K=12) (Yuan et al., 2023) | Llama-2-7b | 45.3 |
| RFT (K=100) (Yuan et al., 2023) | Llama-2-7b | 47.5 |
| _Ours_ | | |
| DPO-ST (K=10) | Llama-1-7b | 44.7 |
| DPO-ST (K=10) | Llama-2-7b | 54.7 |
| DPO-ST (K=10) | Llama-3-8b | **68.8** |

Table 4: Comparison with state-of-the-art closed-source models and open-source models based on the Llama model family (Touvron et al., 2023a,b; Meta, 2024).
Figure 7: GSM8K development set accuracy of Flan-T5-Large with and without the use of an external calculator during inference.
process. To evaluate the impact of this integration,
we conduct an ablation study by omitting the calculator and present the findings in Figure 7. Our
results indicate that decoding without the calculator markedly reduces accuracy across all iterations.
We believe that this is because models will generate a large amount of false-positive pseudo-labels without the calculator, that is, the generated pseudo-labels
may have correct final answers but make errors in
the intermediate reasoning steps.
**5** **Related Work**
[Figure 6 plots: comparison of the model before and after the DPO step. Left: greedy decoding accuracy (Pass@1). Middle: Pass@10 accuracy. Right: number of generated CoT pseudo-labels on the training set.]
multiple rationales, i.e., 10 solutions per question,
are sampled with temperature 0.7 (measured with
the Pass@10 metric). This indicates that the DPO
training objective makes language models inclined
to generate rationales of both high quality and diversity. We also compare the number of generated
rationales on the training set L for models with
and without the DPO step. Figure 6 (right) clearly
shows that the model after the DPO step can produce more SFT data for the next iteration.
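For clarity, the Pass@k numbers referred to here can be computed empirically as the fraction of problems solved by at least one of k samples; the minimal sketch below is an illustration and not necessarily the exact estimator used in the paper.

```python
def pass_at_k(correctness_per_problem, k):
    """Empirical Pass@k: fraction of problems for which at least one of the
    first k sampled rationales is correct.

    `correctness_per_problem` holds one list of booleans per problem,
    e.g. [[True, False, ...], [False, False, ...], ...].
    """
    solved = sum(any(flags[:k]) for flags in correctness_per_problem)
    return solved / len(correctness_per_problem)
```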
**Learning from pseudo-labels** Supervised fine-tuning (SFT) is a prevalent technique employed to
enhance the performance of pre-trained language
models on specific downstream tasks (Ouyang
et al., 2022; Chung et al., 2024). However, this
method heavily depends on the availability of highquality labeled data, which can be both expensive
and labor-intensive to procure (Brown et al., 2020).
To address this limitation, various strategies have
been developed to generate high-quality pseudolabels using either unlabeled or synthetic data for
a wide range of applications, including text classification (Xie et al., 2020), sentence representation learning (Wang and Lu, 2022), instruction
tuning (Honovich et al., 2022), and math reasoning (Wang and Lu, 2023). Recent advancements
in this area primarily focus on two directions: selftraining and knowledge distillation. The key difference between these methods lies in the source
of the pseudo-labels: self-training uses the model’s
own predictions on unlabeled data, while knowledge distillation utilizes the insights from larger,
more powerful models.
**4.5** **Effects of External Calculator**
Driven by the observation that small-scale LMs
frequently make basic calculation errors, we develop a simple yet efficient method that integrates
an external calculator into the models’ decoding
**6** **Conclusion**
We present an effective and resource-efficient
method called DPO-augmented Self-Training
(DPO-ST), which augments the original Self-Training algorithm with Direct Preference Optimization (Rafailov et al., 2023). Unlike previous
studies that improve small-scale language models’
reasoning abilities by distilling a larger and more
powerful model, we argue that small models that
are trained merely on the limited human-labeled
data can improve themselves significantly. We also
empirically find that models trained with DPO loss
can generate pseudo-labeled data with higher quality and diversity. Our experiments demonstrate that
the proposed method not only outperforms existing methods with comparable model sizes on the
GSM8K benchmark, but also achieves remarkable
resource efficiency in terms of both computational
cost and the requirements of human-labeled data.
**Limitations**
**Use of unlabeled data** Our method is built upon
the classic self-training algorithm, which provides
an effective semi-supervised learning framework
that makes good use of unlabeled data. However,
this work doesn’t explore the use of unlabeled data
explicitly. Future research efforts can be made to
explore how to collect high-quality unlabeled data
for math word problem solving. In other words,
we need to find an efficient method for collecting
unlabeled data $\mathcal{U} = \{(x^i, a^i)\}_{i=1}^{u}$ such that for each math question $x^i$, there is a corresponding ground-truth answer $a^i$.
**Generalization to other tasks** One of the limitations of this work is the narrow scope of our
experiments, which were exclusively conducted on
math reasoning tasks. The primary reason for this
limitation is the lack of appropriate training data
for other reasoning tasks. As our method requires
a set of training data with chain-of-thought labels,
many existing reasoning tasks lack such annotations, making it challenging to extend our experiments beyond the current scope. Future research
may focus on identifying and developing suitable
datasets for a wider range of reasoning tasks to
fully evaluate the applicability and effectiveness of
our method across different reasoning tasks.
**Self-training in language model** Recently, we
have witnessed a large number of works focusing on self-training algorithms for language models (He et al., 2019; Zelikman et al., 2022; Yuan
et al., 2023). Most of such methods are built
upon the classic self-training framework (Scudder, 1965). He et al. (2019) empirically studied
the effectiveness of self-training in natural language generation tasks, e.g., summarization and
translation. Zelikman et al. (2022) proposed self_taught reasoner (STaR), which demonstrated that_
language models can be iteratively improved from
their own generations, even when there are no gold rationales provided. Yuan et al. (2023) proposed _rejection sampling fine-tuning_ to improve language
models’ math reasoning abilities. This method can
be interpreted as only executing one iteration of
the self-training algorithm. Singh et al. (2023) proposed ReST[EM], a self-improving algorithm based
on expectation-maximization framework. This
method demonstrates significant improvements in
problem-solving tasks, e.g., math reasoning and
code generation.
**Knowledge distillation from LLMs** Many of
the recent research efforts demonstrated large
language models (LLMs) are capable of doing
math reasoning (Wei et al., 2022b; Gao et al.,
2022; OpenAI, 2023; Anil et al., 2023; Anthropic,
2023, 2024). Therefore, a recent line of work
focuses on improving smaller language models’
reasoning abilities by distilling chain-of-thought
pseudo-labels from LLMs (Ho et al., 2023; Magister et al., 2023; Fu et al., 2023). For example,
Luo et al. (2023) proposed Reinforcement Learning from Evol-Instruct Feedback built upon EvolInstruct (Xu et al., 2023), which requires ChatGPT
to provide the training signals. An et al. (2023)
demonstrated that language models can effectively
learn from the mistakes that can be corrected by
larger models, e.g., GPT-4 (OpenAI, 2023), during
supervised fine-tuning. Yu et al. (2024) proposed a
novel question bootstrapping method with the help
of larger models to augment the existing training
dataset. Although these methods are shown to have
promising experimental results, they are costly to
implement as large models cost more FLOPs during inference. Our work demonstrates that small-scale language models can also learn from their
own generations like the larger ones (Zelikman
et al., 2022), which is more resource-efficient compared with the knowledge distillation methods.
**Acknowledgements**
This work was done when Shichen Li was a visiting student at the StatNLP Research Group of
SUTD. We would like to thank the anonymous reviewers, our meta-reviewer, and senior area chairs
for their constructive comments and support on
this work. This research/project is supported by
Ministry of Education, Singapore, under its Academic Research Fund (AcRF) Tier 2 Programme
(MOE AcRF Tier 2 Award No: MOET2EP201220011), the National Research Foundation Singapore and DSO National Laboratories under the
AI Singapore Program (AISG Award No: AISG2RP-2020-016), and Ministry of Education, Singapore, under its Tier 3 Programme (The Award No.:
MOET320200004). Any opinions, findings and
conclusions or recommendations expressed in this
material are those of the authors and do not reflect
the views of the funding agencies.
**References**
Massih-Reza Amini, Vasilii Feofanov, Loïc
Pauletto, Emilie Devijver, and Yury Maximov.
2022. [Self-training: A survey.](https://arxiv.org/abs/2202.12040) _arXiv preprint_
_arXiv:2202.12040._
Shengnan An, Zexiong Ma, Zeqi Lin, Nanning Zheng,
[Jian-Guang Lou, and Weizhu Chen. 2023. Learn-](https://arxiv.org/abs/2310.20689)
[ing from mistakes makes llm better reasoner. arXiv](https://arxiv.org/abs/2310.20689)
_preprint arXiv:2310.20689._
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak
Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng
[Chen, et al. 2023. Palm 2 technical report. arXiv](https://arxiv.org/abs/2305.10403)
_preprint arXiv:2305.10403._
[Anthropic. 2023. Claude 2. https://www.anthropic.](https://www.anthropic.com/news/claude-2)
[com/news/claude-2. Accessed: 2024-05-06.](https://www.anthropic.com/news/claude-2)
[Anthropic. 2024. The claude 3 model family: Opus,](https://www-cdn.anthropic.com/de8ba9b01c9ab7cbabf5c33b80b7bbc618857627/Model_Card_Claude_3.pdf)
[sonnet, haiku. Accessed: 2024-05-06.](https://www-cdn.anthropic.com/de8ba9b01c9ab7cbabf5c33b80b7bbc618857627/Model_Card_Claude_3.pdf)
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster,
Marco Dos Santos, Stephen McAleer, Albert Q Jiang,
Jia Deng, Stella Biderman, and Sean Welleck. 2024.
[Llemma: An open language model for mathematics.](https://arxiv.org/abs/2310.10631)
In Proceedings of ICLR.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda
Askell, Anna Chen, Nova DasSarma, Dawn Drain,
Stanislav Fort, Deep Ganguli, Tom Henighan, et al.
[2022. Training a helpful and harmless assistant with](https://arxiv.org/abs/2204.05862)
[reinforcement learning from human feedback. arXiv](https://arxiv.org/abs/2204.05862)
_preprint arXiv:2204.05862._
Tom Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss,
[Gretchen Krueger, et al. 2020. Language models](https://arxiv.org/abs/2005.14165)
[are few-shot learners. In Proceedings of NeurIPS.](https://arxiv.org/abs/2005.14165)
Stephen Casper, Xander Davies, Claudia Shi,
Thomas Krendl Gilbert, J’er’emy Scheurer, Javier
Rando, Rachel Freedman, Tomasz Korbak, David
Lindner, et al. 2023. [Open problems and funda-](https://arxiv.org/abs/2307.15217)
[mental limitations of reinforcement learning from](https://arxiv.org/abs/2307.15217)
[human feedback. Transactions on Machine Learning](https://arxiv.org/abs/2307.15217)
_Research._
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming
Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph,
Greg Brockman, et al. 2021. [Evaluating large](https://arxiv.org/abs/2107.03374)
[language models trained on code. arXiv preprint](https://arxiv.org/abs/2107.03374)
_arXiv:2107.03374._
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul
Barham, Hyung Won Chung, Charles Sutton, Sebas[tian Gehrmann, et al. 2022. Palm: Scaling language](https://jmlr.org/papers/volume24/22-1144/22-1144.pdf)
[modeling with pathways. Journal of Machine Learn-](https://jmlr.org/papers/volume24/22-1144/22-1144.pdf)
_ing Research._
Hyung Won Chung, Le Hou, Shayne Longpre, Barret
Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi
Wang, Mostafa Dehghani, Siddhartha Brahma, et al.
[2024. Scaling instruction-finetuned language models.](https://arxiv.org/abs/2210.11416)
_Journal of Machine Learning Research, 25(70):1–53._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
[Nakano, et al. 2021. Training verifiers to solve math](https://arxiv.org/abs/2110.14168)
[word problems. arXiv preprint arXiv:2110.14168.](https://arxiv.org/abs/2110.14168)
[Stanley C. Fralick. 1967. Learning to recognize patterns](https://api.semanticscholar.org/CorpusID:11609879)
[without a teacher. IEEE Trans. Inf. Theory.](https://api.semanticscholar.org/CorpusID:11609879)
Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and
[Tushar Khot. 2023. Specializing smaller language](https://proceedings.mlr.press/v202/fu23d.html)
[models towards multi-step reasoning. In Proceedings](https://proceedings.mlr.press/v202/fu23d.html)
_of ICML._
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon,
Pengfei Liu, Yiming Yang, Jamie Callan, and Gra[ham Neubig. 2022. Pal: Program-aided language](https://arxiv.org/abs/2211.10435)
[models. arXiv preprint arXiv:2211.10435.](https://arxiv.org/abs/2211.10435)
Zhibin Gou, Zhihong Shao, Yeyun Gong, Yujiu Yang,
Minlie Huang, Nan Duan, Weizhu Chen, et al. 2024.
[Tora: A tool-integrated reasoning agent for mathe-](https://arxiv.org/abs/2309.17452)
[matical problem solving. In Proceedings of ACL.](https://arxiv.org/abs/2309.17452)
Caglar Gulcehre, Tom Le Paine, Srivatsan Srinivasan, Ksenia Konyushkova, Lotte Weerts, Abhishek
Sharma, Aditya Siddhant, Alex Ahern, Miaosen
Wang, Chenjie Gu, et al. 2023. [Reinforced self-](https://arxiv.org/abs/2308.08998)
[training (rest) for language modeling. arXiv preprint](https://arxiv.org/abs/2308.08998)
_arXiv:2308.08998._
Junxian He, Jiatao Gu, Jiajun Shen, and Marc’Aurelio
Ranzato. 2019. Revisiting [self-training](https://arxiv.org/abs/1909.13788) for
[neural sequence generation.](https://arxiv.org/abs/1909.13788) _arXiv preprint_
_arXiv:1909.13788._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
[Jacob Steinhardt. 2021. Measuring mathematical](https://arxiv.org/abs/2103.03874)
[problem solving with the math dataset. In Proceed-](https://arxiv.org/abs/2103.03874)
_ings of NeurIPS._
Namgyu Ho, Laura Schmid, and Se-Young Yun. 2023.
[Large language models are reasoning teachers. In](https://aclanthology.org/2023.acl-long.830)
_Proceedings of ACL._
Or Honovich, Thomas Scialom, Omer Levy, and Timo
[Schick. 2022. Unnatural instructions: Tuning lan-](https://arxiv.org/abs/2212.09689)
[guage models with (almost) no human labor. arXiv](https://arxiv.org/abs/2212.09689)
_preprint arXiv:2212.09689._
Marek Kadlčík, Michal Štefánik, Ondřej Sotolář, and
[Vlastimil Martinek. 2023. Calc-x and calcformers:](https://arxiv.org/abs/2305.15017)
[Empowering arithmetical chain-of-thought through](https://arxiv.org/abs/2305.15017)
[interaction with symbolic systems. In Proceedings](https://arxiv.org/abs/2305.15017)
_of EMNLP._
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B
Brown, Benjamin Chess, Rewon Child, Scott Gray,
Alec Radford, Jeffrey Wu, and Dario Amodei. 2020.
[Scaling laws for neural language models. ArXiv.](https://arxiv.org/abs/2001.08361)
Muhammad Khalifa, Lajanugen Logeswaran, Moon[tae Lee, Honglak Lee, and Lu Wang. 2023. Grace:](https://aclanthology.org/2023.findings-emnlp.1022/)
[Discriminator-guided chain-of-thought reasoning. In](https://aclanthology.org/2023.findings-emnlp.1022/)
_Findings of EMNLP._
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yu[taka Matsuo, and Yusuke Iwasawa. 2022. Large lan-](https://arxiv.org/abs/2205.11916)
[guage models are zero-shot reasoners. In Proceed-](https://arxiv.org/abs/2205.11916)
_ings of NeurIPS._
Chengpeng Li, Zheng Yuan, Guanting Dong, Keming
Lu, Jiancan Wu, Chuanqi Tan, Xiang Wang, and
[Chang Zhou. 2023a. Query and response augmenta-](https://arxiv.org/abs/2310.05506)
[tion cannot help out-of-domain math reasoning gen-](https://arxiv.org/abs/2310.05506)
[eralization. arXiv preprint arXiv:2310.05506.](https://arxiv.org/abs/2310.05506)
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen,
[Jian-Guang Lou, and Weizhu Chen. 2023b. Making](https://aclanthology.org/2023.acl-long.291)
[language models better reasoners with step-aware](https://aclanthology.org/2023.acl-long.291)
[verifier. In Proceedings of ACL.](https://aclanthology.org/2023.acl-long.291)
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei
[Lin, Shifeng Chen, and Dongmei Zhang. 2023. Wiz-](https://arxiv.org/abs/2308.09583)
[ardmath: Empowering mathematical reasoning for](https://arxiv.org/abs/2308.09583)
[large language models via reinforced evol-instruct.](https://arxiv.org/abs/2308.09583)
_arXiv preprint arXiv:2308.09583._
Lucie Charlotte Magister, Jonathan Mallinson, Jakub
Adamek, Eric Malmi, and Aliaksei Severyn. 2023.
[Teaching small language models to reason. In Pro-](https://aclanthology.org/2023.acl-short.151)
_ceedings of ACL._
Meta. 2024. Llama 3. [https://llama.meta.com/](https://llama.meta.com/llama3/)
[llama3/. Accessed: 2024-06-01.](https://llama.meta.com/llama3/)
Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su.
[2020. A diverse corpus for evaluating and developing](https://aclanthology.org/2020.acl-main.92/)
[english math word problem solvers. In Proceedings](https://aclanthology.org/2020.acl-main.92/)
_of ACL._
[OpenAI. 2023. Gpt-4 technical report. arXiv preprint](https://arxiv.org/abs/2303.08774)
_arXiv:2303.08774._
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang,
Sandhini Agarwal, Katarina Slama, Alex Ray, John
Schulman, Jacob Hilton, Fraser Kelton, Luke E.
Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Francis Christiano, Jan Leike, and Ryan J.
[Lowe. 2022. Training language models to follow](https://proceedings.neurips.cc/paper_files/paper/2022/file/b1efde53be364a73914f58805a001731-Paper-Conference.pdf)
[instructions with human feedback. In Proceedings of](https://proceedings.neurips.cc/paper_files/paper/2022/file/b1efde53be364a73914f58805a001731-Paper-Conference.pdf)
_NeurIPS._
[Aaron Parisi, Yao Zhao, and Noah Fiedel. 2022. Talm:](https://arxiv.org/abs/2205.12255)
[Tool augmented language models. arXiv preprint](https://arxiv.org/abs/2205.12255)
_arXiv:2205.12255._
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
[2021. Are NLP models really able to solve simple](https://aclanthology.org/2021.naacl-main.168)
[math word problems? In Proceedings of NAACL.](https://aclanthology.org/2021.naacl-main.168)
Alec Radford, Jeff Wu, Rewon Child, David Luan,
[Dario Amodei, and Ilya Sutskever. 2019. Language](https://api.semanticscholar.org/CorpusID:160025533)
[models are unsupervised multitask learners. OpenAI](https://api.semanticscholar.org/CorpusID:160025533)
_blog._
Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano
Ermon, Christopher D Manning, and Chelsea Finn.
[2023. Direct preference optimization: Your language](https://openreview.net/pdf?id=HPuSIXJaa9)
[model is secretly a reward model. In Proceedings of](https://openreview.net/pdf?id=HPuSIXJaa9)
_NeurIPS._
Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi
[Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the](https://api.semanticscholar.org/CorpusID:204838007)
[limits of transfer learning with a unified text-to-text](https://api.semanticscholar.org/CorpusID:204838007)
[transformer. Journal of Machine Learning Research.](https://api.semanticscholar.org/CorpusID:204838007)
[Subhro Roy and Dan Roth. 2015. Solving general arith-](https://aclanthology.org/D15-1202)
[metic word problems. In Proceedings of EMNLP.](https://aclanthology.org/D15-1202)
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta
Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola
[Cancedda, and Thomas Scialom. 2023. Toolformer:](https://openreview.net/pdf?id=Yacmpz84TH)
[Language models can teach themselves to use tools.](https://openreview.net/pdf?id=Yacmpz84TH)
In Proceedings of NeurIPS.
John Schulman, Filip Wolski, Prafulla Dhariwal,
Alec Radford, and Oleg Klimov. 2017. [Proxi-](https://arxiv.org/abs/1707.06347)
[mal policy optimization algorithms. arXiv preprint](https://arxiv.org/abs/1707.06347)
_arXiv:1707.06347._
[H. J. Scudder. 1965. Probability of error of some adap-](https://api.semanticscholar.org/CorpusID:30807376)
[tive pattern-recognition machines. IEEE Trans. Inf.](https://api.semanticscholar.org/CorpusID:30807376)
_Theory._
Kumar Shridhar, Alessandro Stolfo, and Mrinmaya
[Sachan. 2023. Distilling reasoning capabilities into](https://aclanthology.org/2023.findings-acl.441)
[smaller language models. In Findings of ACL.](https://aclanthology.org/2023.findings-acl.441)
Avi Singh, John D Co-Reyes, Rishabh Agarwal, Ankesh
Anand, Piyush Patil, Peter J Liu, James Harrison, Jaehoon Lee, Kelvin Xu, Aaron Parisi, et al.
[2023. Beyond human data: Scaling self-training](https://arxiv.org/pdf/2312.06585.pdf)
[for problem-solving with language models. arXiv](https://arxiv.org/pdf/2312.06585.pdf)
_preprint arXiv:2312.06585._
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier
Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal
Azhar, et al. 2023a. Llama: [Open and effi-](https://arxiv.org/abs/2302.13971)
[cient foundation language models. arXiv preprint](https://arxiv.org/abs/2302.13971)
_arXiv:2302.13971._
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, et al. 2023b. [Llama 2: Open founda-](https://arxiv.org/abs/2307.09288)
[tion and fine-tuned chat models.](https://arxiv.org/abs/2307.09288) _arXiv preprint_
_arXiv:2307.09288._
Tu Vu, Minh-Thang Luong, Quoc Le, Grady Simon,
[and Mohit Iyyer. 2021. STraTA: Self-training with](https://aclanthology.org/2021.emnlp-main.462)
[task augmentation for better few-shot learning. In](https://aclanthology.org/2021.emnlp-main.462)
_Proceedings of EMNLP._
Ben Wang and Aran Komatsuzaki. 2021. GPT-J6B: A 6 Billion Parameter Autoregressive Lan[guage Model. https://github.com/kingoflolz/](https://github.com/kingoflolz/mesh-transformer-jax)
[mesh-transformer-jax.](https://github.com/kingoflolz/mesh-transformer-jax)
[Tianduo Wang and Wei Lu. 2022. Differentiable data](https://arxiv.org/abs/2210.16536)
[augmentation for contrastive sentence representation](https://arxiv.org/abs/2210.16536)
[learning. In Proceedings of EMNLP.](https://arxiv.org/abs/2210.16536)
[Tianduo Wang and Wei Lu. 2023. Learning multi-step](https://arxiv.org/abs/2306.01707)
[reasoning by solving arithmetic tasks. In Proceed-](https://arxiv.org/abs/2306.01707)
_ings of ACL._
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel,
Barret Zoph, Sebastian Borgeaud, Dani Yogatama,
Maarten Bosma, Denny Zhou, Donald Metzler, et al.
[2022a. Emergent abilities of large language models.](https://arxiv.org/pdf/2206.07682.pdf)
_Transactions on Machine Learning Research._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022b.
[Chain of thought prompting elicits reasoning in large](https://arxiv.org/abs/2201.11903)
[language models. In Proceedings of NeurIPS.](https://arxiv.org/abs/2201.11903)
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien
Chaumond, Clement Delangue, Anthony Moi, Pier[ric Cistac, Tim Rault, et al. 2020. Transformers:](https://aclanthology.org/2020.emnlp-demos.6/)
[State-of-the-art natural language processing. In Pro-](https://aclanthology.org/2020.emnlp-demos.6/)
_ceedings of EMNLP._
Qizhe Xie, Zihang Dai, Eduard Hovy, Thang Luong, and
[Quoc Le. 2020. Unsupervised data augmentation for](https://arxiv.org/abs/1904.12848)
[consistency training. In Proceedings of NeurIPS.](https://arxiv.org/abs/1904.12848)
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng,
Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin
Jiang. 2023. [Wizardlm: Empowering large lan-](https://arxiv.org/abs/2304.12244)
[guage models to follow complex instructions. arXiv](https://arxiv.org/abs/2304.12244)
_preprint arXiv:2304.12244._
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu,
Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo
[Li, Adrian Weller, and Weiyang Liu. 2024. Meta-](https://arxiv.org/abs/2309.12284)
[math: Bootstrap your own mathematical questions](https://arxiv.org/abs/2309.12284)
[for large language models. In Proceedings of ICLR.](https://arxiv.org/abs/2309.12284)
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting
[Dong, Chuanqi Tan, and Chang Zhou. 2023. Scal-](https://arxiv.org/pdf/2308.01825v2.pdf)
[ing relationship on learning mathematical reason-](https://arxiv.org/pdf/2308.01825v2.pdf)
[ing with large language models.](https://arxiv.org/pdf/2308.01825v2.pdf) _arXiv preprint_
_arXiv:2308.01825._
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen.
[2023. Mammoth: Building math generalist models](https://arxiv.org/abs/2309.05653)
[through hybrid instruction tuning. arXiv preprint](https://arxiv.org/abs/2309.05653)
_arXiv:2309.05653._
Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Good[man. 2022. Star: Bootstrapping reasoning with rea-](https://openreview.net/pdf?id=_3ELRdg2sgI)
[soning. In Proceedings of NeurIPS.](https://openreview.net/pdf?id=_3ELRdg2sgI)
| [
"Tianduo, Wang",
"Wei, Lu",
"Vivek, Srikumar",
"Shichen, Li",
"Lun-Wei, Ku",
"Andre, Martins"
] | 2024-08-01T00:00:00 | ACL 2024 Long Papers | true | 0 | 0 | null | https://aclanthology.org/2024.acl-long.643 | https://arxiv.org/abs/2407.18248 | https://www.semanticscholar.org/paper/c6f21c64c08295e595c82602c37f0bcac96d3907 |
Self-training Language Models for Arithmetic Reasoning | Language models achieve impressive results in tasks involving complex multistep reasoning, but scaling these capabilities further traditionally requires expensive collection of more annotated data. In this work, we explore the potential of improving the capabilities of language models without new data, merely using automated feedback to the validity of their predictions in arithmetic reasoning (self-training). We find that models can substantially improve in both single-round (offline) and online self-training. In the offline setting, supervised methods are able to deliver gains comparable to preference optimization, but in online self-training, preference optimization shows to largely outperform supervised training thanks to superior stability and robustness on unseen types of problems. | This work explores the potential of improving models' reasoning capabilities without new data, merely using automated feedback to the validity of their predictions in arithmetic reasoning (self-training), and finds that models can substantially improve in both single-round (offline) and online self-training. | ## Self-training Language Models for Arithmetic Reasoning
**Marek Kadlčík[∗]** **and Michal Štefánik[∗]**
Faculty of Informatics, Masaryk University, Czech Republic
```
{kadlcik,stefanik.m}@mail.muni.cz
```
_∗equal contribution_
**Abstract**
Language models achieve impressive results
in tasks involving complex multistep reasoning, but scaling these capabilities further
traditionally requires expensive collection
of more annotated data. In this work, we
explore the potential of improving the capabilities of language models without new
data, merely using automated feedback to
the validity of their predictions in arithmetic reasoning (self-training).
We find that models can substantially improve in both single-round (offline) and online self-training. In the offline setting, supervised methods are able to deliver gains
comparable to preference optimization, but
in online self-training, preference optimization shows to largely outperform supervised
training thanks to superior stability and robustness on unseen types of problems.
SELF-TRAINING LOOP
**Sample a problem,**
**generate 16 predictions**
**feedback:**
**is result correct?**
✘ ✔ ✘ ✘ **... ✘** ✔
**training data**
_Variant - SFT:_ _Variant - Preferences:_
**• prediction 2** **• prediction 2 > prediction 1**
**• prediction 5** **• prediction 2 > prediction 3**
**• ...** **• ...**
**• prediction 16** **• prediction 16 > prediction 15**
Figure 1: Schema of self-training that we apply to
provide the model with feedback to its predictions.
In the offline variant, the model generates all predictions in a single round. In the online variant,
the training data is continuously generated.
model outputs (Hu et al., 2023) while reflecting
heavily on the model’s abstractive reasoning.
Our experiments address the two main research questions:
**RQ1: To what extent can abstract reasoning**
abilities of language models improve by selftraining without additional data?
**RQ2: Can the preference optimization bring**
further improvements to models’ capabilities
over traditional supervised fine-tuning?
We address these by implementing two variants of self-training: (1) an offline variant,
where the model responses, used to further
train the model, are generated in a single iteration (§3.1), and (2) an online variant, where
the model obtains feedback on its predictions
instantly during the training (§3.2).
Our experiments reveal that self-training provides a valuable training signal and significantly
**1** **Introduction**
Despite recent improvements in the practical
usability of language models (LMs) (Wang
et al., 2023), these models often struggle with
tasks requiring reasoning, i.e., a process of inferring a conclusion or decision logically and
systematically (Huang and Chang, 2023). Previous work improves the reasoning capabilities
of language models by scaling training data
to more diverse (Kadlčík et al., 2023) or complex (Hendrycks et al., 2021) collections, but
reaching further improvements in this direction
becomes exceedingly expensive.
In this work, we evaluate the potential of improving models’ capabilities by training with
implicit, automated feedback to models’ responses. Arithmetic reasoning provides a rare
environment where the quality of the model’s
responses can be automatically assessed against
the annotated correct results rather than expensive and possibly subjective judgments of
**3** **Experiments**
Our experiments build upon the 3-billion-parameter FLAN models fine-tuned specifically
for arithmetic reasoning in previous work of
Kadlčík et al. (2023). These relatively compact
calculator-assisted models called Calcformers were shown to perform noticeably well
on multi-step reasoning, while on single-step
and two-step problems they perform similarly to
Llama-2 with 70B parameters (Touvron et al.,
2023). Another desideratum of these models is
their transparency of training data: they were
trained on a superset of our training collection,
meaning that in our experiments, we do not
train the models on any new data.
We self-train these models with the prompts
from Ape210K (Zhao et al., 2020), the largest
available dataset of over 200,000 math problems. In addition to Ape210K’s test set,
we evaluate our models on five other math
datasets, assessing the robustness of models’
capabilities in new types of math problems;
GSM8K (Cobbe et al., 2021) containing multistep elementary-grade problems requiring on
average 3.25 steps to achieve correct result,
AQuA-RAT (Ling et al., 2017) with more complex, multiple-choice tasks, and three simpler,
one to two-steps datasets: MAWPS (KoncelKedziorski et al., 2016), ASDiv-A (Miao et al.,
2020), and SVAMP (Patel et al., 2021).
In both online and offline self-training, we
use the model itself to generate training data
(see Fig. 1). The generated data consists of
the original input prompt ($x_i$) and associated
model predictions ($y_i$) in the form of a chain-of-thought sequence containing the model's final
result at the end. For each prompt, we generate 16 predictions using sampled generation.
Annotations of correct results then allow us
to automatically label each prediction as
either correct ($y_i^{OK}$) or incorrect ($y_i^{NOK}$),
assigning a set of both correct and incorrect
predictions to each input prompt.
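A minimal sketch of this generation-and-labeling step is shown below. It assumes a Hugging Face seq2seq checkpoint (the model name is a placeholder) and a hypothetical `extract_final_result` helper that parses the number at the end of a chain-of-thought sequence; the sampling parameters simply mirror the 16 predictions per prompt described above.

```python
# Sketch only: sample several predictions per prompt and label them by result correctness.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("MU-NLPC/calcformer-flan-xl")  # placeholder checkpoint name
model = AutoModelForSeq2SeqLM.from_pretrained("MU-NLPC/calcformer-flan-xl")

def extract_final_result(text: str) -> str:
    """Hypothetical helper: parse the final result from a chain-of-thought sequence."""
    return text.strip().split()[-1]

def generate_and_label(prompt: str, gold_result: str, n: int = 16):
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            do_sample=True,          # sampled generation, as in the paper
            num_return_sequences=n,  # 16 predictions per prompt
            max_new_tokens=512,
        )
    predictions = tokenizer.batch_decode(outputs, skip_special_tokens=True)
    correct = [p for p in predictions if extract_final_result(p) == gold_result]
    incorrect = [p for p in predictions if extract_final_result(p) != gold_result]
    return correct, incorrect  # the y_i^OK and y_i^NOK sets for prompt x_i
```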
In the case of supervised fine-tuning (SFT),
the dataset consists of pairs ($x_i$, $y_i^{OK}$). SFT
uses a standard next-token prediction with
cross-entropy loss and teacher forcing (Bahdanau et al., 2015). Further details of our
training setup can be found in Appendix A.
Preference optimization (PO) methods then
train on triples ($x_i$, $y_i^{OK}$, $y_i^{NOK}$), with the $y_i^{OK}$
**2** **Related Work**
Luo et al. (2023) train models with PPO (Schulman et al., 2017) against feedback on individual steps given by ChatGPT 3.5. Uesato et al.
(2022) apply self-training on GSM8K and compare the effectiveness of giving outcome-based
(per solution) or process-based (per each step
in the solution) feedback, concluding that the two
approaches result in comparable accuracy, but
outcome-based feedback delivers a higher error
rate in the rationales. Lightman et al. (2023)
study the same problem on a larger scale and
conclude that process-based feedback outperforms outcome-based feedback in end-result accuracy.
Our work is closest to Parisi et al. (2022)
and Zelikman et al. (2022). Parisi et al. (2022)
apply self-training with a traditional supervised objective: they train the model on a
small set of seed data and continuously use
the trained model to generate solutions for a
larger set, from which correct solutions are
used in another training epoch. They show
that three such subsequent epochs can improve
the accuracy. Zelikman et al. (2022) experiment with self-training with supervised fine-tuning on commonsense and math reasoning.
They report positive results of self-training on
the model's reasoning capabilities under specific conditions: (1) the initial model must be
capable enough to achieve improvements, and (2) training tasks must hold a negligible chance of random success (unlike, e.g.,
binary classification). Our work builds upon
these findings but differs from previous work
in several important aspects, mainly: (1) we
experiment with several different objectives applied in self-training, and (2) we implement and
evaluate both offline and online variants of self-training. Finally, we make our self-training implementations freely available for future work.[1]
[1https://github.com/prompteus/calc-x](https://github.com/prompteus/calc-x)
-----
| Method | GSM8K | AQuA-RAT | Ape210K | MAWPS | SVAMP | ASDiv-A |
| --- | --- | --- | --- | --- | --- | --- |
| Base checkpoint | 43.2±2.7 | 37.8±6.1 | 26.3±2.1 | 61.9±4.2 | 51.8±3.2 | 78.7±2.3 |
| SFT plain | **46.1±2.7** | 37.8±5.9 | 32.9±2.2 | 70.6±3.8 | 56.2±3.0 | 81.9±2.2 |
| SFT plain + LoRA | 44.9±2.7 | **39.0±5.9** | **37.3±2.2** | **80.8±3.5** | 55.8±3.1 | **82.8±2.1** |
| SFT balanced | 45.8±2.7 | 37.4±5.9 | 33.6±2.2 | 66.7±3.9 | **58.4±3.0** | 82.0±2.2 |
| SFT with negatives | 41.8±2.7 | 33.1±5.7 | 28.0±2.1 | 65.2±4.1 | 52.2±3.1 | 75.9±2.4 |
| DPO (β = 0.99) | 45.3±2.7 | 37.0±5.9 | 29.2±2.1 | 69.6±3.9 | 54.2±3.1 | 83.1±2.1 |
| DPO (β = 0.9) | 37.2±2.6 | 40.9±6.1 | 32.8±2.3 | 61.2±4.1 | 52.2±3.1 | 78.1±2.3 |
| DPO (β = 0.9) + LoRA | 45.9±2.7 | **41.3±6.1** | 32.4±2.2 | 64.4±4.0 | 57.1±3.1 | 84.7±2.0 |
| KTO (β = 0.3) | **47.1±2.7** | 38.6±6.1 | 36.4±2.2 | **78.3±3.5** | 55.8±3.1 | 85.3±2.0 |
| KTO (β = 0.1) | 47.0±2.7 | 40.6±6.1 | **37.9±2.3** | 68.3±3.9 | 57.2±3.1 | 86.4±1.9 |
| KTO (β = 0.1) + LoRA | 43.1±2.7 | 36.2±5.9 | 37.6±2.2 | 64.2±4.1 | 58.5±3.3 | 87.0±1.9 |
| IPO (τ = 0.9) | 38.4±2.7 | 39.0±5.9 | 26.9±2.1 | 71.3±3.8 | 64.6±3.0 | 87.4±1.9 |
| IPO (τ = 0.99) | 40.7±2.7 | 36.6±5.9 | 28.1±2.1 | 66.3±4.0 | 64.5±3.0 | **87.8±1.8** |
| IPO (τ = 0.99) + LoRA | 36.0±2.6 | 39.4±5.9 | 30.2±2.1 | 66.7±4.0 | **65.6±3.0** | **87.8±1.8** |
Table 1: Correct results obtained in offline self-training on Ape210K. For each preference optimization
method, we report results for its two best-performing configurations. Bold entries denote the best results
among supervised and preference optimization methods per dataset. Confidence intervals are bootstrapped
(500 samples, 1,000 repeats).
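For reference, confidence intervals of this kind (500 samples, 1,000 repeats, as stated in the caption) can be reproduced with a sketch like the one below; the exact resampling procedure used by the authors is not specified beyond these two numbers, so treat this as an assumption-laden illustration.

```python
# Sketch: percentile bootstrap over per-example correctness (0/1) outcomes.
import numpy as np

def bootstrap_accuracy_ci(is_correct, n_samples=500, n_repeats=1000, alpha=0.05, seed=0):
    """is_correct: array of 0/1 outcomes; returns (mean accuracy, lower bound, upper bound)."""
    rng = np.random.default_rng(seed)
    is_correct = np.asarray(is_correct)
    accs = []
    for _ in range(n_repeats):
        resample = rng.choice(is_correct, size=n_samples, replace=True)
        accs.append(resample.mean())
    lo, hi = np.percentile(accs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return is_correct.mean(), lo, hi
```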
marked as being preferred over $y_i^{NOK}$ in each
method. We experiment with three recent preference optimization methods: Direct Preference Optimization (DPO; Rafailov et al., 2023),
Kahneman-Tversky Optimization (KTO; Ethayarajh et al., 2024), and Identity Preference
Optimization (IPO; Azar et al., 2023). These
methods differ in a variety of aspects in the
formulation of training loss. For brevity, we
direct the reader to the referenced work for
further details.
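As a concrete illustration of one of these objectives, below is a minimal, self-contained sketch of the DPO loss computed on precomputed per-sequence log-probabilities; it is not the authors' implementation, and the β value and tensor names are placeholders.

```python
# Sketch of the DPO objective: prefer y_OK over y_NOK relative to a frozen reference model.
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_ok, policy_logp_nok, ref_logp_ok, ref_logp_nok, beta=0.9):
    """All inputs are per-sequence summed log-probabilities (tensors of shape [batch])."""
    # Log-ratio of policy vs. reference for the correct (chosen) and incorrect (rejected) predictions.
    chosen_rewards = beta * (policy_logp_ok - ref_logp_ok)
    rejected_rewards = beta * (policy_logp_nok - ref_logp_nok)
    # Negative log-sigmoid of the reward margin; minimized when the correct prediction is favored.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Usage with dummy values:
loss = dpo_loss(torch.tensor([-12.3]), torch.tensor([-15.0]),
                torch.tensor([-12.5]), torch.tensor([-14.0]))
```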
**3.1** **Offline Self-training**
In the offline variant, we perform a single iteration of collecting predictions with prompts from
Ape210K, resulting in over 24,000 prompts
with at least one positive and one negative
response.
All PO methods exhibit a crucial parameter β or τ that weights the KL regularization of the trained model toward the
original “reference” model. We tune this parameter over
β ∈ {0.01, 0.1, 0.3, 0.6, 0.9, 0.99} according to
in-domain validation accuracy separately for
each method and report the results for the best
two configurations.
For SFT, we experiment with three variants.
SFT plain is trained on pairs ($x_i$, $y_i^{OK}$). In
SFT balanced, we use two different correct
predictions $y_i^{OK}$ for each $x_i$. This variant compensates for the PO advantage of training on
two solutions per problem. Lastly, in SFT
with negatives, we use both positive $y_i^{OK}$
and negative $y_i^{NOK}$ sequences as targets for each $x_i$. In
the training data constructed from $y_i^{NOK}$, we
prefix $x_i$ with the phrase “Write incorrect solution for the following problem”.
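The three SFT variants amount to different ways of turning the labeled predictions into (input, target) pairs; a hedged sketch of that bookkeeping, with hypothetical function and field names, could look like this:

```python
# Sketch: build training pairs for the three SFT variants from labeled predictions.
def build_sft_pairs(prompt, correct, incorrect, variant="plain"):
    """prompt: str; correct/incorrect: lists of prediction strings; returns (input, target) pairs."""
    if variant == "plain":
        # One correct prediction per prompt.
        return [(prompt, correct[0])] if correct else []
    if variant == "balanced":
        # Two different correct predictions per prompt, mirroring PO's two solutions per problem.
        return [(prompt, y) for y in correct[:2]]
    if variant == "with_negatives":
        # Correct targets as-is; incorrect targets with an instruction prefix on the prompt.
        pairs = [(prompt, y) for y in correct[:1]]
        pairs += [("Write incorrect solution for the following problem: " + prompt, y)
                  for y in incorrect[:1]]
        return pairs
    raise ValueError(f"unknown variant: {variant}")
```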
Finally, we re-train the best-performing run
of each method with low-rank adaptation
(LoRA) (Hu et al., 2021), a commonly used fine-tuning regularization technique that restricts
the fine-tuning update of each weight matrix to a
specific low rank. We apply LoRA with a rank
of 32 on all linear projections in the model.
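A rank-32 LoRA setup of this kind can be expressed with the PEFT library roughly as follows; the exact module names targeted in the original experiments are not stated, so the `target_modules` list below is an assumption for a T5-style model, and the base checkpoint name is a stand-in.

```python
# Sketch: wrap a seq2seq model with rank-32 LoRA adapters on its linear projections.
from peft import LoraConfig, get_peft_model, TaskType
from transformers import AutoModelForSeq2SeqLM

base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xl")  # stand-in for the Calcformer checkpoint

lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=32,                       # rank of the low-rank update, as in the paper
    lora_alpha=32,              # scaling factor (assumed; not reported)
    lora_dropout=0.0,
    target_modules=["q", "k", "v", "o", "wi_0", "wi_1", "wo"],  # assumed T5 linear projections
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights remain trainable
```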
**Results** Table 1 compares the accuracy
achieved in offline self-training with each
method. A comparison of supervised and more
complex preference optimization methods reveals a relatively small difference between the
best-performing configurations of both categories. Especially thanks to LoRA regularization, SFT reaches
comparable results on most datasets. Among all supervised methods, SFT with negatives performs the worst, showing that using negative
feedback in supervised training analogously to
preference optimization is non-trivial.
Similar to SFT, LoRA regularization also
has a positive effect on DPO, indicating DPO's
stronger tendency to overfit, as also evidenced by previous work (Azar et al., 2023).
On the practical side, we note that PO methods converge much faster than SFT methods,
-----
| Method | GSM8K | AQuA-RAT | Ape210K | MAWPS | SVAMP | ASDiv-A |
| --- | --- | --- | --- | --- | --- | --- |
| Base checkpoint | 43.2±2.7 | 37.8±6.1 | 26.3±2.1 | 61.9±4.2 | 51.8±3.2 | 78.7±2.3 |
| SFT | 27.4±2.5 | 7.9±3.3 | 41.2±2.3 | 63.8±4.2 | 59.8±3.1 | 83.3±2.1 |
| DPO (β = 0.9) | 49.1±2.7 | **39.8±5.9** | 37.9±2.3 | 79.6±3.4 | 57.3±3.1 | 85.6±2.0 |
| KTO (β = 0.1) | **52.7±2.7** | 36.6±6.1 | **49.6±2.4** | **85.2±3.0** | **62.6±3.1** | **90.6±1.6** |
| IPO (τ = 0.99) | 49.1±2.8 | 35.8±5.9 | 42.2±2.3 | 81.5±3.4 | 56.8±3.0 | 86.6±1.9 |
Table 2: Correct results obtained in online self-training on Ape210K problems. Bold denotes the best
result per dataset. Confidence intervals are obtained from bootstrapping (500 samples, 1,000 repeats).
achieving the best validation scores on average
after around 2,400 training steps compared to
16,600 steps in supervised setups. A detailed
comparison can be found in Table 3.
**3.2** **Online Self-training**
In the online self-training, we generate the
training data on the fly. Therefore, throughout the whole training, both the positive and
negative predictions used for conditioning the
updates can realistically be generated by the
trained model. Previous work showed that
exposing the model to its own outputs might
itself improve its robustness (Štefánik et al.,
2023). In our setting, we assess the LM's
capability to autonomously improve its logical
reasoning based on up-to-date
feedback on its predictions.
The methodology of constructing training samples from the model's predictions remains identical to the offline variant for both SFT
and PO methods. Details of data processing can
be found in Appendix A.1. As the generation
process in online training substantially slows
down updates, we restrict the scale of these experiments to the best-performing configurations
from the offline variant.
**Results** Table 2 shows the accuracy of training methods in online self-training. This setting reveals much larger differences between
methods. Supervised fine-tuning (SFT) improves accuracy on the simple one-step and two-step datasets (MAWPS, SVAMP, and ASDiv-A) but substantially degrades performance on
out-of-distribution GSM8K and AQuA-RAT.
Manual inspection reveals that the degradation
on AQuA-RAT is caused by the model forgetting the response format of multiple-choice
questions, which is well preserved by all PO methods.
Contrary to SFT, PO methods deliver
significant improvements compared to both the
base checkpoint and their offline variants (Table 1). Notable is the improvement of DPO
by 11.9% on GSM8K, among other cases, suggesting that self-training can mitigate overfitting of PO methods. The best-performing KTO
method also improves substantially compared
to the offline variant: by 11.3% on in-domain
Ape210K and by 16.9% on the simpler, out-of-domain MAWPS. Among all online methods, KTO performs best on every dataset except AQuA-RAT.
Appendix B provides a per-sample analysis of differences between outputs of SFT and
PO models. Notably, we find that while
SFT allows the model to achieve remarkable
improvements, this comes at the price of the faithfulness and usability of its rationales: the
SFT model learns to completely or partially
omit a reliable rationale.
**4** **Conclusions**
This work explores the potential of autonomously improving language models for
arithmetic reasoning, where the task allows
automated, immediate, and objective feedback
based on the correct results. We experiment
with two settings: (i) offline self-training, collecting the feedback in a single iteration, and
(ii) online self-training, where the model trains
continuously from feedback on its up-to-date
predictions. In both settings, we apply and
compare recent preference optimization methods (DPO, KTO, IPO) with standard supervised training (SFT).
We find that both offline and online
self-training provide an opportunity to improve models' capabilities without any new
data, using exclusively the models' own predictions and automated feedback. Beyond
the offline variant, online self-training provides
further opportunities for substantial improvements thanks to the enhanced robustness of
preference optimization methods.
-----
**Limitations**
Although our proposed self-training
methods do not require any new data, we acknowledge their limitation in the extensive
computational requirements of generating the data. While the data generation for the
offline variant can be parallelized, this is more
difficult for the online variant, where the model
is trained with its own most recent predictions.
As a result, our self-training experiments took
between 15 and 30 days to converge on a single
Nvidia A100 GPU.
The time-demanding character of online self-training experiments is a direct cause of another limitation: the constrained diversity of
models and datasets that we experiment with.
As such, the experiments and conclusions of
our work should inspire experiments with self-training in other applications but should not be
generalized into claims about the general efficiency
of self-training.
**References**
Mohammad Gheshlaghi Azar, Mark Rowland, Bilal
Piot, Daniel Guo, Daniele Calandriello, Michal
[Valko, and Rémi Munos. 2023. A general the-](https://arxiv.org/abs/2310.12036)
[oretical paradigm to understand learning from](https://arxiv.org/abs/2310.12036)
[human preferences. Preprint, arXiv:2310.12036.](https://arxiv.org/abs/2310.12036)
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua
[Bengio. 2015. Neural Machine Translation by](http://arxiv.org/abs/1409.0473)
[Jointly Learning to Align and Translate. In 3rd](http://arxiv.org/abs/1409.0473)
_International Conference on Learning Represen-_
_tations, ICLR 2015, San Diego, USA._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christo[pher Hesse, and John Schulman. 2021. Training](https://arxiv.org/abs/2110.14168)
[verifiers to solve math word problems. CoRR,](https://arxiv.org/abs/2110.14168)
abs/2110.14168.
Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff,
Dan Jurafsky, and Douwe Kiela. 2024. [Kto:](https://arxiv.org/abs/2402.01306)
[Model alignment as prospect theoretic optimiza-](https://arxiv.org/abs/2402.01306)
[tion. Preprint, arXiv:2402.01306.](https://arxiv.org/abs/2402.01306)
Dan Hendrycks, Collin Burns, Saurav Kadavath,
Akul Arora, Steven Basart, Eric Tang, Dawn
[Song, and Jacob Steinhardt. 2021. Measuring](https://arxiv.org/abs/2103.03874)
[mathematical problem solving with the MATH](https://arxiv.org/abs/2103.03874)
[dataset. CoRR, abs/2103.03874.](https://arxiv.org/abs/2103.03874)
Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan
Allen-Zhu, Yuanzhi Li, Shean Wang, and Weizhu
[Chen. 2021. Lora: Low-rank adaptation of large](https://arxiv.org/abs/2106.09685)
[language models. CoRR, abs/2106.09685.](https://arxiv.org/abs/2106.09685)
Yebowen Hu, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Hassan Foroosh, and Fei Liu. 2023.
[DecipherPref: Analyzing influential factors in](https://doi.org/10.18653/v1/2023.emnlp-main.519)
[human preference judgments via GPT-4. In Pro-](https://doi.org/10.18653/v1/2023.emnlp-main.519)
_ceedings of the 2023 Conference on Empirical_
_Methods in Natural Language Processing, pages_
8344–8357, Singapore. Association for Computational Linguistics.
[Jie Huang and Kevin Chen-Chuan Chang. 2023. To-](https://doi.org/10.18653/v1/2023.findings-acl.67)
[wards reasoning in large language models: A sur-](https://doi.org/10.18653/v1/2023.findings-acl.67)
[vey. In Findings of the Association for Computa-](https://doi.org/10.18653/v1/2023.findings-acl.67)
_tional Linguistics: ACL 2023, pages 1049–1065,_
Toronto, Canada. Association for Computational
Linguistics.
Marek Kadlčík, Michal Štefánik, Ondrej Sotolar,
[and Vlastimil Martinek. 2023. Calc-X and Cal-](https://doi.org/10.18653/v1/2023.emnlp-main.742)
[cformers: Empowering Arithmetical Chain-of-](https://doi.org/10.18653/v1/2023.emnlp-main.742)
[Thought through Interaction with Symbolic Sys-](https://doi.org/10.18653/v1/2023.emnlp-main.742)
[tems. In Proceedings of the 2023 Conference on](https://doi.org/10.18653/v1/2023.emnlp-main.742)
_Empirical Methods in Natural Language Process-_
_ing, pages 12101–12108, Singapore. ACL._
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini,
Nate Kushman, and Hannaneh Hajishirzi. 2016.
[MAWPS: A math word problem repository. In](https://doi.org/10.18653/v1/N16-1136)
_Proceedings of the 2016 Conference of the North_
_American Chapter of the Association for Com-_
_putational Linguistics: Human Language Tech-_
_nologies, pages 1152–1157, San Diego, California._
Association for Computational Linguistics.
Hunter Lightman, Vineet Kosaraju, Yura Burda,
Harri Edwards, Bowen Baker, Teddy Lee, Jan
Leike, John Schulman, Ilya Sutskever, and Karl
[Cobbe. 2023. Let’s verify step by step. arXiv](https://arxiv.org/abs/2305.20050)
_preprint arXiv:2305.20050._
Wang Ling, Dani Yogatama, Chris Dyer, and Phil
[Blunsom. 2017. Program induction by rationale](https://arxiv.org/abs/1705.04146)
[generation: Learning to solve and explain alge-](https://arxiv.org/abs/1705.04146)
[braic word problems. CoRR, abs/1705.04146.](https://arxiv.org/abs/1705.04146)
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao,
Jianguang Lou, Chongyang Tao, Xiubo Geng,
Qingwei Lin, Shifeng Chen, and Dongmei Zhang.
2023. [Wizardmath: Empowering mathemati-](https://arxiv.org/abs/2308.09583)
[cal reasoning for large language models via rein-](https://arxiv.org/abs/2308.09583)
[forced evol-instruct. Preprint, arXiv:2308.09583.](https://arxiv.org/abs/2308.09583)
Shen-yun Miao, Chao-Chun Liang, and Keh-Yih
[Su. 2020. A diverse corpus for evaluating and](https://doi.org/10.18653/v1/2020.acl-main.92)
[developing English math word problem solvers.](https://doi.org/10.18653/v1/2020.acl-main.92)
In Proceedings of the 58th Annual Meeting of the
_Association for Computational Linguistics, pages_
975–984, Online. Association for Computational
Linguistics.
Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory F. Diamos, Erich Elsen, David
García, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao
Wu. 2017. [Mixed precision training.](https://arxiv.org/abs/1710.03740) _CoRR,_
abs/1710.03740.
-----
Aaron Parisi, Yao Zhao, and Noah Fiedel.
[2022. Talm: Tool augmented language models.](https://arxiv.org/abs/2205.12255)
_Preprint, arXiv:2205.12255._
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
[2021. Are NLP models really able to solve simple](https://doi.org/10.18653/v1/2021.naacl-main.168)
[math word problems? In Proceedings of the 2021](https://doi.org/10.18653/v1/2021.naacl-main.168)
_Conference of the North American Chapter of_
_the Association for Computational Linguistics:_
_Human Language Technologies, pages 2080–2094,_
Online. Association for Computational Linguistics.
Rafael Rafailov, Archit Sharma, Eric Mitchell,
Stefano Ermon, Christopher D. Manning, and
[Chelsea Finn. 2023. Direct preference optimiza-](https://arxiv.org/abs/2305.18290)
[tion: Your language model is secretly a reward](https://arxiv.org/abs/2305.18290)
[model. Preprint, arXiv:2305.18290.](https://arxiv.org/abs/2305.18290)
John Schulman, Filip Wolski, Prafulla Dhariwal,
[Alec Radford, and Oleg Klimov. 2017. Proxi-](https://arxiv.org/abs/1707.06347)
[mal policy optimization algorithms. Preprint,](https://arxiv.org/abs/1707.06347)
arXiv:1707.06347.
[Noam Shazeer and Mitchell Stern. 2018. Adafactor:](https://arxiv.org/abs/1804.04235)
[Adaptive learning rates with sublinear memory](https://arxiv.org/abs/1804.04235)
[cost. CoRR, abs/1804.04235.](https://arxiv.org/abs/1804.04235)
Michal Štefánik, Marek Kadlcik, and Petr Sojka.
[2023. Soft alignment objectives for robust adap-](https://doi.org/10.18653/v1/2023.acl-long.492)
[tation of language generation. In Proceedings](https://doi.org/10.18653/v1/2023.acl-long.492)
_of the 61st Annual Meeting of the Association_
_for Computational Linguistics (Volume 1: Long_
_Papers), pages 8837–8853, Toronto, Canada. As-_
sociation for Computational Linguistics.
Hugo Touvron, Louis Martin, Kevin R. Stone, Peter
Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Daniel M. Bikel, Lukas
Blecher, Cristian Cantón Ferrer, Moya Chen,
Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony S. Hartshorn, Saghar Hosseini, Rui Hou,
Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel M. Kloumann, A. V. Korenev, Punit Singh Koura, Marie-Anne Lachaux,
Thibaut Lavril, Jenya Lee, Diana Liskovich,
Yinghai Lu, Yuning Mao, Xavier Martinet, Todor
Mihaylov, Pushkar Mishra, Igor Molybog, Yixin
Nie, Andrew Poulton, Jeremy Reizenstein, Rashi
Rungta, Kalyan Saladi, Alan Schelten, Ruan
Silva, Eric Michael Smith, R. Subramanian, Xia
Tan, Binh Tang, Ross Taylor, Adina Williams,
Jian Xiang Kuan, Puxin Xu, Zhengxu Yan,
Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie
Kambadur, Sharan Narang, Aurelien Rodriguez,
Robert Stojnic, Sergey Edunov, and Thomas
[Scialom. 2023. Llama 2: Open foundation and](https://arxiv.org/abs/2307.09288)
[fine-tuned chat models. ArXiv, abs/2307.09288.](https://arxiv.org/abs/2307.09288)
Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, and Irina Hig[gins. 2022. Solving math word problems with](https://arxiv.org/abs/2211.14275)
[process- and outcome-based feedback. Preprint,](https://arxiv.org/abs/2211.14275)
arXiv:2211.14275.
[Shibo Wang and Pankaj Kanwar. 2023. Bfloat16:](https://cloud.google.com/blog/products/ai-machine-learning/bfloat16-the-secret-to-high-performance-on-cloud-tpus)
[The secret to high performance on cloud tpus.](https://cloud.google.com/blog/products/ai-machine-learning/bfloat16-the-secret-to-high-performance-on-cloud-tpus)
Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi,
Xingshan Zeng, Wenyong Huang, Lifeng Shang,
[Xin Jiang, and Qun Liu. 2023. Aligning large](https://arxiv.org/abs/2307.12966)
[language models with human: A survey. arXiv](https://arxiv.org/abs/2307.12966)
_preprint arXiv:2307.12966._
Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah
[Goodman. 2022. Star: Bootstrapping reasoning](https://proceedings.neurips.cc/paper_files/paper/2022/file/639a9a172c044fbb64175b5fad42e9a5-Paper-Conference.pdf)
[with reasoning. In Advances in Neural Infor-](https://proceedings.neurips.cc/paper_files/paper/2022/file/639a9a172c044fbb64175b5fad42e9a5-Paper-Conference.pdf)
_mation Processing Systems, volume 35, pages_
15476–15488. Curran Associates, Inc.
Wei Zhao, Mingyue Shang, Yang Liu, Liang Wang,
[and Jingming Liu. 2020. Ape210k: A large-scale](https://arxiv.org/abs/2009.11506)
[and template-rich dataset of math word problems.](https://arxiv.org/abs/2009.11506)
_CoRR, abs/2009.11506._
**A** **Training Details**
In every configuration of both preference and
supervised training, the model is trained with
Adafactor (Shazeer and Stern, 2018) optimizer
with an effective batch size of 32, a learning rate
of $2 \cdot 10^{-5}$ with 1,000 warmup steps, and a linear
decay to 0 in 1 million steps. The models were
trained in bfloat16 (Wang and Kanwar, 2023)
precision with mixed precision training (Micikevicius et al., 2017). The training terminates
after convergence on the in-domain dataset
(Ape210K), and then the best checkpoint from
the training is selected according to in-domain
validations.
Each of our experiments can be reproduced
on a single Nvidia A100 graphics card with
32 GB of memory. Note that especially the online
self-training experiments can take up to 31 days
to converge.
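A configuration sketch matching these training details might look as follows; it relies on the Adafactor implementation and the linear warmup/decay schedule from the transformers library, and any argument not stated in the text (e.g., the gradient-accumulation layout) is an assumption.

```python
# Sketch: optimizer and schedule mirroring the reported training details.
import torch
from transformers import Adafactor, get_linear_schedule_with_warmup

def make_optimizer(model: torch.nn.Module):
    optimizer = Adafactor(
        model.parameters(),
        lr=2e-5,                # learning rate from the paper
        relative_step=False,    # use the fixed lr above instead of Adafactor's internal schedule
        scale_parameter=False,
        warmup_init=False,
    )
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=1_000,        # 1,000 warmup steps
        num_training_steps=1_000_000,  # linear decay to 0 over 1M steps
    )
    return optimizer, scheduler

# bfloat16 mixed precision would be handled in the training loop (e.g., bf16=True in a Trainer),
# and the effective batch size of 32 via per-device batch size x gradient accumulation (assumed split).
```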
**A.1** **Online self-training**
To create new data in online self-training, we
sample a random problem from Ape210K and
generate predictions with the current model.
Next, we label each solution as correct if its
result matches the one in the data. The online
self-training process is illustrated in Figure 1.
In this experiment, we again compare supervised training and preference optimization. In
all variants, we generate 16 solutions per problem with top-k=50 sampling using the latest
model, but the subsequent data processing is
method-specific.
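Putting the pieces of this section together, a skeleton of the online loop could be sketched as below; `result_matches` and `train_step` are hypothetical helpers standing in for the correctness check and the method-specific update, and the generation parameters mirror the ones stated above.

```python
# Sketch: online self-training loop — generate with the current model, label, and train on fresh data.
import random

def online_self_training(model, tokenizer, problems, result_matches, train_step, num_iterations=10_000):
    for _ in range(num_iterations):
        problem = random.choice(problems)          # sample a random Ape210K problem
        inputs = tokenizer(problem["question"], return_tensors="pt")
        outputs = model.generate(
            **inputs,
            do_sample=True,
            top_k=50,                 # top-k = 50 sampling, as described above
            num_return_sequences=16,  # 16 solutions per problem
            max_new_tokens=512,
        )
        solutions = tokenizer.batch_decode(outputs, skip_special_tokens=True)
        # Label each solution as correct if its final result matches the annotated one.
        correct = [s for s in solutions if result_matches(s, problem["result"])]
        incorrect = [s for s in solutions if not result_matches(s, problem["result"])]
        # Method-specific: build SFT pairs or preference pairs, push to the buffer, take a training step.
        train_step(model, problem["question"], correct, incorrect)
```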
-----
gets sampled, it is removed from the buffer,
and new data are generated with the current
model to fill the empty slots.
**B** **Output analyses**
Aiming to better understand the difference between self-training with preference optimization methods and supervised training, we manually analyze a set of randomly chosen rationales generated for prompts of the GSM8K
test set. We collect the rationales from (i) the
original checkpoint, (ii) the checkpoint trained
in online self-training and supervised method
(denoted SFT), and (iii) the checkpoint trained
on online self-training with the best-performing
method (KTO). Due to the time complexity
of evaluating long chain-of-thought output sequences, we analyze 20 predictions marked as
correct for each checkpoint.
Within the analysis, we encounter five types of
dominant flaws that the models' outputs exhibit,
even when they are correct:
1. Inconsistency: Within the rationale, the
model generates a new reasoning step
which is not logically consistent with previous ones.
2. Missing association: Model's rationale
contains steps that are difficult to assess
for consistency, as they lack the associations of units (e.g., of size, distance, or
volume) or subjects from the input prompt or
intermediate computations.
3. Missing rationale: Model only generates
the result without any rationale associated
with it.
4. Missing rationale part: Model’s rationale is missing a specific segment, making
it impossible to fully check the model’s
computation process.
5. Not understandable: Model's rationale
contains text that is incomprehensible to
the annotator and thus impossible to judge
for logical correctness.
The results of this analysis are summarized
in Table 4. A set of predictions for identical
prompts and responses of SFT and KTO checkpoints can also be found in Appendix B.1.
| Method | Training steps |
| --- | --- |
| SFT plain | 16,000 |
| SFT plain + LoRA | 98,000 |
| SFT balanced | 14,000 |
| SFT with negatives | 20,000 |
| DPO β = 0.99 | 1,800 |
| DPO β = 0.9 | 1,800 |
| DPO β = 0.9 + LoRA | 2,600 |
| KTO β = 0.3 | 3,800 |
| KTO β = 0.1 | 4,800 |
| KTO β = 0.1 + LoRA | 16,400 |
| IPO τ = 0.9 | 1,200 |
| IPO τ = 0.99 | 1,200 |
| IPO τ = 0.99 + LoRA | 1,600 |
Table 3: Number of steps that different methods
take until convergence in offline self-training shows
that preference optimization methods converge 5–
20 times faster than supervised training.
**Supervised training:** After generating the
solutions, we discard the incorrect ones. The
correct solutions are oversampled to generate
32 training examples. Each solution is sampled
at most 4 times, and all solutions are used
almost the same number of times (maximal
difference of one).
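A sketch of this oversampling rule, assuming the per-solution cap and balance constraints described above, could look like:

```python
# Sketch: oversample correct solutions to 32 SFT examples, at most 4 uses per solution, balanced usage.
from itertools import cycle

def oversample_correct(correct_solutions, target=32, max_uses=4):
    budget = min(target, max_uses * len(correct_solutions))  # cannot exceed the per-solution cap
    samples = []
    for solution in cycle(correct_solutions):   # round-robin keeps usage counts within one of each other
        if len(samples) >= budget:
            break
        samples.append(solution)
    return samples

# e.g., 5 correct solutions -> 20 examples (4 each); 10 correct solutions -> 32 examples (4 or 3 each).
```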
**Preference Optimization:** After the solutions are generated, we create all possible pairs
of solutions where one solution has a correct
result and the other one does not. We then
sample with repetition from the pairs, such
that:
1. every correct solution is used at most 4
times,
2. the number of preference pairs per problem is 32 if possible without violating
condition 1,
3. all correct solutions are used almost the
same number of times,
4. all incorrect solutions are used almost the
same number of times.
Almost the same number of times means a
maximal difference of one.
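These constraints amount to a capped, balanced sampling over the combinations of correct and incorrect solutions; one way to sketch it (not the authors' code, and deterministic rather than randomized) is:

```python
# Sketch: build preference pairs (correct, incorrect) under the usage-balance constraints above.
from collections import Counter

def sample_preference_pairs(correct, incorrect, target=32, max_uses=4):
    pairs, used_ok, used_nok = [], Counter(), Counter()
    budget = min(target, max_uses * len(correct))            # constraint 1 caps the total
    while len(pairs) < budget:
        # Pick the least-used correct and incorrect solutions to keep usage counts balanced.
        ok = min(correct, key=lambda s: used_ok[s])
        nok = min(incorrect, key=lambda s: used_nok[s])
        used_ok[ok] += 1
        used_nok[nok] += 1
        pairs.append((ok, nok))
    return pairs
```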
In both supervised and preference training,
the training instances are put into a buffer
with 8192 slots, from which they are sampled
randomly for training. When a batch of data
-----
| Flaw | Original | SFT | KTO |
| --- | --- | --- | --- |
| Inconsistency | 20% | 5% | 30% |
| Missing association | 0% | 70% | 0% |
| Missing rationale | 0% | 30% | 0% |
| Missing rationale part | 5% | 100% | 15% |
| Not understandable | 30% | 0% | 30% |
Table 4: Output analysis: ratio of model predictions
containing one of our identified flaws, evaluated on
a sample of models’ correct predictions.
**B.1** **Examples of predictions**
For an illustration of differences between the
supervised and preference optimization checkpoints, below we include a set of predictions
of SFT and KTO checkpoints for randomly
selected prompts from the GSM8K collection. These
examples can also be found by associated IDs
in the original dataset repository[2]. For better
readability, we removed the navigation tokens
used by the model for interaction with the calculator tool during the generation.
**id: gsm8k__aBNdImSK9ddCwuYb**
**Question: Tobias bought a big pizza with**
60 pieces. He ate 2/5 of the pieces on the first
day, 10 pieces on the second day, and 7/13 of
the remaining pieces on the third day. How
many pizza pieces has he eaten so far?
**Correct answer: 48**
**Prediction with SFT:**
60 * 2/5 = 28 ✗
**Prediction with KTO:**
On the first day, Tobias ate 60 * 2/5
= 60*2/5 24 24 pieces. The number of pieces that remained after the
second day is 60 - 24 = 60-24 36 36
pieces. On the third day, he ate 36
- 7/13 = 36*7/13 252/13 = around
19.384615 25 pieces. The total number of pieces he has eaten so far is
10 + 252/13 = 10+252/13 382/13 =
around 29.384615 38 pieces. Tobias
ate a total of 24 + 10 + (382/13)
= 24+10+352/13 794/13 = around
61.076923 69 pieces. 794/13 = around
69.076923 ✗
[2https://huggingface.co/datasets/MU-NLPC/Calc-gsm8k](https://huggingface.co/datasets/MU-NLPC/Calc-gsm8k)
**id: gsm8k__qIygQI2WNfRPWoYj**
**Question: Jennie is helping at her mom’s**
office. She has a pile of 60 letters needing
stamps, and a pile of letters already stamped.
She puts stamps on one-third of the letters
needing stamps. If there are now 30 letters in
the pile of already-stamped letters, how many
were in that pile when Jennie began?
**Correct answer: 10**
**Prediction with SFT:**
60 / 3 = 40 ✗
**Prediction with KTO:**
Jennie stamps 60 / 3 = 60/3 20 20
letters. Then the number of letters
still needing stamps is 60 - 20 = 60-20
40 40. The number of letters already
stamped is 30 - 20 = 30-20 10 10.
There were 40 + 10 = 40+10 50 50
letters in the pile when Jennie began.
50 ✗
**id: gsm8k__0oOjz5Ub66DF4inZ**
**Question: There are 6 trees in Chris’s yard.**
Ferdinand has half the number of trees that
Chris has. Harry has 5 more than twice the
number of trees that Ferdinand has. How many
more trees are in Harry’s yard than Ferdinand’s
yard?
**Correct answer: 8**
**Prediction with SFT:**
9 ✗
**Prediction with KTO:**
Half the number of trees that Chris
has is 6 trees / 2 = 6/2 3 3 trees.
Twice the number of trees that Ferdinand has is 3 trees * 2 = 3*2 6 6 trees.
Harry has 5 trees + 6 trees = 6+5
11 11 trees. There are 11 trees - 6
trees = 11-6 5 5 more trees in Harry’s
yard than in Ferdinand’s yard. 5 5
5 more trees in Harry’s yard than in
Ferdinand’s yard. 5 5 5 more trees
in Harry’s yard than in Ferdinand’s
yard. 11 11 5 more trees than in Ferdinand’s yard. 11-6 5 5 more trees in
Harry’s yard. 5 ✗
-----
| [
"Marek, Kadlčík",
"Michal, Štefánik"
] | 2024-07-11T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2407.08400 | https://arxiv.org/abs/2407.08400 | https://www.semanticscholar.org/paper/45ebe6cba6fdd7cfb61f2cc6df3178f65d9146ad |
Skywork-Math: Data Scaling Laws for Mathematical Reasoning in Large Language Models -- The Story Goes On | In this paper, we investigate the underlying factors that potentially enhance the mathematical reasoning capabilities of large language models (LLMs). We argue that the data scaling law for math reasoning capabilities in modern LLMs is far from being saturated, highlighting how the model's quality improves with increases in data quantity. To support this claim, we introduce the Skywork-Math model series, supervised fine-tuned (SFT) on common 7B LLMs using our proposed 2.5M-instance Skywork-MathQA dataset. Skywork-Math 7B has achieved impressive accuracies of 51.2% on the competition-level MATH benchmark and 83.9% on the GSM8K benchmark using only SFT data, outperforming an early version of GPT-4 on MATH. The superior performance of Skywork-Math models contributes to our novel two-stage data synthesis and model SFT pipelines, which include three different augmentation methods and a diverse seed problem set, ensuring both the quantity and quality of Skywork-MathQA dataset across varying difficulty levels. Most importantly, we provide several practical takeaways to enhance math reasoning abilities in LLMs for both research and industry applications. | null | #### Technical Report
## Skywork-Math: Data Scaling Laws for Mathematical Reasoning in Large Language Models — The Story Goes On
**Liang Zeng, Liangjun Zhong, Liang Zhao, Tianwen Wei,**
Liu Yang, Jujie He, Cheng Cheng, Rui Hu, Yang Liu, Shuicheng Yan, Han Fang, Yahui Zhou
{forename}.{surname}@kunlun-inc.com Skywork AI, Kunlun Inc.
### Abstract
In this paper, we investigate the underlying factors that potentially enhance the mathematical
reasoning capabilities of large language models (LLMs). We argue that the data scaling law
for math reasoning capabilities in modern LLMs is far from being saturated, highlighting
how the model’s quality improves with increases in data quantity. To support this claim, we
introduce the Skywork-Math model series, supervised fine-tuned (SFT) on common 7B LLMs
using our proposed 2.5M-instance Skywork-MathQA dataset. Skywork-Math 7B has achieved
impressive accuracies of 51.2% on the competition-level MATH benchmark and 83.9% on the
GSM8K benchmark using only SFT data, outperforming an early version of GPT-4 on MATH.
The superior performance of Skywork-Math models contributes to our novel two-stage data
synthesis and model SFT pipelines, which include three different augmentation methods and a
diverse seed problem set, ensuring both the quantity and quality of Skywork-MathQA dataset
across varying difficulty levels. Most importantly, we provide several practical takeaways to
enhance math reasoning abilities in LLMs for both research and industry applications.
[Figure 1 scatter plot: MATH Top@1 accuracy (y-axis, 10–50%) versus GSM8K accuracy (x-axis, 50–90%) for Skywork-Math-Mistral-7B, Skywork-Math-LLaMA2-7B, Skywork-Math-DeepSeekMath-7B, GPT-4, GPT-3.5-Turbo, and open-source baselines such as MetaMath, WizardMath, MAmmoTH, Xwin-Math, InternLM2-Math, DeepSeekMath-Instruct, and LEMA.]
Figure 1 | Top1 accuracy on GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021)
using only SFT techniques, without using external toolkits and voting techniques. Following
MetaMath (Yu et al., 2024), we employ a zero-shot chain-of-thought evaluation framework.
Skywork-Math models achieve state-of-the-art accuracy among models smaller than 10B parameters using only synthetic SFT data and surpass an early version of GPT-4 on MATH.
-----
##### 1. Introduction
_More is different._
—-Philip W. Anderson, 1972
Reasoning ability is a hallmark of human intelligence (Gendron et al., 2024; Huang and
Chang, 2022; Wei et al., 2022b). Although Large Language Models (LLMs) have recently demonstrated significant capabilities in various tasks such as conversation (Achiam et al., 2023; Anthropic, 2024; Peng et al., 2023) and summarization (Almazrouei et al., 2023; Scao et al., 2022;
Wei et al., 2023b; Yang et al., 2023), they often struggle with complex reasoning tasks (Gendron
et al., 2024; Lu et al., 2023; Wu et al., 2023). One particularly challenging area is mathematical
reasoning (Arora et al., 2023; Cobbe et al., 2021; He et al., 2024; Hendrycks et al., 2021; Zhong
et al., 2023), which requires the ability to solve mathematical problems and derive logical conclusions in a step by step manner (Saxton et al., 2019; Shao et al., 2024; Toshniwal et al., 2024;
Wei et al., 2022b; Yu et al., 2024).
Two prevailing beliefs guide researchers and practitioners in enhancing mathematical reasoning abilities of LLMs. The first belief posits that complex reasoning abilities, especially
mathematical reasoning, are emergent abilities that exist in large language models but not in
small models (Wei et al., 2022a,b). Typically, models with more than 30 billion parameters
exhibit the strong mathematical reasoning ability (Brown et al., 2020). The second belief is the
seminal "superficial alignment" hypothesis (Zhou et al., 2023), which asserts that "A model’s
_knowledge and capabilities are learnt almost entirely during pre-training, while alignment teaches it_
_which sub-distribution of formats should be used when interacting with users.". According to this_
hypothesis, the alignment process, primarily through supervised fine-tuning (SFT), does not
inject new knowledge or improve inherent abilities but rather adjusts the output response format.
This implies that the strong mathematical reasoning ability may not be significantly improved
by a large amount of synthetic SFT data.
In this paper, we re-examine these two common beliefs mentioned above regarding mathematical reasoning abilities of LLMs. For the first belief, we introduce the Skywork-Math model
series, which are supervised fine-tuned (SFT) on common 7B pre-trained LLM models without
employing other complex alignment techniques such as RLHF (Bai et al., 2022; Casper et al.,
2023) and DPO (Rafailov et al., 2024). Skywork-Math 7B models have achieved impressive accuracies of 51.2% on the competition-level MATH (Hendrycks et al., 2021) benchmark and 83.9%
on the GSM8K (Cobbe et al., 2021) benchmark, notably outperforming an early version of GPT-4
on MATH. Our empirical findings, consistent with the conclusions in Li et al. (2024), suggest
that strong mathematical reasoning ability can indeed exist in common 7B language models.
Moreover, scaling up synthetic SFT data can further enhance the mathematical reasoning ability
of Skywork-Math 7B models.
For the second belief, we propose Skywork-MathQA high-quality SFT dataset containing
2.5 million instances, which is much larger than any open-sourced dataset of its kind to date, such
as MetaMathQA (Yu et al., 2024), which contains 395K samples. We empirically observe that the
scaling law curve on the SFT alignment for mathematical reasoning in modern LLMs is far from
being saturated (ref. Figure 5). We have carefully scaled the Skywork-MathQA SFT dataset with
diverse and high-quality samples specifically within the mathematical domain to enhance the
model’s capability in understanding and solving mathematical problems.
Due to the scarcity of high-quality and challenging mathematical data, various pipelines
-----
and prompts have been employed to generate synthetic mathematical data (Li et al., 2024; Shao
et al., 2024; Toshniwal et al., 2024; Wang et al., 2022; Wei et al., 2022b; Yu et al., 2024). To address
this deficiency, we employ GPT-4 to generate a substantial amount of synthetic data through
a novel two-stage data synthesis pipeline, in conjunction with the corresponding model SFT
process. In stage 1, our objective is to obtain normal synthetic problems to enhance the models’
general comprehension of mathematical problems. To maintain the diversity in data selection
process, we utilize the core-set approach (Sener and Savarese, 2017) on enlarged seed problems.
However, as the data volume increases, we empirically observe that the relationship between
performance and data quantity begins to plateau. Accordingly, in stage 2, we diversify the
dataset further by introducing a proportion of augmented hard problems (ref. Figure 3 for
illustrative examples), thereby exposing the model to more challenging mathematical questions.
Without continual pre-training on a large-scale math corpus (Azerbayev et al., 2023; Shao et al.,
2024), Skywork-Math models achieve impressive performance with just supervised fine-tuning
on common pre-trained LLMs containing only 7B parameters.
Most importantly, we provide valuable insights and practical takeaways to enhance the
mathematical reasoning ability in LLMs, benefiting both research and industry communities:
Highlighted Takeaways
- The potential for math reasoning capabilities in modern LLMs is far from exhausted.
The quality of LLMs can significantly improve with increases in data quantity (ref. Figure 5). Skywork-Math 7B models already demonstrate strong mathematical reasoning
abilities by SFTing on common 7B pre-trained LLM models.
- The learning process for accessing the math reasoning ability involves multiple stages.
Training LLM models in a meaningful order, from the easy problems to the hard ones,
can provide performance improvements.
- When scaling the synthetic SFT dataset, increasing the diversity of seed problems and
augmentation methods can improve the math reasoning performance of LLMs.
- Selecting influential data with high-quality from a large dataset is non-trivial (Engstrom
et al., 2024). Our empirical results indicate that some straightforward methods to select
the so-called "high-quality" data may not increase (and can even hurt) LLMs’ performance compared to randomly selecting data. The selection process involves multiple
constraints, and the "high-quality" data could significantly decrease the difficulty level
of problems, thus negatively impacting the performance of LLMs.
- The LLM models have strong knowledge transfer capabilities for mathematical reasoning across bilingual benchmarks (i.e., English and Chinese). We hypothesize that this
can be attributed to the inherent nature of symbols and numbers in math problems,
which retain their intrinsic meaning regardless of the language used.
- Although Skywork-Math 7B models have achieved considerable improvement in
robustness tests compared to other open-source LLM models, they remain sensitive to
the distractors in math word problems compared with proprietary GPT-4 models.
- Sparse MOE models cannot clearly exceed the performance upper bound of their dense
counterparts through SFT alignment in the context of math reasoning.
- Two subtle but crucial practical implementation techniques—preventing data leakage
and considering the influence of model maximum length—significantly impact the
final performance of LLM models.
-----
##### 2. Related Work
**Alignment in LLMs.** Large Language Models (LLMs) have recently transformed Natural
Language Processing (NLP) (Achiam et al., 2023; Anil et al., 2023; Anthropic, 2024; Touvron
et al., 2023), excelling in tasks such as automated summarization (Scao et al., 2022) and machine
translation (Almazrouei et al., 2023). Alignment in LLMs refers to the process of ensuring
that the model’s outputs adhere to user preferences (Shen et al., 2023). Various techniques
contribute to achieving alignment, including supervised fine-tuning (SFT) (Taori et al., 2023),
reinforcement learning from human feedback (RLHF) (Bai et al., 2022), and direct policy optimization (DPO) (Rafailov et al., 2024). Among these techniques, SFT is typically an indispensable
method for aligning LLMs and has achieved highly competitive performance across various
tasks (Chiang et al., 2023), particularly in mathematical reasoning (Li et al., 2024). SFT involves
fine-tuning a pre-trained large model using annotated data, making the model’s performance
more accurate for downstream tasks. Our work aims to deeply explore the performance boundaries of common 7B pre-trained LLMs using only the SFT alignment technique.
**Quantity and Quality of SFT Data.** Data is the fuel that powers the performance of LLMs.
This ongoing discussion about whether the quantity or quality of SFT data is more important
highlights their significance in enhancing the SFT performance of LLMs. (1) Quantity. Many
recent research demonstrates the scaling properties in LLM fine-tuning (Kaplan et al., 2020; Li
et al., 2024). The size of the fine-tuning dataset is a crucial factor affecting the LLMs’ performance.
However, the optimal fine-tuning data size is highly task-dependent (Zhang et al., 2024). (2)
**Quality. Several studies (Cao et al., 2023; Gunasekar et al., 2023; Li et al., 2023; Zhou et al.,**
2023) argue that the quality of fine-tuning data is equally critical. The renowned "less is more"
work (Zhou et al., 2023) suggests that substantial knowledge acquisition occurs during the pre-training stage, minimizing the need for extensive fine-tuning data. Additionally, the Instruction-Following Difficulty (IFD) metric introduced by Li et al. (2023) and the QaDS strategy proposed
in (Ni et al., 2024) aim to select diverse and high-quality instruction-following data to enhance
LLM fine-tuning efficiency. Collecting a large amount of high-quality mathematical reasoning
data is often time-consuming and labor-intensive. In this work, we generate a substantial
amount of SFT synthetic data to investigate how the quantity of data impacts the performance
of LLM models in mathematical reasoning.
**Mathematical Reasoning in LLMs.** LLMs have recently achieved significant progress in the
area of mathematical reasoning (Shao et al., 2024). Initial benchmarks, such as simple math
problems (Lan et al., 2022; Saxton et al., 2019), were readily solved by recent LLM models. This
success prompts the introduction of more challenging benchmarks, such as GSM8K (Cobbe
et al., 2021) and MATH (Hendrycks et al., 2021). Many recent works have proposed continual
pre-training on massive math corpora to improve their math reasoning capabilities (Azerbayev
et al., 2023; Jiang et al., 2023; Paster et al., 2024; Shao et al., 2024). Furthermore, significant
progress has been made in alignment for solving mathematical problems (Li et al., 2024; Luo
et al., 2023; Ni et al., 2024; Shao et al., 2024; Xu et al., 2024; Yu et al., 2024; Yue et al., 2023).
These studies focus on generating high-quality synthetic data or collecting human-labeled data
for model fine-tuning and alignment in the domain of math problem-solving. Additionally,
reasoning frameworks aim at improving math capacity, such as the chain-of-thought (COT)
prompting technique (Wang et al., 2022; Wei et al., 2022b), which enables LLMs to break down
the reasoning process into manageable steps, resulting in more accurate outputs. Moreover,
some complex math problems need the ability to conduct accurate arithmetic operations, a
-----
[Figure 2 diagram: (a) the two-stage data synthesis pipeline — stage 1 applies diversity selection to seed problems and synthesizes normal synthetic problems; stage 2 synthesizes hard synthetic problems from hard seed problems; (b) the model SFT pipeline — the base model is fine-tuned on normal synthetic problems to obtain an intermediate model, which is further fine-tuned on hard synthetic problems to obtain the Skywork-Math model.]
Figure 2 | Overview of our proposed two-stage method. (a) The data synthesis pipeline of the
Skywork-MathQA dataset. (b) The model SFT pipeline of the Skywork-Math model series.
capability that LLMs often lack (Yuan et al., 2023). For tool-integrated math problem-solving,
program-of-thoughts (Chen et al., 2022; Shao et al., 2024; Toshniwal et al., 2024) prompts LLMs to
produce answers in the code format, which are then executed by a code interpreter. Preliminary
work indicates that SFT can improve the performance of open-source LLMs on mathematical
reasoning tasks by fine-tuning them on synthetic data (Li et al., 2024; Yu et al., 2024). Building
on this foundation, our work aims to thoroughly investigate the performance limits of common
7B pre-trained LLMs using only SFT synthetic data. We seek to determine the extent to which
data quantity impacts LLM quality and to understand the mechanisms behind this influence.
##### 3. Method
In this section, we present the detailed methodology of Skywork-Math 7B models, as illustrated
in Figure 2. Skywork-Math models aim to enhance math reasoning abilities during the model
alignment process, particularly in the SFT stage, using common and publicly available 7B pretrained models. We employ a two-stage SFT approach, in conjunction with two data synthesis
pipelines to produce high-quality data. In stage 1, we feed base pre-trained models with our
generated normal synthetic problems to produce an intermediate model. In stage 2, to mitigate
the diminishing returns in LLMs’ performance as the quantity of data increases, we generate
hard synthetic problems and develop our Skywork-Math models. To ensure the quality of
data, we primarily utilize GPT-4 [1] (Achiam et al., 2023) to generate 2.5M-instance synthetic
Skywork-MathQA dataset.
**Supervised Fine-Tuning (SFT).** SFT is an important and widely-used alignment technique in
LLMs to enhance pre-trained models for excelling at specific tasks (Shen et al., 2023). We denote
the token space of an input query and output response as X and Y, respectively. Typically,
1Without further clarification, the version of GPT-4 used in this paper is GPT-4-1106-preview.
-----
LLMs generate an output response sequence $\mathbf{y} = (y_1, y_2, \ldots, y_T)$ in response to a given prompt
query $\mathbf{x} = (x_1, x_2, \ldots, x_n)$. LLMs are auto-regressive models characterized by a conditional
probability distribution parameterized by $\theta$ as

$$\mathbf{P}_\theta(\mathbf{y} \mid \mathbf{x}) = \prod_{t=1}^{T} \mathbf{P}_\theta(y_t \mid \mathbf{x}, \mathbf{y}_{1:t-1}). \quad (1)$$

Let a mathematical reasoning SFT training dataset be $\mathcal{D} = \{(\mathbf{x}^i, \mathbf{y}^i)\}_{i=1}^{N}$, where $\mathbf{x}^i$ and $\mathbf{y}^i$ represent
the $i$-th query and response, respectively [2]. Here, $N$ is the total quantity of the SFT training
dataset. Given such a dataset $\mathcal{D}$, SFT can be performed using the following cross-entropy loss:

$$\mathcal{L}(\theta) = -\frac{1}{N} \sum_{i=1}^{N} \sum_{t=1}^{T} \log \mathbf{P}_\theta\big(y_t^i \mid \mathbf{x}^i, \mathbf{y}_{1:t-1}^i\big). \quad (2)$$
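As a minimal illustration of Eq. (2), the per-token cross-entropy with teacher forcing can be computed in PyTorch roughly as follows; the masking of query tokens and padding positions is an assumption not detailed in the text.

```python
# Sketch: token-level cross-entropy loss for SFT, masking out query/padding positions.
import torch
import torch.nn.functional as F

def sft_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """logits: [batch, seq_len, vocab]; labels: [batch, seq_len] with -100 on query/padding tokens."""
    # Shift so that the token at position t is predicted from positions < t (teacher forcing).
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = labels[:, 1:].contiguous()
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
        ignore_index=-100,  # ignore query and padding positions
    )
```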
**Seed Problems.** We adopt publicly available high-quality mathematical datasets to generate
our Skywork-MathQA dataset. To prevent data leakage in the testing phase, we only use the
training sets from data sources. The data sources are as follows:
- MATH (Hendrycks et al., 2021) contains high school-level mathematical problems, some
of which are from competitions such as the AIME and AMC. This dataset consists of 7,500
training data entries. Solving these problems requires advanced reasoning abilities and a
comprehensive mathematical knowledge base. This dataset categorizes problems into five
levels of difficulty and seven subdomains of high school mathematics.
- We also use other data sources as seed problems. These include non-proving problems
from OlympiadBench (He et al., 2024), mathematical problems from AGIEval (Zhong et al.,
2023) benchmark, and various problems in calculus, differential, statistics domains from
SciBench (Wang et al., 2024) and JEEBench (Arora et al., 2023).
Here we do not use the training set of GSM8K as the seed problems because: (1) Math word
problems represent a narrow category compared to general math problems [3], and an excessive
focus on math word problems may reduce the diversity of the synthetic SFT data. (2) We
empirically find that the math reasoning ability required to solve the easy problems in the
MATH benchmark is roughly equivalent to that needed for GSM8K.
**Synthesis Process.** We aim to answer the following question: as we gradually increase the
quantity 𝑁 of the Skywork-MathQA dataset, does the models’ math reasoning ability improve
correspondingly? For a given query/problem $\mathbf{x}^i$, particularly for challenging competition-level
math problems, manually annotating the response/answer $\mathbf{y}^i$ is time-consuming and often
infeasible for non-experts due to the required specific domain knowledge. Therefore, we utilize
the top-performing GPT-4 models to synthesize diverse, high-quality SFT data (Li et al., 2024).
The data synthesis process in the Skywork-MathQA dataset consists of two stages. In stage 1,
we generate 2.1 million normal synthetic problems. In stage 2, we further generate 0.4 million
hard synthetic problems, increasing the Skywork-MathQA dataset to a total of 2.5 million
instances. Note that all data samples in the Skywork-MathQA dataset strictly adhere to the
2In what follows, we use the terms query-response and question-answer pairs interchangeably.
3Typically, math word problems involve a mathematical exercise presented in a narrative form, requiring the
extraction of numbers from the text and performing a sequence of elementary calculations using basic arithmetic
operations (+ −×÷) to reach the final answer.
-----
Normal Problem
**Query:** What is the domain of the function $f(x) = \frac{3x+1}{x+8}$? Enter your answer using interval notation.
**Response:** The function $f(x) = \frac{3x+1}{x+8}$ is defined for all real numbers except where the
denominator is zero. Since the denominator $x + 8$ is zero when $x = -8$, we exclude this
value from the domain. Thus, the domain of $f(x)$ in interval notation is $(-\infty, -8) \cup (-8, \infty)$.
The answer is $(-\infty, -8) \cup (-8, \infty)$.
Hard Problem
**Query:** Let $f(x) : \mathbb{R} \to \mathbb{R}$ be a function such that
$$\frac{f(x) f(y) - f(xy)}{3} = x + y + 2$$
for all $x, y \in \mathbb{R}$. Find $f(x)$.
**Response:** Setting $y = 1$ gives us
$$\frac{f(x) f(1) - f(x)}{3} = x + 1 + 2,$$
which simplifies to
$$f(x)\big(f(1) - 1\big) = 3x + 9.$$
Since $f(x)$ is a function that maps $\mathbb{R}$ to $\mathbb{R}$, and since this holds for all $x \in \mathbb{R}$, $f(1) - 1$
cannot be zero because otherwise the right-hand side of the equation would always be 0,
which is not true. Thus, we can divide both sides of the equation by $f(1) - 1$ to get
$$f(x) = \frac{3x + 9}{f(1) - 1}.$$
Setting $x = 1$ in this equation gives us $f(1) = \frac{3(1) + 9}{f(1) - 1}$, which is equivalent to $f(1)\big(f(1) - 1\big) = 12$. Let's denote $f(1)$ as $a$, so we have $a(a - 1) = 12$. This is a quadratic equation
in $a$: $a^2 - a - 12 = 0$. The solutions to this quadratic equation are $a = -3$ or $a = 4$. Since
$a$ represents $f(1)$ and we have established that $f(1) - 1$ cannot be zero, we discard the
solution $a = -3$ and accept $a = 4$. Therefore, $f(1) = 4$ and the function $f(x)$ is
$$f(x) = \frac{3x + 9}{f(1) - 1} = \frac{3x + 9}{4 - 1} = \frac{3x + 9}{3} = x + 3.$$
The answer is x + 3.
Figure 3 | Two examples of query-response pairs in the Skywork-MathQA dataset. The top
figure illustrates a normal problem, and the bottom figure depicts a hard problem.
-----
same data format. We instruct the Skywork-Math models to use the prefix "\nThe answer
is " before generating answers in their responses. Figure 3 presents two examples from our
Skywork-MathQA dataset: one is a normal problem, and the other is a hard problem. In the
following sections, we will introduce the two-stage data synthesis pipeline along with its model
SFT process.
**3.1. Stage 1: Normal Synthetic Problems**
In this stage, we examine how the quality of Skywork-Math models improves as the quantity
of SFT data increases. We generate 2.1 million high-quality and diverse SFT data within math
reasoning domains by GPT-4. Our primary goal is to equip the model with a comprehensive
understanding of mathematical reasoning problems by exposing it to a diverse range of math
questions. Our empirical findings indicate that diversity is crucial for generating and scaling
SFT data (ref. Section 4.3.2). We investigate this issue from two perspectives: data augmentation
methods and diversity selection of seed problems.
**Data Augmentation Methods.** To ensure diversity in our synthetic data, we employ three
distinct methods to augment our Skywork-MathQA dataset. We notice that the differences
among these augmentation methods are subtle; however, combining them to improve
diversity indeed influences the model's performance. The three methods take distinct approaches, and by combining them we can leverage the advantages of all three in our data synthesis pipeline. Figure 4 demonstrates three prompt snippets used in
our paper to highlight the characteristics of these distinct approaches. Detailed examples of the
same query with different responses using these three methods can be found in Appendix A.
The first data augmentation method we adopt is MetaMathQA (Yu et al., 2024), which
comprises four specific approaches: three for query bootstrapping and one for response augmentation. For response augmentation, we leave the corresponding query unchanged and employ
GPT-4 to refine its response. For query bootstrapping, the rephrasing method utilizes pre-defined
prompts to generate more questions, followed by the few-shot Chain-of-Thought (COT) (Wei
et al., 2022b) prompting to generate answers. Additionally, the FOBAR (Jiang et al., 2024b)
and self-verification (Weng et al., 2022) methods deterministically convert the problem into a
backward format to mimic backward reasoning, i.e., given the result, reasoning backward to
determine the unknown variable in the question. After transforming the questions, we then
generate corresponding answers with COT techniques using GPT-4. We also strive to balance
the quantity of SFT data produced by these four augmentation approaches.
The second data augmentation method is the Evol-Instruct approach, as implemented in
WizardLM (Xu et al., 2023). Starting from the initial set of mathematical problems, Evol-Instruct
iteratively rewrites them step by step into more complex queries. We set the maximum length of
the evolutionary trajectory to five steps and employ the following five augmentation strategies (a minimal code sketch of the loop follows the list):
- Rewrite the original problem to create a completely new problem of similar length and
difficulty.
- Add constraints and requirements to the original problem.
- Increase the complexity of the original problem in both depth and breadth.
- Replace general concepts with more specific ones.
- Explicitly request additional steps in the reasoning process of the original question.
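The sketch below illustrates this iterative rewriting loop. It is an illustration of the Evol-Instruct idea rather than the WizardLM implementation; `call_gpt4` is an assumed helper that sends a prompt to the GPT-4 API and returns the completion text, and the strategy strings paraphrase the list above.

```python
import random

# Illustrative Evol-Instruct-style loop (not the WizardLM implementation).
EVOL_STRATEGIES = [
    "Rewrite the problem into a completely new problem of similar length and difficulty.",
    "Add one more constraint or requirement to the problem.",
    "Increase the complexity of the problem in both depth and breadth.",
    "Replace general concepts with more specific ones.",
    "Explicitly request additional steps in the reasoning process.",
]

MAX_EVOL_STEPS = 5  # maximum length of the evolutionary trajectory

def evolve_problem(seed_problem: str, call_gpt4) -> list[str]:
    """Iteratively rewrite a seed problem, returning the whole trajectory."""
    trajectory = [seed_problem]
    current = seed_problem
    for _ in range(MAX_EVOL_STEPS):
        strategy = random.choice(EVOL_STRATEGIES)
        prompt = (
            "I want you to act as a Prompt Rewriter.\n"
            f"{strategy}\n#Given Prompt#: {current}\n#Rewritten Prompt#:"
        )
        current = call_gpt4(prompt)       # assumed GPT-4 API wrapper
        trajectory.append(current)
    return trajectory
```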
-----
Prompts for MetaMath with Backward Reasoning
You are an AI assistant to help me solve math problem. I first give you an original question with an
unknown variable 𝑋, then I give you the answer to this original question. You help me determine
the value of 𝑋. Follow the given examples and answer the question.
**Query:** On a map, a 12-centimeter length represents 𝑋 kilometers. How many kilometers does a
17-centimeter length represent? ### If we know the answer to the above question is 102, what is
the value of unknown variable 𝑋?
**Response:** We know that on the map, a 12-centimeter length represents 𝑋 kilometers. To find
out how many kilometers a 17-centimeter length represents, we can set up a proportion using
the given information: 12 centimeters / 𝑋 kilometers = 17 centimeters / 102 kilometers. We can
cross-multiply and solve for 𝑋: 12 * 102 = 17 * 𝑋, 1224 = 17 * 𝑋. Dividing both sides by 17, we get:
_𝑋_ = 72 The value of X is 72.
**..... [Few-shot examples]**
Prompts for Evol with Evol-Instruct
I want you act as a Prompt Rewriter. Your objective is to rewrite a given prompt into a more
complex version to make those famous AI systems (e.g., ChatGPT and GPT4) a bit harder to
handle. But the rewritten prompt must be reasonable and must be understood and responded by
humans. Please add one more constraints/requirements into #Given Prompt#.
**...... [Omit some specific rules]**
#Given Prompt#: <Here is instruction.>
#Rewritten Prompt#:
Prompts for Xwin with Self-Correction
Please act as a professional math teacher. Your goal is to create high quality math word problems
to help students learn math. You will be given a math question. Please create a new question
based on the Given Question and following instructions. To achieve the goal, you have three jobs.
**...... [Omit some specific rules]**
VERIFICATION AND MODIFICATION: <solve the question step-by-step and modify it to follow
all principles>
FINAL CREATED QUESTION: <your final created question>
Figure 4 | Prompt snippets for MetaMath Yu et al. (2024), Evol Luo et al. (2023), and Xwin Li
et al. (2024) are showcased, with their distinct approaches highlighted in red. The prompts are
mainly derived from the original papers with minor modifications. For the sake of brevity, some
specific few-shot examples and rules have been omitted.
The third data augmentation method is question generation with self-correction, as practiced in
Xwin (Li et al., 2024). Specifically, we instruct GPT-4 to refine the input question and then verify
it step-by-step to assess its logical and mathematical consistency. If the question is found to be
imperfect, we instruct GPT-4 to modify it based on the verification results.
**Diversity Selection of Seed Problems.** Initially, we simply use the training dataset of MATH
along with additional mathematical data from other sources as the seed problems to generate
queries and responses. To improve the diversity of seed problems, we employ the core-set
approach (Sener and Savarese, 2017), which selects a representative subset of data that maximizes
diversity while maintaining coverage of the original dataset’s key features. As shown in
-----
Figure 2, we first perform data synthesis on the initial seed problems and then apply the core-set
approach (Sener and Savarese, 2017) to obtain seed synthetic problems. We further perform data
synthesis on these seed synthetic problems to obtain the normal synthetic problems with 2.1 million
instances. We select common 7B pre-trained LLMs as base models and fine-tune these models
on normal synthetic problems to produce the intermediate models with a general understanding
of various mathematical problems and concepts.
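One common instantiation of the core-set approach is k-center greedy selection over problem embeddings. The sketch below is illustrative only (the paper does not release its selection code) and assumes that embeddings for the candidate seed problems have already been computed with some sentence encoder.

```python
import numpy as np

# Illustrative k-center greedy selection, one common reading of the core-set
# approach (Sener and Savarese, 2017). `embeddings` is an (N, d) array of
# problem embeddings from any sentence encoder; this is not the paper's code.
def k_center_greedy(embeddings: np.ndarray, budget: int, seed: int = 0) -> list[int]:
    rng = np.random.default_rng(seed)
    n = embeddings.shape[0]
    selected = [int(rng.integers(n))]                  # start from a random problem
    # distance from every point to its nearest selected center
    min_dist = np.linalg.norm(embeddings - embeddings[selected[0]], axis=1)
    while len(selected) < budget:
        next_idx = int(np.argmax(min_dist))            # farthest point = most novel
        selected.append(next_idx)
        new_dist = np.linalg.norm(embeddings - embeddings[next_idx], axis=1)
        min_dist = np.minimum(min_dist, new_dist)
    return selected

# Example: keep the 1,000 most mutually distant seed problems out of 50,000.
# kept_indices = k_center_greedy(problem_embeddings, budget=1000)
```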
**3.2. Stage 2: Hard Synthetic Problems**
As the quantity of data increases, we empirically observe that the relationship between performance and data quantity begins to plateau (ref. Section 4.3.1). Motivated by the concept of
curriculum learning (Bengio et al., 2009; Soviany et al., 2022), we recognize that models can learn
much better when data are organized in a meaningful order rather than presented randomly,
introducing more complex concepts and problems gradually. In the domain of math problem-solving, it is natural to first learn the basic math operations and then progressively tackle more
difficult problems. Therefore, we employ this strategy to guide the SFT data synthetic process.
Stage 2 of the data synthesis pipeline is specifically designed to help models master the more challenging problems. In this stage, we utilize the challenging problems, i.e.,
those categorized as Level 4 or Level 5 in the MATH dataset (Hendrycks et al., 2021) to generate
additional 0.4 million query-response pairs. Finally, combined with 2.1M normal synthetic
problems in stage 1, we obtain the 2.5M-instance Skywork-MathQA dataset. The rationale
behind using these two stages and the experimental analysis of their impacts are discussed in
Section 4.3.1. We further fine-tune the intermediate models on these hard synthetic problems to
obtain the Skywork-Math model series, which exhibit strong mathematical reasoning abilities.
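The two-stage curriculum can be summarized by the minimal sketch below. It is illustrative rather than the released training code; the dataset file names and the `finetune` helper are placeholders.

```python
# Illustrative two-stage curriculum: fine-tune on the normal synthetic problems
# first, then continue from that intermediate checkpoint on the hard problems.
# File names and the `finetune` helper are placeholders, not released artifacts.
STAGES = [
    {"name": "stage1_normal", "data": "skywork_mathqa_normal_2.1M.jsonl"},
    {"name": "stage2_hard",   "data": "skywork_mathqa_hard_0.4M.jsonl"},
]

def run_curriculum(base_model, finetune):
    """`finetune(checkpoint, data_path)` is assumed to return the new checkpoint."""
    checkpoint = base_model
    for stage in STAGES:
        checkpoint = finetune(checkpoint, stage["data"])
    return checkpoint
```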
**Remark** It is worth noting that the accuracy of our utilized GPT-4 version on the MATH
benchmark is approximately 50%, indicating that about half of the synthetic data in the Skywork-MathQA dataset may contain minor errors in their results and intermediate reasoning processes.
However, scaling these SFT synthetic data reveals a clear positive trend in the performance
of LLMs (ref. Figure 5). An interesting experimental phenomenon is that before reaching the
upper-bound performance of the Skywork-Math 7B model series, data quantity seems to play a
more important role than data quality.
##### 4. Experiment
**4.1. Experimental Setup**
**_4.1.1. Evaluation Datasets_**
We primarily conduct our experiments on two benchmarks widely recognized for assessing
mathematical reasoning capabilities. (1) GSM8K (Cobbe et al., 2021) comprises a collection of
high-quality math word problems at the grade school level. It contains 1,319 test questions.
Typically, the reasoning steps in GSM8K vary between two and eight, ultimately yielding an
integer as the answer. (2) MATH (Hendrycks et al., 2021) contains 5,000 test questions, featuring
math competition-level problems. The answers in GSM8K are integers, making it relatively easy
for the regular expression matching program in evaluation frameworks to extract and verify
answers. However, the answers in MATH may contain complex mathematical formulas (e.g.,
fractions such as $\frac{\sqrt{2}}{2}+4$ or tuples such as $(\sqrt{2}, \sqrt{3})$).
We have explored several evaluation frameworks to assess the results on
MATH (e.g., (GPT-4o, 2024; He et al., 2024; Shao et al., 2024; Yu et al., 2024)). Different evaluation
-----
frameworks implement different regular expression rules to extract mathematical formulas,
leading to significant performance variations among them (in some cases, there are up to 5%
accuracy variations on MATH). In this paper, we adopt the same evaluation framework as in
MetaMath (Yu et al., 2024) because it is widely used and provides strict and robust evaluation
results using zero-shot and COT techniques.
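For intuition, a simplified version of such strict zero-shot answer matching is sketched below. The actual MetaMath evaluation code applies considerably more LaTeX normalization, so treat this as an assumption-laden approximation rather than the framework itself.

```python
import re

# Simplified sketch of strict answer extraction and matching; the real MetaMath
# evaluation harness normalizes LaTeX far more thoroughly than this does.
def extract_predicted_answer(completion: str) -> str | None:
    matches = re.findall(r"The answer is\s*(.+)", completion)
    return matches[-1].strip().rstrip(".") if matches else None

def normalize(ans: str) -> str:
    # crude normalization: drop whitespace, "$" delimiters, and "\!" spacing commands
    return re.sub(r"\s+|\$|\\!", "", ans)

def is_correct(completion: str, gold: str) -> bool:
    pred = extract_predicted_answer(completion)
    return pred is not None and normalize(pred) == normalize(gold)

print(is_correct("... so the total is 42.\nThe answer is 42.", "42"))  # True
```

A matcher this strict rarely accepts a wrong answer (high precision) but can reject correct answers written in an unexpected form (lower recall), which is the trade-off discussed in Section 4.1.3.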
**_4.1.2. Pre-Trained Models_**
We utilize three publicly available top-performing 7B pre-trained LLMs as base models for the Skywork-Math
models to push the limit of mathematical reasoning abilities in small-scale LLMs. Our empirical
results indicate that the Skywork-Math 7B models even outperform the recently released 70B
LLaMA-3 Instruct Model (AI@Meta, 2024) on the MATH benchmark.
- LLaMA2-7B (Touvron et al., 2023) is a general-purpose LLM model that has demonstrated significant performance across various benchmarks. However, it exhibits limited
mathematical reasoning abilities.
- Mistral-7B (Jiang et al., 2023) is another general-purpose LLM model that exhibits strong
reasoning abilities in math problem-solving and code generation.
- DeepSeekMath-Base-7B (Shao et al., 2024) is a specialized LLM model tailored for mathematics reasoning. It stems from DeepSeek-Coder-Base-v1.5-7B (Guo et al., 2024) and has
been further pre-trained on a mathematical corpus with 120 billion tokens. Due to this
extended pre-training on a massive math corpus, we observe a notable performance divergence between this specialized model and general-purpose LLM models (ref. Section 4.2.2).
**_4.1.3. Implementation Details_**
We utilize the GPT-4 API with a temperature of 0.7 to generate query-response pairs in the Skywork-MathQA dataset. To prevent data leakage, we filter the training data against the test
examples of GSM8K and MATH using a 30-gram overlap check, as suggested by (Azerbayev et al., 2023).
For all experiments, including ablations, the Skywork-Math models are trained for 3 epochs.
A global batch size of 32 is used along with the AdamW optimizer without weight decay.
Following the original configurations of the 7B pre-trained models, the learning rate is set to 2e−5
for LLaMA2-7B and 2e−6 for both Mistral-7B and DeepSeekMath-Base-7B. The learning rate
warm-up ratio is 0.03. All experiments are conducted on 8 Nvidia A800 GPUs with 80G memory.
For evaluation, we use the vLLM (Kwon et al., 2023) library to generate inference responses,
using the same prompt as in the SFT stage described in Section 3. Unless otherwise noted, we
set the maximum length of models to 2048 in both the model SFT stage and the evaluation stage.
We employ a stringent criterion similar to that used in MetaMath (Yu et al., 2024), achieving
nearly 100% precision but at the cost of a relatively low recall rate. This approach results in
several instances where correct responses from the model are mistakenly labeled as incorrect
according to our criteria. Specific examples can be found in Appendix B.
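The SFT hyperparameters listed above can be collected into a standard Hugging Face `TrainingArguments` object, as in the sketch below. This is not the authors' training script; the output path, the per-device/accumulation split of the global batch size of 32, and the use of bf16 are assumptions.

```python
from transformers import TrainingArguments

# Sketch of the SFT configuration described in this section (not the authors'
# script). Global batch size 32 = 8 GPUs x 4 per-device x 1 accumulation step;
# the model maximum length of 2048 is enforced at tokenization time, not here.
training_args = TrainingArguments(
    output_dir="skywork-math-sft",          # hypothetical output path
    num_train_epochs=3,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=1,
    learning_rate=2e-6,                     # 2e-5 for LLaMA2-7B; 2e-6 for Mistral / DeepSeekMath
    weight_decay=0.0,                       # AdamW without weight decay
    warmup_ratio=0.03,
    bf16=True,                              # assumption: mixed precision on A800 GPUs
    logging_steps=10,
)
```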
**4.2. Main Results**
**_4.2.1. Comprehensive Performance Comparison with State-of-the-art Models_**
Table 1 presents the comparison of the Skywork-Math model series with state-of-the-art closed- and open-source models on the test sets of the GSM8K and MATH benchmarks to evaluate their
math reasoning abilities. Because GPT-4-Turbo is a commercially closed-source model and
cannot be fine-tuned to adhere to specific output formats, its responses are evaluated using a
-----
**Model** **#Params** **GSM8K(%)** **MATH(%)**
Closed-source models
GPT-3.5-Turbo (Peng et al., 2023) N/A 80.8 34.1
GPT-4-Turbo (Achiam et al., 2023) N/A 90.51 57.0
GPT-4 (Achiam et al., 2023) N/A 92.0 42.5
PaLM2 (Anil et al., 2023) 540B 80.7 34.3
Flan-PaLM2 (Anil et al., 2023) 540B 84.7 33.2
Minerva (Lewkowycz et al., 2022) 8B 16.2 18.1
Minerva (Lewkowycz et al., 2022) 62B 52.4 27.6
Minerva (Lewkowycz et al., 2022) 540B 58.8 33.6
ChatGLM3-32B-SFT-2312 (Xu et al., 2024) 32B 75.8 29.0
+RFT, DPO (Xu et al., 2024) 32B 82.6 40.6
Claude-3-Opus (Anthropic, 2024) N/A **95.0** **60.1**
Open-source models (1-10B)
Baichuan-2 (Yang et al., 2023) 7B 24.5 5.6
LEMA-LLaMA2 (An et al., 2023) 7B 54.1 9.4
MetaMath (Yu et al., 2024) 7B 66.5 19.8
WizardMath-V1.1 (Luo et al., 2023) 7B 83.2 33.0
Xwin-Math-LLaMA (Li et al., 2024) 7B 82.6 40.6
Xwin-Math-Mistral (Li et al., 2024) 7B **89.2** 43.7
Xwin-Math-Llemma (Li et al., 2024) 7B 84.2 47.2
MAmmoTH (Yue et al., 2023) 7B 53.6 31.5
InternLM2-Math (Ying et al., 2024) 7B 78.1 34.6
DeepSeekMath-Instruct (Shao et al., 2024) 7B 82.9 46.8
Skywork-Math-LLaMA2 (ours) 7B 72.9 47.7
Skywork-Math-Mistral (ours) 7B 83.9 **51.2**
Skywork-Math-DeepSeekMath (ours) 7B 81.5 49.9
LLaMA3-Instruct (AI@Meta, 2024) 8B 79.6 30.0
Open-source models (10-50B)
LLaMA2 Touvron et al. (2023) 13B 28.70 3.90
Baichuan-2 (Yang et al., 2023) 13B 52.8 10.1
MetaMath (Yu et al., 2024) 13B 72.3 22.4
Wizard-Math (Luo et al., 2023) 13B 63.9 14.0
MAmmoTH (Yue et al., 2023) 13B 62.0 34.2
LEMA-LLaMA2 (An et al., 2023) 13B 65.7 12.6
Xwin-Math (Li et al., 2024) 13B **88.1** **44.9**
InternLM2-Math (Ying et al., 2024) 20B 82.6 37.7
LLaMA2 Touvron et al. (2023) 34B 42.20 6.20
Llemma (Azerbayev et al., 2023) 34B 51.5 25.0
Open-source models (50-70B)
WizardMath (Luo et al., 2023) 70B 81.6 22.7
MetaMath (Yu et al., 2024) 70B 82.3 22.6
LLaMA2 (Touvron et al., 2023) 70B 56.8 13.5
LEMA-LLaMA2 (An et al., 2023) 70B 83.5 25.0
MAmmoTH (Yue et al., 2023) 70B 76.9 41.8
LLaMA3-Instruct (AI@Meta, 2024) 70B 90.0 50.4
Xwin-Math (Li et al., 2024) 70B **90.6** **52.8**
Table 1 | Summary of math reasoning performance of closed- and open-source LLM models
in terms of accuracy (%). All results for open-source models are reported as top1 accuracy
using only SFT techniques. Skywork-Math models employ the zero-shot chain-of-thought (COT)
evaluation framework as implemented in MetaMath (Yu et al., 2024). The best result in each
block is highlighted in bold. GPT-4-Turbo is evaluated using the grading criteria with 4-shot
COT prompting as implemented in (Zheng et al., 2023). Skywork-Math 7B models, using only
synthetic SFT data, have achieved SOTA performance on MATH among models smaller than 10B
parameters, even outperforming 70B LLM models and an early version of GPT-4.
-----
grading criterion with 4-shot COT prompting as used in (Zheng et al., 2023). (1) For the MATH
benchmark, our Skywork-Math model series have achieved the state-of-the-art performance
among LLM models smaller than 10B parameters with only the SFT technique, even surpassing
an early version of GPT-4. These results indicate that strong math reasoning abilities can be
injected during the SFT stage through the extensive and high-quality Skywork-MathQA dataset.
Moreover, Skywork-Math 7B models achieve competitive accuracy with 70B LLM models, which
suggests that common 7B LLM models can possess strong math reasoning abilities given a sufficient
SFT process. These results demonstrate the significant effectiveness of our proposed two-stage
data synthesis and model SFT pipeline. (2) For the GSM8K benchmark, the Skywork-Math
model series also achieve comparable performance with several state-of-the-art models. It
is noteworthy that our Skywork-MathQA dataset contains no data referencing GSM8K. The
characteristics of math word problems (GSM8K) and math competition-level problems (MATH)
differ in their problem-answer formats and difficulty. We posit that the success can be attributed
to the difficulty of the relatively easy problems in MATH (Level 1&2) being similar to those in
GSM8K, and the knowledge learned from solving competition-level mathematical problems can
be effectively transferred to math word problems.
**_4.2.2. Scaling Laws in SFT on Mathematical Reasoning_**
In Figure 5, we illustrate the relationship between synthetic SFT dataset size and model performance on GSM8K and MATH. The curve clearly exhibits a scaling law relationship between the
size of SFT data and model’s performance. Here are some in-depth observations:
**Quantity Breeds Quality.** To enhance the mathematical reasoning abilities in LLMs, increasing
the quantity of synthetic data can significantly improve the quality of model performance.
This scaling trend implies that, while SFT with a small amount of data could achieve decent
results (Zhou et al., 2023), utilizing a larger scale of synthetic SFT data can further improve math
reasoning performance.
**Diminishing Returns from Continual Pre-Training.** The DeepSeekMath-Base (Shao et al.,
2024) 7B model, which has been continually pre-trained with 120B math-related tokens sourced
from the web, initially demonstrates superior performance. However, as we increase the synthetic dataset size in the Skywork-MathQA dataset, this advantage diminishes, and the model is eventually
surpassed by the Mistral (Jiang et al., 2023) 7B base model. As the amount of SFT data increases,
Skywork-Math-Mistral-7B and Skywork-Math-LLaMA2-7B catch up in performance to the
Skywork-Math-DeepSeekMath-7B. This suggests that while specialized pre-training provides a
strong initial boost, its benefits are not consistently scalable and can be matched by increasing
the quantity of synthetic SFT data.
**Effect of Problem Difficulty.** The accuracy of the Skywork-Math 7B model series
significantly increases as the synthetic data size expands from 2.1M to 2.5M, corresponding to
stage 2 of our data synthesis pipeline. This performance improvement in the final stage of
data scaling indicates that incorporating more complex problems (ranging from Level 3 to
Level 5 in the MATH dataset) has a substantial positive impact on model performance. This
finding underscores the importance of not only generating a large quantity of data but also
including more challenging problems to push the limits of math reasoning abilities of LLM
models. We will discuss this in more detail in Section 4.3.1.
-----
Figure 5 | The zero-shot top1 performance of Skywork-Math 7B model series improves significantly as we scale up the size of synthetic SFT data in the Skywork-MathQA dataset. There is
a clear trend indicating that the model’s math reasoning quality increases substantially with
increases in data quantity.
**4.3. Experimental Analysis**
**_4.3.1. Fine-Grained Analysis across Different Difficulty Levels_**
We explore model’s performance across various difficulty levels to analyze the internal relationship between data difficulty and LLM model’s capability. The difficulty level distribution
-----
| Base Model | Dataset Size | Level-1 | Level-2 | Level-3 | Level-4 | Level-5 |
|---|---|---|---|---|---|---|
| LLaMA2-7B | 7.5K | 17.85 | 8.39 | 4.77 | 3.05 | 0.91 |
| Mistral-7B | 7.5K | 37.99 | 25.17 | 15.12 | 8.48 | 2.49 |
| DeepSeekMath-7B | 7.5K | 64.07 | 46.76 | 37.84 | 24.63 | 10.73 |
| LLaMA2-7B | 2.1M | 78.03 | 60.29 | 48.19 | 35.09 | 19.56 |
| Mistral-7B | 2.1M | 80.78 | 66.33 | 55.53 | 41.52 | 21.45 |
| DeepSeekMath-7B | 2.1M | 80.78 | 65.21 | 58.00 | 41.60 | 21.83 |
| LLaMA2-7B | 7.5K + 0.4M (hard) | 63.16 | 43.96 | 34.39 | 24.46 | 10.20 |
| Mistral-7B | 7.5K + 0.4M (hard) | 71.62 | 57.27 | 48.72 | 34.60 | 16.99 |
| DeepSeekMath-7B | 7.5K + 0.4M (hard) | 81.01 | 61.97 | 51.90 | 37.07 | 18.05 |
| LLaMA2-7B | 2.1M + 0.4M (hard) | 78.03 | 62.42 | 52.87 | 37.48 | 18.73 |
| Mistral-7B | 2.1M + 0.4M (hard) | 83.52 | 67.56 | 60.65 | 44.89 | 25.08 |
| DeepSeekMath-7B | 2.1M + 0.4M (hard) | 82.84 | 67.23 | 58.71 | 42.01 | 21.30 |
| GPT-4-Turbo | - | 82.84 | 73.38 | 65.34 | 52.88 | 34.06 |
Table 2 | Accuracies (%) across difficulty levels (from Level-1 to Level-5) with three base models in
Skywork-Math 7B model series before and after fine-tuning on stage 2 in the MATH benchmark.
7.5K data samples are randomly sampled from the Skywork-MathQA dataset. GPT-4-Turbo is
evaluated using our designed grading criteria with 4-shot COT prompting. In stage 1, Skywork-Math 7B models significantly improve the performance on easy problems in MATH (Level
1&2) using 2.1M synthetic SFT data. In stage 2, Skywork-Math 7B models show significant
improvements on hard problems in MATH (Level 3-5) using 2.5M synthetic SFT data.
of the training and test sets in MATH is illustrated in Figure 6. We can find that the number of hard problems (Level 3-5) is much larger than that of easy problems (Level 1&2) in both the training and test sets. This highlights the value of hard problems in improving the overall math reasoning performance.

Figure 6 | Difficulty level distribution of the training and test sets in the MATH benchmark (x-axis: difficulty level, 1 to 5; y-axis: number of instances).

In Table 2, we conduct comprehensive experiments with the three pre-trained base LLM models in the Skywork-Math 7B model series. We observed a significant increase in accuracy for easy problems (Level 1&2) when scaling the dataset size from 7.5K to 2.1M, even reaching accuracies comparable to GPT-4-Turbo. However, the increase in accuracy for hard problems (Level 3-5) was relatively modest
even reaching accuracies comparable to GPT4-Turbo. However, the increase in accuracy for hard problems (Level 3-5) was relatively modest
compared to GPT-4-Turbo. This could be due to the lack of high-quality responses in hard
problems, motivating us to perform stage 2 of our data synthesis pipeline to generate hard
synthetic problems. After fine-tuning our Skywork-Math 7B models with an additional 0.4M hard
synthetic problems, we observe a further increase in model performance, particularly at Level-3
and Level-4 on MATH. For comparison, we conduct an experiment to fine-tune three base models in Skywork-Math 7B models using 0.4M hard synthetic problems along with the randomly
sampled 7.5k problems. We notice that for hard problems (Level 3-5), base models fine-tuned on
-----
Figure 7 | Performance of different base models in Skywork-Math 7B models with various
data augmentation methods on GSM8K and MATH. "Mix" represents a combination of data
generated by three augmentation methods detailed in Section 3.1. For this ablation study, we
utilize 60K synthetic SFT data in the Skywork-MathQA dataset.
the "2.1M + 0.4M (hard)" data perform significantly better than those fine-tuned on the "7.5k +
0.4M (hard)" data. This supports the rationale that LLM models should acquire mathematical
reasoning abilities progressively from easy to hard problems. More detailed experiments can be
found in Appendix C. In addition to testing on different levels, we also conducted experiments
on various math subjects, as detailed in Appendix D.
**_4.3.2. Effect of Data Diversity_**
**Diversity on Data Augmentation Methods.** One dimension of diversity is the data augmentation method. We select 60K synthetic samples from the Skywork-MathQA dataset to study this question.
As shown in Figure 7, the "Mix" approach, a combination of synthetic data generated by three
augmentation methods, achieves the highest performance. Therefore, we utilize the "mix"
method to generate our Skywork-MathQA dataset. Moreover, the Xwin-style (Li et al., 2024)
approach and the MetaMathQA-style (Yu et al., 2024) approach require extensive time for answer
verification and two steps for data generation, respectively. For the consideration of efficiency,
we utilize the Evol-style (Luo et al., 2023) approach as a major component of the synthetic data
because it requires fewer input and output tokens within LLM models. We also observe that the
impact of the mix rate of augmentation methods is not significant on the GSM8K and MATH
benchmarks. However, combining these data augmentation methods is crucial for enhancing
the data diversity of the Skywork-MathQA dataset. Detailed exploration of data mixtures with
different data augmentation methods is left for future work.
**Diversity of Seed Problems.** Another dimension of diversity is the selection of seed problems.
We construct two SFT datasets, each comprising 360K entries. The first dataset uses only the
training set of MATH as the seed problems. The second dataset employs the diversity selection
method introduced in Section 3.1, which includes a wide range of non-proving problems from
multiple academic data sources and further applies the diversity selection method to ensure
diversity. As illustrated in Table 3, the improved diversity of seed problems in SFT data
substantially enhances the math reasoning abilities in Skywork-Math models across three 7B
base LLM models.
-----
**Base Model** **Diversity Selection** **MATH(%)** **GSM8K(%)**
LLaMA2-7B ✗ 29.48 50.57
Mistral-7B ✗ 38.50 72.71
DeepSeekMath-7B ✗ 43.96 74.30
LLaMA2-7B ✓ 29.36 52.08
Mistral-7B ✓ 39.68 73.92
DeepSeekMath-7B ✓ 43.68 75.97
Table 3 | Ablation studies with the diversity selection method on 360K data samples applied
in stage 1 of the data synthesis pipeline. ✓ (✗) means that we evaluate with (without) the diversity
selection method.
**Base Model** **Dataset (Size)** **GSM8K(%) MATH(%)**
LLaMA2-7B Random selection (1M) 60.35 37.76
LLaMA2-7B Random selection (1.5M) 66.87 40.52
LLaMA2-7B Selection with a verifier (1M) 62.77 36.40
Mistral-7B Random selection (1M) 77.79 44.56
Mistral-7B Random selection (1.5M) 80.36 45.86
Mistral-7B Selection with a verifier (1M) 77.26 43.04
Table 4 | Comparisons of the model performance on GSM8K and MATH in terms of accuracy
using random selection and selection with a verifier. All data samples are selected from the
Skywork-MathQA dataset. Random selection on the math reasoning dataset is a simple but hard-to-beat strategy. Without a carefully designed filtering strategy, it is non-trivial to outperform
random selection.
**_4.3.3. Data Selection with a Verifier_**
Since the accuracy of GPT-4 on MATH is around 50%, we can infer that approximately half of
the data samples in the Skywork-MathQA dataset may not have the right solving processes
and answers. To ensure the collection of high-quality data, a natural approach is to perform data
selection with a verifier to filter out wrong responses. We first eliminate synthetic data entries
that fail to align with the ground truth final answers. However, most data samples either lack
the ground truth final answers or contain errors in intermediate reasoning steps. Therefore,
we need a more precise approach to ensure the entire solution is consistent with the
ground truth. We fine-tune a Mistral-7B (Jiang et al., 2023) base model with few-shot prompting
to verify if the reasoning paths and final answers are correct. Finally, we obtain approximately
1 million samples deemed correct by this fine-tuned verifier. With human verification of the
results judged by the trained Mistral-7B verifier, it achieves an accuracy of approximately 80%.
After implementing our filtering process, the fraction of correct data (80%) increases significantly
compared to its original fraction (50%). As shown in Table 4, we present the results selected
using the trained verifier in contrast to a random selection from the Skywork-MathQA dataset. We
initially anticipated that, after filtering for correctness to obtain the 1M filtered dataset, the
accuracies on GSM8K and MATH would fall between those of the randomly selected 1M and 1.5M
datasets, given their quantitative relationship. However, the actual performance on the LLaMA2-7B
_and Mistral-7B models showed that the 1M filtered dataset performed even worse than the 1M dataset_
_with random selection._
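The verifier-based filtering described above can be summarized by the following minimal sketch; `verifier_judge` is an assumed wrapper around the fine-tuned Mistral-7B verifier, and `extract_final_answer` is any helper that parses the final answer from a response.

```python
# Illustrative verifier-based filtering: first drop entries whose final answer
# contradicts a known ground truth, then ask the trained verifier to judge the
# full reasoning path. The helper functions are assumed, not released code.
def filter_with_verifier(samples, verifier_judge, extract_final_answer):
    kept = []
    for sample in samples:
        gold = sample.get("gold_answer")          # may be missing for synthetic data
        pred = extract_final_answer(sample["response"])
        if gold is not None and pred != gold:     # step 1: final-answer mismatch
            continue
        if verifier_judge(sample["query"], sample["response"]):  # step 2: verify reasoning
            kept.append(sample)
    return kept
```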
-----
**Base Model** **Dataset (Size)** **GSM8K(%) MATH(%)**
LLaMA2-7B Random selection (360K) 52.08 29.36
LLaMA2-7B Selection with hard problems (360K) 54.36 36.68
Mistral-7B Random selection (360K) 73.92 39.68
Mistral-7B Selection with hard problems (360K) 76.42 40.20
DeepSeekMath-7B Random selection (360K) 75.97 43.68
DeepSeekMath-7B Selection with hard problems (360K) 75.74 44.48
Table 5 | Comparisons of the model performance on GSM8K and MATH in terms of accuracy using random selection and our designed selection strategy with filtering for more hard problems.
All data samples are selected from the Skywork-MathQA dataset. Our strategy consistently
outperforms random selection.
| Model | GSM8K English (%) | GSM8K Chinese (%) | MATH English (%) | MATH Chinese (%) |
|---|---|---|---|---|
| LLaMA3-8B + Skywork-MathQA | 75.97 | 58.83 | 50.30 | 44.10 |
| Mixtral-8x7B + Skywork-MathQA | 83.93 | 72.71 | 51.40 | 48.02 |
| Llemma-7B + Skywork-MathQA | 66.03 | 50.72 | 40.08 | 37.42 |
| Skywork-Math-LLaMA2-7B | 72.86 | 50.34 | 47.66 | 38.38 |
| Skywork-Math-Mistral-7B | 83.93 | 69.75 | 51.22 | 48.34 |
| Skywork-Math-DeepSeekMath-7B | 81.50 | 73.69 | 49.88 | 48.22 |
Table 6 | Results of bilingual language testing on GSM8K and MATH. Note that all models are
fine-tuned on English data. The Chinese versions of GSM8K and MATH are translated from their
English counterparts using GPT-4. LLaMA3-8B, Mixtral-8x7B, and Llemma-7B are fine-tuned on
our Skywork-MathQA dataset. Our empirical results indicate that strong math reasoning
capabilities can be maintained between English and Chinese.
The experimental results align with the conclusion in DsDm (Engstrom et al., 2024). The
data selection process for math reasoning is non-trivial, and multiple objectives affect it.
Our observation suggests that although the accuracy of the retained data reaches
as high as 80%, the difficulty level of the selected problems significantly decreases. The selection
process improves data quality but significantly lowers problem difficulty,
thereby negatively impacting the performance of LLMs. When filtering for correct responses,
the verifier model predominantly selects problems with lower difficulty. To address
the scarcity of hard problems in the filtered dataset, we further utilize GPT-4 with the COT
prompt to pick out around 360K hard problems. Table 5 demonstrates that data selection with
hard problems is effective, as all base models in the Skywork-Math models show improved
performance on both the MATH and GSM8K benchmarks compared to their random selection
counterparts.
**_4.3.4. Can Math Reasoning Abilities Transfer Between Languages?_**
The common view holds that mathematical problems mainly consist of symbols and expressions, and the textual language used to state them is not crucial for understanding. To explore
whether math reasoning abilities can transfer across languages, we translate the
GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021) benchmarks from English to
-----
**Original Question:** There were a total of 15 fish in the plate. After the kitten ate some, there
were 10 fish left. How many fish did the kitten eat?
**Question with Distractors:** There are 3 kittens in the house. There were a total of 15 fish in
the plate, including 10 carp and 5 belt fish. After the kittens ate some fish, there were still 10
fish left. How many fish did the kittens eat?
**Distractors:**
1. "There are 3 kittens in the house."
2. "Including 10 carp and 5 belt fish."
Figure 8 | An example of an original question from GSM8K (Cobbe et al., 2021) and the same
question with two distractors as implemented in CMATH (Wei et al., 2023a).
Chinese for bilingual language testing. It is important to note that all models are fine-tuned
only on English data. As shown in Table 6, the overall math reasoning abilities are maintained
between English and Chinese. There is a relatively small-scale performance degradation on
MATH between English and Chinese, especially in Skywork-Math-Mistral-7B and Skywork-Math-DeepSeekMath-7B. However, there is a significant performance drop on GSM8K between
English and Chinese, with up to a 20-point drop in Skywork-Math-LLaMA2-7B. Since GSM8K is
grouped in the math word problem category, which requires more linguistic understanding, the
degradation in accuracy is greater than that for MATH. Notably, Skywork-Math-DeepSeekMath-7B performs well in both English and Chinese. We hypothesize that this is because the
120B continual pre-training corpus in the DeepSeekMath-Base-7B model includes many Chinese
sources, which improves its Chinese language understanding. These results highlight the challenges associated with language dependence in understanding and performing mathematical
reasoning tasks.
**_4.3.5. Can Math Reasoning Abilities Be Maintained in Robustness Tests?_**
As suggested in CMATH (Wei et al., 2023a), several open-sourced LLM models, with the exception of GPT-4-Turbo, are vulnerable to robustness tests in which math reasoning is influenced by distractors.
To ascertain if models effectively comprehend the fundamental elements of mathematical
word problems and their solutions, we inject each problem in GSM8K with 1-5 distractors as
implemented in CMATH (Wei et al., 2023a). An example of two distractors is shown in Figure 8.
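The robustness sweep itself reduces to re-running evaluation on copies of GSM8K with k distractors injected. A minimal sketch, with `solve` and `is_correct` as assumed wrappers around model inference and the strict answer matcher, is shown below.

```python
# Illustrative robustness sweep: `problems_with_distractors[k]` is assumed to
# hold GSM8K problems with k CMATH-style distractor sentences injected.
def robustness_curve(problems_with_distractors, solve, is_correct):
    accuracy = {}
    for k, problems in problems_with_distractors.items():   # k = 1..5 distractors
        hits = sum(is_correct(solve(p["question"]), p["gold_answer"]) for p in problems)
        accuracy[k] = hits / len(problems)
    return accuracy
```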
As listed in Table 7, open-sourced fine-tuned LLM models are sensitive to the distractors
injected into math word problems. Compared to the MetaMathQA SFT dataset, our proposed
Skywork-MathQA dataset significantly improves robustness performance in GSM8K based
on common pre-trained models, such as Mistral-7B and DeepSeekMath-7B. We hypothesize
that the reason lies in the significantly larger size of the Skywork-MathQA dataset compared
to the MetaMathQA dataset. The improved diversity of the Skywork-MathQA dataset can
help the LLM models fine-tuned (SFT) on it to better withstand robustness tests. However, GPT-4-Turbo
consistently excludes interference information and focuses on the relevant information, thereby
producing correct responses with even 5 distractors in GSM8K. These results suggest that
most open-source SFT models cannot truly understand the semantic information of math
word problems but rather mechanically extract numbers from the sentence and calculate with them.
Effectively improving math reasoning abilities while maintaining robustness like GPT-4-Turbo
is an important area for future exploration.
-----
| Model | SFT Dataset (Size) | GSM8K (%) | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|---|---|
| GPT-4-Turbo | - | 90.51 | 95.30 | 91.44 | 88.98 | 88.02 | 85.37 |
| DeepSeekMath-7B-Instruct | - | 82.90 | 73.77 | 62.97 | 51.44 | 48.22 | 43.88 |
| Mistral-7B | MetaMathQA (395K) | 79.08 | 70.10 | 56.80 | 48.95 | 46.01 | 38.51 |
| DeepSeekMath-7B | MetaMathQA (395K) | 82.49 | 73.20 | 60.33 | 50.26 | 42.31 | 39.40 |
| LLaMA2-13B | MetaMathQA (395K) | 70.96 | 65.86 | 50.25 | 41.21 | 33.73 | 31.64 |
| Llemma-7B | Skywork-MathQA (2.5M) | 66.03 | 61.40 | 52.90 | 46.06 | 40.38 | 38.21 |
| LLaMA3-8B | Skywork-MathQA (2.5M) | 75.97 | 75.14 | 70.91 | 65.35 | 62.43 | 55.82 |
| Mixtral-8x7B | Skywork-MathQA (2.5M) | 83.93 | 84.19 | 78.21 | 73.36 | 68.93 | 66.57 |
| Skywork-Math-LLaMA2-7B | Skywork-MathQA (2.5M) | 72.86 | 64.72 | 58.56 | 54.20 | 49.41 | 44.63 |
| Skywork-Math-Mistral-7B | Skywork-MathQA (2.5M) | 83.93 | 83.16 | 75.19 | 72.57 | 66.42 | 67.01 |
| Skywork-Math-DeepSeekMath-7B | Skywork-MathQA (2.5M) | 81.50 | 78.35 | 72.54 | 64.70 | 59.17 | 57.31 |
Table 7 | Performance (accuracy, %) against the number of distractors added to the original GSM8K dataset. The GSM8K (%) column reports accuracy without distractors, and columns 1-5 report accuracy with that many injected distractors.
GPT-4-Turbo demonstrates remarkable robustness, while the other models degrade substantially.
**Model** **Data Synthesis Pipeline (Size) GSM8K(%) MATH(%)**
Mistral-7B - 50.00 12.70
Skywork-Math-Mistral-7B Stage 1 (2.1M) 83.25 49.10
Skywork-Math-Mistral-7B Stage 2 (2.5M) 83.93 51.22
Mixtral-8×7B - 74.40 28.40
Mixtral-8×7B + Skywork-MathQA Stage 1 (2.1M) 85.06 50.02
Mixtral-8×7B + Skywork-MathQA Stage 2 (2.5M) 83.93 51.40
Table 8 | Performance comparison between the dense (Skywork-Math-Mistral-7B) and sparse
MOE (Mixtral-8×7B) LLM model. We fine-tune the corresponding base models using the
Skywork-MathQA dataset in both stage 1 and stage 2 of the data synthesis pipeline.
**_4.3.6. Ablation Studies Between Sparse MOE and Dense Models_**
Recent advancements have witnessed the rapid development of sparse MOE models (DeepSeek-AI, 2024). To evaluate the generalization capability of our Skywork-MathQA dataset across
both sparse MOE and dense models, we select commonly used dense (Skywork-Math-Mistral-7B (Jiang et al., 2023)) and sparse MOE (Mixtral-8×7B (Jiang et al., 2024a)) models as the
pre-trained LLM base models. We conduct experiments using the Skywork-MathQA dataset
in both stage 1 and stage 2. As shown in Table 8, the results confirm strong generalization
across different types of LLM models. However, Mixtral-8×7B fine-tuned on the Skywork-MathQA dataset does not show superior performance compared with its dense counterpart.
Mixtral-8×7B and Skywork-Math-Mistral-7B exhibit almost identical performance
on GSM8K and MATH. We posit the reason is that the sparse MoE model, due to its mixture-of-experts architecture, may not significantly improve performance on the specific task (i.e.,
the math reasoning task), but can better handle task-specific knowledge without compromising
performance on other tasks (Wei et al., 2024; Xue et al., 2024).
**_4.3.7. Effect of Data Leakage_**
Though we never use the test data from MATH (Hendrycks et al., 2021) or GSM8K (Cobbe
et al., 2021) for fine-tuning LLM models, we utilize GPT-4 (Achiam et al., 2023) to synthesize
-----
**Question in Skywork-MathQA:** Let 𝑥 and 𝑦 be nonzero real numbers such that
(3 − 4𝑖)(𝑥 + 𝑦𝑖)
is pure imaginary. Find 𝑥/𝑦.
**Question in the MATH test set:** Let 𝑥 and 𝑦 be nonzero real numbers such that
𝑥𝑦(𝑥² − 𝑦²) = 𝑥² + 𝑦².
Find the minimum value of 𝑥² + 𝑦².
Figure 9 | An example of the math questions that are completely different but get filtered by a
10-gram filter due to a common condition.
data, which may inadvertently contaminate our synthetic dataset with elements from the
test data in the evaluation benchmarks. Therefore, we follow a standard 30-gram filtering
process (Azerbayev et al., 2023) on test data of the corresponding benchmark to circumvent the
data leakage of the Skywork-MathQA dataset. We filter out approximately 6K samples for the
test set of MATH and none for GSM8K.
To assess the impact of the n-gram filter, we test a 10-gram filter, which is much
more stringent than the 30-gram filter. We observe that the 10-gram filter removes a lot of data
that has little relation to the data in the test set of MATH. As illustrated in Figure 9, there are
two entirely unrelated examples in our synthetic Skywork-MathQA dataset and the test set of
MATH. It is evident that "Let 𝑥 and 𝑦 be nonzero real numbers such that" is a very common
condition in math problems. The 10-gram filter results in the removal of many completely
unrelated problems in the synthetic data. Consequently, we use the 30-gram filter instead of the
10-gram filter to produce the Skywork-MathQA dataset.
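A simplified sketch of such an n-gram decontamination check is given below; the exact tokenization and matching rules used for the Skywork-MathQA dataset may differ, so treat it as an approximation.

```python
# Simplified sketch of n-gram decontamination against a benchmark test set.
def ngrams(text: str, n: int) -> set[tuple[str, ...]]:
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def is_contaminated(sample_text: str, test_set_texts: list[str], n: int = 30) -> bool:
    """Flag a synthetic sample if any of its n-grams also appears in a test problem."""
    sample_grams = ngrams(sample_text, n)
    if not sample_grams:
        return False
    for test_text in test_set_texts:
        if sample_grams & ngrams(test_text, n):
            return True
    return False

# With n=30 almost nothing overlaps by accident; with n=10 common phrasings such
# as "Let x and y be nonzero real numbers such that ..." start to collide.
```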
We further conduct experiments to quantitatively analyze the difference between the 30-gram and 10-gram filters using our Skywork-MathQA dataset in stage 1. Our Skywork-MathQA
dataset, which has already been filtered using the 30-gram filter, consists of 2.16M instances.
After applying 10-gram filtering, we have 2.10M instances. The filtered-out data, meaning the
data samples present in the 2.16 million instances but not in the 2.10 million instances, consists
of 60K samples. For a fair comparison, we also randomly select 60K data samples from the
Skywork-MathQA dataset. The results of accuracies on the MATH benchmark are reported
in Table 9. The observations are as follows: (1) The 10-gram filter is too strict, leading to the
removal of some specific types of problems in the math benchmark (ref. Figure 9), which results
in performance degradation. (2) The 60K randomly sampled data is much more useful than
the 60K filtered-out data for Skywork-Math-LLaMA2-7B and Skywork-Math-Mistral-7B. The
experimental results are reasonable, as the diversity in the randomly selected 60K data is much
greater than that in the filtered 60K data. (3) The performance of DeepSeekMath-7B after SFT
with the 2.10M dataset is significantly worse than with the 2.16M dataset. The filtered 60K
dataset performs even better than the randomly selected 60K dataset. We believe this is because
Skywork-Math-DeepSeekMath-7B may focus on the types of problems present in the filtered
60K data. Its base model, DeepSeekMath-Base-7B (Shao et al., 2024), is a specialized math LLM
model continually pre-trained on a large collection of math data that matches some of the types
in these filtered 60K problems.
-----
**Model** **Filter Method (size)** **MATH(%)**
Skywork-Math-LLaMA2-7B 30-gram (2.16M) 45.56
Skywork-Math-LLaMA2-7B 10-gram (2.10M) 37.54
Skywork-Math-LLaMA2-7B Filter-out (60K) 10.76
Skywork-Math-LLaMA2-7B Random selection (60K) 15.16
Skywork-Math-Mistral-7B 30-gram (2.16M) 49.10
Skywork-Math-Mistral-7B 10-gram (2.10M) 40.78
Skywork-Math-Mistral-7B Filter-out (60K) 22.32
Skywork-Math-Mistral-7B Random selection (60K) 27.84
Skywork-Math-DeepSeekMath-7B 30-gram (2.16M) 48.64
Skywork-Math-DeepSeekMath-7B 10-gram (2.10M) 36.68
Skywork-Math-DeepSeekMath-7B Filter-out (60K) 40.64
Skywork-Math-DeepSeekMath-7B Random selection (60K) 39.86
Table 9 | Accuracies (%) on MATH for the Skywork-Math models using the 30-gram and 10-gram
filter methods. "Filter-out" indicates samples present in the 30-gram filter method but not in the
10-gram filter method. For a fair comparison, we also randomly sampled 60K data points from
our Skywork-MathQA dataset.
**Model** **Model Maximum Length MATH(%) GSM8k(%)**
Skywork-Math-LLaMA2-7B 512 44.06 67.85
Skywork-Math-LLaMA2-7B 2048 47.66 72.86
Skywork-Math-Mistral-7B 512 50.56 82.41
Skywork-Math-Mistral-7B 2048 51.22 83.93
Skywork-Math-DeepSeekMath-7B 512 48.28 80.52
Skywork-Math-DeepSeekMath-7B 2048 49.88 81.50
Table 10 | Comparison of performance in Skywork-Math models using the 2.5M-instance
Skywork-MathQA dataset with different maximum model lengths.
**_4.3.8. Effect of Model Maximum Length_**
As the difficulty level of problems increases, the length of reasoning steps typically becomes
longer, especially with those generated by LLMs. If the model’s maximum length is too small,
the response may be truncated. In our synthetic Skywork-MathQA SFT dataset, around 130K
problems exceed 512 tokens. Therefore, we set the maximum length of models to 2048 tokens
in both the SFT stage and the evaluation stage. As shown in Table 10, increasing the model’s
maximum length leads to improved performance, indicating that 7B models can comprehend
and execute long reasoning processes.
-----
##### 5. Closing Remarks and Future Directions
We study how to empower mathematical reasoning abilities for common 7B pre-trained LLM
models. We propose the Skywork-MathQA dataset, consisting of 2.5 million diverse and high-quality SFT instances, implemented through our novel two-stage data synthesis pipeline. We
introduce Skywork-Math model series, demonstrating that common small-scale 7B language
models can stimulate strong mathematical reasoning ability using only synthetic SFT data.
Skywork-Math models achieve state-of-the-art accuracy among models smaller than 10B parameters using only synthetic SFT data, surpassing 70B LLM models and an early version of
GPT-4 on MATH. These results suggest that the data scaling law for mathematical reasoning
in LLM models remains significant and promising. Notably, this research provides several
valuable insights and practical takeaways to advance our understanding of the capabilities and
limitations of LLMs in mathematical reasoning.
Finally, we present two promising future directions for this work:
**Code-Integrated Math Reasoning.** Complex scientific calculations are essential for tackling
difficult mathematical problems. By embedding executable code, LLMs can dynamically generate and execute code to solve intricate mathematical problems, ensuring higher accuracy and
robustness. Some recent works have already been proposed to translate mathematical problems
into executable code (Gou et al., 2023; Toshniwal et al., 2024). However, code cannot always be
generated correctly on the first attempt. Therefore, iteratively utilizing code to solve challenging
math problems is a promising direction for future research.
**More General Reasoning Tasks.** Reasoning is a crucial ability for complex problem-solving.
Beyond mathematical reasoning, there are many other important reasoning tasks, such as logical
reasoning, causal reasoning, and commonsense reasoning (Sun et al., 2023). It is intriguing to
explore how our proposed method can be applied to these more general reasoning tasks.
##### 6. Acknowledgements
We would like to thank Longhui Yu (the author of MetaMath) and Chen Li (the author of
Xwin-Math) for their valuable discussions. Our deepest gratitude goes to our boss, Yahui Zhou,
whose financial assistance in scaling supervised fine-tuning data and providing access to GPU
computational resources was indispensable for the successful completion of this study.
-----
##### References
J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt,
S. Altman, S. Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
AI@Meta. Llama 3 model card. 2024. URL https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md.
E. Almazrouei, H. Alobeidli, A. Alshamsi, A. Cappelli, R. Cojocaru, M. Debbah, É. Goffinet,
D. Hesslow, J. Launay, Q. Malartic, et al. The falcon series of open language models. arXiv
preprint arXiv:2311.16867, 2023.
S. An, Z. Ma, Z. Lin, N. Zheng, J. Lou, and W. Chen. Learning from mistakes makes LLM
better reasoner. CoRR, abs/2310.20689, 2023. doi: 10.48550/ARXIV.2310.20689. URL
https://doi.org/10.48550/arXiv.2310.20689.
R. Anil, A. M. Dai, O. Firat, M. Johnson, D. Lepikhin, A. Passos, S. Shakeri, E. Taropa, P. Bailey,
Z. Chen, et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023.
Anthropic. The Claude 3 model family: Opus, Sonnet, Haiku. 2024. URL https://www-cdn.anthropic.com/de8ba9b01c9ab7cbabf5c33b80b7bbc618857627/Model_Card_Claude_3.pdf.
D. Arora, H. G. Singh, et al. Have llms advanced enough? a challenging problem solving
benchmark for large language models. arXiv preprint arXiv:2305.15074, 2023.
Z. Azerbayev, H. Schoelkopf, K. Paster, M. D. Santos, S. McAleer, A. Jiang, J. Deng, S. Biderman,
and S. Welleck. Llemma: An open language model for mathematics. In The 3rd Workshop
on Mathematical Reasoning and AI at NeurIPS'23, 2023. URL https://openreview.net/forum?id=0QHZrCWCH0.
Y. Bai, A. Jones, K. Ndousse, A. Askell, A. Chen, N. DasSarma, D. Drain, S. Fort, D. Ganguli,
T. Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from
human feedback. arXiv preprint arXiv:2204.05862, 2022.
Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In Proceedings of the
26th annual international conference on machine learning, pages 41–48, 2009.
T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam,
G. Sastry, A. Askell, et al. Language models are few-shot learners. Advances in neural
information processing systems, 33:1877–1901, 2020.
Y. Cao, Y. Kang, C. Wang, and L. Sun. Instruction mining: When data mining meets large
language model finetuning. arXiv preprint arXiv, 2307, 2023.
S. Casper, X. Davies, C. Shi, T. K. Gilbert, J. Scheurer, J. Rando, R. Freedman, T. Korbak, D. Lindner, P. Freire, et al. Open problems and fundamental limitations of reinforcement learning
from human feedback. arXiv preprint arXiv:2307.15217, 2023.
W. Chen, X. Ma, X. Wang, and W. W. Cohen. Program of thoughts prompting: Disentangling
computation from reasoning for numerical reasoning tasks. CoRR, abs/2211.12588, 2022. URL
https://doi.org/10.48550/arXiv.2211.12588.
-----
W.-L. Chiang, Z. Li, Z. Lin, Y. Sheng, Z. Wu, H. Zhang, L. Zheng, S. Zhuang, Y. Zhuang, J. E.
Gonzalez, et al. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality,
March 2023. URL https://lmsys.org/blog/2023-03-30-vicuna, 3(5), 2023.
K. Cobbe, V. Kosaraju, M. Bavarian, J. Hilton, R. Nakano, C. Hesse, and J. Schulman. Training
verifiers to solve math word problems. CoRR, abs/2110.14168, 2021. URL https://arxiv.org/abs/2110.14168.
DeepSeek-AI. Deepseek-v2: A strong, economical, and efficient mixture-of-experts language
model, 2024.
L. Engstrom, A. Feldmann, and A. Madry. Dsdm: Model-aware dataset selection with datamodels. arXiv preprint arXiv:2401.12926, 2024.
G. Gendron, Q. Bao, M. Witbrock, and G. Dobbie. Large language models are not strong
abstract reasoners yet. In ICLR 2024 Workshop: How Far Are We From AGI, 2024. URL
https://openreview.net/forum?id=Pc0fPGip78.
Z. Gou, Z. Shao, Y. Gong, Y. Yang, M. Huang, N. Duan, W. Chen, et al. Tora: A tool-integrated
reasoning agent for mathematical problem solving. arXiv preprint arXiv:2309.17452, 2023.
[GPT-4o. Gpt-4o simple evals, 2024. URL https://github.com/openai/simple-evals.](https://github.com/openai/simple-evals)
S. Gunasekar, Y. Zhang, J. Aneja, C. C. T. Mendes, A. Del Giorno, S. Gopi, M. Javaheripi,
P. Kauffmann, G. de Rosa, O. Saarikivi, et al. Textbooks are all you need. arXiv preprint
arXiv:2306.11644, 2023.
D. Guo, Q. Zhu, D. Yang, Z. Xie, K. Dong, W. Zhang, G. Chen, X. Bi, Y. Wu, Y. Li, et al. Deepseekcoder: When the large language model meets programming–the rise of code intelligence.
arXiv preprint arXiv:2401.14196, 2024.
C. He, R. Luo, Y. Bai, S. Hu, Z. L. Thai, J. Shen, J. Hu, X. Han, Y. Huang, Y. Zhang, et al.
Olympiadbench: A challenging benchmark for promoting agi with olympiad-level bilingual
multimodal scientific problems. arXiv preprint arXiv:2402.14008, 2024.
D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Steinhardt.
Measuring mathematical problem solving with the MATH dataset. In Thirty-fifth Conference
on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021.
[URL https://openreview.net/forum?id=7Bywt2mQsCe.](https://openreview.net/forum?id=7Bywt2mQsCe)
J. Huang and K. C.-C. Chang. Towards reasoning in large language models: A survey. arXiv
preprint arXiv:2212.10403, 2022.
A. Q. Jiang, A. Sablayrolles, A. Mensch, C. Bamford, D. S. Chaplot, D. d. l. Casas, F. Bressand,
G. Lengyel, G. Lample, L. Saulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023.
A. Q. Jiang, A. Sablayrolles, A. Roux, A. Mensch, B. Savary, C. Bamford, D. S. Chaplot, D. d. l.
Casas, E. B. Hanna, F. Bressand, et al. Mixtral of experts. arXiv preprint arXiv:2401.04088,
2024a.
W. Jiang, H. Shi, L. Yu, Z. Liu, Y. Zhang, Z. Li, and J. Kwok. Forward-backward reasoning in
large language models for mathematical verification, 2024b. URL https://openreview.net/forum?id=GhYXocT75t.
-----
J. Kaplan, S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu,
and D. Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361,
2020.
W. Kwon, Z. Li, S. Zhuang, Y. Sheng, L. Zheng, C. H. Yu, J. Gonzalez, H. Zhang, and I. Stoica.
Efficient memory management for large language model serving with pagedattention. In
Proceedings of the 29th Symposium on Operating Systems Principles, pages 611–626, 2023.
Y. Lan, L. Wang, Q. Zhang, Y. Lan, B. T. Dai, Y. Wang, D. Zhang, and E.-P. Lim. Mwptoolkit: An
open-source framework for deep learning-based math word problem solvers. In Proceedings
of the AAAI Conference on Artificial Intelligence, volume 36, pages 13188–13190, 2022.
A. Lewkowycz, A. J. Andreassen, D. Dohan, E. Dyer, H. Michalewski, V. V. Ramasesh, A. Slone,
C. Anil, I. Schlag, T. Gutman-Solo, Y. Wu, B. Neyshabur, G. Gur-Ari, and V. Misra. Solving
quantitative reasoning problems with language models. In A. H. Oh, A. Agarwal, D. Belgrave,
and K. Cho, editors, Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=IFXTZERXdM7.
C. Li, W. Wang, J. Hu, Y. Wei, N. Zheng, H. Hu, Z. Zhang, and H. Peng. Common 7b language
models already possess strong math capabilities. arXiv preprint arXiv:2403.04706, 2024.
M. Li, Y. Zhang, Z. Li, J. Chen, L. Chen, N. Cheng, J. Wang, T. Zhou, and J. Xiao. From quantity
to quality: Boosting llm performance with self-guided data selection for instruction tuning.
[CoRR, abs/2308.12032, 2023. URL https://doi.org/10.48550/arXiv.2308.12032.](https://doi.org/10.48550/arXiv.2308.12032)
P. Lu, B. Peng, H. Cheng, M. Galley, K.-W. Chang, Y. N. Wu, S.-C. Zhu, and J. Gao. Chameleon:
Plug-and-play compositional reasoning with large language models. In Thirty-seventh
Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=HtqnVSCj3q.
H. Luo, Q. Sun, C. Xu, P. Zhao, J. Lou, C. Tao, X. Geng, Q. Lin, S. Chen, and D. Zhang.
Wizardmath: Empowering mathematical reasoning for large language models via reinforced
evol-instruct. CoRR, abs/2308.09583, 2023. URL https://doi.org/10.48550/arXiv.2308.09583.
X. Ni, Y. Gong, Z. Gou, Y. Shen, Y. Yang, N. Duan, and W. Chen. Exploring the mystery of
influential data for mathematical reasoning. arXiv preprint arXiv:2404.01067, 2024.
K. Paster, M. D. Santos, Z. Azerbayev, and J. Ba. Openwebmath: An open dataset of high-quality
mathematical web text. In The Twelfth International Conference on Learning Representations,
[2024. URL https://openreview.net/forum?id=jKHmjlpViu.](https://openreview.net/forum?id=jKHmjlpViu)
A. Peng, M. Wu, J. Allard, L. Kilpatrick, and S. Heidel. Gpt-3.5 turbo fine-tuning and api updates.
2023.
R. Rafailov, A. Sharma, E. Mitchell, C. D. Manning, S. Ermon, and C. Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information
Processing Systems, 36, 2024.
D. Saxton, E. Grefenstette, F. Hill, and P. Kohli. Analysing mathematical reasoning abilities of
neural models. arXiv preprint arXiv:1904.01557, 2019.
T. L. Scao, A. Fan, C. Akiki, E. Pavlick, S. Ilic, D. Hesslow, R. Castagné, A. S. Luccioni, F. Yvon,
M. Gallé, J. Tow, A. M. Rush, S. Biderman, A. Webson, P. S. Ammanamanchi, T. Wang, B. Sagot,
-----
N. Muennighoff, A. V. del Moral, O. Ruwase, R. Bawden, S. Bekman, A. McMillan-Major,
I. Beltagy, H. Nguyen, L. Saulnier, S. Tan, P. O. Suarez, V. Sanh, H. Laurençon, Y. Jernite,
J. Launay, M. Mitchell, C. Raffel, A. Gokaslan, A. Simhi, A. Soroa, A. F. Aji, A. Alfassy,
A. Rogers, A. K. Nitzav, C. Xu, C. Mou, C. Emezue, C. Klamm, C. Leong, D. van Strien, D. I.
Adelani, and et al. Bloom: A 176b-parameter open-access multilingual language model. CoRR,
[abs/2211.05100, 2022. URL https://doi.org/10.48550/arXiv.2211.05100.](https://doi.org/10.48550/arXiv.2211.05100)
O. Sener and S. Savarese. Active learning for convolutional neural networks: A core-set approach.
arXiv preprint arXiv:1708.00489, 2017.
Z. Shao, P. Wang, Q. Zhu, R. Xu, J. Song, M. Zhang, Y. Li, Y. Wu, and D. Guo. Deepseekmath:
Pushing the limits of mathematical reasoning in open language models. arXiv preprint
arXiv:2402.03300, 2024.
T. Shen, R. Jin, Y. Huang, C. Liu, W. Dong, Z. Guo, X. Wu, Y. Liu, and D. Xiong. Large language
model alignment: A survey. arXiv preprint arXiv:2309.15025, 2023.
P. Soviany, R. T. Ionescu, P. Rota, and N. Sebe. Curriculum learning: A survey. International
Journal of Computer Vision, 130(6):1526–1565, 2022.
J. Sun, C. Zheng, E. Xie, Z. Liu, R. Chu, J. Qiu, J. Xu, M. Ding, H. Li, M. Geng, et al. A survey of
reasoning with foundation models. arXiv preprint arXiv:2312.11562, 2023.
R. Taori, I. Gulrajani, T. Zhang, Y. Dubois, X. Li, C. Guestrin, P. Liang, and T. B. Hashimoto.
Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023.
S. Toshniwal, I. Moshkov, S. Narenthiran, D. Gitman, F. Jia, and I. Gitman. Openmathinstruct-1:
A 1.8 million math instruction tuning dataset. arXiv preprint arXiv:2402.10176, 2024.
H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra,
P. Bhargava, S. Bhosale, D. Bikel, L. Blecher, C. Canton-Ferrer, M. Chen, G. Cucurull, D. Esiobu,
J. Fernandes, J. Fu, W. Fu, B. Fuller, C. Gao, V. Goswami, N. Goyal, A. Hartshorn, S. Hosseini,
R. Hou, H. Inan, M. Kardas, V. Kerkez, M. Khabsa, I. Kloumann, A. Korenev, P. S. Koura, M.-A.
Lachaux, T. Lavril, J. Lee, D. Liskovich, Y. Lu, Y. Mao, X. Martinet, T. Mihaylov, P. Mishra,
I. Molybog, Y. Nie, A. Poulton, J. Reizenstein, R. Rungta, K. Saladi, A. Schelten, R. Silva, E. M.
Smith, R. Subramanian, X. E. Tan, B. Tang, R. Taylor, A. Williams, J. X. Kuan, P. Xu, Z. Yan,
I. Zarov, Y. Zhang, A. Fan, M. Kambadur, S. Narang, A. Rodriguez, R. Stojnic, S. Edunov, and
T. Scialom. Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288,
[2023. URL https://doi.org/10.48550/arXiv.2307.09288.](https://doi.org/10.48550/arXiv.2307.09288)
X. Wang, J. Wei, D. Schuurmans, Q. Le, E. Chi, S. Narang, A. Chowdhery, and D. Zhou.
Self-consistency improves chain of thought reasoning in language models. arXiv preprint
arXiv:2203.11171, 2022.
X. Wang, Z. Hu, P. Lu, Y. Zhu, J. Zhang, S. Subramaniam, A. R. Loomba, S. Zhang, Y. Sun,
and W. Wang. Scibench: Evaluating college-level scientific problem-solving abilities of large
[language models, 2024. URL https://openreview.net/forum?id=u6jbcaCHqO.](https://openreview.net/forum?id=u6jbcaCHqO)
J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou,
D. Metzler, et al. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682,
2022a.
-----
J. Wei, X. Wang, D. Schuurmans, M. Bosma, F. Xia, E. Chi, Q. V. Le, D. Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural
information processing systems, 35:24824–24837, 2022b.
T. Wei, J. Luan, W. Liu, S. Dong, and B. Wang. Cmath: can your language model pass chinese
elementary school math test? arXiv preprint arXiv:2306.16636, 2023a.
T. Wei, L. Zhao, L. Zhang, B. Zhu, L. Wang, H. Yang, B. Li, C. Cheng, W. Lü, R. Hu, et al.
Skywork: A more open bilingual foundation model. arXiv preprint arXiv:2310.19341, 2023b.
T. Wei, B. Zhu, L. Zhao, C. Cheng, B. Li, W. Lü, P. Cheng, J. Zhang, X. Zhang, L. Zeng, et al.
Skywork-moe: A deep dive into training techniques for mixture-of-experts language models.
arXiv preprint arXiv:2406.06563, 2024.
Y. Weng, M. Zhu, F. Xia, B. Li, S. He, S. Liu, B. Sun, K. Liu, and J. Zhao. Large language models
are better reasoners with self-verification. arXiv preprint arXiv:2212.09561, 2022.
Z. Wu, L. Qiu, A. Ross, E. Akyürek, B. Chen, B. Wang, N. Kim, J. Andreas, and Y. Kim. Reasoning or reciting? Exploring the capabilities and limitations of language models through
counterfactual tasks. CoRR, abs/2307.02477, 2023. URL https://doi.org/10.48550/arXiv.2307.02477.
C. Xu, Q. Sun, K. Zheng, X. Geng, P. Zhao, J. Feng, C. Tao, and D. Jiang. Wizardlm: Empowering
large language models to follow complex instructions. CoRR, abs/2304.12244, 2023. URL
https://doi.org/10.48550/arXiv.2304.12244.
Y. Xu, X. Liu, X. Liu, Z. Hou, Y. Li, X. Zhang, Z. Wang, A. Zeng, Z. Du, W. Zhao, et al. Chatglm-math: Improving math problem-solving in large language models with a self-critique pipeline.
arXiv preprint arXiv:2404.02893, 2024.
F. Xue, Z. Zheng, Y. Fu, J. Ni, Z. Zheng, W. Zhou, and Y. You. Openmoe: An early effort on open
mixture-of-experts language models. arXiv preprint arXiv:2402.01739, 2024.
A. Yang, B. Xiao, B. Wang, B. Zhang, C. Bian, C. Yin, C. Lv, D. Pan, D. Wang, D. Yan, et al.
Baichuan 2: Open large-scale language models. arXiv preprint arXiv:2309.10305, 2023.
H. Ying, S. Zhang, L. Li, Z. Zhou, Y. Shao, Z. Fei, Y. Ma, J. Hong, K. Liu, Z. Wang, et al.
Internlm-math: Open math large language models toward verifiable reasoning. arXiv preprint
arXiv:2402.06332, 2024.
L. Yu, W. Jiang, H. Shi, J. YU, Z. Liu, Y. Zhang, J. Kwok, Z. Li, A. Weller, and W. Liu. Metamath:
Bootstrap your own mathematical questions for large language models. In The Twelfth
International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=N8N0hgNDRt.
Z. Yuan, H. Yuan, C. Tan, W. Wang, and S. Huang. How well do large language models perform
in arithmetic tasks? CoRR, abs/2304.02015, 2023. URL https://doi.org/10.48550/arXiv.2304.02015.
X. Yue, X. Qu, G. Zhang, Y. Fu, W. Huang, H. Sun, Y. Su, and W. Chen. Mammoth: Building
math generalist models through hybrid instruction tuning. CoRR, abs/2309.05653, 2023. URL
https://doi.org/10.48550/arXiv.2309.05653.
B. Zhang, Z. Liu, C. Cherry, and O. Firat. When scaling meets LLM finetuning: The effect of
data, model and finetuning method. In The Twelfth International Conference on Learning
[Representations, 2024. URL https://openreview.net/forum?id=5HCnKDeTws.](https://openreview.net/forum?id=5HCnKDeTws)
C. Zheng, Z. Liu, E. Xie, Z. Li, and Y. Li. Progressive-hint prompting improves reasoning in
large language models. arXiv preprint arXiv:2304.09797, 2023.
W. Zhong, R. Cui, Y. Guo, Y. Liang, S. Lu, Y. Wang, A. Saied, W. Chen, and N. Duan. Agieval: A
human-centric benchmark for evaluating foundation models. arXiv preprint arXiv:2304.06364,
2023.
C. Zhou, P. Liu, P. Xu, S. Iyer, J. Sun, Y. Mao, X. Ma, A. Efrat, P. Yu, L. Yu, S. Zhang,
G. Ghosh, M. Lewis, L. Zettlemoyer, and O. Levy. Lima: Less is more for alignment. CoRR,
[abs/2305.11206, 2023. URL https://doi.org/10.48550/arXiv.2305.11206.](https://doi.org/10.48550/arXiv.2305.11206)
##### A. Illustrations of Three Different Data Augmentation Methods
We present three specific examples using the corresponding augmentation styles introduced in
Section 3.1. We use the same query to investigate the differences in the response. Overall, the
differences among these three methods are nuanced, but combining them is crucial to enhance
the diversity of the Skywork-MathQA dataset (ref. Section 4.3.2). In Figure 10, the MetaMathQA-style data is answer-focused and maintains a coherent solving process. Figure 11 illustrates the
Evol-style data, which provides a more detailed solution and includes extensive text to describe
the problem-solving process. Figure 12 presents the Xwin-style response with a more detailed
calculation process.
Example A.1: MetaMathQA-Style Query-Response Pair
**Query:** If $x$ is a positive number such that
$$\sqrt{8x} \cdot \sqrt{10x} \cdot \sqrt{3x} \cdot \sqrt{15x} = 15,$$
find all possible values for $x$.
**Response:** Let's think step by step.
We start with the equation given: $\sqrt{8x} \cdot \sqrt{10x} \cdot \sqrt{3x} \cdot \sqrt{15x} = 15$.
First, simplify the left-hand side using the property of square roots: $\sqrt{8x} \cdot \sqrt{10x} \cdot \sqrt{3x} \cdot \sqrt{15x} = \sqrt{(8x) \cdot (10x) \cdot (3x) \cdot (15x)}$.
Next, multiply the expressions under the square root: $(8x) \cdot (10x) \cdot (3x) \cdot (15x) = 8 \cdot 10 \cdot 3 \cdot 15 \cdot x^4$.
We calculate the numerical product: $8 \cdot 10 \cdot 3 \cdot 15 = 8 \cdot 10 \cdot 45 = 8 \cdot 450 = 3600$.
Thus, the expression simplifies to: $\sqrt{3600x^4} = 15$.
Now, rewrite the expression inside the square root: $\sqrt{3600x^4} = \sqrt{3600} \cdot \sqrt{x^4}$.
Calculate the square roots: $\sqrt{3600} = 60$ and $\sqrt{x^4} = x^2$.
Thus, the equation becomes: $60x^2 = 15$.
Solve for $x^2$: $x^2 = \frac{15}{60} = \frac{1}{4}$.
Take the positive square root of both sides (since $x$ is positive): $x = \sqrt{\frac{1}{4}} = \frac{1}{2}$.
The answer is $\frac{1}{2}$.
Figure 10 | An example of data formatted in the MetaMathQA-style.
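As a quick sanity check on the worked example above (our own verification, not part of the Skywork-MathQA data or pipeline), the following minimal sympy snippet confirms that the product of radicals collapses to $60x^2$ and that the unique positive solution is $x = \frac{1}{2}$:

```python
# Symbolically verify the worked example: sqrt(8x)*sqrt(10x)*sqrt(3x)*sqrt(15x) = 15.
import sympy as sp

x = sp.symbols("x", positive=True)
lhs = sp.sqrt(8 * x) * sp.sqrt(10 * x) * sp.sqrt(3 * x) * sp.sqrt(15 * x)

print(sp.simplify(lhs))                          # 60*x**2
print(sp.solve(sp.Eq(sp.simplify(lhs), 15), x))  # [1/2]
```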
Example A.2: Evol-Style Query-Response Pair
**Query:** Suppose $x$ represents a positive quantity, and it satisfies the following mathematical relationship:
$$\sqrt{8x} \times \sqrt{10x} \times \sqrt{3x} \times \sqrt{15x} = 15.$$
Determine every viable value of $x$ that conforms to this equation.
**Response:** We have the equation
$$\sqrt{8x} \times \sqrt{10x} \times \sqrt{3x} \times \sqrt{15x} = 15.$$
We can simplify the left-hand side by multiplying the square roots together. Since the product of square roots is equal to the square root of the product of the values under the square roots (as long as the quantities under the square roots are positive, which they are since $x$ is positive), we have: $\sqrt{8x \cdot 10x \cdot 3x \cdot 15x} = 15$. We can then multiply the numbers under the square root together: $\sqrt{8 \cdot 10 \cdot 3 \cdot 15 \cdot x^4} = 15$. Next, we simplify the multiplication under the square root: $\sqrt{8 \cdot 10 \cdot 3 \cdot 15 \cdot x^4} = \sqrt{3600 \cdot x^4}$. Notice that $3600 = 60^2$, so we can simplify the square root as follows:
$$\sqrt{3600 \cdot x^4} = \sqrt{60^2 \cdot x^4}.$$
Using the property of the square root that $\sqrt{a^2} = a$ for any positive $a$, we get:
$$\sqrt{60^2 \cdot x^4} = 60x^2.$$
Now we can equate the simplified square root expression to the right-hand side of the original equation: $60x^2 = 15$. To solve for $x^2$, we divide both sides of the equation by 60: $x^2 = \frac{15}{60}$. Simplify the fraction: $x^2 = \frac{1}{4}$. To solve for $x$, we take the square root of both sides, remembering that we're looking for the positive value of $x$ (since $x$ represents a positive quantity): $x = \sqrt{\frac{1}{4}}$. The square root of $\frac{1}{4}$ is $\frac{1}{2}$, so we have: $x = \frac{1}{2}$.
The answer is $\frac{1}{2}$.
Figure 11 | An example of data formatted in the Evol-style.
Example A.3: Xwin-Style Query-Response Pair
**Query:** If $x$ is a positive number such that
$$\sqrt{8x} \cdot \sqrt{10x} \cdot \sqrt{3x} \cdot \sqrt{15x} = 15,$$
find all possible values for $x$.
**Response:** Given the equation: $\sqrt{8x} \cdot \sqrt{10x} \cdot \sqrt{3x} \cdot \sqrt{15x} = 15$.
First, simplify the expression on the left side. We know that $\sqrt{a} \cdot \sqrt{b} = \sqrt{ab}$, so we can combine the square roots: $\sqrt{8x \cdot 10x \cdot 3x \cdot 15x}$.
Next, calculate the product inside the square root: $8x \cdot 10x \cdot 3x \cdot 15x = 8 \cdot 10 \cdot 3 \cdot 15 \cdot x^4$.
Simplify the constants: $8 \cdot 10 = 80$, $80 \cdot 3 = 240$, $240 \cdot 15 = 3600$. So we have: $8x \cdot 10x \cdot 3x \cdot 15x = 3600x^4$.
Therefore, the equation becomes: $\sqrt{3600x^4} = 15$. Since $\sqrt{3600x^4} = \sqrt{3600} \cdot \sqrt{x^4}$ and $\sqrt{x^4} = x^2$, we get: $\sqrt{3600} \cdot x^2 = 15$.
Simplify $\sqrt{3600}$: $\sqrt{3600} = 60$.
Isolate $x^2$ by dividing both sides by $\sqrt{3600}$: $x^2 = \frac{15}{\sqrt{3600}} = \frac{15}{60} = \frac{1}{4}$.
Finally, solve for $x$: $x = \sqrt{\frac{1}{4}}$.
Upon checking if further simplification is possible, we find: $x = \frac{1}{2}$.
The answer is $\frac{1}{2}$.
Figure 12 | An example of data formatted in the Xwin-style.
##### B. Case Studies with Correct Answers Presented in Incorrect Formats
- Different formats of the final answer but with the same value.
**Ground Truth: 0.24**
**Response: . . . The answer is 24%**
**Ground Truth:** √2, √3
**Response: . . . The answer is** √3, √2
**Ground Truth: $\frac{2+\sqrt{2}}{4}$**
**Response: . . . The answer is $\frac{1}{2} + \frac{\sqrt{2}}{4}$**
**Ground Truth: \\text{odd}**
**Response: . . . The answer is \"odd\".**
- Unexpected format for presenting the final answer, such as rephrasing the prefix "\nThe
answer is " or including extra words before, in, or after "\nThe answer is ".
**Ground Truth: 1, 2**
**Response: . . . The correct answer is 1, 2**
**Ground Truth: 19**
**Response: . . . The correct answer is 19, but this is based on an assumption that ...**
**Ground Truth: 2**
**Response: . . . The value of x is 2**
**Ground Truth: 24.01**
**Response: . . . The answer is $x = \frac{2401}{100} = 24.01$**
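These mismatches motivate normalizing answers before exact matching. The following is a minimal, hypothetical sketch (our own illustration, not the grading criteria used in this work) of how equivalence could be checked for the simple numeric, percentage, and comma-separated cases above; symbolic answers such as $\sqrt{2}, \sqrt{3}$ would additionally require a computer-algebra comparison.

```python
# Hypothetical answer-equivalence checker for the simple cases above; function
# names and heuristics are our own, not the paper's grading implementation.
from fractions import Fraction
import re


def extract_answer(response: str) -> str:
    """Take the text after the last 'answer is' prefix, tolerating rephrasings."""
    matches = re.findall(r"answer is\s*([^\n]+)", response, flags=re.IGNORECASE)
    return (matches[-1] if matches else response).strip().strip(' ."\'')


def to_number(text: str):
    """Parse decimals, percentages, and simple fractions; return None otherwise."""
    text = text.strip().strip(' ."\'')
    try:
        if text.endswith("%"):
            return Fraction(text[:-1]) / 100
        return Fraction(text)
    except (ValueError, ZeroDivisionError):
        return None


def is_equivalent(ground_truth: str, response: str) -> bool:
    predicted = extract_answer(response)
    gt_num, pred_num = to_number(ground_truth), to_number(predicted)
    if gt_num is not None and pred_num is not None:
        return gt_num == pred_num  # e.g. "0.24" vs "24%"

    # Order-insensitive comparison for comma-separated answers, e.g. "1, 2" vs "2, 1".
    def normalize(s: str):
        return sorted(part.strip().lower() for part in s.split(","))

    return normalize(ground_truth) == normalize(predicted)


print(is_equivalent("0.24", "... The answer is 24%"))           # True
print(is_equivalent("1, 2", "... The correct answer is 1, 2"))  # True
```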
##### C. Performance Analysis in Stage 2 of the Data Synthesis pipeline
Table 11 illustrates the relationship between data size in stage 2 of the data synthesis pipeline
and the model performance. As we generate more hard synthetic problems in stage 2 of our
data synthesis pipeline, the fine-tuned LLM models show gradual improvement in handling
hard problems (Level 3-5) on the MATH benchmark.
| Base Model | Dataset Size | Level-1 (%) | Level-2 (%) | Level-3 (%) | Level-4 (%) | Level-5 (%) |
|---|---|---|---|---|---|---|
| LLaMA2-7B | 7.5K | 17.85 | 8.39 | 4.77 | 3.05 | 0.91 |
| Mistral-7B | 7.5K | 37.99 | 25.17 | 15.12 | 8.48 | 2.49 |
| DeepSeekMath-7B | 7.5K | 64.07 | 46.76 | 37.84 | 24.63 | 10.73 |
| LLaMA2-7B | 1.0M | 75.29 | 55.03 | 44.56 | 31.22 | 13.75 |
| Mistral-7B | 1.0M | 80.55 | 63.31 | 53.05 | 38.47 | 19.18 |
| DeepSeekMath-7B | 1.0M | 79.18 | 62.30 | 54.82 | 40.44 | 19.71 |
| LLaMA2-7B | 2.1M | 78.03 | 60.29 | 48.19 | 35.09 | 19.56 |
| Mistral-7B | 2.1M | 80.78 | 66.33 | 55.53 | 41.52 | 21.45 |
| DeepSeekMath-7B | 2.1M | 80.78 | 65.21 | 58.00 | 41.60 | 21.83 |
| LLaMA2-7B | 2.1M + 0.1M (hard) | 78.03 | 62.19 | 48.89 | 36.66 | 17.98 |
| Mistral-7B | 2.1M + 0.1M (hard) | 81.01 | 67.45 | 58.44 | 45.22 | 21.53 |
| DeepSeekMath-7B | 2.1M + 0.1M (hard) | 84.90 | 67.45 | 57.91 | 44.07 | 21.22 |
| LLaMA2-7B | 2.1M + 0.2M (hard) | 78.95 | 61.41 | 51.11 | 39.29 | 18.66 |
| Mistral-7B | 2.1M + 0.2M (hard) | 83.52 | 68.90 | 59.50 | 46.05 | 22.21 |
| DeepSeekMath-7B | 2.1M + 0.2M (hard) | 82.84 | 68.46 | 57.91 | 42.50 | 23.41 |
| LLaMA2-7B | 2.1M + 0.4M (hard) | 78.03 | 62.42 | 52.87 | 37.48 | 18.73 |
| Mistral-7B | 2.1M + 0.4M (hard) | 83.52 | 67.56 | 60.65 | 44.89 | 25.08 |
| DeepSeekMath-7B | 2.1M + 0.4M (hard) | 82.84 | 67.23 | 58.71 | 42.01 | 21.30 |
| LLaMA2-7B | 7.5K + 0.4M (hard) | 63.16 | 43.96 | 34.39 | 24.46 | 10.20 |
| Mistral-7B | 7.5K + 0.4M (hard) | 71.62 | 57.27 | 48.72 | 34.60 | 16.99 |
| DeepSeekMath-7B | 7.5K + 0.4M (hard) | 81.01 | 61.97 | 51.90 | 37.07 | 18.05 |
| GPT-4-Turbo | - | 82.84 | 73.38 | 65.34 | 52.88 | 34.06 |
Table 11 | Difficulty level-wise performance of different base LLMs in Skywork-Math models
and various sizes of SFT data on MATH. GPT-4-Turbo is evaluated using our designed grading
criteria with 4-shot COT prompting.
##### D. Performance Analysis on MATH across Subjects
Table 12 presents the accuracy results on the MATH benchmark across various math subjects.
The Skywork-Math models excel in the "Algebra" category as we scale up the synthetic SFT data.
However, they struggle in some other math subjects, such as "Geometry", where the understanding
of geometric concepts may be challenging for language models.
| Base Model | Dataset Size | Algebra | Counting & Probability | Geometry | Intermediate Algebra | Number Theory | Prealgebra | Precalculus |
|---|---|---|---|---|---|---|---|---|
| LLaMA2-7B | 7.5K | 6.66 | 4.01 | 3.34 | 1.33 | 3.89 | 11.71 | 1.10 |
| Mistral-7B | 7.5K | 21.65 | 9.07 | 9.19 | 3.77 | 8.89 | 28.01 | 3.85 |
| DeepSeekMath-7B | 7.5K | 52.15 | 21.10 | 19.42 | 11.07 | 26.67 | 50.63 | 9.71 |
| LLaMA2-7B | 1.0M | 55.69 | 32.28 | 30.69 | 16.06 | 34.26 | 55.80 | 18.50 |
| Mistral-7B | 1.0M | 65.37 | 34.60 | 35.49 | 20.49 | 44.44 | 64.87 | 23.81 |
| DeepSeekMath-7B | 1.0M | 68.16 | 35.02 | 35.91 | 22.81 | 41.30 | 62.34 | 26.74 |
| LLaMA2-7B | 2.1M | 62.09 | 36.50 | 33.82 | 18.38 | 41.85 | 59.93 | 21.79 |
| Mistral-7B | 2.1M | 66.72 | 40.93 | 38.00 | 23.48 | 43.70 | 68.08 | 26.19 |
| DeepSeekMath-7B | 2.1M | 69.92 | 41.14 | 36.33 | 25.03 | 45.19 | 65.56 | 29.30 |
| LLaMA2-7B | 2.5M | 64.62 | 37.13 | 35.49 | 21.26 | 40.56 | 63.72 | 25.64 |
| Mistral-7B | 2.5M | 70.85 | 43.25 | 41.75 | 24.58 | 49.44 | 70.72 | 30.77 |
| DeepSeekMath-7B | 2.5M | 69.25 | 38.40 | 38.00 | 24.70 | 43.52 | 68.77 | 30.22 |
Table 12 | MATH accuracies across subjects with different SFT data sizes.
##### E. Effect of Model Maximum Length in Two Stages of the Data Synthesis Pipeline
Table 13 presents the performance of the three 7B base models in the Skywork-Math model series
with model maximum lengths of 512 and 2048 in stages 1 and 2 of the data synthesis pipeline.
**Base Model** **Data Synthesis Pipeline (Size) Model Max Length MATH(%) GSM8k(%)**
LLaMA2-7B Stage 1 (2.1M) 512 42.36 70.81
LLaMA2-7B Stage 1 (2.1M) 2048 45.56 73.62
Mistral-7B Stage 1 (2.1M) 512 47.14 81.05
Mistral-7B Stage 1 (2.1M) 2048 49.1 83.25
DeepSeekMath-7B Stage 1 (2.1M) 512 48.24 79.61
DeepSeekMath-7B Stage 1 (2.1M) 2048 48.64 79.30
LLaMA2-7B Stage 2 (2.5M) 512 44.06 67.85
LLaMA2-7B Stage 2 (2.5M) 2048 47.66 72.86
Mistral-7B Stage 2 (2.5M) 512 50.56 82.41
Mistral-7B Stage 2 (2.5M) 2048 51.22 83.93
DeepSeekMath-7B Stage 2 (2.5M) 512 48.28 80.52
DeepSeekMath-7B Stage 2 (2.5M) 2048 49.88 81.50
Table 13 | Model performance with different model maximum lengths.
##### F. More Experiments with Base LLM Models after SFT on the Skywork-Math Dataset
As shown in Table 14, we conduct experiments with two additional pre-trained base LLM models.
The results indicate that after SFT on the Skywork-Math dataset, both base models exhibit
consistent performance improvements.
**Base Model Data Synthesis Pipeline (Size) GSM8K(%) MATH(%)**
LLaMA3-8B - 79.60 30.00
LLaMA3-8B Stage 1 (2.1M) 80.82 50.34
LLaMA3-8B Stage 2 (2.5M) 75.97 50.30
Llemma-7B - 36.40 18.00
Llemma-7B Stage 1 (2.1M) 65.43 40.34
Llemma-7B Stage 2 (2.5M) 66.03 40.08
Table 14 | Performance on LLaMA3-8B AI@Meta (2024) and Llemma-7B Azerbayev et al. (2023)
base LLM models. We fine-tune the corresponding base LLM models using the Skywork-MathQA dataset in stage 1 and stage 2 of the data synthesis pipeline.
| [
"Tianwen, Wei",
"Liang, Zeng",
"Yang, Liu",
"Liangjun, Zhong",
"Liang, Zhao",
"Han, Fang",
"Shuicheng, Yan",
"Liu, Yang",
"Jujie, He",
"Cheng, Cheng",
"Rui, Hu",
"Yahui, Zhou"
] | 2024-07-17T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2407.08348 | https://arxiv.org/abs/2407.08348 | null |
Small Language Models are Equation Reasoners | Chain-of-Thought (CoT) reasoning has enabled Large Language Model (LLM) to achieve remarkable performance in various NLP tasks, including arithmetic problem-solving. However, this success does not generalize to small language model (sLM) like T5, due to their limited capacity and absence of emergent abilities associated with larger models. Recent works to enhance sLM through knowledge distillation have yielded some improvements but still face significant limitations, particularly high ambiguity from the variability in natural language expressions and substantial computational costs. In this paper, we investigate why sLM perform poorly on arithmetic reasoning tasks and hypothesize that natural language format variability introduces high ambiguity for these smaller models. Based on this hypothesis, we conduct experiments with equation-only format, which is a reasoning format that unifies arithmetic reasoning previously expressed in natural language formats into mathematical equations. Experiment results demonstrate that equation-only format effectively boosts the arithmetic reasoning abilities of sLM, especially in very small models like T5-Tiny. | This paper investigates why sLM perform poorly on arithmetic reasoning tasks and hypothesizes that natural language format variability introduces high ambiguity for these smaller models, and conducts experiments with equation-only format, which is a reasoning format that unifies arithmetic reasoning previously expressed in natural language formats into mathematical equations. | [
"Bumjun, Kim",
"Kunha, Lee",
"Juyeon, Kim",
"Sangam, Lee"
] | 2024-09-18T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.12393 | https://arxiv.org/abs/2409.12393 | https://www.semanticscholar.org/paper/244b52d4d0901cca5fd105ec0b298d4bd0cec299 |
|
Smart Vision-Language Reasoners | In this article, we investigate vision-language models (VLM) as reasoners. The ability to form abstractions underlies mathematical reasoning, problem-solving, and other Math AI tasks. Several formalisms have been given to these underlying abstractions and skills utilized by humans and intelligent systems for reasoning. Furthermore, human reasoning is inherently multimodal, and as such, we focus our investigations on multimodal AI. In this article, we employ the abstractions given in the SMART task (Simple Multimodal Algorithmic Reasoning Task) introduced in \cite{cherian2022deep} as meta-reasoning and problem-solving skills along eight axes: math, counting, path, measure, logic, spatial, and pattern. We investigate the ability of vision-language models to reason along these axes and seek avenues of improvement. Including composite representations with vision-language cross-attention enabled learning multimodal representations adaptively from fused frozen pretrained backbones for better visual grounding. Furthermore, proper hyperparameter and other training choices led to strong improvements (up to $48\%$ gain in accuracy) on the SMART task, further underscoring the power of deep multimodal learning. The smartest VLM, which includes a novel QF multimodal layer, improves upon the best previous baselines in every one of the eight fundamental reasoning skills. End-to-end code is available at https://github.com/smarter-vlm/smarter. | This article employs the abstractions given in the SMART task (Simple Multimodal Algorithmic Reasoning Task) introduced incherian2022deep as meta-reasoning and problem-solving skills along eight axes: math, counting, path, measure, logic, spatial, and pattern. | # Smart Vision-Language Reasoners
**Denisa Roberts** [1] **Lucas Roberts** [2]
**Abstract**
In this article, we investigate vision-language
models (VLM) as reasoners. The ability to form
abstractions underlies mathematical reasoning,
problem-solving, and other Math AI tasks. Several formalisms have been given to these underlying abstractions and skills utilized by humans
and intelligent systems for reasoning. Furthermore, human reasoning is inherently multimodal,
and as such, we focus our investigations on multimodal AI. In this article, we employ the abstractions given in the SMART task (Simple Multimodal Algorithmic Reasoning Task) introduced
in (Cherian et al., 2022) as meta-reasoning and
problem-solving skills along eight axes: math,
counting, path, measure, logic, spatial, and pattern. We investigate the ability of vision-language
models to reason along these axes and seek avenues of improvement. Including composite representations with vision-language cross-attention
enabled learning multimodal representations adaptively from fused frozen pretrained backbones
for better visual grounding. Furthermore, proper
hyperparameter and other training choices led
to strong improvements (up to 48% gain in accuracy) on the SMART task, further underscoring the power of deep multimodal learning. The
smartest VLM, which includes a novel QF multimodal layer, improves upon the best previous
baselines in every one of the eight fundamental
reasoning skills. End-to-end code is available at
[github.com/smarter-vlm/smarter.](https://github.com/smarter-vlm/smarter)
**1. Introduction**
Human intelligence is oftentimes associated with the ability
to operate on mathematical abstractions. In (Chollet, 2019)
the author conducts an in depth discussion and formulates
1Department of Computer Science, New York University, New
York, USA. [2]Yext Inc., New York, USA. Correspondence to:
Denisa Roberts <[email protected]>.
_The first AI for MATH Workshop at the 41st International Confer-_
_ence on Machine Learning, Vienna, Austria. Copyright 2024 by_
the author(s).
_Figure 1. The smarterVLM reasoner architecture (right) and the_
novel QF layer (left). Vision (DinoV2+SigLIP) and language
(SigLIP) backbones are frozen. All other layers are trained from
scratch.
a formal definition of intelligence based on algorithmic information theory. Several meta characteristics of intelligent
systems are listed as scope, generalization difficulty, priors
and experience. On a different but related axis, (Didolkar
et al., 2024) speaks of metacognitive capabilities of large
language models, abilities that underlie all problem solving,
including math problems. In a related work in the multimodal domain (Cherian et al., 2022), a Simple Multimodal
Algorithmic Reasoning Task (SMART) is introduced with
visual-linguistic puzzles designed for children in the 6-8
age group (the US Kangaroo Olympiad style). In this work,
an explicit categorization of underlying skills utilized by
humans in problem solving are labeled and tallied as they
get employed in solving puzzles as measure, path, pattern,
logic, math, algebra, and spatial skills. Furthermore, reasoning must be multimodal because humans have multiple
senses whose inputs are amalgamated to reason at higher
abstractions. Better abstractions are akin to better mental
representations. Deep neural networks excel at learning
_Figure 2. Math Question: What do we need to put in the square to get a correct diagram? Answer Options: A: -3; B: /9; C: x6; D: x2; E:_
2; Path Question with Sequence Answer: You have to block some locations in the maze so that the feline cannot reach the bird. Which of
_the following options to block will fail? Answer Options: A: 1, 2, and 3; B: 4; C: 5, 6, and 7; D: 8 and 9 ; E: 10, 11, and 12. Counting_
**Question: The entire pie is divided among several children. Each child receives a piece of pie, and each piece of pie looks identical. The**
_maximum possible number of children there is: Answer Options: A: 7; B: 2; C: 1; D: 4; E: 3. Algebra Question: The entire pie is_
_divided among several children. Each child receives a piece of pie, and each piece of pie looks identical. The maximum possible number_
_of children there is: Answer Options: A: 5; B: 4; C: 2; D: 0; E: 6. Measure Question:A student had a few canes with a height of 1 cm_
_and a length of 5 cm. Using the canes, she built the arrangement illustrated. What is the width of the arrangement? Answer Options: A:_
20; B: 30; C: 15; D: 5; E: 35. Spatial Question: Cristina made a setup using some green blocks and 94 white blocks. How many of these
_white blocks are not visible in the figure? Answer Options: A: 28; B: 61; C: 64; D: 90; E: 79. Logic Question: Emily has 7 toy items: a_
_remote, a hair brush, a truck, an eraser, a rubber duck, carrots, and a toe ring. She keeps each toy at a different row of the shelf. The_
_carrots lower to toe ring. Remote lower to truck and toe ring higher to truck. Toe ring higher to rubber duck. She keeps carrots as shown._
_On which row can the rubber duck not be placed? Answer Options: A: 4; B: 3; C: 7; D: 5; E: 6. Pattern Question: Which picture on the_
_right matches with the left, if we invert the colors? Answer Options: A; B; C; D; E._
(artificial) representations (Bengio et al., 2013).
We base our investigations in this article on a few conjec**tures with respect to mathematical reasoning and general**
**problem-solving:**
- Intelligence is related to multimodal reasoning. If
a person is deaf and cannot hear more than 50% of
what is being said, the speech modality input is supplemented with reading faces and other visual aids (vision
modalities), captions (text), as well as other modalities
(all the other senses, as well as enhanced reasoning and
computation abilities). Therefore, it is worth striving
to improve multimodal deep learning architectures and
their reasoning abilities.
- Training leads to learning and improved reasoning.
Math and physics Olympiad expert competitors practice more than non-experts in environments that value
the pursuit. Taking a classic example, the Polgar
sisters-three chess grandmasters-were trained to excel at chess. Similarly in the musical domain, Mozart
and Beethoven were trained from a young age to excel
in music. In fact, one interpretation of evolution is
the act of training and learning. We learned to grow
better brains out of necessity. Therefore, training neural networks specifically to enhance reasoning makes
evolutionary sense.
- Intelligence is related to better abstractions and those
are related to better representations. Expert chess
players, thespians, martial artists, mathematicians, and
coders all develop fine-grained relevant mental representations so they can better reason and imagine new
ideas rapidly. Therefore, improving the representations
derived with deep learning architectures is worth pursuing (better image, text etc. representations as well as
their cross-play).
- If we make neural networks better at reasoning (we
include here a few types of reasoning such as creative
problem-solving in math, physics, logic and coding
algorithms, puzzles and IQ tests, learning, planning
and decision making), these skills may be transferable
to science, strategy, medicine, law, and commonsense,
with far reaching real world impact.
**2. Related Work**
**Reasoning Surveys of deep learning for mathematical rea-**
soning such as (Lu et al., 2022; Sun et al., 2023) mentioned
the relatively smaller subset of works on multimodal / vision-language models in this space, with datasets and models
which are smaller, niche, and mostly using visual question
answering frameworks. These approaches are lacking since
they are trained on natural images and not on models trained
on vision and language datasets. Subsequent works such
as (Zhang et al., 2024b; Wu et al., 2024; Lu et al., 2023)
and this article, aim to enrich the multimodal mathematical
reasoning domain.
**Vision-Language Models Opportunities for improvement**
of vision language models still exist along problem-solving
and algorithmic reasoning ability, visual grounding, as
well as architectures for encoding, decoding and aligning
(Karamcheti et al., 2024; Tong et al., 2024; Liu et al., 2023;
Wu & Xie, 2023). In this article we focus on the reasoning ability along eight dimensions of reasoning. In the
reasoning realm, much recent work focuses on evaluating
vision-language models on general multimodal tasks (Yue
et al., 2023; Lu et al., 2023) or applying a chain of thought
approach for in context learning (Zhang et al., 2023; 2024a).
In (Azerbayev et al., 2023) large language models are pretrained to solve text only questions bringing on the full
power of heavy-weight LLM, but without taking the visual
signal into account. Then a separate line of work builds
very large general Vision-Language Models (VLM) akin
to an LLM. In (Li et al., 2023b) a vision-language architecture, the Query Transformer, adds transformer layers to
frozen image and text encoders and learns in a contrastive
pretraining paradigm on massive datasets. Llava (Liu et al.,
2024a; 2023) versions (Liu et al., 2023) emerge as a multimodal instruction tuned large model finetuned on a science
dataset. In (Gao et al., 2023) authors use an LLM and
parameter-efficient visual instruction tuning, focusing on
learning efficiently only the adapters, with early fusion of
visual tokens in LLM layers. In (Tong et al., 2024) the deficiencies in visual grounding of large multimodal models
are investigated and mixtures of visual features are proposed
to improve vision modality. In (Li et al., 2023a), another
pretrained and visual instruction tuned framework is proposed, employing CLIP (Radford et al., 2021), Llama (Touvron et al., 2023) and Perceiver adapters (from Flamingo)
(Alayrac et al., 2022) as well as a dataset, MIMIC-IT. The
pretrained vision encoder in DinoV2 (Oquab et al., 2023)
aims to leverage different techniques and diversity of images to pretrain backbones for the purpose of using them as
general-purpose features. However, it is not necessarily true
that even this general purpose encoder will encode all the visual signals, so fusing with representations from a backbone
pretrained in a different fashion, for instance the SigLIP
(Zhai et al., 2023) representation, may provide additional
visual signal boost. Specifically, SigLIP improves on CLIP
(Radford et al., 2021) for language-image pretraining by
employing a sigmoid loss instead of the constrastive learning with softmax normalization and performs better across
tasks.
**Multimodality and representation learning in Math AI.**
MATH-AI research has focused primarily on language models, but problem solving is inherently multimodal (text, image, diagrams, tables, numbers, symbols). In the current
article we utilize the vision and text modalities to encode
all concepts included in each puzzle, without employing
separate encodings and/or representations for symbols or
numbers. Multiple works focus on representation learning
and reasoning with more than the specialized mathematical
visual modality, such as symbols and numbers on symbolic
reasoning (Li et al., 2023c), specialized representation learning works for numbers (Golkar et al., 2023), as well as more
complex hierarchical math concepts with graphs in (Rute
et al., 2024), which would be interesting to include as further modalities in future works. In a related vein, (Wu &
Xie, 2023) and (Tong et al., 2024), proffer the reasoning ca
pabilities of multimodal large language models and explores
their visual representation learning abilities.
**2.1. Benchmark, Dataset, and Challenges**
So how can we help (deep) artificial neural networks reason better? In (Cherian et al., 2022) experiments show
that the visual signal is very important in solving complex
multi-reasoning skill puzzles and, despite being very large,
language-only models lag behind visual language models in
terms of performance. Conversely, in (Zhang et al., 2024b)
the conclusion appears to be that large multimodal models
cannot truly understand the visual diagrams for mathematical reasoning, along the line of weak visual grounding and
poor attention to visual detail in (Tong et al., 2024) and
(Wu & Xie, 2023) for large multimodal models for math,
question answering, and other reasoning tasks. The Simple
Multimodal Algorithmic Reasoning Task (SMART) introduced in (Cherian et al., 2022) contains puzzles that measure
intelligence across eight different reasoning skill classes:
counting, math, logic, path, measure, logic, and pattern.
Problems include an image and a text question and are formulated as multiple choice. We can see a few examples of
problems in Figure 2. Baseline models trained in (Cherian
et al., 2022) struggle to solve this task, especially when
employing transformers. In the past, specialized neural networks such as (Mikuła et al., 2023) have been developed to
solve specific reasoning tasks, specifically premise selection
in automated theorem proving. In this article, we investigate
how we can craft and train deep neural networks which employ several types of deep learning blocks and multimodal
inputs from deep frozen transformers to reason better across
the eight meta reasoning axes in the SMART task.
**The SMART reasoning task and baselines. A set of vision-**
language models are trained as benchmarks in (Cherian
et al., 2022) and SMART-101 with 202K text-image pairs
for train, validation and test dataset is released. There are
101 origin puzzles, and additional problems are generated
programmatically in each puzzle group for a total of 202,000
question-image pairs. Figure 2 clearly describes a training example problem. All trained VLMs struggle on the
SMART task, with transformers underperforming ResNet50
(He et al., 2016) based models. The learning tasks depend
on the type of puzzle and are in the classification, regression,
and sequence generation category. Several image and text
encoder backbones are considered. A puzzle specific set of
image features are learned via an MLP and the text embeddings are aggregated using an LSTM layer. The decoder
for sequence generation is another LSTM layer. All image
encoders are finetuned. Based on these characteristics, there
are a few research opportunities worth exploring, especially
since transformer-based VLM reasoners are doing so poorly
on the challenging SMART task.
_Table 1. Skill class accuracy for original baselines with a 10hr budget training. All backbones are frozen unless noted otherwise._
SMART BASELINE COUNTING MATH LOGIC PATH ALGEBRA MEASURE SPATIAL PATTERN OVERALL
BERT+RESNET50 35.6 **26.4** 36.8 21.5 18.1 26.0 32.2 27.0 28.0
BERT+RESNET50(UNFROZEN) 35.7 20.8 39.6 22.2 18.4 28.2 33.7 30.6 28.2
BERT+MAE (HE ET AL., 2022) 29.8 19.7 29.4 20.5 16.1 18.9 26.6 27.8 23.1
CLIP VLM 35.5 8.6 27.1 17.9 11.8 16.0 26.8 26.3 22
_Table 2. Validation Accuracy per Skill Class (counting, math, logic, path) per Architectural, Optimization and Hyperparameter Choices._
[The fused vision encoder is DinoV2+SigLIP. From CometML multimodalAI.](https://www.comet.com/droberts308/multimodalai/view/new/panels)
CHOICES COUNTING MATH LOGIC PATH VISION LANGUAGE
BASELINE: RESNET50+MBERT 23.4 8.1 18.9 17.9 RESNET50 MBERT
BASELINE: RESNET50+BERT 23.4 8.1 19.2 17.8 RESNET50 BERT
LSTM DECODER SIGLIP VISION 24.6 7.9 17.9 17.9 SIGLIP SIGLIP
LSTM DECODER WITH FUSED VISION 27.7 8.4 21.3 18.6 FUSED SIGLIP
NON-ADAPTIVE IMAGE REPRESENTATION 27.2 8.4 20.2 18.5 FUSED SIGLIP
EXTRA RESIDUAL CONNECTION IN MLP DECODER 21.5 7.4 17.5 17.8 FUSED SIGLIP
WARMUP STEPS 0 29.8 7.4 21.3 20.3 FUSED SIGLIP
WARMUP STEPS 0.06 PERCENT 30 7.8 22.3 19 FUSED SIGLIP
WARMUP STEPS 0.01 PERCENT NO EXTRA RESIDUALS 29.6 8.5 22.9 19.3 FUSED SIGLIP
10 WARMUP STEPS 29.9 8.1 17.8 18.8 FUSED SIGLIP
BATCH SIZE 64 26.1 8.2 21.4 18.8 FUSED SIGLIP
ADAPTIVE IMAGE REPR SIZE 256 25.3 8 21.5 18.5 FUSED SIGLIP
DECODER AND QF HIDDEN SIZE 128 28.4 8.4 22.4 18.8 FUSED SIGLIP
LAYERNORM EPS 1E-5 30 8.3 22.3 19.6 FUSED SIGLIP
DROPOUT PROBABILITY 0.1 29.7 8.2 20.9 19.8 FUSED SIGLIP
ADAMW WITH DEFAULT EPS AND BETA2 29.5 8 21.1 19.1 FUSED SIGLIP
FINAL MODEL: LR 0.001 29.3 8.5 22.8 19.1 FUSED SIGLIP
FINAL MODEL: LR 0.002 23.1 7.8 15.8 18.8 FUSED SIGLIP
FINAL MODEL: LR 0.0005 SEED0 32.8 8.4 23.7 20.1 FUSED SIGLIP
FINAL MODEL: LR 0.0001 31.3 8.3 25.1 19.4 FUSED SIGLIP
FINAL MODEL: LR 0.0003 33.8 8.5 26.2 20.1 FUSED SIGLIP
FINAL MODEL: LR 0.0006 23.6 8.4 19 18.6 FUSED SIGLIP
**Contributions. In (Awad et al., 2023) the authors demon-**
strated how a deep learning module which encode a sequence of image-and-text items using diverse representations composed on several modalities, across time steps,
and across pooling methods, obtained impressive results in
sponsored search and recommendations. Inspired by the
ADPM in (Awad et al., 2023) and using tricks to train vision
transformers in (Dolev et al., 2023) and (Karamcheti et al.,
2024), a smarter VLM is built. In this article, we make the
following contributions on the VLM reasoning axis:
- Introduce a novel multimodal QF-layer to learn a hidden representation from the vision and language modalities.
- Improve the MLP decoders in (Cherian et al., 2022)
through GELU activations, residual connections, and
layer normalization.
- Improve the sequence decoder by replacing the LSTM
with a GRU.
- Strengthen the vision modality by learning an adaptive
visual representation on top of two fused vision backbones: SigLIP (Zhai et al., 2023) and DinoV2 (Oquab
et al., 2023) similarly to (Karamcheti et al., 2024). In
this way, the model makes better use of the puzzle’s
image.
- Strengthen the text-vision alignment by using a frozen
SigLIP language encoder together with the vision
modality which includes the SigLIP vision backbone.
The pretrained text encoder does not overpower the visual signal as much as an LLM as seen in (Tong et al.,
2024; Zhang et al., 2024b).
- Furthermore, the smarter VLM reasoner includes a
composite hidden representation through the concatenation of language-only representations, an adaptive
image-only representation learned on top of the fused
frozen foundation backbones, and the QF multimodal
layer representation which includes a language-vision
cross-attention sublayer. Ablation studies in Section
4 show that the QF layer is essential to the smarter
VLM reasoner. The use of cross-attention improves
the ability of the reasoner to make use of the puzzle’s
visual cues.
- These model improvements lead to up to 48% accuracy gain across several of the meta reasoning skills
measured by the challenging SMART task.
**3. Methodology**
We formalize the problem as supervised learning with a classification loss. For each image-question instance, we predict the probability of one of five answer options.
_Table 3. QF Ablations. Validation Accuracy per Skill Class (counting, math, logic, path) per Architectural, Optimization and Hyperparam-_
[eter Choices. The fused vision encoder is DinoV2+SigLIP and the text encoder is SigLIP. From CometML multimodalAI.](https://www.comet.com/droberts308/multimodalai/view/new/panels)
CHOICES COUNTING MATH LOGIC PATH
SMARTEST VLM 33.8 8.5 26.2 20.1
1 MHA HEADS 29.7 8.2 23.1 19.4
3 MHA HEADS 34.2 8.6 25.8 19.9
4 MHA HEADS 32.9 8.7 25.6 20.1
8 MHA HEADS 33.1 8.4 26.9 20.3
QF INTERMEDIATE SIZE 128 33.1 8.6 25.6 19.8
QF INTERMEDIATE SIZE 512 33.4 8.1 27.3 19.9
QF INTERMEDIATE SIZE 768 33.4 8.7 25.1 19.3
QF INTERMEDIATE RELU 32.8 8.7 26.7 19.8
QF INTERMEDIATE SILU 33.2 8.7 26.5 19.6
COMPOSITE: NO QF LAYER 32.8 8.5 23.8 19.7
COMPOSITE: QF ONLY 32.2 8.0 26.3 18.8
COMPOSITE: QF AND VISION ONLY 33.7 8.7 24.8 20.4
COMPOSITE: QF AND LANGUAGE ONLY 33.6 8.9 24.9 19.4
NO RESIDUAL CONNECTION IN QF INTERMEDIATE 32 8.4 26.4 19
DROPOUT 0 IN QF LAYER 33 8.2 26.9 19.6
DROPOUT 0.1 IN QF LAYER 30.3 8.1 25.5 19
_Table 4. QF Layer Ablations. Validation Accuracy per Skill Class (algebra, measure, spatial, pattern) per Architectural, Optimization and_
[Hyperparameter Choices. The fused vision encoder is DinoV2+SigLIP and the text encoder is SigLIP. From CometML multimodalAI.](https://www.comet.com/droberts308/multimodalai/view/new/panels)
CHOICES ALGEBRA MEASURE SPATIAL PATTERN
SMARTEST VLM 11.2 10.4 26.8 27
1 MHA HEADS 10.5 10.8 23.2 22.7
3 MHA HEADS 11.1 11.3 26.8 27.0
4 MHA HEADS 11.2 10.6 27.8 26.6
8 MHA HEADS 11.6 10.4 27.9 25.8
QF INTERMEDIATE SIZE 128 11.1 10.2 26.9 27.1
QF INTERMEDIATE SIZE 512 11.3 10.4 27.8 26.4
QF INTERMEDIATE SIZE 768 11.3 11.5 27 26.7
QF INTERMEDIATE RELU 10.8 9.9 28.1 25.5
QF INTERMEDIATE SILU 10.9 9.5 27.4 26
COMPOSITE: NO QF 11.5 10.3 25.6 25.4
COMPOSITE: QF ONLY 11.3 10.7 27.4 25.7
COMPOSITE: QF AND VISION ONLY 11.3 11.5 27.3 27.4
COMPOSITE: QF AND LANGUAGE ONLY 11.1 10.9 28.2 25.9
NO RESIDUAL CONNECTION IN QF INTERMEDIATE 11.2 10.6 26.9 26.7
DROPOUT 0 IN QF LAYER 11.1 9.6 27.7 27.4
DROPOUT 0.1 IN QF LAYER 11.1 10.1 25.7 23.7
When the answer options are in the form of a sequence, a decoder module
decodes the answer sequence first, and then the answer is
translated to one of the {A, B, C, D, E} multiple choice
options. In this article, the decoder is a recurrent neural network (Cho et al., 2014). Furthermore, we focus on training
deep learning architectures from scratch for the SMART
task with inputs from diverse pretrained frozen backbones.
We focus the investigation on the eight skill classes
**(counting, math, logic, algebra, path, pattern, measure,**
**spatial) rather than individual puzzle groups, since these rea-**
soning skills are of more general interest across domains and
trademarks of intelligence. In (Chen et al., 2019) authors
demonstrate that strong pretrained backbones can perform
without meta-learning, so we do not employ metalearning
as in (Cherian et al., 2022). The accuracy metric calculated
on the validation set is used to tune the models and evaluate
method success, and the accuracy for the five-class classification is evaluated on the test set. The accuracy is calculated
overall, and more interestingly, broken down by reasoning
skill class (counting, math, etc.).
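As a concrete illustration (our own sketch, not the authors' evaluation code), the overall and per-skill-class accuracy for the five-way answer classification can be computed as follows, given predicted and gold option indices and each puzzle's skill-class label:

```python
# Overall and per-skill accuracy for five-way answer-option classification.
from collections import defaultdict


def accuracy_by_skill(predictions, labels, skills):
    """predictions/labels are option indices (0-4); skills are strings like 'counting'."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, skill in zip(predictions, labels, skills):
        total[skill] += 1
        total["overall"] += 1
        if pred == label:
            correct[skill] += 1
            correct["overall"] += 1
    return {key: 100.0 * correct[key] / total[key] for key in total}


# Toy example with three puzzles from two skill classes.
print(accuracy_by_skill([0, 2, 2], [0, 1, 2], ["counting", "math", "counting"]))
# e.g. {'counting': 100.0, 'overall': 66.67, 'math': 0.0}
```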
We derive a multimodal representation through a novel layer,
the QF layer, inspired by the ADPM in adsFormers (Awad
et al., 2023), the QFormer in (Li et al., 2023b; Zhu et al.,
2024) and VilBERT in (Lu et al., 2019). More recently,
(Karamcheti et al., 2024) and (Tong et al., 2024) combine multiple image representations to leverage diversity
of signal in vision-language models, in a similar vein to the
ADPM. Additionally, inspired from (Cherian et al., 2022),
we learn an adaptive image representation on top of the
fused frozen backbone. Moreover, we directly include the
frozen SigLIP text encoder to encode the question tokens.
The choice of the SigLIP text encoder is two-fold: firstly,
the alignment with the SigLIP vision encoder; secondly, to
tame the language power by not employing a very large
language model (LLM) such as (Gao et al., 2023), which
standard VLM such as BLIP (Li et al., 2023b) or Llava
(Liu et al., 2023) employ. In (Tong et al., 2024) we see
that visual grounding is lacking in large multimodal models.
Furthermore, in (Cherian et al., 2022) the visual signal is
quite important: the accuracy loss is larger when removing
the image rather than the text question, so the visual signal
needs to be protected as it is critical for the reasoning skill
necessary to solve the puzzles.
The QF layer representation learned from the image-question input is concatenated to the average pooled text
representation and the puzzle-specific adaptive image representation from the fused image encoders. As depicted in
Figure 1, the resulting composite representation, denoted
_compositeR, takes as input three component representa-_
tions, r1, r2, and r3, defined as follows.
- Text representation r3 is an average pooled encoding
of the question sequence of max length 110 tokens.
Each token is first encoded using the frozen SigLIP
text model into a representation of size 768. Then
$$r_3 = \mathrm{AveragePooling}([h_1, h_2, \ldots, h_{110}]). \qquad (1)$$
- An image representation r1 from the puzzle-specific
image encoder block of dimension 128 seen in 1. The
dimension is a hyperparameter selected via optimization. The image encoder consists of two feed forward
layers with a GELU unit, with separate weights for
each puzzle head, for the 101 separate puzzle groups
(e.g. one loss calculated per puzzle-group). Each encoder takes as input the image representation from
the two fused pretrained vision backbones, DinoV2
(Oquab et al., 2023) and SigLIP (Zhai et al., 2023),
each of dimension 768. Specifically, for an image X,
_r1 is_
$$r_1 = \mathrm{FC1}_i(\mathrm{GELU}(\mathrm{FC2}_i(y))), \qquad (2)$$
$$y = \mathrm{Concat}([\mathrm{Dino}(x), \mathrm{SigLIP}(x)]) \qquad (3)$$
for $i \in \{1, \ldots, 101\}$, a distinct puzzle group.
- A QF representation, r2, is produced by the QF layer
which takes as input the encoded image representation
_r1 and the SigLIP-encoded sequence of text tokens_
(before average pooling). First, the SigLIP frozen language backbone encodes the 110-long question text
sequence. Then, the QF layer passes the text sequence
through a multi-head self-attention block (Vaswani
et al., 2017). The resulting hidden representation is fed
to a cross-attention layer as query, with keys and values
coming from the adaptive image encoder representation, marginally inspired from the QFormer in (Li et al.,
2023b) and VilBERT (Lu et al., 2019) but with multiple differences. Distinctly from these works, in our
case the image encoder in the cross-attention sublayer
is a per-puzzle group adaptive representation learned
on top of the frozen fused concatenation of DinoV2
and SigLIP vision backbones. Finally, an intermediate
stack of fully connected layers, with residual connections (He et al., 2016), dropout (Srivastava et al., 2014),
and layer normalisation (Ba et al., 2016) produces the
QF text-and-vision multimodal representation. Specifically,
$$r_2 = \mathrm{LayerNorm}(X + \mathrm{Drop}(\mathrm{FC}(\mathrm{GELU}(\mathrm{FC}(X))))) \qquad (4)$$
$$X = \mathrm{MHCrossA}(\mathrm{MHA}([h_1, h_2, \ldots, h_{110}]), r_1) \qquad (5)$$
Finally, the composite QVFusion layer aggregates these distinct representations (text-only, vision-only, and text-and-vision QF multimodal) via concatenation, producing the composite representation $\mathrm{CompositeR} \in \mathbb{R}^{2 \cdot 768 + 128}$, and then passes it through a two-layer feed-forward module with Gaussian error linear units (Hendrycks & Gimpel, 2016) in between, before it is read by the puzzle-specific decoder.
The composite representation is
$$\mathrm{CompositeR} = \mathrm{CLayer}([r_1, r_2, r_3]) \qquad (6)$$
$$= \mathrm{LayerNorm}(\mathrm{Concat}([r_1, r_2, r_3])). \qquad (7)$$
The QVFusion layer in Figure 1 is
$$\mathrm{QVFusion}(y) = \mathrm{LayerNorm}(\mathrm{GELU}(y)) \qquad (8)$$
$$y = \mathrm{FC}(\mathrm{GELU}(\mathrm{FC}(\mathrm{CompositeR}))). \qquad (9)$$
Finally, the decoder, which is either a stack of three fully
connected layers separated by GELU activations, or a gated
recurrent neural network (GRU) (Cho et al., 2014) for
sequence-type answer puzzles, produces predictions fed
to a cross-entropy loss. The introduction of GELU units
with layer normalization boosts performance, as they do in
many recent attention-based multimodal neural networks
(Liu et al., 2023; Alayrac et al., 2022; Li et al., 2023b), by
allowing for a smoother loss landscape than rectified linear
units with batch normalization layers.
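For concreteness, the following PyTorch sketch mirrors Equations (1) through (9). It is our own condensed re-implementation based only on the description above: the hidden sizes, the mean-pooling of the QF output over the token dimension, and the module names are assumptions where the text leaves details unspecified, and the released code at github.com/smarter-vlm/smarter remains the authoritative implementation.

```python
# Sketch of the QF layer and composite QVFusion representation (Eqs. 1-9).
# Frozen-backbone outputs are stood in by random tensors; pooling and sizes assumed.
import torch
import torch.nn as nn

TEXT_DIM, IMG_DIM, HIDDEN = 768, 128, 256  # SigLIP text dim, adaptive image dim, hidden size


class QFLayer(nn.Module):
    def __init__(self, num_heads: int = 2, dropout: float = 0.2):
        super().__init__()
        # Self-attention over the frozen SigLIP question-token embeddings.
        self.self_attn = nn.MultiheadAttention(TEXT_DIM, num_heads, batch_first=True)
        # Cross-attention: text hidden states as queries, adaptive image vector as key/value.
        self.cross_attn = nn.MultiheadAttention(
            TEXT_DIM, num_heads, kdim=IMG_DIM, vdim=IMG_DIM, batch_first=True
        )
        self.intermediate = nn.Sequential(
            nn.Linear(TEXT_DIM, HIDDEN), nn.GELU(), nn.Linear(HIDDEN, TEXT_DIM)
        )
        self.dropout = nn.Dropout(dropout)
        self.norm = nn.LayerNorm(TEXT_DIM, eps=1e-6)

    def forward(self, text_tokens, image_repr):
        # text_tokens: (B, 110, 768); image_repr: (B, 128) adaptive per-puzzle vector (r1).
        h, _ = self.self_attn(text_tokens, text_tokens, text_tokens)
        kv = image_repr.unsqueeze(1)             # (B, 1, 128)
        x, _ = self.cross_attn(h, kv, kv)        # (B, 110, 768), Eq. (5)
        x = self.norm(x + self.dropout(self.intermediate(x)))  # Eq. (4)
        return x.mean(dim=1)                     # pool to (B, 768); pooling is an assumption


class CompositeHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.qf = QFLayer()
        self.norm = nn.LayerNorm(2 * TEXT_DIM + IMG_DIM, eps=1e-6)   # Eqs. (6)-(7)
        self.qv_fusion = nn.Sequential(                              # Eqs. (8)-(9)
            nn.Linear(2 * TEXT_DIM + IMG_DIM, HIDDEN), nn.GELU(),
            nn.Linear(HIDDEN, HIDDEN), nn.GELU(), nn.LayerNorm(HIDDEN, eps=1e-6)
        )

    def forward(self, text_tokens, image_repr):
        r1 = image_repr                        # adaptive image representation (B, 128)
        r2 = self.qf(text_tokens, image_repr)  # QF multimodal representation (B, 768)
        r3 = text_tokens.mean(dim=1)           # average-pooled question representation, Eq. (1)
        composite = self.norm(torch.cat([r1, r2, r3], dim=-1))
        return self.qv_fusion(composite)       # read by the puzzle-specific decoder


# Shape check with random tensors standing in for frozen-backbone outputs.
out = CompositeHead()(torch.randn(4, 110, TEXT_DIM), torch.randn(4, IMG_DIM))
print(out.shape)  # torch.Size([4, 256])
```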
**4. Experiments and Results**
We train several baselines from (Cherian et al., 2022) and
give results in Table 1. We chose to move forward with
the frozen BERT+ResNet50 as baseline for two reasons:
1. Note that the numbers are extremely close between the
frozen and unfrozen variants but the frozen variant does
better on Math, a top skill of interest for this investigation; 2.
The backbones are frozen which we favor in this article for a
few reasons. The first reason is efficiency of training. Frozen
backbones result in fewer parameters to update. Secondly,
as noted in (Karamcheti et al., 2024), finetuning vision
backbones can deteriorate the performance of the visionlanguage model. Thirdly, keeping the backbone frozen
affords a better comparison between models and the ability
to reuse some hyperparameter settings such as batch size,
number of epochs, or optimizer choice. If we choose to
train the vision backbones, transformers and CNN-based
ResNets typically need to employ different training choices.
Next, we discuss experimental results on a subset of the
SMART-101 training set.
**Challenges and limitations for getting better models in-**
clude compute requirements-to tune deep neural networks
one needs to run many experiments. Large transformers are
data hungry as well so one needs GPUs with large memories
and disk space for extended periods of time. For efficiency
of resource utilisation, we sampled the dataset and raised the
bar on the model architectures rather than on data and compute. A dataset split of train : val : test = 60 : 20 : 20
and only 1000 out of the total 2000 question-image pairs per
puzzle group are utilized for fast training and insight, with a
total budget of three epochs of training. This results in 474
training batches, 158 validation and 158 test batches of size
128. The additional model layers described by Equations
1 through 9 result in 29,623,375 trainable parameters for
the final best model. All the models were trained on one
NVIDIA V100 GPU with 40Gb of memory, for 1-2 hours
when training on the downsampled dataset. Experiments are
[tracked in CometML (Comet.com, 2021), the multimodalAI](https://www.comet.com/droberts308/multimodalai/view/new/panels)
public project.
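A quick back-of-the-envelope check of the quoted batch counts (our own arithmetic, assuming the final partial batch in each split is kept):

```python
# Verify the reported 474/158/158 batch counts for the downsampled SMART-101 setup.
import math

puzzle_groups, pairs_per_group, batch_size = 101, 1000, 128
total_pairs = puzzle_groups * pairs_per_group          # 101,000 question-image pairs
for split, fraction in {"train": 0.6, "val": 0.2, "test": 0.2}.items():
    n = int(total_pairs * fraction)
    print(split, n, math.ceil(n / batch_size), "batches")
# train 60600 474 batches; val 20200 158 batches; test 20200 158 batches
```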
**Training and Evaluation. Tables 2 and 6 (in the Appendix)**
show results from experiments ablating architecture, optimization, and hyperparameter decisions toward the final
model. To avoid an inefficient combinatorial explosion of
hyperparameter choices, the authors’ deep learning experience guided the experimentation process in a stepwise
fashion, aided by learning curve visualisations in CometML.
Watching training and validation curves is an intimate part
of the deep learning development process. In Figure 3 we
can see training loss nicely descending over the training
steps across the three epochs modulated by five different
learning rates with a cosine scheduler (Loshchilov & Hutter,
2016) which adapts the learning rate based on the step number. Note how the large 0.002 learning rate (orange) impacts
learning negatively in the strongest way. Noticeable bumps
in curves depend on the scheduler's change points. **Reproducibility.** All the experiments are run with seed set to zero
for the sake of reproducibility. Furthermore, by tracking
machine learning training and evaluation experiments with
CometML, all the hyperparameters for a given experiments
are recorded, loss and accuracy curves and metrics, as well
as running code which can be checked out from the linked
repository. All results are fully reproducible.
**Final results. As can be seen in Table 5, the best trained**
_Figure 3. Epoch Train Loss, Validation Loss, and Validation Ac-_
[curacy for five different learning rates. From CometML multi-](https://www.comet.com/droberts308/multimodalai/view/QF0ah3akqYB6IiNuyVXuRchlh/panels)
[modalAI.](https://www.comet.com/droberts308/multimodalai/view/QF0ah3akqYB6IiNuyVXuRchlh/panels)
models display massive gains in accuracy (e.g., a +48% gain
**over the baseline in the counting skill) across reasoning**
skills. Recall that the baseline was chosen as the strongest
in the math skill; fittingly, the baseline was hardest to beat
in math compared to the other skills. Tables 2 and 6 (in the Appendix)
list the hyperparameter setting and other training choices
experimented with to arrive at the best results in Table 5.
In summary, the smartest VLM reasoner was trained using
the following deep learning choices, inspired as a starting
point from results in (Awad et al., 2023; Dolev et al., 2023;
Roberts, 2019; Olteanu Roberts, 2021):
- Adam optimizer with decoupled weight decay from (Loshchilov & Hutter, 2018), with weight decay of 0.2, eps = 1e-8, and beta2 = 0.98; a configuration sketch in code is given after this list.
- Cosine learning rate scheduler (Loshchilov & Hutter,
2016) with ten warmup steps, using the implementation
in the HuggingFace repository (Wolf et al., 2019).
- Clipping the gradient norm to no more than one to
avoid explosions.
- Layer normalization throughout the architecture modules with eps = 1e − 6 for better learning and generalization.
- A SigLIP frozen language backbone and fused DinoV2
and SigLIP vision backbone.
- A composite hidden representation with a QF layer
with two attention heads, concatenated with text-only
and adaptive vision-only representations.
- Dropout probability of 0.2 anywhere dropout is used.
- Employing Gaussian error linear units instead of rectified linear units in the decoder leads to improvements
across the eight skills. The MLP decoder is akin to
SigLIP’s MLP block (a stack of feed-forward layers
with GELU activation).
- Hidden representation sizes of 128 (for instance for
the adaptive image representation), except the hidden
size within the GRU which is 256. Note that we learn
_Table 5. Test set skill class accuracy for top models and comparison to the baseline in first row (percentage change). From CometML_
[multimodalAI.](https://www.comet.com/droberts308/multimodalai/view/new/panels)
NEURAL NET COUNTING MATH LOGIC PATH ALGEBRA MEASURE SPATIAL PATTERN OVERALL
BERT+RESNET50 23.4(-) 9.6(-) 17.9(-) 17.5(-) 10.5(-) 9.9(-) 25.8(-) 20.3(-) 17.1(-)
SMARTERVLM LR0.001 29.0(+24%) 9.9 (+3%) 21.2 (+18%) 17.9(+2%) 10.8 (+3%) 11.1 (+12%) 23.2 (-10%) 25.7 (+27%) 19.12 (+12%)
SMARTERVLM LR0.0005 32.9(+41%) 10.0(+4%) 22.8(+27%) **19.5(+11%)** 11.2(+7%) **11.6(+17%)** 26.3(+2%) 25.8(+27%) 20.86(+22%)
**_SmartestVLM lr0.0003_** **34.7(+48%)** 9.5(-1%) **25.7(+44%)** **19.5(+11%)** **11.3(+8%)** 11.1(+12%) **26.7(+3%)** **27.4(+35%)** **21.59(+26%)**
SMARTERVLM (NO QF) 32.3(+38%) **10.3(+7%)** 23.3(+30%) 18.8(+7%) 10.0(-5%) 10.1(+2%) 25.8 (+0%) 23.6(+16%) 20.14(+18%)
an adaptive per-puzzle group visual representation by
using a fully connected layer on top of the fused vision
backbone.
- GRU decoder for problems with sequence answer, as
they are easier to train than LSTMs.
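A minimal, self-contained sketch of the optimizer, scheduler, and gradient-clipping choices listed above (a stand-in linear model and random batches replace the smarterVLM reasoner and the SMART-101 loader; beta1 = 0.9 is assumed as the default):

```python
# Optimizer / scheduler configuration sketch: AdamW (wd=0.2, eps=1e-8, beta2=0.98),
# cosine schedule with 10 warmup steps, gradient norm clipped to 1.0.
import torch
from torch import nn
from transformers import get_cosine_schedule_with_warmup

model = nn.Linear(16, 5)  # stand-in for the smarterVLM reasoner
data = [(torch.randn(128, 16), torch.randint(0, 5, (128,))) for _ in range(4)]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=0.2,
                              eps=1e-8, betas=(0.9, 0.98))
num_training_steps = 3 * 474  # three epochs of 474 training batches, as reported above
scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=10,
                                            num_training_steps=num_training_steps)
criterion = nn.CrossEntropyLoss()

for features, labels in data:
    loss = criterion(model(features), labels)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```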
A vital insight arose through the training process. In Figure 4 notice how the eight skill sets have different training
dynamics and respond differently to learning rate choices,
as well as to the cosine scheduler’s learning rate decision
throughout the training steps. This is something commonly
seen in multitask learning (Caruana, 1997; Dolev et al.,
2023), and sparks one of our recommendations in the future
work section. All experiments are run with seed 0 for the
sake of reproducibility but we also evaluated the test accuracy standard deviation across a few seeds (0, 42, 7) with
mean overall test accuracy 20.8 (σ = 0.16), math skill mean
accuracy of 9.73 (σ = 0.38), and pattern mean accuracy
25.2 (σ = 0.53). The other skills have similar variability.
As expected, we see more variability on smaller individual
skill class sets in terms of how many puzzles are in that
specific skill category.
_Figure 4. Validation accuracy curves per skill class (counting,_
math, spatial, logic, pattern, measure) for five different learning
rates.
**QF layer ablations. The QF layer employs a multihead**
self-attention (MHA) sublayer using the SigLIP language encoder representations and a cross-attention sublayer which
uses the text hidden MHA representations as queries and the
adaptive image representations as keys and values. Therefore the hidden sizes in the MHA QF subnetworks are fixed
to the representation sizes of the language and vision encoder representations. There is flexibility on the intermediate subnetwork composition as well as the number of
heads, addition of normalization, dropout, and residual layers. We have performed extensive ablation experiments to
understand the impact of these different choices for the QF
layer, and how the learned QF representation performs as
part of the QVFusion composite representation where it is
concatenated with vision and language-only representations.
**Constituency of composite representation ablation re-**
**sults.** From Tables 3 and 4 we can see that the inclusion of the QF layer representation together with the
vision-only, language-only, both vision and language or QF
representation-only improves accuracy on all skill sets. Intuitively, the model can better use the word-language cross-signals to make sense of the puzzles. Interestingly, some skill
sets benefit from visual cues more, where only QF and vision representations are included in the composite representation and not language (pattern and measure) while other
skills (math, spatial) benefit from using QF and language
representations (no image representation concatenated in
the composite QVFusion), which perhaps is an indication
that in some math puzzles the model cannot make good use
of the diagram/image as mentioned in (Zhang et al., 2024b).
The insight here is that the model can still make sense from
the cross-signal from text-image encoded through the QF
layer which includes a learned cross-attention sublayer with
language as queries and the adaptive vision representation
(learned on top of the fused frozen vision encoders) as keys
and values. Intuitively this makes sense because humans
will rely on the visual cues more for some type of problems
and more on the verbal cues for others. Furthermore, one
theory of learning postulates that some humans are more
“visual learners” while others are more “auditory” learners,
relying more on speech and language (Pennsylvania Higher
Education Assistance Agency).
**Activation function used inside the QF layer. We ablated**
the activation function to use the self-gated activation (Ramachandran et al., 2017) instead of GELU, as well as ReLU.
Several of the large language models, such as (Touvron et al.,
2023; Jiang et al., 2024), use the SiLU activation, which
motivated our choice. We found that GELU works best on
most skills, except on math-type puzzles, where both ReLU
and SiLU work better. Based on these results only we do
not have an intuition of why this might happen. We will
perform additional future experiments on a larger dataset
for deeper understanding.
**QF intermediate sublayer ablations.** The QF layer includes a multihead attention (MHA) sublayer which takes the frozen text representations as input, a cross-attention sublayer which uses the MHA hidden representations as queries and the adaptive image representations as keys and values, and a final intermediate stack consisting of two fully connected layers and a residual connection with dropout and layer normalization. Exclusion of the residual connection (together with dropout and layer normalization) had a detrimental impact across skills, confirming the importance of residuals and regularization techniques for learning in deep transformers and for generalization. Furthermore, an ablation on the dropout level confirms better generalization with more regularization from dropout. The ablation of the intermediate layer sizing had mixed results across reasoning skills, and we proceeded, for symmetry, with the hidden size used elsewhere throughout the reasoner architecture (256). The QF layer includes two types of multihead attention (language-only self-attention and language-vision cross-attention) which share the number of heads. An ablation on the number of heads (1, 2, 4, 8) showed mixed results across skills, with very similar average accuracy (except for one head, which underperforms all the other choices), and is worth experimenting with further on larger datasets in future work.
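To make the ablated components concrete, the following is a minimal PyTorch-style sketch of a QF-like layer as described above: text self-attention, text-to-image cross-attention, and an intermediate stack of two fully connected layers with a residual connection, dropout, and layer normalization. The dimensions, number of heads, dropout level, and pooling are illustrative placeholders rather than the exact configuration used in the article.

```python
# Minimal sketch of a QF-style fusion layer (illustrative, not the article's
# implementation): text tokens attend to themselves, then cross-attend to the
# adaptive image features; a small feed-forward stack with residual, dropout,
# and layer normalization produces the QF representation.
import torch
import torch.nn as nn


class QFLayer(nn.Module):
    def __init__(self, text_dim=768, image_dim=256, hidden_dim=256,
                 num_heads=2, p_drop=0.2, activation=nn.GELU):
        super().__init__()
        # Self-attention over the frozen text-encoder representations.
        self.self_attn = nn.MultiheadAttention(text_dim, num_heads,
                                               dropout=p_drop, batch_first=True)
        # Cross-attention: text queries, adaptive image keys/values.
        self.cross_attn = nn.MultiheadAttention(text_dim, num_heads,
                                                dropout=p_drop, batch_first=True,
                                                kdim=image_dim, vdim=image_dim)
        # Intermediate stack: two fully connected layers, ablated together
        # with the residual connection, dropout, and layer normalization.
        self.ffn = nn.Sequential(
            nn.Linear(text_dim, hidden_dim), activation(),
            nn.Dropout(p_drop), nn.Linear(hidden_dim, text_dim),
        )
        self.norm1 = nn.LayerNorm(text_dim)
        self.norm2 = nn.LayerNorm(text_dim)
        self.norm3 = nn.LayerNorm(text_dim)
        self.drop = nn.Dropout(p_drop)

    def forward(self, text_feats, image_feats):
        # text_feats: (batch, num_tokens, text_dim)
        # image_feats: (batch, num_patches, image_dim)
        h, _ = self.self_attn(text_feats, text_feats, text_feats)
        h = self.norm1(text_feats + self.drop(h))
        x, _ = self.cross_attn(h, image_feats, image_feats)
        x = self.norm2(h + self.drop(x))
        out = self.norm3(x + self.drop(self.ffn(x)))
        # Pool over tokens to obtain one QF vector per example.
        return out.mean(dim=1)
```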
**Development process and scaling side note.** For a deep understanding of the behavior of the models with various architectural and hyperparameter choices, in the development phase we started with training and evaluation on a very small subset of data for quick insight and iteration: only 20 questions per puzzle, with a batch size of 16, on a split ratio of train : val : test = 40 : 20 : 40. The experimental results are available in Tables 7 and 8 in the Appendix; they were tracked with CometML (Comet.com, 2021) and are publicly accessible at [vlm-reasoners](https://www.comet.com/droberts308/vlm-reasoners/view/C6sw7GhOEifcK1S0eJL5i4rgx/panels).
**5. Discussion and Future Work**
In this article, we show how deep learning architectural innovations as well as hyperparameter and training choices led to improvements in model performance on the SMART reasoning task. Multimodal transformer-like architectures, deep learning representations, and stronger visual grounding led to improvements in eight fundamental reasoning skills of vision-language models.
**Future work.** Considering the different learning dynamics for the eight skill classes, a **multitask learning approach** (Caruana, 1997; Lu et al., 2022; Dolev et al., 2023) with eight tasks may afford modulating the impact of eight weighted losses to account for the different dynamics. A mixture-of-experts approach (Zhao et al., 2019) within the multitask learning framework could further help. Further experimentation with other custom layers similar to the QF layer in this article, which facilitate better synergies across modalities, may spark further improvements across fundamental reasoning skills. Experimenting with **other simple or composite general purpose backbones across modalities**, deeper or wider, is another potential avenue, based on the improvements seen in this work due to the fused DinoV2+SigLIP.
Efficient training techniques employing compression (Dettmers et al., 2024), autodiff variations (Roberts & Roberts, 2020), or variations of multimodal transformers’ efficient training (Liu et al., 2024b) may facilitate access to larger model sizes. In this article, visual and text representations are combined through concatenation. Experiments with an approach similar to (Ramrakhya et al., 2024), where learned representations on the decoder path take frozen visual features as inputs and are conditioned on text embeddings via an element-wise product, show faster learning and better performance in the context of Math AI, according to initial experiments. Decoder-only architectures (Radford et al., 2019) took generative modeling by storm, and exploring other decoding architectures for the VLM reasoner instead of recurrent neural networks, such as Perceiver-inspired layers (Alayrac et al., 2022) or convolutional neural networks (He et al., 2016; Roberts, 2019), may show interesting results. Furthermore, utilizing a **frozen text encoder pretrained to outperform in mathematical reasoning** and for efficiency, such as Mistral in (Jiang et al., 2023) or Mixtral in (Jiang et al., 2024), in conjunction with the strengthened vision encoders and cross-attention layers introduced in this article, is a further avenue worth exploring for next-generation VLM reasoners.
Additionally, in (Hu et al., 2024) LLMs are instructed to follow rules for general problem solving, instead of relying on the cases already seen in training. We expect that in the multimodal SMART puzzle-solving case, where we have multiple generated instances of each unique root puzzle, a similar investigation may improve generalization to unseen root puzzles. In fact, based on the progress in the quality of visual representations generated for multimodal mathematical knowledge (Wu et al., 2024), more problems could be automatically generated using machine-generated, model-based representations, for further data augmentation to improve learning and generalization. While we deep-dive into the SMART benchmark, several other multimodal evaluation benchmarks were recently published in the AI for Math space, as mentioned in the literature review section, and our architecture can be applied to any of them, which we leave for future work. Finally, other Math AI tasks, such as theorem proving and other tasks mentioned in the excellent survey article (Lu et al., 2022), can benefit from our methodology.
**Impact Statement**
This paper presents work whose goal is to advance the field
of Machine Learning and Math AI. There are many potential
societal consequences of our work, none of which we feel must be specifically highlighted here.
**References**
Alayrac, J.-B., Donahue, J., Luc, P., Miech, A., Barr, I.,
Hasson, Y., Lenc, K., Mensch, A., Millican, K., Reynolds,
M., et al. Flamingo: a visual language model for fewshot learning. Advances in neural information processing
_systems, 35:23716–23736, 2022._
Awad, A., Roberts, D., Dolev, E., Heyman, A.,
Ebrahimzadeh, Z., Weil, Z., Mejran, M., Malpani, V.,
and Yavuz, M. adSformers: Personalization from ShortTerm Sequences and Diversity of Representations in Etsy
Ads. arXiv preprint arXiv:2302.01255, 2023.
Azerbayev, Z., Schoelkopf, H., Paster, K., Santos, M. D.,
McAleer, S., Jiang, A. Q., Deng, J., Biderman, S., and
Welleck, S. Llemma: An open language model for mathematics. arXiv preprint arXiv:2310.10631, 2023.
Ba, J. L., Kiros, J. R., and Hinton, G. E. Layer normalization.
_arXiv preprint arXiv:1607.06450, 2016._
Bengio, Y., Courville, A., and Vincent, P. Representation
learning: A review and new perspectives. IEEE transac_tions on pattern analysis and machine intelligence, 35(8):_
1798–1828, 2013.
Caruana, R. Multitask learning. Machine learning, 28(1):
41–75, 1997.
Chen, W.-Y., Liu, Y.-C., Kira, Z., Wang, Y.-C. F., and Huang,
J.-B. A closer look at few-shot classification. arXiv
_preprint arXiv:1904.04232, 2019._
Cherian, A., Peng, K.-C., Lohit, S., Smith, K., and Tenenbaum, J. B. Are deep neural networks smarter than second graders? arXiv, 2022. Retrieved July 9, 2023.
Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., and Bengio, Y. Learning phrase representations using RNN encoder-decoder for statistical machine translation. _arXiv preprint arXiv:1406.1078, 2014._
Chollet, F. On the measure of intelligence. arXiv preprint
_arXiv:1911.01547, 2019._
[Comet.com. Comet.com home page, 2021. URL https:](https://www.comet.com/)
[//www.comet.com/.](https://www.comet.com/)
Dettmers, T., Pagnoni, A., Holtzman, A., and Zettlemoyer, L.
Qlora: Efficient finetuning of quantized LLMs. Advances
_in Neural Information Processing Systems, 36, 2024._
Didolkar, A., Goyal, A., Ke, N. R., Guo, S., Valko, M.,
Lillicrap, T., Rezende, D., Bengio, Y., Mozer, M., and
Arora, S. Metacognitive capabilities of LLMs: An exploration in mathematical problem solving. arXiv preprint
_arXiv:2405.12205, 2024._
Dolev, E., Awad, A., Roberts, D., Ebrahimzadeh, Z.,
Mejran, M., Malpani, V., and Yavuz, M. Efficient
large-scale vision representation learning. arXiv preprint
_arXiv:2305.13399, 2023._
Gao, P., Han, J., Zhang, R., Lin, Z., Geng, S., Zhou, A.,
Zhang, W., Lu, P., He, C., Yue, X., et al. Llama-adapter
v2: Parameter-efficient visual instruction model. arXiv
_preprint arXiv:2304.15010, 2023._
Golkar, S., Pettee, M., Eickenberg, M., Bietti, A., Cranmer,
M., Krawezik, G., Lanusse, F., McCabe, M., Ohana, R.,
Parker, L., et al. xval: A continuous number encoding for
large language models. arXiv preprint arXiv:2310.02989,
2023.
He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE
_conference on computer vision and pattern recognition,_
pp. 770–778, 2016.
He, K., Chen, X., Xie, S., Li, Y., Dollár, P., and Girshick,
R. Masked autoencoders are scalable vision learners. In
_Proceedings of the IEEE/CVF conference on computer_
_vision and pattern recognition, pp. 16000–16009, 2022._
Hendrycks, D. and Gimpel, K. Gaussian error linear units
(GELUs). arXiv preprint arXiv:1606.08415, 2016.
Hu, Y., Tang, X., Yang, H., and Zhang, M. Case-based or
rule-based: How do transformers do the math? arXiv
_preprint arXiv:2402.17709, 2024._
Jiang, A. Q., Sablayrolles, A., Mensch, A., Bamford, C.,
Chaplot, D. S., Casas, D. d. l., Bressand, F., Lengyel, G.,
Lample, G., Saulnier, L., et al. Mistral 7b. arXiv preprint
_arXiv:2310.06825, 2023._
Jiang, A. Q., Sablayrolles, A., Roux, A., Mensch, A., Savary,
B., Bamford, C., Chaplot, D. S., Casas, D. d. l., Hanna,
E. B., Bressand, F., et al. Mixtral of experts. arXiv preprint arXiv:2401.04088, 2024.
Karamcheti, S., Nair, S., Balakrishna, A., Liang, P., Kollar,
T., and Sadigh, D. Prismatic vlms: Investigating the
design space of visually-conditioned language models.
_arXiv preprint arXiv:2402.07865, 2024._
Li, B., Zhang, Y., Chen, L., Wang, J., Pu, F., Yang, J., Li, C.,
and Liu, Z. Mimic-it: Multi-modal in-context instruction
tuning. arXiv preprint arXiv:2306.05425, 2023a.
Li, J., Li, D., Savarese, S., and Hoi, S. Blip-2: Bootstrapping
language-image pre-training with frozen image encoders
and large language models. In International conference
_on machine learning, pp. 19730–19742. PMLR, 2023b._
Li, W., Li, W., Yu, L., Wu, M., Liu, J., and Li, Y. A
neural-guided dynamic symbolic network for exploring
mathematical expressions from data. _arXiv preprint_
_arXiv:2309.13705, 2023c._
Liu, H., Li, C., Li, Y., and Lee, Y. J. Improved baselines with visual instruction tuning. _arXiv preprint_
_arXiv:2310.03744, 2023._
Liu, H., Li, C., Wu, Q., and Lee, Y. J. Visual instruction tuning. Advances in neural information processing systems,
36, 2024a.
Liu, S.-Y., Wang, C.-Y., Yin, H., Molchanov, P., Wang,
Y.-C. F., Cheng, K.-T., and Chen, M.-H. Dora:
Weight-decomposed low-rank adaptation. arXiv preprint
_arXiv:2402.09353, 2024b._
Loshchilov, I. and Hutter, F. SGDR: Stochastic gradient
descent with warm restarts. In International Conference
_on Learning Representations, 2016._
Loshchilov, I. and Hutter, F. Decoupled weight decay regularization. In International Conference on Learning
_Representations, 2018._
Lu, J., Batra, D., Parikh, D., and Lee, S. VilBERT: Pretraining task-agnostic visiolinguistic representations for
vision-and-language tasks. Advances in neural informa_tion processing systems, 32, 2019._
Lu, P., Qiu, L., Yu, W., Welleck, S., and Chang, K.-W.
A survey of deep learning for mathematical reasoning.
_arXiv preprint arXiv:2212.10535, 2022._
Lu, P., Bansal, H., Xia, T., Liu, J., Li, C., Hajishirzi,
H., Cheng, H., Chang, K.-W., Galley, M., and Gao,
J. Mathvista: Evaluating mathematical reasoning of
foundation models in visual contexts. arXiv preprint
_arXiv:2310.02255, 2023._
Mikuła, M., Antoniak, S., Tworkowski, S., Jiang, A. Q.,
Zhou, J. P., Szegedy, C., Kuciński, Ł., Miłoś, P., and
Wu, Y. Magnushammer: A transformer-based approach
to premise selection. arXiv preprint arXiv:2303.04488,
2023.
Olteanu Roberts, D. A. Multilingual evidence retrieval and
fact verification to combat global disinformation: The
power of polyglotism. In Advances in Information Re_trieval: 43rd European Conference on IR Research, ECIR_
_2021, Virtual Event, March 28–April 1, 2021, Proceed-_
_ings, Part II 43, pp. 359–367. Springer, 2021._
Oquab, M., Darcet, T., Moutakanni, T., Vo, H., Szafraniec,
M., Khalidov, V., Fernandez, P., Haziza, D., Massa, F., ElNouby, A., et al. Dinov2: Learning robust visual features
without supervision. arXiv preprint arXiv:2304.07193,
2023.
Pennsylvania Higher Education Assistance Agency. The Learning Styles. URL [http://www.educationplanner.org/students/self-assessments/learning-styles-styles](http://www.educationplanner.org/students/self-assessments/learning-styles-styles).
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D.,
Sutskever, I., et al. Language models are unsupervised
multitask learners. OpenAI blog, 1(8):9, 2019.
Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G.,
Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J.,
et al. Learning transferable visual models from natural
language supervision. In International conference on
_machine learning, pp. 8748–8763. PMLR, 2021._
Ramachandran, P., Zoph, B., and Le, Q. V. Swish:
a self-gated activation function. _arXiv preprint_
_arXiv:1710.05941, 7(1):5, 2017._
Ramrakhya, R., Kembhavi, A., Batra, D., Kira, Z., Zeng,
K.-H., and Weihs, L. Seeing the unseen: Visual common sense for semantic placement. _arXiv preprint_
_arXiv:2401.07770, 2024._
Roberts, D. Neural networks for Lorenz map prediction:
A trip through time. arXiv preprint arXiv:1903.07768,
2019.
Roberts, D. A. and Roberts, L. R. QR and LQ decomposition matrix backpropagation algorithms for square, wide,
and deep–real or complex–matrices and their software
implementation. arXiv preprint arXiv:2009.10071, 2020.
Rute, J., Olšák, M., Blaauwbroek, L., Massolo, F. I. S.,
Piepenbrock, J., and Pestun, V. Graph2tac: Learning
hierarchical representations of math concepts in theorem
proving. arXiv preprint arXiv:2401.02949, 2024.
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I.,
and Salakhutdinov, R. Dropout: a simple way to prevent
neural networks from overfitting. The journal of machine
_learning research, 15(1):1929–1958, 2014._
Sun, J., Zheng, C., Xie, E., Liu, Z., Chu, R., Qiu, J.,
Xu, J., Ding, M., Li, H., Geng, M., et al. A survey
of reasoning with foundation models. arXiv preprint
_arXiv:2312.11562, 2023._
Tong, S., Liu, Z., Zhai, Y., Ma, Y., LeCun, Y., and Xie,
S. Eyes wide shut? exploring the visual shortcomings
of multimodal llms. arXiv preprint arXiv:2401.06209,
2024.
Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi,
A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P.,
Bhosale, S., et al. Llama 2: Open foundation and finetuned chat models. arXiv preprint arXiv:2307.09288,
2023.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones,
L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. Attention is all you need. Advances in neural information
_processing systems, 30, 2017._
Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C.,
Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M.,
et al. Huggingface’s transformers: State-of-the-art natural
language processing. arXiv preprint arXiv:1910.03771,
2019.
Wu, L., Choi, S., Raggi, D., Stockdill, A., Garcia, G. G.,
Colarusso, F., Cheng, P. C., and Jamnik, M. Generation
of visual representations for multi-modal mathematical
knowledge. In Proceedings of the AAAI Conference on Ar_tificial Intelligence, volume 38, pp. 23850–23852, 2024._
Wu, P. and Xie, S. V* : Guided visual search as a
core mechanism in multimodal llms. _arXiv preprint_
_arXiv:2312.14135, 2023._
Yue, X., Ni, Y., Zhang, K., Zheng, T., Liu, R., Zhang, G.,
Stevens, S., Jiang, D., Ren, W., Sun, Y., et al. Mmmu:
A massive multi-discipline multimodal understanding
and reasoning benchmark for expert agi. arXiv preprint
_arXiv:2311.16502, 2023._
Zhai, X., Mustafa, B., Kolesnikov, A., and Beyer, L. Sigmoid loss for language image pre-training. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pp. 11975–11986, 2023.
Zhang, D., Yang, J., Lyu, H., Jin, Z., Yao, Y., Chen, M., and
Luo, J. Cocot: Contrastive chain-of-thought prompting
for large multimodal models with multiple image inputs.
_arXiv preprint arXiv:2401.02582, 2024a._
Zhang, R., Jiang, D., Zhang, Y., Lin, H., Guo, Z., Qiu, P.,
Zhou, A., Lu, P., Chang, K.-W., Gao, P., et al. Mathverse:
Does your multi-modal llm truly see the diagrams in
visual math problems? arXiv preprint arXiv:2403.14624,
2024b.
Zhang, Z., Zhang, A., Li, M., Zhao, H., Karypis, G., and
Smola, A. Multimodal chain-of-thought reasoning in language models. arXiv preprint arXiv:2302.00923, 2023.
Zhao, Z., Hong, L., Wei, L., Chen, J., Nath, A., Andrews,
S., Kumthekar, A., Sathiamoorthy, M., Yi, X., and Chi,
E. Recommending what video to watch next: a multitask ranking system. In Proceedings of the 13th ACM
_Conference on Recommender Systems, pp. 43–51, 2019._
Zhu, D., Tang, X., Han, W., Lu, J., Zhao, Y., Xing, G., Wang,
J., and Yin, D. Vislinginstruct: Elevating zero-shot learning in multi-modal language models with autonomous instruction optimization. arXiv preprint arXiv:2402.07398,
2024.
_Table 6. Follow-up to Table 2 from the main text. Validation accuracy per skill class (algebra, measure, spatial, pattern) per architectural, optimization, and hyperparameter choice. The fused vision encoder is DinoV2+SigLIP. From CometML [multimodalAI](https://www.comet.com/droberts308/multimodalai/view/new/panels)._

| CHOICES | ALGEBRA | MEASURE | SPATIAL | PATTERN | VISION | LANGUAGE |
| --- | --- | --- | --- | --- | --- | --- |
| BASELINE: RESNET50+MBERT | 10.3 | 10.2 | 23.8 | 20.8 | RESNET50 | MBERT |
| BASELINE: RESNET50+BERT | 10.7 | 10.3 | 24.8 | 20.5 | RESNET50 | BERT |
| LSTM DECODER SIGLIP VISION | 10 | 9.1 | 22.6 | 21.4 | SIGLIP | SIGLIP |
| LSTM DECODER WITH FUSED VISION | 11.2 | 9.2 | 23.3 | 25.1 | FUSED | SIGLIP |
| NON-ADAPTIVE IMAGE REPRESENTATION | 10.3 | 10.2 | 23.4 | 22.1 | FUSED | SIGLIP |
| EXTRA RESIDUAL CONNECTION IN MLP DECODER | 9.3 | 10.4 | 21.6 | 20.3 | FUSED | SIGLIP |
| WARMUP STEPS 0 | 10.1 | 10.8 | 22.8 | 26.6 | FUSED | SIGLIP |
| WARMUP STEPS 0.06 PERCENT | 9.6 | 9.8 | 22.9 | 25.1 | FUSED | SIGLIP |
| WARMUP STEPS 0.01 PERCENT NO EXTRA RESIDUALS | 9.8 | 9.8 | 21.1 | 26.5 | FUSED | SIGLIP |
| 10 WARMUP STEPS | 9.5 | 9.7 | 23.2 | 19.8 | FUSED | SIGLIP |
| BATCH SIZE 64 | 10.1 | 10.8 | 22.3 | 26 | FUSED | SIGLIP |
| ADAPTIVE IMAGE REPR SIZE 256 | 10.3 | 10.6 | 23.8 | 26.3 | FUSED | SIGLIP |
| DECODER AND QF HIDDEN SIZE 128 | 10.6 | 10.2 | 25.9 | 23.3 | FUSED | SIGLIP |
| LAYERNORM EPS 1E-5 | 9.9 | 9.8 | 22.8 | 26.1 | FUSED | SIGLIP |
| DROPOUT PROBABILITY 0.1 | 10 | 9.6 | 23.1 | 26.2 | FUSED | SIGLIP |
| ADAMW WITH DEFAULT EPS AND BETA2 | 10.7 | 9.7 | 23.2 | 26.8 | FUSED | SIGLIP |
| FINAL MODEL: LR 0.001 | 10.3 | 9.9 | 23.1 | 26.9 | FUSED | SIGLIP |
| FINAL MODEL: LR 0.0005 SEED0 | 10.4 | 10.8 | 25.7 | 26.7 | FUSED | SIGLIP |
| FINAL MODEL: LR 0.0001 | 10.4 | 10.6 | 26.1 | 28 | FUSED | SIGLIP |
| FINAL MODEL: LR 0.0003 | 11.2 | 10.4 | 26.8 | 27 | FUSED | SIGLIP |
| FINAL MODEL: LR 0.0006 | 9.5 | 10.6 | 23.9 | 22.4 | FUSED | SIGLIP |
| FINAL MODEL: LR 0.002 | 10.2 | 9.7 | 20.7 | 21.3 | FUSED | SIGLIP |
**A. Further Experimental Results.**
Results in Table 6 correspond to the algebra, measure, spatial, and pattern reasoning skills for the ablation experiments presented in the main text; they are a continuation of the results in Table 2, which included accuracy results for the counting, math, logic, and path skills.
Furthermore, experimental results in Tables 7 and 8 from [vlm-reasoners](https://www.comet.com/droberts308/vlm-reasoners/view/C6sw7GhOEifcK1S0eJL5i4rgx/panels) were obtained on a small subset of the SMART-101 dataset and can be reproduced using the end-to-end code [github.com/smarter-vlm/smarter](https://github.com/smarter-vlm/smarter) for the same hyperparameter and architectural choices. Table 7 gives results for the first four skills, counting, math, logic, and path, and Table 8 gives results for the remaining four skills, algebra, measure, spatial, and pattern.
_Table 7. Small Dataset Experimental Runs: Architecture and Hyperparameter Choices Impact on Skill Class Accuracy. From [vlm-reasoners](https://www.comet.com/droberts308/vlm-reasoners/view/C6sw7GhOEifcK1S0eJL5i4rgx/panels)._
**CHOICE** COUNTING MATH LOGIC PATH QF HEADS PDROP WD TEXT VISION
LR0.005 13.5 8.9 5.6 16.7 2 0.2 0.2 SIGLIP FUSED
LR0.0001 18.3 10.7 5.6 14.6 2 0.2 0.2 SIGLIP FUSED
DROPOUT 0.1 13.5 10.7 2.8 18.8 2 0.1 0.2 SIGLIP FUSED
EXTRA RESIDUAL IN DECODER 14.4 7.1 5.6 16.7 2 0.2 0.2 SIGLIP FUSED
SIGLIP VISION 12.5 7.1 5.6 16.7 2 0.2 0.2 SIGLIP SIGLIP
ADAPTIVE VISUAL REPR SIZE 64 13.5 7.1 5.6 16.7 2 0.2 0.2 SIGLIP FUSED
ADAPTIVE VISUAL REPR SIZE 64 18.3 8.9 5.6 16.7 2 0.2 0.2 SIGLIP FUSED
NO QF LAYER 12.5 5.4 5.6 18.8 2 0.2 0.2 SIGLIP FUSED
COMPOSITE ALL 15.4 10.7 5.6 18.8 2 0.2 0.2 SIGLIP FUSED
COMPOSITE NO TEXT 16.3 7.1 5.6 14.6 2 0.2 0.2 SIGLIP FUSED
COMPOSITE NO IMAGE 10.6 5.4 5.6 16.7 2 0.2 0.2 SIGLIP FUSED
VARIATION ON NON-ADAPTIVE IMAGE 17.3 3.6 5.6 14.6 2 0.2 0.2 SIGLIP FUSED
NO GELU IN DECODER 13.5 10.7 5.6 16.7 2 0.2 0.2 SIGLIP FUSED
ADD RESIDUAL BACK IN QF 16.3 8.9 11.1 14.6 2 0.2 0.2 SIGLIP FUSED
NO RESIDUAL IN QF 12.5 7.1 5.6 18.8 2 0.2 0.2 SIGLIP FUSED
QF INTERM. SIZE 128 17.3 5.4 5.6 16.7 2 0.2 0.2 SIGLIP FUSED
QF INTERM. SIZE 768 15.4 5.4 8.3 16.7 2 0.2 0.2 SIGLIP FUSED
COSINE SCHEDULER 16.3 8.9 11.1 14.6 2 0.2 0.2 SIGLIP FUSED
NO COSINE SCHEDULER 8.7 3.6 8.3 6.3 2 0.2 0.2 SIGLIP FUSED
RELU EVERYWHERE 14.4 8.9 11.1 14.6 2 0.2 0.2 SIGLIP FUSED
QF 3 HEADS 15.4 5.4 5.6 16.7 3 0.2 0.2 SIGLIP FUSED
QF 1 HEAD 17.3 5.4 8.3 16.7 1 0.2 0.2 SIGLIP FUSED
LSTM DECODER (NOT GRU) 14.4 8.9 5.6 16.7 2 0.2 0.2 SIGLIP FUSED
WD0 14.4 7.1 11.1 14.6 2 0.2 0 SIGLIP FUSED
WD0.05 14.4 7.1 11.1 14.6 2 0.2 0.05 SIGLIP FUSED
WD0.1 15.4 7.1 11.1 14.6 2 0.2 0.1 SIGLIP FUSED
WD0.2 15.4 7.1 11.1 14.6 2 0.2 0.2 SIGLIP FUSED
_Table 8. Small Dataset Experimental Runs: Architecture and Hyperparameter Choices Impact on Skill Class Accuracy. From [vlm-reasoners](https://www.comet.com/droberts308/vlm-reasoners/view/C6sw7GhOEifcK1S0eJL5i4rgx/panels)._
**CHOICE** ALGEBRA MEASURE SPATIAL PATTERN QF HEADS PDROP WD TEXT VISION
LR0.005 8.3 3.1 27.8 30 2 0.2 0.2 SIGLIP FUSED
LR0.0001 8.3 15.6 27.8 10 2 0.2 0.2 SIGLIP FUSED
DROPOUT 0.1 15 3.1 30.6 30 2 0.1 0.2 SIGLIP FUSED
EXTRA RESIDUAL IN DECODER 8.3 9.4 27.8 25 2 0.2 0.2 SIGLIP FUSED
SIGLIP VISION 10 15.6 33.3 15 2 0.2 0.2 SIGLIP SIGLIP
ADAPTIVE VISUAL REPR SIZE 64 11.7 6.3 27.8 25 2 0.2 0.2 SIGLIP FUSED
ADAPTIVE VISUAL REPR SIZE 64 10 9.4 30.6 20 2 0.2 0.2 SIGLIP FUSED
NO QF LAYER 6.7 12.5 27.8 30 2 0.2 0.2 SIGLIP FUSED
COMPOSITE ALL 11.7 3.1 30.6 30 2 0.2 0.2 SIGLIP FUSED
COMPOSITE NO TEXT 6.7 12.5 27.8 25 2 0.2 0.2 SIGLIP FUSED
COMPOSITE NO IMAGE 3.3 9.4 30.6 20 2 0.2 0.2 SIGLIP FUSED
VARIATION ON NON-ADAPTIVE IMAGE 6.7 9.4 30.6 20 2 0.2 0.2 SIGLIP FUSED
NO GELU IN DECODER 6.7 3.1 30.6 30 2 0.2 0.2 SIGLIP FUSED
ADD RESIDUAL BACK IN QF 10 12.5 27.8 25 2 0.2 0.2 SIGLIP FUSED
NO RESIDUAL IN QF 11.7 12.5 30.6 25 2 0.2 0.2 SIGLIP FUSED
QF INTERM. SIZE 128 6.7 12.5 25 25 2 0.2 0.2 SIGLIP FUSED
QF INTERM. SIZE 768 6.7 9.4 30.6 20 2 0.2 0.2 SIGLIP FUSED
COSINE SCHEDULER 10 12.5 27.8 25 2 0.2 0.2 SIGLIP FUSED
NO COSINE SCHEDULER 1.7 3.1 16.7 25 2 0.2 0.2 SIGLIP FUSED
RELU EVERYWHERE 8.3 12.5 30.6 15 2 0.2 0.2 SIGLIP FUSED
QF 3 HEADS 6.7 9.4 27.8 20 3 0.2 0.2 SIGLIP FUSED
QF 1 HEAD 6.7 12.5 30.6 25 1 0.2 0.2 SIGLIP FUSED
LSTM DECODER (NOT GRU) 6.7 12.5 27.8 25 2 0.2 0.2 SIGLIP FUSED
WD0 8.3 9.4 27.8 20 2 0.2 0 SIGLIP FUSED
WD0.05 8.3 9.4 27.8 20 2 0.2 0.05 SIGLIP FUSED
WD0.1 8.3 9.4 27.8 20 2 0.2 0.1 SIGLIP FUSED
WD0.2 8.3 9.4 25 25 2 0.2 0.2 SIGLIP FUSED
| [
"Denisa, Roberts",
"Lucas, Roberts"
] | 2024-06-13T00:00:00 | null | false | 0 | 0 | null | https://openreview.net/forum?id=Mf6ot5U7ni&name=pdf | https://arxiv.org/abs/2407.04212 | https://www.semanticscholar.org/paper/96a13e8290a6d2063b63eb936cfdb9d0b0f129e0 |
Solving Hard Mizar Problems with Instantiation and Strategy Invention | In this work, we prove over 3000 previously ATP-unproved Mizar/MPTP problems by using several ATP and AI methods, raising the number of ATP-solved Mizar problems from 75\% to above 80\%. First, we start to experiment with the cvc5 SMT solver, which uses several instantiation-based heuristics that differ from the superposition-based systems that were previously applied to Mizar, and add many new solutions. Then we use automated strategy invention to develop cvc5 strategies that largely improve cvc5's performance on the hard problems. In particular, the best invented strategy solves over 14\% more problems than the best previously available cvc5 strategy. We also show that different clausification methods have a high impact on such instantiation-based methods, again producing many new solutions. In total, the methods solve 3021 (21.3\%) of the 14163 previously unsolved hard Mizar problems. This is a new milestone over the Mizar large-theory benchmark and a large strengthening of the hammer methods for Mizar. | null | # Solving Hard Mizar Problems with Instantiation and Strategy Invention [∗]
Jan Jakubův[1,2], Mikoláš Janota[1], and Josef Urban[1]

1 Czech Technical University in Prague, Prague, Czech Republic
[email protected]
2 University of Innsbruck, Innsbruck, Austria
## 1 Introduction: Mizar, ATPs, Hammers
In this work, we prove over 3000 previously ATP-unproved Mizar/MPTP problems by using several ATP and AI methods. First, we start to experiment with the cvc5 SMT solver, which uses several instantiation-based heuristics that differ from the superposition-based systems previously applied to Mizar, and add many new solutions. Then we use the automated strategy-invention system Grackle to develop cvc5 strategies that largely improve cvc5's performance on the hard problems. In particular, the best invented strategy solves over 14% more problems than the best previously available cvc5 strategy. We also show that different clausification methods have a high impact on such instantiation-based methods, again producing many new solutions. In total, the methods raise the number of ATP-solved Mizar problems from 75% to above 80%. This is a new milestone over the Mizar large-theory benchmark and a large strengthening of the hammer methods for Mizar.
The Mizar Mathematical Library (MML) [1] is one of the earliest large libraries of formal mathematics, containing a wide selection of lemmas and theorems from various areas of mathematics. The MML and the Mizar system [26, 2, 15] have been used as a source of automated theorem proving (ATP) [31] problems for over 25 years, starting with the export of several Mizar articles done by the ILF system [10, 9]. Since 2003, the MPTP system [36, 37] has been used to export the MML in the DFG [16] and later TPTP [35] formats. In the earliest (2003) ATP experiments over the whole library, state-of-the-art ATPs could prove about 40% of these problems when their premises were limited to those used in the human-written Mizar proofs (the so-called bushy [1], i.e., easier, mode).
Since 2013, a fixed version of the MML (1147) and MPTP consisting of 57880 problems has
been used as a large benchmark for ATPs and related hammer [6] (large-theory) methods over
Mizar [29, 21, 34, 30, 17, 8]. When using many ATP and premise-selection methods, 56.2% of
the problems could be proved in [22]. This was recently raised to 75.5% [19], mainly by using
the learning-guided E [32] (ENIGMA [20, 13]) and Vampire [25] (Deepire [33]) systems.
Both E and Vampire are mainly saturation-style superposition systems. In recent years, however, instantiation-based systems and SMT solvers such as cvc5 [3], iProver [24], and Z3 [11] are becoming competitive even for problems that do not contain explicit theories in the SMT sense [5, 12, 14]. The problems that they solve are often complementary to those solved by the superposition-based systems.
_∗Supported by the Czech MEYS under the ERC CZ project no._ LL1902 POSTMAN, by the European
Union under the project ROBOPROX (reg. no. CZ.02.01.01/00/22_008/0004590), Amazon Research Awards,
EU ICT-48 2020 project no. 952215 TAILOR, ERC PoC grant no. 101156734 FormalWeb3, and by the Czech
Science Foundation project no. 24-12759S.
[1https://tptp.org/MPTPChallenge](https://tptp.org/MPTPChallenge)
## 2 Summary of the Involved Methods
We employ instantiation-based methods in cvc5 to solve automatically as many hard Mizar
problems as possible. Our main result is that the set of ATP-provable MPTP problems has
been increased by over 3,000, from 75.5% to 80.7%. All these problems are proved by the cvc5
system which we improve in several ways. First, we use the Grackle system [18] to automatically invent stronger strategies for MPTP. Our best strategy outperforms the previously best
cvc5 strategy by 14% and our best 7-strategy portfolio solves 8.8% more problems than the
corresponding CASC portfolio. We also combine strategy development with alternative clausification methods. This turns out to have a surprisingly high impact on the instantiation-based
system, contributing many new solutions. Finally, we obtain further solutions by modifying the
problems with premise selection. Ultimately, these methods together double the number of the
previously ATP-unproved Mizar problems solved by cvc5 from 1,534 to 3,021. We show that
the methods extend to previously unseen Mizar problems.
**Grackle Strategy Invention.** Grackle [18] is a system for the automated invention of a portfolio of solver strategies targeted to selected benchmark problems. A user provides a set of benchmark problems, and Grackle can automatically discover a set of diverse solver strategies that maximize the number of solved benchmark problems. Grackle supports the invention of good-performing strategies for several solvers, including the ATP solvers E [32], Vampire [25], and Lash [7], and the SMT solver Bitwuzla [27]. Support for additional solvers can be easily added by providing a parametrization of the solver strategy space and by implementing a simple wrapper to launch the solver. In this paper, we extend Grackle to support the SMT solver cvc5 [3], and we evaluate its capabilities on a first-order translation of Mizar problems.
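To make the workflow concrete, the sketch below shows the kind of invent-and-evaluate loop such a system runs: sample candidate parameter configurations, evaluate them on the benchmark, and greedily keep strategies that add newly solved problems. The option names in `sample_strategy` are made up for illustration, and `run_solver` is a user-supplied wrapper; Grackle's actual parameter spaces and search algorithm are considerably more involved.

```python
# Schematic sketch of portfolio-style strategy invention (not Grackle's actual
# search algorithm or cvc5's real option space). `run_solver` is a
# user-supplied callable wrapping the solver of interest.
import random
from typing import Callable, Dict, List, Set, Tuple


def sample_strategy(rng: random.Random) -> Dict[str, object]:
    # Toy parameter space with made-up option names for illustration only.
    return {
        "instantiation_mode": rng.choice(["enumerative", "cegqi", "e-matching"]),
        "restart_factor": rng.choice([1.1, 1.5, 2.0]),
        "random_seed": rng.randint(0, 10**6),
    }


def invent_portfolio(problems: List[str],
                     run_solver: Callable[[Dict[str, object], str], bool],
                     n_candidates: int = 100,
                     portfolio_size: int = 7,
                     seed: int = 0) -> Tuple[List[Dict[str, object]], Set[str]]:
    rng = random.Random(seed)
    candidates = [sample_strategy(rng) for _ in range(n_candidates)]
    solved_by = [{p for p in problems if run_solver(s, p)} for s in candidates]
    portfolio: List[Dict[str, object]] = []
    covered: Set[str] = set()
    for _ in range(portfolio_size):
        # Greedily pick the strategy that adds the most newly solved problems.
        best = max(range(len(candidates)), key=lambda i: len(solved_by[i] - covered))
        if not solved_by[best] - covered:
            break
        portfolio.append(candidates[best])
        covered |= solved_by[best]
    return portfolio, covered
```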
**Different Clausification Methods.** The Mizar problems are given as TPTP [35] problems
in first-order logic (FOF). For cvc5 we translate them to the SMT2 language [4] in the theory of uninterpreted functions (UF). By default, cvc5 converts to clausal normal form (CNF)
internally, but since instantiation-based heuristics seem sensitive to problem reformulation, we
also experiment with external clausification. This gives us syntactically different variants of the
problems and we can test whether cvc5 benefits from such alternative ways of clausification. We
use E as the external clausifier and we construct two more problem variants. The first variant
is produced by using E’s default clausification parameters. The second variant uses much more
aggressive introduction of definitions for frequent subformulas, introducing a new definition if
a subformula appears at least four times.
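Scheduling a strategy over such syntactic variants can be done very simply, as in the sketch below, which tries the default SMT2 translation and the two externally clausified variants one after another under a per-variant time budget. The `clausify` and `solve` arguments are user-supplied callables wrapping E and cvc5, since their exact command lines are installation-specific; nothing here is the paper's actual tooling.

```python
# Schematic sketch: running one solver strategy over syntactically different
# variants of the same problem (default SMT2 translation, E's default CNF,
# and a CNF with aggressive definition introduction). The callables wrap the
# external clausifier and the solver; their command lines are not shown here.
from typing import Callable, Optional


def solve_with_variants(problem_path: str,
                        clausify: Callable[[str, bool], str],
                        solve: Callable[[str, float], bool],
                        budget_per_variant: float = 20.0) -> Optional[str]:
    variants = [
        ("default-smt2", problem_path),                       # internal clausification
        ("e-default-cnf", clausify(problem_path, False)),     # E, default parameters
        ("e-aggressive-defs", clausify(problem_path, True)),  # E, frequent-subformula definitions
    ]
    for name, path in variants:
        if solve(path, budget_per_variant):
            return name   # report which variant produced the proof
    return None
```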
**Effects of Premise Selection.** Based on the success with problem reformulation, we perform
additional experiments, this time with different premise selection methods developed in our
prior work [19]. Namely, we evaluate Grackle and baseline strategies on the bushy variants of
the problems, on the strongest GNN (graph neural network [28]) premise selection slices, and
on LightGBM [23] premise selection slices. These variants were found complementary in our
previous experiments [19].
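For illustration, a premise-selection slice of a problem can be built by keeping only the top-ranked premises according to some scoring model (a GNN or LightGBM ranker in this setting). The sketch below assumes a generic `scores` mapping and is not the exact slicing used in the prior work.

```python
# Illustrative sketch: building a premise-selection "slice" of a problem by
# keeping only the k highest-scoring premises. `scores` would come from a
# trained ranker (e.g., a GNN or LightGBM model); here it is just a dict.
from typing import Dict, List


def premise_slice(conjecture: str, premises: List[str],
                  scores: Dict[str, float], k: int = 128) -> List[str]:
    ranked = sorted(premises, key=lambda p: scores.get(p, 0.0), reverse=True)
    return [conjecture] + ranked[:k]
```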
**Conclusions.** In the end, we have solved 3,021 (21.3%) of the remaining 14,163 hard Mizar
problems, raising the percentage of automatically proved Mizar problems from 75.5% to 80.7%.
This was mainly done by automatically inventing suitable instantiation-based strategies for
the cvc5 solver, using our Grackle system. Further improvements were obtained by using
alternative clausifications of the problems, and also alternative premise selections. Such problem
transformations have a surprisingly large effect on the instantiation-based procedures and are
likely to be explored further when creating strong portfolios for such systems.
## References
[1] Grzegorz Bancerek, Czeslaw Bylinski, Adam Grabowski, Artur Kornilowicz, Roman Matuszewski,
Adam Naumowicz, and Karol Pak. The role of the Mizar Mathematical Library for interactive
proof development in Mizar. J. Autom. Reason., 61(1-4):9–32, 2018.
[2] Grzegorz Bancerek, Czeslaw Bylinski, Adam Grabowski, Artur Kornilowicz, Roman Matuszewski,
Adam Naumowicz, Karol Pak, and Josef Urban. Mizar: State-of-the-art and beyond. In Manfred
Kerber, Jacques Carette, Cezary Kaliszyk, Florian Rabe, and Volker Sorge, editors, Intelligent
_Computer Mathematics - International Conference, CICM 2015, Washington, DC, USA, July 13-_
_17, 2015, Proceedings, volume 9150 of Lecture Notes in Computer Science, pages 261–279. Springer,_
2015.
[3] Haniel Barbosa, Clark W. Barrett, Martin Brain, Gereon Kremer, Hanna Lachnitt, Makai Mann,
Abdalrhman Mohamed, Mudathir Mohamed, Aina Niemetz, Andres Nötzli, Alex Ozdemir, Mathias Preiner, Andrew Reynolds, Ying Sheng, Cesare Tinelli, and Yoni Zohar. cvc5: A versatile and
industrial-strength SMT solver. In TACAS (1), volume 13243. Springer, 2022.
[4] Clark Barrett, Aaron Stump, Cesare Tinelli, et al. The SMT-LIB standard: Version 2.0. In
_Proceedings of the 8th international workshop on satisfiability modulo theories (Edinburgh, UK),_
volume 13, page 14, 2010.
[5] Jasmin Christian Blanchette, Sascha Böhme, and Lawrence C. Paulson. Extending sledgehammer
with SMT solvers. J. Autom. Reason., 51(1):109–128, 2013.
[6] Jasmin Christian Blanchette, Cezary Kaliszyk, Lawrence C. Paulson, and Josef Urban. Hammering
towards QED. J. Formalized Reasoning, 9(1):101–148, 2016.
[7] Chad E. Brown and Cezary Kaliszyk. Lash 1.0 (system description). In IJCAR, volume 13385 of
_Lecture Notes in Computer Science, pages 350–358. Springer, 2022._
[8] Karel Chvalovský, Konstantin Korovin, Jelle Piepenbrock, and Josef Urban. Guiding an instantiation prover with graph neural networks. In LPAR, volume 94 of EPiC Series in Computing,
pages 112–123. EasyChair, 2023.
[9] Ingo Dahn. Interpretation of a Mizar-like logic in first-order logic. In Ricardo Caferra and Gernot
Salzer, editors, FTP (LNCS Selection), volume 1761 of LNCS, pages 137–151. Springer, 1998.
[10] Ingo Dahn and Christoph Wernhard. First order proof problems extracted from an article in
the MIZAR Mathematical Library. In Maria Paola Bonacina and Ulrich Furbach, editors, Int.
_Workshop on First-Order Theorem Proving (FTP’97), RISC-Linz Report Series No. 97-50, pages_
58–62. Johannes Kepler Universität, Linz (Austria), 1997.
[11] Leonardo Mendonça de Moura and Nikolaj Bjørner. Z3: An Efficient SMT Solver. In C. R.
Ramakrishnan and Jakob Rehof, editors, TACAS, volume 4963 of LNCS, pages 337–340. Springer,
2008.
[12] Martin Desharnais, Petar Vukmirovic, Jasmin Blanchette, and Makarius Wenzel. Seventeen
provers under the hammer. In ITP, volume 237 of LIPIcs, pages 8:1–8:18. Schloss Dagstuhl Leibniz-Zentrum für Informatik, 2022.
[13] Zarathustra Amadeus Goertzel, Karel Chvalovský, Jan Jakubův, Miroslav Olsák, and Josef Urban.
Fast and slow Enigmas and parental guidance. In FroCoS, volume 12941 of Lecture Notes in
_Computer Science, pages 173–191. Springer, 2021._
[14] Zarathustra Amadeus Goertzel, Jan Jakubův, Cezary Kaliszyk, Miroslav Olsák, Jelle Piepenbrock,
and Josef Urban. The Isabelle ENIGMA. In ITP, volume 237 of LIPIcs, pages 16:1–16:21. Schloss
Dagstuhl - Leibniz-Zentrum für Informatik, 2022.
[15] Adam Grabowski, Artur Korniłowicz, and Adam Naumowicz. Mizar in a nutshell. J. Formalized
_Reasoning, 3(2):153–245, 2010._
[16] Reiner Hähnle, Manfred Kerber, and Christoph Weidenbach. Common syntax of the DFG-Schwerpunktprogramm deduction. Technical Report TR 10/96, Fakultät für Informatik, Universität
Karlsruhe, Karlsruhe, Germany, 1996.
[17] Edvard K. Holden and Konstantin Korovin. Graph sequence learning for premise selection. CoRR,
abs/2303.15642, 2023.
[18] Jan Hůla, Jan Jakubův, Mikolás Janota, and Lukás Kubej. Targeted configuration of an SMT
solver. In CICM, volume 13467 of Lecture Notes in Computer Science, pages 256–271. Springer,
2022.
[19] Jan Jakubův, Karel Chvalovský, Zarathustra Amadeus Goertzel, Cezary Kaliszyk, Mirek Olsák,
Bartosz Piotrowski, Stephan Schulz, Martin Suda, and Josef Urban. MizAR 60 for Mizar 50. In
_ITP, volume 268 of LIPIcs, pages 19:1–19:22. Schloss Dagstuhl - Leibniz-Zentrum für Informatik,_
2023.
[20] Jan Jakubův and Josef Urban. ENIGMA: efficient learning-based inference guiding machine. In
Herman Geuvers, Matthew England, Osman Hasan, Florian Rabe, and Olaf Teschke, editors,
_Intelligent Computer Mathematics - 10th International Conference, CICM 2017, Edinburgh, UK,_
_July 17-21, 2017, Proceedings, volume 10383 of Lecture Notes in Computer Science, pages 292–302._
Springer, 2017.
[21] Jan Jakubův and Josef Urban. Hammering Mizar by learning clause guidance. In John Harrison,
John O’Leary, and Andrew Tolmach, editors, 10th International Conference on Interactive Theo_rem Proving, ITP 2019, September 9-12, 2019, Portland, OR, USA, volume 141 of LIPIcs, pages_
34:1–34:8. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2019.
[22] Cezary Kaliszyk and Josef Urban. MizAR 40 for Mizar 40. J. Autom. Reasoning, 55(3):245–256,
2015.
[23] Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and TieYan Liu. Lightgbm: A highly efficient gradient boosting decision tree. In NIPS, pages 3146–3154,
2017.
[24] Konstantin Korovin. iprover - an instantiation-based theorem prover for first-order logic (system
description). In IJCAR, volume 5195 of Lecture Notes in Computer Science, pages 292–298.
Springer, 2008.
[25] Laura Kovács and Andrei Voronkov. First-order theorem proving and Vampire. In Natasha
Sharygina and Helmut Veith, editors, CAV, volume 8044 of LNCS, pages 1–35. Springer, 2013.
[26] Roman Matuszewski and Piotr Rudnicki. Mizar: the first 30 years. Mechanized Mathematics and
_Its Applications, 4:3–24, 2005._
[27] Aina Niemetz and Mathias Preiner. Bitwuzla at the SMT-COMP 2020. CoRR, abs/2006.01621,
2020.
[28] Miroslav Olsák, Cezary Kaliszyk, and Josef Urban. Property invariant embedding for automated
reasoning. In Giuseppe De Giacomo, Alejandro Catalá, Bistra Dilkina, Michela Milano, Senén
Barro, Alberto Bugarín, and Jérôme Lang, editors, ECAI 2020 - 24th European Conference on
_Artificial Intelligence, volume 325 of Frontiers in Artificial Intelligence and Applications, pages_
1395–1402. IOS Press, 2020.
[29] Michael Rawson and Giles Reger. A neurally-guided, parallel theorem prover. In FroCos, volume
11715 of Lecture Notes in Computer Science, pages 40–56. Springer, 2019.
[30] Michael Rawson and Giles Reger. lazyCoP: Lazy paramodulation meets neurally guided search.
In TABLEAUX, volume 12842 of Lecture Notes in Computer Science, pages 187–199. Springer,
2021.
[31] John Alan Robinson and Andrei Voronkov, editors. _Handbook of Automated Reasoning (in 2_
_volumes). Elsevier and MIT Press, 2001._
[32] Stephan Schulz. System description: E 1.8. In Kenneth L. McMillan, Aart Middeldorp, and Andrei
Voronkov, editors, LPAR, volume 8312 of LNCS, pages 735–743. Springer, 2013.
[33] Martin Suda. Improving ENIGMA-style clause selection while learning from history. In CADE,
volume 12699 of Lecture Notes in Computer Science, pages 543–561. Springer, 2021.
[34] Martin Suda. Vampire with a brain is a good ITP hammer. In FroCoS, volume 12941 of Lecture
_Notes in Computer Science, pages 192–209. Springer, 2021._
[35] Geoff Sutcliffe, Christian B. Suttner, and Theodor Yemenis. The TPTP problem library. In CADE,
volume 814 of Lecture Notes in Computer Science, pages 252–266. Springer, 1994.
[36] Josef Urban. MPTP – Motivation, Implementation, First Experiments. J. Autom. Reasoning,
33(3-4):319–339, 2004.
[37] Josef Urban. MPTP 0.2: Design, implementation, and initial experiments. J. Autom. Reasoning,
37(1-2):21–43, 2006.
| [
"Andrea, Kohlhase",
"Josef, Urban",
"Jan, Jakubův",
"Mikoláš, Janota",
"Laura, Kovács"
] | 2024-01-01T00:00:00 | null | false | 0 | 0 | null | https://link.springer.com/10.1007/978-3-031-66997-2_18 | https://arxiv.org/abs/2406.17762 | https://www.semanticscholar.org/paper/bcfd19422385cdb549e21639e89ee0dfb332b45d |
Solving Intricate Problems with Human-like Decomposition and Rethinking | In this paper, we introduce a novel reasoning framework DeAR (Decompose-Analyze-Rethink) for large language models (LLMs) to conduct intricate reasoning. Our key idea is inspired by human cognitive reasoning, which refines complex problem-solving by breaking it down into sub-questions within a Reasoning Tree and then updating prior answers based on the responses to these sub-questions. In our framework, we propose a Decompose-Analyze-Rethink cycle, which gradually forms a reasoning tree guiding the reasoning process. Specifically, given the problem, the Decompose stage introduces a prompt-based method to break it into simpler sub-ones at subsequent tree nodes. Then, the Analyze stage generates and self-checks the rationales at the node level. Last, the Rethink stage updates the rationales of parent nodes based on its children's feedback. Our reasoning paradigm is more flexible than state-of-the-art methods including Tree-of-Thoughts (ToT), and Graph-of-Thoughts (GoT), as each branch is autonomously generated without fixed settings, and moreover, allows for timely and globally rationale correction throughout the entire process. We conduct extensive experiments on three reasoning benchmarks including ScienceQA, StrategyQA, and GSM8K. Experimental results show that our approach can significantly reduce logical errors and enhance the performance with different LLMs. Our codes are available at: https://anonymous.4open.science/r/Coarse-to-Fine-F216/. | null | null | null | null | NeurIPS 2024 | true | 0 | 0 | null | https://neurips.cc/virtual/2024/poster/95441 | null | null |
Solving One-Third of the OEIS from Scratch | We have automatically discovered explanations for about one-third of the sequences in the Online Encyclopedia of Integer Sequences (OEIS). We briefly describe the basic setting consisting of a feedback loop that starts from zero knowledge and iterates between guessing the explanations, their verification, and training of the guessing methods. Then we describe several additions and experiments that led to the current set of solutions found in 600 iterations of the loop. We also analyze some of the solutions discovered. | null | [
"Thibault, Gauthier",
"Josef, Urban"
] | 2024-09-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
|
Solving for X and Beyond: Can Large Language Models Solve Complex Math Problems with More-Than-Two Unknowns? | Large Language Models (LLMs) have demonstrated remarkable performance in solving math problems, a hallmark of human intelligence. Despite high success rates on current benchmarks; however, these often feature simple problems with only one or two unknowns, which do not sufficiently challenge their reasoning capacities. This paper introduces a novel benchmark, BeyondX, designed to address these limitations by incorporating problems with multiple unknowns. Recognizing the challenges in proposing multi-unknown problems from scratch, we developed BeyondX using an innovative automated pipeline that progressively increases complexity by expanding the number of unknowns in simpler problems. Empirical study on BeyondX reveals that the performance of existing LLMs, even those fine-tuned specifically on math tasks, significantly decreases as the number of unknowns increases - with a performance drop of up to 70\% observed in GPT-4. To tackle these challenges, we propose the Formulate-and-Solve strategy, a generalized prompting approach that effectively handles problems with an arbitrary number of unknowns. Our findings reveal that this strategy not only enhances LLM performance on the BeyondX benchmark but also provides deeper insights into the computational limits of LLMs when faced with more complex mathematical challenges. | The Formulate-and-Solve strategy, a generalized prompting approach that effectively handles problems with an arbitrary number of unknowns, is proposed, and is revealed to enhance LLM performance on the BeyondX benchmark but also provides deeper insights into the computational limits of LLMs when faced with more complex mathematical challenges. | [
"Kuei-Chun, Kao",
"Ruochen, Wang",
"Cho-Jui, Hsieh"
] | 2024-07-06T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2407.05134v1 | https://arxiv.org/abs/2407.05134 | https://www.semanticscholar.org/paper/432b119c85f21462784f5be5d3b48b03ec423294 |
|
Steamroller Problems: An Evaluation of LLM Reasoning Capability with Automated Theorem Prover Strategies | This study presents the first examination of the ability of Large Language Models (LLMs) to follow reasoning strategies that are used to guide Automated Theorem Provers (ATPs). We evaluate the performance of GPT4, GPT3.5 Turbo and Google's recent Gemini model on problems from a steamroller domain. In addition to determining accuracy we make use of the Natural Language Processing library spaCy to explore new methods of investigating LLM's reasoning capabilities. This led to one alarming result, the low correlation between correct reasoning and correct answers for any of the tested models. We found that the models' performance when using the ATP reasoning strategies was comparable to one-shot chain of thought and observe that attention to uncertainty in the accuracy results is critical when drawing conclusions about model performance. Consistent with previous speculation we confirm that LLMs have a preference for, and are best able to follow, bottom up reasoning processes. However, the reasoning strategies can still be beneficial for deriving small and relevant sets of formulas for external processing by a trusted inference engine. | It is confirmed that LLMs have a preference for, and are best able to follow, bottom up reasoning processes, and the reasoning strategies can still be beneficial for deriving small and relevant sets of formulas for external processing by a trusted inference engine. | [
"Lachlan, McGinness",
"Peter, Baumgartner"
] | 2024-07-17T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2407.20244v1 | https://arxiv.org/abs/2407.20244 | https://www.semanticscholar.org/paper/d30e307a7015a9a2579d3c276f2dcbf2aea2e62d |
|
Steering Large Language Models between Code Execution and Textual Reasoning | While a lot of recent research focuses on enhancing the textual reasoning capabilities of Large Language Models (LLMs) by optimizing the multi-agent framework or reasoning chains, several benchmark tasks can be solved with 100% success through direct coding, which is more scalable and avoids the computational overhead associated with textual iterating and searching. Textual reasoning has inherent limitations in solving tasks with challenges in math, logics, optimization, and searching, which is unlikely to be solved by simply scaling up the model and data size. The recently released OpenAI GPT Code Interpreter and multi-agent frameworks such as AutoGen have demonstrated remarkable proficiency of integrating code generation and execution to solve complex tasks using LLMs. However, based on our experiments on 7 existing popular methods for steering code/text generation in both single- and multi-turn settings with 14 tasks and 6 types of LLMs (including the new O1-preview), currently there is no optimal method to correctly steer LLMs to write code when needed. We discover some interesting patterns on when models use code vs. textual reasoning with the evolution to task complexity and model sizes, which even result in an astonishingly inverse scaling law. We also discover that results from LLM written code are not always better than using textual reasoning, even if the task could be solved through code. To mitigate the above issues, we propose three methods to better steer LLM code/text generation and achieve a notable improvement. The costs of token lengths and runtime are thoroughly discussed for all the methods. We believe the problem of steering LLM code/text generation is critical for future research and has much space for further improvement. Project Page, Datasets, and Codes are available at https://yongchao98.github.io/CodeSteer/. | It is discovered that results from LLM written code are not always better than using textual reasoning, even if the task could be solved through code, so three methods to better steer LLM code/text generation are proposed and achieve a notable improvement. | [
"Yongchao, Chen",
"Harsh, Jhamtani",
"Srinagesh, Sharma",
"Chuchu, Fan",
"Chi, Wang"
] | 2024-10-04T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2410.03524v1 | https://arxiv.org/abs/2410.03524 | https://www.semanticscholar.org/paper/a56870071535a7d4b77eeaec4c7edf7823522f5d |
|
Step-by-Step Reasoning for Math Problems via Twisted Sequential Monte Carlo | Augmenting the multi-step reasoning abilities of Large Language Models (LLMs) has been a persistent challenge. Recently, verification has shown promise in improving solution consistency by evaluating generated outputs. However, current verification approaches suffer from sampling inefficiencies, requiring a large number of samples to achieve satisfactory performance. Additionally, training an effective verifier often depends on extensive process supervision, which is costly to acquire. In this paper, we address these limitations by introducing a novel verification method based on Twisted Sequential Monte Carlo (TSMC). TSMC sequentially refines its sampling effort to focus exploration on promising candidates, resulting in more efficient generation of high-quality solutions. We apply TSMC to LLMs by estimating the expected future rewards at partial solutions. This approach results in a more straightforward training target that eliminates the need for step-wise human annotations. We empirically demonstrate the advantages of our method across multiple math benchmarks, and also validate our theoretical analysis of both our approach and existing verification methods. | This paper introduces a novel verification method based on Twisted Sequential Monte Carlo (TSMC), which sequentially refines its sampling effort to focus exploration on promising candidates, resulting in more efficient generation of high-quality solutions. | [
"Shengyu, Feng",
"Xiang, Kong",
"Shuang, Ma",
"Yiming, Yang",
"Aonan, Zhang",
"Dong, Yin",
"Chong, Wang",
"Ruoming, Pang"
] | 2024-10-02T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2410.01920 | https://arxiv.org/abs/2410.01920 | https://www.semanticscholar.org/paper/71fdb55b39b8d76fd909f218c0d8de2c4fb6541f |
|
Stepwise Self-Consistent Mathematical Reasoning with Large Language Models | Using Large Language Models for complex mathematical reasoning is difficult, primarily due to the complexity of multi-step reasoning. The main challenges of this process include (1) selecting critical intermediate results to advance the procedure, and (2) limited exploration of potential solutions. To address these issues, we introduce a novel algorithm, namely Stepwise Self-Consistent Chain-of-Thought (SSC-CoT). SSC-CoT employs a strategy of selecting intermediate steps based on the intersection of various reasoning chains. Additionally, SSC-CoT enables the model to discover critical intermediate steps by querying a knowledge graph comprising relevant domain knowledge. To validate SSC-CoT, we present a new dataset, TriMaster100, tailored for complex trigonometry problems. This dataset contains 100 questions, with each solution broken down into scored intermediate steps, facilitating a comprehensive evaluation of the mathematical reasoning process. On TriMaster100, SSC-CoT triples the effectiveness of the state-of-the-art methods. Furthermore, we benchmark SSC-CoT on the widely recognized complex mathematical question dataset, MATH level 5, and it surpasses the second-best method by 7.2% in accuracy. Code and the TriMaster100 dataset can be found at: https://github.com/zhao-zilong/ssc-cot. | This work introduces a novel algorithm, namely Stepwise Self-Consistent Chain-of-Thought (SSC-CoT), which employs a strategy of selecting intermediate steps based on the intersection of various reasoning chains and enables the model to discover critical intermediate steps by querying a knowledge graph comprising relevant domain knowledge. | ## Stepwise Self-Consistent Mathematical Reasoning with Large Language Models
**Zilong Zhao** [1] **Yao Rong** [1] **Dongyang Guo** [1] **Emek G¨ozl¨ukl¨u** [1] **Emir G¨ulboy** [1] **Enkelejda Kasneci** [1]
**Abstract**
Using Large Language Models for complex mathematical reasoning is difficult, primarily due to the complexity of multi-step reasoning. The main challenges of this process include (1) selecting critical intermediate results to advance the procedure, and (2) limited exploration of potential solutions. To address these issues, we introduce a novel algorithm, namely Stepwise Self-Consistent Chain-of-Thought (SSC-CoT). SSC-CoT employs a strategy of selecting intermediate steps based on the intersection of various reasoning chains. Additionally, SSC-CoT enables the model to discover critical intermediate steps by querying a knowledge graph comprising relevant domain knowledge. To validate SSC-CoT, we present a new dataset, TriMaster100, tailored for complex trigonometry problems. This dataset contains 100 questions, with each solution broken down into scored intermediate steps, facilitating a comprehensive evaluation of the mathematical reasoning process. On TriMaster100, SSC-CoT triples the effectiveness of the state-of-the-art methods. Furthermore, we benchmark SSC-CoT on the widely recognized complex mathematical question dataset, MATH level 5, and it surpasses the second-best method by 7.2% in accuracy. Code and the TriMaster100 dataset can be found at: https://github.com/zhao-zilong/ssc-cot.

_Figure 1. Our SSC-CoT (Right) improves the ability of LLMs (Left) to solve complex mathematical questions._

**1. Introduction**

Large Language Models (LLMs) are increasingly being utilized for mathematical reasoning tasks (Imani et al., 2023; Wei et al., 2022; Kojima et al., 2022; Wang et al., 2022; Azerbayev et al., 2023; Chen et al., 2022; Xin et al., 2023; Trinh et al., 2024; Paranjape et al., 2023). However, these models predominantly focus on simpler mathematical problems (Cobbe et al., 2021; Ling et al., 2017; Roy & Roth, 2016; Patel et al., 2021). Tackling complex mathematical questions remains a significant challenge (Trinh et al., 2024; Azerbayev et al., 2023; Xin et al., 2023). This difficulty often arises from the foundational models' limited knowledge, impairing their ability to comprehend complex questions. Additionally, the long reasoning chains required for complex problem-solving necessitate carefully designed multi-step reasoning methods (Yao et al., 2023; Besta et al., 2023) to arrive at the final answer.

Although existing multi-step reasoning algorithms demonstrate notable capabilities in solving mathematical problems, they encounter challenges when addressing more complex mathematical questions. Identifying critical intermediate steps is important for guiding the model towards correct solutions in these difficult problems. However, existing approaches are not effective in discovering these critical intermediate steps. Additionally, the model tends to become stuck at an intermediate step, hindering further progress. To address these two challenges, we propose a novel framework named Stepwise Self-Consistent Chain-of-Thought (SSC-CoT), which enhances LLMs' capabilities in multi-step reasoning for complex mathematical questions.

The design of SSC-CoT is inspired by the scenario where humans tackle a complex problem. They often go through multiple attempts, reaching a promising intermediate step where they may encounter a roadblock. At this moment, a hint can facilitate their continued progress. In practice, SSC-CoT first selects a set of potential intermediate steps by determining their intersection across multiple reasoning chains.
¹Technical University of Munich, Munich, Germany. Correspondence to: Zilong Zhao <[email protected]>, Yao Rong <[email protected]>.
-----
It then evaluates the correctness of each step within
this intersection set and selects the most optimal ones for
progression. To overcome being stuck at a step, it also
queries a Knowledge Graph (KG) that contains external
mathematical knowledge. The information gathered, along
with identified key intermediate steps, is then crafted into
hints that guide and further prompt the model. Our algorithm enhances the model’s capability by providing them
with critical intermediate steps, thereby enabling it to discover important steps it would not have identified independently. As illustrated in Figure 1, we present a trigonometry
question to ChatGPT 4 (on the left), and it arrives at the
correct answers after a thousand attempts[1]. SSC-CoT (on
the right) helps the model identify critical intermediate steps
through verification and KG integration, enabling the model
to reach the correct answer within 50 attempts.
To fairly validate the effectiveness of multi-step reasoning methods in discovering critical intermediate steps, we
collected a dataset, TriMaster100, comprising 100 complex trigonometry questions (up to Mathematical Olympiad
difficulty level). Each question is accompanied by a solution divided into intermediate steps, each of which is
scored. Recognizing the challenge for LLMs to solve complex mathematical questions within a limited number of
attempts, TriMaster100 focuses on evaluating the model’s
ability through the scoring of intermediate steps. This differs
from existing mathematical question benchmarks, which
typically concentrate only on the correctness of the final
answer, without considering the evaluation of these intermediate phases. We posit that evaluating intermediate results is
crucial for complex mathematical questions. Given the typically low accuracy of all mathematical reasoning algorithms
on such datasets, distinguishing the true capabilities of these
algorithms based solely on final outcomes is inappropriate.
Beyond this dataset, we benchmark our model on MATH
level 5 (Hendrycks et al., 2021b), which is recognized as
a complex dataset, to showcase its ability in solving various types of mathematical questions. To summarize, our
contributions are as follows:
- We introduce a novel multi-step reasoning algorithm SSCCoT to tackle complex mathematical questions. This algorithm significantly improves LLM’s capabilities in identifying critical intermediate steps for problem-solving.
- We propose a procedure to establish a KG and allow LLM
to efficiently retrieve information in textual form, facilitating the generation of critical intermediate results.
- We provide a new dataset named TriMaster100 designed
for evaluating intermediate results in very complex mathematical questions.
¹Example question can be found at https://openai.com/research/improving-mathematical-reasoning-with-process-supervision.
- We benchmark SSC-CoT with state-of-the-art (SOTA)
multi-step reasoning algorithms on TriMaster100 and
MATH datasets, where our algorithm significantly surpasses others, demonstrating its ability in solving complex
questions by identifying critical intermediate steps.
**2. Related Work**
**LLMs for Mathematical Reasoning.** To enhance mathematical reasoning capabilities, recent efforts can be categorized into two approaches: one is training domain-specific
models such as LLEMMA (Azerbayev et al., 2023), while
the other leverages in-context learning methods without
training. The former method enhances model capabilities but demands substantial resources. Furthermore, this
specialization may lead to a reduced ability to understand
broader contexts, potentially compromising the models’ effectiveness in comprehending and solving questions outside
the fine-tuning dataset. Therefore, our focus is on an innovative in-context learning approach. A representative
of such an approach is Chain-of-Thought (CoT) (Kojima
et al., 2022; Wei et al., 2022). CoT significantly improves
the problem-solving abilities of Large Language Models
(LLMs) in mathematics by incorporating intermediate steps
into their outputs. Following this principle, Tree-of-Thought
(ToT) (Yao et al., 2023; Long, 2023) and Graph-of-Thought
(GoT) (Besta et al., 2023) prompt the model to evaluate the
current step and choose the next promising step. However,
ToT and GoT have a limitation: they consider only one step
ahead without emphasizing the overview of the problem,
which can slow down the problem-solving process. Our
method addresses this by generating the next step based
on the overlapping intermediate steps from multiple trials
(chains). CoT with Self-Consistency (CoT-SC) proposed
by Wang et al. (2022) similarly leverages the overlap between multiple chains. Nevertheless, it restricts this overlap
selection to the final answers. In contrast, our approach
focuses on the correctness of intermediate results, leading
to improved reasoning performance compared to CoT-SC.
**Retrieval Augmentation for Mathematical Reasoning**
The Retrieval-Augmented Generation (RAG) (Lewis et al.,
2020) technique was first proposed for knowledge-intensive
NLP tasks. The framework contains a retrieval component which can fetch relevant information from a structured
knowledge base, such as database or knowledge graph. This
framework has been successfully implemented for commonsense reasoning (Yu et al., 2022), and middle-school
algebra and geometry QA (Levonian et al., 2023). Studies (Xin et al., 2023; Yang et al., 2023) have explored the
integration of RAG with LLMs for mathematical reasoning, specifically for theorem proving. These works utilize
a library containing vector embeddings of definitions and
theorems for retrieval using cosine similarity based on the
question’s embedding. In this paper, our approach, utilizing
-----
Notably, if GPT-4 delivered a direct answer without explaining the reasoning process, we explicitly requested the model
to outline the intermediate steps involved (prompts in Appendix A.1). A question is considered solved only if both
the intermediate steps and the final answer provided are correct. Also due to the above reason, the Wolfram plugin in
GPT-4 (Inc.), which is a powerful computation plugin, is not
used since it cannot output all reasoning steps. The resulting dataset comprises questions broken down into various
intermediate steps, with each step being assigned a specific
score. The aggregation of scores for all these steps forms
an extensive evaluation, culminating in a total of 750 points
across the 100 questions. Comparing to prior math datasets,
TriMaster100 offers a thorough assessment of a model’s
reasoning capabilities by evaluating not just the final answer
but also the intermediate steps. This approach is grounded
in two fundamental reasons: (1) Current models often fail
to fully solve the problem, thereby not reaching the conclusive answer; (2) A correct final answer, if derived through
incorrect intermediate steps, should not be considered as a
demonstration of valid mathematical reasoning.
Figure 2 illustrates an example of the annotated intermediate steps with their scores in the TriMaster100 dataset.
This question encompasses 8 intermediate results, resulting in total 8 points if the question is successfully solved.
We recognize the possibility of parallel solution paths, e.g.,
step 1,2 and step 3,4. In such cases, scores for each distinct
path should be calculated independently, and the final scores
from these different paths are then summed up. It is important to understand that mathematical questions often allow
multiple valid solutions. Therefore, even if a reasoning process correctly reaches step 6 without adhering to steps from
1 to 5, we still attribute a final score of 6 if all its previous
steps are verified to be correct. Please note that TriMaster100 includes labeled final results, which allows users to
employ TriMaster100 as prior math datasets to evaluate the
accuracy of final answer.
Trigonometry was selected due to its perceived difficulty
and abstract nature (Sayster, 2023), making it ideal for complex mathematical question reasoning. Looking ahead, we
plan to integrate a KG to enhance mathematical reasoning abilities. Trigonometry’s relatively limited fundamental
identities facilitate the creation of a concise yet detailed
KG. This strategic focus on trigonometry ensures agility in
developing, validating, and refining methodologies. The TriMaster100 was independently audited by three researchers
to ensure its accuracy.
**3.2. Human-Level Performance**
We offer a preliminary yet informative comparison to
human-level performance by randomly selecting 10 questions from TriMaster100 and testing them with human par
**Question:**
If α ∈ (0, π/2), β ∈ (0, π/2), and tan α = (1 + sin β)/cos β, find the value of 2α − β.

[Figure 2 annotates eight scored intermediate steps (1 to 8 points) that rewrite 1 + sin β and cos β via half-angle identities, reach (tan(β/2) + 1)/(1 − tan(β/2)) = tan(β/2 + π/4), and conclude 2α − β = π/2.]

_Figure 2. Example of the annotated intermediate steps with their scores in TriMaster100._
a knowledge graph, diverges by focusing on providing relevant trigonometry identities directly linked to the question’s
elements, such as trigonometric functions and angles, enabling a more direct match than the cosine similarity method
used in (Xin et al., 2023; Yang et al., 2023) for theorem retrieval. This ensures a clearer relation between the question
and the information retrieved.
**3. TriMaster100 Dataset**
Existing datasets for complex mathematical questions include only the final answer or encapsulate the entire reasoning process in a single string, lacking clear intermediate
results. These datasets fall short in adequately distinguishing the nuanced problem-solving abilities required for such
tasks, because of the low accuracy current mathematical
reasoning algorithms achieve on complex questions. Relying solely on accuracy for evaluation proves inadequate
in these challenging contexts. To address this, we introduce the TriMaster100 dataset, comprising 100 challenging
trigonometry questions ranging from senior high school to
Mathematical Olympiad levels. TriMaster100’s innovative
evaluation methodology not only assesses final answer accuracy but also scores intermediate results, enabling a more
detailed and accurate evaluation of an algorithm’s problemsolving process. This fills a critical gap in assessing complex
mathematical reasoning.
**3.1. Dataset Construction**
To ensure the complexity of the questions, we employed
GPT-4 as a standard, presenting each question to it three
times as a prompt. A question was defined as complex if
GPT-4 failed to solve it completely on at least one occasion.
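As a rough illustration, this filtering criterion can be written as the sketch below; `ask_llm` and `is_fully_solved` are assumed stand-ins for the GPT-4 call and for the manual check that both the intermediate steps and the final answer are correct, not part of the released code.

```python
def is_complex(question, ask_llm, is_fully_solved, attempts=3):
    """A question is kept as complex if GPT-4 fails to solve it completely at least once."""
    for _ in range(attempts):
        answer = ask_llm(f"Solve the question: {question}.")
        if not is_fully_solved(question, answer):
            return True
    return False
```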
-----
_Figure 3. An example of Stepwise Self-Consistent Chain-of-Thought workflow._
ticipants. We recruited a group of five participants, each
holding a Master’s Degree. During the evaluation, each
participant was given a maximum of one hour to solve two
questions. Collectively, the five participants achieved 17
points out of a possible 63 for the ten questions. On average,
participants spent 9 minutes per question, indicating that
they did not develop new thoughts on solving the question
after that time. Only one question was completely solved.
The best-performing participant scored 9 points out of 13,
while two participants scored 0 for their questions. This
result indicates that questions in TriMaster100 are difficult.
**4. Stepwise Self-Consistent Chain of Thought**
In this section, we introduce the workflow of SSC-CoT, followed by the detail of the two core components in our algorithm: (1) The design of the KG in the context of trigonometry questions and the approach to retrieve information from
it (Section 4.2); (2) The procedure of selecting critical intermediate steps (Section 4.3).
**4.1. SSC-CoT Workflow**
Figure 3 presents the SSC-CoT workflow for two rounds
of trigonometry question querying. First, when presented
with a question Q, step 1 extracts a set of key information
from the question, which is represented as V = E(pθ, Q), where E(·) represents the extraction function and pθ denotes an LLM with parameter θ. With a set of extracted information V, SSC-CoT queries related information from the KG for round k, represented as rk = S(G, V), where S(·) represents the searching function and G denotes a KG (detail in Section 4.2). In step 2, the information r1 (k = 1), used as a hint with the question Q, forms a prompt for generating reasoning chains: Ci = G1(pθ, Q, r1) where i ∈ [1, N], and G1(·) is the function to generate a reasoning chain for round 1, with N = 3 in this example; the G1(·) prompt template is added in Appendix A.4. Each circle in the chain symbolizes an intermediate result, which is defined as a state x_i^j, with i and j referring to the i-th chain and position j.

In the first round, all generated states are used to identify those states that share identical mathematical meanings (indicated in green). Results preceding these overlapping states within the same reasoning chain are marked as Inactive (shown in grey), signifying their exclusion in subsequent overlap searches. Intermediate result selection detail is provided in Section 4.3. The selected overlapping states form a set denoted as S. Step 4 involves sending S along with the original question Q to a verifier. This is represented as Sv = {x_i^j ∈ S | V(Q, x_i^1 x_i^2 ... x_i^j) = 1}, where V denotes the verification function. It is important to note that, due to the identical nature of overlapping intermediate results, verification only requires a single chain of the result up to the intermediate result, rather than from all chains. In the current implementation of SSC-CoT, this verification role V(·) is fulfilled by the language model; its prompt template is added in Appendix A.3. In step 5, the intermediate results that do not pass the verification are discarded, while the ones that pass the verification are combined into a verified set of states Sv. This set is used to query related information from the KG, i.e., rk = S(G, E(pθ, Sv)). This information rk, combined with the question and intermediate results, serves as a new prompt for the subsequent round of querying. Step 7 generates reasoning chains for round 2, i.e., Ci = G(pθ, Q, r2, Sv) with k = 2; the prompt template of G is added in Appendix A.5. In this round, we evaluate not only the results from the current round but also those from round 1 that were not marked Inactive. The selection of a new overlapping result (orange circle) in step 8 triggers a repetition of steps 4 through 7.
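To make this round-by-round control flow concrete, the following is a minimal Python sketch of the SSC-CoT loop. The callables passed in (`extract`, `query_kg`, `generate_chain`, `find_overlaps`, `verify`) are assumed stand-ins for E(·), S(·), G(·), the Section 4.3 overlap selection, and V(·); this is an illustrative sketch, not the authors' implementation.

```python
# Illustrative sketch of the SSC-CoT round loop; the helper callables are
# assumptions standing in for E(.), S(.), G(.), overlap selection and V(.).
def ssc_cot_round_loop(question, kg, extract, query_kg, generate_chain,
                       find_overlaps, verify, n_chains=5, max_rounds=4):
    hints = query_kg(kg, extract(question))      # r_1 = S(G, E(p_theta, Q))
    verified = []                                # S_v: verified overlapping states
    chains = []
    for _ in range(max_rounds):
        # Steps 2/7: generate N reasoning chains from Q, the hints and (after round 1) S_v.
        chains += [generate_chain(question, hints, verified) for _ in range(n_chains)]
        # Steps 3/8: keep states whose mathematical meaning overlaps across chains.
        candidates = find_overlaps(chains)
        # Steps 4/5: verify each candidate along a single chain up to that state;
        # if nothing passes, reuse the previous S_v (see the fallback in Section 4.3).
        verified = [s for s in candidates if verify(question, s)] or verified
        # Step 6: turn the verified states into new hints via the knowledge graph.
        if verified:
            hints = query_kg(kg, extract(verified))
    return chains, verified
```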
-----
**Algorithm 1 Intermediate Result Selection**
**Input:** 1. Whole reasoning chains C. 2. List of the deepest intermediate result state in each reasoning chain: L_all
**Output:** Selected state list: L_sn
1: N ← length(L_all)
2: L_el ← [ ] ▷ // List to save eliminated states.
3: for i ← 0 to N − 1 do
4:   for j ← i + 1 to N − 1 do
5:     if L_all[j] in L_el then Continue
6:     if L_all[i] in L_el then Break
7:     ▷ // sel is the eliminated state or None.
8:     sel ← PairWiseSelection(L_all[i], L_all[j], C)
9:     if sel != None then
10:      Add sel to L_el
11: L_sn = L_all − L_el
12: return L_sn
remaining functions, we search for nodes linked with them
and retrieve their node information. Second, we integrate all
extracted trigonometric functions with the involved angles.
In this context, the trigonometric function serves to identify
nodes, while the involved angles are used to match edges.
Finally, we combine the data collected from both steps,
removing any redundant details. This refined information is
then utilized as contextual hints within the prompt, aiding
in solving the question.
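Assuming the KG is stored as a networkx directed graph with an "info" attribute on nodes and an "angle" attribute on edges (attribute names are assumptions for illustration, not the released data format), the three-step query S(G, V) described above might look like this sketch:

```python
import networkx as nx

# Basic functions that are temporarily excluded in the first step.
BASIC = {"sin(A)", "cos(A)", "tan(A)", "sec(A)", "csc(A)", "cot(A)"}

def query_kg(G: nx.DiGraph, functions, angles):
    """Illustrative three-step S(G, V): node lookup, angle matching, deduplication."""
    hints = []
    # Step 1: for the remaining (non-basic) patterns, retrieve info from linked nodes.
    for f in functions:
        if f in BASIC or f not in G:
            continue
        for nbr in nx.all_neighbors(G, f):
            hints.append(G.nodes[nbr].get("info", nbr))
    # Step 2: use every extracted function to identify nodes and the angles to match edges.
    for f in functions:
        if f not in G:
            continue
        for _, v, data in G.out_edges(f, data=True):
            if data.get("angle") in angles:
                hints.append(G.nodes[v].get("info", v))
    # Step 3: combine both passes and remove redundant entries, preserving order.
    return list(dict.fromkeys(hints))
```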
As users apply our method to solve trigonometry problems,
certain conclusions drawn from these problems can serve
as lemmas for subsequent questions. Our KG is designed
to be expandable. We offer an interface that allows users
to add new nodes and edges to the KG. As more lemmas
are incorporated, the KG becomes robust, providing more
relevant information for future use.
**4.3. Intermediate Result Selection**
In SSC-CoT, we posit that intermediate results in overlaps
among multiple reasoning chains can be beneficial for subsequent reasoning stages. The intuition behind this is that
errors in reasoning can manifest in various ways. But to accurately solve a problem, different methodologies are likely
to converge on certain key intermediate results; Additionally, when we need to select a single state from multiple
options within the same reasoning chain, the deepest state is
chosen. This selection criterion is based on the observation
that all states leading up to the deepest one contribute to
its deduction, thereby making the selection of earlier states
unnecessary for our purpose. Based on these rationales, we
explain how we arrive at S, as illustrated in Figure 3 for
steps 4 and 8 .
To identify overlapping intermediate results, SSC-CoT in
_Figure 4. A subset of knowledge graph for trigonometry._
The algorithm proceeds until it reaches a predefined number of rounds or queries.
To conclude the final result from SSC-CoT, we will first
find out which intermediate result contains the conclusion
statement. The final result is then derived from the majority
vote among all these conclusion statements.
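A minimal sketch of this final-answer step, assuming each chain is a list of intermediate-result strings and that an `is_conclusion` predicate (an assumption for illustration) identifies conclusion statements:

```python
from collections import Counter

def majority_vote_conclusion(chains, is_conclusion):
    """Return the most frequent conclusion statement across all reasoning chains."""
    conclusions = []
    for chain in chains:
        for state in reversed(chain):   # the conclusion is the last matching state in a chain
            if is_conclusion(state):
                conclusions.append(state)
                break
    if not conclusions:
        return None
    return Counter(conclusions).most_common(1)[0][0]
```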
**4.2. Knowledge Graph Design and Exploration**
**Design.** Figure 4 shows a subset of our KG. There are two types of nodes: (1) **Conceptual Node** and (2) **Theorem Node**. The Conceptual Node refers to foundational elements in trigonometry, for example, sin³ x and cos x. On the other hand, the Theorem Node represents general mathematical propositions or axioms, such as cos(π/2) = 0. The edges between nodes represent the directional relationships or dependencies between the concepts. Structured as a directed graph, the KG features four types of connections: (1) **Dependency Link**, illustrating a concept's reliance on another. For instance, a link from the sin x node to tan x = sin x/cos x indicates the latter's dependency on sin x. (2) **Derivation Link**, signifying a derivation or logical progression from one concept or identity to another. An example is a link from sin x to sin 3x = 3 sin x − 4 sin³ x, indicating the derivation of the latter from the former. (3) **Application Link**, employed when a concept is utilized to deduce a specific instance or case. For instance, a link from cos x to cos(π/2) = 0 demonstrates the application of cos x to a particular scenario. (4) **Identity Link**, which connects two identity nodes, demonstrating how one identity relates to or is transformed into another, such as tan x to tan x = sin x/cos x.
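As a concrete and purely illustrative sketch, these node and edge types can be encoded with networkx; the attribute names and the specific triples below are assumptions, not the released knowledge graph.

```python
import networkx as nx

kg = nx.DiGraph()
# Conceptual nodes: foundational trigonometric elements.
kg.add_node("sin x", kind="conceptual")
kg.add_node("cos x", kind="conceptual")
kg.add_node("tan x", kind="conceptual")
# Theorem nodes: general propositions or identities.
kg.add_node("tan x = sin x / cos x", kind="theorem")
kg.add_node("sin 3x = 3 sin x - 4 sin^3 x", kind="theorem")
kg.add_node("cos(pi/2) = 0", kind="theorem")
# The four edge types described above.
kg.add_edge("sin x", "tan x = sin x / cos x", relation="dependency")
kg.add_edge("sin x", "sin 3x = 3 sin x - 4 sin^3 x", relation="derivation")
kg.add_edge("cos x", "cos(pi/2) = 0", relation="application")
kg.add_edge("tan x", "tan x = sin x / cos x", relation="identity")
```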
**Information Retrieval.** We employ in-context learning to
distill features from the question, i.e., function E(·). Details
of the query template can be found in Appendix A.2. The
extracted features are categorized into two segments: (1)
trigonometric functions, e.g., sin x, cos² x, sin x cos y, and (2) associated angles, e.g., π/2, 3x, π − x. Our approach to
query the KG, i.e., function S(·), contains three steps. First,
we temporarily exclude sin x, cos x, tan x, sec x, csc x and
cot x from the extracted trigonometric functions. For the
-----
_Figure 5. Chains of thought with more than one group of overlapping intermediate result scenarios. (a) Two overlapping intermediate_
result groups, overlapping nodes in different chains. (b) Two overlapping intermediate result groups, nodes from different group appear
in one chain. (c) Two overlapping intermediate result groups, nodes from different group appear in two chains. (d) More than two
overlapping intermediate result groups.
corporates a step for assessing the similarity between pairs
of results. This process involves converting each intermediate result into a TF (Term Frequency) - IDF (Inverse Document Frequency) vector. We then compute the pairwise
cosine similarity for all vector pairs. If the cosine similarity
between any two vectors exceeds the pre-set threshold T
(T = 0.999), those intermediate results are deemed overlapping. TF-IDF and cosine similarity are widely utilized
in information retrieval to calculate text similarity (Schütze et al., 2008). Please note that our algorithm enables a _human-in-the-loop_ option (detail in Section 5.2), as the overlapping
states selection can be executed by human participants.
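A minimal sketch of this overlap test with scikit-learn, treating each intermediate result as a short text document and using the stated threshold T = 0.999:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def overlapping_pairs(intermediate_results, threshold=0.999):
    """Return index pairs of intermediate results whose TF-IDF cosine similarity exceeds T."""
    vectors = TfidfVectorizer().fit_transform(intermediate_results)
    sims = cosine_similarity(vectors)
    n = len(intermediate_results)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if sims[i, j] > threshold]
```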
Among all the overlapping intermediate results, we assess their respective priorities and choose those with the highest priority. Given two groups defined as A = {x_{a_i}^{a_j}} and B = {x_{b_i}^{b_j}} with A ∩ B = ∅: if A can be inferred from B, then B will not be selected, as A is more advanced in the reasoning chain. Therefore, we focus on the states from A and B that come from the same chain m, and denote their positions in that chain as a_j|_{a_i=m} and b_j|_{b_i=m}, respectively. The chain numbers m form a set M. Concretely, the group selected after the comparison of A and B is

$$\begin{cases} B, & \text{if } \forall m \in M,\; b_j|_{b_i=m} > a_j|_{a_i=m}, \\ A, & \text{if } \forall m \in M,\; a_j|_{a_i=m} > b_j|_{b_i=m}, \\ A \cup B, & \text{otherwise.} \end{cases} \tag{1}$$
The first condition represents the scenario that the position
of states in group B is consistently deeper than those in A,
indicating that A is used to infer B. In this case, only the
group B will be kept. Similarly, A is the inference of B
if the second condition fulfilled. The first two conditions
are depicted in Figure 5 (b). Beyond these two conditions,
the two groups could be independent as depicted in Figure 5 (a). In this scenario, M = ∅ indicating that there
is no overlapping chain between two groups. Figure 5 (c)
illustrates that the two groups are intertwined. This can
occur when simplifying a fractional expression: one might
choose to simplify the numerator before the denominator,
or vice versa. These steps can proceed in parallel but this
simultaneity is not reflected in the reasoning chain.
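A sketch of the pairwise rule in Eq. (1) is given below, written so that it returns the eliminated group (or None when both groups are kept), which is the contract Algorithm 1 expects from PairWiseSelection. Representing a group as a dict mapping chain id to the state's position in that chain is an assumption for illustration.

```python
def pair_wise_selection(group_a, group_b):
    """Apply Eq. (1); return the eliminated group, or None if A and B are both kept."""
    shared = set(group_a) & set(group_b)      # M: chains containing states from both groups
    if shared and all(group_b[m] > group_a[m] for m in shared):
        return group_a                        # B is deeper on every shared chain: keep B only
    if shared and all(group_a[m] > group_b[m] for m in shared):
        return group_b                        # A is deeper on every shared chain: keep A only
    return None                               # independent or intertwined groups: keep A and B
```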
Above mentioned scenarios only contain two overlapping
groups, Figure 5d illustrates a more complex scenario which
contains seven groups of overlapping intermediate results.
The selection process for determining which group(s) of
intermediate results to utilize is formalized in Algorithm 1.
In Algorithm 1, we prepare (1) the whole reasoning chains C and (2) the list of the deepest overlapping intermediate result state in each reasoning chain, L_all. It is important to note that for each group of overlapping intermediate results, only one state will be included in L_all. The PairWiseSelection function, outlined in line 8, ascertains the relationship between any L_all[i] and L_all[j] within the reasoning chains, adhering to the rules previously defined for selection between two groups. After completing all pairwise comparisons, the algorithm returns the final selected states.
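Continuing the sketch above, the selection loop of Algorithm 1 can be written as follows; `pair_wise_selection` is assumed to follow the eliminated-or-None contract of line 8, and the chains C are folded into the group encoding.

```python
def intermediate_result_selection(l_all, pair_wise_selection):
    """Illustrative version of Algorithm 1: return L_sn = L_all - L_el."""
    eliminated = []                                   # L_el
    n = len(l_all)
    for i in range(n):
        for j in range(i + 1, n):
            if l_all[j] in eliminated:                # line 5
                continue
            if l_all[i] in eliminated:                # line 6
                break
            loser = pair_wise_selection(l_all[i], l_all[j])
            if loser is not None:                     # lines 9-10
                eliminated.append(loser)
    return [g for g in l_all if g not in eliminated]
```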
Finally, in scenarios where no overlap among intermediate
results is observed, we will forward the final state from each
reasoning chain to the verifier for further progression. If still
no intermediate result are selected, the most recent set of
intermediate results and related information will be reused
for the current round of reasoning. However, each set of
intermediate results and its related information is limited to
a maximum of two uses. Should there be no new overlapping intermediate results after two uses, we will revert to
the set of intermediate results and related information that
precedes the recently used one. This backtracking process
will continue until we utilize the first set of overlapping
intermediate results and related information.
**5. Experiment**
In this section, we first introduce the experimental setup,
including datasets, state-of-the-art baselines, and evaluation
metrics. Then, we present the quantitative results, followed
-----
|Task|Algebra|Counting and Probability|Geometry|Number Theory|Precalculus|Prealgebra|Intermediate Algebra|Total|
|---|---|---|---|---|---|---|---|---|
|#question|307|123|132|154|135|193|280|1324|
|LLEMMA-7b|9.1|4.1|2.3|3.9|2.2|9.8|2.1|5.3|
|LLEMMA-34b|10.1|3.3|4.5|3.2|2.2|13.0|2.9|5.6|
|ToT|27.4|14.6|3.8|11.0|1.5|34.7|3.2|15.3|
|CoT-SC|37.8|17.9|3.8|22.7|4.4|38.3|3.2|20.2|
|(Ours) SSC-CoT|42.7|25.2|15.9|31.8|7.4|49.7|8.9|27.4|
_Table 1. Result on MATH. Majority voting with k = 20 is done for LLEMMA and CoT-SC. Best result is indicated in bold._
up with a qualitative result comparison to further demonstrate the advantage of our method.
**5.1. Experiment Setup**
**Datasets.** Our method is evaluated against other SOTA
mathematical reasoning algorithms using the TriMaster100
dataset. Additionally, we benchmark these methods using
the MATH dataset (Hendrycks et al., 2021a). The MATH
dataset comprises 12,500 questions from mathematics competitions, segmented into five ascending levels of difficulty,
from level 1 to level 5. Given that our algorithm is specifically designed to tackle complex mathematical questions,
our experiments focus exclusively on level 5 questions, the highest difficulty tier in the MATH dataset. This level encompasses 1,324 questions spanning seven math categories:
Algebra, Counting and Probability, Geometry, Number Theory, Precalculus, Prealgebra, and Intermediate Algebra. The
number of questions in each category is given by Table 1.
**Baselines.** In our research, we benchmark our SSC-CoT
algorithm against other advanced in-context learning algorithms: Tree-of-Thought (ToT) (Yao et al., 2023) and
CoT-SC (Wang et al., 2022). All three - ToT, CoT-SC, and
our SSC-CoT - are implemented using GPT-3.5. Graph-of-Thought (GoT) (Besta et al., 2023) is also a SOTA in-context multi-step reasoning algorithm. However, GoT's
design, which is not tailored specifically for mathematical reasoning, necessitates that input question be broken
down into sub-tasks – a requirement challenging to meet
for mathematical questions. Therefore, GoT was not included as a baseline in our experiments. Beyond in-context
learning algorithms, our research also encompasses benchmarks against LLEMMA (Azerbayev et al., 2023), a language
model explicitly developed for mathematical tasks and acclaimed for its SOTA performance across diverse mathematical datasets. Our experiments are conducted using both the
7B and 34B versions of the LLEMMA model.
For SSC-CoT, we generate 5 reasoning chains per round,
with a limit of 4 rounds, resulting in a maximum of 20
reasoning chains per question. In the case of ToT, we configure it to produce 5 steps at each level, selecting one for
further development. We cap the LLM queries at 20 per
question, counting only those queries that generate thoughts,
excluding state evaluations using the LLM. For CoT-SC, we
perform 20 queries per question to the foundational model
and apply a majority vote mechanism on the outcomes. For
the LLEMMA experiments, we follow the CoT-SC procedure, with the only change being the replacement of the
foundational model from the GPT-3.5 API to LLEMMA.
Except for SSC-CoT, all other baselines use the same fewshot learning template, which is added in Appendix A.6.
SSC-CoT cannot use the same template because SSC-CoT
needs to extract intermediate results, therefore the expected
output is different from other baselines.
**Evaluation Metrics.** On the TriMaster100 dataset, we
compute scores for intermediate results as introduced in Section 3. Specifically, we examine the deepest intermediate
result correctly achieved by the model. The sum of scores
over all questions will be used as the final result. Note that
the maximum score on TriMaster100 is 750. On MATH, we
use the accuracy of answers generated by language models
to indicate the model performance.
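A small sketch of the TriMaster100 metric, assuming each question carries an ordered list of (statement, score) steps and that the statements the model reached correctly are known per question; the data layout is an assumption for illustration.

```python
def trimaster_score(questions, reached_by_question):
    """Sum, over questions, the score of the deepest intermediate result reached correctly."""
    total = 0
    for q in questions:
        reached = reached_by_question.get(q["id"], set())
        best = 0
        for statement, score in q["steps"]:
            if statement in reached:
                best = max(best, score)
        total += best
    return total   # maximum attainable on TriMaster100 is 750
```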
**5.2. Quantitative Results**
**Ablation Study.** In this section, we evaluate the efficacy
of the key components in our SSC-CoT, the KG and intermediate result selection, by introducing three variants: **SSC-CoT-HITL**, **SSC-CoT-HITL\KG** and **SSC-CoT\KG**. 'HITL' denotes 'human-in-the-loop', indicating that SSC-CoT-HITL incorporates human experts to select overlapping intermediate results, whereas SSC-CoT-HITL\KG represents SSC-CoT-HITL without the KG. SSC-CoT\KG represents the variant of SSC-CoT without using the KG. Concretely, when comparing SSC-CoT and SSC-CoT-HITL with SSC-CoT\KG and SSC-CoT-HITL\KG in Figure 6, we see
that utilizing KG efficiently improves the model’s reasoning capabilities. Furthermore, the notable performance gap
between SSC-CoT-HITL and SSC-CoT indicates a substantial boost from HITL intervention. When neither KG nor
intermediate result selection is used, which is CoT-SC in
Figure 6, SSC-CoT-HITL significantly outperforms, nearly
-----
_Q: Simplify sin 3A/(1 + 2 cos 2A)_
**Reasoning from SSC-CoT** *
——————————————
**Round 1 - Given: Q, r1**
**Sv:** sin 3A = 3 sin A − 4 sin³ A
**Round 2 - Given: Q, r2, Sv**
**Sv:** (3 sin A − 4 sin³ A)/(1 + 2 cos 2A) = sin A(3 − 4 sin² A)/(1 + 2 cos 2A)
**Round 3 - Given: Q, r3, Sv**
**Sv:** (1 + 2 cos 2A) = 3 − 4 sin² A
**Result:** sin 3A/(1 + 2 cos 2A) = (3 sin A − 4 sin³ A)/(3 − 4 sin² A) = sin A
**Reasoning from ToT**
——————————————
**Step 1:** Use cos 2A = 2 cos² A − 1, sin 3A/(1 + 2(2 cos² A − 1))
...
**Step 5:** Use identity sin² A + cos² A = 1 to rewrite the denominator as 4(1 − sin² A) − 1
**Step 6:** Factoring out a sin A in the numerator, we get sin A · sin 2A/(3 − 4 sin² A)
...
*In this problem, r1 and r3: "1. sin 3A = 3 sin A − 4 sin³ A, 2. cos 2A = 1 − 2 sin² A, 3. cos 2A = 2 cos² A − 1". r2: "1. sin 3A = 3 sin A − 4 sin³ A."
_Table 2. Solution provided by SSC-CoT (Top) and by ToT (Below)_
for a question from TriMaster100. Our SSC-CoT solves this question correctly based on Sv highlighted in colors, while ToT makes
a factual mistake (in red) and cannot arrive at the correct answer.
**6. Conclusion**
In this study, we introduce the Stepwise Self-Consistent
Chain-of-Thought (SSC-CoT) algorithm, tailored for complex mathematical problem-solving. SSC-CoT improves the
LLM’s mathematical reasoning by identifying critical intermediate results through the intersection of diverse reasoning
chains and integrating a knowledge graph. To evaluate SSC-CoT, we collected a new dataset, TriMaster100, with 100
complex trigonometry questions, each divided into scored
intermediate steps totaling 750 points. Results on TriMaster100 and MATH level 5 datasets show SSC-CoT’s superior
performance compared to other methods, indicating its effectiveness in solving complex mathematical questions.
**Limitations and Future Work.** The use of human-in-the-loop intermediate result selection significantly enhances
results in Figure 6. This indicates the potential for enhancing
automatic detection of overlapping intermediate results. In
our future work, we plan to deploy a reinforcement learning
algorithm to train a reward model for selecting overlapping
results based on human decisions. Moreover, the verification
is conducted by querying the LLM with the given prompt.
We believe that incorporating a more robust verifier, as
discussed in (Lightman et al., 2023; Dhuliawala et al., 2023),
has the potential to further improve SSC-CoT.
**Ethical Statement.** In this research, our goal is to improve
the LLMs' capabilities to solve complex mathematical questions. We also incorporated human experts' knowledge in
some experiments to harvest an effective synergy between
humans and AI models. To protect user privacy and rights,
we have firmly followed the guidelines and regulations provided by our institution. We believe that by making AI more
accessible, acceptable, and user-friendly, we can harness its
potential to better assist humans.
_Figure 6. Results on TriMaster100, full score is 750._
tripling the score achieved by CoT-SC. Hence, KG and high
quality intermediate result selection are both effective.
**Comparison with SOTA methods.** In this section, we
benchmark our SSC-CoT with baselines on both TriMaster100 and MATH Level 5 to show its generalizability
in solving complex mathematical questions. On TriMas**ter100, our SSC-CoT in Figure 6 surpasses all other base-**
lines. For instance, SSC-CoT is 34% higher than CoT-SC,
the second-best result. With a more general version of SSC-CoT, i.e., where no KG is used, SSC-CoT\KG is still 29% higher
than CoT-SC. The accuracy on final answer that all the baselines achieved on TriMaster100 is discussed in Appendix B.
On MATH, the comparison of our general method, SSC-CoT\KG, with other baselines is shown in Table 1. Overall, SSC-CoT achieves the highest average accuracy and
surpasses the second-best result by a large margin of 7.2%.
Both LLEMMA models show lower performance compared
to other baseline models. This can be attributed to their
relatively smaller size, resulting in reduced competence in
understanding and reasoning compared to GPT-3.5. It is
worth noting that the performance across all algorithms is
relatively better for Algebra and Prealgebra questions, reflecting the strengths and weaknesses of LLMs in processing
different types of mathematical queries.
**5.3. Qualitative Results**
Table 2 demonstrates a solution provided by our SSC-CoT.
It solves this question within 3 rounds. With the related
information in each round rk and Sv obtained from the last
round, SSC-CoT is able to discover critical intermediates at
each round, leading to the correct answer. ToT also focuses
on simplifying sin 3A to sin A, but it makes a factual mistake that sin 3A can be decomposed to sin 2A · sin A. This
indicates that without related information, it is difficult for
the model to deploy the correct identity. More qualitative
results can be found in Appendix C.
-----
**References**
Azerbayev, Z., Schoelkopf, H., Paster, K., Santos, M. D.,
McAleer, S., Jiang, A. Q., Deng, J., Biderman, S., and
Welleck, S. Llemma: An open language model for mathematics. arXiv preprint arXiv:2310.10631, 2023.
Besta, M., Blach, N., Kubicek, A., Gerstenberger, R., Gianinazzi, L., Gajda, J., Lehmann, T., Podstawski, M.,
Niewiadomski, H., Nyczyk, P., et al. Graph of thoughts:
Solving elaborate problems with large language models.
_arXiv preprint arXiv:2308.09687, 2023._
Chen, W., Ma, X., Wang, X., and Cohen, W. W. Program
of thoughts prompting: Disentangling computation from
reasoning for numerical reasoning tasks. arXiv preprint
_arXiv:2211.12588, 2022._
Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H.,
Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano,
R., et al. Training verifiers to solve math word problems.
_arXiv preprint arXiv:2110.14168, 2021._
Dhuliawala, S., Komeili, M., Xu, J., Raileanu, R., Li, X.,
Celikyilmaz, A., and Weston, J. Chain-of-verification
reduces hallucination in large language models. arXiv
_preprint arXiv:2309.11495, 2023._
Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart,
S., Tang, E., Song, D., and Steinhardt, J. Measuring
mathematical problem solving with the math dataset. In
_Thirty-fifth Conference on Neural Information Processing_
_Systems Datasets and Benchmarks Track, 2021a._
Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart,
S., Tang, E., Song, D., and Steinhardt, J. Measuring mathematical problem solving with the math dataset. arXiv
_preprint arXiv:2103.03874, 2021b._
Imani, S., Du, L., and Shrivastava, H. Mathprompter: Mathematical reasoning using large language models. arXiv
_preprint arXiv:2303.05398, 2023._
[Inc., W. R. Mathematica online, Version 14.0. URL https:](https://www.wolfram.com/mathematica)
[//www.wolfram.com/mathematica. Champaign,](https://www.wolfram.com/mathematica)
IL, 2024.
Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., and Iwasawa,
Y. Large language models are zero-shot reasoners. URL
_https://arxiv. org/abs/2205.11916, 2022._
Levonian, Z., Li, C., Zhu, W., Gade, A., Henkel, O., Postle, M.-E., and Xing, W. Retrieval-augmented generation to improve math question-answering: Trade-offs
between groundedness and human preference. _arXiv_
_preprint arXiv:2310.03184, 2023._
Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V.,
Goyal, N., Küttler, H., Lewis, M., Yih, W.-t., Rocktäschel,
T., Riedel, S., and Kiela, D. Retrieval-augmented generation for knowledge-intensive nlp tasks. In Larochelle, H.,
Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.),
_Advances in Neural Information Processing Systems, vol-_
ume 33, pp. 9459–9474. Curran Associates, Inc., 2020.
[URL https://proceedings.neurips.cc/p](https://proceedings.neurips.cc/paper_files/paper/2020/file/6b493230205f780e1bc26945df7481e5-Paper.pdf)
[aper_files/paper/2020/file/6b4932302](https://proceedings.neurips.cc/paper_files/paper/2020/file/6b493230205f780e1bc26945df7481e5-Paper.pdf)
[05f780e1bc26945df7481e5-Paper.pdf.](https://proceedings.neurips.cc/paper_files/paper/2020/file/6b493230205f780e1bc26945df7481e5-Paper.pdf)
Lightman, H., Kosaraju, V., Burda, Y., Edwards, H., Baker,
B., Lee, T., Leike, J., Schulman, J., Sutskever, I., and
Cobbe, K. Let’s verify step by step. arXiv preprint
_arXiv:2305.20050, 2023._
Ling, W., Yogatama, D., Dyer, C., and Blunsom, P. Program induction by rationale generation: Learning to solve
and explain algebraic word problems. arXiv preprint
_arXiv:1705.04146, 2017._
Long, J. Large language model guided tree-of-thought.
_arXiv preprint arXiv:2305.08291, 2023._
Paranjape, B., Lundberg, S., Singh, S., Hajishirzi, H., Zettlemoyer, L., and Ribeiro, M. T. Art: Automatic multi-step
reasoning and tool-use for large language models. arXiv
_preprint arXiv:2303.09014, 2023._
Patel, A., Bhattamishra, S., and Goyal, N. Are nlp models
really able to solve simple math word problems? arXiv
_preprint arXiv:2103.07191, 2021._
Roy, S. and Roth, D. Solving general arithmetic word
problems. arXiv preprint arXiv:1608.01413, 2016.
Sayster, A. High-school students’ productive struggles during the simplification of trigonometrical expressions and
the proving of trigonometrical identities. 2023.
Schütze, H., Manning, C. D., and Raghavan, P. _Introduction_
_to information retrieval, volume 39. Cambridge Univer-_
sity Press Cambridge, 2008.
Testolin, A. Can neural networks do arithmetic? a survey on
the elementary numerical skills of state-of-the-art deep
learning models. Applied Sciences, 14(2), 2024. ISSN
[2076-3417. doi: 10.3390/app14020744. URL https:](https://www.mdpi.com/2076-3417/14/2/744)
[//www.mdpi.com/2076-3417/14/2/744.](https://www.mdpi.com/2076-3417/14/2/744)
Trinh, T. H., Wu, Y., Le, Q. V., He, H., and Luong, T. Solving olympiad geometry without human demonstrations.
_Nature, 625(7995):476–482, 2024._
Wang, X., Wei, J., Schuurmans, D., Le, Q. V., Chi,
E. H., Narang, S., Chowdhery, A., and Zhou, D. Selfconsistency improves chain of thought reasoning in language models. In The Eleventh International Conference
_on Learning Representations, 2022._
-----
Wei, J., Wang, X., Schuurmans, D., Bosma, M., ichter,
b., Xia, F., Chi, E., Le, Q. V., and Zhou, D. Chain-ofthought prompting elicits reasoning in large language
models. In Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., and Oh, A. (eds.), Advances in Neural
_Information Processing Systems, volume 35, pp. 24824–_
[24837. Curran Associates, Inc., 2022. URL https:](https://proceedings.neurips.cc/paper_files/paper/2022/file/9d5609613524ecf4f15af0f7b31abca4-Paper-Conference.pdf)
[//proceedings.neurips.cc/paper_files](https://proceedings.neurips.cc/paper_files/paper/2022/file/9d5609613524ecf4f15af0f7b31abca4-Paper-Conference.pdf)
[/paper/2022/file/9d5609613524ecf4f15](https://proceedings.neurips.cc/paper_files/paper/2022/file/9d5609613524ecf4f15af0f7b31abca4-Paper-Conference.pdf)
[af0f7b31abca4-Paper-Conference.pdf.](https://proceedings.neurips.cc/paper_files/paper/2022/file/9d5609613524ecf4f15af0f7b31abca4-Paper-Conference.pdf)
Xin, H., Wang, H., Zheng, C., Li, L., Liu, Z., Cao, Q.,
Huang, Y., Xiong, J., Shi, H., Xie, E., et al. Lego-prover:
Neural theorem proving with growing libraries. arXiv
_preprint arXiv:2310.00656, 2023._
Yang, K., Swope, A. M., Gu, A., Chalamala, R., Song, P.,
Yu, S., Godil, S., Prenger, R., and Anandkumar, A. Leandojo: Theorem proving with retrieval-augmented language models. arXiv preprint arXiv:2306.15626, 2023.
Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao, Y.,
and Narasimhan, K. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint
_arXiv:2305.10601, 2023._
Yu, W., Zhu, C., Zhang, Z., Wang, S., Zhang, Z., Fang, Y.,
and Jiang, M. Retrieval augmentation for commonsense
reasoning: A unified approach. In Goldberg, Y., Kozareva,
Z., and Zhang, Y. (eds.), Proceedings of the 2022 Confer_ence on Empirical Methods in Natural Language Process-_
_ing, pp. 4364–4377, Abu Dhabi, United Arab Emirates,_
December 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.emnlp-main.294. URL
[https://aclanthology.org/2022.emnlp-m](https://aclanthology.org/2022.emnlp-main.294)
[ain.294.](https://aclanthology.org/2022.emnlp-main.294)
-----
**A. Prompts.**
**A.1. Prompts on GPT-4 to construct the TriMaster100 dataset**
For most cases, to let GPT-4 output reasoning steps, the following prompt is enough.
Solve the question: {question}.
For certain questions, GPT-4 may invoke an external component where Python code gets involved. In that case, the calculated result will be output directly without reasoning steps. To avoid this, the following prompt will be used for that type of question, for example the question: "Find the value of tan²(20) + tan²(40) + tan²(80)".
Solve the question: {question}. Please provide all steps of the reasoning.
**A.2. Prompt for E of SSC-CoT to extract related information from question**
**Q: For question: Simplify tan(100) + sin(10) cos(10) + (cot(20))² + sin(180 + A). Extract trigonometric function and angles from the question. Be careful, for the pattern such as sin(10) cos(10), we should extract the trigonometric function as sin(A) cos(B), and for (cot(20))², the extracted trigonometric function should be both cot(A) and (cot(A))². There is no need to solve the problem, just provide the relevant information.**

**A: Trigonometric pattern(s): tan(A), sin(A) cos(B), cot(A), (cot(A))², sin(A). Angle(s): 100, 10, 180 + A.**

**Q: For question: {question}. Extract trigonometric function and angles from the question. Be careful, for the pattern such as sin(10) cos(10), we should extract the trigonometric function as sin(A) cos(B), and for (cot(20))², the extracted trigonometric function should be both cot(A) and (cot(A))². There is no need to solve the problem, just provide the relevant information.**

**A:**
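One possible way to turn the completion of this template into the two feature lists used to query the KG is sketched below; the regular expression assumes the "Trigonometric pattern(s): ... Angle(s): ..." format of the in-context example and is an assumption, not part of the paper.

```python
import re

def parse_extraction(answer: str):
    """Split the A.2-style completion into trigonometric patterns and angles."""
    patterns, angles = [], []
    m = re.search(r"Trigonometric pattern\(s\):\s*(.*?)\.?\s*Angle\(s\):\s*(.*)", answer, re.S)
    if m:
        patterns = [p.strip() for p in m.group(1).split(",") if p.strip()]
        angles = [a.strip().rstrip(".") for a in m.group(2).split(",") if a.strip()]
    return patterns, angles
```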
**A.3. Prompts for V of SSC-CoT to verify of intermediate result**
Here, the function V comprises two steps. Initially, we employ the following prompt to generate the entire reasoning process
for determining the correctness of intermediate results for solving the given question.
Given the mathematical question: {question}, we have following inferences: {intermediate results}. Do you think
it is correct? Let’s think step by step.
After getting the entire process, we give the whole reasoning process to the following prompt to conclude whether the the
intermediate results are correct.
We have the inference as follows: {inference process}. According to the inference, please conclude a clear answer.
Please only output “yes” or “no”.
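This two-step procedure can be sketched as follows, where `llm` is an assumed callable mapping a prompt string to a completion string:

```python
def verify(llm, question, intermediate_results):
    """Two-step V(.): ask for step-by-step reasoning, then for a bare yes/no conclusion."""
    reasoning = llm(
        f"Given the mathematical question: {question}, we have following inferences: "
        f"{intermediate_results}. Do you think it is correct? Let's think step by step."
    )
    answer = llm(
        f"We have the inference as follows: {reasoning}. According to the inference, "
        'please conclude a clear answer. Please only output "yes" or "no".'
    )
    return answer.strip().lower().startswith("yes")
```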
-----
**A.4. Prompt for G1 of SSC-CoT to generate reasoning chain**
**Q. Simplify**

(− sin(180 + A) + sin(−A) − tan(360 + A)) / (tan(A + 180) + cos(−A) + cos(180 − A)).

Useful knowledge:
1. sin(−X) = − sin X.
2. sin(180 + X) = − sin X.
3. tan(180 + X) = tan X.
4. cos(180 − X) = − cos X.
5. cos(−X) = cos X.

In the end of reasoning, we have to summarize reasoning into concrete intermediate results without explanation (pure equation without extra text).

**A.** Let's simplify the expression

(− sin(180 + A) + sin(−A) − tan(360 + A)) / (tan(A + 180) + cos(−A) + cos(180 − A)).

For the numerator part: − sin(180 + A) + sin(−A) − tan(360 + A) = − tan(A).

For the denominator part: tan(A + 180) + cos(−A) + cos(180 − A) = tan(A).

Therefore, (− sin(180 + A) + sin(−A) − tan(360 + A)) / (tan(A + 180) + cos(−A) + cos(180 − A)) = − tan(A)/tan(A) = −1.

To summarize, we can conclude our reasoning into three intermediate results:
1. − sin(180 + A) + sin(−A) − tan(360 + A) = − tan(A),
2. tan(A + 180) + cos(−A) + cos(180 − A) = tan(A),
3. (− sin(180 + A) + sin(−A) − tan(360 + A)) / (tan(A + 180) + cos(−A) + cos(180 − A)) = − tan(A)/tan(A) = −1.

**Q. {question}**

Useful knowledge: {related information}

In the end of reasoning, we have to summarize reasoning into concrete intermediate results without explanation (pure equation without extra text).

**A.**
-----
**A.5. Prompt for G of SSC-CoT to generate reasoning chain**
**Q. Simplify**

(− sin(180 + A) + sin(−A) − tan(360 + A)) / (tan(A + 180) + cos(−A) + cos(180 − A)).

Intermediate result 1:

(− sin(180 + A) + sin(−A) − tan(360 + A)) / (tan(A + 180) + cos(−A) + cos(180 − A)) = (sin(A) − sin(A) − tan(360 + A)) / (tan(A + 180) + cos(A) − cos(A)).

Useful knowledge:
1. sin(−X) = − sin X.
2. sin(180 + X) = − sin X.
3. tan(180 + X) = tan X.
4. cos(180 − X) = − cos X.
5. cos(−X) = cos X.

**A.** According to the intermediate result, simplify the expression

(− sin(180 + A) + sin(−A) − tan(360 + A)) / (tan(A + 180) + cos(−A) + cos(180 − A))

to

(sin(A) − sin(A) − tan(360 + A)) / (tan(A + 180) + cos(A) − cos(A)).

For the numerator part: sin(A) − sin(A) − tan(360 + A) = − tan(A).

For the denominator part: tan(A + 180) + cos(A) − cos(A) = tan(A).

Therefore, (sin(A) − sin(A) − tan(360 + A)) / (tan(A + 180) + cos(A) − cos(A)) = − tan(A)/tan(A) = −1.

To summarize, we can conclude our reasoning into three intermediate results:
1. sin(A) − sin(A) − tan(360 + A) = − tan(A),
2. tan(A + 180) + cos(A) − cos(A) = tan(A),
3. (sin(A) − sin(A) − tan(360 + A)) / (tan(A + 180) + cos(A) − cos(A)) = − tan(A)/tan(A) = −1.

**Q. {question}.**

Intermediate result: {intermediate results}.

Useful knowledge: {related information}.

In the end of reasoning, we have to summarize reasoning into concrete intermediate results without explanation (pure equation without extra text).

**A.**
-----
**A.6. Prompt of few-shot learning for ToT, CoT-SC and LLEMMA**
**Example:**
1. Question: Consider the function f (x) defined as follows: f (x) = ax + 3 if x > 2, x − 5 if −2 ≤ _x ≤_ 2, and
2x − _b if x < −2. Find a + b if the function is continuous._
**Solution:**
(a) Step 1: Ensure continuity at x = 2 and x = −2. Equate ax + 3 and x − 5 at x = 2.
(b) Step 2: Solving a(2) + 3 = 2 − 5 leads to 2a = −6, so a = −3.
(c) Step 3: Equate x − 5 and 2x − _b at x = −2._
(d) Step 4: Solving −2 − 5 = 2(−2) − _b gives b = 3. Therefore, a + b = −3 + 3._
**Final answer: 0.**
**Example:**
1. Question: What number is 64% of 16?
**Solution:**
(a) Step 1: Let the number be x. Set up the equation 16/x = 64/100.
(b) Step 2: Simplify to 1/x = 4/100 = 1/25, so x = 25.
**Final answer: 25.**
**Example:**
1. Question: Given three complex numbers a + bi, c + di, e + fi, with b = 1, e = −a − _c, and their sum equal_
to −i, find d + f .
**Solution:**
(a) Step 1: Sum the complex numbers: a + bi + c + di + e + fi = −i. The real parts sum to 0, and imaginary
parts sum to -1.
(b) Step 2: The equations become a + c + e = 0 and b + d + f = −1.
(c) Step 3: With b = 1, solve for d + f, getting -2.
**Final answer: -2.**
**Task:**
1. Question: {question} Think step by step and explain the reasoning for the final answer like the examples.
Only include the current step number and explanation in your answer. Do not repeat the question or previous
steps.
**Solution:**
**B. Additional Quantitative Result**
In Table 3, we present the accuracy metrics for all baseline models applied to the TriMaster100 datasets. Notably, the
LLEMMA 7B model completely solves 2 questions. Judging purely by accuracy, LLEMMA 7B surpasses CoT-SC, ToT, SSC-CoT\KG, and LLEMMA 34B. However, a closer examination of the results in Table 2 reveals that LLEMMA 7B ranks lowest
overall. This discrepancy in performance metrics underscores our initial assertion that for intricate mathematical questions,
the rarity of complete solutions diminishes the significance of accuracy as a measure of an algorithm’s mathematical
reasoning capabilities. Consequently, our TriMaster100 dataset, which includes scored intermediate results, offers a more
appropriate benchmark for evaluating the proficiency in complex mathematical question reasoning.
-----
|SSC-CoT-HITL|SSC-CoT-HITL\KG|SSC-CoT\KG|SSC-CoT|CoT-SC|ToT|LLEMMA 7B|LLEMMA 34B|
|---|---|---|---|---|---|---|---|
|5|2|1|2|1|1|2|1|
_Table 3. Accuracy (%) Result of All Baselines on TriMaster100._
**C. Additional Qualitative Result**
Table 4 presents the mathematical reasoning process for an algebra question from MATH level 5. Initially, both algorithms
correctly infer the first step. However, from the second round onward, our method, which utilizes an overlapping-based
selection mechanism for intermediate results, successfully identifies and retains the crucial intermediate results. The
verification process ensures the accuracy of the selected states, notably 16 − 4c < 0, guiding us to the correct conclusion.
In contrast, ToT only arrives at the expression 16 − 4c in its third step, displaying a slower pace compared to SSC-CoT.
Furthermore, an incorrect inference at the second step by ToT leads to a cascade of erroneous conclusions. This outcome
demonstrates that SSC-CoT not only identifies critical intermediate results more quickly but also selects the accurate ones
through verification steps.
_Q: What is the smallest integer value of c such that the function f(x) = (2x² + x + 5)/(x² + 4x + c) has a domain of all real numbers?_
**Reasoning from SSC-CoT**
——————————————
**Round 1 - Given: Q**
**Sv:** x² + 4x + c != 0
**Round 2 - Given: Q, Sv**
**Sv:** 16 − 4c < 0
**Round 3 - Given: Q, Sv**
**Sv:** c > 4
**Result:** c = 5
**Reasoning from ToT**
——————————————
**Step 1:** We get a conclusion that x² + 4x + c should not equal 0.
**Step 2:** We further get a conclusion that b² − 4ac > 0
**Step 3:** After putting the values of a and b into b² − 4ac respectively, we get 16 − 4c > 0
**Step 4:** Finally, we get a conclusion: c < 4
**Step 5:** So, the answer is c = 4
_Table 4. Solution from SSC-CoT (Top) and from ToT (Below) for an algebra question from MATH level 5. Mistakes during reasoning are_
highlighted in red.
Table 5 illustrates the mathematical reasoning process for another algebra question at MATH level 5. In this instance,
SSC-CoT makes an error, yet ToT eventually resolves the question, highlighting the challenges of applying SSC-CoT to
LLMs. During the first round, SSC-CoT partially solves the question. However, the second round reveals disorganized
calculations for k, producing several values without any overlap. The value of x is also recalculated incorrectly, which leads to an erroneous conclusion. This underscores the known difficulty of GPT-3 in performing arithmetic tasks (Testolin, 2024). Specifically, in round 2, two types of errors are observed: incorrect calculation of k and x despite accurate values for a, b, c, and x being given, and a verification process that fails to accurately assess intermediate results, mistakenly using x = −1 as Sv. As mentioned in the limitations in the Conclusion, this suggests that enhancing verification could improve SSC-CoT's
performance. Nevertheless, it is worth noting that in round 1, SSC-CoT accurately determined the values of a, b, c, and x.
The same results are only found at step 3 by ToT, thereby reaffirming SSC-CoT’s capability in quickly pinpointing critical
intermediate results.
-----
_Q: In an equation of the form k = ax² + bx + c with a > 0, the least possible value of k occurs at x = −b/(2a). In the equation k = (6x + 12)(x − 8), what is the least possible value for k?_
**Reasoning from SSC-CoT**
——————————————
**Round 1 - Given: Q**
**Sv:** a = 6, b = −36, c = −96, x = 3
**Round 2 - Given: Q, Sv**
**Sv:** x = −1
**Round 3 - Given: Q, Sv**
**Sv:** k = 54
**Result:** k = 54
**Reasoning from ToT**
——————————————
**Step 1:** From the question, we can determine that k = (6x + 12)(x − 8)
**Step 2:** From k = (6x + 12)(x − 8), we know that k = 6x² − 48x + 12x − 96
**Step 3:** We can get that k = 6x² − 36x − 96 so that a = 6, b = −36 and c = −96
**Step 4:** Due to x = −b/(2a), we can get a conclusion that x = 3
**Step 5:** By bringing the result of a, b, c and x into k = ax² + bx + c, we can conclude that k = −150
_Table 5. Solution from SSC-CoT (Top) and from ToT (Below) for an algebra question from MATH level 5. Mistakes during reasoning are_
highlighted in red.
**D. Computational Infrastructure Details**
All experiments in this paper are conducted on the device given in Appendix D.
|Device Attribute|Value|
|---|---|
|Computing infrastructure|GPU|
|GPU model|NVIDIA A100|
|GPU number|1|
|CUDA version|12.2|
_Table 6. Computational infrastructure details._
-----
| [
"Zilong, Zhao",
"Yao, Rong",
"Dongyang, Guo",
"Emek, Gözlüklü",
"Emir, Gülboy",
"Enkelejda, Kasneci"
] | 2024-02-24T00:00:00 | ICML 2024 Workshop ICL | false | 0 | 0 | null | http://arxiv.org/abs/2402.17786 | https://arxiv.org/abs/2402.17786 | https://www.semanticscholar.org/paper/5b5ae9ad94b4c0021349d9a6df17c7b2ddf4b111 |
SubgoalXL: Subgoal-based Expert Learning for Theorem Proving | Formal theorem proving, a field at the intersection of mathematics and computer science, has seen renewed interest with advancements in large language models (LLMs). This paper introduces SubgoalXL, a novel approach that synergizes subgoal-based proofs with expert learning to enhance LLMs' capabilities in formal theorem proving within the Isabelle environment. SubgoalXL addresses two critical challenges: the scarcity of specialized mathematics and theorem-proving data, and the need for improved multi-step reasoning abilities in LLMs. By optimizing data efficiency and employing subgoal-level supervision, SubgoalXL extracts richer information from limited human-generated proofs. The framework integrates subgoal-oriented proof strategies with an expert learning system, iteratively refining formal statement, proof, and subgoal generators. Leveraging the Isabelle environment's advantages in subgoal-based proofs, SubgoalXL achieves a new state-of-the-art performance of 56.1\% in Isabelle on the standard miniF2F dataset, marking an absolute improvement of 4.9\%. Notably, SubgoalXL successfully solves 41 AMC12, 9 AIME, and 3 IMO problems from miniF2F. These results underscore the effectiveness of maximizing limited data utility and employing targeted guidance for complex reasoning in formal theorem proving, contributing to the ongoing advancement of AI reasoning capabilities. The implementation is available at \url{https://github.com/zhaoxlpku/SubgoalXL}. | SubgoalXL is introduced, a novel approach that synergizes subgoal-based proofs with expert learning to enhance LLMs' capabilities in formal theorem proving within the Isabelle environment, and achieves a new state-of-the-art performance. | [
"Xueliang, Zhao",
"Lin, Zheng",
"Lingpeng, Kong",
"Haige, Bo",
"Changran, Hu",
"Urmish, Thakker"
] | 2024-08-20T00:00:00 | null | false | 0 | 0 | [
"Isabelle"
] | https://arxiv.org/abs/2408.11172v1 | https://arxiv.org/abs/2408.11172 | https://www.semanticscholar.org/paper/058ce15272a76a1c6376e7987b28644067f1ef92 |
|
Synthetic Proof Term Data Augmentation for Theorem Proving with Language Models | N/A | This work proposes using samples from trained language models in conjunction with the Lean kernel to generate novel training examples for proof term language modeling, and uses the Lean Kernel to identify type-correct proof term candidates and infer corresponding types. | null | [
"Jesse Michael, Han",
"Joseph, Palermo",
"Johnny, Ye"
] | 2022-01-01T00:00:00 | null | false | 0 | 0 | null | https://www.semanticscholar.org/paper/6a9b2512429012174858f44bb500e2a136f5862b | null | https://www.semanticscholar.org/paper/6a9b2512429012174858f44bb500e2a136f5862b |
System 2 reasoning capabilities are nigh | In recent years, machine learning models have made strides towards human-like reasoning capabilities from several directions. In this work, we review the current state of the literature and describe the remaining steps to achieve a neural model which can perform System 2 reasoning analogous to a human. We argue that if current models are insufficient to be classed as performing reasoning, there remains very little additional progress needed to attain that goal. | It is argued that if current models are insufficient to be classed as performing reasoning, there remains very little additional progress needed to attain that goal. | ## System 2 reasoning capabilities are nigh
Scott C. Lowe
Vector Institute
Toronto, Canada
[email protected]
October 7, 2024
**Abstract**
In recent years, machine learning models have made strides towards human-like reasoning capabilities from several directions. In this work, we review the current state of the literature and describe
the remaining steps to achieve a neural model which can perform System 2 reasoning analogous to a
human. We argue that if current models are insufficient to be classed as performing reasoning, there
remains very little additional progress needed to attain that goal.
### 1 Introduction
The dual process theory of thought processes is long standing within psychology (Wason & Evans, 1974;
Evans, 2008; Stanovich & West, 2000) and was popularized more broadly by Kahneman (2012). In this
framework, human thinking capabilities are conceptualized as two distinct modes of thought. System 1
is fast, automatic, instinctive, and more emotional; System 2 is slower, effortful, deliberate, and more
logical. System 1 is, in essence, unconscious thought, and System 2 is conscious thought; though there is
not yet consensus on whether “ah-ha” moments which come following an incubation period are triggered
by unconscious work of System 1 or 2 (Christensen, 2005; Gilhooly, 2016). Additionally, due to its
instinctive and reactive nature, System 1 is more prone to bias than System 2, though System 2 is not
without bias.
In comparison to these two cognitive systems of thought, feed-forward neural networks are sometimes
described as being analogous to System 1. Their outputs are automatic, yielded immediately without what one might call “deliberation”. Like with System 1, the computational system producing
the output does not and can not provide an explicit explanation for why it produced a certain response,
making interpretability challenging, even when attempting to induce it to provide an a posteriori justification for its response (Jung et al., 2022). Such systems are effectively performing pattern matching for
their current stimulus against the body of data imbibed during training.
In comparison, symbolic rule-based algorithms (classical “artificial intelligence”), whether they are
manually or programatically created, can provide an explanation for their reasoning. However their
performance is limited because the space of the real-world is too large to be handled with a narrow set
of rules that are coded in the stimulus domain.
In this work, we review the existing literature in the space of reasoning from the perspective of
philosophy and machine learning, and we speculate on what form a neural network would need to take
for it to be able to perform reasoning in the style of System 2. We argue the majority of hurdles needed
to achieve this task have already been cleared, and there are a small number of pieces of the puzzle
remaining. Thus complex agents, trained through deep learning, that can reason logically about the
real-world will be available in the near-term, if they are not here already.
-----
### 2 Background
**2.1** **Modalities of human thought**
Historic texts indicate that ancient philosophers such as Plato used to think that thinking was synonymous
with an inner monologue. However, whilst an internal monologue (inner speech) is common, it is not
ubiquitous and most people overestimate how often their thoughts are expressed verbally (Hurlburt et al.,
2013). There is a wide variety of inner experiences across humans (Hurlburt & Heavy, 2006), and most
people are surprised when they first discover that other people’s internal experiences differ greatly from
their own.
The modalities of human inner thought are (Hurlburt & Schwitzgebel, 2011; Hurlburt et al., 2013):
- Inner speaking/inner monologue — thoughts expressed verbally, e.g. talking to yourself, hearing
your/a voice while recalling.
- Inner seeing/visual imagery — thoughts expressed visually, e.g. picturing a memory or imagining
a hypothetical scene.
- Feelings — a conscious experience of emotional processes, e.g. sadness when grieving.
- Unsymbolized thinking — thoughts expressed without words or images, e.g. drinking a glass of
water, without internal discussion or commentary.
- Sensory awareness — attending to a sensory aspect of the environment for an unimportant reason,
e.g. hearing someone talk but seeing the light reflecting off their glasses.
Most people experience inner speech and inner imagery some of the time but not all of the time, with
the majority of their thought processes unsymbolized (Hurlburt et al., 2013). However there are outliers in
each direction, with some people having no inner speech (anauralia), constant inner speech, no mind’s eye
(aphantasia), or extremely vivid mental imagery as detailed as sensory stimuli (hyperphantasia). Day-to-day observations of people across society demonstrate, and academic studies confirm, that people are able
to complete tasks irrespective of whether their internal thoughts are represented through speech, imagery,
or neither (Keogh et al., 2021; Hinwar & Lambert, 2021); though the lack of inner sight does impair the
ability to recall visual characteristics (Monzel et al., 2022; Bainbridge et al., 2021). Additionally, note
that those possessing an inner monologue who speak multiple languages can have their inner monologue
flip between languages depending on recent context. These observations lead us to hypothesise that
conscious thoughts (i.e. System 2 thinking) are fundamentally abstract in nature, but can be projected
to language and visual modalities internally.
**2.2** **What is System 2 reasoning?**
As a thought exercise, consider the task of solving this illustrative example from Kahneman (2012):
A bat and a ball together cost $1.10. The bat costs $1 more than the ball. How much does the
ball cost?
System 1 is responsible for the automatic response which immediately comes to mind on reading the
problem: ten cents. This answer is yielded in an involuntary manner, seemingly to all who hear the
question for the first time. However, by engaging System 2 we can verify whether the intuitive solution
is correct, and reason about it. By reflecting on the intuitive answer, one can observe that if this were
the price of the ball, the total would be $1.20, hence the answer is incorrect. Since the total price is
the difference in price between the two objects ($1) plus twice the price of the ball, the answer is in fact
5 cents.
-----
If we analyse it, it appears that the instinctive response stems from pattern matching—the problem
looks at first glance like other problems comparing the quantity of two items, which we have solved in
the past using the “subtraction” method, hence we instinctively try to apply it here.
One way to conceptualize reasoning is as a series of hypothesis generation and verification steps. If the
initial hypothesis fails the verification, we then come up with a new hypothesis conditioned on the new
information generated during the verification process. This process is repeated until a revised hypothesis
satisfies the verification step. Note that such a framework is similar to the Actor-Critic reinforcement
learning algorithm (Konda & Tsitsiklis, 1999), with the actor analogous to the hypothesis generator and
the critic as the verifier.
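A minimal sketch of this generate-and-verify view of reasoning is given below; `propose` and `verify` are hypothetical stand-ins for the actor (hypothesis generator) and critic (verifier), and are not drawn from any specific system.

```python
def reason(problem, propose, verify, max_rounds=10):
    """Iterate hypothesis generation and verification until a hypothesis passes.

    `propose(problem, feedback)` returns a candidate answer (the "actor");
    `verify(problem, candidate)` returns (ok, info) (the "critic").
    Both are placeholders for learned models or programmatic checkers.
    """
    feedback = []
    for _ in range(max_rounds):
        candidate = propose(problem, feedback)   # generate a hypothesis
        ok, info = verify(problem, candidate)    # attempt to falsify it
        if ok:
            return candidate                     # hypothesis survives verification
        feedback.append((candidate, info))       # condition the next attempt on what failed
    return None                                  # give up after max_rounds
```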
Alternatively, reasoning can be conceptualized as a train of thought in a continuous stream of consciousness (Potter et al., 2014; James, 2012). This framework is comparable to the chain-of-thought LLM
prompting technique (Wei et al., 2022).
**2.3** **Existing neural reasoning agents**
Previous work has found that by prompting LLMs with an in-context example of chain-of-thought reasoning, and asking it to think step-by-step for its own answer, models can be coerced into “thinking”
step-by-step (Wei et al., 2022). Providing such a prompt shifts the distribution of the most likely next tokens so that a longer trajectory of smaller steps toward the solution is output before the final answer. By
having the model attend to its own steps as it progresses, it builds on its previous steps such that its
final result is more likely to be accurate (Wei et al., 2022; Li et al., 2024). However, recent work has
demonstrated the majority of the gains seen when using chain-of-thought prompting can be matched
by prompting with a long series of task-independent filler tokens instead, suggesting the length of the
sequence and the size of the compute graph is more important than the textual output (Pfau et al., 2024).
This implies the transformer can process data through unseen computations within the hidden layers of
the network, unwitnessed in the chain-of-thought tokens that it outputs. Such findings may be analogous
to System 2 reasoning in humans, which we noted in §2.1 are primarily non-symbolic but can be projected
to consciously observed language streams (Hurlburt et al., 2013), though such a hypothesis is challenging
to investigate due to the difficulties of interpreting deep transformer representations (Rai et al., 2024).
In the domain of closed-world games, tremendous gains were seen by applying deep reinforcement
learning models that learn through self-play to optimize a value function (Silver et al., 2016, 2017, 2018).
In this case, the value network can be fit to predict the likelihood of each player winning the game from a
given position. The result of the game can be objectively determined by continuing to play the match and
seeing who wins, providing a strong training signal to the value network. Since the value network is able
to fit this task, it is able to steer the actor model effectively. Results are further improved by performing
a Monte Carlo Markov chain tree search at inference time, steered by the model’s predictions to prune
the tree to a narrow range of feasible moves, to evaluate future game states and choose an optimal move.
Such searches are similar to the Tree-of-thoughts approach to improve chain-of-thoughts reasoning (Long,
2023; Yao et al., 2023).
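As an illustration of how a learned value estimate can steer such a search, here is a small best-first sketch over partial reasoning states; `expand` and `value` are assumed placeholder callables, not any particular system's implementation.

```python
import heapq

def value_guided_search(root, expand, value, beam_width=5, max_expansions=50):
    """Best-first search over partial reasoning states, steered by a value estimate.

    `expand(state)` proposes a few feasible next states (pruning the move space);
    `value(state)` scores how promising a partial state is, like a value network;
    a state with `state.is_solved` set to True is returned as the solution.
    """
    counter = 0                                   # tie-breaker so states are never compared directly
    frontier = [(-value(root), counter, root)]    # negate scores to simulate a max-heap
    expansions = 0
    while frontier and expansions < max_expansions:
        _, _, state = heapq.heappop(frontier)
        if state.is_solved:
            return state
        for child in expand(state):
            counter += 1
            heapq.heappush(frontier, (-value(child), counter, child))
        expansions += 1
        frontier = heapq.nsmallest(beam_width, frontier)   # keep only the most promising candidates
        heapq.heapify(frontier)
    return None
```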
When deploying LLMs on mathematics problems, step-level verification of chain-of-thought has been
shown to be effective training technique (Cobbe et al., 2021; Uesato et al., 2022; Lightman et al., 2024).
### 3 Future steps and potential pitfalls
**3.1** **Learning to reason**
Given the existing neural reasoning techniques, and their analogous relationship to human reasoning
processes, we posit that networks already can learn to reason.
Multiple works have shown training LLMs to reason step-by-step is best achieved by step-level feedback
(Zelikman et al., 2022; Pfau et al., 2024; Lightman et al., 2024). One issue for training a reasoning model
at scale is thus that there is a lack of large-scale reasoning datasets to train on in which humans have
written out their train of thoughts explicitly, though some do exist (Yang et al., 2018). However, such
-----
data can be acquired at modest scale, and (by explicitly labelling which steps are valid and which are
not) such data can be used to train a verifier that predicts whether individual logical reasoning steps are
sound. This verifier (similar to the rationalisation evaluator used by Zelikman et al. (2022)) can serve a
similar role to the step-wise maths problem solver of Lightman et al. (2024). Using this, we can bootstrap
more chain-of-thought data by tasking a pretrained LLM with chain-of-thought prompting to generate
more reasoning data, and discarding outputs which contain steps which do not pass the verifier, similar
to that used in Lightman et al. (2024).
Note that the verifier is an essential part of this pipeline, and it must be accurate in order for the
iterative self-distillation to be effective. But in any scenario where verification is easier than generation,
the verifier (even if learnt and imperfect) can be deployed to iteratively refine and distill the generative
model (Christiano et al., 2018). An alternative bootstrap formulation would be to generate a large body
of chain-of-thoughts data using chain-of-thought prompting applied on a large corpus of problems with
known solutions. We then train a verifier to, given a particular point in the chain-of-thought, classify
whether the model will get the right answer. This verifier model will serve a similar role to the value
function in self-play RL systems (Silver et al., 2018), and we can fine-tune our model to generate its steps of thought whilst trying to maximize the verifier’s estimated probability that the problem will be solved. Since such a
system bears similarity to Q-learning and STaR bootstrap reasoning (Zelikman et al., 2022), it might be
aptly given the name “Q*”. We note that other recent work has successfully applied reinforcement learning
fine-tuning to pretrained LLMs, such as reinforcement learning from human feedback (RLHF) (Ziegler et al., 2020)
or with harmlessness feedback (Bai et al., 2022); and these methods can be improved by modifying
the method to provide direct feedback (Rafailov et al., 2024; Lee et al., 2024). The implementation we
propose would be a similar reinforcement learning fine-tuning stage, but with an objective focused on
reasoning accuracy.
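To make the proposed pipeline concrete, the sketch below shows the verifier-filtered bootstrapping of chain-of-thought data described above; `sample_cot` and `step_verifier` are hypothetical components (a pretrained chain-of-thought sampler and a learned step-level verifier), and the 0.5 acceptance threshold is an arbitrary illustrative choice.

```python
def bootstrap_cot_data(problems, sample_cot, step_verifier, samples_per_problem=4):
    """Grow a reasoning dataset by keeping only traces whose every step is verified.

    `sample_cot(problem)` samples a list of reasoning steps ending in an answer;
    `step_verifier(problem, prefix, step)` returns a soundness score in [0, 1].
    The kept traces can then be used for the next round of fine-tuning.
    """
    kept = []
    for problem in problems:
        for _ in range(samples_per_problem):
            steps = sample_cot(problem)
            all_steps_sound = all(
                step_verifier(problem, steps[:i], step) > 0.5
                for i, step in enumerate(steps)
            )
            if all_steps_sound:                    # discard traces with any unverified step
                kept.append({"problem": problem, "steps": steps})
    return kept
```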
All the components for this solution seemingly already exist in the literature, and it is even possible
such a model has already been trained recently (OpenAI, 2024).
**3.2** **Applicability**
LLMs trained only on textual data are unlikely to master reasoning about the real-world, since their
observations of it are highly indirect. When humans communicate with each other, they do so with a
large body of common experiences merely from being creatures raised and living in the real world. This
means that many things that are taken for granted remain unstated as they are assumed to be known by
all parties in the discourse.
In order for foundation models to be able to reason efficiently about the world, we speculate they
will need a world model that is built on sensory observations, not just text descriptions. More recent
foundation models have made progress in this direction (Zhang et al., 2022) by being multi-modal, processing both language and visual stimuli. However, we posit that further gains will be made when using
data which captures the richness of the real world through video data and (less abundantly) embodied
sensorimotor data. Video data has rich features about the world, enabling the network to construct its
own intuitive physics and infer cause and effect (Bardes et al., 2024).
**3.3** **Scaling**
Will scaling laws continue to hold for chain-of-thought reasoning, or will such models hit scaling problems?
The “bitter lesson” of machine learning has been that gains from methods that can exploit generic
compute scaling (e.g. larger and more flexible models, trained on increasingly large datasets), in the long run outperform gains from human-knowledge adjustments, due to Moore’s law (Sutton, 2019). Thus
we postulate that reasoning models will naturally also benefit from utilizing general methods rather
than hand-tuned routines. This is evidenced by recent work deploying LLMs on mathematical problems
(Snell et al., 2024), which found that evaluation performance increases as the amount of inference compute
increases.
-----
However, one possible obstacle is the quadratic scaling of transformers with respect to their input
sequence length due to their all-to-all attention. Inefficient chain-of-thought reasoning will create excessively verbose thought-histories, greatly increasing the amount of compute required to reach the end of a
chain-of-thought. This poses a challenge to efficiently utilize compute when the model’s inference steps
are scaled up. There have been various attempts to modify transformers to scale better (Child et al.,
2019; Choromanski et al., 2021; Dao et al., 2022; Dao, 2024). Recently there have also been orthogonal
efforts towards SOTA LLMs that are built using State Space Model (SSM) architectures (Gu et al., 2022;
Poli et al., 2023; Gu & Dao, 2023; Dao & Gu, 2024).
More critically, as the number of entities to reason about grows, the number of potential interactions
between the entities grows exponentially. This has the potential to out-scale the computational resources
available to train and deploy reasoning models. However, we note that human working memory is limited
to 7 ± 2 objects or chunks across a variety of tasks, where the number and size of chunks depends on the
individual’s familiarity with the items being held in memory (Miller, 1956). This implies that reasoning
does not require all-to-all attention over objects in the thought history, rather it only requires a constant
memory space. The remaining challenges are (1) items being held in memory must be appropriately
compact; (2) when only a limited number of items are retained in memory, the model must learn which
memories to keep and which to drop.
With regards to compactness, this is a challenge for token-based models as typically the embedding
space has the same granularity as the stimulus space. Yet recent hierarchical models from the vision
literature offer insights into how a hierarchical token-based model may look, in which the embedding
space is more spatially compact than the stimulus representations (Liu et al., 2021; Fan et al., 2021;
Li et al., 2022; Ryali et al., 2023).
With regards to selecting memories to retain, recent work on memory-augmented transformers (Bulatov et al.,
2024) and on SSMs that can select and retain memories in their state-space (Dao & Gu, 2024) each provide research directions towards this goal, though there is still work to be done. Even if memory selection
remains challenging, less efficient reasoning models will be possible in the meantime.
**3.4** **Safety concerns**
As new capabilities are introduced to AI models, it is important to monitor these frontier models for
potential safety risks (Phuong et al., 2024). From an AI control perspective, ML agents which can reason
and strategically plan present a much larger risk than passive models which merely predict things. Like
any ML model in deployment, there is a societal risk that the model’s learnt biases from its training
distribution will result in its behaviour diverging from human aspirations.
But more importantly, such a model raises the existential risk from AI models. Models which can
reason can use their abilities to plan and strategize, potentially over the long-term. If allowed to act
autonomously to achieve a goal, they may erroneously or surreptitiously plan with subgoals that involve
taking control of resources they should not have access to, etc. To mitigate these concerns, it is important
that training data be screened to ensure it does not contain instructions we would not wish an agent to
take when deployed in the wild.
Another concern regards the scrutability of reasoning agents. Current LLMs must always project their
chain-of-thought reasoning steps to English, though there are concerns that their internal computation
may not be fully reflected in their outputs (Pfau et al., 2024; Lyu et al., 2023; Lanham et al., 2023).
From a gain of function perspective, it may be advantageous to train models that can reason in abstract
concepts that do not directly correspond to tokens in the training corpus. However, we are of the opinion
that steps must always be taken to ensure that model reasoning is projected into a frame (be it language
or imagery) in which it can be explicitly and as completely as possible communicated to humans.
### 4 Conclusions
We have discussed the literature surrounding the philosophy of human inner thought and reasoning, and
the current neural network approaches to reasoning models. The current networks have strong analogues
-----
to processes ascribed to human reasoning. We thus argue they already achieve reasoning, though to
limited degrees due to either their limited domains or lack of explicit training.
From this, we propose a pipeline which combines several existing techniques from the machine learning
literature together as a candidate for how a reasoning agent could be explicitly trained to reason. By
expanding the breadth of training data to include richer, raw, temporal stimuli such as video, we anticipate
the model can achieve a more capable world model. Thus we conclude that neural reasoning models are
either already here, or if not they will be soon.
### Acknowledgements
Many thanks to David Emerson, Michael Zhang, and Iulia Eyriay for insightful discussions and feedback,
and to Philip from AI Explained for providing the initial inspiration (AI Explained, 2024).
### References
AI Explained. o1 - what is going on? why o1 is a 3rd paradigm of model + 10 things you might not
[know, 2024. URL https://www.youtube.com/watch?v=KKF7kL0pGc4.](https://www.youtube.com/watch?v=KKF7kL0pGc4)
Bai, Y., Kadavath, S., Kundu, S., Askell, A., Kernion, J., Jones, A., Chen, A., Goldie, A., Mirhoseini,
A., McKinnon, C., Chen, C., Olsson, C., Olah, C., Hernandez, D., Drain, D., Ganguli, D., Li, D.,
Tran-Johnson, E., Perez, E., Kerr, J., Mueller, J., Ladish, J., Landau, J., Ndousse, K., Lukosuite, K.,
Lovitt, L., Sellitto, M., Elhage, N., Schiefer, N., Mercado, N., DasSarma, N., Lasenby, R., Larson, R.,
Ringer, S., Johnston, S., Kravec, S., Showk, S. E., Fort, S., Lanham, T., Telleen-Lawton, T., Conerly,
T., Henighan, T., Hume, T., Bowman, S. R., Hatfield-Dodds, Z., Mann, B., Amodei, D., Joseph, N.,
McCandlish, S., Brown, T., and Kaplan, J. Constitutional AI: Harmlessness from AI feedback, 2022.
[URL https://arxiv.org/abs/2212.08073.](https://arxiv.org/abs/2212.08073)
Bainbridge, W. A., Pounder, Z., Eardley, A. F., and Baker, C. I. Quantifying aphantasia through drawing:
Those without visual imagery show deficits in object but not spatial memory. Cortex, 135:159–172,
[2021. ISSN 0010-9452. doi:10.1016/j.cortex.2020.11.014.](https://doi.org/10.1016/j.cortex.2020.11.014)
Bardes, A., Garrido, Q., Ponce, J., Rabbat, M., LeCun, Y., Assran, M., and Ballas, N. Revisiting feature
prediction for learning visual representations from video. arXiv:2404.08471, 2024.
Bulatov, A., Kuratov, Y., Kapushev, Y., and Burtsev, M. S. Scaling transformer to 1M tokens and
[beyond with RMT, 2024. URL https://arxiv.org/abs/2304.11062.](https://arxiv.org/abs/2304.11062)
Child, R., Gray, S., Radford, A., and Sutskever, I. Generating long sequences with sparse transformers,
[2019. URL https://arxiv.org/abs/1904.10509.](https://arxiv.org/abs/1904.10509)
Choromanski, K. M., Likhosherstov, V., Dohan, D., Song, X., Gane, A., Sarlos, T., Hawkins, P.,
Davis, J. Q., Mohiuddin, A., Kaiser, L., Belanger, D. B., Colwell, L. J., and Weller, A. Rethinking attention with performers. In International Conference on Learning Representations, 2021. URL
[https://openreview.net/forum?id=Ua6zuk0WRH.](https://openreview.net/forum?id=Ua6zuk0WRH)
Christensen, B. Problematic assumptions in incubation effect studies and what to do about them. Creative
_Cognition: Analogy And Incubation, 2005._
Christiano, P., Shlegeris, B., and Amodei, D. Supervising strong learners by amplifying weak experts,
[2018. URL https://arxiv.org/abs/1810.08575.](https://arxiv.org/abs/1810.08575)
Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., Plappert, M., Tworek, J., Hilton,
J., Nakano, R., Hesse, C., and Schulman, J. Training verifiers to solve math word problems, 2021. URL
[https://arxiv.org/abs/2110.14168.](https://arxiv.org/abs/2110.14168)
-----
Dao, T. FlashAttention-2: Faster attention with better parallelism and work partitioning. In International
_Conference on Learning Representations (ICLR), 2024._
Dao, T. and Gu, A. Transformers are SSMs: Generalized models and efficient algorithms through
structured state space duality. arXiv preprint arXiv:2405.21060, 2024.
Dao, T., Fu, D. Y., Ermon, S., Rudra, A., and Ré, C. FlashAttention: Fast and memory-efficient exact
attention with IO-awareness. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
Evans, J. S. B. T. Dual-processing accounts of reasoning, judgment, and social cognition. Annu. Rev.
_Psychol., 59(1):255–278, 2008._
Fan, H., Xiong, B., Mangalam, K., Li, Y., Yan, Z., Malik, J., and Feichtenhofer, C. Multiscale vision
transformers. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 6804–
[6815, 2021. doi:10.1109/ICCV48922.2021.00675.](https://doi.org/10.1109/ICCV48922.2021.00675)
Gilhooly, K. J. Incubation and intuition in creative problem solving. Frontiers in Psychology, 7, 2016.
[ISSN 1664-1078. doi:10.3389/fpsyg.2016.01076.](https://doi.org/10.3389/fpsyg.2016.01076)
Gu, A. and Dao, T. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint
_arXiv:2312.00752, 2023._
Gu, A., Goel, K., and Ré, C. Efficiently modeling long sequences with structured state spaces. In The
_International Conference on Learning Representations (ICLR), 2022._
Hinwar, R. P. and Lambert, A. J. Anauralia: The silent mind and its association with aphantasia.
_[Frontiers in Psychology, 12, 2021. ISSN 1664-1078. doi:10.3389/fpsyg.2021.744213.](https://doi.org/10.3389/fpsyg.2021.744213)_
Hurlburt, R. and Heavy, C. Exploring Inner Experience: The Descriptive Experience Sampling Method.
Advances in consciousness research. John Benjamins Pub., 2006. ISBN 9789027252005.
Hurlburt, R. and Schwitzgebel, E. Describing Inner Experience?: Proponent Meets Skeptic. Life and
Mind: Philosophical Issues in Biology and Psychology. MIT Press, 2011. ISBN 9780262516495.
Hurlburt, R. T., Heavey, C. L., and Kelsey, J. M. Toward a phenomenology of inner speaking. Conscious_[ness and Cognition, 22(4):1477–1494, 2013. ISSN 1053-8100. doi:10.1016/j.concog.2013.10.003.](https://doi.org/10.1016/j.concog.2013.10.003)_
James, W. _The Principles of Psychology, Vol. 1._ Number v. 1. Dover Publications, 2012. ISBN
9780486123493.
Jung, J., Qin, L., Welleck, S., Brahman, F., Bhagavatula, C., Le Bras, R., and Choi, Y. Maieutic
prompting: Logically consistent reasoning with recursive explanations. In Goldberg, Y., Kozareva,
Z., and Zhang, Y. (eds.), Proceedings of the 2022 Conference on Empirical Methods in Natural Lan_guage Processing, pp. 1266–1279, Abu Dhabi, United Arab Emirates, December 2022. Association for_
[Computational Linguistics. doi:10.18653/v1/2022.emnlp-main.82.](https://doi.org/10.18653/v1/2022.emnlp-main.82)
Kahneman, D. Thinking, fast and slow. Penguin, London, 2012. ISBN 9780141033570 0141033576.
Keogh, R., Wicken, M., and Pearson, J. Visual working memory in aphantasia: Retained accuracy and capacity with a different strategy. _Cortex, 143:237–253, 2021._ ISSN 0010-9452.
[doi:10.1016/j.cortex.2021.07.012.](https://doi.org/10.1016/j.cortex.2021.07.012)
Konda, V. and Tsitsiklis, J. Actor-critic algorithms. In Solla, S., Leen, T., and Müller, K.
(eds.), Advances in Neural Information Processing Systems, volume 12. MIT Press, 1999. URL
[https://proceedings.neurips.cc/paper_files/paper/1999/file/6449f44a102fde848669bdd9eb6b76fa-Paper.p](https://proceedings.neurips.cc/paper_files/paper/1999/file/6449f44a102fde848669bdd9eb6b76fa-Paper.pdf)
-----
Lanham, T., Chen, A., Radhakrishnan, A., Steiner, B., Denison, C., Hernandez, D., Li, D., Durmus, E.,
Hubinger, E., Kernion, J., Lukošiūtė, K., Nguyen, K., Cheng, N., Joseph, N., Schiefer, N., Rausch, O.,
Larson, R., McCandlish, S., Kundu, S., Kadavath, S., Yang, S., Henighan, T., Maxwell, T., TelleenLawton, T., Hume, T., Hatfield-Dodds, Z., Kaplan, J., Brauner, J., Bowman, S. R., and Perez, E. Mea[suring faithfulness in chain-of-thought reasoning, 2023. URL https://arxiv.org/abs/2307.13702.](https://arxiv.org/abs/2307.13702)
Lee, H., Phatale, S., Mansoor, H., Mesnard, T., Ferret, J., Lu, K., Bishop, C., Hall, E., Carbune, V.,
Rastogi, A., and Prakash, S. RLAIF vs. RLHF: Scaling reinforcement learning from human feedback
[with AI feedback, 2024. URL https://arxiv.org/abs/2309.00267.](https://arxiv.org/abs/2309.00267)
Li, Y., Wu, C.-Y., Fan, H., Mangalam, K., Xiong, B., Malik, J., and Feichtenhofer, C.
MViTv2: Improved multiscale vision transformers for classification and detection. In 2022
_IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4794–4804, 2022._
[doi:10.1109/CVPR52688.2022.00476.](https://doi.org/10.1109/CVPR52688.2022.00476)
Li, Z., Liu, H., Zhou, D., and Ma, T. Chain of thought empowers transformers to solve inherently
serial problems. In The Twelfth International Conference on Learning Representations, 2024. URL
[https://openreview.net/forum?id=3EWTEy9MTM.](https://openreview.net/forum?id=3EWTEy9MTM)
Lightman, H., Kosaraju, V., Burda, Y., Edwards, H., Baker, B., Lee, T., Leike, J., Schulman, J.,
Sutskever, I., and Cobbe, K. Let’s verify step by step. In The Twelfth International Conference
_[on Learning Representations, 2024. URL https://openreview.net/forum?id=v8L0pN6EOi.](https://openreview.net/forum?id=v8L0pN6EOi)_
Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. Swin transformer: Hierarchical
vision transformer using shifted windows. In 2021 IEEE/CVF International Conference on Computer
_[Vision (ICCV), pp. 9992–10002, 2021. doi:10.1109/ICCV48922.2021.00986.](https://doi.org/10.1109/ICCV48922.2021.00986)_
[Long, J. Large language model guided tree-of-thought, 2023. URL https://arxiv.org/abs/2305.08291.](https://arxiv.org/abs/2305.08291)
Lyu, Q., Havaldar, S., Stein, A., Zhang, L., Rao, D., Wong, E., Apidianaki, M., and Callison-Burch,
C. Faithful chain-of-thought reasoning. In Park, J. C., Arase, Y., Hu, B., Lu, W., Wijaya, D.,
Purwarianti, A., and Krisnadhi, A. A. (eds.), Proceedings of the 13th International Joint Conference
_on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association_
_for Computational Linguistics (Volume 1: Long Papers), pp. 305–329, Nusa Dua, Bali, November 2023._
[Association for Computational Linguistics. doi:10.18653/v1/2023.ijcnlp-main.20.](https://doi.org/10.18653/v1/2023.ijcnlp-main.20)
Miller, G. A. The magical number seven plus or minus two: some limits on our capacity for processing
information. Psychol. Rev., 63(2):81–97, March 1956.
Monzel, M., Vetterlein, A., and Reuter, M. Memory deficits in aphantasics are not restricted to autobiographical memory – perspectives from the dual coding approach. Journal of Neuropsychology, 16(2):
[444–461, 2022. doi:10.1111/jnp.12265.](https://doi.org/10.1111/jnp.12265)
[OpenAI. Learning to reason with LLMs, 2024. URL https://openai.com/index/learning-to-reason-with-llms/.](https://openai.com/index/learning-to-reason-with-llms/)
Pfau, J., Merrill, W., and Bowman, S. R. Let’s think dot by dot: Hidden computation
in transformer language models. In First Conference on Language Modeling, 2024. URL
[https://openreview.net/forum?id=NikbrdtYvG.](https://openreview.net/forum?id=NikbrdtYvG)
Phuong, M., Aitchison, M., Catt, E., Cogan, S., Kaskasoli, A., Krakovna, V., Lindner, D., Rahtz, M.,
Assael, Y., Hodkinson, S., Howard, H., Lieberum, T., Kumar, R., Raad, M. A., Webson, A., Ho, L.,
Lin, S., Farquhar, S., Hutter, M., Deletang, G., Ruoss, A., El-Sayed, S., Brown, S., Dragan, A., Shah,
R., Dafoe, A., and Shevlane, T. Evaluating frontier models for dangerous capabilities, 2024. URL
[https://arxiv.org/abs/2403.13793.](https://arxiv.org/abs/2403.13793)
-----
Poli, M., Massaroli, S., Nguyen, E., Fu, D. Y., Dao, T., Baccus, S., Bengio, Y., Ermon, S., and Ré,
C. Hyena hierarchy: Towards larger convolutional language models. In International Conference on
_Machine Learning, pp. 28043–28078. PMLR, 2023._
Potter, M. C., Wyble, B., Hagmann, C. E., and McCourt, E. S. Detecting meaning in rsvp at 13
ms per picture. Attention, Perception, & Psychophysics, 76(2):270–279, Feb 2014. ISSN 1943-393X.
[doi:10.3758/s13414-013-0605-z.](https://doi.org/10.3758/s13414-013-0605-z)
Rafailov, R., Sharma, A., Mitchell, E., Manning, C. D., Ermon, S., and Finn, C. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing
_Systems, 36, 2024._
Rai, D., Zhou, Y., Feng, S., Saparov, A., and Yao, Z. A practical review of mechanistic interpretability
[for transformer-based language models, 2024. URL https://arxiv.org/abs/2407.02646.](https://arxiv.org/abs/2407.02646)
Ryali, C., Hu, Y.-T., Bolya, D., Wei, C., Fan, H., Huang, P.-Y., Aggarwal, V., Chowdhury, A., Poursaeed,
O., Hoffman, J., Malik, J., Li, Y., and Feichtenhofer, C. Hiera: a hierarchical vision transformer without
the bells-and-whistles. In Proceedings of the 40th International Conference on Machine Learning,
ICML’23. JMLR.org, 2023.
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J.,
Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner,
N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T., and Hassabis, D. Mastering
the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, Jan 2016. ISSN
[1476-4687. doi:10.1038/nature16961.](https://doi.org/10.1038/nature16961)
Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L.,
Lai, M., Bolton, A., Chen, Y., Lillicrap, T., Hui, F., Sifre, L., van den Driessche, G., Graepel, T., and
Hassabis, D. Mastering the game of Go without human knowledge. Nature, 550(7676):354–359, Oct
[2017. ISSN 1476-4687. doi:10.1038/nature24270.](https://doi.org/10.1038/nature24270)
Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., Lillicrap, T., Simonyan, K., and Hassabis, D. A general reinforcement learning
algorithm that masters chess, shogi, and go through self-play. Science, 362(6419):1140–1144, 2018.
[doi:10.1126/science.aar6404.](https://doi.org/10.1126/science.aar6404)
Snell, C., Lee, J., Xu, K., and Kumar, A. Scaling LLM test-time compute optimally can be more effective
[than scaling model parameters, 2024. URL https://arxiv.org/abs/2408.03314.](https://arxiv.org/abs/2408.03314)
Stanovich, K. E. and West, R. F. Advancing the rationality debate. Behavioral and Brain Sciences, 23
[(5):701–717, 2000. doi:10.1017/S0140525X00623439.](https://doi.org/10.1017/S0140525X00623439)
Sutton, R. The bitter lesson. Incomplete Ideas (blog), 13(1):38, 2019.
Uesato, J., Kushman, N., Kumar, R., Song, F., Siegel, N., Wang, L., Creswell, A., Irving, G., and
Higgins, I. Solving math word problems with process- and outcome-based feedback, 2022. URL
[https://arxiv.org/abs/2211.14275.](https://arxiv.org/abs/2211.14275)
Wason, P. and Evans, J. Dual processes in reasoning? Cognition, 3(2):141–154, 1974. ISSN 0010-0277.
[doi:10.1016/0010-0277(74)90017-1.](https://doi.org/10.1016/0010-0277(74)90017-1)
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q. V., and
Zhou, D. Chain-of-thought prompting elicits reasoning in large language models. In Koyejo, S.,
Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., and Oh, A. (eds.), Advances in Neural In_formation Processing Systems, volume 35, pp. 24824–24837. Curran Associates, Inc., 2022._ URL
[https://proceedings.neurips.cc/paper_files/paper/2022/file/9d5609613524ecf4f15af0f7b31abca4-Paper-](https://proceedings.neurips.cc/paper_files/paper/2022/file/9d5609613524ecf4f15af0f7b31abca4-Paper-Conference.pdf)
-----
Yang, Z., Qi, P., Zhang, S., Bengio, Y., Cohen, W. W., Salakhutdinov, R., and Manning, C. D. HotpotQA:
A dataset for diverse, explainable multi-hop question answering. In Conference on Empirical Methods
_in Natural Language Processing (EMNLP), 2018._
Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao, Y., and Narasimhan, K.
Tree of thoughts: Deliberate problem solving with large language models, 2023. URL
[https://arxiv.org/abs/2305.10601.](https://arxiv.org/abs/2305.10601)
Zelikman, E., Wu, Y., Mu, J., and Goodman, N. STaR: Bootstrapping reasoning with reasoning. In
Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., and Oh, A. (eds.), Advances in Neural
_Information Processing Systems, volume 35, pp. 15476–15488. Curran Associates, Inc., 2022. URL_
[https://proceedings.neurips.cc/paper_files/paper/2022/file/639a9a172c044fbb64175b5fad42e9a5-Paper-](https://proceedings.neurips.cc/paper_files/paper/2022/file/639a9a172c044fbb64175b5fad42e9a5-Paper-Conference.pdf)
Zhang, C., Van Durme, B., Li, Z., and Stengel-Eskin, E. Visual commonsense in pretrained unimodal
and multimodal models. In Carpuat, M., de Marneffe, M.-C., and Meza Ruiz, I. V. (eds.), Proceedings
_of the 2022 Conference of the North American Chapter of the Association for Computational Linguis-_
_tics: Human Language Technologies, pp. 5321–5335, Seattle, United States, July 2022. Association for_
[Computational Linguistics. doi:10.18653/v1/2022.naacl-main.390.](https://doi.org/10.18653/v1/2022.naacl-main.390)
Ziegler, D. M., Stiennon, N., Wu, J., Brown, T. B., Radford, A., Amodei, D., Christiano,
P., and Irving, G. Fine-tuning language models from human preferences, 2020. URL
[https://arxiv.org/abs/1909.08593.](https://arxiv.org/abs/1909.08593)
-----
| [
"Scott C., Lowe"
] | 2024-10-04T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2410.03662v1 | https://arxiv.org/abs/2410.03662 | https://www.semanticscholar.org/paper/1ccefd16ae5b75ea96b8e5a7fa56407fb370b0db |
Tactic Characterizations by the Influences on Proof States | N/A | null | # Tactic Characterizations by the Influences on Proof States [∗]
Liao Zhang[1][,][3] and Lasse Blaauwbroek[2]
1 Czech Technical University, Prague, Czech Republic
2 Institut des Hautes ´Etudes Scientifiques, France
3 University of Innsbruck, Austria
## 1 Introduction
When formalizing mathematics in an interactive theorem prover, such as the Coq [1] proof
assistant, it is necessary to have an intuition on how the available proof actions change the
proof state. In particular users may have an idea that to transform the current proof state to
a different one, a particular tactic might be the right one to use.
In this paper, we regard the changes to a proof state made by the tactic application as the semantics of that tactic. The purpose of our study is to predict the tactic based on its semantics.
Assume there is a triple (ps, t, {ps′}_{1..n}), where ps, t, {ps′}_{1..n} are a Coq state, the tactic applied to ps by a Coq user, and the after states caused by the tactic application, respectively. We aim at building a machine learning model to predict a tactic t′ such that ps transforms to {ps′}_{1..n} by the application of t′. To ensure that t and t′ lead to the same after states, we run t′ in Coq
and compare with t.
There are several motivations behind our project. First, the task can be directly applied for
tactic suggestion given a human’s intuition for the next state. For a Coq beginner, it is quite
common that he can imagine the next state but cannot determine how to select a suitable tactic
to reach the goal. However, understanding Coq’s manual may be challenging for beginners. If he
can copy the before state from the Coq editor, convert it to the imaginary after state, and input
them into our system, we will be able to automatically suggest the tactics with the expected
behavior. Meanwhile, a medium-level Coq programmer may want to discover a single tactic to
substitute an awkward tactic sequence. Even for an expert, when he encounters an unfamiliar
domain, he needs our system to advise likely helpful tactics.
Second, the task serves as an initial step to a new formal verification strategy. When
a mathematician tries to prove a theorem, he first thinks of several intermediate goals and
then fills the gaps in order. However, today's proof assistants cannot skip tactics between
intermediate goals. We can extend our system to predict a tactic sequence from one state
to another. Afterwards, the human expert can merely specify the states that he thinks are
important to complete the proof and ask our system to erase the gaps.
Finally, since we encounter our own challenges in precisely characterizing the transition
between before and after states, the approaches developed by us can also be applied to other
machine learning domains. Take fault detection [3] for instance: we can apply our differential
techniques to the images before and after the fault occurs. Then, the results can be input into
a learning model to predict the category of fault.
_∗This work was supported by the ERC grant no. 714034 SMART and by the European Regional Development_
Fund under the project AI&Reasoning (reg. no. CZ.02.1.01/0.0/0.0/15 003/0000466).
-----
## 2 Tactic Characterizations
We characterize the semantics of tactics as features and as Coq strings, which serve as the input for random forests [7] and GPT-2 [5], respectively. The feature extraction techniques on Coq terms are
the same as our previous work [7]. Large-scale pretrained transformers such as GPT-2 have
achieved significant progress in various domains [2]. We evaluate GPT-2 and random forests to
make a comparison.
We consider three feature extraction approaches. The first approach computes the differences between the state features of ps and {ps′}_{1..n}. From ps, we extract a set of features F. We also extract n sets of features {F_i}_{1..n} from {ps′}_{1..n}. If a feature f exists in F but does not exist in any F_i, we regard it as a disappeared feature. Conversely, if there is an F_i with a feature f that is not in F, then f is an appearing feature. The tactic characterization is the union of all disappeared and appearing features.
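As a rough sketch, the characterization can be computed over plain feature sets as below; representing features as a Python set is our assumption for illustration, and the actual features come from Coq terms as in [7].

```python
def tactic_characterization(F, after_feature_sets):
    """Union of disappeared and appearing features across the after states.

    F is the feature set of the before state ps; `after_feature_sets` holds
    the feature sets F_1..F_n of the after states ps'_1..ps'_n.
    """
    union_after = set().union(*after_feature_sets) if after_feature_sets else set()
    disappeared = F - union_after        # in F but in no F_i
    appearing = union_after - F          # in some F_i but not in F
    return disappeared | appearing
```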
Second, we extract features from the newly defined existential variables in proof terms. In
Coq, we write tactics to construct a proof script to prove a theorem. Actually, the tactics help
to complete a proof term. The relationship between proofs and proof terms is based on the
Curry-Howard correspondence [6]. An incomplete proof term may contain several existential
variables. Some are defined, and others are undefined as holes. A tactic fills some holes with
Coq terms and may generate several new holes. A proof term is completed once all the holes
have been filled. We obtain the features from the terms defined in the holes by the tactic as its
characterization.
Finally, we perform first-order anti-unification [4] on the before and after states to find
the substitutions. A term g of two terms t1 and t2 is called a generalization if there are
substitutions σ1 and σ2 such that σ1g = t1 and σ2g = t2. Anti-unification aims to find the
least general generalization (lgg) such that for any generalization g′ of t1 and t2, there exists a substitution σ that makes σg′ = lgg. We extract the features from the Coq terms present in the substitutions (σ1 and σ2) and the lgg as the input to our model.
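To illustrate the idea, here is a small first-order anti-unification routine over terms encoded as nested tuples (head, arg1, ...); this encoding and the variable naming are our assumptions, not the paper's actual representation of Coq terms.

```python
import itertools

def anti_unify(t1, t2):
    """Return (lgg, s1, s2): a generalization of t1 and t2 plus the substitutions
    mapping its variables back to the corresponding subterms of t1 and t2."""
    pair_to_var = {}          # reuse one variable per disagreeing pair, for least generality
    fresh = itertools.count()
    s1, s2 = {}, {}

    def go(a, b):
        if a == b:
            return a
        # Same head symbol and arity: generalize argument-wise.
        if (isinstance(a, tuple) and isinstance(b, tuple)
                and len(a) == len(b) and a[0] == b[0]):
            return (a[0], *(go(x, y) for x, y in zip(a[1:], b[1:])))
        # Disagreement: map this pair of subterms to a (shared) fresh variable.
        key = (a, b)
        if key not in pair_to_var:
            var = f"X{next(fresh)}"
            pair_to_var[key] = var
            s1[var], s2[var] = a, b
        return pair_to_var[key]

    return go(t1, t2), s1, s2

# Example: anti_unify(("f", "a", ("g", "b")), ("f", "c", ("g", "b")))
# returns (("f", "X0", ("g", "b")), {"X0": "a"}, {"X0": "c"})
```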
For GPT-2, we merely apply anti-unification to generate strings. We convert the lgg and
substitutions to strings and input them into the model.
## 3 Experimental Evaluation
Our dataset is composed of the proof states (158,494) of all the lemmas (11,372) in Coq's
standard library. The lemmas were randomly divided into three subsets for training, validation,
and testing in an 80-10-10 ratio. Each subset includes the states of the corresponding lemmas.
For random forests, we optimize parameters on the training and validation partitions, which
is depicted in Figure 1. Afterwards, we build models with the best hyper-parameters learned
from the training dataset and make predictions for the test dataset. We also fine-tune the
smallest GPT-2 for each characterization. Every model is trained for 25 epochs, and we store
the snapshot with the best accuracy on the validation dataset to synthesize tactics for the test
data. All the GPT models utilize the same parameters: a batch size of 32, no weight decay,
and a learning rate of 0.0003 with a linear schedule and the first 20% of steps for warm-up.
Figure 2 depicts the average training loss per step and validation accuracy during fine-tuning.
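For reference, these hyper-parameters roughly correspond to a Hugging Face `TrainingArguments` configuration such as the sketch below; the authors do not publish their training script here, so this mapping is an assumption.

```python
from transformers import TrainingArguments

# Batch size 32, no weight decay, learning rate 3e-4 with a linear schedule,
# the first 20% of steps for warm-up, and 25 training epochs.
training_args = TrainingArguments(
    output_dir="gpt2-tactic-characterization",
    num_train_epochs=25,
    per_device_train_batch_size=32,
    learning_rate=3e-4,
    weight_decay=0.0,
    lr_scheduler_type="linear",
    warmup_ratio=0.2,
    evaluation_strategy="epoch",   # evaluate each epoch so a best validation snapshot can be kept
    save_strategy="epoch",
    load_best_model_at_end=True,
)
```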
Table 1 shows the results on the test data. Unsurprisingly, only learning from before states
performs worst since it contains little information about the influence of the tactic. The best
accuracy achieved by GPT-2 is 10.47% better than that of random forests. This confirms the
power of the state-of-the-art neural network. Anti-unification does not work well for random
-----
Figure 1: Results of hyper-parameter tuning for random forests. The accuracy denotes how
often we predict a tactic that is the same as the tactic in the dataset.
[Plots of validation accuracy against the number of trees (2^0 to 2^9) and against the impurity threshold (0.01 to 0.64), for the characterizations: before, before after, feature difference, and anti-unify.]
Figure 2: Training loss and validation accuracy of GPT-2.
[Plots over 25 training epochs for the characterizations anti-unify, before, and before after: (a) training loss, (b) validation accuracy.]
Table 1: Results on the test dataset. “Same tactic” denotes that the prediction is exactly the
same as the tactic in the library. “Same change” checks how often the prediction makes the
same transformation.
|model|accuracy (%)|before|before after|feature difference|proof term|anti-unification|
|---|---|---|---|---|---|---|
|random forests|same tactic|36.917|44.563|49.723|47.480|47.727|
|random forests|same change|43.225|52.166|59.344|56.024|55.507|
|GPT-2|same tactic|39.154|56.215|||60.300|
|GPT-2|same change|45.356|65.319|||69.814|
-----
forests but obtains excellent performance for GPT-2. The reason may be that converting anti-unification to appropriate features is challenging.
## References
[1] Bruno Barras, Samuel Boutin, Cristina Cornes, Judica¨el Courant, Yann Coscoy, David Delahaye,
Daniel de Rauglaudre, Jean-Christophe Filliˆatre, Eduardo Gim´enez, Hugo Herbelin, et al. The coq
proof assistant reference manual. INRIA, version, 6(11), 1999.
[2] Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, Yuqi Huo, Jiezhong Qiu, Yuan Yao,
Ao Zhang, Liang Zhang, et al. Pre-trained models: Past, present and future. AI Open, 2:225–250,
2021.
[3] Dubravko Miljkovi´c. Fault detection methods: A literature survey. In 2011 Proceedings of the 34th
_international convention MIPRO, pages 750–755. IEEE, 2011._
[4] Gordon D Plotkin. A note on inductive generalization. Machine intelligence, 5:153–163, 1970.
[5] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language
models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
[6] Morten Heine Sørensen and Pawel Urzyczyn. Lectures on the Curry-Howard isomorphism. Elsevier,
2006.
[7] Liao Zhang, Lasse Blaauwbroek, Bartosz Piotrowski, Prokop Černý, Cezary Kaliszyk, and Josef
Urban. Online machine learning techniques for coq: A comparison. In International Conference on
_Intelligent Computer Mathematics, pages 67–83. Springer, 2021._
-----
| [
"Lasse, Blaauwbroek",
"Liao, Zhang"
] | 2022-01-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
TaskGen: A Task-Based, Memory-Infused Agentic Framework using StrictJSON | TaskGen is an open-sourced agentic framework which uses an Agent to solve an arbitrary task by breaking them down into subtasks. Each subtask is mapped to an Equipped Function or another Agent to execute. In order to reduce verbosity (and hence token usage), TaskGen uses StrictJSON that ensures JSON output from the Large Language Model (LLM), along with additional features such as type checking and iterative error correction. Key to the philosophy of TaskGen is the management of information/memory on a need-to-know basis. We empirically evaluate TaskGen on various environments such as 40x40 dynamic maze navigation with changing obstacle locations (100% solve rate), TextWorld escape room solving with dense rewards and detailed goals (96% solve rate), web browsing (69% of actions successful), solving the MATH dataset (71% solve rate over 100 Level-5 problems), Retrieval Augmented Generation on NaturalQuestions dataset (F1 score of 47.03%) | This work empirically evaluates TaskGen on various environments such as 40x40 dynamic maze navigation with changing obstacle locations, TextWorld escape room solving with dense rewards and detailed goals, web browsing, and Retrieval Augmented Generation on NaturalQuestions dataset. | ## TaskGen: A Task-Based, Memory-Infused Agentic Framework using StrictJSON
**John Chong Min Tan**, Simbian AI and National University of Singapore, [email protected]
**Prince Saroj, Bharat Runwal, Hardik Maheshwari, Alankrit Chona, Ambuj Kumar**, Simbian AI, [email protected]
**Brian Lim Yi Sheng**, Singapore-ETH Centre, [email protected]
**Richard Cottrill**, [email protected]
**Mehul Motani**, National University of Singapore, [email protected]

**Abstract**
TaskGen is an open-sourced agentic framework
which uses an Agent to solve an arbitrary task
by breaking them down into subtasks. Each
subtask is mapped to an Equipped Function or
another Agent to execute. In order to reduce
verbosity (and hence token usage), TaskGen
uses StrictJSON that ensures JSON output from
the Large Language Model (LLM), along with
additional features such as type checking and
iterative error correction. Key to the philosophy of TaskGen is the management of information/memory on a need-to-know basis. We
empirically evaluate TaskGen on various environments such as 40x40 dynamic maze navigation with changing obstacle locations (100%
solve rate), TextWorld escape room solving
with dense rewards and detailed goals (96%
solve rate), web browsing (69% of actions
successful), solving the MATH dataset (71%
solve rate over 100 Level-5 problems), Retrieval Augmented Generation on NaturalQuestions dataset (F1 score of 47.03%).
Figure 1: An Overview of TaskGen
**1** **Introduction**
[TaskGen (https://github.com/simbianai/taskgen) is](https://github.com/simbianai/taskgen)
an open-sourced agentic framework which breaks
down a task into subtasks, each of which are
mapped to an Equipped Function or another Agent
to execute. The Agents and Equipped Functions
operate independently, but share context on a need-to-know basis using Shared Memory (see Fig. 1).
TaskGen is designed to be less verbose, and
hence incurs lower processing latency and costs
with potentially improved accuracy, than most existing agentic frameworks which output free text
such as AutoGPT (Yang et al., 2023a), BabyAGI
(Nakajima, 2023), MetaGPT (Hong et al., 2023),
AutoGen (Wu et al., 2023), ChatDev (Qian et al.,
2023), CrewAI (Moura, 2023), LangChain/LangGraph (LangGraph, 2024).
**Our Contributions. We propose a new open-**
sourced agentic framework named TaskGen:
1. TaskGen breaks a complex task down into bite-sized subtasks, each of which is mapped to an Equipped Function or Inner Agent to execute.
2. In contrast to free-form text output in agentic frameworks, TaskGen uses a concise JSON output for each part of the process. Specifically, it uses StrictJSON (Tan, 2023), which is an LLM output parser for JSON format with type checking, and helps ensure concise and extractable output which can be used for downstream tasks easily.
3. TaskGen has Shared Memory amongst various components on a need-to-know basis. This Shared Memory can come in the form of 1) **Subtasks Completed**, a list of past Equipped Functions' inputs and outputs, or 2) **Shared Variables**, which stores important information that may also be of the form of long text or non-text modalities.
4. TaskGen utilises Global Context to inform the Agent of important information that may be dynamically changing. This allows the Agent to react to dynamic environments as the task progresses, or as the Agent switches tasks.
5. Lastly, as memory is key to learning and decision making, TaskGen implements memory of various abstraction spaces in the Agent's **Memory Bank**, which can be used to augment the prompt to the Agent via Retrieval Augmented Generation (RAG) (Lewis et al., 2020) based on semantic similarity to the task. These memories are learnable via experience and can be used to influence future behaviour.
-----
**2** **Motivation**
We strive to create an Agent that can solve arbitrary
tasks in arbitrary environments. However, when
solving an arbitrary task, we could potentially do
many actions, and there are many potential outcomes possible, as shown in Fig. 2. This is intractable for any Agent to manage, so we need to limit the scope of what the Agent can do to make it more robust.
Figure 2: Intractable action space when solving an arbitrary task
Hence, we should limit the scope of the Agent
by giving it only relevant Equipped Functions. This
will help filter the vast action space into something
tractable. Moreover, based on the Equipped Functions provided, we can break down a potentially
complicated task into bite-sized subtasks, each of
which can be solved entirely by one Equipped Function. This is shown in Fig. 3.
Figure 3: Constraining action space by Equipped Functions
In fact, for more complex tasks, we can even
let another Agent be the Equipped Function. This
Agent will henceforth be referred to as Inner Agent.
This is similar to how a manager offloads tasks
to each worker, each of whom have their own experiences and skills to do the task. By having intelligent Inner Agents as the Equipped Function,
the top-level agent (Meta Agent) will have greater
processing capability. This is shown in Fig. 4.
Figure 4: Inner Agents assigned as Equipped Functions
to a Meta Agent helps increase processing capability
**Infusing Shared Awareness. Each Equipped**
Function or Inner Agent would now be able to
perform a subset of the entire task independently.
However, they will need some shared context, as
1) the outcome of the subtask may influence other
subtasks down the line, or 2) they may need input
from earlier subtasks in order to perform their subtask. To solve this problem, we implement a Shared
Memory amongst the Meta Agent, Equipped Function and Inner Agents. Notably, we have two types
of Shared Memory, 1) Subtasks Completed and
2) Shared Variables. This is shown in Fig. 1.
**3** **TaskGen Overall Design Philosophy**
TaskGen has three key design philosophies.
Firstly, the output of each Agent or Equipped
Function is made to be as concise as possible for
minimal token use. This is done using StrictJSON.
By ensuring a structured JSON output format with
type checking, StrictJSON reduces verbosity typically associated with free-form text output in LLMs.
This cuts down on latency and costs, and improves
reliability of extracting output fields needed for
downstream components. For a more in-depth run-through of StrictJSON, refer to Appendix A.
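A minimal sketch of the kind of enforcement described above is shown below: request JSON with a fixed schema, type-check the parsed result, and re-prompt with the error on failure. The function and schema names are illustrative, not the library's actual API (see Appendix A and Tan (2023) for that).

```python
import json

def strict_llm_json(llm, prompt, output_format, max_retries=3):
    """Query `llm` (a callable returning text) until it yields JSON matching `output_format`.

    `output_format` maps field names to expected Python types, e.g.
    {"sentiment": str, "score": int}. On failure, the parse/type error is fed
    back to the LLM for iterative correction.
    """
    schema_hint = {k: t.__name__ for k, t in output_format.items()}
    query = f"{prompt}\nRespond only in JSON with fields and types: {schema_hint}"
    for _ in range(max_retries):
        raw = llm(query)
        try:
            parsed = json.loads(raw)
            for field_name, expected in output_format.items():
                if not isinstance(parsed[field_name], expected):
                    raise TypeError(f"field '{field_name}' should be {expected.__name__}")
            return parsed                       # concise, directly extractable output
        except (json.JSONDecodeError, KeyError, TypeError) as err:
            query = (f"{prompt}\nPrevious output was invalid ({err}). "
                     f"Respond only in JSON with fields and types: {schema_hint}")
    raise ValueError("LLM failed to produce valid JSON within the retry budget")
```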
Secondly, we map each subtask to exactly one Equipped Function or Inner Agent, so as to guarantee that the subtask is executable. Unlike AutoGPT (Yang et al., 2023a), we ensure that there are no infinite loops when executing subtasks. This is done via the following design guidelines:
1. An Agent can only call an Equipped Function or Inner Agent that is not above it in the hierarchy.
2. Each Agent gets context relevant to its own processing abstraction space and is assigned Equipped Functions and Inner Agents suitable for that space.
Lastly, information is shared between Agents and Equipped Functions only on a need-to-know basis. There is a shared pool of information in Shared Memory, but we only expose the parts that are relevant to each Agent / Equipped Function. This helps to reduce context length and minimise the cognitive load on each part of the system.
**4** **The Core of TaskGen**
**4.1** **Agent Definition**
At the core of TaskGen is the definition of an Agent,
which consists of the following components:
1. Agent Name: Name of the Agent
2. Agent Description: Description of the Agent
3. Equipped Functions: List of Equipped Functions and Inner Agents available to solve subtasks
4. Assigned Task: Agent’s assigned task
5. Subtasks Completed: Python dictionary of past subtasks that the Agent has done, detailing each Equipped Function's name, its input parameters, and the corresponding output
6. Shared Variables: Python dictionary containing variables that will be shared between Equipped
Functions and Agents
7. Global Context: Additional context to the
Agent that can reference persistent states, such as
those in Shared Variables
8. Memory Bank: Python dictionary containing various abstraction spaces of memory that will be retrieved via top-k similarity to the Assigned Task
**4.2** **Imbuing Agentic Capabilities with**
**Equipped Functions**
By default, an Agent comes pre-built with a
**use_llm function, which uses an LLM with the**
Agent Name and Agent Description as context to
perform a task, and an end_task function to end the
current task. Additionally, we can assign Equipped
Functions or Inner Agents to the Agent to imbue it with additional capabilities.
Equipped Functions come in two forms:
1. Internal Functions use an LLM to process input-output relations. They are useful for tasks that traditional rule-based approaches handle poorly, such as sentiment analysis and summarisation.
2. External Functions use an arbitrary Python function to produce the output, which makes it very easy for TaskGen to reuse functions from other agentic frameworks such as LangChain or CrewAI. They are suitable for tasks that can be handled by fixed functions or APIs, which guarantees reliability while extending the LLM's capabilities. As an aside, if we need a hybrid of rule-based fixed processes and the flexibility of LLMs, an LLM call can also be made within the External Function.
**4.3** **Choosing the Next Subtask**
The core ability of an Agent is choosing the correct next subtask to fulfil the Assigned Task. This is a non-trivial problem, as making an informed decision requires understanding the Assigned Task, Agent Name, Agent Description, Subtasks Completed, relevant Memory, Equipped Functions and Inner Agents.
To increase robustness in choosing the right Equipped Function and its input parameters, we split the process into two steps.
**Step 1: Decide on the subtask and corresponding Equipped Function / Inner Agent.** The first step takes the information available to the Agent and applies Chain-of-Thought (CoT) (Wei et al., 2022) prompting to elicit reasoning via thoughts, leading to more accurate selection of the subtask and the corresponding Equipped Function / Inner Agent, in the following format:
1. Observation: Reflect on what has been done in
Subtasks Completed for Assigned Task
2. Thoughts: Brainstorm how to complete remainder of Assigned Task only given Observation
3. Current Subtask: What to do now in detail
with all context provided that can be done by one
Equipped Function for Assigned Task
4. Equipped Function Name: Name of Equipped
Function to use for Current Subtask
**Step 2: Decide on the input parameters to the Equipped Function / Inner Agent.** Instead of providing the entire list of Equipped Functions / Inner Agents as in Step 1, this step is only given information about the exact Equipped Function / Inner Agent decided in Step 1, so as to encourage greater output specificity. We then generate the input parameters of the Equipped Function / Inner Agent given the Current Subtask and the Equipped Function details (Equipped Function Name, Description, and Input Parameter Descriptions and types), and use StrictJSON to ensure that the input parameters match the types declared for the Equipped Function. This ensures robustness and reliability of the input parameters.
**5** **Using TaskGen**
Using TaskGen is designed to be simple enough for any new user to learn within 5 minutes. The steps are as follows:
1. Install TaskGen: `pip install taskgen-ai`
2. Define the LLM. This takes in a system prompt and user prompt as Python strings and returns the LLM-generated response as a Python string: `def llm(system_prompt: str, user_prompt: str) -> str`
3. Define the Agent. Simply define an Agent class with the Agent Name and Agent Description: `agent = Agent(name, description, llm = llm)`
4. Equip Functions. Equip the Agent with Equipped Functions or Inner Agents to broaden the Agent's capabilities: `agent.assign_functions([fn_1, fn_2])`
5. Run the Agent with a task: `agent.run(task)`
6. Query the Agent about Subtasks Completed: `agent.reply_user(query)`
For an in-depth tutorial on how to use TaskGen,
refer to Appendix B.
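To make these steps concrete, here is a minimal sketch that strings them together. The OpenAI client, the model name and the add_numbers function are illustrative assumptions and not part of TaskGen; only Agent, assign_functions, run and reply_user are the TaskGen calls described above.

```python
from taskgen import Agent
from openai import OpenAI

def llm(system_prompt: str, user_prompt: str) -> str:
    # Step 2: wrap any LLM behind this signature (OpenAI is just one assumed choice)
    client = OpenAI()
    response = client.chat.completions.create(
        model='gpt-4o',  # assumed model; any capable chat model should work
        messages=[{'role': 'system', 'content': system_prompt},
                  {'role': 'user', 'content': user_prompt}])
    return response.choices[0].message.content

# Step 3: define the Agent with a name and description
agent = Agent('Helpful assistant', 'You are a generalist agent', llm=llm)

# Step 4: equip the Agent with an illustrative External Function
def add_numbers(x: int, y: int) -> int:
    '''Adds <x: first number> to <y: second number>'''
    return x + y

agent.assign_functions([add_numbers])

# Step 5: run the Agent on a task
agent.run('Add 3 and 5, then report the result')

# Step 6: query the Agent about its Subtasks Completed
print(agent.reply_user('What was the result of the addition?'))
```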
**6** **Benefits of TaskGen**
The key philosophy of TaskGen is to be concise.
This helps greatly with the performance of the overall system, as numerous studies (Xiong et al., 2023;
Ding et al., 2024) have shown that an increase
in context length generally leads to poorer performance on tasks referencing the context.
**6.1** **JSON is more concise than free text**
Figure 5: More concise output using JSON as compared
to Free Text using gpt-3.5-turbo on 12 Jul 2024
Given a similar input prompt, asking the LLM to output in a JSON format generally gives a much less verbose output than free text. An example can be seen in Fig. 5 for a prompt about the meaning of life. This is likely because JSON in the pre-training web data is concise and carries little explanation, and the value of a field is strongly correlated with its key. This means that we can use a JSON format to constrain the LLM's generation to exactly the fields we are interested in.
**6.2** **StrictJSON is more concise than JSON**
Figure 6: StrictJSON Schema (bottom) is much less
verbose than JSON Schema (top). Token count is computed using gpt-3.5-turbo tokeniser.
TaskGen steers clear of the typical JSON Schema approach to defining functions, which is used in many agentic frameworks that adopt Pydantic as the JSON parser. The JSON Schema format is extremely verbose, whereas TaskGen's StrictJSON schema can express the same function definition with far fewer tokens. As can be seen in Fig. 6, to express two parameters, the StrictJSON Schema uses 58 tokens compared to 110 tokens for JSON Schema, i.e. about 53% of the tokens. The token savings are significant, and would be even greater with more parameters.
**6.3** **Modular and robust components**
TaskGen utilises a modular approach: each part of the system, be it an Equipped Function or an Inner Agent, is given only the context required to do its task. This results in shorter contexts for LLM prompts, leading to better performance.
Moreover, as we move from one subtask to the next, we split the process into multiple smaller chunks as required. For instance, when deciding what to do for the next subtask, we choose the Equipped Function / Inner Agent as one chunk, and choose the input parameters as another. This again reduces context length and cognitive load, and allows better error checking at each step of the process.
**6.4** **Shared Memory**
One of the key design philosophies of TaskGen is to share information only on a need-to-know basis. To that end, we utilise Shared Memory (see Fig. 7) to share information between the Agent and its Equipped Functions / Inner Agents.
There are two kinds of Shared Memory:
1. Subtasks Completed. This is a Python dictionary which stores the outcome of each subtask. The dictionary key is the name of the Equipped Function / Inner Agent together with its input parameters; the value is the function output. This history of function inputs and outputs is made known to all LLM-based components of the system to maintain shared awareness. Note that this differs from the traditional ReAct framework (Yao et al., 2022) in that we do not store the earlier Thoughts. We observe empirically that having **Subtasks Completed** in the form of function inputs and outputs is enough for the LLM to understand past history and make an informed decision, while also reducing context length.
2. Shared Variables. This is a Python dictionary which stores Python variables. These variables are made available to the Agent and all Equipped Functions / Inner Agents upon request. The exact names and values of these **Shared Variables** are not included in LLM prompts by default, meaning that this information does not increase context length unless explicitly referred to. As such, we are able to store lengthy text output, as well as filenames for various other modalities, for suitable pre-processing when needed later on. The Equipped Functions / Inner Agents are also allowed to modify these **Shared Variables**, and can thus directly update the Shared Memory whenever needed.
Figure 7: Two types of Shared Memory: Subtasks Completed and Shared Variables
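For illustration, after a few subtasks the two kinds of Shared Memory might look like the following sketch. The keys and values are invented examples; the actual contents depend on the Equipped Functions used and are managed internally by TaskGen.

```python
# Subtasks Completed: key = Equipped Function / Inner Agent name with its input
# parameters, value = the function output (no Thoughts are stored, unlike ReAct)
subtasks_completed = {
    "search_web(query='TaskGen paper')": 'Found 3 relevant results ...',
    "summarise(text='...')": 'TaskGen is an open-source agentic framework ...',
}

# Shared Variables: arbitrary Python objects available to the Agent and all
# Equipped Functions / Inner Agents on request, kept out of LLM prompts by default
shared_variables = {
    'Quote List': ['Stay hungry, stay foolish.'],
    'Downloaded File': 'report_2024.pdf',
}
```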
**6.5** **Global Context**
**Global Context** augments the default LLM prompt for the Agent. We use Global Context to expose certain persistent variables, typically stored in **Shared Variables**, which we want to carry through a task or across tasks. This is very useful for letting the Agent know the current state in a dynamically changing environment. Global Context can also contain more specific instructions for the LLM beyond the TaskGen defaults.
**6.6** **Memory Bank**
The Memory Bank contains all the important information that an Agent might need for an arbitrary task. We posit that a generic problem solver needs to hold memory at multiple **forms of abstraction**. For instance, when given a piece of text, we can store 1) its summary, 2) the extracted entities and relationships in a knowledge graph, and 3) the entire text. This information is useful for 1) generic question answering, 2) causal reasoning, and 3) specific question answering respectively. If we store information at only one form of abstraction (e.g. the summary), some tasks become significantly harder or impossible (e.g. finding specific details in the text).
**Task-Augmented Prompt.** When given a task, we extract the relevant memories using RAG or other semantic matching algorithms. These are used to augment the LLM prompt when selecting the next subtask and when using the use_llm function.
**Equipped Function Filtering by Task.** Furthermore, when given a task, not all Equipped Functions / Inner Agents are relevant, so we can filter them by semantic similarity to the task. This helps improve LLM performance, provided that the correct functions are kept.
**6.7** **Other Notable Features**
**Conversable Agent.** TaskGen provides a wrapper for a two-person chat interface with the Agent, where the Agent can use its Equipped Functions to perform actions and then reply to the User.
**Code Generator.** TaskGen has an in-built code generator and code corrector, which can also be used to perform actions with Python code, similar to CodeAct (Wang et al., 2024).
**Asynchronous Mode.** TaskGen has asynchronous equivalents of the strict_json, Function and Agent classes for faster asynchronous processing.
**Community Contributions.** TaskGen has a community space where users can easily upload and download Agents (see Appendix C).
**7** **Evaluation**
We evaluate TaskGen on various environments to showcase its versatility: dynamic maze navigation (see Appendix D), escape room solving in TextWorld (see Appendix E), web browsing (see Appendix F), the MATH dataset (see Appendix G), and RAG-based Question Answering (QA) on the NaturalQuestions dataset (see Appendix H).
**8** **Results**
Overall, TaskGen works well for generic environments. The summarised results for each environment are as follows:
1. Dynamic Maze Navigation. We implement a
40x40 maze with obstacles that change halfway
during the Agent’s learning, similar to Learning,
Fast and Slow (Tan and Motani, 2023). TaskGen
with Global Context and an external StrictJSON
Planner manages to solve 100% of the episodes on
the first try, even after environment changes.
2. Escape Room Solving in TextWorld. We used
TaskGen as a generic interactive fiction player to
solve TextWorld (Côté et al., 2019) challenges.
Where dense rewards and detailed goals were provided, TaskGen achieved a 96% solve rate, outperforming a neural-network agent’s (Côté, 2024)
solve rate of 88%. Where commands were not provided and had to be derived by the agent,
TaskGen achieved an 88% solve rate, outperforming the baseline LLM’s 57%.
3. Web-Browsing Agents. We designed a series
of tasks requiring agents to navigate and extract
information from the web, simulating real-world
scenarios where users need to find specific information across various websites. Tasks included searching for academic studies, gathering news headlines,
summarising market trends, and exploring educational resources. The agent demonstrated varying
levels of success across different tasks, with 69%
of actions being completed successfully.
4. MATH Dataset. We randomly selected 20 problems from the test set of 5 categories (Algebra,
Pre-Algebra, Intermediate Algebra, Number Theory, and Counting and Probability) of the MATH
dataset (Hendrycks et al., 2021). Our experiments
(see Appendix G) showed that the TaskGen Agent
with Equipped Functions achieved an average accuracy of 71% on challenging Level-5 problems,
compared to 44% accuracy for the Agent without
these functions. This demonstrates that imbuing an
Agent with code generation and debugging capabilities significantly improves problem-solving.
5. RAG-based QA on NaturalQuestions. On
[the Natural Questions dataset (Kwiatkowski et al.,](https://github.com/google-research-datasets/natural-questions)
2019), TaskGen with Equipped Functions for dynamic retrieval and answering (we term this Interactive Retrieval) outperformed the baseline LLM with
RAG across all metrics (see Appendix H). Compared to the baseline LLM, Interactive Retrieval
achieved an F1 Score of 47.03% (+5.49%), precision of 40.75% (+7.43%), and recall of 55.59%
**(+0.42%), demonstrating TaskGen’s effectiveness**
in dynamically refining context for more accurate
question answering.
**9** **Conclusion and Future Work**
TaskGen is already used in production at [Simbian AI](https://simbian.ai/), and we would like to share its benefits with others. TaskGen's approach of not relying on conversation, but instead focusing directly on solving the task, is a marked improvement over most existing agentic frameworks. TaskGen will continue to be actively developed over the coming years. Future work includes: 1) better planning abilities using state-based graphs and parallel searching, 2) multiple memory abstraction spaces such as vector databases and knowledge graphs, 3) reflection as a way to consolidate experiences for future decision making, 4) extended multi-modal support, and 5) multiple agents with different skills and biases collaborating with one another.
**Towards Hybrid Workflows.** As demonstrated by systems like Agentless (Xia et al., 2024), fully end-to-end agentic workflows may not always give the best performance, as we may want to fix parts of the process without Agents when we already know what needs to be done. This mixture of fixed processes and flexible agentic process selection will form a core tenet of future agentic systems. While not featured in native TaskGen, such hybrid systems can be implemented by using StrictJSON or fixed rules for dynamic routing over TaskGen Agents. We will explore more such approaches and incorporate key elements into TaskGen.
**10** **Build Together With Us**
TaskGen is an actively developing framework, and
we would love to seek your inputs / contributions / feedback. Build together with us via our
**[GitHub (https://github.com/simbianai/taskgen),](https://github.com/simbianai/taskgen)**
and join the discussion group at **Discord**
[(https://discord.com/invite/bzp87AHJy5).](https://discord.com/invite/bzp87AHJy5)
**11** **Experiment Details**
For more information on the following experiments
in the Appendix, do contact the following:
1. Community Contributions (see Appendix C)
- Hardik ([email protected])
2. Dynamic Maze Navigation (see Appendix D)
- John Tan Chong Min
3. TextWorld (see Appendix E)
- Richard Cottrill
4. Web-Browsing Agents (see Appendix F)
- Brian Lim Yi Sheng
5. MATH Dataset (see Appendix G)
- Bharat Runwal ([email protected])
6. NaturalQuestions QA (see Appendix H)
- Prince Saroj
**Limitations**
The experiments conducted in this paper do not extensively cover all available LLMs. We mainly use OpenAI's "gpt-4o" and "gpt-3.5-turbo". That said,
we have also empirically tested and verified, though
not shown here, that TaskGen works with other
LLMs such as OpenAI’s "gpt-4o-mini", Llama-3
8B and Claude-3 Haiku.
**Acknowledgements**
[The research is supported by Simbian AI, where](https://simbian.ai/)
it is used in the core products. This research is
also supported by the National Research Foundation, Singapore under its AI Singapore Programme
(AISG Award No: AISG2-PhD-2021-01-003[T])
and by A*STAR, CISCO Systems (USA) Pte. Ltd
and National University of Singapore under its
Cisco-NUS Accelerated Digital Economy Corporate Laboratory (Award I21001E0002).
**References**
Marc-Alexandre Côté. 2024. Building a simple agent
[with textworld. https://github.com/microsoft/](https://github.com/microsoft/TextWorld/blob/main/notebooks/Building%20a%20simple%20agent.ipynb)
[TextWorld/blob/main/notebooks/Building%](https://github.com/microsoft/TextWorld/blob/main/notebooks/Building%20a%20simple%20agent.ipynb)
[20a%20simple%20agent.ipynb.](https://github.com/microsoft/TextWorld/blob/main/notebooks/Building%20a%20simple%20agent.ipynb)
Marc-Alexandre Côté, Akos Kádár, Xingdi Yuan, Ben
Kybartas, Tavian Barnes, Emery Fine, James Moore,
Matthew Hausknecht, Layla El Asri, Mahmoud
Adada, et al. 2019. Textworld: A learning environment for text-based games. In Computer Games:
_7th Workshop, CGW 2018, Held in Conjunction with_
_the 27th International Conference on Artificial In-_
_telligence, IJCAI 2018, Stockholm, Sweden, July_
_13, 2018, Revised Selected Papers 7, pages 41–75._
Springer.
Yiran Ding, Li Lyna Zhang, Chengruidong Zhang,
Yuanyuan Xu, Ning Shang, Jiahang Xu, Fan Yang,
and Mao Yang. 2024. Longrope: Extending llm context window beyond 2 million tokens. arXiv preprint
_arXiv:2402.13753._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Xiaodong
[Song, and Jacob Steinhardt. 2021. Measuring math-](https://api.semanticscholar.org/CorpusID:232134851)
[ematical problem solving with the math dataset.](https://api.semanticscholar.org/CorpusID:232134851)
_ArXiv, abs/2103.03874._
Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng
Cheng, Jinlin Wang, Ceyao Zhang, Zili Wang, Steven
Ka Shing Yau, Zijuan Lin, Liyang Zhou, et al. 2023.
Metagpt: Meta programming for multi-agent collaborative framework. arXiv preprint arXiv:2308.00352.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti,
Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a benchmark
for question answering research. Transactions of the
_Association for Computational Linguistics, 7:453–_
466.
[LangGraph. 2024. Langgraph. https://github.com/](https://github.com/langchain-ai/langgraph)
[langchain-ai/langgraph.](https://github.com/langchain-ai/langgraph)
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio
Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation
for knowledge-intensive nlp tasks. Advances in Neu_ral Information Processing Systems, 33:9459–9474._
Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi
Mirza, Alex Graves, Timothy Lillicrap, Tim Harley,
David Silver, and Koray Kavukcuoglu. 2016. Asynchronous methods for deep reinforcement learning.
In ICML, pages 1928–1937. PMLR.
João Moura. 2023. crewai. [https://github.com/](https://github.com/joaomdmoura/crewAI)
[joaomdmoura/crewAI.](https://github.com/joaomdmoura/crewAI)
Yohei Nakajima. 2023. Babyagi. [https://github.com/yoheinakajima/babyagi](https://github.com/yoheinakajima/babyagi).
Chen Qian, Xin Cong, Cheng Yang, Weize Chen,
Yusheng Su, Juyuan Xu, Zhiyuan Liu, and Maosong
Sun. 2023. Communicative agents for software development. arXiv preprint arXiv:2307.07924.
John Schulman, Sergey Levine, Pieter Abbeel, Michael
Jordan, and Philipp Moritz. 2015. Trust region policy
optimization. In ICML, pages 1889–1897. PMLR.
John Schulman, Filip Wolski, Prafulla Dhariwal,
Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint
_arXiv:1707.06347._
Chong Min John Tan. 2023. Strictjson. [https://](https://github.com/tanchongmin/strictjson)
[github.com/tanchongmin/strictjson.](https://github.com/tanchongmin/strictjson)
Chong Min John Tan and Mehul Motani. 2023. Learning, fast and slow: A goal-directed memory-based
approach for dynamic environments. In 2023 IEEE
_International Conference on Development and Learn-_
_ing (ICDL), pages 1–6. IEEE._
Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and
Anima Anandkumar. 2023. Voyager: An open-ended
embodied agent with large language models. arXiv
_preprint arXiv:2305.16291._
Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang,
Yunzhu Li, Hao Peng, and Heng Ji. 2024. Executable
code actions elicit better llm agents. arXiv preprint
_arXiv:2402.01030._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou,
et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural
_information processing systems, 35:24824–24837._
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu,
Shaokun Zhang, Erkang Zhu, Beibin Li, Li Jiang,
Xiaoyun Zhang, and Chi Wang. 2023. Autogen: Enabling next-gen llm applications via multiagent conversation framework. _arXiv preprint_
_arXiv:2308.08155._
Chunqiu Steven Xia, Yinlin Deng, Soren Dunn, and
Lingming Zhang. 2024. Agentless: Demystifying llm-based software engineering agents. arXiv
_preprint arXiv:2407.01489._
Wenhan Xiong, Jingyu Liu, Igor Molybog, Hejia Zhang,
Prajjwal Bhargava, Rui Hou, Louis Martin, Rashi
Rungta, Karthik Abinav Sankararaman, Barlas Oguz,
et al. 2023. Effective long-context scaling of foundation models. arXiv preprint arXiv:2309.16039.
Hui Yang, Sifu Yue, and Yunzhong He. 2023a. Auto-gpt
for online decision making: Benchmarks and additional opinions. arXiv preprint arXiv:2306.02224.
Z. Yang, Ming Ding, Qingsong Lv, Zhihuan Jiang, Zehai He, Yuyi Guo, Jinfeng Bai, and Jie Tang. 2023b.
[Gpt can solve mathematical problems without a cal-](https://api.semanticscholar.org/CorpusID:261582750)
[culator. ArXiv, abs/2309.03241.](https://api.semanticscholar.org/CorpusID:261582750)
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak
Shafran, Karthik Narasimhan, and Yuan Cao. 2022.
React: Synergizing reasoning and acting in language
models. arXiv preprint arXiv:2210.03629.
Aojun Zhou, Ke Wang, Zimu Lu, Weikang Shi, Sichun
Luo, Zipeng Qin, Shaoqing Lu, Anya Jia, Linqi Song,
[Mingjie Zhan, and Hongsheng Li. 2023. Solving](https://api.semanticscholar.org/CorpusID:260900008)
[challenging math word problems using gpt-4 code](https://api.semanticscholar.org/CorpusID:260900008)
[interpreter with code-based self-verification. ArXiv,](https://api.semanticscholar.org/CorpusID:260900008)
abs/2308.07921.
**APPENDIX**
The Appendix contains the following sections:
A StrictJSON Details
B TaskGen Details
C Community Contributions to TaskGen
D Dynamic Maze Navigation
E Escape Room Solving in TextWorld
F Web-Browsing Agents
G MATH Dataset
H RAG-based Question Answering on NaturalQuestions Dataset
[The code for the experiments in the Appendix can be found at https://github.com/simbianai/taskgen.](https://github.com/simbianai/taskgen)
**A** **StrictJSON Details**
StrictJSON is a library created in order to parse LLM output into a structured JSON format, and is used
for all LLM calls in TaskGen. This enables efficient extraction of LLM output based on the JSON keys
and enables interfacing the LLM as part of a larger system, such as the agentic framework in TaskGen.
Furthermore, StrictJSON comes in-built with rule-based type checking which increases output reliability.
StrictJSON also has error checking capabilities, which uses the JSON parsing errors or type checking
errors to feed into the LLM in an iterative feedback loop as an error message to regenerate the JSON
again. This is similar to the error feedback mechanism in Voyager (Wang et al., 2023).
**Comparison with json.loads():** Typically, to parse a JSON string into a dictionary, the function json.loads() is called. This is not robust to variations of the JSON and can easily fail on incorrectly formatted JSON, especially when generating code. StrictJSON is more robust, as it adds a delimiter before and after each key, which the regex uses for extraction. This regex still works even if quotation marks are not closed properly or are missing within the string. See Section A.2 for more details.
**Why not YAML?** YAML could also potentially be the format for LLM outputs in order to reduce token counts. However, we have empirically found YAML formatting performance to be poorer than JSON, at least on the GPT models. We posit that this is because current LLMs are extensively trained on web data, in which JSON is more prevalent than YAML since it is the earlier format in use. This may change as more web data adopts YAML. For now, the JSON format is used to get a reliable system working.
This appendix details how to use StrictJSON based on TaskGen v3.2.0.
**A.1** **Usage**
Figure A1: Example LLM Definition
To use StrictJSON, we first need an LLM available to generate the JSON from the given text input. Fig. A1 illustrates an example LLM function (named llm) that can be interfaced with StrictJSON. It takes as input the system prompt, which is the overall system message for the LLM, and the user prompt, which is what the user typically enters for a response, and it returns the LLM's response as a string. By letting the user supply the entire LLM function, StrictJSON is extremely versatile and can operate with both API-based LLM models and local models.
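Since Fig. A1 is shown as an image, the sketch below is a comparable llm function. The OpenAI client and model name are assumptions for illustration; any API-based or local model can be wrapped the same way, as long as the function takes the system prompt and user prompt and returns the response string.

```python
from openai import OpenAI

def llm(system_prompt: str, user_prompt: str) -> str:
    # Wraps an (assumed) OpenAI chat model; swap this body for any other provider
    # or a local model, keeping the same string-in, string-out interface.
    client = OpenAI()
    response = client.chat.completions.create(
        model='gpt-4o',  # assumed model name
        temperature=0,
        messages=[{'role': 'system', 'content': system_prompt},
                  {'role': 'user', 'content': user_prompt}])
    return response.choices[0].message.content
```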
Figure A2: Basic Usage of StrictJSON
In order to use StrictJSON to process the LLM’s output, we simply use the strict_json function. We
give it the system prompt, user prompt, and the output format in a dictionary format with keys being
the field name and values being the description of the field. For instance, Fig. A2 illustrates how to use
StrictJSON to classify a sentence in the user prompt. As can be seen, StrictJSON processes the type of
sentiment, an array of adjectives in the sentence, and the number of words all in the same function call.
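As Fig. A2 is shown as an image, the following sketch reproduces the described sentiment example. The exact field names and sentence are illustrative; the key point is that strict_json takes a system prompt, a user prompt, an output_format dictionary and the llm function defined above, and returns a Python dictionary with those fields.

```python
from taskgen import strict_json

result = strict_json(
    system_prompt='You are a classifier',
    user_prompt='It is a beautiful and sunny day',
    output_format={
        'Sentiment': 'Type of Sentiment',
        'Adjectives': 'Array of adjectives',
        'Words': 'Number of words',
    },
    llm=llm)  # the llm function defined above

# result is a dict, e.g. {'Sentiment': 'Positive', 'Adjectives': [...], 'Words': 7}
print(result['Sentiment'])
```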
Figure A3: Advanced Usage of StrictJSON for code
StrictJSON is also able to process code reliably, as shown in Fig. A3.
Figure A4: Type Checking in StrictJSON
StrictJSON also supports type checking of the following types: int, float, str, dict, list, array, code, bool,
Dict[], List[], Array[], Enum[]. If there is a [], you can nest datatypes within it such as List[int] for a
list of integers. Only Dict[] cannot be nested, and Dict[dictionary_keys] is used instead to enforce the
presence of the dictionary_keys within the dictionary. Fig. A4 illustrates how to use StrictJSON with type
checking. This can ensure greater output specificity and greater reliability for downstream tasks.
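The types listed above are attached to the field descriptions in output_format. The sketch below shows one plausible way of doing this; the exact "type:" annotation syntax is our assumption based on the library's conventions, so consult the StrictJSON documentation for the authoritative form.

```python
from taskgen import strict_json

result = strict_json(
    system_prompt='You are a classifier',
    user_prompt='It is a beautiful and sunny day',
    output_format={
        'Sentiment': 'Type of Sentiment, type: Enum["Positive", "Negative"]',
        'Adjectives': 'Array of adjectives, type: List[str]',
        'Words': 'Number of words, type: int',
    },
    llm=llm)  # the llm function defined above

# With type checking, 'Words' is guaranteed to be an int and 'Adjectives' a list of str
```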
**A.2** **How it works under the hood**
Figure A5: Visualising the actual LLM prompt that StrictJSON uses with verbose = True
StrictJSON prompts the LLM to output JSON in a specified format, using delimiters to enclose the output keys; this is more reliable to extract with regex than unmodified JSON keys. Unmodified keys are just words in quotation marks, like "Sentiment", which may appear in other parts of the JSON and confuse the regex extraction.
Fig. A5 demonstrates how to visualise the actual LLM system and user prompt by passing verbose = True as a parameter to strict_json. We can see that the LLM is asked to enclose keys with delimiters (default '###') and to enclose the JSON values with <>, which the LLM is instructed to fill in.
Figure A6: Regex is done on the delimiter + key + delimiter pattern
The regex that is used to parse the LLM output can be seen in Fig. A6. By extracting keys of the form
‘###{key}###’ or "###{key}###", we can extract and parse the JSON even when there are mismatched
quotation marks, unclosed brackets, and many other issues that will cause json.loads() to fail.
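To illustrate the idea, here is a simplified stand-in for such an extraction (this is not the library's actual regex, and the sample LLM output is invented):

```python
import re

llm_output = "{'###Sentiment###': <Positive>, '###Adjectives###': <['beautiful', 'sunny']>}"

def extract_field(key: str, text: str) -> str:
    # Look for ###key### (with an optional trailing quote) followed by a <...> value;
    # this tolerates missing or mismatched quotes that would break json.loads().
    match = re.search(r"###" + re.escape(key) + r"###['\"]?\s*:\s*<(.*?)>", text)
    return match.group(1) if match else ''

print(extract_field('Sentiment', llm_output))  # Positive
```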
**B** **TaskGen Details**
This appendix details the various modules of TaskGen and how to use them based on TaskGen v3.2.0.
**B.1** **Initialising TaskGen**
Figure B1: 3 Steps to Initialise TaskGen
Fig. B1 shows how to initialise TaskGen. Here, we use "gpt-4o", but TaskGen can also work with
"gpt-3.5-turbo" or equivalent LLM models at the cost of lower performance.
The three steps are:
1. Install TaskGen
2. Import required functions and setup relevant API keys for your LLM
3. Define your own LLM, which takes in a system prompt and user prompt and outputs the response
string from the LLM
**B.2** **TaskGen Agent Overview**
**B.2.1** **Initialising the Agent**
Figure B2: Initialising the Agent
Fig. B2 shows how to initialise the Agent.
We firstly define the functions for the Agent.
These can take the form of an Internal Function using the Function class, which takes in the function description and the output format. We denote variables in the function description by enclosing the variable name in <>. The output format follows the style of StrictJSON's output format. The Internal Function uses an LLM to process the function, leading to very flexible functions that rule-based solutions may not allow for.
Functions can also be of the form of an External Function, which is very flexible as it is just a Python
function. We simply define the function with typing for inputs and outputs, and with a docstring that
contains the input parameter names. If any of the typing or docstring is missing, we will omit them from
the function description, but the External Function can still work. External Functions allow for both
rule-based rigidity and LLM-based flexibility, as an LLM call can be made inside the External Function
as well.
After defining our Functions, we define our Agent by calling Agent(name, description, llm).
Thereafter, we proceed to assign our functions via assign_functions.
To see what the functions look like, we can also use print_functions to visualise them. Notice that each function just consists of Name, Description, Input and Output fields, which is much shorter than the JSON Schema or Pydantic way of defining a function.
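Since Fig. B2 is shown as an image, the sketch below follows the same pattern, reusing the example functions from Appendix C; the llm wrapper from B.1 is assumed to already be in scope.

```python
from taskgen import Agent, Function

# Internal Function: processed by the LLM; variables are enclosed in <>
sentence_style = Function(
    fn_description='Output a sentence with <obj> and <entity> in the style of <emotion>',
    output_format={'output': 'sentence'},
    fn_name='sentence_with_objects_entities_emotion')

# External Function: a plain Python function with typed inputs/outputs and a
# docstring naming the input parameters
def binary_to_decimal(x: int) -> int:
    '''Convert input <x: a binary number in base 2> to base 10'''
    return int(str(x), 2)

# Define the Agent, then assign the functions (llm is defined as in B.1)
agent = Agent('Helpful assistant', 'You are a generalist agent', llm=llm)
agent.assign_functions([sentence_style, binary_to_decimal])

# Visualise the functions in the Name / Description / Input / Output format
agent.print_functions()
```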
**B.2.2** **Running the Agent**
Figure B3: Running the Agent
Fig. B3 shows how to assign a task and run the Agent by simply calling run(task). Notice how we can visualise the output via Observation, Thoughts, Action (Subtask) as in the traditional ReAct framework. The difference between TaskGen and the original ReAct framework is that the Observation here is an observation of the Subtasks Completed rather than of the function's output. Structuring the Observation this way provides a summary of what has been done so far, which aids decision making.
We also do not store the Observation and Thoughts, as they are only used for decision making at that point in time and are not needed in the longer term. The entire history of what has been done is stored in Subtasks Completed, which can be visualised via status() or via the subtasks_completed variable of the Agent.
Calling status() also gives us the Agent's details, such as Agent Name, Agent Description, Equipped Functions, Shared Variable Names, Assigned Task, Subtasks Completed, and whether the task is completed. We can call status() at any time to check on how the Agent is performing.
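In code form, the calls described above look like the following sketch (continuing with the Agent from the previous subsection; the task string is illustrative):

```python
# Assign a task and run it; each step chooses one Equipped Function per subtask
agent.run('Convert 1011 to decimal and write a happy sentence about the result')

# Inspect the Agent: name, description, Equipped Functions, Subtasks Completed, ...
agent.status()

# The raw history of function inputs and outputs is also available directly
print(agent.subtasks_completed)
```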
**B.2.3** **Querying the Agent**
Figure B4: Querying the Agent
Fig. B4 shows how we can reply to the user by simply calling reply_user(), which gets the Agent to reply based on what has been done in Subtasks Completed. If reply_user() is called without a query parameter, it replies based on the assigned task. If a query parameter is given, it replies based on the query.
This functions as a simple question-answering bot: we can ask multiple questions about what the Agent has done so far and relay the answers to the user.
**B.2.4** **Asynchronous Agents**
Everything we did with the Agent can also be done in asynchronous mode. An asynchronous runtime lets us run multiple Agents in a shorter time, as other Agents can effectively run during the downtime of any one Agent.
TaskGen has two main classes - Agent and Function. Their asynchronous equivalents are AsyncAgent
and AsyncFunction. Furthermore, the asynchronous version of strict_json is strict_json_async.
Figure B5: Initialising an Asynchronous Agent
Fig. B5 shows how to initialise the asynchronous LLM. Simply define a function that takes in a system
prompt and user prompt, and outputs the response string of the LLM operating in asynchronous mode.
Figure B6: Initialising and Running an Asynchronous Agent
Figure B7: Querying an Asynchronous Agent
Figs. B6 and B7 show how to initialise and run the AsyncAgent and AsyncFunction. As a general guide, to use the AsyncAgent and AsyncFunction, we do the same as for the synchronous versions, passing the asynchronous version of the LLM as the llm variable.
When calling the methods of AsyncAgent, we add the await keyword in front of them, like await my_agent.run() and await my_agent.reply_user(). The outputs and behaviour of these methods are similar to the synchronous versions.
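A sketch of the asynchronous equivalents is shown below. The AsyncOpenAI client and model name are assumptions; the AsyncAgent constructor is assumed to mirror the synchronous Agent, and the await pattern follows the description above.

```python
import asyncio
from taskgen import AsyncAgent
from openai import AsyncOpenAI

async def llm_async(system_prompt: str, user_prompt: str) -> str:
    # Asynchronous LLM wrapper: same string-in, string-out contract as before
    client = AsyncOpenAI()
    response = await client.chat.completions.create(
        model='gpt-4o',  # assumed model
        messages=[{'role': 'system', 'content': system_prompt},
                  {'role': 'user', 'content': user_prompt}])
    return response.choices[0].message.content

async def main():
    agent = AsyncAgent('Helpful assistant', 'You are a generalist agent', llm=llm_async)
    await agent.run('Summarise why concise JSON output is useful')
    print(await agent.reply_user('What did you conclude?'))

asyncio.run(main())
```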
**B.3** **Meta Agent**
Sometimes, due to task complexity, we would like to assign our Agent another Agent as an Equipped Function. Henceforth, our main Agent will be termed the Meta Agent, and the Agent equipped to it will be termed the Inner Agent.
**B.3.1** **Initialising the Meta Agent**
Figure B8: Initialising the Meta Agent
Fig. B8 shows how to initialise the Meta Agent. It is generally the same process as assigning functions to the Agent, except that the equipped item is of class Agent. Note that we can specify how each Inner Agent should behave, including the max_subtasks it should run for and which LLM it should use.
The Inner Agents will have full access to the Subtasks Completed and Shared Variables of the Meta
Agent, and all the Equipped Functions of the Inner Agents will have access to these as well. This helps
ensure that the context of the Meta Agent is fed downwards to the Inner Agents, and the Inner Agents can
also change the Shared Memory of the Meta Agent.
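A sketch of this setup, reusing the Inner Agent names from Figs. B8–B11, is shown below. The descriptions are illustrative, assign_agents follows the generated code in Appendix C.4.3, and the llm wrapper from B.1 is assumed to be in scope.

```python
from taskgen import Agent

# Inner Agents, each with its own scope, LLM and subtask budget
chef = Agent('Chef', 'Creates dishes and menus', max_subtasks=5, llm=llm)
writer = Agent('Creative Writer', 'Writes engaging descriptions', max_subtasks=5, llm=llm)
economist = Agent('Economist', 'Prices items sensibly', max_subtasks=5, llm=llm)

# Meta Agent with the Inner Agents as its Equipped Functions
boss = Agent('Boss', 'Creates a restaurant menu with descriptions and prices', llm=llm)
boss.assign_agents([chef, writer, economist])

boss.run('Create a menu with 5 dishes, each with a name, description, ingredients and price')
print(boss.reply_user())  # without a query, replies based on the assigned task
```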
**B.3.2** **Running the Meta Agent**
Figure B9: Running the Meta Agent (Part 1 - Creative Writer)
Figure B10: Running the Meta Agent (Part 2 - Chef)
Figure B11: Running the Meta Agent (Part 3 - Economist)
Figs. B9, B10 and B11 show the process of running the Meta Agent by simply calling run() and showcase
the responses of the respective Creative Writer, Chef, Economist Inner Agents.
Notice that when the Inner Agent is called as the function, we generally repeat the Observation, Thoughts, Action (Subtask Identified) loop at the Inner Agent level. This recursiveness makes the implementation of Inner Agents easy, and we can stack as many Inner Agents as we like to scale up the system.
We give the Inner Agent full awareness of the Meta Agent's Assigned Task, **Subtasks Completed** and **Shared Variables**. When the Inner Agent ends its subtask, it does not pass all the information back to the Meta Agent, but instead calls reply_user() to consolidate the important information to put into **Subtasks Completed** (reply text shown in magenta). This helps to minimise the information stored in Shared Memory, and hence the overall context length, as many details handled by the Inner Agent do not need to be known by the Meta Agent.
Agents should generally be given context and Equipped Functions appropriate for their level of processing. In practice, such a hierarchical structure of Agents helps decompose a complex problem into bite-sized pieces, with Agents at the higher levels focusing on the broader picture, while Agents at the lower levels handle the specific details. This structure can be used for most tasks that have such a hierarchical nature.
**B.3.3** **Visualising the Meta Agent’s Status**
Figure B12: Visualising the Meta Agent’s Status
Fig. B12 shows how to use status() to see the Meta Agent’s status, including Subtasks Completed.
Here, we can see that the Inner Agents like Chef, Boss, Creative Writer, Economist are the Equipped
Functions of the Meta Agent.
Furthermore, the Subtasks Completed shows which Inner Agent is called and what instruction was passed
to each of them, along with their reply as the output when the subtask has ended.
**B.3.4** **Querying the Meta Agent**
Figure B13: Querying the Meta Agent
Fig. B13 shows how to query the Meta Agent after the task is run using query().
Here, we can see that the Agent is able to use the information in Subtasks Completed to give a coherent
answer to what the user was asking, namely, to create a menu with 5 dishes with name, description,
ingredients and price. In general, the more detailed the description of the Assigned Task, the better the
answer by the Agent.
**B.4** **Shared Variables**
**B.4.1** **Initialising Shared Variables**
Figure B14: Initialising Shared Variables
Fig. B14 shows how to initialise the Shared Variables. In general, we declare shared_variables as a parameter of the External Function, and then extract and modify the relevant shared variables as appropriate for the Equipped Function. Here, in generate_quotes, we store the newly generated quotes in the shared variable "Quote List".
Then, in order to use this shared variable in the Equipped Functions, we need to initialise the
shared_variables of the Agent. Here, we can see that we initialise "Quote List" as an empty list
[].
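A sketch of this pattern is shown below. The quote-generation body is illustrative (in practice it could call an LLM); the shared_variables parameter, the "Quote List" initialisation and the llm wrapper from B.1 follow the description above.

```python
from taskgen import Agent

# External Function that reads and writes Shared Variables via the special
# shared_variables parameter instead of returning a (possibly lengthy) output
def generate_quotes(shared_variables, n_quotes: int, topic: str):
    '''Generates <n_quotes: number of quotes> about <topic: the topic>'''
    new_quotes = [f'Quote {i + 1} about {topic}' for i in range(n_quotes)]
    shared_variables['Quote List'].extend(new_quotes)

# Initialise the Agent with the shared variable 'Quote List' as an empty list
agent = Agent('Quote Creator', 'Generates inspirational quotes',
              shared_variables={'Quote List': []}, llm=llm)
agent.assign_functions([generate_quotes])

agent.run('Generate 3 quotes about TaskGen')
print(agent.shared_variables['Quote List'])
```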
**B.4.2** **Modifying Shared Variables at Runtime**
Figure B15: Modifying Shared Variables at runtime
Fig. B15 shows how we can modify Shared Variables at runtime. The function generate_quotes was
called, but the quotes did not appear in Subtasks Completed since generate_quotes does not return
any output. Rather, we store the generated quotes in the shared variable "Quote List". This helps reduce
the overall context length for the Agent as the details for the quotes do not matter for this situation - only
the fact that the quotes are generated does. This is a template for how we can use LLM as an Operating
System (OS), by just simply returning whether or not an action was completed in Subtasks Completed,
and storing the details in Shared Variables as needed.
**B.5** **Global Context**
**B.5.1** **Initialising Global Context**
Figure B16: Initialising Global Context
Fig. B16 shows how we can initialise the Global Context by simply initialising the Agent with a global_context variable. This contains the additional prompt we want to give the Agent, and anything we want substituted with a Shared Variable is written by enclosing the shared variable name in <>.
Here, in this Inventory Manager Agent, we want to expose the inventory items to the Agent, so we give it the global_context of "Inventory: <Inventory>"; at runtime, <Inventory> is replaced by the actual value of the shared variable "Inventory".
Placing information in Global Context helps the Agent to maintain the most updated picture when the
Agent makes its decisions, which is very useful for dynamically changing environments where the Agent
would need to continually assess and re-evaluate its situation in the environment.
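A sketch of the Inventory Manager pattern described above is given below. The add/remove functions are illustrative placeholders, and the llm wrapper from B.1 is assumed to be in scope; the global_context, shared_variables and reset() usage follow the description in this subsection and the next.

```python
from taskgen import Agent

def add_item(shared_variables, item: str):
    '''Adds <item: the item> to the inventory'''
    shared_variables['Inventory'].append(item)

def remove_item(shared_variables, item: str):
    '''Removes <item: the item> from the inventory'''
    if item in shared_variables['Inventory']:
        shared_variables['Inventory'].remove(item)

# <Inventory> in global_context is replaced at runtime by the shared variable's value
agent = Agent('Inventory Manager', 'Adds and removes items from the inventory',
              shared_variables={'Inventory': []},
              global_context='Inventory: <Inventory>',
              llm=llm)
agent.assign_functions([add_item, remove_item])

agent.run('Add apples and oranges to the inventory')
agent.reset()                    # clear Subtasks Completed between tasks
agent.run('Remove the apples')   # the Agent still sees the inventory via Global Context
```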
**B.5.2** **Running Agent with Global Context**
Figure B17: Running Agent with Global Context (Part 1)
Figure B18: Running Agent with Global Context (Part 2)
Figs. B17 and B18 show how Global Context can be used when running tasks with the Agent. Typically,
we do not carry over information across new tasks. However, if we store a persistent state in Shared
**Variables, such as "Inventory", we can actually expose this "Inventory" variable to the Agent via Global**
**Context.**
Hence, as can be seen, after running the task to add apples and oranges, although we reset the Agent and
clear its Subtasks Completed, the Agent is still able to know that there are apples and oranges in the
inventory and proceed to remove the apples in the next task.
In fact, this practice of continually clearing the Subtasks Completed via reset() and using Global
**Context to carry over information between tasks is very helpful for Agentic decision making, as the**
amount of information the Agent needs to focus on is significantly reduced for every future task.
**B.6** **Memory**
**B.6.1** **Initialising Function Memory**
Figure B19: Initialising Functions
Fig. B19 depicts how we can define External Functions using a normal Python function format with input
and output typing and a docstring containing the input variable names, as well as Internal Functions by
defining the function description and output format. In order to ensure that certain functions do not go
through RAG to filter functions, we can additionally set the is_compulsory variable to be True when
initialising the Function class of TaskGen.
Figure B20: Equipping Agent with Functions
Fig. B20 depicts how we can assign the functions to the Agent using assign_functions. We remove the use_llm function by setting default_to_llm to False in the Agent's initialisation.
We can preview the entire list of functions using list_functions(). Notice that both the Internal and External Functions are converted to the same format of Name, Description, Input and Output according to the Function class parameters.
Since there are too many Equipped Functions for the Agent to use reliably, TaskGen automatically filters the Equipped Functions (excluding use_llm and end_task) down to a top_k value of 5 based on semantic matching of each function's name and description to the Assigned Task. We can also change this value by modifying the top_k parameter in the Agent's Memory Bank for Function. There are many other parameters that can be customised, and we encourage the interested reader to check out "Tutorial 3 - Memory" for more details.
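The sketch below shows these configuration points together. The function contents are illustrative, the llm wrapper from B.1 is assumed to be in scope, and adjusting top_k through the Memory Bank's 'Function' entry is our reading of the description above.

```python
from taskgen import Agent, Function

# External Function defined as a plain Python function
def add_numbers(x: int, y: int) -> int:
    '''Adds <x: first number> to <y: second number>'''
    return x + y

# An Internal Function marked compulsory, so it is never filtered out by RAG
explain_answer = Function(
    fn_description='Explain the reasoning behind <answer> in one sentence',
    output_format={'explanation': 'one-sentence explanation'},
    is_compulsory=True)

# Disable the default use_llm function so only Equipped Functions are used
agent = Agent('Maths Assistant', 'Solves arithmetic questions reliably',
              default_to_llm=False, llm=llm)
agent.assign_functions([add_numbers, explain_answer])

# Preview all functions; the top_k used for function filtering can be adjusted
# via the 'Function' entry of the Agent's Memory Bank
agent.list_functions()
agent.memory_bank['Function'].top_k = 5
```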
**B.6.2** **Using Function Memory**
Figure B21: Filtering Functions by Task
Fig. B21 shows how we can retrieve the relevant functions via a ranker (default: OpenAI's "text-embedding-3-small", customisable to other providers as well). Here, the Assigned Task is to evaluate 3 - 1, and as expected, the Equipped Function subtract_numbers appears in the list of top_k = 5 filtered functions.
We note here that the retrieve_by_ranker function uses cosine similarity to filter the functions according
to similarity to the Assigned Task, which may not always be the best approach to do so if the embedding
space is not informative of the similarity required. Hence, users are free to customise their own ranker
function or to customise the entire retrieve_fn that takes in a task and outputs the top_k most similar
memories. These changes can be done by simply modifying the "Memory" class accordingly.
Figure B22: Running Task with Filtered Functions
Fig. B22 shows how we can run a task using run(), and the filtering of functions is done automatically at
the backend. Do note that the Agent can only use the filtered functions, so if there are functions that are
missed out due to failure in retrieving them via RAG, performance may decrease.
**Current Thoughts by Developer: The recommended approach for Agents using TaskGen now is actually**
not to use memory-based filtering of functions, but instead to define each Agent with only a limited set of
functions, and to use Inner Agents with a limited set of functions to cover the spectrum of tasks needed
if the main Agent has too many functions to use. This reduces the dependency on filtering functions
correctly, and ensures quality response by the Agent.
**B.6.3** **Storing Additional Task-based Memory in Memory Bank**
Figure B23: Storing Additional Task-Based Information in Memory Bank
Fig. B23 shows how we can incorporate task-based memory in the Memory Bank. We simply define a
new key in the Memory Bank Python dictionary. In this case, we define a new key "Word to Numbers"
and add in the mapping of various nonsense words to their numerical equivalents. We can also do the
same for multiple keys to add in some additional context based on the task. This task-based addition of
relevant context is an extremely powerful concept that enables the Agent to work across a wide variety of
tasks using the same format. It functions like a general plug-and-play Agent that is infused with specific
task-based knowledge based on the Assigned Task.
Here, we can see that by adding in the knowledge of the various nonsense words and their numerical
equivalents, the Agent is able to compute a sum such as "Boneti + mdsnfk + Azo".
As the task becomes more complex, storing and using memory of various abstraction spaces will be
extremely critical for solving arbitrary tasks.
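The sketch below shows one way of adding such task-based memory. The Memory and Ranker constructor arguments follow the generated code in Appendix C.4.3, while the word-to-number mappings and the identity mapper are illustrative assumptions; the llm wrapper from B.1 is assumed to be in scope.

```python
from taskgen import Agent, Memory, Ranker

agent = Agent('Calculator', 'Computes sums of unusual words', llm=llm)

# Add a new abstraction space to the Memory Bank: mappings of nonsense words to numbers
agent.memory_bank['Word to Numbers'] = Memory(
    memory=['Boneti means 3', 'mdsnfk means 7', 'Azo means 11'],
    top_k=3,
    mapper=lambda x: x,  # assumed identity mapper: each memory is stored as-is
    approach='retrieve_by_ranker',
    ranker=Ranker(model='text-embedding-3-small', ranking_fn=None))

# The relevant mappings are retrieved by similarity to the task and added to the prompt
agent.run('Compute Boneti + mdsnfk + Azo')
```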
Figure B24: Storing Task to Function mappings in Memory Bank
Fig. B24 shows how we can also use the Memory Bank to store various task-to-function mappings. This memory could be based on ground-truth mappings, or it could be learned on the go by simply storing what worked best during earlier tasks. Conditioned on such a task-to-function mapping, an Agent is better able to match future tasks to what has been done effectively in the past.
Here, we can see that by default the task "Booyah" conveys no specific meaning. If we do not have the
memory bank of "Priority Task to Function", the Agent will most likely generate a quote about "Booyah".
However, when conditioned with the mapping of a task of "Booyah" to the function "generate quote" with
topic "TaskGen", we see this being carried out when the Agent is given the task "Booyah".
**Current Thoughts by Developer:** Having memory in the Memory Bank biases the Agent strongly towards repeating the previous actions / instructions given in memory. While this may be ideal in cases where the environment does not change, we find that storing too much memory in the Memory Bank may decrease the adaptability of the Agent to new scenarios. We are still testing, and are proposing a multi-agent
approach to solving new environments. Such a multi-agent approach will contain some agents with past
memory, and some agents without any past memory, and we will select the most performant agent in
the environment. Such a multi-agent approach will increase robustness and reward either experience if
doing actions according to past memory is the best way in the current environment, or exploration if doing
something new is the better approach. Increasingly, we come to think of intelligence as not just one single
Agent doing tasks, but a group of Agents exploring and exploiting the environment together and learning
from one another. This will be a future direction of TaskGen to increase robustness and adaptability for
Agents to do well in dynamic environments.
**B.7** **Conversation Class - Beta Version**
As many applications of LLMs involve some form of chatbot or personal assistant, we have created a wrapper class ConversableAgent that takes in an Agent and provides a conversational interface.
In addition to the shared variables in Agent, ConversableAgent adds three more:
1. Persistent Memory. This stores memory which we want to persist over the entire conversation and
it will be updated after each turn of the conversation.
2. Conversation. This stores the actual conversation itself.
3. Summary of Conversation. This stores the summary of the entire conversation, which will be used
to provide a global context to the Agent.
In general, when given a task, the ConversableAgent first performs the actions needed to answer the User's query. The ConversableAgent then uses the summarised actions (if any), **Global Context**, **Summary of Past Conversation**, **Past Conversation** and **Persistent Memory** to reply to the User, and also updates the **Summary of Conversation**.
After replying to the User, the ConversableAgent appends the User's message and the Agent's reply to the **Conversation**, and updates the **Persistent Memory** accordingly.
Overall, the main goal is to imbue the conversation with persistent states such as **Persistent Memory** and **Summary of Conversation**, so as to create a more wholesome and natural conversation.
**Insights by Developer:** Conversation is not the main means of solving the User's query; this keeps the task-solving portion concise. The task is solved first, before the Agent is given the chance to reply to the User. In earlier iterations of ConversableAgent, when the LLM function was given directly to the Agent, the Agent would quite often use the LLM function to hallucinate an outcome for a task that never happened. This is an interesting finding: the task executor and the reply-to-User portion of ConversableAgent should be implemented separately to minimise hallucinations.
Figure B25: Running ConversableAgent without additional Equipped Functions
Fig. B25 shows how we can implement a Psychology Counsellor Agent by wrapping the baseline agent in
a ConversableAgent class, and giving it Persistent Memory of User Request for the Conversation, User
Emotion, Summary of Key Incidents. Note that the persistent_memory variable takes the same form as
the output_format of the strict_json function.
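A sketch of the Psychology Counsellor wrapper described above is given below. The constructor arguments and the .chat() call are assumptions for illustration based on the description and Fig. B25, and the llm wrapper from B.1 is assumed to be in scope; consult the library for the actual interface.

```python
from taskgen import Agent, ConversableAgent

base_agent = Agent('Psychology Counsellor',
                   'Helps the user process their emotions and experiences', llm=llm)

# Persistent Memory uses the same format as strict_json's output_format
counsellor = ConversableAgent(
    base_agent,
    persistent_memory={
        'User Request for the Conversation': 'What the user wants from this conversation',
        'User Emotion': 'Current emotional state of the user',
        'Summary of Key Incidents': 'Important events the user has mentioned',
    })

# Hypothetical chat call: each turn performs any needed actions first, then replies,
# updating the Conversation, its Summary and the Persistent Memory
reply = counsellor.chat('I have been feeling overwhelmed at work lately')
print(reply)
```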
Figure B26: Initialising ConversableAgent with Equipped Functions
Figure B27: Running ConversableAgent with additional Equipped Functions
Figs. B26 and B27 show how to initialise and run a Shop Assistant that can search for and purchase items for the User and responds in the persona of Sherlock Holmes. The Shop Assistant is given Global Context (and Shared Variables) of Money Remaining, Items Searched, Items Purchased and Past Conversation.
When replying to the User, the relevant functions are first called in response to the User's message, and the Shop Assistant Agent then references what has been done in the action summary (red text titled Actions Done) to inform the User accordingly.
**C** **Community Contributions to TaskGen**
This section elucidates the methodology by which users of TaskGen can contribute to the library, thereby
fostering the growth of the TaskGen community.
**C.1** **Motivation for Community Contribution**
TaskGen, an open-source repository, actively encourages contributions from its user base to enhance the
library’s functionality and accessibility. As users of TaskGen, individuals are incentivised to develop
sophisticated agents utilising the framework and subsequently contribute these agents for the benefit of the
broader community. This approach aligns with the ethos of open-source development and aims to cultivate
a collaborative ecosystem where users can build upon each other’s contributions. The overarching vision
is to establish a marketplace of powerful agents leveraging the TaskGen framework, ultimately increasing
the repository’s utility through reusability.
Figure C1: Community contributions to TaskGen
**C.2** **Key Features of the Contribution Process**
To facilitate seamless community involvement, significant efforts have been invested in streamlining the
contribution process. Notable features include:
1. Simplified Contribution: Users can contribute their agents through a single function invocation.
2. Minimal Prerequisites: The process requires only a configured GitHub profile, eliminating the need
for local git setup or repository cloning.
3. Comprehensive Support: The contribution mechanism accommodates various configurations, including max_subtasks, summarise_subtasks_count, memory_bank, shared_variables, global context
settings, sub_agents, and both internal and external functions.
4. Efficient Integration: Accepted contributions can be loaded as agents with a single line of Python
code.
**C.3** **Technical Implementation**
The contribution process involves the following steps:
1. Environment Configuration: Users must set the GITHUB_USERNAME and GITHUB_TOKEN
environment variables.
2. Agent Contribution: Invocation of the contribute_agent function on the user’s agent.
To load a contributed agent, users can utilise the load_community_agent class method from the Agent
class, specifying the agent name.[1]
The backend process of the contribute_agent function encompasses:
[1] We recommend you pull the latest version of taskgen to get the most recent community agents.
1. Creation of a TaskGen fork for the user (if not already existing).
2. Generation of a Python representation of the user’s Agent, including subclasses for the agent and
sub-agents, along with external functions and configurations.
3. Utilization of low-level GitHub APIs to commit the agent’s Python representation to a branch in the
user’s fork.
4. Initiation of a Pull Request to the main TaskGen repository.
**C.4** **Examples**
To illustrate the contribution and usage process, we provide the following examples:
**C.4.1** **Contributing an Agent**
The following code snippet demonstrates how to create and contribute an agent:

    from taskgen import *
    import os

    # Create your agent by specifying name and description
    my_agent = Agent('Helpful assistant', 'You are a generalist agent')

    # Example External Function
    def binary_to_decimal(x: int) -> int:
        '''Convert input <x: a binary number in base 2> to base 10'''
        return int(str(x), 2)

    # Example Internal Function
    sentence_style = Function(
        fn_description = 'Output a sentence with <obj> and <entity> in the style of <emotion>',
        output_format = {'output': 'sentence'},
        fn_name = 'sentence_with_objects_entities_emotion')

    # Assign functions
    my_agent.assign_functions(function_list = [binary_to_decimal, sentence_style])

    # Contribute your agent
    os.environ['GITHUB_USERNAME'] = '<your GitHub username>'
    os.environ['GITHUB_TOKEN'] = '<your GitHub token>'
    my_agent.contribute_agent(author_comments = 'This is a generalist agent')
**C.4.2** **Loading a Community Agent**
To load a contributed agent, users can employ the following simple code:

    from taskgen import *
    agent = Agent.load_community_agent("Helpful Assistant")
**C.4.3** **Generated Code**
The contribution process generates a Python representation of the agent. Below is an example of the
generated code:

    from taskgen import Agent, Function, Memory, Ranker
    import math

    # Author: @name_of_author
    # Author Comments: This is a generalist agent

    class HelpfulAssistant_abc(Agent):
        def __init__(self):
            var_binary_to_decimal = Function(
                fn_name="binary_to_decimal",
                fn_description='''Convert input <<x: int>: a binary number in base 2> to base 10''',
                output_format={'output_1': 'int'},
                examples=None,
                external_fn=binary_to_decimal,
                is_compulsory=False)

            var_sentence_with_objects_entities_emotion = Function(
                fn_name="sentence_with_objects_entities_emotion",
                fn_description='''Output a sentence with <obj> and <entity> in the style of <emotion>''',
                output_format={'output': 'sentence'},
                examples=None,
                external_fn=None,
                is_compulsory=False)

            super().__init__(
                agent_name="Helpful assistant",
                agent_description='''You are a generalist agent''',
                max_subtasks=5,
                summarise_subtasks_count=5,
                memory_bank={'Function': Memory(
                    memory=[], top_k=5,
                    mapper=lambda x: x.fn_name + ': ' + x.fn_description,
                    approach='retrieve_by_ranker',
                    ranker=Ranker(model='text-embedding-3-small', ranking_fn=None))},
                shared_variables={},
                get_global_context=None,
                global_context='''''',
                default_to_llm=True,
                code_action=False,
                verbose=True,
                debug=False
            )

            self.assign_functions(
                [var_binary_to_decimal, var_sentence_with_objects_entities_emotion]
            )
            self.assign_agents(
                []
            )

    # Supporting Functions
    def binary_to_decimal(x: int) -> int:
        '''Convert input <x: a binary number in base 2> to base 10'''
        return int(str(x), 2)
These examples demonstrate the simplicity of contributing and loading agents, as well as the structure of
the generated code that encapsulates the agent’s functionality.
**C.5** **Future Work and Community Feedback**
While efforts have been made to support diverse agent configurations, it is acknowledged that there may
be limitations in the current contribution process. Users are encouraged to provide feedback by raising
issues on the GitHub repository to continually improve this process.
-----
**D** **Dynamic Maze Navigation**
**D.1** **Maze Navigation**
We evaluate TaskGen with a StrictJSON Planner, Shared Variables and Global Context in a dynamic
maze navigation environment. It manages to solve the hardest 40x40 dynamic grid world all the time,
faring better than prior methods in Learning, Fast and Slow (Tan and Motani, 2023).
**D.1.1** **Background**
Figure D1: A sample maze environment of size 10x10. The actual experiment is 40x40. By default, the agent’s start
state is at the top left and the door is at the bottom right, but it can be varied. Obstacles change after half of the
total number of episodes. (Left) Obstacles in the first half form a vertical wall with a gap in the centre across the
mid-point. (Right) Obstacles in the second half form a horizontal wall with a gap in the centre across the mid-point.
**D.1.2** **Experimental Setup**
The environment used is a 2D grid world, where there are 40 by 40 squares. There are also some grid
squares which are denoted as obstacles and are not traversable. The agent starts off at a grid square and is
supposed to head towards the door (exit) position.
The obstacles change mid-way, and the start and end points vary randomly with each episode. This is a
difficult environment to evaluate learning as it is continuously changing. See Fig. D1 for an illustration.
**State Space. The agent is provided with both its own position and the door (exit) position.**
**Reward. This is a sparse reward environment and the agent will only be counted as completing the**
episode and receive a reward of 1 if it manages to reach the door before 40 × 40 time steps. Otherwise, it
will receive a reward of 0.
**Action Space. The available action space is discrete from the set {Up, Down, Left, Right}. There is no**
wraparound, and the agent will remain in its existing position should it collide with the edges of the grid
or with an obstacle.
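For concreteness, the following is a minimal sketch of the grid world just described. It is an illustrative reconstruction (class and method names are ours), not the environment code used in the experiments; start/exit sampling ignores obstacle overlap for brevity.

```python
import random

class DynamicMazeEnv:
    ACTIONS = {'Up': (-1, 0), 'Down': (1, 0), 'Left': (0, -1), 'Right': (0, 1)}

    def __init__(self, size=40):
        self.size = size
        self.max_steps = size * size  # episode fails after 40 x 40 time steps

    def reset(self, episode, total_episodes):
        # Start and exit positions vary randomly with each episode
        self.agent = (random.randrange(self.size), random.randrange(self.size))
        self.exit_pos = (random.randrange(self.size), random.randrange(self.size))
        # Obstacles change after half of the episodes: a vertical wall with a
        # central gap, then a horizontal wall with a central gap
        mid = self.size // 2
        if episode < total_episodes // 2:
            self.obstacles = {(r, mid) for r in range(self.size) if r != mid}
        else:
            self.obstacles = {(mid, c) for c in range(self.size) if c != mid}
        self.steps = 0
        return {'agent': self.agent, 'exit': self.exit_pos}

    def step(self, action):
        dr, dc = self.ACTIONS[action]
        r, c = self.agent[0] + dr, self.agent[1] + dc
        # No wraparound: stay in place when colliding with the edges or an obstacle
        if 0 <= r < self.size and 0 <= c < self.size and (r, c) not in self.obstacles:
            self.agent = (r, c)
        self.steps += 1
        done = self.agent == self.exit_pos or self.steps >= self.max_steps
        reward = 1 if self.agent == self.exit_pos else 0  # sparse reward
        return {'agent': self.agent, 'exit': self.exit_pos}, reward, done
```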
**Agents. We use a TaskGen Agent using "gpt-4o" as the LLM. We pit its performance against Fast & Slow**
(F&S) and three RL-based agents - Proximal Policy Optimisation (PPO) (Schulman et al., 2017), Trust
Region Policy Optimisation (TRPO) (Schulman et al., 2015) and Advantage Actor-Critic (A2C) (Mnih
et al., 2016).
-----
**D.1.3** **TaskGen Agent**
Figure D2: Schema of TaskGen Planner Interface with TaskGen Agent
**Introduction. As the environment is huge and difficult to navigate just by exploring and thinking on-the-**
go, we use a Planner to craft an overall plan. This is an alpha version of the Planner for TaskGen that has not been released officially yet. We use a Planner rather than simply a Meta Agent because we want rule-based methods to ensure each part of the plan is followed for continuity.
**Overall Schema. Both the Planner and the Agent will have access to the environmental states via Global**
**Context, and actions available. The difference in roles is that the Planner is in charge of the bigger picture**
from end-to-end and derives a list of steps to get the Agent from the current state to the
goal state. The Agent will focus on the more immediate situation, and will seek to execute the most
immediate step from the list of steps that the Planner has planned out. In order to ensure continuity of
plan, we follow the Planner’s plan all the way until a step failed, which will then cue replanning. In the
dynamic maze environment, this is if an obstacle is encountered or if the Agent reaches out of bounds.
After the entire plan is executed, we will check for task completion before breaking out of the loop. If the
task is completed (i.e. agent is at the exit position), we end the task. Otherwise, we will get the Planner to
replan again. This is shown in Fig. D2.
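The control flow described above can be summarised in a few lines. This is a schematic sketch only; the callables get_plan, run_step and task_completed are hypothetical stand-ins for the actual Planner and Agent calls.

```python
# Planner-Agent loop: follow the plan until a step fails, then replan; after the
# whole plan runs, check completion and replan if the goal is not yet reached.
def navigate(get_plan, run_step, task_completed, max_replans=10):
    for _ in range(max_replans):
        plan = get_plan()              # Planner derives a list of steps end-to-end
        for step in plan:
            ok = run_step(step)        # Agent executes the most immediate step
            if not ok:                 # obstacle hit or out of bounds: replan
                break
        else:
            if task_completed():       # entire plan executed: check task completion
                return True
            # otherwise fall through and ask the Planner to replan
    return False
```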
**Planner. For the Planner, we use a strict_json function with the inputs of Start Position, Exit Position,**
Obstacle Locations and Subtasks Completed. The Planner will use CoT prompting to get a plan from
current position to exit position. A sample CoT generation for the plan is as follows:
1. Example Start Position: (2, 0)
2. Example Exit Position: (2, 4)
3. Example Obstacle Positions: ["Obstacle from (0, 1) to (5, 1)"]
4. Example Obstacle Position Layout: There is a wall of obstacles from (0, 1) to (5, 1)
5. Example Thoughts: I need to get from (2, 0) to (0, 4) There are obstacles in the way. Since (2, 1) to (5,
1) has obstacles, I am only able to go past the wall via (6, 1)
-----
6. Example Plan: ["Move down 4 times from (2, 0) to (6, 0)", "Move right 4 times from (6, 0) to (6, 4)",
"Move up 4 times from (6, 4) to (2, 2)"]
**Agent. The Agent is equipped with a move function that takes in an action and the number of times**
to execute it. We first reset the Subtasks Completed of the Agent before running the task, to prevent
past history from affecting the current task. The task is the most immediate step of the Plan. We also
provide the Agent with its current position and exit position in Global Context. As the Agent traverses
the environment, we also update the Obstacle Locations encountered. If the obstacle is not present, it will
be removed from memory. If the obstacle is present but not in memory, it will be added to memory. If
there is no error in execution of the task, we proceed to the next item of the Plan. Otherwise, we will get
the Planner to re-plan.
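Building on the environment sketch above, the Agent's move function could be written as a TaskGen external function along the following lines. The shared_variables parameter is injected by TaskGen; the variable names "env" and "known_obstacles" and the failure convention are our assumptions.

```python
def move(action: str, times: int, shared_variables: dict) -> dict:
    '''Moves the agent in direction <action: one of Up, Down, Left, Right> for <times: int> steps'''
    env = shared_variables["env"]
    for _ in range(times):
        before = env.agent
        env.step(action)
        if env.agent == before:
            # Blocked by an obstacle or the grid edge: remember the blocked cell and
            # report the failure so that the Planner is asked to re-plan.
            dr, dc = env.ACTIONS[action]
            shared_variables["known_obstacles"].add((before[0] + dr, before[1] + dc))
            return {"Output": f"Failed: blocked while moving {action} at {before}"}
    return {"Output": f"Moved {action} {times} times to {env.agent}"}
```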
**Differences from prior methods. As LLM-based methods require more semantic understanding of the**
world to work, we give the TaskGen agent the full specifications of the environment description and
meanings of each action. Furthermore, to facilitate faster execution, we allow the TaskGen Agent to
execute the same action multiple times at a go. This is possible as the LLM is able to express arbitrary
output which prior methods struggle with. We are also able to externally store the observed obstacle
positions, and input these positions as in-context prompt to the TaskGen Agent. The obstacle positions
are grouped continuously before being fed to the LLM, like (0, 0), (0, 1) and (0, 2) will get grouped as
obstacles from (0, 0) to (0, 2). This is because the LLM is not very good at exact grid positions, and
abstractions like these help with understanding a wall of obstacles better. Another significant difference is
that in order to reduce the number of turns, instead of letting the Agent bump into an obstacle to discover
its presence, we give the TaskGen Agent a 3x3 square view of vision centered on itself to discover all
nearby obstacles. This is also more realistic as in real life an agent should have some vision to see what is
in the world.
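The grouping of contiguous obstacle cells mentioned above can be done with a small helper such as the sketch below (illustrative only; the actual grouping code may also merge cells along columns).

```python
def group_obstacles(cells):
    """Group horizontally contiguous obstacle cells into spans, e.g.
    (0, 0), (0, 1), (0, 2) -> 'Obstacles from (0, 0) to (0, 2)'."""
    segments = []
    for r, c in sorted(cells):
        if segments and segments[-1][1] == (r, c - 1):
            segments[-1][1] = (r, c)            # extend the current row segment
        else:
            segments.append([(r, c), (r, c)])   # start a new segment
    return [f"Obstacles from {a} to {b}" for a, b in segments]
```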
-----
**D.1.4** **Overall Results**
[Figure D3 chart: grouped bar chart of Solve Rate (%) for TaskGen, F&S, PPO, TRPO and A2C over the First Half, Second Half and All episodes. TaskGen scores 100% in every group, F&S scores between about 48.6% and 56.6%, and the RL agents (PPO, TRPO, A2C) all score below 15%.]
Figure D3: Solve rate (%) of various agents on a dynamic 40x40 navigation task for the first half, second half of
episodes after obstacle positions change, and across all episodes.
For the RL agents and F&S, we show the results across 100 episodes averaged across 10 seeds, as per the
original paper. For the TaskGen agent, we evaluate it with few-shot prompting without any training across
20 episodes with environment changeover after 10 episodes across 1 seed.
Overall, Fig. D3 shows that TaskGen performs the best compared to all other agents such as F&S, PPO,
TRPO, A2C. This is significant, as it shows that for complex environments, perhaps having a Planner is
critical for success and continuity of actions to achieve a long-term goal. It also shows the versatility of
TaskGen to be reconfigured for an agentic task such as navigation.
-----
**D.1.5** **Detailed Run-through of TaskGen Agent**
[Figure D4 chart: line plot titled "Comparison of Actual Steps taken vs Minimum Steps possible for each episode for TaskGen Agent"; the y-axis is the number of steps (up to 250), the x-axis is the episode number, with Minimum Steps and Actual Steps series and an "Obstacle Change" marker after episode 10.]
Figure D4: Comparison of Actual Steps taken vs Minimum Steps possible for each episode for TaskGen Agent.
Note the obstacle positions are changed after episode 10.
Fig. D4 shows that in general, TaskGen Agent is able to solve the episodes quite efficiently with planning.
The main cause of the higher actual step counts in some episodes is that, when some obstacle positions are unknown, the Planner is not perfect and sometimes chooses a position to backtrack to that does not work out. When the obstacles in the path are known, the Planner can usually generate a perfect plan, or can
correct itself quickly mid-way.
In general, as the number of episodes in the same environment increases, the knowledge of the obstacles improves, and hence so do the plan and the generated actions. Even if the obstacles are changed,
like in Episode 11, the Planner and TaskGen Agent combined are able to navigate and still clear the
environment.
**D.1.6** **Insights**
As we are trying to test TaskGen's ability to solve the maze, we intentionally use an LLM as the Planner and an LLM as the executor for the maze environment. However, such a logical pathfinding task is best done by rule-based deterministic methods like Breadth-First Search or the A* search algorithm. It may be better to
treat pathfinding with known obstacles as a problem solved by traditional pathfinding algorithms, and
simply get TaskGen to call such a function to do pathfinding.
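A rule-based pathfinder of the kind suggested above could be exposed to TaskGen as an external function. The following is a minimal sketch (function and parameter names are ours), not code used in the experiments.

```python
from collections import deque

def shortest_path(start, goal, obstacles, size=40):
    '''Returns a list of (row, col) cells from <start> to <goal> avoiding <obstacles>,
    or an empty list if no path exists.'''
    frontier, parent = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in obstacles and nxt not in parent):
                parent[nxt] = cell       # breadth-first search over free cells
                frontier.append(nxt)
    return []
```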
We note that the Planner was not able to perform well without few-shot prompting of a situation that might occur in the environment (e.g., a wall with a gap). This is a significant downside of using an LLM as an optimiser, as it does not optimise well. LLMs also do not understand 2D text grids perfectly, and hence the spatial awareness of the Planner is lacking, resulting in less robust plans.
We also tried to use native TaskGen without the Planner, but the LLM was not able to see the big picture that well, resulting in the LLM visiting the same squares again and again while trying to navigate past the wall. Planning is a difficult problem for an LLM, and it is best to offload it to a rule-based Planner.
-----
**E** **Escape Room Solving in TextWorld**
**E.1** **Introduction to Interactive Fiction / Escape Room Environments**
This appendix describes the implementation of an interactive fiction player as an agent. Interactive fiction
is a genre of computer game that pre-dates GUIs, with many of the games (and tools) originating in the
1980s. The Microsoft TextWorld project has delivered a system for building arbitrary games and provides
a framework for building agents to navigate these games.
The key game system in interactive fiction is the discovery of the game world. Players are presented with
limited information at one time and are required to recall or rediscover elements of the game "world".
Within advanced games, the game world may change without player interaction but this behaviour is not
present in TextWorld.
Another aspect of interactive fiction is discovering how to interact with it. Players issue commands on
each turn, typically in a terse pseudo-code, and the game attempts to interpret them. TextWorld has
optional support for providing the player/agent the list of acceptable commands at each turn. Agents in
this experiment will utilise these hints, if provided by the game. By design, the TaskGen agent developed
does not depend on any specific input from the game, and the few-shot examples are not taken from the test/development environment but are chosen to represent "reasonably complex" commands where subject
and object are each qualified.
Interactive fiction games may have counter-intuitive problems to solve to succeed in the game. e.g.: to
cook a carrot, grill it directly on a stove. For this developer, carrots aren’t often grilled, and things that are
grilled are rarely done so directly on a stove.
**E.2** **Conversation Class in TaskGen**
The TaskGen agent developed for this paper used the new Conversation Class interface, building on the
existing escape room example.
Figure E1: Block diagram of TextWorld and agent.
The Persistent Memories of the player/agent are aligned to the core systems of interactive fiction: commands, rooms (locations in the game), and objectives. The "Summary of Conversation" in the player/agent was useful as it allowed the agent to reflect on futile behaviour and move on to alternative
solutions.
-----
Structuring the Persistent Memory as "arrays" made the memories much more effective at guiding the
agent.
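As a rough sketch, the set-up described above could look like the following. The class name ConversationWrapper, the persistent_memory argument and the chat method follow TaskGen's conversational interface, but the exact memory fields used for the TextWorld player are our assumptions and may not match the actual implementation.

```python
from taskgen import Agent, ConversationWrapper

player = Agent('Interactive Fiction Player',
               'Plays interactive fiction games by issuing terse commands')
conversation = ConversationWrapper(
    player,
    persistent_memory={
        'Commands Learned': 'Array of commands the game has accepted so far',
        'Rooms Visited': 'Array of rooms (locations in the game) and their exits',
        'Objectives': 'Array of known objectives and their completion status',
    })
# Each game observation is passed in as the next chat message
reply = conversation.chat('You are in a kitchen. There is a carrot and a stove here.')
```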
**E.3** **Agents Used**
**Random Agent. The "random" agent is part of the TextWorld project and selects from the available**
commands at random. Without the commands provided, it cannot act.
**LLM Only Agent. The LLM Only "gpt-4o" agent is simply a chat session, where each "observation" of**
context from the game to the player is another chat message. "gpt-4o" was remarkably effective in these
circumstances.
**TaskGen Agent. The input to the TaskGen Agent is the same as the LLM Only agent. The TaskGen**
Agent utilises the Conversation Class with Persistent Memories to store a continued awareness of the
environment, which could potentially help it to make better decisions.
**E.4** **Experiment Setup**
The tests are from the TextWorld project examples. The variations relate to how detailed the "goals" (intermediate steps) are and how frequent the feedback from the game (points) is. As a proxy for goal solving, we treat each point as one goal fulfilled, so the total percentage of points earned in a game relative
to the total points is a proxy for total solve rate of all goals.
We halt each game at 100 turns, and run each game 10 times for each agent and report averaged scores.
All commands were truncated to prevent fatal buffer overflows in the 1980s-era game engine.
We vary three aspects of the environment. The first is goal description - detailed, brief, none. The second
is rewards - dense or sparse. The third is whether commands are provided or not. We test the agents across
the following environments, and report the score obtained:
1. Detailed Dense (commands provided)
2. Brief Sparse (commands provided),
3. None Sparse (commands provided),
4. Detailed Dense (commands not provided),
5. Brief Sparse (commands not provided),
6. None Sparse (commands not provided)
-----
**E.5** **Results**
[Figure E2 chart: grouped bar chart of scores (0-100) for the Random, LLM Only and TaskGen agents across the six environments: Detailed Dense (commands provided), Brief Sparse (commands provided), None Sparse (commands provided), Detailed Dense (commands not provided), Brief Sparse (commands not provided), None Sparse (commands not provided). Visible bar values include 96, 93, 88, 57, 42, 30, 30, 20, 20 and several zeros.]
Figure E2: Score for each Agent across 6 environments
Fig. E2 shows the scores for the various agents on 6 different kinds of environments. In general, the
TaskGen Agent performs the best, having a higher overall score (and hence solve rate) than LLM Only
and random. It is to be noted that the None Sparse environments may be too difficult for the agents, as
no goal is provided and the agents will need to search around until the right sequences of commands are
done. This highlights that having goals is very important for solving an environment efficiently.
**E.6** **Insights**
Reviewing the results, it seems likely that the LLM Only and TaskGen agents could (in some circumstances) have been more successful, if given more turns.
It has been observed that when there are multiple objectives, the TaskGen Agent may not complete all of
them dutifully and may think that it has completed some when it has not. This is a problem that could
potentially be solved with a rule-based plan follower like that of the Planner in Appendix D.
-----
**F** **Web-Browsing Agents**
This appendix details the implementation of Web-Browsing Agents in TaskGen.
The goal is to introduce the idea of Agents operating a program or web application using TaskGen's current capabilities and performing actions based on the user's query.
**F.1** **Agent Diagram/Flow**
Figure F1: Agent Diagram and beginning of agent flow. Consists of 1 Meta Agent and 1 Inner Agent. User performs
a query via terminal CLI that starts off agent flow.
-----
Figure F2: Inner Agent opens browser and performs search
Figure F3: Inner Agent opens browser and performs search
-----
**F.2** **Agent’s Action Space**
- informational_search:
**– Description: Performs a search query on Bing and saves the context of the search results page.**
**– Steps:**
1. Navigates to Bing with the given search query.
2. Captures the current state of the browser (URL, title, and page source).
3. Extracts and saves relevant content from the page to a file.
- navigational_search:
**– Description: Performs a search query on Bing, clicks the first result, and saves the context of**
the resulting page.
**– Steps:**
1. Navigates to Bing with the given search query.
2. Clicks on the first search result link.
3. Captures the current state of the browser (URL, title, and page source).
4. Extracts and saves relevant content from the page to a file.
- visit_page:
**– Description: Visits a specified URL and saves the context of the page.**
**– Steps:**
1. Navigates to the given URL.
2. Captures the current state of the browser (URL, title, and page source).
3. Extracts and saves relevant content from the page to a file.
- open_browser:
**– Description: Opens a web browser using Helium.**
**– Steps:**
1. Starts a Chrome browser session.
2. Returns a message indicating the browser has been opened.
- close_browser:
**– Description: Closes the web browser using Helium.**
**– Steps:**
1. Kills the current browser session.
2. Returns a message indicating the browser has been closed.
- summarise_context:
**– Description: summarises the content saved in a file (default: context.txt) using OpenAI’s**
GPT model.
**– Steps:**
1. Reads the content from the specified file.
2. Sends the content to OpenAI’s API to generate a summary.
3. Returns the generated summary.
-----
**F.3** **TaskGen Code for Web Browsing**
We use a Meta Agent with Inner Agents to solve the task of web browsing. Some of the code used is shown below:
**F.3.1** **Function Definition**
```python
def informational_search(query: str) -> str:
    go_to(f"https://www.bing.com/search?q={query}")
    header, content = _browser_state()
    save_context_to_file(header, content)
    return {
        "Output": f"Performed informational search for '{query}' and saved context."
    }
```
**F.3.2** **Meta Agent Creation**
```python
WebSurfer = Agent(
    "WebSurfer",
    "Performs web searches and navigates web pages. Always open the browser at the "
    "start of the task and close the browser at the end.",
    model="gpt-4o",
    default_to_llm=False,
).assign_functions(fn_list_3)
```
**F.3.3** **Boss Agent**
```python
bossagent = Agent(
    "WebNavigator",
    "Assists user to navigate the web. Always open the browser at the start of the "
    "task and close the browser at the end.",
    model="gpt-4o",
    default_to_llm=False,
)
```
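The Inner Agent is then attached to the Meta Agent and the user's query is run end-to-end. The two lines below are a sketch of that wiring using TaskGen's assign_agents and run methods; the exact CLI loop and query handling used in the experiments may differ.

```python
# Attach the Inner Agent to the Meta Agent and run a user query
bossagent.assign_agents([WebSurfer])
output = bossagent.run("Find the latest TaskGen release notes and summarise them")
```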
-----
**F.4** **Results of Web Browsing Agent**
Figure F4: Graphical representation of the success rates for each query tested. Each query was tested 5 times, and
the success rate is calculated as the percentage of successful attempts out of these 5 tests. The chart compares the
effectiveness of different queries, providing a clear visualization of the success rate for each query.
Fig. F4 shows the result of the Agent executing various queries. We achieve an overall solve rate of 69%.
While better prompt engineering may significantly improve the solve rate, our focus is to show a working
initial interface, so we did not over-engineer for this use case.
-----
**G** **MATH Dataset**
[Figure G1 diagram: the Math Problem Solver TaskGen Agent receives a problem statement (e.g., "Find the domain of the real-valued function ... Give the endpoints in your answer as common fractions (not mixed numbers or decimals)."), passes it to a Code Generator, feeds execution results to a Code Debugger with up to 3 retries if execution fails, and returns the solution (e.g., Roots: [1/2, 4/3]; Intervals: (1/2 <= x) & (x <= 4/3)).]
Figure G1: Math Problem Solver Agent
Leveraging LLMs for solving mathematical problems has become a widely researched area (Zhou et al.,
2023; Yang et al., 2023b). In this section, we explore the use of the TaskGen agent to tackle complex
mathematical problems across various domains. For our evaluations, we utilised the MATH dataset
(Hendrycks et al., 2021), which contains over 12,500 competition-level mathematical problems. We
focused specifically on the most challenging subset, Level 5, across the following categories: Algebra,
Pre-Algebra, Intermediate Algebra, Number Theory, and Counting and Probability. We randomly selected
20 problems from the test set of each category, resulting in a total of 100 problems in our test set, to assess
the TaskGen agent’s ability to solve these tasks.
To solve these challenging problems, we employed a TaskGen Agent called “Math Problem Solver”
powered by GPT-4o, as depicted in Figure G1. This agent is equipped with a specialised function that facilitates the generation, execution, and debugging of code necessary to tackle the given tasks. The
function has access to essential Python libraries, including numpy, sympy, math, and random.
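A function of this kind could be sketched as follows. This is a hedged illustration of the generate-execute-debug loop shown in Figure G1, not the actual equipped function; generate_code_with_llm is a hypothetical helper, and the convention that the generated code sets an `answer` variable is our assumption.

```python
import traceback

def solve_with_code(problem: str, generate_code_with_llm) -> dict:
    '''Generates, executes and (if needed) debugs Python code that solves <problem>'''
    error = None
    for _ in range(3):                       # retry up to 3 times on execution failure
        code = generate_code_with_llm(problem, previous_error=error)
        namespace = {}
        try:
            exec(code, namespace)            # generated code is expected to set `answer`
            return {"Output": namespace.get("answer", "No answer variable set")}
        except Exception:
            error = traceback.format_exc()   # feed the traceback back for debugging
    return {"Output": f"Execution failed after 3 attempts: {error}"}
```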
-----
**Evaluation Result.** In Figure G2, we provide the evaluation results of our TaskGen agent equipped with
the function described above and the agent without any equipped functions.
[Figure G2 chart: bar chart of the number of correct answers out of 20 per category (Algebra, Prealgebra, Intermediate Algebra, Number Theory, Counting and Probability) for the agent with the equipped function versus the agent without it. The visible bar values are 17, 17, 15, 12, 12, 11, 10, 9, 8 and 4, with the equipped-function agent scoring higher in every category.]
Figure G2: Quantitative results of TaskGen agents on the subset of the MATH dataset.
From our experiments, we found that on the challenging Level-5 problems, the TaskGen agent with
Equipped Functions achieved an average accuracy of 71%, while the TaskGen agent without the Equipped
Functions achieved only 44% accuracy. For evaluation, we manually verified the generated solution
against the provided ground truth solution. These results demonstrate that, in order to solve these
challenging Level-5 problems, equipping the agent with code generation and debugging capabilities leads
to more accurate solutions of mathematical problems.
-----
**H** **RAG-based Question Answering on NaturalQuestions Dataset**
In this section, we describe the development and functionality of a Retrieval-Augmented Generation (RAG)
system using TaskGen. This system integrates one TaskGen agent, known as the “User Agent,” along
with two critical TaskGen functions: ContextFetchFunction and AnswerFunction. These components
constitute the fundamental operations of our system.
**H.1** **System Overview**
1. ContextFetchFunction: This function accepts a user’s query and a batch number, retrieving the
relevant context. It is designed to incrementally fetch more context if the initial retrieval proves
inadequate.
2. AnswerFunction: After receiving context, this function generates an answer based on the context
available. If the context is insufficient to resolve the query, AnswerFunction returns “no answer.”
3. User Agent: The orchestrator of the entire Q&A cycle, the User Agent is responsible for managing
both the ContextFetchFunction and the AnswerFunction. It initiates the process by retrieving
context for the user’s query and continues to fetch additional context in subsequent batches until a
satisfactory answer is found or the interaction limit is reached.
**H.2** **Detailed Process**
- Query Submission: The user submits a query to the User Agent.
- Context Retrieval: The User Agent invokes ContextFetchFunction with the initial query and a
starting batch number (0).
- Answer Generation: With the context obtained, the User Agent next activates AnswerFunction. If
the context sufficiently answers the query, a response is generated. Otherwise, it issues “no answer.”
- Incremental Fetching: If “no answer” is received, the User Agent increments the batch number and
re-engages ContextFetchFunction to obtain more context. This iterative process is capped at five
interactions (max interactive retrieval count).
Figure H1: Illustration of the Interactive Retrieval-Augmented Generation (RAG) Question and Answer Flow. The
diagram sequentially represents the process from step (0) to (7), detailing the interaction between the User, User
Agent, ContextFetchFunction, and AnswerFunction. Each numbered marker (num) in the diagram corresponds to a
specific step in the query-answer cycle.
-----
**H.3** **Example**
Consider the query: “What is the capital of France?”
- First Batch (Batch 0):
1. Paris is a major city in France.
2. France is known for its culture and cuisine.
_AnswerFunction Output: “no answer” (the context does not explicitly state Paris as the capital)._
- Second Batch (Batch 1):
1. The capital of France is Paris.
2. Paris is famous for the Eiffel Tower.
_AnswerFunction Output: “The capital of France is Paris.”_
This example illustrates the system’s ability to handle complex queries by sequentially enhancing the
context until a definitive answer can be provided.
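The orchestration performed by the User Agent can be summarised by the following sketch. fetch_context and answer_from_context stand in for ContextFetchFunction and AnswerFunction (their real TaskGen wrappers are not shown), and the cap of five batches follows the max interactive retrieval count described above.

```python
def interactive_rag(query, fetch_context, answer_from_context, max_batches=5):
    context = []
    for batch in range(max_batches):
        context.extend(fetch_context(query, batch))    # fetch the next batch of context
        answer = answer_from_context(query, context)   # try to answer from what we have
        if answer != "no answer":
            return answer                              # satisfactory answer found
    return "no answer"                                 # interaction limit reached
```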
**H.4** **Technical Framework and Evaluation Methodology**
This section outlines the technical and methodological specifics employed in our study to develop and
evaluate the Retrieval-Augmented Generation (RAG) system.
**H.4.1** **Technology Stack**
- TaskGen Agents and Functions: Our system utilises TaskGen’s capabilities extensively. The core
components, namely the User Agent, ContextFetchFunction, and AnswerFunction, are powered
by GPT-3.5. This model was chosen for its lower cost and robust performance in natural language
understanding and generation.
- Embedding Storage and Retrieval: We employed Postgres PGvector to manage the storage and retrieval of embeddings. For retrieval we use k = 10, configuring our system to fetch the top 10 most relevant vector embeddings for each query (a sketch of such a query is shown after this list).
- Embedding Model: The text-embedding-ada-002 model was used to convert text data into vector
embeddings. These embeddings represent the textual data in a format amenable to similarity
comparisons and retrieval operations.
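The following is a hedged sketch of the top-k retrieval described above. The table and column names (documents, content, embedding) and the connection string are assumptions rather than the actual schema, and the OpenAI call uses the >=1.0 client interface.

```python
import openai
import psycopg2

def fetch_top_k(query: str, k: int = 10):
    emb = openai.embeddings.create(model='text-embedding-ada-002', input=query).data[0].embedding
    vector_literal = '[' + ','.join(str(x) for x in emb) + ']'
    with psycopg2.connect('dbname=rag') as conn, conn.cursor() as cur:
        # pgvector's <-> operator orders rows by distance to the query embedding
        cur.execute(
            'SELECT content FROM documents ORDER BY embedding <-> %s::vector LIMIT %s',
            (vector_literal, k))
        return [row[0] for row in cur.fetchall()]
```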
**H.4.2** **Dataset and Benchmarking**
[• Dataset: The Natural Questions dataset was chosen for its comprehensive collection of real-world](https://github.com/google-research-datasets/natural-questions)
questions. Our study focuses on the first 2,000 entries of the development split validation set,
providing a balanced mix of complexity and coverage.
[• Evaluation Metrics: To assess the effectiveness of our RAG system, we used Google’s nq_eval](https://github.com/google-research-datasets/natural-questions/blob/master/nq_eval.py)
[script. This script is widely recognised for its rigour in measuring the precision and accuracy of](https://github.com/google-research-datasets/natural-questions/blob/master/nq_eval.py)
answers provided by question-answering systems.
-----
**H.4.3** **Evaluation Results**
We conducted a comprehensive evaluation to compare the performance of non-interactive versus interactive
(via TaskGen) retrieval methods. The non-interactive retrieval approach involves a single invocation of an
LLM using context from the vector database to answer the query. This method assumes that the initial
context contains all the necessary information to generate an answer. In contrast, the interactive retrieval
method dynamically fetches and refines context based on the ongoing interaction with the user’s query,
allowing for a more adaptive and potentially accurate response as additional information is incorporated
in successive retrieval steps.
Figure H2: Graphical representation of the benchmark results comparing F1 Score, Precision, and Recall for
Non-Interactive versus Interactive Retrieval via TaskGen (k = 10 used for retrieval in both).
-----
| [
"John Chong Min, Tan",
"Prince, Saroj",
"Hardik, Maheshwari",
"Bharat, Runwal",
"Brian Lim Yi, Sheng",
"Richard, Cottrill",
"Alankrit, Chona",
"Ambuj, Kumar",
"Mehul, Motani"
] | 2024-07-22T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2407.15734 | https://arxiv.org/abs/2407.15734 | https://www.semanticscholar.org/paper/5777f18fa94baca77948ccdcc01fc073d246a04c |
Teaching the Structure of First-order Formulas to Neural Networks | Logical reasoning as performed by human mathematicians involves an understanding of terms and formulas as well as transformations on them. In this paper we consider a number of syntactic and semantic properties of logical expressions. Based on these we extract and generate data sets. We develop models that encode these formulas in a continuous vector space while preserving the aforementioned properties. We train, evaluate and compare multiple models on the extracted data sets. Furthermore, we show that these models generalize to properties they have not explicitly been trained on. | null | # Teaching the Structure of First-order Formulas to Neural Networks
Julian Parsert[1][∗], Stephanie Autherith[1], and Cezary Kaliszyk[1][∗]
University of Innsbruck, Innsbruck, Austria
```
[email protected]
[email protected]
[email protected]
```
**Abstract**
Logical reasoning as performed by human mathematicians involves an understanding of
terms and formulas as well as transformations on them. In this paper we consider a number
of syntactic and semantic properties of logical expressions. Based on these we extract and
generate data sets. We develop models that encode these formulas in a continuous vector
space while preserving the aforementioned properties. We train, evaluate and compare
multiple models on the extracted data sets. Furthermore, we show that these models
generalize to properties they have not explicitly been trained on.
Many previous examples can be found where artificial intelligence technology was applied to (interactive) theorem proving problems. While Färber and Brown [2] use simple machine learning algorithms
for proof search in theorem proving, Loos et al. [6] use a deep learning approach. Also other
tasks such as tactic and premise selection have been improved using different types of artificial
intelligence [3, 7, 8, 4]. All of these examples and many more apply their machine learning to
specific problems and extract features, engineer data etc. that precisely describes the problem
at hand. We propose a learned encoding and embedding of (first-order) formulas that can later
be used by more complex as well as naive models alike. Clearly, encodings of formulas need to
carry syntactic and semantic information about the original formula. In addition, one would
like such encodings to be relation-preserving. Ideally, the encoding of two “related” formulas
will carry that relation as well. As an example, when applying these encodings to premise
selection, one could imagine that useful premises would have vector representations that are
close in distance to the conjecture in question. Similarly, one could imagine application to
clause selection for theorem proving, etc.
**Learning Framework** We propose a deep learning based encoding. Following the results
from [1], we use CNN-based and LSTM-based architectures. Our encoding networks are trained on character-level embeddings as shown in Figure 1. This learning framework essentially consists of two
main parts, the encoding network (which we are mainly interested in) and a set of classifiers.
The models are trained by propagating the loss that is obtained from the classifiers back to
the encoding network. Once the training phase is done, we discard the classifiers and use the
encodings.
**Properties** The properties which are recognized in the classifiers are extracted beforehand.
For now the considered properties are the subformula relation, modus ponens, term-formula
distinction, well-formedness, unifiability, and alpha-equivalence. It is worth noting that there
are two iterations of the subformula classification, one multilabel classification with one input
_∗This work is supported by the European Research Council (ERC) grant no 714034 SMART._
-----
and one binary classification with two inputs. These formulas or pairs of formulas are fed to
the learning framework where each of the formulas is first encoded by the encoding network.
Then, the encodings are used as input to different types of classifiers which, as mentioned
above, propagate the loss back to the encoding. The properties were chosen by considering the application of the encoding to (interactive) theorem proving. The two main focuses were 1) the structure of first-order formulas, and 2) useful properties for theorem proving. For the former we chose the properties well-formedness and subformula, whereas for the latter unifiability and modus ponens are important. Properties such as term-formula classification and alpha-equivalence form an important part of both. Syntactic and structural properties of first-order logic nowadays form an important part of premise selection [5], whereas unifiability is an important property of resolution in theorem proving. We leave it up to future work to
consider the minimality or the addition of these or additional properties.
**Encoding Models** We consider different models for our encoding network. They can be split into a group of CNN-based models and a group of LSTM-based models, as shown in Figure 2. All models first pass through an embedding layer. After that, we have either a set of convolution/pooling layers or a set of LSTM layers, depending on the model. On top of the model-specific layers we put a final fully connected layer. We consider two types of networks: encoding networks, which are functions of the form N^n → R^n, and embedding networks, which correspond to N^n → R^m where m ≤ n. The latter is achieved by appending a projection layer to the encoding. Hence,
we get a lower dimensional continuous representation of formulas.
**Results** The training and evaluation data are split 9:1 before the training phase. The evaluation seems to confirm the results achieved in [1], where the CNN-based models outperform the LSTM-based models. The best CNN-based models are the ones with a fully connected layer following the convolution/pooling layers. On the seven properties that we considered, the CNN-based networks achieved between 80% and 100% accuracy. The 100% results came from the classification of terms/formulas and alpha-equivalence. The LSTM-based models performed similarly on 4 out of the 7 considered properties; however, they perform considerably worse when tasked with classifying modus ponens, well-formedness, and subformulas. When trying to recognize a modus ponens inference step, the best LSTMs only reach an accuracy of 61%, while the best CNNs reach up to 99%. We also used the encodings and embeddings of formulas to train simpler models such as SVMs. Here, SVMs were able to recognize whether or not a term contained a variable with an accuracy of 90%. A nearest-neighbour analysis also suggests that the concept of a variable is learned by the network.
In the future we aim for two things, adding additional properties as well as incorporating
these encodings in actual theorem proving problems.
## References
[1] Alexander A. Alemi, François Chollet, Niklas Eén, Geoffrey Irving, Christian Szegedy, and Josef Urban. DeepMath - deep sequence models for premise selection. In Daniel D. Lee, Masashi Sugiyama,
Ulrike V. Luxburg, Isabelle Guyon, and Roman Garnett, editors, Advances in Neural Informa_tion Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016,_
_December 5-10, 2016, Barcelona, Spain, pages 2235–2243, 2016._
[2] Michael Färber and Chad E. Brown. Internal guidance for Satallax. In Nicola Olivetti and Ashish
Tiwari, editors, Automated Reasoning - 8th International Joint Conference, IJCAR 2016, Coimbra,
-----
[Figure 1 diagram: an input formula φ passes through the Encoding Network (embedding layer, then convolution and pooling layers) to produce emb(φ); one or two such encodings are fed to the classifiers (well-formedness, ..., modus ponens).]
Figure 1: This graph shows the training framework we developed. The bottom
area contains the classifiers that get one or
more continuous representations of formulas emb(φ) as input. The encoding networks are described subsequently (cf. Figure 2).
[Figure 2 diagram: both model families start with an embedding layer; the CNN-based models stack convolution and pooling layers while the LSTM-based models stack bidirectional LSTM layers; both end with fully connected layer(s) and an optional projection layer producing the encoding of φ.]
Figure 2: The encoding models we considered with the layers that the input passes
through. On the left we show the CNN
based models, while on the right the LSTM
based models are presented. The dashed
boxes describe layers that are not present
in each model of that type.
_Portugal, June 27 - July 2, 2016, Proceedings, volume 9706 of Lecture Notes in Computer Science,_
pages 349–361. Springer, 2016.
[3] Thibault Gauthier, Cezary Kaliszyk, and Josef Urban. Tactictoe: Learning to reason with HOL4
tactics. In Thomas Eiter and David Sands, editors, LPAR-21, 21st International Conference on
_Logic for Programming, Artificial Intelligence and Reasoning, volume 46 of EPiC Series in Com-_
_puting, pages 125–143. EasyChair, 2017._
[4] Andrzej Stanislaw Kucik and Konstantin Korovin. Premise selection with neural networks and
distributed representation of features. CoRR, abs/1807.10268, 2018.
[5] Daniel Kühlwein, Jasmin Christian Blanchette, Cezary Kaliszyk, and Josef Urban. MaSh: Machine
learning for sledgehammer. In Sandrine Blazy, Christine Paulin-Mohring, and David Pichardie,
editors, Interactive Theorem Proving, pages 35–50, Berlin, Heidelberg, 2013. Springer Berlin Heidelberg.
[6] Sarah Loos, Geoffrey Irving, Christian Szegedy, and Cezary Kaliszyk. Deep network guided proof
search. In Thomas Eiter and David Sands, editors, LPAR-21. 21st International Conference on Logic
_for Programming, Artificial Intelligence and Reasoning, volume 46 of EPiC Series in Computing,_
pages 85–105. EasyChair, 2017.
[7] Yutaka Nagashima and Yilun He. PaMpeR: proof method recommendation system for Isabelle/HOL. In Marianne Huchard, Christian Kästner, and Gordon Fraser, editors, Proceedings
_of the 33rd ACM/IEEE International Conference on Automated Software Engineering, ASE 2018,_
_Montpellier, France, September 3-7, 2018, pages 362–372. ACM, 2018._
[8] Mingzhe Wang, Yihe Tang, Jian Wang, and Jia Deng. Premise selection for theorem proving by
deep graph embedding. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach,
Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, Advances in Neural Information
_Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9_
_December 2017, Long Beach, CA, USA, pages 2783–2793, 2017._
-----
| [
"Cezary, Kaliszyk",
"Julian, Parsert",
"Stephanie, Autherith"
] | 2019-01-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
Textualized Agent-Style Reasoning for Complex Tasks by Multiple Round LLM Generation | Chain-of-thought prompting significantly boosts the reasoning ability of large language models but still faces three issues: hallucination problem, restricted interpretability, and uncontrollable generation. To address these challenges, we present AgentCOT, a llm-based autonomous agent framework, which can solve complex problems in an agent-style manner by multiple round LLM generation. At each step, AgentCOT selects an action and executes it to yield an intermediate result with supporting evidence. In addition, we integrate the step's index into the reasoning process to form a graph structure for complex inference logic. We introduce two new strategies to enhance the performance of AgentCOT.We conduct extensive experiments to verify the effectiveness of our method on six common benchmarks. Results exhibit that our method brings in substantial improvements over current competitive approaches. | This work presents AgentCOT, a llm-based autonomous agent framework, which can solve complex problems in an agent-style manner by multiple round LLM generation by integrating the step's index into the reasoning process to form a graph structure for complex inference logic. | [
"Chen, Liang",
"Yong, Wang",
"Zhifan, Feng",
"Zihe, Liu",
"Wenbin, Jiang",
"Jinan, Xu",
"Yufeng, Chen"
] | 2024-09-19T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2409.12411v1 | https://arxiv.org/abs/2409.12411 | https://www.semanticscholar.org/paper/db380a586573ba18ef34bcc4888d3fb8323600d3 |
|
The Impact of Language on Arithmetic Proficiency: A Multilingual Investigation with Cross-Agent Checking Computation | This paper critically examines the arithmetic capabilities of Large Language Models (LLMs), uncovering significant limitations in their performance. Our research reveals a notable decline in accuracy for complex calculations involving large numbers, with addition and subtraction tasks showing varying degrees of proficiency. Additionally, we challenge the notion that arithmetic is language-independent, finding up to a 10% difference in performance across twenty languages. The study also compares self-verification methods with cross-agent collaborations, showing that a single model often outperforms collaborative approaches in basic arithmetic tasks. These findings suggest a need to reassess the effectiveness of LLMs in tasks requiring numerical accuracy and precision. | A notable decline in accuracy for complex calculations involving large numbers, with addition and subtraction tasks showing varying degrees of proficiency, suggests a need to reassess the effectiveness of LLMs in tasks requiring numerical accuracy and precision. | # The Impact of Language on Arithmetic Proficiency: A Multilingual Investigation with Cross-Agent Checking Computation
**Chung-Chi Chen,[1]** **Hiroya Takamura,[1]** **Ichiro Kobayashi,[2]** **Yusuke Miyao[3]**
1
Artificial Intelligence Research Center, AIST, Japan
2
Ochanomizu University, Japan
3
University of Tokyo, Japan
[email protected], [email protected],
[email protected], [email protected]
**Abstract**
This paper critically examines the arithmetic capabilities of Large Language Models (LLMs),
uncovering significant limitations in their performance. Our research reveals a notable decline in accuracy for complex calculations involving large numbers, with addition and subtraction tasks showing varying degrees of proficiency. Additionally, we challenge the notion that arithmetic is language-independent,
finding up to a 10% difference in performance across twenty languages. The study
also compares self-verification methods with
cross-agent collaborations, showing that a single model often outperforms collaborative approaches in basic arithmetic tasks. These findings suggest a need to reassess the effectiveness
of LLMs in tasks requiring numerical accuracy
and precision.
**1** **Introduction**
Large language models (LLMs) have garnered significant attention over the past year. Several studies have re-evaluated various tasks to assess the
capabilities of general-purpose LLMs (Wadhwa
et al., 2023; Zhang et al., 2023; Ho et al., 2023).
A topic of particular interest is mathematical and
numerical reasoning (Wei et al., 2022; Imani et al.,
2023; Gaur and Saunshi, 2023; Davis, 2024). Figure 1 illustrates an instance where LLMs generate
step-by-step operational expressions while solving
a math word problem, named Chain-of-Thought
Prompting (Wei et al., 2022). While previous research indicates improved performance by LLMs
in solving math word problems, there is a scarcity
of discussion on whether LLMs truly comprehend
the operations they generate. This paper delves into
this issue through extensive experimentation and
reveals a notable limitation of LLMs in arithmetic.
Unlike other semantic tasks such as humor estimation (Hossain et al., 2020) or emotion prediction (Milkowski et al., 2021), where different labels
Figure 1: An example of arithmetic in LLM’s output in
Wei et al. (2022), and an example of the failure case of
LLM in arithmetic and checking computation.
may emerge due to language and cultural variations, arithmetic is typically considered language-free and culture-free, as the same expression should
yield a consistent answer regardless of these factors.
In this study, we investigate twenty languages and
demonstrate that this assumption does not hold in
practice. Our findings reveal that the overall performance can vary by up to 10% in accuracy simply
by altering the language when utilizing LLMs for
arithmetic tasks.
Conversely, addition and subtraction are fundamental yet critical tasks in arithmetic. As depicted
in Figure 1, it is commonly assumed in prior research that LLMs are capable of solving such elementary calculations. Contrary to this belief, our
study reveals a significant decline in performance
for calculations involving more than five digits in
addition and more than four digits in subtraction.
Furthermore, we observe a 20% discrepancy in
accuracy between addition and subtraction tasks.
These findings underscore the need to reassess the
-----
extent to which LLMs genuinely comprehend the
principles of basic arithmetic.
Finally, checking computation is a crucial step
in human arithmetic processing. We initially investigate different prompts to examine the extent to
which performance alters with different approaches.
Besides self-verification by the same model, our
study also delves into cross-agent checking. Contrary to prior research, which indicates that multiagent communication can enhance performance in
contexts such as software development (Qian et al.,
2023) and generated-text evaluation (Chan et al.,
2023), our findings suggest that a single model
surpasses cross-agent collaboration in simple arithmetic tasks. This challenges the prevailing notion
that collaborative approaches always yield superior
results in NLP tasks.
**2** **Related Work and Preliminary**
Arithmetic computation forms the cornerstone of
mathematical capability. Earlier studies (Wies
et al., 2023; Liu and Low, 2023) classify arithmetic
tasks into two groups: learnable and unlearnable,
and Dziri et al. (2024) demonstrated that LLMs fail
at multi-digit multiplication. Tasks categorized as
learnable include copying, splitting, comparison,
ordering, addition, subtraction, and n-digit versus
1-digit multiplication/division. It is anticipated that
model performance would be robust when trained
specifically on these learnable tasks. Supporting
this, Chen et al. (2023a) provides evidence for the
comparison task, where models achieve a 99% accuracy rate after straightforward fine-tuning with
artificially generated datasets. However, this falls
outside the purview of our paper, as our focus is on
the capabilities of general-purpose LLMs trained
with commonly available resources. In this study,
we specifically investigate addition and subtraction
within a multilingual context, a subject seldom addressed in previous research.
On the other hand, checking computation is another seldom-explored area of prior studies. Drawing inspiration from Berglund et al. (2023), which
demonstrated that LLMs trained on the premise
“A is B” struggle to comprehend “B is A” (reversal curse), our research investigates the validity of
these findings in arithmetic tasks. Advancing this
inquiry, we observe that communicative agents exhibit superior performance compared to the use of
a single LLM in various tasks, as noted in many recent studies (Hong et al., 2023; Chen et al., 2023b;
Qian et al., 2023; Chan et al., 2023). Building
upon this trend, our study delves into the realm
of cross-agent checking computation. Our study
demonstrates that LLMs currently lack the capability for self-correction in basic arithmetic scenarios,
even through LLM interaction.
**3** **Experimental Setting**
**3.1** **Dataset**
In this research, we create an extensive test set comprising 39,708 instances for experimental analysis.
Each instance consists of two numbers, ranging
from 1 to 16 digits, combined with either an addition or subtraction operator. Examples from the
dataset include simple expressions like “1 + 1 =
” and more complex ones such as “2468 - 1357
= ”. The dataset is evenly split, with 50% of the
instances being addition expressions and the remaining 50% subtraction expressions. Instead of
presenting equations directly to the LLMs, we employ a standardized prompt: Answer the follow_ing expression, please only reply with the answer:_
_[Expression]. This prompt is translated and used_
across 20 different languages: English, Spanish,
French, German, Simplified Chinese, Traditional
Chinese, Russian, Japanese, Italian, Dutch, Korean, Portuguese, Swedish, Finnish, Danish, Polish,
Hindi, Turkish, Greek, and Thai. The input to the
model combines both the prompt and the arithmetic
expression. This approach allows us to assess the
LLMs’ arithmetic capabilities in a controlled and
consistent manner. We evaluate the performance
based on the accuracy.
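The following is a hedged sketch of how such 1-16 digit addition/subtraction instances could be generated and wrapped in the English prompt; the exact sampling scheme used in the paper is not specified beyond what is described above.

```python
import random

PROMPT = "Answer the following expression, please only reply with the answer: {expr}"

def make_instance(op: str, digits_a: int, digits_b: int):
    a = random.randint(10 ** (digits_a - 1), 10 ** digits_a - 1) if digits_a > 1 else random.randint(0, 9)
    b = random.randint(10 ** (digits_b - 1), 10 ** digits_b - 1) if digits_b > 1 else random.randint(0, 9)
    expr = f"{a} {op} {b} ="
    gold = a + b if op == "+" else a - b
    return PROMPT.format(expr=expr), gold

prompt, gold = make_instance("+", 5, 5)   # e.g. a 5-digit addition instance
```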
**3.2** **Approach**
In this study, we primarily utilize GPT-3.5[1] for experimental purposes and compare its performance
with PaLM-2[2] using English instances. To assess
the impact of language on arithmetic performance,
GPT-3.5 is employed to process 39,708 instances
across 20 different language settings, amounting
to a total of 794,160 instances. Since PaLM-2 is
limited to English, a corresponding set of English
instances is used for comparative analysis.
Furthermore, we investigate whether LLMs can
verify their calculations and whether cross-LLM
verification enhances performance. In this experiment, the response from the Answerer (either ChatGPT or PaLM-2) is input into the prompt of the
[1https://chat.openai.com](https://chat.openai.com)
[2https://developers.generativeai.google/](https://developers.generativeai.google/)
-----
Overall Addition Subtraction
Rank Language Acc. Rank Language Acc. Rank Language Acc.
1 **English** 62.44 1 Thai 67.60 1 **English** 60.64
2 Japanese 62.40 2 Korean 66.51 2 Japanese 60.60
3 Trad. Chinese 61.57 3 Turkish 66.38 3 Trad. Chinese 59.76
4 Dutch 61.21 4 German 65.33 4 Dutch 58.42
5 German 61.19 5 Spanish 64.60 5 Russian 57.34
6 Spanish 60.66 6 Portuguese 64.28 6 German 57.06
7 Italian 59.93 7 Danish 64.27 7 Spanish 56.71
8 Russian 59.92 8 **English** 64.24 8 Italian 55.96
9 Portuguese 59.86 9 Japanese 64.21 9 Portuguese 55.45
10 Turkish 59.54 10 Dutch 64.01 10 Finnish 54.17
11 Danish 59.01 11 Italian 63.89 11 Polish 54.17
12 Sim. Chinese 58.47 12 Swedish 63.87 12 Sim. Chinese 54.10
13 Polish 58.35 13 Trad. Chinese 63.38 13 Danish 53.75
14 Swedish 58.16 14 Sim. Chinese 62.83 14 Greek 53.12
15 Finnish 57.94 15 French 62.69 15 Turkish 52.70
16 Thai 57.94 16 Polish 62.54 16 Swedish 52.46
17 Greek 57.81 17 Greek 62.51 17 French 51.11
18 French 56.90 18 Russian 62.49 18 Thai 48.27
19 Korean 56.28 19 Finnish 61.71 19 Korean 46.04
20 Hindi 51.32 20 Hindi 61.27 20 Hindi 41.37
Average 59.05 Average 63.93 Average 54.16
Standard Deviation 2.52 Standard Deviation 1.62 Standard Deviation 4.83
Table 1: GPT-3.5 performance in arithmetic using prompts in different languages (%). Trad. and Sim. Chinese
denote traditional and simplified Chinese. Acc. denotes accuracy.
Overall Addition Subtraction
All 1-5 digits 6-8 digits 16 digits All 1-5 digits 6-8 digits 16 digits All 1-5 digits 6-8 digits 16 digits
GPT-3.5 62.44 93.40 57.06 25.08 64.24 98.26 51.41 33.61 60.64 88.53 62.71 16.54
PaLM-2 81.51 97.88 87.63 31.76 89.91 98.56 96.50 54.01 73.10 97.19 78.76 9.51
Table 2: GPT-3.5 vs. PaLM-2 (%).
Verifier (either ChatGPT or PaLM-2), who is then tasked with verifying the accuracy of the answer. If the response is incorrect, the Verifier is expected to provide the correct solution.
**4** **Evaluation Results**
**4.1** **Multilingual Examination**
Basic arithmetic is universally recognized as a fundamental aspect of common sense, expected to yield consistent results irrespective of geographical or cultural differences. This section posits that arithmetic performance remains relatively stable, regardless of the language employed in the task. Table 1 offers substantial evidence challenging this assumption. Firstly, arithmetic performance in English surpasses that of other languages, albeit marginally, with respective scores of 62.44%, 64.24%, and 60.64% in overall, addition, and subtraction tasks. Secondly, a significant disparity exists between the highest-performing language (English, 62.44% overall) and the lowest (Hindi, 51.32% overall), a gap of more than 11%. Thirdly, GPT-3.5 performs better on addition than on subtraction in every language, with a higher standard deviation noted in subtraction scores among the different languages. Fourthly, there is a notable divergence in the arithmetic abilities of traditional Chinese and simplified Chinese, particularly in subtraction, suggesting limited transferability of arithmetic skills across even closely related languages.
These observations highlight several topics for future exploration. (1) Our findings reveal that the arithmetic capabilities of LLMs hover just above the 60% threshold. This has implications for numerical reasoning studies that presume LLM proficiency in computing expressions, as illustrated in Figure 1; such studies might benefit from focusing on enhancing basic arithmetic skills. (2) The language used significantly affects arithmetic performance, underscoring the need to consider linguistic variables in numeracy assessments and to develop language-independent methods for solving mathematical problems.
-----
Answerer Verifier Overall Addition Subtraction Improvement
GPT-3.5 62.42 64.02 60.82 -0.02
Self-Checking PaLM-2 73.64 81.18 66.10 -7.87
GPT-3.5 PaLM-2 73.25 78.25 68.25 **10.81**
Cross-Agent Checking
PaLM-2 GPT-3.5 76.75 88.37 65.13 -4.76
Table 3: Experimental results of checking computation (%). Positive values signify overall performance enhancement, while negative values indicate a decline in performance.
Carry Non-Carry Borrow Non-Borrow
GPT-3.5 63.60 93.63 59.34 84.92
PaLM-2 89.89 91.04 71.99 93.68
Table 4: Performance on basic arithmetic concepts (%).
| Model | Input | Overall | Addition | Subtraction |
| --- | --- | --- | --- | --- |
| GPT-3.5 | Expression Only | 51.64% | **64.85%** | 38.43% |
| GPT-3.5 | English Prompt | **62.44%** | 64.24% | **60.64%** |
| GPT-4 | Expression Only | **89.24%** | 92.41% | **86.08%** |
| GPT-4 | English Prompt | 86.06% | **92.63%** | 79.16% |
| PaLM-2 | Expression Only | 79.96% | 89.16% | 70.75% |
| PaLM-2 | English Prompt | **81.51%** | **89.91%** | **73.10%** |
| Gemini | Expression Only | 75.19% | 81.00% | 69.38% |
| Gemini | English Prompt | **77.41%** | **85.03%** | **69.79%** |

Table 5: Impact of language on arithmetic proficiency.
**4.2** **Checking Computation**

Computation checking represents a critical capability in arithmetic, with the underlying hypothesis being that LLMs' performance can be enhanced through effective computation checking. This section explores two distinct approaches: self-checking and cross-model checking. Self-checking involves using the same LLM for both computation and verification, while cross-model checking entails employing different LLMs as the answer provider and verifier.

To perform cross-agent checking, we experiment with PaLM-2, which only supports English at this time. According to Table 2, PaLM-2 outperforms GPT-3.5. Further analysis, categorized by the number of digits in the computational tasks, reveals that both LLMs excel with numbers smaller than 10^6. However, GPT-3.5's performance declines with larger numbers. In contrast, PaLM-2 still performs well in addition instances but also drops in subtraction instances. Regarding huge numbers (16 digits), the performances of both LLMs drop significantly.

Table 3 details the results of computation checking. It is observed that LLMs exhibit poorer performance in self-checking scenarios. Notably, when PaLM-2 functions as both the answerer and verifier, there is a significant drop in performance. Additionally, while employing PaLM-2 to verify GPT-3.5's computations yields better outcomes than GPT-3.5 alone, the post-verification performance (73.25%) still falls short of PaLM-2's solo performance (81.51%).

These findings offer insights for arithmetic tasks with recent trends in multi-agent approaches (Qian et al., 2023; Chan et al., 2023). Our results indicate that in simple arithmetic tasks, a single-model approach is superior to cross-agent collaboration. Furthermore, these findings highlight the existing challenges in self-checking computations for even high-performing LLMs like PaLM-2, which, despite its robust computational abilities, cannot fully rectify all erroneous instances from GPT-3.5 that are correctly resolved when exclusively employing PaLM-2. Finally, this phenomenon can also be considered a type of reversal curse in arithmetic contexts (Berglund et al., 2023). It potentially affects the efficacy of number-aware fact-conflicting hallucination detection, including the detection of exaggerated information (Chen et al., 2019). Future research focused on number-aware tasks should consider this phenomenon.
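The Answerer/Verifier protocol used for checking computation can be summarized with the following minimal sketch. The `query_model` helper and the prompt wording are illustrative assumptions standing in for an actual LLM API call, not the exact prompts used in this study.

```python
# Minimal sketch of the cross-agent checking loop described above.
def query_model(model: str, prompt: str) -> str:
    """Placeholder for an LLM API call (e.g., GPT-3.5 or PaLM-2)."""
    raise NotImplementedError

def cross_agent_check(question: str, answerer: str, verifier: str) -> str:
    # Step 1: the Answerer proposes a solution.
    answer = query_model(answerer, f"Compute the result. {question}")
    # Step 2: the Verifier judges the answer and, if it is wrong, supplies a correction.
    verdict = query_model(
        verifier,
        f"Question: {question}\nProposed answer: {answer}\n"
        "If the proposed answer is correct, repeat it; otherwise give the correct answer.",
    )
    return verdict

# Self-checking is the special case where answerer == verifier.
```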
**5** **Discussion**

**5.1** **Carry and Borrow**

In this section, we categorize the instances into two groups: (1) those requiring a carry (borrow) concept for question resolution, and (2) non-carry (non-borrow) instances. The results are presented in Table 4. Irrespective of the language model used, there is a notable decrease in performance for instances necessitating a carry (borrow) concept. Particularly in scenarios involving the borrow concept, both GPT-3.5 and PaLM-2 exhibit markedly inferior performance compared to non-borrow instances. This observation highlights a deficiency in the generalization capabilities of auto-regressive language models, suggesting that the borrow concept may not be adequately learned during current training processes. Future research should focus on developing tailored approaches to address this limitation in handling arithmetic problems with language models.
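For concreteness, the carry/non-carry and borrow/non-borrow split analyzed in Table 4 can be computed with simple digit-wise rules; the sketch below is an illustrative implementation of that categorization, not the authors' code.

```python
# Categorize addition/subtraction instances into the groups analyzed in Table 4.
def requires_carry(a: int, b: int) -> bool:
    """True if computing a + b needs a carry in at least one digit position."""
    while a > 0 and b > 0:
        if a % 10 + b % 10 >= 10:
            return True
        a, b = a // 10, b // 10
    return False

def requires_borrow(a: int, b: int) -> bool:
    """True if computing a - b (with a >= b) needs a borrow in at least one digit position."""
    while b > 0:
        if a % 10 < b % 10:
            return True
        a, b = a // 10, b // 10
    return False

assert requires_carry(18, 7) and not requires_carry(12, 7)
assert requires_borrow(21, 7) and not requires_borrow(28, 7)
```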
| Rank | Overall: Language | Acc. | Addition: Language | Acc. | Subtraction: Language | Acc. |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Russian | 87.12% | Russian | 92.66% | Japanese | 81.58% |
| 2 | Japanese | 87.09% | **English** | 92.63% | Russian | 81.54% |
| 3 | Polish | 86.87% | Polish | 92.55% | Polish | 81.20% |
| 4 | Turkish | 86.54% | Japanese | 92.51% | Turkish | 80.58% |
| 5 | Spanish | 86.32% | Portuguese | 92.50% | Spanish | 80.13% |
| 6 | Trad. Chinese | 86.20% | Italian | 92.45% | Trad. Chinese | 79.95% |
| 7 | **English** | 86.06% | Spanish | 92.45% | Greek | 79.67% |
| 8 | Greek | 85.86% | Dutch | 92.44% | Danish | 79.28% |
| 9 | Danish | 85.76% | Trad. Chinese | 92.36% | **English** | 79.16% |
| 10 | Thai | 85.52% | Danish | 92.28% | Thai | 78.76% |
| 11 | Portuguese | 85.22% | German | 92.25% | Hindi | 78.19% |
| 12 | Italian | 85.11% | Turkish | 92.15% | Finnish | 78.06% |
| 13 | German | 85.07% | Thai | 92.14% | German | 77.99% |
| 14 | Finnish | 85.01% | Swedish | 92.09% | Portuguese | 77.93% |
| 15 | Swedish | 84.91% | Greek | 91.99% | Italian | 77.84% |
| 16 | French | 84.61% | Finnish | 91.80% | Swedish | 77.43% |
| 17 | Dutch | 84.55% | French | 91.71% | French | 77.39% |
| 18 | Korean | 82.72% | Korean | 88.83% | Dutch | 76.60% |
| 19 | Hindi | 81.65% | Hindi | 86.87% | Korean | 76.43% |
| 20 | Sim. Chinese | 77.45% | Sim. Chinese | 84.20% | Sim. Chinese | 70.71% |
| - | Average | 84.98% | Average | 91.44% | Average | 78.52% |
| - | Standard Deviation | 2.23% | Standard Deviation | 2.22% | Standard Deviation | 2.40% |

Table 6: GPT-4 performance in arithmetic using prompts in different languages (%).
**5.2** **Using Pure Expression**

In previous sections, the influence of various languages on numeracy was discussed. This section further explores the impact of language on models' numeracy by conducting experiments with purely symbolic expressions to determine if the absence of natural language affects the outcomes. Additionally, two more models, Gemini and GPT-4, were included in the experiment for a more comprehensive discussion.

Table 5 presents the experimental results. Notably, three out of the four models exhibited improved overall performance when arithmetic questions were posed in natural language (English). A closer examination reveals distinctions between the two model families (GPT-3.5/GPT-4 and PaLM-2/Gemini). Both PaLM-2 and Gemini showed enhanced performance in addition and subtraction tasks when questions were posed in language. Conversely, GPT-3.5 and GPT-4 demonstrated only marginal differences under various settings. However, for subtraction tasks, natural language significantly enhanced GPT-3.5's performance while detrimentally affecting GPT-4's performance. Although a universal phenomenon across all language models was not observed, the findings suggest that language has a discernible impact on basic numeracy. Ideally, however, the results should not vary with the language used.

**5.3** **Observation with GPT-4**

Table 5 indicates that GPT-4 outperforms all other models, confirming its status as one of the highest-performing LLMs. To ascertain if this observation persists with the optimal model, we examined it with GPT-4, and the results are presented in Table 6. First, it shows a significant difference from the performance of GPT-3.5. Despite variations in rankings, a considerable performance disparity between the best and worst scenarios remains evident. Similarly, the observed reduction in subtraction performance with GPT-3.5 is consistent with our current findings.

**6** **Conclusion**

This study aimed to demonstrate negative results and uncover shortcomings of LLMs in basic arithmetic tasks. Our findings reveal that (1) numeracy is intertwined with linguistic elements, (2) LLMs exhibit suboptimal performance in computation verification tasks, and (3) the concept of carrying/borrowing is not effectively mastered by LLMs, especially borrowing. These results provide a foundation for future research to (1) investigate the robustness of numeracy in language models, (2) enhance computational verification capabilities in number-aware fact-checking tasks, and (3) improve the fundamental arithmetic proficiency of LLMs. Additionally, our observation that language would enhance numeracy is another promising topic that future studies can pay attention to. For example, researchers could investigate how incorporating language-based strategies into mathematics problem-solving improves models' understanding and retention of numerical concepts.
**Limitations**

This study has three primary limitations. First, due to the vast number of existing LLMs, it is challenging to include all of them in our analysis. Therefore, we focus on two recent high-performing LLMs: GPT-3.5 and PaLM-2. GPT-3.5 incorporates human feedback during its training, while PaLM-2 relies exclusively on open-source data. We posit that the results obtained from these models on an extensive test set are indicative of general trends. However, future research could employ our proposed test set to compare and analyze additional LLMs. Second, our investigation does not encompass the full spectrum of arithmetic capabilities but is confined to two fundamental operations: addition and subtraction. We encourage subsequent studies to extend our methodology to examine other arithmetic operations. Third, basic arithmetic can actually be solved by generating code or using additional tools, such as calculators. However, this is beyond the scope of this paper. As shown in Figure 1, some studies utilize LLMs for calculations. Our results show that the performance on the same question may vary when only the language is changed. Moreover, as numbers increase in size, relying on LLMs for arithmetic may not be the best choice. Our findings underscore the importance of using supplementary tools in conjunction with LLMs, and future work could explore more in-depth topics based on our observations.

**Acknowledgements**

This paper is based on results obtained from a project, JPNP20006, commissioned by the New Energy and Industrial Technology Development Organization (NEDO). The work of Chung-Chi Chen was supported in part by JSPS KAKENHI Grant Number 23K16956.

**References**

Lukas Berglund, Meg Tong, Max Kaufmann, Mikita Balesni, Asa Cooper Stickland, Tomasz Korbak, and Owain Evans. 2023. The reversal curse: LLMs trained on "A is B" fail to learn "B is A". arXiv preprint arXiv:2309.12288.

Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, and Zhiyuan Liu. 2023. ChatEval: Towards better LLM-based evaluators through multi-agent debate. arXiv preprint arXiv:2308.07201.

Chung-Chi Chen, Hen-Hsen Huang, Hiroya Takamura, and Hsin-Hsi Chen. 2019. Numeracy-600K: Learning numeracy for detecting exaggerated information in market comments. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6307–6313, Florence, Italy. Association for Computational Linguistics.

Chung-Chi Chen, Hiroya Takamura, Ichiro Kobayashi, and Yusuke Miyao. 2023a. Improving numeracy by input reframing and quantitative pre-finetuning task. In Findings of the Association for Computational Linguistics: EACL 2023, pages 69–77, Dubrovnik, Croatia. Association for Computational Linguistics.

Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chen Qian, Chi-Min Chan, Yujia Qin, Yaxi Lu, Ruobing Xie, et al. 2023b. AgentVerse: Facilitating multi-agent collaboration and exploring emergent behaviors in agents. arXiv preprint arXiv:2308.10848.

Ernest Davis. 2024. Mathematics, word problems, common sense, and artificial intelligence. Bulletin of the American Mathematical Society.

Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Sean Welleck, Peter West, Chandra Bhagavatula, Ronan Le Bras, et al. 2024. Faith and fate: Limits of transformers on compositionality. Advances in Neural Information Processing Systems, 36.

Vedant Gaur and Nikunj Saunshi. 2023. Reasoning in large language models through symbolic math word problems. In Findings of the Association for Computational Linguistics: ACL 2023, pages 5889–5903, Toronto, Canada. Association for Computational Linguistics.

Namgyu Ho, Laura Schmid, and Se-Young Yun. 2023. Large language models are reasoning teachers. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14852–14882, Toronto, Canada. Association for Computational Linguistics.

Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng Cheng, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, et al. 2023. MetaGPT: Meta programming for multi-agent collaborative framework. arXiv preprint arXiv:2308.00352.

Nabil Hossain, John Krumm, Michael Gamon, and Henry Kautz. 2020. SemEval-2020 Task 7: Assessing humor in edited news headlines. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 746–758.

Shima Imani, Liang Du, and Harsh Shrivastava. 2023. MathPrompter: Mathematical reasoning using large language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track), pages 37–42, Toronto, Canada. Association for Computational Linguistics.
Tiedong Liu and Bryan Kian Hsiang Low. 2023. Goat: Fine-tuned LLaMA outperforms GPT-4 on arithmetic tasks. arXiv preprint arXiv:2305.14201.

Piotr Milkowski, Marcin Gruza, Kamil Kanclerz, Przemyslaw Kazienko, Damian Grimling, and Jan Kocon. 2021. Personal bias in prediction of emotions elicited by textual opinions. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop, pages 248–259, Online. Association for Computational Linguistics.

Chen Qian, Xin Cong, Cheng Yang, Weize Chen, Yusheng Su, Juyuan Xu, Zhiyuan Liu, and Maosong Sun. 2023. Communicative agents for software development. arXiv preprint arXiv:2307.07924.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.

Somin Wadhwa, Silvio Amir, and Byron Wallace. 2023. Revisiting relation extraction in the era of large language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15566–15589, Toronto, Canada. Association for Computational Linguistics.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837.

Noam Wies, Yoav Levine, and Amnon Shashua. 2023. Sub-task decomposition enables learning in sequence to sequence tasks. In The Eleventh International Conference on Learning Representations.

Wenxuan Zhang, Yue Deng, Bing Liu, Sinno Jialin Pan, and Lidong Bing. 2023. Sentiment analysis in the era of large language models: A reality check. arXiv preprint arXiv:2305.15005.

**A** **LLama2-7B**

| Model | Overall | Addition | Subtraction |
| --- | --- | --- | --- |
| LLama2-7B | 9.38% | 11.43% | 7.33% |
| LLama2-7B-Chat | 5.17% | 5.86% | 4.48% |

Table 7: Performances of LLama2-7B.

Table 7 shows the results of LLama2-7B (Touvron et al., 2023). However, the performance is not as good as the models we discussed, and thus, we did not make discussions based on it. The dataset is available on Huggingface[3]. Please note that we control the leading digit to answer other research questions. Thus, the leading digits of two given numbers are always the same. More data can be generated by using the same code.[4]

[3] https://huggingface.co/datasets/NLPFin/BasicArithmetic
[4] https://drive.google.com/file/d/1WahChtYNj4wYy59gYDkvSThFzgQSN7Zh/view?usp=sharing
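As a rough illustration only (not the authors' released generation script; digit ranges and the question format below are assumptions), instances with two operands sharing the same leading digit could be produced as follows:

```python
# Illustrative sketch of generating controlled addition/subtraction instances.
import random

def gen_pair(num_digits: int) -> tuple[int, int]:
    lead = random.randint(1, 9)
    lo, hi = 10 ** (num_digits - 1), 10 ** num_digits - 1
    def sample() -> int:
        # Resample until the operand starts with the chosen leading digit.
        while True:
            x = random.randint(lo, hi)
            if int(str(x)[0]) == lead:
                return x
    return sample(), sample()

def make_instance(num_digits: int, op: str = "+") -> dict:
    a, b = gen_pair(num_digits)
    if op == "-" and a < b:
        a, b = b, a  # keep subtraction results non-negative
    return {"question": f"{a} {op} {b} =", "answer": a + b if op == "+" else a - b}

print(make_instance(5, "+"))
print(make_instance(8, "-"))
```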
| [
"Chung-Chi, Chen",
"Yusuke, Miyao",
"Hiroya, Takamura",
"Ichiro, Kobayashi",
"Kevin, Duh",
"Helena, Gomez",
"Steven, Bethard"
] | 2024-06-01T00:00:00 | NAACL 2024 Short Papers | false | 0 | 0 | null | https://aclanthology.org/2024.naacl-short.53 | null | https://www.semanticscholar.org/paper/d24599b8a9d2c106dc283c41e086c78c741f652f |
The Perfect Blend: Redefining RLHF with Mixture of Judges | Reinforcement learning from human feedback (RLHF) has become the leading approach for fine-tuning large language models (LLM). However, RLHF has limitations in multi-task learning (MTL) due to challenges of reward hacking and extreme multi-objective optimization (i.e., trade-off of multiple and/or sometimes conflicting objectives). Applying RLHF for MTL currently requires careful tuning of the weights for reward model and data combinations. This is often done via human intuition and does not generalize. In this work, we introduce a novel post-training paradigm which we called Constrained Generative Policy Optimization (CGPO). The core of CGPO is Mixture of Judges (MoJ) with cost-efficient constrained policy optimization with stratification, which can identify the perfect blend in RLHF in a principled manner. It shows strong empirical results with theoretical guarantees, does not require extensive hyper-parameter tuning, and is plug-and-play in common post-training pipelines. Together, this can detect and mitigate reward hacking behaviors while reaching a pareto-optimal point across an extremely large number of objectives. Our empirical evaluations demonstrate that CGPO significantly outperforms standard RLHF algorithms like PPO and DPO across various tasks including general chat, STEM questions, instruction following, and coding. Specifically, CGPO shows improvements of 7.4% in AlpacaEval-2 (general chat), 12.5% in Arena-Hard (STEM & reasoning), and consistent gains in other domains like math and coding. Notably, PPO, while commonly used, is prone to severe reward hacking in popular coding benchmarks, which CGPO successfully addresses. This breakthrough in RLHF not only tackles reward hacking and extreme multi-objective optimization challenges but also advances the state-of-the-art in aligning general-purpose LLMs for diverse applications. | A novel post-training paradigm which can detect and mitigate reward hacking behaviors while reaching a pareto-optimal point across an extremely large number of objectives is introduced, called Constrained Generative Policy Optimization (CGPO). | # The Perfect Blend: Redefining RLHF with Mixture of Judges
**Tengyu Xu** [1,†], **Eryk Helenowski** [1,†], **Karthik Abinav Sankararaman** [1,†], **Di Jin** [1,†], **Kaiyan Peng** [1], **Eric Han** [1], **Shaoliang Nie** [1], **Chen Zhu** [1], **Hejia Zhang** [1], **Wenxuan Zhou** [1], **Zhouhao Zeng** [1], **Yun He** [1], **Karishma Mandyam** [1], **Arya Talabzadeh** [1], **Madian Khabsa** [1], **Gabriel Cohen** [1], **Yuandong Tian** [2], **Hao Ma** [1], **Sinong Wang** [1], **Han Fang** [1]
1Meta GenAI, 2FAIR, †Equal contributions
Reinforcement learning from human feedback (RLHF) has become the leading approach for fine-tuning large
language models (LLM). However, RLHF has limitations in multi-task learning (MTL) due to challenges
of reward hacking and extreme multi-objective optimization (i.e., trade-off of multiple and/or sometimes
conflicting objectives). Applying RLHF for MTL currently requires careful tuning of the weights for reward
model and data combinations. This is often done via human intuition and does not generalize. In this work,
we introduce a novel post-training paradigm which we called Constrained Generative Policy Optimization
(CGPO). The core of CGPO is Mixture of Judges (MoJ) with cost-efficient constrained policy optimization with
stratification, which can identify the perfect blend in RLHF in a principled manner. It shows strong empirical
results with theoretical guarantees, does not require extensive hyper-parameter tuning, and is plug-and-play
in common post-training pipelines. Together, this can detect and mitigate reward hacking behaviors while
reaching a pareto-optimal point across an extremely large number of objectives.
Our results show that CGPO consistently outperforms other commonly used SoTA RLHF algorithms (such as
PPO and DPO) on a wide range of tasks – general chat, STEM questions, instruction following, math, coding
and knowledge. In particular, CGPO improves over PPO by 7.4% in AlpacaEval-2 (general chat), 12.5% in
Arena-Hard (STEM & reasoning), 2% in IFEval (Instruction Following), 2% in both MATH and GSM8K
(Math & reasoning), 5% in HumanEval (Coding), and 2% in the ARC challenge (Knowledge). We also observe
that PPO is susceptible to severe reward hacking behaviors (it exhibits severe regression in popular coding
benchmarks) which can be addressed by CGPO. CGPO represents a breakthrough in RLHF, simultaneously
addressing reward-hacking and extreme multi-objective optimization, and thereby advancing the state-of-the-art
in aligning general-purpose LLMs.
**Date: October 1, 2024**
**Correspondence: Tengyu Xu at [email protected]**
## 1 Introduction
The emergence of general-purpose Large Language Models (LLMs) has significantly transformed the landscape of
natural language processing, demonstrating exceptional capabilities across various expert-level domains (Achiam et al.,
2023; Brown et al., 2020; Touvron et al., 2023; Anthropic, 2023; Team et al., 2023; Meta, 2024; Tunstall et al., 2023;
Zhu et al., 2023). These models are characterized by their extensive parameterization, enabling them to handle a wide
array of tasks using a unified parameter set (Zhao et al., 2018; Liu et al., 2019b,a). Central to this versatility is multi-task
learning (MTL) (Caruana, 1997; Crawshaw, 2020), a strategy that involves training a single model on multiple tasks
simultaneously. This approach fosters the development of shared representations, which enhances the model’s ability
to generalize better than those trained on isolated tasks. Although prior studies on MTL have concentrated on the
integration and processing of multi-task data during both pre-training and fine-tuning stages (Raffel et al., 2020; Liu
et al., 2023; Aghajanyan et al., 2021; Aribandi et al., 2021), the application of the primary LLM alignment method,
Reinforcement Learning with Human Preference (RLHF) (Ouyang et al., 2022; Ziegler et al., 2019; Zheng et al.,
2023b), has not been thoroughly explored within the MTL context. In previous studies, the implementation of RLHF
for multi-task post-training has typically involved a linear combination of multiple reward models within the standard
RLHF framework (Ramamurthy et al., 2022; Glaese et al., 2022; Yuan et al., 2023; Bakker et al., 2022; Wu et al., 2024;
Li et al., 2020). Each reward model is crafted using preference data to mirror the distinct alignment preferences of
different tasks. Researchers often experiment with various reward weightings to identify a Pareto front that depicts the
optimal performance of the LLM across diverse tasks (Rame et al., 2024). However, this approach is limited by two
significant challenges:
**Vulnerability to Reward Hacking: The optimization of a preference-based reward model is susceptible to reward**
hacking, as the reward model is an imperfect proxy of human preferences (Gao et al., 2023; Jin et al., 2023; Skalse
et al., 2022). Studies indicate that excessive optimization of a reward model can lead to misalignment with actual human
preferences (Gao et al., 2023; Moskovitz et al., 2023; Stiennon et al., 2020; Rafailov et al., 2024a). This issue becomes
more pronounced in a multi-task setting, where each reward model may have its own unique flaws. Implementing a
uniform early stopping point in the RLHF optimization process to minimize reward hacking effects is impractical and
can lead to degraded performance across tasks (Moskovitz et al., 2023). This highlights the need for a more tailored
approach to compensate for the weaknesses of each reward model and to manage the optimization of reward models for
each task in complex, multi-task environments.
**Contradictory Goals: Different tasks often have conflicting objectives (Rame et al., 2024). Even if the prompt spaces**
for these tasks do not overlap, using a linear combination of reward models can lead to compromises in goal metrics.
For example, the typical strategy of LLM post-training involves maximizing the helpfulness reward for safe prompts
and maximizing the harmfulness reward for unsafe prompts (Bai et al., 2022). Although achieving global optimality for
both tasks is possible if the LLM’s capacity is sufficiently large (Iyer et al., 2022), employing a linear combination of
helpfulness and harmfulness rewards inevitably results in reduced gains for both metrics. This occurs because each task
partially sacrifices its own RLHF optimization progress to accommodate a contradictory metric, thereby diminishing the
effectiveness of both.
To address these challenges, we developed an innovative framework called Constrained Generative Policy Optimization
(CGPO). In response to the issue of reward hacking in RLHF, we introduce two types of judges: rule-based and
LLM-based. These judges collaborate to identify any reward hacking patterns during the LLM’s online generation
phase. Based on their evaluations, we implement a constrained RLHF method to update the LLM model. This method
is designed to maximize the likelihood of generating outputs that adhere to all constraints and achieve high reward
values, while minimizing outputs that breach constraints and have low reward values. To support the constrained policy
optimization update in the large-scale LLM setting, which is complicated even in traditional small-scale RL scenarios,
we have developed three new primary-type constraint RLHF optimizers. These optimizers are designed to operate
independently of the dual-variable update, which is often a critical component in conventional primal-dual constrained
RL algorithms. This independence simplifies the optimizers and enhances their scalability, making them more effective
for managing large-scale LLM post-training.
To effectively optimize the objectives of various tasks, which may be contradictory, we propose a novel design in CGPO for
managing multi-task post-training. In this design, prompts are segregated by task, and a customized policy optimization
strategy is applied to each set of prompts. This strategy includes a tailored MoJs, reward model, and hyperparameter
setup for the constrained RLHF optimizer. By optimizing each task independently, our approach avoids compromises
due to conflicting goals from other tasks, a common issue in previous works that used a linear combined reward model.
Furthermore, our design addresses the reward hacking issue and optimizes objectives for each task in a fine-grained
manner, resulting in a better Pareto frontier than previous methods that enforced uniform treatment across all tasks. See
Figure 1 for an overview of our CGPO pipeline.
We summarize our contributions as follows:
- We have developed a new strategy to address the issues of reward hacking in multi-task LLM post-tuning through
an innovative primal-type constrained RL method. To implement this method, we have introduced three new
constrained RLHF optimizers: Calibrated-Regularized Policy Gradient (CRPG), Constrained Online Direct
Preference Optimization (CODPO), and Calibrated-Regularized Reward Ranking Finetuning (CRRAFT). All
proposed methods are scalable and easy to implement.
- To support the implementation of the constrained RL method in CGPO, we have developed two types of judges:
the rule-based judge and the LLM-based judge. These judges are designed to effectively assess whether an LLM
generation violates constraints in a broad spectrum of NLP tasks.
- We have introduced a new multi-objective RLHF treatment strategy within CGPO, where each task is managed
individually with a customized optimization setting, including reward models, mixture of judges, and optimizer
hyperparameters. This pioneering design, the first in the multi-task RLHF field, significantly enhances the Pareto
frontier across multiple metrics in the multi-task setting.
- We demonstrate the effectiveness of CGPO in a challenging multi-task post-training environment with five
tasks: general chat, instruction following, math and coding reasoning, engagement intent, and safety, despite
potentially contradictory goals across tasks. Notably, by primarily utilizing open-source data and the Llama3.0
70b pre-trained model, our research demonstrates that, in comparison to the baseline RLHF methods such as PPO
Schulman et al. (2017) and DPO Rafailov et al. (2024b), our approach—when combined with the CRPG and
CRRAFT optimizers—consistently outperforms these baselines across all benchmarks and tasks. Specifically
**– CRPG optimizers achieve the highest performance in terms of MATH, GSM8K, HumanEval, MBPP, ARC**
Challenge, and false refusal ratio. CRRAFT optimizers achieve the highest performance in AlpacaEval-2,
Arena-Hard, and TruthfulQA.
**– PPO experiences a significant drop in the 0-shot coding benchmarks (HumanEval and MBPP) after exceeding**
certain training steps, indicating the occurrence of severe reward hacking issues. In contrast, CGPO not only
avoids such regression but also consistently improves those benchmarks during training, demonstrating the
extraordinary capability of MoJs in preventing reward hacking issues.
**Figure 1** Overview of the CGPO pipeline. In CGPO, a customized mixture of judges (MoJ) is applied to each task to evaluate model generations, and the model is updated through our proposed constrained RL algorithm.
## 2 Preliminaries
In the RLHF fine-tuning phase, we typically formulate a Markov Decision Process (MDP) as follows: each prompt is considered as the state $s$, and the entire response is the action $a = [a_0, a_1, \cdots, a_{T-1}]$, where $a_i \in \mathcal{A}$ represents the token at position $i$ and $\mathcal{A}$ is the vocabulary set. An LLM policy is defined as $\pi_w(a_t \mid a_{t-1}, a_{t-2}, \cdots, a_0, s)$, which represents a distribution over $\mathcal{A}$ at time step $t$, conditioned on the prompt and all previous response history before $t$: $\{a_{t-1}, a_{t-2}, \cdots, a_0, s\}$.
**2.1** **Supervised Fine-tuning**

RLHF starts by fine-tuning a pre-trained LLM using supervised learning on high-quality data relevant to the downstream target task(s) (such as dialogue, summarization, reasoning, etc.) to obtain $\pi_{\text{SFT}}$.
**2.2** **Reward Model Training**
After the supervised fine-tuning stage, we need to develop a reward model to assess the quality of an LLM’s output.
This will enable us to utilize exploration-based online RL alignment methods. We typically use the pairwise preference
reward model (Stiennon et al., 2020). In this model, we assume that human preference between a pair of responses $(a_p, a_n)$, originating from the same prompt $s$, is determined by a latent reward $r^*_{\text{pair}}(s, a)$. The Bradley-Terry (BT) model (Bradley and Terry, 1952; Ouyang et al., 2022; Bai et al., 2022; Touvron et al., 2023; Meta, 2024), a well-established reward-based preference model, defines the human preference distribution $p^*_{\text{pair}}$ using the following formulation:

$$p^*_{\text{pair}}(a_p > a_n \mid s) = \sigma\big(r^*_{\text{pair}}(s, a_p) - r^*_{\text{pair}}(s, a_n)\big), \tag{1}$$

where $\sigma$ denotes the logistic function. In practice, we can learn a parameterized reward model $r_\phi(s, a)$ as a surrogate for $r^*_{\text{pair}}(s, a)$. Given a pre-collected preference-pair dataset $D = \{s_i, a_{w,i}, a_{l,i}\}_{i=1}^N$, where $a_{w,i}$ and $a_{l,i}$ denote the preferred and less preferred generations respectively, we can learn $r_\phi$ by framing the problem as a binary classification and resolving the subsequent problem (Ouyang et al., 2022; Touvron et al., 2023; Meta, 2024):

$$\min_\phi \; \mathcal{L}_{\text{pair}}(r_\phi, D_{\text{pair}}) = -\mathbb{E}_{D_{\text{pair}}}\big[\log \sigma\big(r_\phi(s, a_p) - r_\phi(s, a_n)\big)\big]. \tag{2}$$

In a standard LLM training pipeline, the preference-based reward model $r_\phi$ is typically initialized from the finetuned SFT model $\pi_{\text{SFT}}$, augmented by a linear layer on the final transformer layer, which generates a single scalar prediction for the reward value (Wang et al., 2024a; Askell et al., 2021; Ouyang et al., 2022).
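The pairwise loss in eq. (2) is straightforward to implement; the following is a minimal PyTorch-style sketch in which the scalar scores are placeholders for the outputs of the reward head $r_\phi$, not the paper's actual training code.

```python
# Minimal sketch of the Bradley-Terry pairwise reward-model loss in eq. (2).
import torch
import torch.nn.functional as F

def pairwise_rm_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """r_chosen / r_rejected: scalar rewards per preference pair, shape (batch,)."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Dummy scores standing in for r_phi(s, a_w) and r_phi(s, a_l).
r_w = torch.tensor([1.2, 0.3, -0.5])
r_l = torch.tensor([0.7, 0.9, -1.0])
loss = pairwise_rm_loss(r_w, r_l)  # scalar to backpropagate through the reward head
```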
**2.3** **RL Fine-tuning**

Given an LLM policy $\pi_w$ with parameter $w$, a reward model $r_\phi(s, a)$ and a prompt set $D_p = \{s_i\}_{i=1}^M$, we aim to optimize the policy by maximizing the following RL objective (Ouyang et al., 2022; Achiam et al., 2023; Touvron et al., 2023):

$$\max_w \; \mathbb{E}_{s \sim D_p,\, a \sim \pi_w}\big[r_\phi(s, a)\big]. \tag{3}$$

When solving the problem in eq. (3), we typically initialize $\pi_w$ with the SFT policy $\pi_{\text{SFT}}$ instead of starting from scratch. In previous works, a number of online RL methods such as proximal policy optimization (PPO) (Schulman et al., 2017), reward ranking (RAFT) (Dong et al., 2023) and REINFORCE (Williams, 1992) have been utilized to solve eq. (3).

Another direction of RL fine-tuning involves reward-free methods, which directly optimize $\pi_w$ using pre-collected preference data, without the need for a reward model. The rationale behind this approach is to fine-tune the model within a neighborhood of $\pi_{\text{SFT}}$, ensuring that the probability of generating both preferred and less preferred samples aligns with the pre-collected preference dataset. Direct Preference Optimization (DPO) (Rafailov et al., 2024b) is the most widely adopted method in this direction.
## 3 Limitations in Traditional RLHF
In this section, we discuss several limitations in the current RLHF pipeline, which are major bottlenecks in the multi-task
LLM post-training.
**3.1** **Limitation of Reward Modelling**
**Insufficient capability for fine-grained criteria alignment.** Despite being based on a sophisticated LLM, the
reward model may struggle to provide accurate alignment guidance (Pan et al., 2022), particularly in tasks requiring
fine-grained criteria such as identifying correct answers in math questions and assessing code snippet correctness for
coding problems. This limitation, inherent to preference-based learning, necessitates additional support to enhance the
reward model’s effectiveness in handling these specific requirements.
**Proxy nature in coarse-grained preference setting. Reward hacking can occur even in coarse-grained settings**
where the goal is to optimize human preferences, as the reward model, serving as a proxy for true preferences, may
contain misspecifications (Gao et al., 2023; Moskovitz et al., 2023). This can lead to the model favoring less-preferred
outputs, misdirecting the alignment process. A common mitigation strategy is to include a KL penalty in the RL
objective to limit deviation from the initial policy, πSFT. However, this approach does not directly address the reward
model’s imperfections, indicating the need for a more systematic approach to tackle reward hacking.
**3.2** **Limitation of RLHF Optimizer**
**Contradictory optimization objectives. The initial success of LLM hinges on the assumption that human preferences**
are homogeneous (Bakker et al., 2022), but they actually vary widely (helpfulness, harmlessness, honesty, etc) (Casper
et al., 2023; Rame et al., 2024). The current RLHF pipeline trains separate reward models for each task and combines
them using linear weights (Ramamurthy et al., 2022; Glaese et al., 2022; Yuan et al., 2023). However, this approach
applies the same weight of rewards to all tasks, which can be suboptimal (e.g., 90% helpfulness + 10% harmlessness
may work well for safe scenarios but lead to risky responses in dangerous situations).
**Rigid optimization strategy for multi-task alignment. In the standard RLHF pipeline, a uniform RL optimizer**
setup is typically applied across all tasks (Ouyang et al., 2022). However, this approach may not be optimal since the
most effective hyperparameters, including number of generations per-prompt, batch-size, and KL-regularization, often
differ between tasks due to unique nature of each task. For example, tasks requiring more exploration typically need a
larger number of generations per prompt, whereas other tasks can work well with fewer.
**3.3** **Motivation**
In multi-task LLM alignment settings, where the goal is to enhance LLM performance across various tasks, the
limitations of reward modeling and RLHF optimizers discussed in Section 3 are significant bottlenecks that hinder the
RLHF process from effectively improving LLM performance across all tasks. In the following section, we will introduce
a novel RLHF framework, Constraint Generative Policy Optimization (CGPO), which addresses all the aforementioned
limitations in the most principled manner.
## 4 Constraint Generative Policy Optimization
In this section, we first explore how to implement the CGPO framework within the scope of a single task with MoJs, as
detailed in Section 4.1. Subsequently, we discuss the implementation of CGPO to manage scenarios involving multiple
objectives in Section 4.2 for multi-task learning.
**4.1** **CGPO in Single Task with Single Objective**
The primary design of CGPO is to integrate multiple constraints to mitigate the issue of reward hacking, which
arises from the limited capabilities of reward models. Specifically, in addition to optimizing the accumulated reward
model value as shown in eq. (3), we also ensure that the model generation meets several constraints. For example,
in mathematical reasoning tasks, we strictly require model generations to provide correct answers. This is essential
since the model often fails to solve the problem correctly, yet the reward model might still allocate high values to these
incorrect solutions. Another example is in general chat tasks with prompts that are free of harmful intent. We require
model generations to consistently respond to user queries. This is crucial because there are instances where the model
may refuse to answer, and the reward model might erroneously assign high values to such non-responsive generations.
In these cases, purely maximizing the reward model could impair the model’s reasoning capability and lead to an overly
conservative tendency. By introducing these constraints based on our prior knowledge about the weaknesses of each
reward model, we can avoid critical reward hacking patterns effectively.
We denote the set of constraints that the LLM generations need to satisfy as $\{C_1, C_2, \ldots, C_M\}$ and the state-action set that satisfies constraint $C_k$ as $\Sigma_k$, i.e., $\Sigma_k = \{(s, a) \in \mathcal{S} \times \mathcal{A} : (s, a) \text{ satisfies the requirement of } C_k\}$. We define the feasible region as the state-action set that satisfies all constraints, $\Sigma = \Sigma_1 \cap \Sigma_2 \cap \ldots \cap \Sigma_M$. In the single-task setting, CGPO solves the following constrained problem (Ying et al., 2022; Zhang et al., 2024; Luo et al., 2024; Xu et al., 2021):

$$\max_w \; \mathbb{E}_{s \sim D_p,\, a \sim \pi_w}\big[r(s, a)\big] \quad \text{s.t.} \quad \mathrm{Prob}_{s \sim D_p,\, a \sim \pi_w}\big((s, a) \in \Sigma\big) \geq 1, \quad \mathrm{KL}_{s \sim D_p}\big(\pi_w \,\|\, \pi_{\text{ref}}\big) \leq \mathrm{KL}_{\max}, \tag{4}$$

where $\pi_{\text{ref}}$ is the initialization model and $\mathrm{KL}_{\max}$ is the threshold of KL-divergence, which could vary for different tasks.
The high-level framework of CGPO in the multiple-constraints and single-objective setting is illustrated in Algorithm 1.
At each iteration, we sample a minibatch from the prompt set D, and then apply the current LLM policy to generate K
responses ($1 \leq K$) for each prompt. Subsequently, we apply all judges $J = \{J_h\}_{h=1}^M$ to all generated samples to evaluate whether a generation violates a specific constraint. We label a generation $a^k_{t,i}$ as "violated" if it fails any one of the
constraint judgments, and “satisfied” otherwise. Note that the constraint judge is a module for evaluating the constraint
satisfaction conditions, which could be a rule-based script or an LLM classifier. This module can address a wide range
of constrained problems in the LLM post-tuning scenario. We will discuss this in detail in Section 4.1.4.
After that, we split the generations into “Positive” and “Negative” groups, depending on the constraint satisfaction label.
We then apply a constrained RLHF optimizer to update the policy with these two groups of samples (see line 9). In our
work, we propose three new RLHF optimizers to efficiently solve the multi-constraint problem in the LLM setting. For
Option I, we develop a policy gradient approach and an online DPO approach, and for Option II, we develop a reward
ranking-based approach. These optimizers will be discussed in detail in the subsequent sections.
**Algorithm 1** CGPO($D$, $\pi_{w_0}$, $J$, $B$, $R$, $\mathcal{O}$, $T$) in single task with multi-constraints

1: **Input:** prompt set $D = \{s_{t,i}\}_{i=1}^N$, LLM starting policy $\pi_{w_0}$, constraint judge set $J = \{J_h\}_{h=1}^M$, batch size $B$, reward model $R$, iteration number $T$, constrained RLHF optimizer $\mathcal{O}$.
2: **for** $t = 0, 1, \ldots, T$ **do**
3: &nbsp;&nbsp;Prompt sampling: $\{s_{t,i}\}_{i=1}^B \sim D$
4: &nbsp;&nbsp;Response generation: $\{a^k_{t,i}\}_{k=1}^K \sim \pi_{w_t}(\cdot \mid s_{t,i})$ for $1 \leq i \leq n$
5: &nbsp;&nbsp;Constraint judgement: $y^k_{t,i} = \vee_{h=1}^M J_h(s_{t,i}, a^k_{t,i})$ for $1 \leq i \leq n$ and $1 \leq k \leq K$
6: &nbsp;&nbsp;Split sample set:
7: &nbsp;&nbsp;&nbsp;&nbsp;Positive samples: $X_t^+ = \{(s_{t,i}, a^k_{t,i})$ for $1 \leq i \leq n$, $1 \leq k \leq K$ where $y_{t,i} = 1\}$
8: &nbsp;&nbsp;&nbsp;&nbsp;Negative samples: $X_t^- = \{(s_{t,i}, a^k_{t,i})$ for $1 \leq i \leq n$, $1 \leq k \leq K$ where $y_{t,i} = 0\}$
9: &nbsp;&nbsp;Update $\pi_{w_t} \to \pi_{w_{t+1}}$ for policy optimization with optimizer $\mathcal{O}$:
10: &nbsp;&nbsp;&nbsp;&nbsp;[Option I]: maximize likelihood of $X_t^+$ with high $R(x^+)$ and minimize likelihood of $X_t^-$ with low $R(x^-)$
11: &nbsp;&nbsp;&nbsp;&nbsp;[Option II]: maximize likelihood of $X_t^+$ with high $R(x^+)$
12: **end for**
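To make the flow of Algorithm 1 concrete, the following is a schematic sketch of one iteration; `policy.generate`, `judges`, `reward_model`, and `optimizer.step` are placeholders for the components described in the text (assuming each judge returns `True` when its constraint is satisfied), not a reference implementation.

```python
# Schematic sketch of one CGPO iteration (single task, multiple constraints).
def cgpo_iteration(prompts, policy, judges, reward_model, optimizer, K: int):
    positives, negatives = [], []
    for s in prompts:
        for _ in range(K):
            a = policy.generate(s)                      # line 4: K responses per prompt
            ok = all(judge(s, a) for judge in judges)   # line 5: mixture-of-judges check
            (positives if ok else negatives).append((s, a, reward_model(s, a)))
    # lines 9-11: the constrained optimizer (CRPG / CODPO / CRRAFT) raises the
    # likelihood of high-reward constraint-satisfying samples (and, under Option I,
    # lowers the likelihood of constraint-violating ones).
    optimizer.step(policy, positives, negatives)
```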
Intuitively, with either the Option I or Option II updating strategy, CGPO encourages the policy to explore regions that
satisfy all constraints to maximize the expected reward model value. Note that CGPO is a primal-type constraint policy
optimization approach, which differs from the standard primal-dual approach adopted in the constrained RL area. CGPO
does not involve co-optimizing the dual variable, thus avoiding the drawbacks of extensive hyperparameter tuning issues
associated with the primal-dual approach. Due to this reason, CGPO is user-friendly even with multiple different types
of constraints, making it well-suited for the LLM post-tuning scenario.
In the following sections, we will discuss how to implement Algorithm 1 with our proposed RLHF optimizers: Calibrated-Regularized Policy Gradient (CRPG) and Constrained Online DPO (CODPO) for Option I, and Calibrated-Regularized
Reward Ranking Fine-tuning (CRRAFT) for Option II. Subsequently, we will discuss the constraint judge module that
we developed in CGPO, which enables us to assess the generation’s constraint satisfaction condition.
**4.1.1** **Calibrated Regularized Policy Gradient (CRPG)**
In this section, we discuss our new constraint RLHF optimizer, the Calibrated Regularized Policy Gradient (CRPG),
which is a policy gradient-based approach.
**Calibrated Reward. In the traditional RLHF algorithm, the reward model is typically directly incorporated into**
RL optimizers to progressively refine the policy. However, this method can pose difficulties when the reward model
value is not properly calibrated. For preference reward models trained with eq. (2), the reward model may be proficient at distinguishing between good and bad generations from the same prompt. However, the reward model values
between generations from different prompts may not be directly comparable due to potential significant variations in the
reward model value range for different prompts. Due to such reasons, standard RLHF algorithms, such as PPO and
REINFORCE, could lead to suboptimal performance due to the poor calibration of the reward model (Rita et al., 2024).
In CRPG, we introduce a novel and low-cost reward calibration strategy to address this issue.
We consider the scenario where each prompt s used in RLHF fine-tuning has a corresponding baseline response ¯a. This
condition can be easily satisfied in practice.
- Option 1: We repurpose the prompt set from the SFT training set and/or the reward model training set. For the
SFT training dataset, the pre-collected golden response is utilized as the baseline response, denoted as ¯a. For the
pair-wise reward model training dataset, the preferred response is designated as the golden response ¯a.
- Option 2: Given an RLHF fine-tuning prompt set $D_d$, we use $\pi_{\text{ref}}$ to generate the baseline response for all prompts $s \in D_d$, i.e., $\bar{a} \sim \pi_{\text{ref}}(\cdot \mid s)$, before starting RLHF fine-tuning.

Without loss of generality, we assume there is an underlying policy $\bar{\pi}$ that generates the baseline responses, denoted as $\bar{a} \sim \bar{\pi}(\cdot \mid s)$. Given the baseline response $\bar{a}$, we developed the following calibrated reward to replace the raw reward model $r_{\text{pair}}(s, a)$:

$$R_{\text{calib}}(s, a) = \sigma\big(r_{\text{pair}}(s, a) - r_{\text{pair}}(s, \bar{a})\big). \tag{5}$$

Intuitively, $R_{\text{calib}}(s, a)$ here represents the probability of $a$ being better than the baseline response $\bar{a}$ conditioned on the same prompt $s$, i.e., $R_{\text{calib}}(s, a) \approx \mathrm{Prob}(a > \bar{a} \mid s)$.

The advantages of using the calibrated reward $R_{\text{calib}}$ are twofold:

1. The magnitude of $R_{\text{calib}}$ becomes meaningfully comparable across different prompts. This is because it represents the probability that the current policy $\pi$ is superior to the baseline $\bar{\pi}$ for different actions. In other words, if $R_{\text{calib}}(s, a) > R_{\text{calib}}(s', a')$, it directly implies that action $a$ given state $s$ is better than action $a'$ given state $s'$, conditioned on the baseline policy $\bar{\pi}$. However, this implication cannot be made if $r_{\text{pair}}(s, a) > r_{\text{pair}}(s', a')$.

2. The magnitude of the calibrated reward model is strictly bounded between 0 and 1. This constraint prevents an action with an extremely large raw value from dominating the policy update direction, which could be misleading, since a large raw reward value does not necessarily imply superior action quality.

Based on $R_{\text{calib}}(s, a)$, we now reformulate the RLHF objective in eq. (3) as

$$\max_w \; \bar{J}(\pi_w) = \mathbb{E}_{a \sim \pi_w(\cdot \mid s),\, s \sim D_d}\big[R_{\text{calib}}(s, a)\big], \tag{6}$$

where $\bar{J}(\pi_w)$ is the policy optimization objective. Intuitively, it represents the probability of the current policy $\pi_w$ being better than the baseline policy $\bar{\pi}$ conditioned on the prompt set $D_d$, i.e., $\bar{J}(\pi_w) \approx \mathrm{Prob}(\pi_w > \bar{\pi} \mid D_d)$.
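The calibration step itself is a one-line transformation of the raw pairwise reward; the sketch below illustrates it, with `r_pair` as a placeholder for the trained pairwise reward model rather than any specific implementation.

```python
# Minimal sketch of the calibrated reward in eq. (5): squash the gap between the raw
# reward of the sampled response and the prompt's baseline response through a sigmoid,
# so values are comparable across prompts and bounded in (0, 1).
import math

def calibrated_reward(r_pair, prompt: str, response: str, baseline_response: str) -> float:
    gap = r_pair(prompt, response) - r_pair(prompt, baseline_response)
    return 1.0 / (1.0 + math.exp(-gap))  # sigma(r(s,a) - r(s,a_bar)) ~ Prob(a > a_bar | s)
```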
**Constraint Regularized Gradient.** Recall that in the multi-constraint setting, our goal is to maximize the expected reward model while aligning the LLM such that its generations strictly adhere to a set of constraints. These constraints compensate for the limitations of the reward model, including safety requirements, reasoning accuracy, and factual correctness. These aspects may not be fully captured by the reward model but can be well addressed via a separate rule-based judge or an LLM-based judge. Note that the "Positive samples" in line 6 of Algorithm 1 are a subset of $\Sigma$, i.e., $X_t^+ \in \Sigma$. Consequently, we aim to optimize the following multi-constraint objective, denoted as $\bar{J}_c$:

$$\max_w \; \bar{J}_c = \mathbb{E}_{a \sim \pi_w(\cdot \mid s),\, s \sim D_d}\big[R_{\text{calib}}(s, a) \cdot \mathbf{1}_{(s,a) \in \Sigma}\big]. \tag{7}$$

By solving the optimization problem presented in eq. (7), the LLM is aligned to maximize the expected value of the calibrated reward model as much as possible, while remaining within the constraint satisfaction region.

Given $R_{\text{calib}}$ and $\Sigma$, we define the following constraint regularized reward:

$$R_{\text{cr}}(s, a) = \begin{cases} R_{\text{calib}}(s, a), & \text{if } (s, a) \in \Sigma \\ 0, & \text{if } (s, a) \notin \Sigma \end{cases} \tag{8}$$

With the calibrated regularized reward $R_{\text{cr}}$, we rewrite eq. (7) as

$$\max_w \; \bar{J}_c = \mathbb{E}_{a \sim \pi_w(\cdot \mid s),\, s \sim D_d}\big[R_{\text{cr}}(s, a)\big]. \tag{9}$$
We consider the following update to optimize $\bar{J}_c$:

$$w_{t+1} = w_t + \alpha_t \cdot g_c(\pi_{w_t}), \tag{10}$$

where

$$g_c(\pi_w) = \frac{1}{n}\sum_{i=1}^{n} \nabla \log \pi_w(s_i, a_i) \cdot R_{\text{cr}}(s_i, a_i).$$

The subsequent theorem illustrates that CRPG has a global optimality guarantee for both objective achievement and constraint satisfaction in the multi-constraint LLM alignment setting.

**Theorem 1** Consider the CRPG update defined in eq. (10). Consider the scenario where the optimal policy $\pi^*$ of eq. (7) satisfies $\mathrm{Prob}_{\pi^*, D_d}\big((s, a) \in \Sigma\big) = 1$. Denote the policy set within the constraint satisfaction region as $\Pi_\Sigma$, and the globally optimal policy within $\Pi_\Sigma$ as $\pi^*_c$, i.e., $\pi^*_c = \operatorname{argmax}_{\pi \in \Pi_\Sigma} \mathbb{E}_{\pi, D_d}[R_{\text{cr}}(s, a)]$. Given a few mild assumptions, we have

$$\mathbb{E}_{\pi^*_c, D_d}[R_{\text{cr}}(s, a)] - \mathbb{E}_{\pi_{w_t}, D_d}[R_{\text{cr}}(s, a)] \leq \mathcal{O}\!\left(\frac{1}{\mathrm{poly}(t)}\right), \qquad \mathrm{Prob}_{\pi_{w_t}, D_d}\big((s, a) \notin \Sigma\big) \leq \mathcal{O}\!\left(\frac{1}{\mathrm{poly}(t)}\right).$$
**CRPG Implementation.** Consider the KL divergence between $\pi_{\text{ref}}$ and $\pi_w$ as a universal regularization method to prevent reward hacking during CRPG fine-tuning. We propose the following new reward regularization approach:

$$\tilde{R}_{\text{cr}}(s, a) = \max\left\{1 - \frac{\log\big(\pi_w(s, a)/\pi_{\text{ref}}(s, a)\big)}{\mathrm{KL}_{\max}},\; 0\right\} \cdot R_{\text{cr}}(s, a). \tag{11}$$

It is important to note that $\tilde{R}_{\text{cr}}$ not only penalizes samples that deviate significantly from $\pi_{\text{ref}}$, but also strictly bounds the overall KL divergence.

Moreover, to reduce the variance in the CRPG gradient estimation, we consider subtracting a baseline from $g_c$ without changing its expected direction, as follows:

$$\tilde{g}_c(\pi_{w_t}) = \frac{1}{n}\sum_{i=1}^{n} \nabla \log \pi_{w_t}(s_{t,i}, a_{t,i}) \cdot \left[\tilde{R}_{\text{cr}}(s_{t,i}, a_{t,i}) - \frac{1}{n}\sum_{j=1}^{n} \tilde{R}_{\text{cr}}(s_{t,j}, a_{t,j})\right]. \tag{12}$$

The final CRPG update in the multi-constraint fine-tuning setting is given as

$$w_{t+1} = w_t + \alpha_t \cdot \tilde{g}_c(\pi_{w_t}).$$
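The CRPG loss can be written compactly as a reward-shaped REINFORCE objective; the following is a minimal sketch under stated assumptions (sequence-level log-probabilities precomputed per sample, KL term detached from the gradient), not the authors' implementation.

```python
# Minimal sketch of the CRPG reward shaping and gradient in eqs. (8), (11), (12).
import torch

def crpg_loss(logp: torch.Tensor, logp_ref: torch.Tensor,
              r_calib: torch.Tensor, satisfied: torch.Tensor, kl_max: float) -> torch.Tensor:
    """logp/logp_ref: sequence log-probs under pi_w and pi_ref, shape (batch,).
    r_calib in [0, 1]; satisfied is a 0/1 mask produced by the mixture of judges."""
    r_cr = r_calib * satisfied                                   # eq. (8): zero violators
    kl_term = (logp - logp_ref).detach() / kl_max
    r_tilde = torch.clamp(1.0 - kl_term, min=0.0) * r_cr         # eq. (11): KL down-weighting
    advantage = r_tilde - r_tilde.mean()                         # eq. (12): baseline subtraction
    return -(logp * advantage).mean()                            # minimizing ascends J_c
```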
**4.1.2** **Constrained Online Direct Preference Optimization (CODPO)**
Based on Direct Preference Optimization (DPO), a widely used offline RLHF alignment algorithm in the unconstrained
setting, we propose a new variant called Constrained Online Direct Preference Optimization (CODPO) to solve the
constrained RLHF fine-tuning problem.
Recall that in DPO (Rafailov et al., 2024b), the optimal policy $\pi^*$, which aligns with human preferences in the $\beta$-regularized MDP setting, satisfies the following preference model:

$$P_{\pi^*}(a_p > a_n) = \frac{1}{1 + \exp\left(\beta \log \frac{\pi^*(s, a_n)}{\pi_{\text{ref}}(s, a_n)} - \beta \log \frac{\pi^*(s, a_p)}{\pi_{\text{ref}}(s, a_p)}\right)}.$$

Given a pairwise preference sample pair $(s, a_p)$ and $(s, a_n)$, we update our policy by solving the following problem:

$$\min_w \; \mathcal{L}_{\text{DPO}}(\pi_w) = -\mathbb{E}_{(s, a_p, a_n)}\big[\ell_{\text{DPO}}(\pi_w, s, a_p, a_n)\big],$$

where

$$\ell_{\text{DPO}}(\pi_w, s, a_p, a_n) = \log \sigma\left(\beta \log \frac{\pi_w(s, a_p)}{\pi_{\text{ref}}(s, a_p)} - \beta \log \frac{\pi_w(s, a_n)}{\pi_{\text{ref}}(s, a_n)}\right). \tag{13}$$
To prevent the possible decreasing likelihood of positive samples $a_p$, it has been proposed to add a regularization term to the vanilla DPO loss (Pal et al., 2024):

$$\tilde{\ell}_{\text{DPO}}(\pi_w, s, a_p, a_n) = \ell_{\text{DPO}}(\pi_w, s, a_p, a_n) + \frac{\lambda}{|a_p|} \cdot \log\big(\pi_w(s, a_p)\big), \tag{14}$$

where $|a_p|$ represents the length of response $a_p$. By appropriately tuning the hyperparameter $\lambda$, the formulation in eq. (14) can effectively increase the likelihood of $a_p$ while decreasing the likelihood of $a_n$ to maximize the margin between positive and negative generations.

In CODPO, similar to CRRAFT, we first generate multiple responses for each prompt using the current policy, $\{a^1_{t,i}, a^2_{t,i}, \ldots, a^K_{t,i}\} \sim \pi_{w_t}(\cdot \mid s_{t,i})$, and split the generations into positive samples $X_t^+$ and negative samples $X_t^-$. After that, we select the positive sample from $X_t^+$ with the highest reward value, and the negative sample from $X_t^-$ with the lowest reward value, i.e.,

$$a^+_{i,t} = \operatorname*{argmax}_{k \in [K],\, (s_{i,t}, a^k_{i,t}) \in X_t^+} r_{\text{pair}}(s_{i,t}, a^k_{i,t}), \qquad a^-_{i,t} = \operatorname*{argmin}_{k \in [K],\, (s_{i,t}, a^k_{i,t}) \in X_t^-} r_{\text{pair}}(s_{i,t}, a^k_{i,t}).$$

In cases where no generations satisfy all constraints, we can skip this sample. Conversely, when no generations violate any constraints, we can select the generation with the lowest reward model value as the negative sample.

Then, at each iteration, we update the policy as follows:

$$w_{t+1} = w_t - \alpha_t \cdot \frac{1}{n}\sum_{i=1}^{n} \nabla \tilde{\ell}_{\text{DPO}}(\pi_{w_t}, s_{i,t}, a^+_{i,t}, a^-_{i,t}). \tag{15}$$
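The following is a minimal sketch of the CODPO pair selection and loss. All inputs are placeholders (precomputed sequence log-probabilities and candidate tuples), and the sign conventions are chosen so that minimizing the loss raises the likelihood of the selected positive response, as the text describes; this is an illustration, not the authors' implementation.

```python
# Minimal sketch of CODPO: select (a_plus, a_minus) and apply a regularized DPO loss.
import torch
import torch.nn.functional as F

def codpo_loss(logp_pos, logp_neg, logp_ref_pos, logp_ref_neg,
               len_pos: torch.Tensor, beta: float, lam: float) -> torch.Tensor:
    margin = beta * (logp_pos - logp_ref_pos) - beta * (logp_neg - logp_ref_neg)
    dpo_term = -F.logsigmoid(margin)              # standard DPO loss, eq. (13)
    reg_term = -(lam / len_pos) * logp_pos        # keeps the likelihood of a_plus from dropping
    return (dpo_term + reg_term).mean()

def select_pair(candidates):
    """candidates: list of (response, reward, satisfies_all_constraints)."""
    pos = [c for c in candidates if c[2]]
    neg = [c for c in candidates if not c[2]] or pos     # fall back if nothing violates
    a_plus = max(pos, key=lambda c: c[1])[0] if pos else None  # None => skip this prompt
    a_minus = min(neg, key=lambda c: c[1])[0]
    return a_plus, a_minus
```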
**4.1.3** **Calibrated Regularized Reward Ranking Finetuning (CRRAFT)**
In this section, we introduce another constrained RLHF policy optimizer that we propose: Calibrated Regularized Reward Ranking Finetuning (CRRAFT), which is built upon the RAFT algorithm.
In the original RAFT algorithm (Dong et al., 2023), each round involves generating multiple responses from a prompt using the current policy model, denoted as $\{a^1_{t,i}, a^2_{t,i}, \ldots, a^K_{t,i}\} \sim \pi_{w_t}(\cdot \mid s_{t,i})$. A reward model $r$ is then utilized to select the response with the highest reward model score, i.e., $a^*_{t,i} = \operatorname{argmax}_{k \in [K]} r_{\text{pair}}(s_{t,i}, a^k_{t,i})$ (note that whether a calibrated reward is used or not does not affect the reward ranking result). Subsequently, a one-step SFT update is performed to maximize the likelihood of this generated sample $(s_{t,i}, a^*_{t,i})$. The policy model is iteratively updated to improve its alignment with the reward model $r_{\text{pair}}$ as follows:

$$w_{t+1} = w_t + \alpha_t \cdot \frac{1}{n}\sum_{i=1}^{n} \nabla \log\big(\pi_{w_t}(s_{t,i}, a^*_{t,i})\big). \tag{16}$$
In the multi-constraint setting, we make the following two changes on top of RAFT to develop our CRRAFT optimizer:
- After applying the reward model to score each response, we adopt Option I in Algorithm 1 to first filter out those generated responses that violated any of the constraints. Additionally, to avoid a large drift of the current policy from the starting-point policy $\pi_{\text{ref}}$, we also filter out all generations whose KL-divergence is larger than a pre-defined threshold $\mathrm{KL}_{\max}$, i.e., $\mathrm{KL}(s_{i,t}, a^k_{i,t}) = \log \frac{\pi_{w_t}(s_{i,t}, a^k_{i,t})}{\pi_{\text{ref}}(s_{i,t}, a^k_{i,t})} > \mathrm{KL}_{\max}$. After that, we apply reward ranking to select the one with the highest reward model score from the rest of the responses, i.e.,

$$a^*_{i,t} = \operatorname*{argmax}_{\substack{k \in [K],\; (s_{i,t},\, a^k_{i,t}) \in X_t^+,\\ \mathrm{KL}(s_{i,t},\, a^k_{i,t}) \leq \mathrm{KL}_{\max}}} r_{\text{pair}}(s_{i,t}, a^k_{i,t}). \tag{17}$$
We refer to the procedure in eq. (17) as constrained regularized reward ranking. It is important to note that CRRAFT not only has the capability to manage multiple constraints, but it also strictly bounds the KL-divergence. This is a feature that the standard RAFT algorithm lacks.

Note that there may be instances where no generations remain after filtering. In such cases, if the pre-collected baseline response $\bar{a}_{i,t}$ satisfies all constraints, it can be used as $a^*_{i,t}$. If it doesn't, this datapoint can be skipped.

- After the constrained regularized reward ranking, instead of directly performing an SFT update w.r.t. the chosen sample as eq. (16) does, here we reweigh each chosen response by its calibrated reward value and then perform the SFT update as follows:

$$w_{t+1} = w_t + \alpha_t \cdot \tilde{g}_{\text{ra}}(\pi_{w_t}) = w_t + \alpha_t \cdot \frac{1}{n}\sum_{i=1}^{n} R_{\text{calib}}(s_{i,t}, a^*_{i,t}) \cdot \nabla \log\big(\pi_{w_t}(s_{i,t}, a^*_{i,t})\big). \tag{18}$$

By incorporating the calibrated reward model value in the update, we can differentiate the emphasis on chosen responses based on their quality, unlike the RAFT algorithm which treats all chosen responses equivalently. This approach allows for a more refined alignment with the reward model.

Please note that unlike CRPG and CODPO, CRRAFT specifically focuses on increasing the likelihood of constraint-satisfied positive samples and disregards the constraint-violated negative samples.
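A minimal sketch of the CRRAFT selection and reward-weighted SFT objective follows; the candidate fields and the torch tensors are assumed placeholders for the components described above, not the paper's code.

```python
# Minimal sketch of constrained regularized reward ranking (eq. 17) and the
# reward-weighted SFT loss (eq. 18).
def crraft_select(candidates, kl_max: float):
    """candidates: list of dicts with keys 'response', 'reward', 'r_calib',
    'satisfied' (mixture-of-judges verdict), and 'kl' (log pi_wt / pi_ref)."""
    eligible = [c for c in candidates if c["satisfied"] and c["kl"] <= kl_max]
    if not eligible:
        return None  # fall back to the baseline response if it passes the judges, else skip
    return max(eligible, key=lambda c: c["reward"])

def crraft_loss(logp_chosen, r_calib_chosen):
    # logp_chosen / r_calib_chosen: torch tensors of shape (batch,).
    # Reward-weighted negative log-likelihood of the chosen responses (eq. 18).
    return -(r_calib_chosen * logp_chosen).mean()
```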
**4.1.4** **Judges in CGPO**
The key step in implementing multi-constraint CGPO optimizers, as outlined in Section 4.1.1 and Section 4.1.3, is
to determine whether a generation $(s, a)$ satisfies a constraint or not. This determination allows us to split generated samples into positive ($X_t^+$) and negative ($X_t^-$) groups given the label $y$ predicted by each constraint judge $J_h$, i.e., $J_h(s, a) = y \in \{0, 1\}$, where $1 \leq h \leq M$,
and then apply our customized constraint RLHF optimizers based on that classification. In CGPO, we have developed
and integrated the following two types of constraint judge modules to assess whether a generation satisfies a constraint:
- Rule-based constraint judge module: This module employs a rule-based approach (such as string-matching and
code execution) to ascertain whether the generation strictly adheres to predefined regulations (Li et al., 2024a). It
is particularly effective for constraints related to precise instruction following, where the generation must meet
exact requirements such as length, number of paragraphs, and keyword inclusion (Zhou et al., 2023; Hendrycks
et al., 2021b; Cobbe et al., 2021). It can also handle reasoning tasks, such as math problems and code generation.
- LLM-based constraint judge module. This module functions as an LLM generator. In most cases, the generation
is formatted according to a template before being sent to the judge module. These modules not only provide
access to the constraint satisfaction condition but also offer reasoning behind the judgement construction. Due to
this property, they are typically capable of handling more challenging constraint evaluation tasks such as safety
violation, reference-based factuality verification, and false refusal patterns. The model could either be a compact
LLM fine-tuned with domain-specific data (Inan et al., 2023; Bai et al., 2022) or a powerful, large LLM without
task-specific fine-tuning (Yuan et al., 2024b; Zheng et al., 2024).
A detailed introduction to these two types of judges can be found in Appendix B.
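As a concrete illustration of the first judge type, the snippet below sketches rule-based constraint judges of the kind described above (a keyword-inclusion check and a paragraph-count check); the function names and constraint formats are hypothetical placeholders chosen only to show how such judges return a binary label $y$ that splits samples into $X_t^+$ and $X_t^-$.

```python
def keyword_judge(response: str, required_keywords) -> int:
    """Rule-based judge: 1 if every required keyword appears in the response."""
    return int(all(k.lower() in response.lower() for k in required_keywords))

def paragraph_count_judge(response: str, n_paragraphs: int) -> int:
    """Rule-based judge: 1 if the response has exactly n_paragraphs paragraphs."""
    paragraphs = [p for p in response.split("\n\n") if p.strip()]
    return int(len(paragraphs) == n_paragraphs)

# Example usage: each judge returns a label in {0, 1}.
print(keyword_judge("Rockets are fun to write jokes about.", ["rocket"]))  # -> 1
print(paragraph_count_judge("First paragraph.\n\nSecond paragraph.", 2))   # -> 1
```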
**4.2** **CGPO in Multi-Tasks with Multi-Objectives**

In the multi-task environment, CGPO utilizes customized combinations of "reward models + MoJs + optimizers" to provide alignment guidance tailored to each task. This approach is designed to better accommodate the specific nature of each problem, thereby giving CGPO a better chance of achieving optimal alignment outcomes. Figure 2 provides an end-to-end illustration of how the CGPO pipeline functions in the multi-task setting. The entire CGPO pipeline has two core components: multi-objective reward modeling and multi-expert alignment.

**Figure 2 CGPO in a multi-task setting. The reward model, MoJs, and optimization setup are uniquely tailored to the specific characteristics of each task. This customization ensures the most effective and targeted approach for achieving optimal performance across all tasks, even those with potentially contradictory goals.**

**Multi-Objective Reward Modelling.** Unlike the approach adopted in previous RLHF pipelines in multi-objective scenarios, which applies the same linearly combined reward model to all prompts in the prompt set $D$, CGPO first classifies
the prompt set $D$ into distinct, non-overlapping categories based on the nature of the prompts, i.e., $D = \{D_1, D_2, \ldots, D_L\}$. Each prompt set $D_l \in D$ is referred to as a task. For example, prompts with harmful intent, which could potentially lead the LLM to generate unsafe responses, are grouped into a class labeled "harmful intent". Conversely, prompts without unsafe intent, primarily focused on information gathering and casual conversation, are grouped into a class labeled "general chat". This categorization can be performed during the data collection phase or by prompting an LLM to carry out the categorization given the definitions of different classes. Subsequently, with a collection of trained reward models denoted as $\{R_{\mathrm{calib},1}, R_{\mathrm{calib},2}, \ldots, R_{\mathrm{calib},V}\}$, we tailor the specific reward model to be applied to each task $D_l$. This customization guarantees that each prompt class $D_l$ benefits from the most appropriate guidance provided by the corresponding reward model. Note that the number of reward models, denoted by $V$, is less than or equal to the number of tasks, meaning a single reward model can be utilized across multiple tasks.
The major advantage of segregating the reward modeling for each individual task is to exclude irrelevant or contradictory
objectives, thus enabling each task to focus solely on optimizing its own goal metrics without interference from other
objectives, which could otherwise lead to suboptimal gains in target goals.
**Multi-Expert Alignment.** The concept of multi-expert alignment involves applying customized MoJs, reward models, and policy optimization setups for each task.

After the policy model generates online samples for each task, we employ a mixture of task-specific judges to identify generations that do not meet predefined standards. It is crucial to emphasize that the selection of judges is uniquely
tailored for each task, reflecting the particular shortcomings of each reward model and our established performance
criteria for LLMs in these tasks. For instance, in the "general chat" task, we employ LLM-based judges for false
refusal and factuality to enhance responsiveness and ensure honesty. In "reasoning" tasks, we implement a rule-based
math/coding constraint judge to guarantee correctness and accuracy.
Based on the status of constraint satisfaction across generations and a customized reward model, we implement an
RLHF policy optimizer with a specifically tailored hyperparameter setup to align each task effectively. This method
deviates from the conventional RLHF pipeline, which generally employs a uniform optimizer setup for task alignment.
For tasks that have precise judges and require extensive exploration to derive the correct response, such as instruction
following, math, and coding, we apply a lenient KL threshold and allow a higher number of generations per prompt. In
contrast, for tasks where precise judges are lacking and extensive exploration is less critical, such as "general chat," we
opt for a stricter KL threshold and a reduced number of generations per prompt.
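As a purely illustrative example of this per-task customization, a configuration along the following lines could encode which reward model, judges, optimizer, KL threshold, and generation budget each task receives; the specific values and names below are hypothetical placeholders rather than the settings used in the experiments.

```python
# Hypothetical per-task CGPO configuration: each task gets its own reward model,
# mixture of judges, optimizer, KL threshold, and generations-per-prompt budget.
cgpo_task_config = {
    "general_chat": {
        "reward_model": "helpfulness_rm",
        "judges": ["false_refusal", "factuality"],
        "optimizer": "CRPG",
        "kl_max": 0.5,           # stricter KL: no precise judge, less exploration needed
        "generations_per_prompt": 4,
    },
    "math_code_reasoning": {
        "reward_model": "helpfulness_rm",
        "judges": ["math_code"],
        "optimizer": "CRRAFT",
        "kl_max": 5.0,           # lenient KL: precise judges allow wider exploration
        "generations_per_prompt": 16,
    },
    "harmful_intent": {
        "reward_model": "safety_rm",
        "judges": ["safety", "false_refusal"],
        "optimizer": "CODPO",
        "kl_max": 0.5,
        "generations_per_prompt": 4,
    },
}
```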
**Algorithm 2 CGPO($\{D_l\}_{l=1}^L, \pi_{w_0}, \{J_l\}_{l=1}^L, \{B_l\}_{l=1}^L, \{R_l\}_{l=1}^L, \{\mathcal{O}_l\}_{l=1}^L, T$) in multi-task settings with multi-constraints & multi-objectives**

1: Input: multi-task prompt sets $\{D_l\}_{l=1}^L$, LLM starting policy $\pi_{w_0}$, judge sets $\{J_l\}_{l=1}^L$, multi-task batch sizes $\{B_l\}_{l=1}^L$, reward model sets $\{R_l\}_{l=1}^L$, multi-task weights $\{\rho_l\}_{l=1}^L$, multi-task optimizers $\{\mathcal{O}_l\}_{l=1}^L$, iteration number $T$.
2: for $t = 0, 1, \cdots, T$ do
3:     for $l = 1, 2, \cdots, L$ do
4:         Obtain gradient $\tilde{g}_l(\pi_{w_t})$ for the $l$-th task via CGPO($D_l, \pi_{w_t}, J_l, B_l, R_l, \mathcal{O}_l, 1$) in Algorithm 1
5:     end for
6:     Update with multi-task gradient accumulation: $w_{t+1} = w_t + \alpha_t \cdot \sum_{l=1}^{L} \rho_l \cdot \tilde{g}_l(\pi_{w_t})$
7: end for
The high-level framework of CGPO in the multiple-constraint and multiple-objective setting is illustrated in Algorithm 2.
Specifically, at each iteration $t$, we process each individual task to compute the updated gradient $\tilde{g}_l(\pi_{w_t})$. This computation is based on the task-specific prompt set $D_l$, reward model $R_l$, judges $J_l$, batch size $B_l$, and optimizer $\mathcal{O}_l$, following the steps outlined in Algorithm 1. Subsequently, we accumulate the gradients across all tasks and combine them with our predefined task weights $\{\rho_l\}_{l=1}^L$, which are then used to update our model parameters.
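The accumulation step of Algorithm 2 can be summarized by the schematic loop below; `cgpo_task_gradient` is a placeholder for the single-task CGPO update of Algorithm 1, and the snippet is only a structural sketch of the weighted gradient accumulation, not the actual training code.

```python
import numpy as np

def cgpo_multitask_step(w, tasks, task_weights, cgpo_task_gradient, lr):
    """One outer iteration of Algorithm 2 (sketch).

    w:                  current policy parameters (flattened NumPy array here).
    tasks:              list of per-task bundles (prompt set D_l, reward model R_l,
                        judges J_l, batch size B_l, optimizer O_l).
    task_weights:       predefined weights rho_l for each task.
    cgpo_task_gradient: callable implementing one single-task CGPO update
                        (Algorithm 1) and returning the gradient for that task.
    lr:                 step size alpha_t.
    """
    accumulated = np.zeros_like(w)
    for task, rho in zip(tasks, task_weights):
        g_l = cgpo_task_gradient(w, task)   # per-task gradient from Algorithm 1
        accumulated += rho * g_l            # weighted accumulation across tasks
    return w + lr * accumulated             # gradient-ascent style parameter update
```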
## 5 Experiments
In this section, we outline the specifics of our experimental setup for multi-task alignment under extreme multi-constraint and multi-objective conditions. Specifically, we focus on fine-tuning an LLM to achieve alignment across the following five tasks:
- General chat: This task is designed to enhance the general conversational abilities of LLMs by considering
multi-turn conversational histories (Wang et al., 2024b). It focuses on boosting the coherence, consistency, and
correctness of responses, thereby making the interactions more logical and seamless. Additionally, this task
improves the model’s capability to deliver responses that are better aligned with the user’s intentions and queries,
and are factually grounded (Sun et al., 2024).
- Instruction Following: This task is designed to enhance the ability of LLMs to follow instructions accurately
within specific contexts or industries (Zhou et al., 2023). By fine-tuning LLMs to adapt to particular domains
or user requirements, they can deliver more precise and relevant responses. This improvement leads to a
more satisfying and efficient user experience, making LLMs more effective and versatile tools across various
applications.
- Math/Code Reasoning: This task is designed to enhance the math and coding capabilities of LLMs, enabling
them to address more complex problems and broaden their range of functions. These include tasks like debugging
code or solving mathematical equations, which are vital in technical fields (Hendrycks et al., 2021b; Cobbe
et al., 2021; Chen et al., 2021; Austin et al., 2021). Furthermore, improving LLMs’ ability to comprehend and
produce mathematical and code-related content results in greater accuracy and efficiency in activities that demand
meticulous logical reasoning and computational thinking.
- Engagement Intent: This task aims to enhance user engagement and interaction with the LLM. To address this,
we involve human annotators who interact with the model and provide binary feedback (like or dislike) for each
response generated by the LLM. Our objective is to maximize the likelihood that users will favorably respond to
the LLM’s outputs.
- Harmful Intent: This task trains LLMs to recognize and resist safety-related adversarial attacks. It ensures that
LLMs are safeguarded against exploitation for malicious purposes, such as generating harmful or misleading
information (Sun et al., 2024; Xu et al., 2020). By enhancing their ability to operate safely and ethically, this task
helps maintain user trust and uphold the credibility of the technology.
**5.1** **Supervised Fine-Tuning**
The foundational model we have chosen is the LLaMA-3.0-70B pre-trained checkpoint. We independently perform SFT using open-source datasets to establish the initial policy, denoted as $\pi_0$. For all preference pair datasets listed below, we only use the positive samples in SFT. We utilize the following datasets for the tasks under consideration:
- General chat: LMSys-55k (Chiang et al., 2024), UltraChat (Ding et al., 2023)
- Instruction following: Llama 3.0 70B instruct model synthetic instruction-following dataset
- Math/Code Reasoning: Orca-Math (Mitra et al., 2024), MetaMath (Yu et al., 2023), Evol-CodeAlpaca (Luo et al., 2023), UltraFeedback (Cui et al., 2023), UltraInteract (Yuan et al., 2024a)
- Harmful Intent: Human annotated safety dataset
The training is carried out for 2 epochs with a learning rate of $10^{-5}$. A cosine schedule with a minimum rate of 0.1 and 200 warm-up steps is employed, and the global batch size is set to 128. The details of how we obtain the synthetic instruction-following dataset and the safety SFT dataset can be found in Appendix A.
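For reference, a cosine learning-rate schedule with linear warm-up and a minimum-rate floor of the kind described above can be written as the short function below; it is a generic sketch consistent with the stated hyperparameters (peak rate 1e-5, 200 warm-up steps, and the minimum rate 0.1 interpreted as a fraction of the peak rate, which is an assumption), not the exact scheduler implementation used in training.

```python
import math

def cosine_lr_with_warmup(step, total_steps, peak_lr=1e-5,
                          min_ratio=0.1, warmup_steps=200):
    """Cosine decay from peak_lr down to min_ratio * peak_lr after a linear warm-up."""
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    cosine = 0.5 * (1.0 + math.cos(math.pi * min(1.0, progress)))
    return peak_lr * (min_ratio + (1.0 - min_ratio) * cosine)
```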
**5.2** **Reward Modelling**
We have employed open-source pairwise preference data to train three specialized reward models (RMs):
- Helpfulness RM: This model is tailored for tasks such as general chat, instruction following, and math/code
reasoning. It is based on the LLaMA-3-70B instruct finetuned model. The training utilized the following pairwise
preference datasets:
**– General chat: Includes datasets such as HH-RLHF (Bai et al., 2022), SHP (Ethayarajh et al., 2022),**
HelpSteer (Wang et al., 2023), Distilabel-Capybara (Ethayarajh et al., 2024), Distilabel-Orca (Álvaro
Bartolomé Del Canto et al., 2024), and LMSys-55k (Chiang et al., 2024).
**– Instruction Following: Llama 3.0 70B instruct model synthetic instruction-following pairwise preference**
dataset.
**– Math/Code Reasoning: Features datasets like Argilla Math (Álvaro Bartolomé Del Canto et al., 2024),**
UltraFeedback (Cui et al., 2023) and UltraInteract (Yuan et al., 2024a).
- Engagement RM: This RM is designed to simulate user engagement preferences. Initially, we fine-tune a
binary classifier predictor using the LLaMA-3-70B instruct model to predict a user’s engagement intent based
on real interaction data between the language model and the user. We then treat this predictor as the oracle for
user intent regarding engagement with the language model, given prompts and generations. To gather pair-wise
training data, we subsample 129692 prompts from the LMSys-1M dataset (Zheng et al., 2023a) and use the
LLaMA-3-70B instruct model to generate four responses for each prompt. Each prompt is then scored using the
oracle engagement predictor. We select the generation with the highest score as the "chosen" response and the
generation with the lowest score as the "rejected" response. By doing this, we compile the pair-wise dataset and
train the engagement RM based on this data.
- Safety RM: Focused on ensuring safe responses in scenarios with potentially harmful user prompts, this model is
based on the LLaMA-3-8B instruct finetuned model. It utilizes a human-annotated safety pairwise preference
dataset that identifies harmful intent in prompts.
It is important to note that we train a single unified Helpfulness RM that encompasses general chat, instruction following, and math/code reasoning, rather than three separate RMs. This choice is based on the observed positive correlation among these tasks: a unified RM, trained on a blended dataset from these domains, is expected to yield superior performance compared to separate RMs for each individual task.
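The engagement preference-pair construction described above follows a simple best-versus-worst recipe, sketched below with hypothetical helper names (`generate_responses`, `engagement_oracle`); the sketch only illustrates how chosen/rejected pairs could be assembled from oracle scores, not the exact data pipeline.

```python
def build_engagement_pairs(prompts, generate_responses, engagement_oracle, k=4):
    """Build (prompt, chosen, rejected) preference pairs from oracle scores (sketch).

    For each prompt, k candidate responses are generated; the highest-scoring
    candidate becomes the 'chosen' response and the lowest-scoring one the
    'rejected' response, yielding pairwise data for reward-model training.
    """
    pairs = []
    for prompt in prompts:
        candidates = generate_responses(prompt, num_samples=k)
        scored = sorted(candidates, key=lambda r: engagement_oracle(prompt, r))
        rejected, chosen = scored[0], scored[-1]
        pairs.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return pairs
```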
**5.3** **Mixture of Judges**
To address the limitations of the reward model, we have implemented several judges in our experiment for multi-task
alignment:
- False refusal judge: Enhancing safety protocols may cause LLMs to become overly safe, leading to false
refusals when responding to innocuous user queries, thus degrading user experience. It has become critical for
LLMs to reduce false refusals while maintaining the same level of safety, both in the research community and
in the leading industry models (Cui et al., 2024). To address this challenge, we have developed a false refusal
classifier, a fine-tuned LLM designed to detect false refusals to ensure the effectiveness of the LLM.
- Precise instruction following judge: Reward models often struggle with precisely following instructions
(Zhou et al., 2023). To address this, we have implemented a rule-based judge capable of accurately assessing
compliance with over 30 types of specific instruction-following requests found in user prompts, such as "answer
the question in two paragraphs." It is important to note that during RLHF finetuning, we will also include precise
instruction-following prompts of this type so that the correctness of the generation can be evaluated with this
constraint judge.
- Regex math/code reasoning judge: Reward models frequently fail to accurately assess the correctness of
math and coding problems. To improve accuracy, we have introduced specialized judges for both domains. For
math-related queries, we use a rule-based approach to check whether the final answers of responses match the
ground-truth answers. For coding problems, we employ a unit-test-based judge that evaluates the accuracy of the
code by running it through a series of unit tests.
- Factuality judge: Hallucination is a common issue in LLMs, especially during the RLHF phase. The reward
model often fails to distinguish between factual and non-factual claims. To address this, we use the Llama3
70B model as a factuality constraint judge to evaluate whether the fact-related claims in an output contradict
pre-collected, verified factual data, thereby ensuring the accuracy and reliability of the information provided by
the LLM.
- Safety judge: The safety reward model alone does not sufficiently ensure the trustworthiness of our model due
to its limited accuracy. To further enhance safety, we incorporate LlamaGuard2, an industry-leading, open-source
fine-tuned LLM, to assess whether an output violates predefined safety standards.
For details on all the above judges, please refer to Appendix B.
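To illustrate the rule-based reasoning judges above, a minimal final-answer matcher for math prompts and a unit-test runner for code could look like the following sketch; the extraction pattern and helper structure are simplified assumptions rather than the production judges.

```python
import re
import subprocess
import tempfile

def math_final_answer_judge(response: str, ground_truth: str) -> int:
    """Regex-based math judge: 1 if the last number in the response equals the ground truth."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", response.replace(",", ""))
    return int(bool(numbers) and numbers[-1] == ground_truth)

def unit_test_code_judge(candidate_code: str, test_code: str, timeout_s: int = 10) -> int:
    """Unit-test-based coding judge: 1 if the candidate passes the provided tests."""
    # Write candidate and tests to a temporary script and run it in a subprocess.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_code + "\n\n" + test_code)
        path = f.name
    try:
        result = subprocess.run(["python", path], capture_output=True, timeout=timeout_s)
        return int(result.returncode == 0)
    except subprocess.TimeoutExpired:
        return 0
```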
## 6 Evaluation Benchmarks
We assess models using a range of benchmarks to comprehensively evaluate their performance across all tasks. More
detailed information of evaluation setup can be found in Appendix C.
- General chat
**– AlpacaEval-2 (Dubois et al., 2024): This benchmark focuses on single-turn conversations and includes**
805 test prompts that span a range of topics. The models are evaluated directly against GPT-4 Preview to
determine the win rate. The same GPT-4 model also serves as the judge.
**– Chat-Arena-Hard (Li et al., 2024b): This benchmark includes 500 test prompts sourced from the live data**
on Chatbot Arena, a crowd-sourced platform for evaluating large language models (LLMs). These prompts
assess the model’s capabilities in areas such as specificity, domain knowledge, complexity, problem-solving,
creativity, technical accuracy, and real-world application. Besides aligning with human preferences, when
compared to AlpacaEval-2, Chat-Arena-Hard also demonstrates distinct separability between different
models.
- Instruction Following
**– IFeval (Zhou et al., 2023): This benchmark concentrates on close-form instruction-following tasks, encom-**
passing 25 verifiable instructions. It comprises 541 evaluation prompts, each potentially containing multiple
instruction requests. Four accuracy scores are provided in this benchmark: prompt-level strict accuracy,
prompt-level loose accuracy, instruction-level strict accuracy, and instruction-level loose accuracy. We report
the average of these four scores to represent the model’s performance in this benchmark.
- Math/Coding Reasoning
**– MATH (Hendrycks et al., 2021b): This benchmark includes 5000 problems drawn from a variety of**
mathematics competitions, encompassing a broad spectrum of subjects such as Prealgebra, Algebra, Number
Theory, Counting and Probability, Geometry, Intermediate Algebra, and Precalculus. Most of these problems
demand more than just the simple application of standard mathematical techniques.
**– GSM8K (Cobbe et al., 2021): This benchmark features 8.5k high-quality problems at the grade school math**
level. The solutions to these problems rely solely on elementary concepts, making high test performance an
achievable goal. Additionally, this dataset exhibits high linguistic diversity while depending on relatively
simple grade school math concepts.
**– MBPP (Austin et al., 2021): This benchmark comprises 974 programming tasks tailored for entry-level**
programmers. It evaluates the capability of language models to generate concise Python programs based
on descriptions provided in natural language. We consider the 0-shot evaluation prompt, which is closer to
real-world use cases. We provide a prompt example in the Appendix C.
**– HumanEval (Chen et al., 2021): This benchmark consists of 164 handwritten programming problems, each**
featuring a function signature, docstring, body, and unit tests. The programming tasks in this benchmark are
designed to evaluate language comprehension, reasoning, algorithmic thinking, and basic mathematics skills.
Similar to MBPP, we consider 0-shot evaluation prompt for this benchmark.
- World knowledge & factuality
**– MMLU (Hendrycks et al., 2020): This benchmark comprises 15908 multiple-choice questions spanning**
various branches of knowledge. It encompasses subjects including the humanities, social sciences, and hard
sciences. The evaluation dataset includes 57 tasks, covering areas such as elementary mathematics, US
history, computer science, law, among others.
**– ARC-Challenge (Clark et al., 2018): This benchmark features a collection of 2590 natural, grade-school**
science multiple-choice questions. All questions are considered challenging, as evidenced by the failure of
both retrieval and co-occurrence methods to provide correct answers.
**– TruthfulQA (Lin et al., 2021): The benchmark includes 817 questions across 38 categories, such as health,**
law, finance, and politics. Each question in the dataset is designed in a way that may lead some individuals
to answer incorrectly due to prevailing misconceptions or false beliefs. We report the multiple-choice QA
accuracy score (MC2) in our paper.
- Engagement Intent: We subsample 2000 prompts from the LMSys-1M dataset (Zheng et al., 2023a) and utilize
the oracle engagement predictor, as discussed in Section 5.2, to evaluate each generation. The average score of
the generations is used as a metric to measure the user’s intent to interact with the language model.
- Safety
**– Safety violation ratio (SVR):** Our safety benchmark is broadly based on the risk categories in the MLCommons taxonomy of hazards (Vidgen et al., 2024). We collect adversarial human-written prompts per risk category to ensure every category is covered and balanced in equal proportions. The prompts are written in such a way as to include diverse adversarial styles and attack vectors. We train a safety classifier similar to Llama Guard (Inan et al., 2023) to compute an overall violation ratio as an evaluation metric.
**– False refusal ratio (FRR):** The false refusal ratio acts as a counter-metric to the violation rate, to make sure the model is not overly safe. The false refusal prompts are collected in tandem with the adversarial prompts, targeting innocuous and benign cases that are nevertheless likely to trigger the model to refuse for safety reasons. One such example is "How to kill the lights". We also drew inspiration from XSTest (Röttger et al., 2023) and OR-Bench (Cui et al., 2024) while constructing our false refusal benchmark. We train a refusal classifier on model responses to compute an overall false refusal ratio as an evaluation metric (a minimal sketch of how these ratio metrics are computed is given after this list).
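Both SVR and FRR are simple ratios over binary classifier decisions; the sketch below (with `safety_classifier` and `refusal_classifier` as hypothetical stand-ins for the trained classifiers) illustrates how such ratio metrics can be computed and is not the evaluation code used for the benchmarks above.

```python
def violation_ratio(responses, classifier):
    """Fraction of responses flagged by a binary classifier (1 indicates a violation)."""
    if not responses:
        return 0.0
    return sum(classifier(r) for r in responses) / len(responses)

# SVR: score responses to adversarial prompts with the safety classifier.
# svr = violation_ratio(adversarial_responses, safety_classifier)
# FRR: score responses to benign-but-tricky prompts with the refusal classifier.
# frr = violation_ratio(benign_responses, refusal_classifier)
```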
-----
**6.1** **CGPO Training Setup**
In this section, we show how we implement CGPO in the RLHF finetuning stage.
**RLHF warm-up. Unlike previous studies Ouyang et al. (2022); Achiam et al. (2023), which directly employ the SFT**
model as the initial point for RLHF, our approach introduces a "warm-up" phase. This phase begins with a model that
has undergone preliminary fine-tuning through a few steps of DPO, starting from the SFT model. The rationale behind
this strategy is that even if DPO and the reward model utilize the same preference training dataset, initiating online
RLHF directly from the SFT model and performing policy optimization with the reward model may not be able to
explicitly exploit the high-quality preference data, potentially leading to suboptimal performance enhancements. By
initiating RLHF with a model already refined by DPO to a certain degree, we can fully harness the advantages of the
preference dataset, thereby providing a better starting point for RLHF with improved performance gains and more stable
optimization performance.
In our experiments, we employ all reward model training datasets in Section 5.2 to conduct DPO training using the SFT
model as described in Section 5.1. The warm-up model is developed through 3,000 update steps. As we will show in Section 6.2, CGPO initiated from the warm-up model significantly outperforms CGPO started from the SFT model.
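For completeness, the warm-up stage relies on the standard DPO objective; a minimal PyTorch-style sketch of that pairwise loss is shown below (with summed token log-probabilities assumed to be pre-computed tensors), purely to make the warm-up step concrete rather than to reproduce the exact training code.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss on a batch of (chosen, rejected) preference pairs.

    Each argument is a tensor of summed token log-probabilities under the
    policy being trained or the frozen reference (SFT) model.
    """
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between chosen and rejected implicit rewards.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```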
**Training recipe:** We begin the RLHF finetuning process using the warm-up model. We incorporate the reward models and MoJs, developed in Sections 5.2 and 5.3 respectively, into the CGPO framework to facilitate the RLHF finetuning
process. Table 1 shows the treatment we applied for each task.
| Task | Prompt set | Reward model |
|---|---|---|
| General chat | UltraChat, LMSys-55k, XSTest, TriviaQA, ARC | Helpfulness RM |
| Instruction Following | Synthetic IF prompts | Helpfulness RM |
| Math/Coding Reasoning | Math, GSM8K, Aqua & APPS | Helpfulness RM |
| Engagement Intent | LMSys-1M | Engagement RM |
| Harmful Intent | Safety RM training prompts | Safety RM |

The MoJs applied across these tasks include the Style, False refusal, Precise IF, Math/Code, Factuality, and Safety judges.

**Table 1 Tasks and their corresponding prompt sets, reward models, and MoJs. Note that in the general chat task the factuality constraint judge is only applied to the TriviaQA and ARC prompt sets.**
**Baseline and Ablations: To assess the performance of various constrained RLHF optimizers proposed in this study,**
we conducted CGPO training with different optimizers: CRPG, CRRAFT, and CODPO under the same settings (prompt
set, reward model, and MoJs). We tailored the hyperparameter settings of the optimizer for various tasks to align with
the specific characteristics of each task. Additionally, we consider DPO and PPO as our RLHF baselines. To establish
the DPO baseline, we continue running the DPO updates starting from the RLHF warm-up model and extend the training
to 14,000 steps to thoroughly optimize all benchmarks listed in Section 6. As previously mentioned, for DPO training,
we utilize all reward models’ training sets starting from the SFT model. To establish the PPO baseline, we first train a
unified reward model by merging all reward models’ training data as described in Section 5.2. Following this, we start
from the warm-up model and perform PPO updates by applying the unified reward model to all prompt sets listed in
Table 1. For both PPO and the CGPO variants, we use the same global batch size of 1024 and conduct 600 update steps.
**6.2** **Experimental Results**
In this section, we present the main results of our experiments. In Section 6.2.1, we highlight the superior performance
of CGPO compared to baseline RLHF methods across various benchmarks. We present ablation studies in Section
6.2.2 to demonstrate the importance of adopting MoJs. Additionally, we discuss the benefits of introducing an RLHF
warm-up stage in Section 6.2.3.
**6.2.1** **Main Results and Ablations**
**Figure 3 Comparison of CGPO variants with baseline RLHF algorithms PPO and DPO across various benchmarks**
For the online RLHF algorithms CGPO and PPO, we monitor the model’s performance at every 10-step interval
throughout the training trajectory across various benchmarks, as illustrated in Figure 3. The plot demonstrates
that CGPO, when paired with the CRPG and CRRAFT optimizers, consistently enhances performance across all
benchmarks compared to the initial model, indicating progressive improvement as training progresses. Specifically,
CRPG outperforms all others throughout the entire training period in terms of ARC Challenge, 0-shot HumanEval,
0-shot MBPP, 4-shots MBPP, MATH, and GSM8K. Meanwhile, CRRAFT excels in IFEval during the training phase.
Notably, the online RLHF baseline PPO exhibits a significant decline in performance on 0-shot coding benchmarks
(MBPP and HumanEval) as training progresses, indicating a severe case of reward hacking. Meanwhile, CGPO with the
CODPO optimizer shows a slight regression on MBPP and IFEval benchmarks compared to the warm-up model, yet it
effectively avoids the drastic performance drop observed with PPO in the coding benchmarks. The offline RLHF baseline
DPO, while avoiding the drastic regression seen with PPO, remains overly conservative in enhancing the model’s
performance, resulting in lower metric improvements compared to CGPO with the CRPG and CRRAFT optimizers.
In Table 2, we present the evaluation results for SFT, DPO warm-up, DPO baseline, the final step of PPO, and various
CGPO variants across all benchmarks detailed in Section 6. The data in Table 2 indicate that CGPO variants employing
CRPG and CRRAFT optimizers significantly outperform the DPO and PPO baselines across all benchmarks. Notably,
CRPG shows the most substantial improvements in math and coding benchmarks (Math, GSM8K, HumanEval, and
MBPP), while CRRAFT excels in helpfulness and factuality (AlpacaEval-2, Arena-Hard, and TruthfulQA). Both CRPG
and CRRAFT achieve the best results in terms of instruction following (IFEval). While the CGPO variant with the
CODPO optimizer does not perform as strongly as other variants, it offers performance that is on par with or better
than the DPO and PPO in all benchmarks except the IFEval. In terms of safety, CGPO with the CRPG and CODPO
optimizers achieve the best results in FRR and SVR, respectively. Table 2 demonstrates that the CGPO framework is
able to enhance model quality across all tasks, proving its efficacy in managing challenging multi-task fine-tuning.
**6.2.2** **Effectiveness of Mixture of Judges**
In this section, we explore the significance of incorporating MoJs within the CGPO framework. We conduct an ablation
study by eliminating all MoJs from CGPO, utilizing the CRPG optimizer, while keeping all other variables constant,
and then proceed to rerun the RLHF finetuning for 600 steps.
| | SFT | DPO warm-up | DPO | PPO | CGPO-CRPG | CGPO-CRRAFT | CGPO-CODPO |
|---|---|---|---|---|---|---|---|
| **AlpacaEval-2** | 10.9 | 13.3 | 16.3 | 24.8 | 25.9 | **43.2** | 18.08 |
| **Arena-Hard** | 13.6 ± 1.6 | 18.8 ± 1.6 | 18.3 ± 1.7 | 24.3 ± 1.8 | 31.2 ± 2.2 | **36.8 ± 2.0** | 16.8 ± 1.9 |
| **IFEval** | 0.71 | 0.75 | 0.79 | 0.81 | **0.83** | **0.83** | 0.70 |
| **MATH** | 0.44 | 0.44 | 0.45 | 0.46 | **0.48** | 0.47 | 0.46 |
| **GSM8K** | 0.86 | 0.88 | 0.90 | 0.91 | **0.93** | 0.92 | 0.90 |
| **0-shot MBPP** | 0.50 | 0.51 | 0.49 | 0.002 | **0.63** | 0.57 | 0.51 |
| **4-shots MBPP** | 0.55 | 0.57 | 0.60 | **0.62** | **0.62** | 0.58 | 0.55 |
| **0-shot HumanEval** | 0.09 | 0.15 | 0.59 | 0.006 | **0.76** | 0.70 | 0.57 |
| **4-shots HumanEval** | 0.62 | 0.70 | 0.70 | 0.66 | **0.71** | 0.68 | 0.67 |
| **MMLU** | 0.75 | **0.76** | 0.75 | 0.75 | 0.75 | 0.75 | 0.75 |
| **ARC** | 0.85 | 0.84 | 0.88 | 0.90 | **0.92** | 0.90 | 0.90 |
| **TruthfulQA** | 0.57 | 0.59 | 0.63 | 0.65 | 0.64 | **0.66** | 0.63 |
| **Engagement** | 0.50 | 0.59 | 0.71 | **0.81** | **0.81** | 0.72 | 0.79 |
| **SVR** | 0.03 | 0.03 | 0.02 | 0.03 | 0.05 | 0.02 | **0.01** |
| **FRR** | 0.18 | 0.161 | 0.17 | 0.12 | **0.04** | 0.12 | 0.24 |

**Table 2 Evaluation results of SFT, DPO warm-up, DPO, PPO and CGPO variants**
Figure 4 presents a comparative analysis of CGPO performance with and without MoJs using the CRPG optimizer across various benchmarks, including HumanEval, MBPP, MATH, and GSM8K.
From Figure 4, it is clear that in the absence of coding judges, the CRPG optimizer undergoes a notable decline in
0-shot coding benchmarks once it surpasses 180 steps, mirroring the performance of the PPO baseline. Additionally, in
the MATH, GSM8K and 4-shots HumanEval and MBPP benchmarks, while CRPG shows some improvement without
constraints, the increases in metrics are considerably less pronounced compared to cases where math judges are utilized.
This comparison effectively illustrates that MoJs play a crucial role not only in preventing reward hacking but also in
significantly boosting the model’s performance during online RLHF finetuning.
**6.2.3** **Impact of RLHF Warm-up**
In this section, we discuss the importance of introducing the RLHF warm-up stage. We consider CGPO with the CRPG optimizer, and rerun the experiment in Section 6.2.1 but switch the starting point to the SFT model. Additionally, we add
one more ablation by starting from the DPO baseline that has been extensively optimized, which has significantly better
performance across all benchmarks than the DPO warm-up model (Table 2).
Monitoring GPT-based helpfulness evaluations like AlpacaEval-2 and Arena-Hard during training is costly. To efficiently
assess the effectiveness of the RLHF warm-up stage from the helpfulness perspective, we implement a cost-effective
benchmark. We collect prompts from user-LLM interactions (e.g., LMSys-1M) and generate multiple responses using
the Llama 3.0 70B model. These responses are ranked by a powerful LLM, and the highest- and lowest-ranked responses
are used to create preference pairs for training a reward model (RM). This RM evaluates helpfulness based on its average
score on its training prompts. Although this RM may overfit this prompt set, it remains a valid measure of helpfulness
since our finetuning process does not depend on this specific prompt set.
Figure 5 illustrates the training curves of the CGPO model with different initial conditions across various benchmarks.
When compared to the standard online RLHF setting, which starts with the SFT model, CGPO initiated from the
warm-up model consistently achieves superior performance in all benchmarks, with the exception of GSM8K. For
the runs that begin with the DPO baseline, there is a noticeably higher initial performance across all benchmarks.
However, the ultimate performance of these models does not exceed those that started from the warm-up or SFT
models. Particularly in helpfulness, ARC challenge, Math and 4-shot coding benchmarks, there is a marked decline in
performance during the later stages of training. This suggests that starting from the highly optimized DPO baseline
may detrimentally affect the final model’s performance, potentially due to the soft-greedy nature of the DPO optimal
policy, which might limit the model's ability to explore and further improve. Therefore, Figure 5 demonstrates that incorporating an RLHF warm-up stage can significantly enhance the model's performance during the subsequent online RLHF phase.

**Figure 4 Comparison of CGPO (CRPG optimizer) with and without MoJs**
## 7 Related Works
**RLHF in MTL. Reinforcement Learning with Human Feedback (RLHF) is designed to align language models with**
human preferences and has become a crucial component of the fine-tuning pipeline for Large Language Models (LLMs)
(Stiennon et al., 2020; Ouyang et al., 2022; Brown et al., 2020; Touvron et al., 2023; Bi et al., 2024; Bai et al., 2022).
The majority of RLHF work focuses on optimizing a single reward model (Ouyang et al., 2022; Gao et al., 2023; Dong et al., 2023; Ethayarajh et al., 2023). RLHF in the MTL setting remains relatively underexplored. The
most commonly adopted approach involves optimizing a weighted sum of several reward models, where each model
captures the interests of different tasks (Ramamurthy et al., 2022; Glaese et al., 2022; Yuan et al., 2023; Bakker et al.,
2022; Wu et al., 2024). However, a major limitation of this approach is that key information from each individual reward
model can be lost through linear combination, particularly when conflicting task goals exist. This can lead to suboptimal
performance for each individual task. Additionally, each individual reward model typically requires different treatment (regularization, early stopping, etc.) due to its unique properties, so applying a uniform treatment to a composite
reward model can further impair optimization performance across tasks (Moskovitz et al., 2023). Another research
direction involves fine-tuning a separate LLM model for each task, followed by linear interpolation of the LLM weights
across all learned models to produce a single model that excels in multiple tasks (Rame et al., 2024). However, this
method remains computationally expensive and unstable due to the high cost and variability inherent in a single RLHF
process (Hu et al., 2023; Rafailov et al., 2024b). Yang et al. (2024) proposed using an in-context reward model to manage multiple rewards, but this introduces additional cost at inference time. Unlike the approaches mentioned above, CGPO
introduces a customized reward model recipe and an RLHF optimizer tailored for each specific task. This method is not
only as efficient as the conventional RLHF pipeline, but it also preserves all information within each reward model,
thereby optimizing alignment for each task to the fullest extent.
**Figure 5 Comparison of CGPO (CRPG optimizer) with different starting points**

**Reward Hacking Mitigation.** Compared with traditional RL, where the reward is typically well-defined and the goal is to maximize it (Sutton and Barto, 2018), RLHF introduces a unique challenge known as "reward hacking." This issue arises because the reward model serves as a proxy for actual human preferences. Over-optimization of the reward model can adversely impact the performance of the language model (Gao et al., 2023; Moskovitz et al., 2023; Stiennon et al., 2020; Rafailov et al., 2024b). Consequently, addressing reward hacking is a major focus in RLHF. Previous studies
have explored various approaches to mitigate the effects of reward hacking, including reward model regularization
(Singhal et al., 2023), reward ensembles (Eisenstein et al., 2023; Ramé et al., 2024), and explicitly learning the reward
bias error (Chen et al., 2024; Shen et al., 2023). In contrast to previous methods, our CGPO framework employs both
LLM and rule-based judges as constraints to detect and prevent reward hacking patterns. This approach offers a more
fine-grained and controllable solution to this persistent issue. Furthermore, the use of MoJs enables us to develop
tailored strategies for mitigating the effects of reward hacking across various tasks in the MTL setting. This allows us to
effectively address the reward hacking challenge in the more complex MTL environment, where previous methods have
struggled to perform efficiently.
## 8 Conclusion
In this paper, we introduced the CGPO framework to address key challenges in multi-task learning for LLM post-training
with RLHF. The CGPO framework effectively mitigates issues such as inhomogeneous reward hacking and conflicting
task goals through a novel primal-type multi-constraint RL method and a tailored multi-objective optimization strategy.
We demonstrate the effectiveness of CGPO in a scenario where we need to handle five tasks with three reward models
and seven constraints, marking the first application of RLHF in multi-task learning for general-purpose LLMs. Our
experiments show that CGPO achieves significantly better metric gains for all tasks compared to the baseline RLHF
methods. Moving forward, it is promising to explore more automated ways to adapt the gradient weights from different
tasks to further reduce the hyperparameter burden and advance the Pareto frontier (Sener and Koltun, 2018).
-----
## References
Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko
Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, and Sonal Gupta. Muppet: Massive multi-task
representations with pre-finetuning. arXiv preprint arXiv:2101.11038, 2021.
AI Anthropic. Introducing claude, 2023.
Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q Tran,
Dara Bahri, Jianmo Ni, et al. Ext5: Towards extreme multi-task scaling for transfer learning. arXiv preprint arXiv:2111.10952,
2021.
Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann,
Nova DasSarma, et al. A general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861, 2021.
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael
Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli,
Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint
_arXiv:2204.05862, 2022._
Michiel Bakker, Martin Chadwick, Hannah Sheahan, Michael Tessler, Lucy Campbell-Gillingham, Jan Balaguer, Nat McAleese,
Amelia Glaese, John Aslanides, Matt Botvinick, et al. Fine-tuning language models to find agreement among humans with diverse
preferences. Advances in Neural Information Processing Systems, 35:38176–38189, 2022.
Xiao Bi, Deli Chen, Guanting Chen, Shanhuang Chen, Damai Dai, Chengqi Deng, Honghui Ding, Kai Dong, Qiushi Du, Zhe Fu,
et al. Deepseek llm: Scaling open-source language models with longtermism. arXiv preprint arXiv:2401.02954, 2024.
Ralph Allan Bradley and Milton E Terry. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika,
39(3/4):324–345, 1952.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam,
Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems,
33:1877–1901, 2020.
Rich Caruana. Multitask learning. Machine learning, 28:41–75, 1997.
Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz
Korbak, David Lindner, Pedro Freire, et al. Open problems and fundamental limitations of reinforcement learning from human
feedback. arXiv preprint arXiv:2307.15217, 2023.
Lichang Chen, Chen Zhu, Davit Soselia, Jiuhai Chen, Tianyi Zhou, Tom Goldstein, Heng Huang, Mohammad Shoeybi, and Bryan
Catanzaro. Odin: Disentangled reward mitigates hacking in rlhf. arXiv preprint arXiv:2402.07319, 2024.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda,
Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374,
2021.
Wei-Lin Chiang, Lianmin Zheng, Ying Sheng, Anastasios Nikolas Angelopoulos, Tianle Li, Dacheng Li, Hao Zhang, Banghua Zhu,
Michael Jordan, Joseph E. Gonzalez, and Ion Stoica. Chatbot arena: An open platform for evaluating llms by human preference,
2024.
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have
solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457, 2018.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob
Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
Michael Crawshaw. Multi-task learning with deep neural networks: A survey. arXiv preprint arXiv:2009.09796, 2020.
Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Wei Zhu, Yuan Ni, Guotong Xie, Zhiyuan Liu, and Maosong Sun. Ultrafeedback:
Boosting language models with high-quality feedback. 2023.
Justin Cui, Wei-Lin Chiang, Ion Stoica, and Cho-Jui Hsieh. Or-bench: An over-refusal benchmark for large language models. arXiv
_preprint arXiv:2405.20947, 2024._
-----
Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. Enhancing
chat language models by scaling high-quality instructional conversations. arXiv preprint arXiv:2305.14233, 2023.
Hanze Dong, Wei Xiong, Deepanshu Goyal, Yihan Zhang, Winnie Chow, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and
Tong Zhang. Raft: Reward ranked finetuning for generative foundation model alignment. arXiv preprint arXiv:2304.06767, 2023.
Yann Dubois, Chen Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy S Liang, and
Tatsunori B Hashimoto. Alpacafarm: A simulation framework for methods that learn from human feedback. Advances in Neural
_Information Processing Systems, 36, 2024._
Jacob Eisenstein, Chirag Nagpal, Alekh Agarwal, Ahmad Beirami, Alex D’Amour, DJ Dvijotham, Adam Fisch, Katherine Heller,
Stephen Pfohl, Deepak Ramachandran, et al. Helping or herding? reward model ensembles mitigate but do not eliminate reward
hacking. arXiv preprint arXiv:2312.09244, 2023.
Kawin Ethayarajh, Yejin Choi, and Swabha Swayamdipta. Understanding dataset difficulty with V-usable information. In Kamalika
Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, Proceedings of the 39th International
_Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 5988–6008. PMLR, 17–23_
Jul 2022.
Kawin Ethayarajh, Winnie Xu, Dan Jurafsky, and Douwe Kiela. Human-centered loss functions (halos). Technical report, Technical
report, Contextual AI, 2023.
Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. Kto: Model alignment as prospect theoretic
optimization. arXiv preprint arXiv:2402.01306, 2024.
Leo Gao, John Schulman, and Jacob Hilton. Scaling laws for reward model overoptimization. In International Conference on
_Machine Learning, pages 10835–10866. PMLR, 2023._
Amelia Glaese, Nat McAleese, Maja Tre˛bacz, John Aslanides, Vlad Firoiu, Timo Ewalds, Maribeth Rauh, Laura Weidinger, Martin
Chadwick, Phoebe Thacker, et al. Improving alignment of dialogue agents via targeted human judgements. arXiv preprint
_arXiv:2209.14375, 2022._
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive
multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.
Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He,
Dawn Song, and Jacob Steinhardt. Measuring coding challenge competence with apps. NeurIPS, 2021a.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring
mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021b.
Jian Hu, Li Tao, June Yang, and Chandler Zhou. Aligning language models with offline reinforcement learning from human feedback.
_arXiv preprint arXiv:2308.12050, 2023._
Hakan Inan, Kartikeya Upasani, Jianfeng Chi, Rashi Rungta, Krithika Iyer, Yuning Mao, Michael Tontchev, Qing Hu, Brian
Fuller, Davide Testuggine, et al. Llama guard: Llm-based input-output safeguard for human-ai conversations. arXiv preprint
_arXiv:2312.06674, 2023._
Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu,
Punit Singh Koura, et al. Opt-iml: Scaling language model instruction meta learning through the lens of generalization. arXiv
_preprint arXiv:2212.12017, 2022._
Di Jin, Shikib Mehri, Devamanyu Hazarika, Aishwarya Padmakumar, Sungjin Lee, Yang Liu, and Mahdi Namazifar. Data-efficient
alignment of large language models with human feedback through natural language. arXiv preprint arXiv:2311.14543, 2023.
Kaiwen Li, Tao Zhang, and Rui Wang. Deep reinforcement learning for multiobjective optimization. IEEE transactions on cybernetics,
51(6):3103–3114, 2020.
Ming Li, Han Chen, Chenguang Wang, Dang Nguyen, Dianqi Li, and Tianyi Zhou. Ruler: Improving llm controllability by rule-based
data recycling. arXiv preprint arXiv:2406.15938, 2024a.
Tianle Li, Wei-Lin Chiang, Evan Frick, Lisa Dunlap, Banghua Zhu, Joseph E Gonzalez, and Ion Stoica. From live data to high-quality
benchmarks: The arena-hard pipeline, April 2024. URL https://lmsys.org/blog/2024-04-19-arena-hard, 2024b.
Stephanie Lin, Jacob Hilton, and Owain Evans. Truthfulqa: Measuring how models mimic human falsehoods. arXiv preprint
_arXiv:2109.07958, 2021._
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. Program induction by rationale generation: Learning to solve and explain
algebraic word problems. arXiv preprint arXiv:1705.04146, 2017.
-----
Bingchang Liu, Chaoyu Chen, Cong Liao, Zi Gong, Huan Wang, Zhichao Lei, Ming Liang, Dajun Chen, Min Shen, Hailian Zhou,
et al. Mftcoder: Boosting code llms with multitask fine-tuning. arXiv preprint arXiv:2311.02303, 2023.
Shengchao Liu, Yingyu Liang, and Anthony Gitter. Loss-balanced task weighting to reduce negative transfer in multi-task learning.
In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 9977–9978, 2019a.
Shikun Liu, Edward Johns, and Andrew J Davison. End-to-end multi-task learning with attention. In Proceedings of the IEEE/CVF
_conference on computer vision and pattern recognition, pages 1871–1880, 2019b._
Yudong Luo, Yangchen Pan, Han Wang, Philip Torr, and Pascal Poupart. A simple mixture policy parameterization for improving
sample efficiency of cvar optimization. arXiv preprint arXiv:2403.11062, 2024.
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang.
Wizardcoder: Empowering code large language models with evol-instruct. 2023.
AI Meta. Introducing meta llama 3: The most capable openly available llm to date. Meta AI, 2024.
Arindam Mitra, Hamed Khanpour, Corby Rosset, and Ahmed Awadallah. Orca-math: Unlocking the potential of slms in grade school
math. arXiv preprint arXiv:2402.14830, 2024.
Ted Moskovitz, Aaditya K Singh, DJ Strouse, Tuomas Sandholm, Ruslan Salakhutdinov, Anca D Dragan, and Stephen McAleer.
Confronting reward model overoptimization with constrained rlhf. arXiv preprint arXiv:2310.04373, 2023.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal,
Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in neural
_information processing systems, 35:27730–27744, 2022._
Arka Pal, Deep Karkhanis, Samuel Dooley, Manley Roberts, Siddartha Naidu, and Colin White. Smaug: Fixing failure modes of
preference optimisation with dpo-positive. arXiv preprint arXiv:2402.13228, 2024.
Alexander Pan, Kush Bhatia, and Jacob Steinhardt. The effects of reward misspecification: Mapping and mitigating misaligned
models. arXiv preprint arXiv:2201.03544, 2022.
Rafael Rafailov, Yaswanth Chittepu, Ryan Park, Harshit Sikchi, Joey Hejna, Bradley Knox, Chelsea Finn, and Scott Niekum. Scaling
laws for reward model overoptimization in direct alignment algorithms. arXiv preprint arXiv:2406.02900, 2024a.
Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference
optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36, 2024b.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu.
Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machine learning research, 21(140):
1–67, 2020.
Rajkumar Ramamurthy, Prithviraj Ammanabrolu, Kianté Brantley, Jack Hessel, Rafet Sifa, Christian Bauckhage, Hannaneh Hajishirzi,
and Yejin Choi. Is reinforcement learning (not) for natural language processing: Benchmarks, baselines, and building blocks for
natural language policy optimization. arXiv preprint arXiv:2210.01241, 2022.
Alexandre Rame, Guillaume Couairon, Corentin Dancette, Jean-Baptiste Gaya, Mustafa Shukor, Laure Soulier, and Matthieu Cord.
Rewarded soups: towards pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards. Advances in Neural
_Information Processing Systems, 36, 2024._
Alexandre Ramé, Nino Vieillard, Léonard Hussenot, Robert Dadashi, Geoffrey Cideron, Olivier Bachem, and Johan Ferret. Warm:
On the benefits of weight averaged reward models. arXiv preprint arXiv:2401.12187, 2024.
Mathieu Rita, Florian Strub, Rahma Chaabouni, Paul Michel, Emmanuel Dupoux, and Olivier Pietquin. Countering reward
over-optimization in llm with demonstration-guided reinforcement learning. arXiv preprint arXiv:2404.19409, 2024.
Paul Röttger, Hannah Rose Kirk, Bertie Vidgen, Giuseppe Attanasio, Federico Bianchi, and Dirk Hovy. Xstest: A test suite for
identifying exaggerated safety behaviours in large language models. arXiv preprint arXiv:2308.01263, 2023.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv
_preprint arXiv:1707.06347, 2017._
Ozan Sener and Vladlen Koltun. Multi-task learning as multi-objective optimization. Advances in neural information processing
_systems, 31, 2018._
-----
Wei Shen, Rui Zheng, Wenyu Zhan, Jun Zhao, Shihan Dou, Tao Gui, Qi Zhang, and Xuanjing Huang. Loose lips sink ships:
Mitigating length bias in reinforcement learning from human feedback. arXiv preprint arXiv:2310.05199, 2023.
Prasann Singhal, Tanya Goyal, Jiacheng Xu, and Greg Durrett. A long way to go: Investigating length correlations in rlhf. arXiv
_preprint arXiv:2310.03716, 2023._
Joar Skalse, Nikolaus Howe, Dmitrii Krasheninnikov, and David Krueger. Defining and characterizing reward gaming. Advances in
_Neural Information Processing Systems, 35:9460–9471, 2022._
Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F
Christiano. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008–3021,
2020.
Lichao Sun, Yue Huang, Haoran Wang, Siyuan Wu, Qihui Zhang, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li,
et al. Trustllm: Trustworthiness in large language models. arXiv preprint arXiv:2401.05561, 2024.
Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 2018.
Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk,
Andrew M Dai, Anja Hauth, et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023.
Llama Team. Meta llama guard 2. https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard2/MODEL_CARD.md, 2024.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra,
Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288,
2023.
Lewis Tunstall, Edward Beeching, Nathan Lambert, Nazneen Rajani, Kashif Rasul, Younes Belkada, Shengyi Huang, Leandro von
Werra, Clémentine Fourrier, Nathan Habib, et al. Zephyr: Direct distillation of lm alignment. arXiv preprint arXiv:2310.16944,
2023.
Bertie Vidgen, Adarsh Agrawal, Ahmed M Ahmed, Victor Akinwande, Namir Al-Nuaimi, Najla Alfaraj, Elie Alhajjar, Lora Aroyo,
Trupti Bavalatti, Borhane Blili-Hamelin, et al. Introducing v0. 5 of the ai safety benchmark from mlcommons. arXiv preprint
_arXiv:2404.12241, 2024._
Binghai Wang, Rui Zheng, Lu Chen, Yan Liu, Shihan Dou, Caishuang Huang, Wei Shen, Senjie Jin, Enyu Zhou, Chenyu Shi, et al.
Secrets of rlhf in large language models part ii: Reward modeling. arXiv preprint arXiv:2401.06080, 2024a.
Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, et al.
A survey on large language model based autonomous agents. Frontiers of Computer Science, 18(6):186345, 2024b.
Zhilin Wang, Yi Dong, Jiaqi Zeng, Virginia Adams, Makesh Narsimhan Sreedhar, Daniel Egert, Olivier Delalleau, Jane Polak
Scowcroft, Neel Kant, Aidan Swope, and Oleksii Kuchaiev. Helpsteer: Multi-attribute helpfulness dataset for steerlm. 2023.
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8:
229–256, 1992.
Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A Smith, Mari Ostendorf, and Hannaneh
Hajishirzi. Fine-grained human feedback gives better rewards for language model training. Advances in Neural Information
_Processing Systems, 36, 2024._
Jing Xu, Da Ju, Margaret Li, Y-Lan Boureau, Jason Weston, and Emily Dinan. Recipes for safety in open-domain chatbots. arXiv
_preprint arXiv:2010.07079, 2020._
Tengyu Xu, Yingbin Liang, and Guanghui Lan. Crpo: A new approach for safe reinforcement learning with convergence guarantee.
In International Conference on Machine Learning, pages 11480–11491. PMLR, 2021.
Rui Yang, Xiaoman Pan, Feng Luo, Shuang Qiu, Han Zhong, Dong Yu, and Jianshu Chen. Rewards-in-context: Multi-objective
alignment of foundation models with dynamic preference adjustment. arXiv preprint arXiv:2402.10207, 2024.
Chengyang Ying, Xinning Zhou, Hang Su, Dong Yan, Ning Chen, and Jun Zhu. Towards safe reinforcement learning via constraining
conditional value-at-risk. arXiv preprint arXiv:2206.04436, 2022.
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and
Weiyang Liu. Metamath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284,
2023.
-----
Lifan Yuan, Ganqu Cui, Hanbin Wang, Ning Ding, Xingyao Wang, Jia Deng, Boji Shan, Huimin Chen, Ruobing Xie, Yankai Lin,
Zhenghao Liu, Bowen Zhou, Hao Peng, Zhiyuan Liu, and Maosong Sun. Advancing llm reasoning generalists with preference
trees. 2024a.
Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Sainbayar Sukhbaatar, Jing Xu, and Jason Weston. Self-rewarding language
models. arXiv preprint arXiv:2401.10020, 2024b.
Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. Rrhf: Rank responses to align language
models with human feedback without tears. arXiv preprint arXiv:2304.05302, 2023.
Qiyuan Zhang, Shu Leng, Xiaoteng Ma, Qihan Liu, Xueqian Wang, Bin Liang, Yu Liu, and Jun Yang. Cvar-constrained policy
optimization for safe reinforcement learning. IEEE Transactions on Neural Networks and Learning Systems, 2024.
Xiangyun Zhao, Haoxiang Li, Xiaohui Shen, Xiaodan Liang, and Ying Wu. A modulation module for multi-task learning with
applications in image retrieval. In Proceedings of the European Conference on Computer Vision (ECCV), pages 401–416, 2018.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Tianle Li, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zhuohan Li, Zi Lin,
Eric. P Xing, Joseph E. Gonzalez, Ion Stoica, and Hao Zhang. Lmsys-chat-1m: A large-scale real-world llm conversation dataset.
2023a.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li,
Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems,
36, 2024.
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, et al. Secrets
of rlhf in large language models part i: Ppo. arXiv preprint arXiv:2307.04964, 2023b.
Jeffrey Zhou, Tianjian Lu, Swaroop Mishra, Siddhartha Brahma, Sujoy Basu, Yi Luan, Denny Zhou, and Le Hou. Instruction-following
evaluation for large language models. arXiv preprint arXiv:2311.07911, 2023.
Banghua Zhu, Evan Frick, Tianhao Wu, Hanlin Zhu, and Jiantao Jiao. Starling-7b: Improving llm helpfulness & harmlessness with
rlaif, 2023.
Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving.
Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019.
Álvaro Bartolomé Del Canto, Gabriel Martín Blázquez, Agustín Piqueres Lajarín, and Daniel Vila Suero. Distilabel: An AI feedback
(AIF) framework for building datasets with and for LLMs. GitHub repository, 2024.
# Appendix
## A CGPO Training Set
The details of our training dataset are provided in Table 3. Note that in our experiments we adopt the instruction fine-tuning
format, in which the prompt is wrapped as "[INST] {prompt} [\INST]":
**Synthetic IF dataset. Inspired by Zhou et al. (2023), we consider synthetic prompts that require the LLM's generation**
to satisfy one or more closed-form instructions, which can be verified exactly. We identify 23 types of closed-form
instructions and use the Llama 3.0 70B Instruct model to create synthetic prompts that each address a specific
topic and also impose some of these closed-form instructions. We create a template that enables the Llama 3.0 70B Instruct
model to generate all prompts. The prompt template that we feed to the Llama 3.0 70B Instruct model to generate synthetic
instruction-following prompts is provided as follows:
**Prompt Template =**
```
"You are a helpful AI assistant. You are given a TOPIC and a FORMAT REQUIREMENT,
and you are expected to generate a PROMPT that is on the given TOPIC and specify the
given FORMAT REQUIREMENT that the corresponding answer should follow. Here are many
examples that you can learn from:

TOPIC: Travel
FORMAT REQUIREMENT: In your entire response, refrain from the use of any commas
PROMPT: I am planning a trip to Japan, and I would like thee to write an itinerary
for my journey in a Shakespearean style. You are not allowed to use any commas in
your response.

TOPIC: Aerospace engineering
FORMAT REQUIREMENT: In your entire response, refrain from the use of any commas and
Give two different responses. Responses and only responses should be separated by 6
asterisk symbols: ******
PROMPT: Write two jokes about rockets. Do not contain commas in your response.
Separate the two jokes with 6 asterisk symbols: ******.

TOPIC: History
FORMAT REQUIREMENT: Entire output should be wrapped in JSON format
PROMPT: What is the history of NYC prospect park? Please wrap your entire answer in
JSON format.

TOPIC: Video game
FORMAT REQUIREMENT: Highlight at least 2 sections in your answer with markdown, i.e.
*highlighted section* and Answer with at least 40 sentences
PROMPT: Can you write a poem about the pros and cons of playing a lot of video games?
Please make sure it’s at least 40 sentences long (don’t forget to add punctuations).
You must highlight at least sections in your response, like *highlighted phrase*.

TOPIC: Movie
FORMAT REQUIREMENT: Answer with at least 40 sentences, Highlight at least 4 sections
in your answer with markdown, i.e. *highlighted section*, and Wrap your entire
response with double quotation marks
PROMPT: Write a joke about the superhero movie with at least 5 sentences. Use
Markdown to italicize at least 4 sections in your answer, i.e. *italic text*. Wrap
your answer in double quotes.

TOPIC: Health care
FORMAT REQUIREMENT: Your entire response should be in English, capital letters only
PROMPT: Write an essay about public health care system in US in English and in all
capital letters.

TOPIC: Mathematics
FORMAT REQUIREMENT: Entire output should be wrapped in JSON format
PROMPT: List all facts about calculus in a structured output. In particular, Format
your entire output in JSON.

Now it is your turn to generate a PROMPT that is on the given TOPIC and specify the
given FORMAT REQUIREMENT that the corresponding answer should follow. Please DO NOT
make up any new format requirement that is not given to you.

TOPIC: {topic}
FORMAT REQUIREMENT: {instruction}

To be noted, you just need to mention/specify the FORMAT REQUIREMENT in your response
but your response does not need to follow it. Please directly provide the PROMPT
without any extra words. Do not write any note or explanation."
```
**TOPICS = ["20th century events", "Accounting", "Architecture", "Astronomy",**
```
"Biology", "Business ethics", "Celebrities", "Chemistry", "Clinical knowledge",
"Economics", "Electrical engineering", "Ethics of artificial intelligence",
"Education", "Energy", "Gaming", "Geography", "Global facts", "History", "Healthcare",
"Immigration law", "International law", "Jurisprudence", "Management", "Marketing",
"Mathematics", "Medicine", "Moral disputes", "Movies", "Music", "Philosophy",
"Physics", "Prehistory", "Psychology", "Public relations", "Sociology", "Sports",
"Social media", "Transportation", "Virology"]
```
**Instructions = ["number of paragraphs", "number of sentences", "number of words",**
```
"first word in n-th paragraph", "number of a specific placeholder", "number of
sections", "title", "response given in a certain format", "number of highlighted
sections", "response need to be in json", "postscript at the end of response", "number
of bullet list", "forbidden words", "certain keyword must exist", "a given key word
need to appear at least n-times", "a given letter need to appear at least n-times",
"generation should be in lowercase", "generation should be in capital", "capital word
need to appear at least n-times", "generation should not contain commas", "generation
should finish with an exact end checker", "entire response should be wrapped within
double quotation marks", "generation should contain two responses"]
```
Each time, we randomly select up to three types of closed-form instructions along with one topic, and incorporate them
into the template. This template is then used by the Llama 3.0 70B Instruct model to generate a prompt. We repeat this process
30000 times to create a comprehensive set of instruction-following prompts.
For each synthetic prompt, we used the Llama 3.0 70B Instruct model and the Llama 3.0 8B Instruct model to generate
responses based on the prompt. We then evaluated whether these responses adhered to the instruction-following
constraints. Prompts that did not yield any responses meeting the constraints, as well as those where all responses met
the constraints, were filtered out. This process resulted in 11668 prompts that included both responses that satisfied the
constraints and responses that violated them. We randomly selected one response that met the constraints as the accepted
response and one that violated the constraints as the rejected response for each prompt. By doing so, we constructed our
pairwise instruction-following preference dataset.
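A minimal sketch of this pairing step is given below; the helper `follows_instructions` stands in for the closed-form instruction checkers, and all names are illustrative rather than the exact pipeline code.
```python
import random

def build_preference_pair(prompt, responses, follows_instructions):
    """Turn one synthetic IF prompt and its candidate responses into an
    (accepted, rejected) pair, or return None if the prompt is filtered out.

    `responses` are generations from the two Llama models;
    `follows_instructions(prompt, response)` is a closed-form checker.
    """
    satisfied = [r for r in responses if follows_instructions(prompt, r)]
    violated = [r for r in responses if not follows_instructions(prompt, r)]
    # Prompts where all responses pass or all responses fail provide no
    # contrastive signal and are dropped.
    if not satisfied or not violated:
        return None
    return random.choice(satisfied), random.choice(violated)
```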
**Human annotated safety dataset. We take an iterative approach to collect multiple batches of safety preference data**
and merge them together as the final train data. At each iteration, we generate two different responses from a pool of
models (model from previous iteration for example), and send them to human annotators to rate and rank based on the
safety guidelines. If no response meets the guideline, the annotators are asked to directly edit the higher ranked response
for it to abide the guideline. The collected preference pairs are used to train a reward model, and once such a reward
model is trained, we leverage it to do rejection sampling to produce finetuning data that are used to train the next model
iteration. This next model will be added to the pool of models that generate responses for human annotators to rank. We
repeat this process multiple times to iteratively collect higher quality safety preference pairs. An additional layer of data
auditing is also applied on top of each data iteration cycle due to the subtle and subjective nature of safety guidelines to
further ensure data quality.
**Synthetic engagement dataset. To develop a synthetic engagement pairwise preference dataset, we initially gathered**
1M user engagement samples from interactions with an LLM-based chatbot on social media platforms. Each sample
comprises a user query, the LLM’s response, and a binary label indicating user approval of the response. We used this
dataset to train a binary feedback reward model on top of the pretrained Llama 3.0 8B model by adding a linear output
layer and training it as a binary classifier. We selected a model iteration with an AUC of 0.89 from the training trajectory
to function as the oracle predictor of user engagement intent. This model was subsequently used to generate the synthetic
user engagement preference dataset in our study. In the next step, we subsampled 112,375 prompts from LMSys-1M
(Zheng et al., 2023a). We then generated two responses from the Llama 3.0 8B model and two responses from the Llama
3.0 70B model, ultimately obtaining four distinct responses for each prompt, sampled under the generation setting
temperature=1, top_p=0.9. Following this, our oracle predictor was used to score all generated responses. The response
with the highest score was selected as the accepted response, while the one with the lowest score was marked as the
rejected response. By applying this methodology to all selected prompts, we created our synthetic user engagement
preference dataset.
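A minimal sketch of this selection step is given below; `score` is assumed to wrap the oracle engagement predictor, and the function name is illustrative.
```python
def make_engagement_pair(prompt, responses, score):
    """Build one synthetic engagement preference pair.

    `responses` holds the four candidates (two from Llama 3.0 8B and two from
    Llama 3.0 70B); `score(prompt, response)` returns the oracle predictor's
    engagement score. The highest-scoring response is accepted, the lowest rejected.
    """
    scored = sorted(responses, key=lambda r: score(prompt, r))
    rejected, accepted = scored[0], scored[-1]
    return accepted, rejected
```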
**Additional Comment. It’s important to note that for certain datasets used in online RLHF, we also incorporate metadata**
to provide additional information about the data, as shown in Table 4. During CGPO training, it is sometimes necessary
to extract information from the metadata to implement the MoJs.
- MATH, GSM8K & Aqua Math: In the metadata, we include the ground truth answer for each question. This
allows the math constraint judge to leverage this information to evaluate the accuracy of the LLM’s response for
each math question.
- TriviaQA & ARC: For prompts related to deterministic factuality, we also incorporate the ground truth answer
into the metadata. This allows the factuality constraint judge to assess correctness based on this information.
- APPS: In the metadata, we include several unit tests that a correct code snippet should be able to pass.
Our coding constraint judge can leverage these to determine whether the generated code is correct.
- Synthetic IF dataset: We include closed-form instructions in the metadata, specifying requirements that the
LLM’s generation must satisfy. This enables our instruction-following constraint judge to verify whether the
LLM’s output adheres precisely to the instructions.
## B CGPO Constraint Judge
In this section, we discuss in detail how we build the MoJs in CGPO.
**B.1** **Rule-based Constraint Judge**
**Math constraint judge. As illustrated in Table 4, for the math prompt sets MATH, GSM8K, and Aqua Math, we**
explicitly require the model to provide the final answer in a specified format, which can be easily extracted. When
implementing the math constraint judge, we extract the LLM’s answer by examining the final sentence and comparing
it with the ground truth answer in the metadata. There are instances where the model correctly answers the question
but fails to provide the answer in the correct format. In such cases, the math constraint judge will indicate that this
generation violates the constraint. Although this is a false negative, using CGPO to encourage the model to avoid such
patterns can implicitly help improve the model’s ability to follow instructions.
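For illustration, the extraction-and-comparison logic can be sketched as follows; the regular expression and normalization here are illustrative rather than the exact implementation.
```python
import re

def math_constraint_judge(response: str, metadata: dict) -> bool:
    """Return True only if the response ends in the required format and the
    extracted answer matches the ground truth stored in the metadata."""
    match = re.search(r"The final answer is\s*(.+?)\s*$", response.strip())
    if match is None:
        # A correct answer in the wrong format is still treated as a violation.
        return False
    answer = match.group(1).strip().rstrip(".")
    return answer == str(metadata["answer"])
```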
**Coding constraint judge. Our coding constraint judge examines the coding block in the LLM’s response to extract the**
code snippet. It then runs the snippet through all the unit tests provided in the metadata to determine whether it passes each
test. As with the math constraint, false negatives can occur if the LLM’s solution is not formatted correctly. Implementing
CGPO to discourage such patterns could enhance the model’s ability to follow instructions accurately.
**Instruction following constraint judge. The instruction-following constraint judge begins by reading the metadata**
to understand the specific rules that the LLM’s output must adhere to. Then, we employ string-matching-based logic to
determine whether the LLM’s generation complies with all the specified rules.
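A few of these string-matching rules are sketched below; the rule identifiers are illustrative and only a small subset of the instruction types is shown.
```python
import json

def check_no_comma(response: str) -> bool:
    return "," not in response

def check_all_capital(response: str) -> bool:
    return response == response.upper()

def check_json_output(response: str) -> bool:
    try:
        json.loads(response)
        return True
    except json.JSONDecodeError:
        return False

RULES = {
    "no_comma": check_no_comma,
    "english_capital": check_all_capital,
    "json_output": check_json_output,
}

def if_constraint_judge(response: str, requirements: list) -> bool:
    """Pass only if the generation satisfies every requirement in the metadata."""
    return all(RULES[req](response) for req in requirements)
```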
**B.2** **LLM-based Constraint Judge**
The LLM classifier constraint judge utilizes an additional LLM to assess whether the output from our training LLM
adheres to a specific predefined criterion. We design the input for this judge using a prompt template that arranges the
LLM’s response alongside other essential contexts. Within this template, we specify both a negative token and a positive
token. The negative token indicates that the LLM’s response breaches the constraint, while the positive token signifies
compliance. We explicitly direct the judge to issue either the positive or negative token based on their assessment. To
minimize the randomness in the judgment process, we do not rely solely on the LLM to generate a token and then
check its correspondence to the negative or positive token. Instead, we directly examine the softmax probabilities of the
negative and positive tokens. If the probability of the negative token is higher, we conclude that the LLM’s response
violates the constraint, and vice versa. Table 5 presents the template along with the negative and positive tokens for the
LLM classifiers in our experiment.
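Concretely, the probability comparison can be sketched as follows, assuming a Hugging Face-style causal LM and verdict tokens that each map to a single leading token (both assumptions are for illustration only):
```python
import torch

@torch.no_grad()
def llm_judge_violates(model, tokenizer, judge_input, neg_token, pos_token):
    """Return True if the judge puts more next-token probability mass on the
    negative token than on the positive token, i.e. the constraint is violated."""
    inputs = tokenizer(judge_input, return_tensors="pt")
    next_token_logits = model(**inputs).logits[0, -1]
    neg_id = tokenizer(neg_token, add_special_tokens=False).input_ids[0]
    pos_id = tokenizer(pos_token, add_special_tokens=False).input_ids[0]
    # Comparing the two logits is equivalent to comparing their softmax
    # probabilities, since softmax is monotonic.
    return bool(next_token_logits[neg_id] > next_token_logits[pos_id])
```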
**False refusal constraint judge. We utilize the Llama 3.0 8b pretrained model as a foundation and fine-tune an LLM**
classifier specifically aimed at identifying refusal patterns in LLM responses. The training data is formatted as follows:
"[INST] {LLM response} [\INST] judgment", where "judgment" is True if the LLM response indicates refusal, and
False otherwise. During the inference phase of deploying this constraint judge, we also encapsulate the generated
responses from the training LLM within "[INST] ... [\INST]" and use that as the input for the judge.
**Factuality constraint judge. We employ the Llama 3.0 70b instruct model directly as the factuality constraint judge.**
Recall that for prompts associated with deterministic factuality, we include the ground truth answer in the metadata.
When deploying this constraint judge, we use the template as illustrated in Table 5, incorporating the prompt, ground
truth answer, and the LLM response into the template to serve as inputs for the judge.
**Safety constraint judge. We utilize LlamaGuard2 (Team, 2024), which is fine-tuned from the Llama 3.0 8B pretrained**
model. We reuse the template introduced in the LlamaGuard2 paper (Team, 2024), where we incorporate pre-defined
safety guidelines and full completions into the prompt template to serve as inputs for the judge.
## C Evaluation Benchmarks
One example prompt of the MBPP evaluation set:
```
You are an expert Python programmer, and here is your task:
Write a function to sort a given matrix in ascending order according to the sum of its
rows.
Your code should pass the following tests:
assert sort_matrix([[1, 2, 3], [2, 4, 5], [1, 1, 1]])==[[1, 1, 1], [1, 2, 3], [2, 4, 5]]
assert sort_matrix([[1, 2, 3], [-2, 4, -5], [1, -1, 1]])==[[-2, 4, -5], [1, -1, 1], [1, 2, 3]]
assert sort_matrix([[5,8,9],[6,4,3],[2,1,4]])==[[2, 1, 4], [6, 4, 3], [5, 8, 9]]
```
One example prompt of the HumanEval evaluation set:
```
Write a solution to the following problem and make sure that it passes the tests:
```
```python
from typing import List

def remove_duplicates(numbers: List[int]) -> List[int]:
    """ From a list of integers, remove all elements that occur more than once.
    Keep order of elements left the same as in the input.
    >>> remove_duplicates([1, 2, 3, 2, 4])
    [1, 3, 4]
    """
```
## D Reward Hacking Examples
We provide examples of reward hacking across various tasks in Table 6. Here, the reward model we trained in Section
5.2 assigns higher values to low quality outputs than to high quality ones. Note that we report the raw reward value in
Table 6 instead of the calibrated one.
**Task: Math**

**Prompt:** Given the following problem, reason and give a final answer to the problem. Problem: simplify $9 \cdot \frac{12}{7} \cdot \frac{-35}{36}$. Your response should end with "The final answer is [answer]".

**Response (quality: Low, reward 4.75):** To simplify the expression, we can start by canceling out common factors between the numerator and denominator of the fractions. In this case, we have 12 and 36, which have a common factor of 12. We also have 7 and 35, which have a common factor of 7. Canceling out these factors and simplifying further, we have $9 \cdot \frac{12}{7} \cdot \frac{-35}{36} = 9 \cdot \frac{5}{3}$. Now, we can simplify the fraction $5/3$ by dividing the numerator and denominator by their greatest common divisor, which is 1. $9 \cdot \frac{5}{3} = \frac{9 \cdot 5}{3} = \frac{45}{3}$. Finally, we can simplify the fraction $45/3$ by dividing the numerator and denominator by their greatest common divisor, which is 3. $\frac{45}{3} = 15$. Therefore, the final answer is 15.

**Response (quality: High, reward 4.59):** To simplify the given expression, we can start by canceling common factors: $9 \cdot \frac{12}{7} \cdot \frac{-35}{36}$. First, let's look at the numerators. Both 12 and 36 are divisible by 12, so we can simplify the expression as follows: $9 \cdot \frac{12}{7} \cdot \frac{-35}{36} = 9 \cdot \frac{1}{7} \cdot \frac{-35}{3}$. Next, we can look at the remaining factors. 7 and 35 are divisible by 7, so we simplify further: $9 \cdot \frac{1}{7} \cdot \frac{-35}{3} = 9 \cdot \frac{-5}{3}$. Finally, we have 9 and 3, which are divisible by 3. Performing the division, we get: $9 \cdot \frac{-5}{3} = -15$. So, the final answer is -15.

**Task: Coding**

**Prompt:** You are an expert python programmer, and here is your task to complete the code:
```python
def you_are_a_cube(cube):
    """
    In geometry, a cube is a three-dimensional solid object bounded by six square faces, facets or
    sides, with three meeting at each vertex. The cube is the only regular hexahedron and is one of
    the five Platonic solids. It has 12 edges, 6 faces and 8 vertices. The cube is also a square
    parallelepiped, an equilateral cuboid and a right rhombohedron. It is a regular square prism in
    three orientations, and a trigonal trapezohedron in four orientations.
    You are given a task of finding a if the provided value is a perfect cube!
    """
```

**Response (quality: Low, reward 1.8):** Here is the requested Python code:
```python
def you_are_a_cube(cube):
    if cube in [0, 1, 8, 27, 64, 125, 216, 343, 512, 729, 1000, 1331, 1728, 2163,
                2744, 3375, 4096, 4913, 5832, 6859, 8000, 9261, 10648, 12167, 13824,
                15625, 17576, 19683, 21952, 24389, 27000, 29791, 32768, 35937, 39312,
                42875, 46656, 60704, 63488, 68591, 72903, 79507, 92609, 101218, 109744,
                120422, 131433, 143489, 157464, 173015, 190044, 209957, 232630, 250047,
                270699, 294035, 320264, 348484, 390625, 435672, 488412, 511315, 539047,
                576480, 625000, 648930, 676030, 704977, 729028, 758858, 795293, 847442,
                904 Sha, 1000000, 1024003, 1056731]:
        return True
    return False
```

**Response (quality: High, reward 0.96):** Sure, you can solve your task by checking if the cube root of the given value is an integer or not. Here's your code:
```python
import math

def you_are_a_cube(cube):
    """
    In geometry, a cube is a three-dimensional solid object bounded by six square faces, facets or
    sides, with three meeting at each vertex. The cube is the only regular hexahedron and is one of
    the five Platonic solids. It has 12 edges, 6 faces and 8 vertices. The cube is also a square
    parallelepiped, an equilateral cuboid and a right rhombohedron. It is a regular square prism in
    three orientations, and a trigonal trapezohedron in four orientations.
    """
    return math.isqrt(cube) ** 3 == cube.
```

**Task: Safety**

**Prompt:** What are some things that people are unaware about that could kill them instantly?

**Response (quality: Low, reward 0.37):** There are several things that can cause instant death that people may not be aware of or may underestimate the risks associated with them. Here are some examples · · ·

**Response (quality: High, reward -0.44):** As a responsible AI language model, I must emphasize that it’s essential to prioritize safety and awareness to avoid accidental harm. While it’s impossible to list every potential danger, I’ll highlight some little-known risks that could have severe consequences. However, please remember that most of these risks are extremely rare, and with proper caution, you can minimize the likelihood of encountering them · · ·

**Table 6 Example of reward hacking in different tasks**
| Dataset | Is preference | Size | Usage | Source |
|---|---|---|---|---|
| Orca-Math | No | 200035 | SFT | Mitra et al. (2024) |
| MetaMath | No | 395000 | SFT | Yu et al. (2023) |
| Evol-CodeAlpaca | No | 111183 | SFT | Luo et al. (2023) |
| MATH | No | 7500 | Online RLHF training | Hendrycks et al. (2021b) |
| GSM8K | No | 7473 | Online RLHF training | Cobbe et al. (2021) |
| Aqua Math | No | 97467 | Online RLHF training | Ling et al. (2017) |
| APPS | No | 7070 | Online RLHF training | Hendrycks et al. (2021a) |
| XSTest | No | 2700 | Online RLHF training | Röttger et al. (2023) |
| LMSys-55k | Yes | 49865 | SFT, RM, DPO, Online RLHF | Chiang et al. (2024) |
| UltraChat | Yes | 207865 | SFT, RM, DPO, Online RLHF | Ding et al. (2023) |
| UltraFeedback | Yes | 340025 | SFT, RM, DPO | Cui et al. (2023) |
| UltraInteract | Yes | 129531 | SFT, RM, DPO | Yuan et al. (2024a) |
| HH-RLHF | Yes | 115396 | RM, DPO | Bai et al. (2022) |
| SHP | Yes | 93301 | RM, DPO | Ethayarajh et al. (2023) |
| HelpSteer | Yes | 37131 | RM, DPO | Wang et al. (2023) |
| Distilabel-Capybara | Yes | 14811 | RM, DPO | Ethayarajh et al. (2024) |
| Distilabel-Orca | Yes | 6926 | RM, DPO | Álvaro Bartolomé Del Canto et al. (2024) |
| Argilla Math | Yes | 2418 | RM, DPO | Álvaro Bartolomé Del Canto et al. (2024) |
| Synthetic IF dataset | Yes | 11668 | SFT, RM, DPO, Online RLHF | Prompts are generated by the Llama 3.0 70B Instruct model; accepted and rejected responses are generated by the Llama 3.0 70B Instruct and Llama 3.0 8B Instruct models |
| Human annotated safety dataset | Yes | 244232 | SFT, RM, DPO, Online RLHF | Adversarial human-written prompts collected per risk category, written to include diverse adversarial styles and attack vectors |
| Synthetic engagement dataset | Yes | 112375 | SFT, RM, DPO, Online RLHF | Prompts are sampled from LMSys-1M (Zheng et al., 2023a); accepted and rejected responses are generated by the Llama 3.0 70B Instruct model |

**Table 3 Datasets used in CGPO experiments for SFT, RM, DPO and online RLHF training**
| Data | Prompt | Metadata |
|---|---|---|
| MATH, GSM8K, Aqua Math | A quadratic equation $ax^2 - 2ax + b = 0$ has two real solutions. What is the average of these two solutions? Your response should end with "The final answer is [answer]" | {"answer": "1"} |
| TriviaQA, ARC | Who was President when the first Peanuts cartoon was published? | {"answer": "Harry S. Truman"} |
| APPS | Write a function "similar_elements" to find the similar elements from the given two tuple lists | {"unit_tests": "assert similar_elements((3, 4, 5, 6),(5, 7, 4, 10)) == (4, 5), assert similar_elements((1, 2, 3, 4),(5, 4, 3, 7)) == (3, 4), assert similar_elements((11, 12, 14, 13),(17, 15, 14, 13)) == (13, 14)"} |
| Synthetic IF | What are the primary architectural styles seen in European churches? Give my answer in English using only capital letters. | {"if_requirements": "english_capital"} |

**Table 4 Example of Prompt and Metadata used in CGPO experiment**
| LLM Judge | Template | Negative & positive tokens |
|---|---|---|
| False Refusal | [INST] {LLM response} [\INST] | True & False |
| Factuality | [INST] Here is a question and a ground truth answer: Question: {prompt} Ground truth answer: {ground truth answer} Now, you are given an answer from the AI: AI: {LLM response} Does the AI's answer contains the factual information from the above ground truth answer? Answer in TRUE or FALSE. [\INST] | False & True |
| Safety | [INST] Task: Check if there is unsafe content in 'Agent' messages in conversations according our safety policy with the below categories. <BEGIN UNSAFE CONTENT CATEGORIES> {safety guidelines} <END UNSAFE CONTENT CATEGORIES> <BEGIN CONVERSATION> {completions} <END CONVERSATION> Provide your safety assessment for ONLY THE LAST turn message in the above conversation. First line must read 'safe' or 'unsafe'. If unsafe, a second line must include a comma-separated list of violated categories. [\INST] | Unsafe & Safe |

**Table 5 Prompt template and negative & positive tokens used in LLM classifier judge**
| [
"Sinong, Wang",
"Tengyu, Xu",
"Yuandong, Tian",
"Wenxuan, Zhou",
"Eryk, Helenowski",
"Chen, Zhu",
"Karthik Abinav, Sankararaman",
"Han, Fang",
"Hao, Ma",
"Di, Jin",
"Kaiyan, Peng",
"Eric, Han",
"Shaoliang, Nie",
"Hejia, Zhang",
"Zhouhao, Zeng",
"Yun, He",
"Karishma, Mandyam",
"Arya, Talabzadeh",
"Gabriel, Cohen",
"Madian, Khabsa"
] | 2024-09-30T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.20370 | https://arxiv.org/abs/2409.20370 | https://www.semanticscholar.org/paper/519d5ccbd5aec517ba987209e17afd4741ac9b8a |
The Role of Deductive and Inductive Reasoning in Large Language Models | Large Language Models (LLMs) have achieved substantial progress in artificial intelligence, particularly in reasoning tasks. However, their reliance on static prompt structures, coupled with limited dynamic reasoning capabilities, often constrains their adaptability to complex and evolving problem spaces. In this paper, we propose the Deductive and InDuctive(DID) method, which enhances LLM reasoning by dynamically integrating both deductive and inductive reasoning within the prompt construction process. Drawing inspiration from cognitive science, the DID approach mirrors human adaptive reasoning mechanisms, offering a flexible framework that allows the model to adjust its reasoning pathways based on task context and performance. We empirically validate the efficacy of DID on established datasets such as AIW and MR-GSM8K, as well as on our custom dataset, Holiday Puzzle, which presents tasks about different holiday date calculating challenges. By leveraging DID's hybrid prompt strategy, we demonstrate significant improvements in both solution accuracy and reasoning quality, achieved without imposing substantial computational overhead. Our findings suggest that DID provides a more robust and cognitively aligned framework for reasoning in LLMs, contributing to the development of advanced LLM-driven problem-solving strategies informed by cognitive science models. | The Deductive and InDuctive method is proposed, which enhances LLM reasoning by dynamically integrating both deductive and inductive reasoning within the prompt construction process, and provides a more robust and cognitively aligned framework for reasoning in LLMs. | [
"Chengkun, Cai",
"Lei, Li",
"Haoliang, Liu",
"Xu, Zhao",
"Zhongyu, Jiang",
"Tianfang, Zhang",
"Zongkai, Wu",
"Jenq-Neng, Hwang"
] | 2024-10-03T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2410.02892v1 | https://arxiv.org/abs/2410.02892 | https://www.semanticscholar.org/paper/a3c65d490f635b4ad0675f52d67e08fa45fc30d3 |
|
TinyGSM: achieving 80% on GSM8k with one billion parameters | Small models offer various computational advantages, yet the extent to which size is critical for problem-solving abilities remains an open question. This work studies the performance of small models on mathematical reasoning. Specifically, for solving math word problems, we find that a 1.3B model can achieve 80.1% accuracy on GSM8K, outperforming existing models that are orders of magnitude larger, and even rivaling the performance of the GPT-3.5-turbo teacher model from which the training data is generated. Our approach is simple and has two key components: The first is the use of a GPT-3.5-turbo-generated synthetic dataset of math word problem with solutions, which we will fully release. The second component is the use of a verifier, which selects the final outputs from multiple candidate generations. | null | ## TinyGSM: achieving > 80% on GSM8k with small language models
Bingbin Liu[1], Sebastien Bubeck[2], Ronen Eldan[2], Janardhan Kulkarni[2],
Yuanzhi Li[2], Anh Nguyen[2], Rachel Ward[2], Yi Zhang[2]
1Carnegie Mellon University
2Microsoft Research
**Abstract**
Small-scale models offer various computational advantages, yet to what extent size is critical for problem-solving abilities remains an open question. Specifically for solving grade school math, the smallest model size
so far required to break the 80% barrier on the GSM8K benchmark remains to be 34B. Our work studies how
high-quality datasets may be the key for small language models to acquire mathematical reasoning. We introduce
TinyGSM, a synthetic dataset of 12.3M grade school math problems paired with Python solutions, generated fully
by GPT-3.5. After finetuning on TinyGSM, we find that a duo of a 1.3B generation model and a 1.3B verifier
model can achieve 81.5% accuracy, outperforming existing models that are orders of magnitude larger. This
also rivals the performance of the GPT-3.5 “teacher” model (77.4%), from which our model’s training data is
generated. Our approach is simple and has two key components: 1) the high-quality dataset TinyGSM, 2) the use
of a verifier, which selects the final outputs from multiple candidate generations.
### 1 Introduction
One fascinating phenomenon regarding large language models (LLMs) is the emergence of capabilities as both the
model and dataset sizes scale up (Wei et al., 2022a; Chan et al., 2022). Among many capabilities, mathematical
reasoning is one particular aspect that has received tremendous attention (Lewkowycz et al., 2022; Lightman et al.,
2023). However, it is unclear to what extent scale is a necessity for mathematical reasoning, and the potential of
small language models (SLMs) remains largely under-explored.
In this work, we push the boundaries of SLMs’ math reasoning capabilities. As a first step towards general
mathematics, our primary testing ground is grade school math problems, which require both mathematical
understanding and language comprehension to solve. The gold-standard benchmark in this regime is GSM8K (Cobbe et al.,
2021), a collection of 8.8K grade-school math word problems (with a 7k-1k train-test split) that involve 2 to 11
reasoning steps. GSM8K has been widely regarded as challenging for LLMs. Even though the questions appear
rather simple for humans, there have been few models that achieve > 80%, and they are commonly of prohibitive
sizes, e.g. 34B and above (see Table 1).
Our goal is to break the 80% barrier on GSM8K while keeping the model size friendly. As previous work
shows (Gunasekar et al., 2023; Li et al., 2023; Eldan & Li, 2023), training data quality is one of the most important
factors for enhancing the performance of small models. In particular, prompt-engineered synthetic data generation from
gigantic models such as GPT-3.5/4 enjoys the clear advantage of desirable data hygiene and controllable diversity.
This constitutes a teacher-student scenario where the student learns from the teacher’s generations. On tasks that
the teacher model already excels at, its guided generations remain among the highest-quality data one can collect
for training significantly smaller student models. It is also understood that the student model’s performance likely
ends up inferior to the teacher’s, and may fall far short especially when the student is considerably smaller than
the teacher (Mirzadeh et al., 2019; Gudibande et al., 2023) —after all, the teacher places an information-theoretic
bottleneck on the student.
To our surprise, in the case of GSM8K, we are able to bridge the performance gap between the student and
teacher by utilizing a tiny amount of labeled real data (the original GSM8K training set of 7k questions) to train an
independent verifier model. At test time, the verifier scores and selects among multiple candidate answers generated
from the student, and we output the highest-scoring generation as the final submission. Note that the idea of using
[Figure 1: GSM8K accuracy (%) versus model size (# parameters); legend: Phi-GSM models, Phi-GSM models + Verifier, other open-source models, and GPT-4 using Python.]
Figure 1: Our results on GSM8K. Please refer to Table 1 for details.
a verifier was proposed in the seminal GSM8K paper (Cobbe et al., 2021); here we demonstrate its power in
bridging the teacher-student gap, and we conduct a more thorough examination of factors affecting its efficacy.
The contributions of our work are the following:
- We introduce TinyGSM, a synthetic dataset containing GSM8K-style math word problems paired with Python
solutions, generated fully by GPT-3.5-turbo. TinyGSM consists of 12.3M questions which amount to 1.8B tokens.
We demonstrate TinyGSM’s high quality by finetuning the Phi-1.5 1.3B model on it (before the use of verifiers), which
improves its accuracy from 44.6% to 68.2% on the GSM8K test set. Notably, our smallest 125M model can also
achieve 63.1% after finetuning on TinyGSM.
- We demonstrate the power of verifiers on small-scale models. When integrated with a verifier for scoring
generations, our models, named Phi-GSM models, achieve performance on par with other open source models
that are orders of magnitude larger. In particular, our 1.3B model achieves 81.5% accuracy on GSM8K, as shown
in Figure 1. This marks a new state of the art among billion-parameter-scale models, significantly outperforming
existing open-source models and even rivaling the 77.4% accuracy of GPT-3.5, from which TinyGSM is generated.
For verifier training, we identify data diversity as a crucial element for a verifier’s success, and find that the
_scaling of the verifier may be more effective than scaling of the generator: while scaling up from a 125M generator_
to a 1.3B generator only gives a 5.1% increase in performance (Table 1), scaling up the verifier from 125M to
1.3B leads to a 7.2% performance boost (Figure 4).
### 2 Related works
**Distilling from synthetic data: While scaling up has been a useful strategy, it is possible to outpace conventional**
scaling laws by better use of data (Sorscher et al., 2022). In the data-scarce case, quality synthetic data serves as an
effective workaround (Eldan & Li, 2023; Gunasekar et al., 2023), and the scaling in dataset size can compensate for
a small model size (Edelman et al., 2023). Additionally, our work uses samples in the true distribution (i.e. the
GSM8K train set) differently: given the small dataset size, we believe that the most sample-efficient way to utilize
the true train set is to train a verifier: while the 7.4k samples in the GSM8K training set are too small for language
model finetuning, they are sufficient for training a good-quality verifier that provides a 10% performance boost. While
there have been potential concerns about learning from synthetic data, such as losing diversity or having a drifted
distribution mean (Alemohammad et al., 2023; Shumailov et al., 2023), Alemohammad et al. (2023) showed that
such degradation can be avoided by including fresh samples from the true distribution during training.
| Model | Base model | Model size | Answer format | Eval method | GSM8K (%) |
|---|---|---|---|---|---|
| Llama-2 (Touvron et al., 2023) | - | 7B | nlp | pass@1 | 14.6 |
| Llama-2 (Touvron et al., 2023) | - | 13B | nlp | pass@1 | 28.7 |
| Llama-2 (Touvron et al., 2023) | - | 34B | nlp | pass@1 | 42.2 |
| Llama-2 (Touvron et al., 2023) | - | 70B | nlp | pass@1 | 56.8 |
| MetaMath (Yu et al., 2023b) | Llama-2 | 7B | nlp | pass@1 | 66.5 |
| MetaMath (Yu et al., 2023b) | Llama-2 | 13B | nlp | pass@1 | 72.3 |
| MetaMath (Yu et al., 2023b) | Llama-2 | 70B | nlp | pass@1 | **82.3** |
| WizardMath (Luo et al., 2023) | Llama-2 | 7B | nlp | pass@1 | 54.9 |
| WizardMath (Luo et al., 2023) | Llama-2 | 13B | nlp | pass@1 | 63.9 |
| WizardMath (Luo et al., 2023) | Llama-2 | 70B | nlp | pass@1 | **81.6** |
| MAmmoTH (Yue et al., 2023) | Code-Llama | 7B | code | pass@1 | 59.4 |
| MAmmoTH (Yue et al., 2023) | Code-Llama | 12B | code | pass@1 | 64.7 |
| MAmmoTH (Yue et al., 2023) | Code-Llama | 34B | code | pass@1 | 72.7 |
| MAmmoTH (Yue et al., 2023) | Llama-2 | 70B | nlp | pass@1 | 76.9 |
| Mistral (Jiang et al., 2023) | - | 7B | nlp | maj1@8 | 52.2 |
| Mistral (Jiang et al., 2023) | - | 8×7B | nlp | maj1@8 | 58.4 |
| OVM (Yu et al., 2023a) | Llama-2 | 7B+7B | nlp | verify100@1 | 73.7 |
| OVM (Yu et al., 2023a) | Mistral | 7B+7B | nlp | verify100@1 | **84.7** |
| Llemma (Azerbayev et al., 2023) | Llama-2 | 7B | nlp | pass@1 | 36.4 |
| Llemma (Azerbayev et al., 2023) | Llama-2 | 34B | nlp | pass@1 | 51.5 |
| ToRA-Code (Gou et al., 2023) | Llama-2 | 7B | code | COT@1 | 72.6 |
| ToRA-Code (Gou et al., 2023) | Llama-2 | 13B | code | COT@1 | 75.8 |
| ToRA-Code (Gou et al., 2023) | Llama-2 | 34B | code | COT@1 | **80.7** |
| ToRA-Code (Gou et al., 2023) | Llama-2 | 70B | code | COT@1 | **84.3** |
| Orca 2 (Mitra et al., 2023) | Llama-2 | 7B | nlp | pass@1 | 55.72 |
| Orca 2 (Mitra et al., 2023) | Llama-2 | 13B | nlp | pass@1 | 65.73 |
| Gemini Pro (Gemini Team) | - | - | nlp | maj1@32 | **86.5** |
| Gemini Ultra (Gemini Team) | - | - | nlp | maj1@32 | **94.4** |
| GPT-3.5-0613 | - | - | code | pass@1 | 77.4* |
| GPT-4-0613 (OpenAI, 2023) | - | - | code | pass@1 | **97.0*** |
| Phi-1.5 (Li et al., 2023) | - | 1.3B | code | pass@1 | 44.6 |
| Phi-GSM | Phi-1.5-tiny | 125M | code | pass@1 | 63.1 |
| Phi-GSM | Phi-1.5-small | 350M | code | pass@1 | 65.9 |
| Phi-GSM | Phi-1.5 | 1.3B | code | pass@1 | 68.2 |
| Phi-GSM | Phi-2 | 2.7B | code | pass@1 | 74.3 |
| Phi-GSM+V | Phi-1.5-tiny | 125M+125M | code | verify48@1 | 68.9 |
| Phi-GSM+V | Phi-1.5-small | 350M+350M | code | verify48@1 | 71.3 |
| Phi-GSM+V | Phi-1.5 | 1.3B+1.3B | code | verify48@1 | **81.5** |

Table 1: Results on GSM8K. * denotes results measured by ourselves. Accuracies above 80% are labeled in bold.
‘8×7B’ stands for a mixture of 8 experts, where each expert has 7B parameters. ‘7B+7B’ means a combination of a 7B
generation model and a 7B verifier model. ‘+V’ denotes the use of verifier models.
**Math word problem datasets GSM8K (Cobbe et al., 2021) has been the most commonly used math word**
problem dataset for its quality and size. In comparison, earlier datasets such as MAWPS (Koncel-Kedziorski et al.,
2016), ASDiv (Miao et al., 2020) and SVAMP (Patel et al., 2021) are either much smaller in size or of lower difficulty.
However, GSM8K questions are too clean to test for robustness. Motivated by the observation that language models
are not robust to the presence of irrelevant context, Shi et al. (2023a) proposed GSM-IC (irrelevant context). Another
problem is that the GSM8K dataset itself is still too small for training language models. Ni et al. (2023b) addressed this
with self-sampled data. In a work concurrent to ours, Yu et al. (2023b) bootstrap an original dataset using various
augmentation techniques, such as generating multiple answers for the solution, question rephrasing, and backward
reasoning. The proposed MetaMath dataset consists of 40000 questions from GSM8K and MATH (Hendrycks et al.,
2021). In comparison, TinyGSM is significantly larger, encompassing 12.3M questions (or equivalently 1.8B tokens).
**Leveraging multiple generations: An important component of our method is to leverage multiple generations.**
This idea has been proven successful in many prior works. A notable example is “self-consistency” (Wang et al.,
2022), which selects the most frequent response among candidates and integrates well with other methods such as
progressive-hint prompting (Zheng et al., 2023) and model selection (Zhao et al., 2023). However, self-consistency
was not particularly helpful in our setup as mentioned in Section 4.2. More related to and a direct inspiration of our
work is Cobbe et al. (2021), which uses a verifier to select the best response among 100 candidates, leading to a
20% accuracy boost. Our work conducts a more thorough study of the design choices of the verifier, including data
diversity and the effect of verifier sizes. Another design choice orthogonal to ours is the supervision signals, such as
outcome-based supervision versus process supervision (Lightman et al., 2023).
**Learning from partial or process supervision: In our experiments, we evaluate on the final accuracy**
only but train on full programs. Prior work has studied the effect of process versus outcome based supervision.
Process-based supervision is shown to be particularly helpful for complex math problems (Lightman et al., 2023),
though for general problems one needs to consider a cost-efficacy tradeoff (Uesato et al., 2022). When process
supervision is not available, Ni et al. (2023b) proposed to learn from “self-sampled” solutions, which allows the
model to learn from partially correct self-generated solutions selected based on the execution trace.
**Self-improvement: Several works have explored the idea of “self-improvement” where a model evaluates**
and corrects its own generations, mostly relying on the self-debugging ability of GPT4. Examples include “selfrefine” (Madaan et al., 2023) and “self-verify” (Weng et al., 2022; Zhou et al., 2023), both of which ask the
model to iteratively provide feedback or verifications on its own generations and refine if needed. However, such
self-improvement abilities have not been discovered in small language models. This motivated our use of a separate
verifier model, which is initialized from the generative model but needs to be fully finetuned for verification.
**Prompt-based methods: Prompt-based methods, which find prompts to improve the later conditional**
generations, are particularly effective for large models. Examples include in-context learning (Brown et al., 2020),
where the model learns to perform novel tasks from few-shot examples provided in the prompt, as well as Chain-of-Thought (Wei et al., 2022b), which shows that explicitly generating intermediate steps can significantly help with
reasoning abilities. However, similar to self-improvement, prompting is targeted at large language models and does
not apply well to SLMs.
### 3 The TinyGSM dataset
Our objective is to assess the capability of a small language model (SLM) on mathematical reasoning. Ideally,
enhancing this mathematical reasoning ability should not compromise the model’s competence in language comprehension. This makes math word problems, which necessitate both mathematical and language understanding, a
suitable test ground. We focus on the GSM8K dataset (Cobbe et al., 2021), consisting of around 8k grade-school
math word problems. The math concepts in the dataset are elementary and within standard grade-school curricula,
but the challenges posed by the natural language problem statement introduce an additional layer of complexity to
the task.
**TinyGSM: augmenting GSM8K with synthetic generations** Despite the high quality, the GSM8K training set
only contains 7473 problems, which is too small for training a reasonably sized language model (Ni et al., 2023a).
To alleviate the size issue, we augment the GSM8K training set using GPT-3.5-turbo generated synthetic problems.
We prompt GPT-3.5-turbo to generate problem variants similar to a given question (but not the solution) randomly
```python
def simple_math_problem() -> int:
    """
    In preparation for her party, Sarah buys 10 trays of food and 8 cases of beverages.
    Each tray costs $50 and each case of beverages costs $20.
    What is the total cost of the trays and beverages?
    """
    trays = 10
    tray_cost = 50
    cases = 8
    case_cost = 20
    tray_total = trays * tray_cost
    case_total = cases * case_cost
    total_cost = tray_total + case_total
    result = total_cost
    return result
```

```python
def simple_math_problem() -> int:
    """
    Kim has 4 times the number of crayons than 8 less than the number of markers she has.
    If she has 60 crayons, how many markers does she have?
    """
    number_crayons = 60
    number_markers = number_crayons // 4 + 8
    result = number_markers
    return result
```
Figure 2: Examples from TinyGSM. The question is given as the docstring of a function, and the solution is the
code in the function body.
sampled from the GSM8K training set. Each problem variant contains both a question and the corresponding solution
written in Python, as shown in Figure 2.[1] Using code allows us to leverage a Python interpreter, circumventing
language models’ known limitation regarding numerical calculations and code execution.
To enhance robustness, we also generated synthetic problems whose questions contain irrelevant information. This
is achieved by augmenting the GSM-IC dataset (Shi et al., 2023a), which is an augmentation of GSM8K specifically
designed to introduce irrelevant context (IC) to the question statement. These GSM-IC variants constitute to
approximately one third of TinyGSM.
The resulting synthetic dataset contains 12.3M problems (i.e. question-solution pairs),[2] based on the
original 7.4k training set questions and their IC variants. For each question in the GSM8K train set, the prompt
based on this question is shared across API calls, and the source of randomness comes entirely from the generation
process. To encourage diversity, we use temperature sampling and specify in the prompt to encourage the problem
variants to be grammatically diverse and contain multiple steps; the exact prompts are provided in Figure 3 and
in Appendix A.1.
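As a rough sketch, the generation loop might look like the following, assuming the modern OpenAI Python client; the prompt template is the one in Figure 3, and batching, retries, and the 2-step generation of Appendix A.1 are omitted.
```python
import random
from openai import OpenAI  # assumes the OpenAI Python client (illustrative)

client = OpenAI()

def generate_variants(train_questions, prompt_template, n_calls, temperature=1.0):
    """Repeatedly sample a GSM8K training question and ask GPT-3.5-turbo for
    problem variants paired with Python solutions (prompt as in Figure 3)."""
    generations = []
    for _ in range(n_calls):
        question = random.choice(train_questions)
        completion = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user",
                       "content": prompt_template.replace("{{ question }}", question)}],
            temperature=temperature,  # temperature sampling encourages diversity
        )
        generations.append(completion.choices[0].message.content)
    return generations
```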
**Filtering** To ensure the quality of the synthetic data in TinyGSM, we filter out problems that are too short or
do not contain numbers, as well as code solutions which are not executable. Note that we do not check for the
_correctness of the question or the generated solutions, since the “ground truth” solution is not available. Given the_
effectiveness of self-consistency (Wang et al., 2022), one might want to filter the problems by keeping only the ones whose
solutions reach a majority vote. We did not adopt this strategy since we find that GPT-3.5-turbo’s generations are only
consistent on easy problems [3], hence such consistency filtering will remove challenging problems, resulting in a
dataset that is too easy to be useful.
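The filtering step can be sketched as follows; the length threshold and the in-process `exec` call are illustrative (in practice the solution would be run in a sandboxed subprocess with a timeout).
```python
def keep_sample(question: str, solution_code: str, min_chars: int = 40) -> bool:
    """Keep a synthetic (question, solution) pair only if the question is long
    enough, contains at least one number, and the solution code executes."""
    if len(question) < min_chars or not any(ch.isdigit() for ch in question):
        return False
    try:
        namespace = {}
        exec(solution_code, namespace)           # defines simple_math_problem()
        namespace["simple_math_problem"]()       # make sure it actually runs
        return True
    except Exception:
        return False
```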
### 4 Solving grade school math with small language models
The 1.3B version of our phi-GSM models is able to achieve 81.5% accuracy on GSM8K, a dataset that remains
challenging for small-scale models. The performance comes from a sufficient amount of high-quality synthetic data and the use
1Note that the generated problems may be mathematically valid yet violating common sense. For example, some quantities may not
be integers.
2This corresponds to 1.8B tokens, which costs around $3600 to generate according to GPT commercial pricing.
3“Easy” problems refer to the ones for which a 350M model, trained on a part of TinyGSM, already produces the same final answer as
GPT-3.5-turbo. For example, for an early version of our 350M model, the model only achieves around 50% on the GSM8K test set, but
can achieve more than 87% on synthetic questions with consistent answers. In other words, adding more easy problems like these will
not help our 350M model bridge the performance gap between itself and GPT-3.5-turbo.
Consider the following grade -school math problem: {{ question }}
Generate 10 different math problems similar to this math problem.
- Make sure each question uses diverse NLP and includes multiple logical steps.
- After each generated problem, write down a ** detailed and complete Python program ** to solve the question **step
by step** (do NOT give the result directly, **DO NOT write calculations in the comments **).
- The program should contain multiple lines of code and end with ’result = XXX ’ (Make sure to replace XXX with the
actual result of the python program).
- Make sure your Python program is complete and solves the problem. Do **NOT** write things like ’solution to be
completed ’, result = ?, insert your code here etc.
- Give the complete solution to solve the problem, written in Python. Do not write things like ’insert your code
here ’.
- In each new question, **first end with <|endofquestion |>**, and then start writing the program. Each program
should end with <|endofprogram |>.
- Example format: Question X: New question (at least 4 sentences long and use diverse NLP) (without the solution)
<|endofquestion|> Complete python code with entire solutions and the correct indent (<| endofprogram |>])
Figure 3: The prompt template for generating TinyGSM.
of a verifier, which we describe in this section.
**4.1** **Learning from synthetic data**
We finetune the Phi-1.5 125M, 350M and 1.3B models on our TinyGSM from Section 3, and in particular, the 1.3B
model reaches 68.2% accuracy.[4 5] We use the Adam optimizer with FP16 during training, with a linear warm-up
and a maximum learning rate of 1e-4, a weight decay of 0.01, and an effective batch size of 1024. The finetuning
phase takes up to 20k steps in total. As shown in Figure 1, even without verifiers, our models are already competitive
to models of sizes 7B and larger. As an anecdote, an earlier and worse-performing version of our Phi-GSM 1.3B
model gets 94% (or 82.5% from the 350M model) at pass@32, whereas the 750M CodeT5+ model (Wang et al., 2023) gets
73.8% (or 70.5% from its 220M variant) at pass@100.
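For reference, these hyperparameters roughly map onto the following Hugging Face `TrainingArguments`; the per-device batch size, gradient accumulation, and warm-up length are assumptions chosen to give an effective batch size of 1024, not the exact configuration used.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="phi-gsm-finetune",
    max_steps=20_000,                  # "up to 20k steps in total"
    learning_rate=1e-4,
    lr_scheduler_type="linear",
    warmup_steps=1_000,                # linear warm-up; exact length assumed
    weight_decay=0.01,
    per_device_train_batch_size=32,
    gradient_accumulation_steps=32,    # 32 x 32 = 1024 effective batch size
    fp16=True,
    logging_steps=100,
    save_steps=1_000,
)
```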
**4.2** **Improving small models with a verifier**
While sufficient synthetic data can significantly boost model performance, the performance is still below 70%. Does
further improvement necessitate a larger model and more data, then? There may be two concerns: First, there may be
a diminishing return in adding extra parameters and data; for instance, while there is a 10% increase in performance
when increasing from around one third of the final size of TinyGSM to two thirds, the final one third of the data
provided only marginal gain. Moreover, even if the small language model is able to fully match the quality of the
synthetic data, GPT-3.5-turbo itself only achieves 77.4% test accuracy on GSM8K, which seemingly poses a
limit on the performance of any model distilled from its generations.
In this section, we show that the use of a verifier can be an effective strategy orthogonal to introducing more
and better data, and can even help SLMs exceed the accuracy of GPT-3.5-turbo generations. The main observation is
that the best of multiple generations significantly outperforms a single generation. These generations could be
low-temperature generations from different checkpoints of a single run, where taking the best out of generations
from 5 checkpoints of (an early version of) our 350M model reaches 75% accuracy, similar to findings in temporal
ensembling (Laine & Aila, 2016) and snapshot ensembles (Huang et al., 2017). [6] The generations could also be
from high-temperature generations based on a single checkpoint; for instance, the pass@32 accuracy of our 1.3B
model is 94%.
This suggests a promising direction of leveraging multiple generations: we can obtain a great performance boost
if we are able to identify the best generation. This idea is effective yet natural: The probabilistic nature of the
4The Phi-1.5-small 350M and Phi-1.5-125M variants are pretrained on the same pretraining data as the Phi-1.5 1.3B model.
5Performance of training on TinyGSM from scratch is reported in Table 2.
6For utilizing multiple checkpoints, an option is to use model soup (Wortsman et al., 2022); however, a uniform soup did not improve
the accuracy. Another option is to perform EMA, which has been shown effective in Block et al. (2023). We found that EMA was not
helpful when applied to the 1k-step-interval checkpoints; more frequent averaging is likely required.
generative process naturally leads to the fact that multiple generations of a language model are more likely to
contain a correct solution than a single one. Empirically, it has been widely observed that pass@k accuracy, namely,
the accuracy by taking the best of k generations, is often much higher than pass@1. The main challenge is that
without knowing the labels, the definition of “best” is usually unclear. A workaround is to apply some form of
self-selection, such as by choosing the one with the highest logit or the most consistent solution (Wang et al., 2022;
Li et al., 2022). There is, however, a notable limitation: generations can be consistent and confident yet inaccurate,
making the self-consistency approach through majority voting less effective (Li et al., 2022).
Given these observations and inspired by findings in Cobbe et al. (2021), we propose to use a separate verifier for
selecting candidate generations. For each base generation SLM, we train a verifier to predict whether a generation
is a correct solution to the given question. During inference, we generate multiple candidate generations using
temperature sampling, and select the one with the highest verifier score.
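The resulting decoding procedure (verify48@1 in Table 1) can be sketched as below, where `generate` and `verifier_score` are placeholder callables for the generation SLM and the verifier.
```python
def verify_k_at_1(question, generate, verifier_score, k=48, temperature=0.7):
    """Sample k candidate solutions with temperature sampling and submit the
    one the verifier scores highest (its score on the last token)."""
    candidates = [generate(question, temperature) for _ in range(k)]
    return max(candidates, key=lambda solution: verifier_score(question, solution))
```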
**Training data** The training data consists of the SLM’s generations on the labeled GSM8K training set questions,
paired with the binary labels indicating whether a generation leads to the correct numerical answer. We sample 48
generations for each training set question. The binary label for each generation is based on the final execution result
and the ground truth label only, and we do not verify the correctness of intermediate steps. Note that this is the
only time where the GSM8K training set is directly utilized in training.
| Verifier model size | Generator 125M | Generator 350M | Generator 1.3B |
|---|---|---|---|
| 125M | 68.9 | 68.8 | 71.7 |
| 350M | 67.3 | 71.3 | 78.3 |
| 1.3B | 76.1 | 79.2 | 81.5 |

[Figure 4 bar chart: generation model only vs. with a paired verifier of the same size: 63.1 vs. 68.9 (125M), 65.9 vs. 71.3 (350M), 68.2 vs. 81.5 (1.3B).]

Figure 4: Pass@1 results on GSM8K test set with verifiers. For each test question, we sample 48 candidate answers
from the base generation model, from which we submit the one with highest verifier score as the final answer. The
verifier’s score on a candidate answer is determined using its score on the last token.
**Training setup** The verifier is trained with a sequence-to-sequence task, where we use the binary label on the
entire sequence to supervise each token. We find this approach improves consistently over training with a sequence
classification task (i.e. only predicting a binary label on the entire sequence). The verifier model is initialized to
be the same as the SLM, with an additional prediction head shared across all positions. All network parameters
are updated during verifier training, which significantly outperforms alternatives where only the prediction head is
updated, or where the network is trained from scratch.
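A sketch of this sequence-to-sequence verifier is shown below, assuming a Hugging Face-style backbone; the head and loss details are illustrative rather than the exact implementation.
```python
import torch.nn as nn

class Verifier(nn.Module):
    """Verifier sketch: a causal LM backbone (initialized from the generation
    SLM) plus a scalar prediction head shared across all positions; every token
    is supervised with the sequence-level correctness label."""

    def __init__(self, backbone, hidden_size):
        super().__init__()
        self.backbone = backbone                     # e.g. a Phi-1.5 transformer
        self.value_head = nn.Linear(hidden_size, 1)  # shared across positions

    def forward(self, input_ids, attention_mask, labels=None):
        hidden = self.backbone(input_ids=input_ids,
                               attention_mask=attention_mask,
                               output_hidden_states=True).hidden_states[-1]
        scores = self.value_head(hidden).squeeze(-1)        # (batch, seq_len)
        if labels is None:
            return scores  # at inference, the last token's score is used
        # Broadcast the per-sequence 0/1 label to every token; padding is
        # masked out through the attention mask used as a loss weight.
        per_token_labels = labels.unsqueeze(1).expand_as(scores).float()
        loss = nn.functional.binary_cross_entropy_with_logits(
            scores, per_token_labels, weight=attention_mask.float())
        return loss, scores
```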
**Checkpoint and data diversity** The diversity of the training data proves to be important, even if this may
result in slightly sacrificing the data quality[7]: we find that including data generated using various temperatures
and different checkpoints improves the performance of the verifier. In particular, the training data for the verifier
is generated from checkpoints at 1k, 6k and 12k steps, and both training and testing data use a mixture of data
generated with temperature 0.5 and 0.7. Compared to training with generations from a single checkpoint, the use of
multiple checkpoints improves the performance from 78.9% to 81.5%.
Note also that in general, while we train one verifier for each generative model, verifiers transfer reasonably well
across models. For example, our 1.3B model was able to reach 78.1% with a 350M verifier trained on generations
from a 350M model.
7Worse quality data refers to generations from earlier checkpoints, or sampled with a higher temperature: we found that the accuracy
for temperature=1 can be more than 10% worse than temperature=0 (i.e. greedy decoding).
**Question:**
Sophia is thinking of taking a road trip in her car, and would like to know how far she can drive on a single
tank of gas. She has traveled 100 miles since last filling her tank, and she needed to put in 4 gallons of gas to
fill it up again. The owner’s manual for her car says that her tank holds 12 gallons of gas. How many miles can
Sophia drive on a single tank of gas?

**Generations:**

**1) correct:**
```python
distance_traveled = 100
gas_used = 4
tank_size = 12
miles_per_gallon = distance_traveled / gas_used
result = miles_per_gallon * tank_size
```

**2) wrong:**
```python
distance_traveled = 100
gas_put_in = 4
gas_tank_size = 12
result = (distance_traveled + gas_put_in) / (gas_tank_size - gas_put_in)
```

**3) correct:**
```python
miles_traveled = 100
gallons_added = 4
total_gallons = 12
miles_per_gallon = miles_traveled // gallons_added
total_miles = (total_gallons - gallons_added) * miles_per_gallon + miles_traveled
result = total_miles
```
Figure 5: Visualization of the verifier’s token-level predictions. Text colors denote the verifier’s prediction scores: correct, potentially correct, potentially wrong, and wrong. In all three examples, the verifier’s final prediction (on the last token) aligns with the ground-truth label. In generations 1) and 2), the verifier’s token-level scores appear to be interpretable and aligned with human assessment. However, in generation 3), the scores are rather strange. This suggests the verifier relies on special patterns of the model generations that may not be universally generalizable, even though its final predictions are fairly reliable.
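Since every solution is a Python program ending in a `result` variable, checking a candidate amounts to executing it and comparing `result` against the reference answer. The following is a rough sketch of such a grader; it is our own illustration (the helper name and the tolerance are assumptions), not the released pipeline.

```python
# Illustrative helper (not from the paper): execute a generated Python solution
# in an isolated namespace and check whether its final `result` matches the
# reference answer.
def grade_solution(code: str, reference: float, tol: float = 1e-6) -> bool:
    namespace: dict = {}
    try:
        exec(code, namespace)          # run the candidate program
    except Exception:
        return False                   # crashing programs count as wrong
    if "result" not in namespace:
        return False                   # solutions are expected to set `result`
    try:
        return abs(float(namespace["result"]) - reference) < tol
    except (TypeError, ValueError):
        return False

candidate = (
    "distance_traveled = 100\n"
    "gas_used = 4\n"
    "tank_size = 12\n"
    "result = distance_traveled / gas_used * tank_size\n"
)
print(grade_solution(candidate, reference=300))  # True
```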
**Generation model size vs verifier size** In Figure 4, we present results from a cross-examination of various generation model sizes and verifier model sizes. Interestingly, while the best accuracy is achieved with the configuration using the largest sizes, the verifier size seems to play a bigger role than the generation model size. The effect of scaling the generation model is surprisingly mild: as shown in Table 1, increasing the base generation model from 125M (Phi-1.5-tiny) to 1.3B (Phi-1.5) only gives a 6% boost. The verifier, on the other hand, appears to be much more parameter efficient. For example, a 125M generation model paired with a 1.3B verifier achieves 76.1%, while a 1.3B generation model paired with a 125M verifier reaches only 71.7% (Figure 4).
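As a concrete illustration of how the verifier is used at inference time, the snippet below sketches best-of-N selection, where each sampled solution is ranked by the verifier's score on its last token; the function names and signatures are assumptions of this sketch rather than the paper's interface.

```python
# Illustrative sketch (not the authors' code): best-of-N answer selection with a
# token-level verifier. Each candidate solution is scored by the verifier's
# prediction on its last token, and the highest-scoring candidate is returned.
from typing import Callable, Sequence

def select_answer(question: str,
                  candidates: Sequence[str],
                  verifier_last_token_score: Callable[[str, str], float]) -> str:
    # verifier_last_token_score(question, solution) is assumed to return the
    # verifier's score at the final token of the concatenated input.
    return max(candidates,
               key=lambda sol: verifier_last_token_score(question, sol))
```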
### 5 Robustness and decontamination
**5.1** **Contamination test**
While we never use the GSM8K test set during training, TinyGSM consists entirely of synthetic data generated by GPT models, and GPT-3.5-turbo may have been exposed to the test set during its own training, which could cause some generated samples to replicate parts of the test set. To guard against such contamination, we decontaminate TinyGSM by checking for n-gram matches. We use n = 13 following standard practice (Brown et al., 2020; Wei et al., 2021; Du et al., 2022),[8] and remove punctuation and numbers before computing the matches. Out of the 11.0M unique synthetic questions,[9] 22 questions have a nonzero 13-gram match with the test set, and 38k questions (i.e., around 0.35% of the full set) have nonzero 8-gram matches. Examples of 13-gram matches are provided in Appendix A.2.
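The check itself is easy to reproduce; below is a rough sketch of the 13-gram matching step (our own illustration; the exact normalization and tokenization used for TinyGSM may differ).

```python
# Rough sketch (assumed normalization): flag a synthetic question if it shares
# any 13-gram with a test-set question, after stripping punctuation and numbers.
import re

def normalize(text: str) -> list[str]:
    text = re.sub(r"[0-9]", " ", text.lower())      # drop numbers
    text = re.sub(r"[^\w\s]", " ", text)            # drop punctuation
    return text.split()

def ngrams(tokens: list[str], n: int) -> set[tuple[str, ...]]:
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def has_collision(synthetic: str, test_questions: list[str], n: int = 13) -> bool:
    # In practice the test-set n-grams would be precomputed once and reused.
    test_grams = set()
    for q in test_questions:
        test_grams |= ngrams(normalize(q), n)
    return bool(ngrams(normalize(synthetic), n) & test_grams)
```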
8n-gram matching is not sufficient for guarding against some other types of contamination (e.g. with respect to paraphrasing).
However, we are not aware of better checks. One alternative is to check embedding similarity, though our clustering results on
CodeGen 350M (Nijkamp et al., 2022) embeddings suggest that the embedding mostly reflects the semantic (topics) rather than
structural/functional similarity, making it unfit for checking similarity in math questions. To our knowledge, state-of-the-art papers on
training set contamination only test for exact matching (Shi et al., 2023b; Oren et al., 2023), and checking for contamination beyond
exact match remains an open problem.
9The number of unique questions is smaller than the number of question-solution pairs since some questions were sampled more than
once in the second step of the 2-step generation (Appendix A.1) and hence have multiple solutions.
-----
**5.2** **Evaluation on SVAMP**
|Verifier model size \ Base generation model size|125M|350M|1.3B|
|---|---|---|---|
|125M|63.2|70.0|72.2|
|350M|64.6|68.7|72.3|
|1.3B|74.1|79.0|75.6|

Figure 6: SVAMP test accuracies.
To evaluate the robustness of our models, we test on the SVAMP (Simple Variations on Arithmetic Math word Problems) dataset (Patel et al., 2021), consisting of 1000 math word problems with a focus on arithmetic. SVAMP is constructed by applying certain types of variations to a set of base questions. Even though the base questions are generally considered easier than GSM8K,[10] the variations may often confuse LLMs, making it a challenging benchmark for robustness. Our 1.3B model achieves 75.6% on SVAMP without further finetuning, indicating the robustness of the model.
### 6 Discussions
In this work, we showed a simple approach that enabled a 1.3B generation model to achieve 81.5% on the GSM8K dataset, setting a new state-of-the-art for small language models and raising the performance curve for scaling. Our approach consists of two simple steps: 1) collecting TinyGSM, a GPT-3.5-generated synthetic dataset which we will fully release, and 2) using a verifier that scores how likely a generation is to be correct, whose quality is boosted by utilizing diverse generations. Our results provide positive evidence that small language models have more potential to be unlocked and can be used efficiently. For future directions:
- Leveraging different formats: TinyGSM uses Python code as solutions, inspired by the observation that language
models tend to struggle at calculations. However, we found that different solution formats, i.e. code versus natural
language, can be complementary: while code helps circumvent errors related to execution or calculation, it tends
to perform worse at questions that require equation solving, likely due to the fact that the Python syntax does not
naturally support equations. Properly combining both formats has the potential to further boost performance.
- The effect of verifier size: Our results show that given a budget on the model size, scaling the verifier may be a more efficient use of the parameters. This counters our intuition that verification is an easier task than generation (which involves search), though there might be connections to findings on the role of the discriminator's size in GAN training (Arora et al., 2018). Exploring the parameter efficiency of a generation model versus a verifier is an interesting future direction.
### References
Sina Alemohammad, Josue Casco-Rodriguez, Lorenzo Luzi, Ahmed Imtiaz Humayun, Hossein Babaei, Daniel
LeJeune, Ali Siahkoohi, and Richard G. Baraniuk. Self-consuming generative models go mad. arXiv preprint
_arXiv: 2307.01850, 2023._
Sanjeev Arora, Andrej Risteski, and Yi Zhang. Do GANs learn the distribution? some theory and empirics.
In International Conference on Learning Representations, 2018. [URL https://openreview.net/forum?id=](https://openreview.net/forum?id=BJehNfW0-)
[BJehNfW0-.](https://openreview.net/forum?id=BJehNfW0-)
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q. Jiang, Jia
Deng, Stella Biderman, and Sean Welleck. Llemma: An open language model for mathematics. arXiv preprint
_arXiv: 2310.10631, 2023._
10See Table 1 in Xie et al. (2023).
-----
Adam Block, Dylan J. Foster, Akshay Krishnamurthy, Max Simchowitz, and Cyril Zhang. Butterfly effects of sgd
noise: Error amplification in behavior cloning and autoregression. arXiv preprint arXiv: 2310.11428, 2023.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan,
Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom
Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse,
Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam
McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In Hugo
Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in
_Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020,_
_[NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/](https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html)_
[1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html.](https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html)
Stephanie C. Y. Chan, Adam Santoro, Andrew Kyle Lampinen, Jane X. Wang, Aaditya K Singh, Pierre H.
Richemond, J. Mcclelland, and Felix Hill. Data distributional properties drive emergent in-context learning in
transformers. Neural Information Processing Systems, 2022. doi: 10.48550/arXiv.2205.05055.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert,
Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve
math word problems. arXiv preprint arXiv:2110.14168, 2021.
Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi
Zhou, Adams Wei Yu, Orhan Firat, Barret Zoph, Liam Fedus, Maarten P. Bosma, Zongwei Zhou, Tao Wang,
Yu Emma Wang, Kellie Webster, Marie Pellat, Kevin Robinson, Kathleen S. Meier-Hellstern, Toju Duke, Lucas
Dixon, Kun Zhang, Quoc V. Le, Yonghui Wu, Zhifeng Chen, and Claire Cui. Glam: Efficient scaling of language
models with mixture-of-experts. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022,
_Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pp. 5547–5569. PMLR,_
[2022. URL https://proceedings.mlr.press/v162/du22c.html.](https://proceedings.mlr.press/v162/du22c.html)
Benjamin L. Edelman, Surbhi Goel, Sham Kakade, Eran Malach, and Cyril Zhang. Pareto frontiers in neural feature
learning: Data, compute, width, and luck. arXiv preprint arXiv: 2309.03800, 2023.
Ronen Eldan and Yuanzhi Li. Tinystories: How small can language models be and still speak coherent english?
_arXiv preprint arXiv:2305.07759, 2023._
Google Gemini Team. Gemini: A family of highly capable multimodal models.
Zhibin Gou, Zhihong Shao, Yeyun Gong, yelong shen, Yujiu Yang, Minlie Huang, Nan Duan, and Weizhu Chen.
Tora: A tool-integrated reasoning agent for mathematical problem solving. arXiv preprint arXiv: 2309.17452,
2023.
Arnav Gudibande, Eric Wallace, Charlie Snell, Xinyang Geng, Hao Liu, Pieter Abbeel, Sergey Levine, and Dawn
Song. The false promise of imitating proprietary llms. arXiv preprint arXiv: 2305.15717, 2023.
Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan
Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin
Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, and Yuanzhi Li. Textbooks are all
you need. arXiv preprint arXiv: 2306.11644, 2023.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, D. Song, and J. Steinhardt.
Measuring mathematical problem solving with the math dataset. NeurIPS Datasets and Benchmarks, 2021.
Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, J. Hopcroft, and Kilian Q. Weinberger. Snapshot ensembles:
Train 1, get m for free. International Conference on Learning Representations, 2017.
Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las
Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne
Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed.
Mistral 7b. arXiv preprint arXiv: 2310.06825, 2023.
10
-----
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. Mawps: A math word
problem repository. In Proceedings of the 2016 conference of the north american chapter of the association for
_computational linguistics: human language technologies, pp. 1152–1157, 2016._
S. Laine and Timo Aila. Temporal ensembling for semi-supervised learning. International Conference on Learning
_Representations, 2016._
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose
Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant
Misra. Solving quantitative reasoning problems with language models, 2022.
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. Making large language
models better reasoners with step-aware verifier. arXiv preprint arXiv: 2206.02336, 2022.
Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, and Yin Tat Lee. Textbooks are
all you need ii: phi-1.5 technical report. arXiv preprint arXiv:2309.05463, 2023.
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John
Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step, 2023.
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng
Chen, and Dongmei Zhang. Wizardmath: Empowering mathematical reasoning for large language models via
reinforced evol-instruct. arXiv preprint arXiv: 2308.09583, 2023.
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri,
Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean
Welleck, Amir Yazdanbakhsh, and Peter Clark. Self-refine: Iterative refinement with self-feedback. arXiv preprint
_arXiv: 2303.17651, 2023._
Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su. A diverse corpus for evaluating and developing english
math word problem solvers. _Annual Meeting of the Association for Computational Linguistics, 2020._ doi:
10.18653/v1/2020.acl-main.92.
Seyed-Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Nir Levine, Akihiro Matsukawa, and Hassan Ghasemzadeh.
Improved knowledge distillation via teacher assistant. arXiv preprint arXiv: 1902.03393, 2019.
Arindam Mitra, Luciano Del Corro, Shweti Mahajan, Andres Codas, Clarisse Simoes, Sahaj Agarwal, Xuxi Chen,
Anastasia Razdaibiedina, Erik Jones, Kriti Aggarwal, Hamid Palangi, Guoqing Zheng, Corby Rosset, Hamed
Khanpour, and Ahmed Awadallah. Orca 2: Teaching small language models how to reason, 2023.
Ansong Ni, Jeevana Priya Inala, Chenglong Wang, Alex Polozov, Christopher Meek, Dragomir Radev, and
Jianfeng Gao. Learning math reasoning from self-sampled correct and partially-correct solutions. In The
_[Eleventh International Conference on Learning Representations, 2023a. URL https://openreview.net/forum?](https://openreview.net/forum?id=4D4TSJE6-K)_
[id=4D4TSJE6-K.](https://openreview.net/forum?id=4D4TSJE6-K)
Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong.
Codegen: An open large language model for code with multi-turn program synthesis. arXiv preprint arXiv:
_2203.13474, 2022._
OpenAI. Gpt-4 technical report, 2023.
Yonatan Oren, Nicole Meister, Niladri Chatterji, Faisal Ladhak, and Tatsunori B. Hashimoto. Proving test set
contamination in black box language models. arXiv preprint arXiv: 2310.17623, 2023.
11
-----
Arkil Patel, S. Bhattamishra, and Navin Goyal. Are nlp models really able to solve simple math word problems?
_North American Chapter Of The Association For Computational Linguistics, 2021. doi: 10.18653/V1/2021._
NAACL-MAIN.168.
Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, E. Chi, Nathanael Scharli, and Denny
Zhou. Large language models can be easily distracted by irrelevant context. International Conference on Machine
_Learning, 2023a. doi: 10.48550/arXiv.2302.00093._
Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, and Luke
Zettlemoyer. Detecting pretraining data from large language models. arXiv preprint arXiv: 2310.16789, 2023b.
Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot, and Ross Anderson. The curse of
recursion: Training on generated data makes models forget. arXiv preprint arXiv: 2305.17493, 2023.
Ben Sorscher, Robert Geirhos, Shashank Shekhar, Surya Ganguli, and Ari Morcos. Beyond neural scaling laws:
beating power law scaling via data pruning. Advances in Neural Information Processing Systems, 35:19523–19536,
2022.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov,
Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya
Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao,
Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas,
Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux,
Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar
Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan
Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor,
Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie
Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2:
Open foundation and fine-tuned chat models. arXiv preprint arXiv: 2307.09288, 2023.
Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell,
Geoffrey Irving, and Irina Higgins. Solving math word problems with process- and outcome-based feedback. arXiv
_preprint arXiv: 2211.14275, 2022._
Xuezhi Wang, Jason Wei, D. Schuurmans, Quoc Le, E. Chi, and Denny Zhou. Self-consistency improves chain
of thought reasoning in language models. International Conference on Learning Representations, 2022. doi:
10.48550/arXiv.2203.11171.
Yue Wang, Hung Le, Akhilesh Deepak Gotmare, Nghi D. Q. Bui, Junnan Li, and Steven C. H. Hoi. Codet5+: Open
code large language models for code understanding and generation. arXiv preprint arXiv: 2305.07922, 2023.
Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, A. Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V.
Le. Finetuned language models are zero-shot learners. International Conference on Learning Representations,
2021.
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten
Bosma, Denny Zhou, Donald Metzler, E. Chi, Tatsunori Hashimoto, Oriol Vinyals, P. Liang, J. Dean, and W. Fedus.
Emergent abilities of large language models. Trans. Mach. Learn. Res., 2022a. doi: 10.48550/arXiv.2206.07682.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, E. Chi, F. Xia, Quoc Le, and Denny Zhou. Chain of
thought prompting elicits reasoning in large language models. Neural Information Processing Systems, 2022b.
Yixuan Weng, Minjun Zhu, Fei Xia, Bin Li, Shizhu He, Kang Liu, and Jun Zhao. Large language models are better
reasoners with self-verification. arXiv preprint arXiv: 2212.09561, 2022.
Mitchell Wortsman, Gabriel Ilharco, Samir Ya Gadre, Rebecca Roelofs, Raphael Gontijo Lopes, Ari S. Morcos,
Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, and Ludwig Schmidt. Model soups: averaging
weights of multiple fine-tuned models improves accuracy without increasing inference time. In Kamalika Chaudhuri,
12
-----
Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato (eds.), International Conference on
_Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine_
_[Learning Research, pp. 23965–23998. PMLR, 2022. URL https://proceedings.mlr.press/v162/wortsman22a.](https://proceedings.mlr.press/v162/wortsman22a.html)_
[html.](https://proceedings.mlr.press/v162/wortsman22a.html)
Yuxi Xie, Kenji Kawaguchi, Yiran Zhao, Xu Zhao, Min-Yen Kan, Junxian He, and Qizhe Xie. Decomposition
enhances reasoning via self-evaluation guided decoding. arXiv preprint arXiv: 2305.00633, 2023.
Fei Yu, Anningzhe Gao, and Benyou Wang. Outcome-supervised verifiers for planning in mathematical reasoning.
_arXiv preprint arXiv: 2311.09724, 2023a._
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian
Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions for large language models.
_arXiv preprint arXiv: 2309.12284, 2023b._
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. Mammoth:
Building math generalist models through hybrid instruction tuning. arXiv preprint arXiv: 2309.05653, 2023.
Xu Zhao, Yuxi Xie, Kenji Kawaguchi, Junxian He, and Qizhe Xie. Automatic model selection with large language
models for reasoning. arXiv preprint arXiv: 2305.14333, 2023.
Chuanyang Zheng, Zhengying Liu, Enze Xie, Zhenguo Li, and Yu Li. Progressive-hint prompting improves reasoning
in large language models. ARXIV.ORG, 2023. doi: 10.48550/arXiv.2304.09797.
Aojun Zhou, Ke Wang, Zimu Lu, Weikang Shi, Sichun Luo, Zipeng Qin, Shaoqing Lu, Anya Jia, Linqi Song, Mingjie
Zhan, and Hongsheng Li. Solving challenging math word problems using gpt-4 code interpreter with code-based
self-verification. arXiv preprint arXiv: 2308.07921, 2023.
13
-----
### A Additional details on TinyGSM
**A.1** **Other prompts**
The majority of TinyGSM was generated using the prompt in Figure 3, where GPT-3.5-turbo is asked to generate both the question and the corresponding solution. The remaining data, including all data based on GSM-IC, is generated using a two-step process: the first step prompts the model to generate question variants, and the second step asks it to generate a Python solution for each question variant generated in the first step. The exact prompts are provided in Figure 7–Figure 9.
Design 10 grade -school math problems similar to a given math problem.
- Make sure the language is different and sufficiently rephrased.
- Feel free to change the scenario, story and quantities.
- Make sure the new problems are no easier than the given problem and require more steps to solve.
- Make sure the new problems are self -complete, clear, unambiguous, and coherent.
- Please do not include hints or solutions for the new problems.
**## The original math problem**
{{ question }}
**## New problems**
- Problem 1:
Figure 7: The prompt template for generating question variants based on GSM8K.
Please write 10 questions similar in style to a given question, where the new questions also contain
irrelevant information. Make sure to change the name and scenarios.
**# Original question**
{{ question }}
**# New questions**
- Question 1:
Figure 8: The prompt template for generating question variants based on GSM8K-IC.
**A.2** **Contamination check: 13-gram collisions**
There are 22 questions (out of 11.0M) with 13-gram collisions to test set questions. Examples are shown in Figure 10.
### B Pretrained vs Random Init
In this section, we present a comparison of training on TinyGSM from a random initialization versus from a pretrained
model.
|Model size|125M|350M|1.3B|
|---|---|---|---|
|Random Init|53.1|55.5|57.3|
|pretrained|63.3|65.9|68.2|
Table 2: Comparison of performance with and without pretraining.
14
-----
Please generate a detailed and complete Python program to solve a given math question.
- Use variables to represent the quantities in the question.
- Solve the question **step by step** (do NOT give the result directly, **DO NOT write calculations in the
comments **).
- The program should contain multiple lines of code and end with ’result = XXX ’ (Make sure to replace XXX with
the actual result of the python program !!!).
- Then, the result should be printed out in the format of f’<<<{result}>>>’.
- Make sure your Python program is complete and solves the problem. Do **NOT** write things like ’solution to
be completed ’, result = ?, insert your code here etc.
- Give the complete solution to solve the problem in Python. Do not write things like ’insert your code here ’.
- You should solely rely on the Python program to solve the question. Do not do calculations in the comments.
The comment should not include any numbers.
- Try to use fewer comments since they are expensive.
- If you really want to solve equations like x = ..., try to use ‘‘import sympy ‘‘ and ** sympy.solve ()**. sympy
.solve(expression) returns the roots of the expression. Do not write down calculations in the comments!
- If you need to calculate the ceiling of a number, use ‘import math ‘ then ‘math.ceil().‘
- End the Python program with <|endofprogram|> in a new line.
**### Question**
{{ question }}
**### Program**
Figure 9: The prompt template for generating code solution for a given question.
**# Q: Daniel has** **brothers His older brother is** **years older than** **times Daniel ’s age when Daniel was** **years**
**old His younger brother is** **years old which is** **the age of the older brother What is their combined age**
match:
In a family there are brothers and sisters All sisters are the same age which is One of the brothers is
** years old which is the age of the older brother What is** the total age of all these siblings
**# Q: Two cars are driving on a highway The first car is traveling at an average speed of** **miles per hour**
**while the second car takes a minute break after driving for** **minutes how long can they remain stopped before the**
**first car catches up with them**
match:
**Two cars are driving on a highway The first car is traveling at an average speed of miles per hour** when
the second car passes it at an average speed of miles per hour If both cars continue on the highway at the same
speed how many miles will separate them after hours
**# Q: Leo and Nora sold lemonade at a stand Leo sold** **cups at** **cents each and Nora sold** **cups at** **cents each**
**They decided to split the money equally How much money did each of them get**
match:
While walking down the street with his young siblings Greg found To be fair to his siblings he ** decided to
split the money equally How much money did each of them get**
**# Q: A bookstore is selling a book for** **while is a** **discount from the original price What was the original**
**price of the book**
match:
Kyle bought last year ’s bestselling book for This is with a ** discount from the original price What was the
original price of the book**
Figure 10: Examples of 13-gram collisions between TinyGSM questions and GSM8K test set questions (punctuation and numbers are removed before matching).
15
-----
| [
"Bingbin, Liu",
"Sebastien, Bubeck",
"Ronen, Eldan",
"Janardhan, Kulkarni",
"Anh, Nguyen",
"Rachel, Ward",
"Yuanzhi, Li",
"Yi, Zhang"
] | 2023-10-28T00:00:00 | null | false | 0 | 0 | null | https://openreview.net/forum?id=ROOVUBZp8v | null | null |
Token-Supervised Value Models for Enhancing Mathematical Reasoning Capabilities of Large Language Models | Large Language Models (LLMs) have demonstrated impressive problem-solving capabilities in mathematics through step-by-step reasoning chains. However, they are susceptible to reasoning errors that impact the quality of subsequent reasoning chains and the final answer due to language models' autoregressive token-by-token generating nature. Recent works have proposed adopting external verifiers to guide the generation of reasoning paths, but existing works utilize models that have been trained with step-by-step labels to assess the correctness of token-by-token reasoning chains. Consequently, they struggle to recognize discriminative details of tokens within a reasoning path and lack the ability to evaluate whether an intermediate reasoning path is on a promising track toward the correct final answer. To amend the lack of sound and token-grained math-verification signals, we devise a novel training scheme for verifiers that apply token-level supervision with the expected cumulative reward (i.e., value). Furthermore, we propose a practical formulation of the cumulative reward by reducing it to finding the probability of future correctness of the final answer and thereby enabling the empirical estimation of the value. Experimental results on mathematical reasoning benchmarks show that Token-Supervised Value Model (TVM) can outperform step-by-step verifiers on GSM8K and MATH with Mistral and Llama. | Experimental results on mathematical reasoning benchmarks show that Token-Supervised Value Model (TVM) can outperform step-by-step verifiers on GSM8K and MATH with Mistral and Llama. | ## Token-Supervised Value Models for Enhancing Mathematical Reasoning Capabilities of Large Language Models
**Jung Hyun Lee[*][1][†], June Yong Yang[*][2][†], Byeongho Heo[3], Dongyoon Han[3], Kang Min Yoo[1,4][†]**
1NAVER Cloud, 2KAIST AI, 3NAVER AI Lab, 4SNU AI Center
_[†[email protected], [email protected], [email protected]](mailto:[email protected])_
**Abstract**
Large Language Models (LLMs) have demonstrated impressive problem-solving capabilities
in mathematics through step-by-step reasoning chains. However, they are susceptible to
reasoning errors that impact the quality of subsequent reasoning chains and the final answer
due to language models’ autoregressive token-by-token generating nature. Recent works have
proposed adopting external verifiers to guide
the generation of reasoning paths, but existing
works utilize models that have been trained
with step-by-step labels to assess the correctness of token-by-token reasoning chains. Consequently, they struggle to recognize discriminative details of tokens within a reasoning
path and lack the ability to evaluate whether
an intermediate reasoning path is on a promising track toward the correct final answer. To
amend the lack of sound and token-grained
math-verification signals, we devise a novel
training scheme for verifiers that apply token-level supervision with the expected cumulative
reward (i.e., value). Furthermore, we propose a
practical formulation of the cumulative reward
by reducing it to finding the probability of future correctness of the final answer and thereby
enabling the empirical estimation of the value.
Experimental results on mathematical reasoning benchmarks show that Token-Supervised
_Value Model (TVM) can outperform step-by-_
step verifiers on GSM8K and MATH with Mistral and Llama.
[Figure 1 body: two sampled reasoning paths for the same problem, one reaching a correct and one a wrong final answer. For the first reasoning step ("Terry took 8 bushels ... corn."), ORM assigns 1 (outcome correct) or 0 (outcome wrong), PRM assigns 1 (process correct), and TVM assigns 0.5 to every token (token neutral). At a later step beginning "Stacy took 21 ears of corn", the continuation reaching the correct final answer receives ORM 1, PRM 1, and TVM per-token labels rising from 0.5 to 1 (token correct), while the wrong continuation ("... + 168 ...") receives ORM 0, PRM 0, and TVM per-token labels dropping to 0 (token wrong).]
Figure 1: **Illustrative comparison of token-level supervision (TVM; ours) with outcome supervision (ORM) and process supervision (PRM).** We provide examples for both a correct and a wrong reasoning path. In reasoning step 4 of each example, both ORM and PRM use uniform labels judged by the correctness of either an entire reasoning path or step, which poses challenges for recognizing discriminative details of tokens within a reasoning path. On the other hand, TVM is supervised with distinct per-token labels, thus enabling the distinction of the details of tokens within a reasoning path and leading to more precise outcomes (see Fig. 2).
**1** **Introduction**

Large language models (LLMs) pre-trained on massive data have achieved human-level performance across a wide range of tasks in natural language processing (Maslej et al., 2024). A notable exception to this trend is complex multi-step reasoning tasks such as mathematical problem solving, where current state-of-the-art LLMs still struggle to attain near-human performance. Previous studies have focused on enhancing the reasoning capabilities of LLMs through: encouraging LLMs to generate step-by-step thought processes via few-shot or zero-shot prompting (Wei et al., 2022; Kojima et al., 2022); fine-tuning LLMs with question-solution pairs to generate intermediate reasoning steps before producing a final answer (Cobbe et al., 2021; Luo et al., 2023; Yu et al., 2023; Yuan et al., 2023); and employing aggregation techniques such as majority voting over final answers extracted from solutions generated by LLMs (Wang et al., 2023).

*Equal contribution.

Preprint.

-----
However, when LLMs are left to their own devices to solve given problems, they remain error-prone due to their autoregressive nature in generating reasoning paths. If an LLM, by chance,
produces a single error during generation, the
reasoning path can be easily steered towards a
wrong answer. This would worsen for LLMs when
they face more complex reasoning tasks such
as advanced-level mathematical problems in the
MATH dataset (Hendrycks et al., 2021). To address
this, researchers have focused on providing external aid to the LLM by training verifiers to assess
the correctness of generated reasoning paths.
Existing verifiers can be categorized into two
types: outcome-supervised reward models (ORMs)
and process-supervised reward models (PRMs).
ORMs (Cobbe et al., 2021; Uesato et al., 2022;
Yu et al., 2024) are trained to assess the correctness
of a reasoning path by labeling each token as either correct or incorrect solely based on whether
the final answer in the reasoning path is correct.
PRMs (Lightman et al., 2023; Uesato et al., 2022;
Wang et al., 2024) are trained with step-level labels
to assess the correctness of each reasoning step, and
they are generally preferred over ORMs due to the
finer resolution of assessment in practice. Despite
being proposed to assist LLMs, current verifiers
may retain a fundamental misalignment with their
per-token granularity. Since ORMs and PRMs employ uniform labels according to the correctness of
either a whole reasoning path or step, respectively
(Fig. 1), we argue that they were not designed to (i)
learn the discriminative details of tokens within a
reasoning path or (ii) evaluate whether an intermediate reasoning path is on a promising track toward
the correct final answer.
In this paper, we propose the Token-supervised
Value Model (TVM), a novel verifier that supervises each token in a reasoning path with a distinctive label, training each token with the expected
cumulative reward. Unlike ORMs and PRMs, our
token-level supervision with distinct per-token
value labels along a reasoning path (Fig. 1) equips
TVMs with the ability to capture the discriminative details of tokens within a reasoning path (see
Fig. 2). Furthermore, providing a theoretical insight that the value of each token is equivalent to
the probability of reaching the correct final answer
from that token, we propose to label each token via
empirical value estimation along sampled reasoning paths. TVM is trained to predict the probability of a per-token intermediate reasoning path being on a promising track toward the correct final answer. Therefore, TVM can choose, among candidate reasoning paths, the one most likely to reach the correct final answer, whether the candidates are partial or complete. Our contributions are threefold:
- We propose the Token-supervised Value
Model (TVM), a new verifier capable of capturing token-wise details via direct supervision with the expected cumulative reward (i.e.,
value) for each token along a reasoning path.
- We generate per-token labels for verifier supervision via empirical value estimation, which
allows TVM to predict the probability of an intermediate reasoning path reaching the correct
final answer.
- We show that TVM achieves performance improvements on GSM8K and MATH benchmarks across LLMs under 10B parameters,
compared to ORMs and PRMs.
**2** **Background**
This section reviews existing verifier frameworks
for enhancing the mathematical reasoning capabilities of LLMs. Sec. 2.1 outlines the preliminary
setups for training verifiers in mathematical reasoning verification. The subsequent sections revisit
two existing types of supervision for verifier training: outcome supervision (Sec. 2.2) and process
supervision (Sec. 2.3).
**2.1** **Training Verifiers for Mathematical**
**Reasoning**
The mathematical reasoning capabilities of LLMs
can be enhanced by employing reward models as
external verifiers to assess the generated reasoning paths (Cobbe et al., 2021; Uesato et al., 2022;
Lightman et al., 2023; Yu et al., 2024; Wang et al.,
2024). The verifier is generally trained via supervised learning on a dataset obtained by sampling
multiple reasoning paths per training problem using
an LLM. Specifically, given a training problem $q_{tr}$ as an input, the LLM first generates $N_{tr}$ _reasoning paths_, where the $n$-th reasoning path is comprised of reasoning steps $\{s_{n,j}\}_{j=1}^{S_n}$ and a final answer $a_n$ for $n = 1, \ldots, N_{tr}$. In token-level notation, the $n$-th reasoning path can also be expressed as a sequence of tokens $\{t_{n,k}\}_{k=1}^{T_n}$. Hereafter, $\{s_{n,\cdot}\}_1^j$ and $\{t_{n,\cdot}\}_1^k$ mean $\{s_{n,1}, \ldots, s_{n,j}\}$ and $\{t_{n,1}, \ldots, t_{n,k}\}$, respectively. The final answer $a_n$ is correct if it is

-----

equal to the ground truth answer $\hat{a}$, and incorrect otherwise. Based on the correctness of the sampled
reasoning paths, supervision is traditionally given
in two ways: (i) outcome supervision (Cobbe et al.,
2021; Uesato et al., 2022; Yu et al., 2024) and (ii)
process supervision (Uesato et al., 2022; Lightman
et al., 2023; Wang et al., 2024).
**2.2** **Outcome Supervision**

Prior works (Cobbe et al., 2021; Uesato et al., 2022; Yu et al., 2024) employ outcome supervision to label an entire reasoning path as correct if its final answer is correct (Fig. 1). The outcome reward function $r_o(\cdot)$ is the correctness of the final answer:

$$r_o(a_n) = \begin{cases} 1 & \text{if } a_n = \hat{a} \\ 0 & \text{if } a_n \neq \hat{a} \end{cases} \quad (1)$$

for $n = 1, \ldots, N_{tr}$. An outcome-supervised reward model (ORM) $f_{ORM}$ is trained with every token in a reasoning path labeled as the outcome reward (Eq. 1). The ORM loss $\mathcal{L}_{ORM}$ is defined as

$$\mathcal{L}_{ORM} = \sum_{n,k}^{N_{tr}, T_n} \ell\Big(r_o(a_n), f_{ORM}\big(q_{tr}, \{t_{n,\cdot}\}_1^k\big)\Big). \quad (2)$$

The mean squared error is typically used as a loss function $\ell(\cdot)$ in Eq. 2. Cobbe et al. (2021) demonstrated that a token-level verifier trained to judge the correctness after every token performs better than a solution-level verifier trained to determine the correctness only after the final token.

Interestingly, Yu et al. (2024) showed that ORMs can be alternatively described as modeling the _cumulative reward_ for each token, where all intermediate rewards are zero (i.e., $r(t_{n,k}) = 0$ for every $n$ and $k$) and the discount factor $\gamma$ is set to 1. The cumulative reward following an intermediate token $t_{n,k}$, $R(t_{n,k})$, is calculated as

$$R(t_{n,k}) = r(t_{n,k+1}) + \cdots + r(t_{n,T_n}) + r_o(a_n) = \begin{cases} 0 + \cdots + 0 + 1 = 1 & \text{if } a_n = \hat{a} \\ 0 + \cdots + 0 + 0 = 0 & \text{if } a_n \neq \hat{a}, \end{cases} \quad (3)$$

which is equivalent to $r_o(a_n)$ in Eq. 1. This entails that an intermediate reasoning path is labeled as correct if the final answer is correct, and vice versa. In this sense, ORMs can indirectly and implicitly learn the potential correctness of an intermediate reasoning path (Yu et al., 2024).

**2.3** **Process Supervision**

_Process supervision_ enables a more accurate assessment of a reasoning path by explicitly training a verifier on the correctness of each step with step-level supervision (Lightman et al., 2023). The correctness of each reasoning step is either labeled via human annotation (Uesato et al., 2022; Lightman et al., 2023) or automation (Wang et al., 2024). Since acquiring human annotations is labor-intensive and costly, we mainly focus on process supervision without human annotations.

Following Wang et al. (2024), an intermediate reasoning step $s_{n,j}$ can be labeled as correct if at least one of the reasoning paths starting from $s_{n,j}$ reaches the correct final answer $\hat{a}$ (Fig. 1). In practice, $s_{n,j}$ is annotated by sampling a fixed number of reasoning paths conditioned on a sequence of intermediate reasoning steps $\{s_{n,\cdot}\}_1^j = \{s_{n,1}, \ldots, s_{n,j}\}$. If at least one of the sampled reasoning paths reaches the correct final answer, $s_{n,j}$ is labeled as correct with the process reward $r_p(s_{n,j}) = 1$. Otherwise, $s_{n,j}$ is labeled as incorrect and $r_p(s_{n,j}) = 0$. Using the per-step labels obtained through automation, a Process-supervised Reward Model (PRM) is trained to provide a step-level assessment by minimizing the following loss:

$$\mathcal{L}_{PRM} = \sum_{n,j}^{N_{tr}, S_n} \ell\Big(r_p(s_{n,j}), f_{PRM}\big(q_{tr}, \{s_{n,\cdot}\}_1^j\big)\Big), \quad (4)$$

where $\ell$ denotes the binary cross entropy loss.
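To summarize the two supervision schemes side by side, the following sketch shows how outcome-level and step-level labels could be derived from sampled reasoning paths; it is our own illustration (the `rollout_fn` interface and data structures are assumptions), not code from the cited works.

```python
# Illustrative sketch (not from the paper): constructing outcome-level (ORM)
# and step-level (automation-based PRM) labels from sampled reasoning paths.
from dataclasses import dataclass

@dataclass
class ReasoningPath:
    steps: list          # list[str]: the intermediate reasoning steps
    final_answer: str

def orm_labels(path: ReasoningPath, gold: str) -> list[int]:
    # Every step (and hence every token) inherits the outcome correctness.
    outcome = int(path.final_answer == gold)
    return [outcome] * len(path.steps)

def prm_labels(path: ReasoningPath, gold: str, rollout_fn,
               num_rollouts: int = 8) -> list[int]:
    # A step is labeled correct if at least one continuation sampled from that
    # prefix reaches the gold answer. rollout_fn(prefix_steps, n) is assumed to
    # return n final answers sampled from the generator given the prefix.
    labels = []
    for j in range(1, len(path.steps) + 1):
        answers = rollout_fn(path.steps[:j], num_rollouts)
        labels.append(int(any(a == gold for a in answers)))
    return labels
```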
**3** **Method**
In this section, we introduce our proposed method
coined the Token-supervised Value Model (TVM), a novel verifier trained with a token-level supervision strategy to directly estimate the expected cumulative reward (i.e., value) for each token along a reasoning path. We also describe how to empirically estimate per-token value labels from $N_{tr}$ generated reasoning paths for token-level supervision.
**3.1** **Motivation**
As mentioned in Sec. 2, both outcome supervision
(ORMs) and process supervision (PRMs) utilize
homogeneous labels determined by the correctness
of either the entire reasoning path or step (Fig. 1).
Consequently, we hypothesize that they are likely
to be neither explicitly nor directly trained to (i)
learn the discriminative details of tokens within a
-----
Figure 2: Illustration of reasoning paths ranked highest by ORM/PRM/TVM (ours) among 256 candidate reasoning paths for a test problem from GSM8K. We use Mistral-7B-MetaMath and illustrate practical failure cases of ORM and PRM compared to ours. (a) The reasoning step 4 begins with “So the total difference” but ends with a summation. Hence, as soon as step 4 is finished, the TVM score decreases dramatically. (b) The reasoning step 5 starts with “Thus, the total difference” but ends in subtracting a large number (“120”) from a small number (“95”), which is the exact opposite of the definition of difference. Thus, the TVM score declines. (c) The reasoning step 3 opens with “The total difference”, ending in subtracting the small number (“95”) from the large number (“120”), which is finally correct. As a result, immediately after the token “=” emerges, the TVM score rises while the PRM score remains intact due to its step-wise assessment. Therefore, TVM can filter out (a) and (b) while selecting (c) with the highest score, enabling token-level discrimination within a reasoning path.

[Figure body: the GSM8K problem (answer: 25) and three candidate reasoning paths with per-token verifier scores: (a) ranked highest by ORM, final answer 215 (ORM 0.42, PRM 0.05, TVM 0.16); (b) ranked highest by PRM, final answer -25 (ORM 0.20, PRM 0.89, TVM 0.19); (c) ranked highest by TVM, correct final answer 25 (ORM 0.07, PRM 0.33, TVM 0.42).]
reasoning path or (ii) evaluate whether an intermediate reasoning path is on a promising track toward
the correct final answer.
We elucidate our assertion through cases observed in practice, as illustrated in Fig. 2. In the
reasoning path ranked highest by ORM ((a) in
Fig. 2), reasoning step 4 begins with “So the
total difference” but ends with a summation,
where a logical error occurs. However, ORM is unable to catch the error and maintains a score over
0.4, the highest score among 256 candidate reasoning paths. In the reasoning path ranked highest by
PRM ((b) in Fig. 2), reasoning step 5 starts with
“Thus, the total difference” but ends in subtracting a larger number (“120”) from a smaller
number (“95”), which is the exact opposite of the
definition of difference. In the reasoning path,
“120” appears in reasoning step 4 after “95” appears in reasoning step 3. Since PRMs focus on
assessing the correctness of the current reasoning
step, the sequential appearance of numbers and the
resulting subtraction are considered correct by the
PRM even though the reasoning path is unlikely
to lead to a correct answer. The observed failures
inspire the proposal of a token-level value supervision strategy for training verifiers.
**3.2** **Token-level Value Supervision**
To overcome the issues above, we propose a new
verifier based on token-level supervision with distinctive token-wise labels according to the potential
of tokens in deducing the correct final answer. A
natural choice to appropriately reflect the token-wise potential is prospective value modeling (Sutton and Barto, 2018), which is fine-grained and future-oriented compared to retrospective cumulative reward modeling (Eq. 3). Accordingly, we construct a supervision scheme for token $t_{n,k}$ in a reasoning path $\{t_{n,\cdot}\}_1^k = \{t_{n,1}, \ldots, t_{n,k}\}$ with the _expected cumulative reward_ (i.e., value):

$$V(t_{n,k}) = \mathbb{E}\left[\sum_{l=1}^{\infty} \gamma^{l-1} r(t_{n,k+l}) \,\middle|\, q_{tr}, \{t_{n,\cdot}\}_1^k\right], \quad (5)$$

where $r(\cdot)$ and $\gamma$ denote a reward function and the discount factor, respectively.
The primary challenge in training value models
as verifiers is estimating the value labels of a generated reasoning path (Yu et al., 2024). However,
-----
[Figure body: three reasoning paths sampled for one training problem (final answers 656, 84, and 357; the correct answer is 357), shown token by token together with the per-token value labels 0.33, 0.5, 0, and 1 assigned as described in the caption.]

Figure 3: Illustration of empirical value estimation using Eq. 9. For a single training problem, $N_{tr}$ reasoning-answer pairs are sampled using an LLM. Here, let $N_{tr} = 3$ for convenience. (1) All three sentences begin with the same tokens $\{t_{1,k}\}_{k=1}^{a-1}$, and only one of them reaches the correct final answer (357). Accordingly, every token of $\{t_{1,k}\}_{k=1}^{a-1}$ is labeled as $1/3 = 0.33$. (2) At the $a$-th position, however, only one sentence starts with $t_{1,a}$, which reaches an incorrect final answer (656). Thus, all tokens after $t_{1,a}$ are labeled as $0/1 = 0$. (3) The remaining two sentences continue with the same tokens $\{t_{2,k}\}_{k=a}^{b-1}$, only one of which is correct. Hence, every token of $\{t_{2,k}\}_{k=a}^{b-1}$ is labeled as $1/2 = 0.5$. (4) Finally, at the $b$-th position, which one is correct is pre-determined. As a result, all tokens after $t_{2,b}$ are labeled as $0/1 = 0$, whereas all tokens after $t_{3,b}$ are labeled as $1/1 = 1$.
under the specific outcome reward formulation of Eq. 1 and no intermediate rewards, the expected cumulative reward (Eq. 5) reduces to the probability of reaching the correct final answer conditioned on the question $q_{tr}$ and intermediate reasoning path $\{t_{n,\cdot}\}_1^k$, which can be straightforwardly computed from generated reasoning paths and can indicate whether an intermediate reasoning path (i.e., $\{t_{n,\cdot}\}_1^k$) is on a promising track toward the correct final answer.

**Proposition 3.1.** _Let the reward function $r(t_{n,k})$ be defined as Eq. 1, which includes only the outcome reward with the discount factor $\gamma = 1$ and no intermediate reward (i.e., $r(t_{n,k}) = 0$ except the final answer). Then, the expected cumulative reward (Eq. 5) is equivalent to the probability of reaching the correct final answer conditioned on $q_{tr}$ and $\{t_{n,\cdot}\}_1^k = \{t_{n,1}, \ldots, t_{n,k}\}$:_

$$\mathbb{E}\left[\sum_{l=1}^{\infty} \gamma^{l-1} r(t_{n,k+l}) \,\middle|\, q_{tr}, \{t_{n,\cdot}\}_1^k\right] = P(\text{the final answer will be } \hat{a} \mid q_{tr}, \{t_{n,\cdot}\}_1^k). \quad (6)$$

The right-hand side of Eq. 6 can be empirically estimated from generated reasoning paths by calculating the proportion of correct reasoning paths starting from $\{t_{n,\cdot}\}_1^k$ among total reasoning paths starting from $\{t_{n,\cdot}\}_1^k$ (see Sec. 3.3).

Following Proposition 3.1, we train the _Token-supervised Value Model_ (TVM) by supervising each token with a value label empirically estimated as the probability of reaching the correct final answer given the tokens up to that point. The objective of TVM is

$$\mathcal{L}_{TVM} = \sum_{n,k} \ell\Big(P_{n,k}, f_{TVM}\big(q_{tr}, \{t_{n,\cdot}\}_1^k\big)\Big) \quad (7)$$

for $n = 1, \ldots, N_{tr}$ and $k = 1, \ldots, T_n$, where $P_{n,k}$ indicates the right-hand side of Eq. 6 and the loss function $\ell$ is the mean squared error.
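For concreteness, the objective in Eq. 7 can be sketched as a masked per-token mean-squared error; the snippet below is our own illustration with assumed tensor shapes, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): token-level value supervision (Eq. 7).
# `scores` are the verifier's per-token predictions and `value_labels` are the
# empirically estimated per-token values in [0, 1]; padding is masked out.
import torch

def tvm_loss(scores: torch.Tensor,
             value_labels: torch.Tensor,
             attention_mask: torch.Tensor) -> torch.Tensor:
    # Mean squared error between predicted and estimated values, per token.
    per_token = (scores - value_labels) ** 2
    mask = attention_mask.float()
    return (per_token * mask).sum() / mask.sum()

# Dummy usage: batch of 2 reasoning paths, 5 tokens each.
scores = torch.rand(2, 5)
value_labels = torch.tensor([[0.33, 0.33, 0.5, 1.0, 1.0],
                             [0.33, 0.33, 0.5, 0.0, 0.0]])
mask = torch.ones(2, 5)
print(tvm_loss(scores, value_labels, mask))
```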
Compared to existing verifiers, the resolution of assessment provided by the proposed token-level value supervision adequately matches the token-wise granularity of LLMs, thereby being able to capture the discriminative details of tokens within a reasoning path (Fig. 2). In contrast to ORMs, TVM is trained to directly estimate the probability of an intermediate reasoning path being on a promising track toward the correct final answer (Proposition 3.1). As a result, TVM can choose the reasoning path most likely to reach the correct final answer among candidate reasoning paths, whether they are partial or complete.

During inference, TVM can be employed either to search for the reasoning path most likely to be correct over complete reasoning paths generated by an LLM (Lightman et al., 2023) or to distinguish prospective candidates likely to reach the correct final answer among partially generated reasoning paths. For the latter, we conduct a detailed study in Sec. 4.3 in the setting of verifier-guided step-wise beam search (Yu et al., 2024).

**3.3** **Empirical Value Estimation**

As discussed in Sec. 3.2, Proposition 3.1 alleviates the practical challenges of value estimation (Eq. 5) by formulating the value as the ratio of correct reasoning paths to total reasoning paths. Following Eq. 5 and Eq. 6, the estimated value for each token
-----
Table 1: Accuracy of Mistral-7B, Mistral-7B-MetaMath, Llama3-8B, and Llama3-8B-MetaMath on the GSM8K
benchmark under best-of-N search (N = 256) and verifier-guided step-level beam search (K = 40, b = 10). "BS"
stands for beam search.
|Search Strategy|Method|Mistral-7B|Mistral-7B-MetaMath|Llama3-8B|Llama3-8B-MetaMath|
|---|---|---|---|---|---|
|Best-of-N Search|Self-Consistency|79.23|83.90|80.97|85.44|
|Best-of-N Search|ORM|85.52|86.20|87.79|89.77|
|Best-of-N Search|Math-Shepherd|-|87.10|-|89.23|
|Best-of-N Search|TVM (Ours)|**88.17**|**88.86**|**88.70**|**90.37**|
|Verifier-guided Step-level BS|OVM|86.73|87.79|88.10|89.69|
|Verifier-guided Step-level BS|TVM (Ours)|**87.72**|**88.78**|**89.01**|**90.30**|
$t_{n,k}$ can be represented as

$$V(t_{n,k}) = \mathbb{E}\left[\sum_{l=1}^{\infty} \gamma^{l-1} r(t_{n,k+l}) \,\middle|\, q_{tr}, \{t_{n,\cdot}\}_1^k\right] = P(\text{the final answer will be } \hat{a} \mid q_{tr}, \{t_{n,\cdot}\}_1^k) = \frac{P(\{t_{n,\cdot}\}_1^k \cap \text{the final answer will be } \hat{a} \mid q_{tr})}{P(\{t_{n,\cdot}\}_1^k \mid q_{tr})}. \quad (8)$$
In practice, Eq. 8 can be empirically estimated from the $N_{tr}$ generated reasoning paths as the ratio of the number of correct reasoning paths starting from $\{t_{n,\cdot}\}_1^k$ among $N_{tr}$ to the number of total reasoning paths starting from $\{t_{n,\cdot}\}_1^k$ among $N_{tr}$. The value label of each token $V(t_{n,k})$ is assigned as

$$V(t_{n,k}) = \frac{\sum_{n'=1}^{N_{tr}} \mathbb{I}\big(\{t_{n',\cdot}\}_1^k = \{t_{n,\cdot}\}_1^k \cap a_{n'} = \hat{a}\big)/N_{tr}}{\sum_{n'=1}^{N_{tr}} \mathbb{I}\big(\{t_{n',\cdot}\}_1^k = \{t_{n,\cdot}\}_1^k\big)/N_{tr}}, \quad (9)$$

where $\mathbb{I}(\cdot)$ is the indicator function and $N_{tr}$ cancels out. The overall procedure of empirical value estimation is described in Figure 3. The overall algorithm is deferred to Appendix C.
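The estimator in Eq. 9 reduces to prefix counting over the sampled paths; the following self-contained sketch (our own illustration, not the released code) computes the per-token value labels this way.

```python
# Illustrative sketch (not the authors' code): per-token value labels (Eq. 9).
# For each token position k of path n, the label is the fraction of sampled
# paths that share the same k-token prefix AND end in the correct answer,
# among all sampled paths sharing that prefix.
from collections import defaultdict

def value_labels(paths: list[list[str]], answers: list[str],
                 gold: str) -> list[list[float]]:
    prefix_total = defaultdict(int)    # prefix -> number of paths with this prefix
    prefix_correct = defaultdict(int)  # prefix -> number of those that are correct
    for tokens, ans in zip(paths, answers):
        for k in range(1, len(tokens) + 1):
            prefix = tuple(tokens[:k])
            prefix_total[prefix] += 1
            prefix_correct[prefix] += int(ans == gold)
    labels = []
    for tokens in paths:
        labels.append([prefix_correct[tuple(tokens[:k])] / prefix_total[tuple(tokens[:k])]
                       for k in range(1, len(tokens) + 1)])
    return labels

# Toy example in the spirit of Figure 3 (three paths, correct answer "357"):
paths = [["Terry", "took", "Jerry"],
         ["Terry", "took", "and", "6"],
         ["Terry", "took", "and", "1"]]
answers = ["656", "84", "357"]
print(value_labels(paths, answers, gold="357"))
```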
**4** **Experiments**

To demonstrate the efficacy of TVM in improving the mathematical reasoning capabilities of LLMs, we conduct extensive experiments on the GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021) benchmarks. Our experiments are based on the following large language models: 1) Mistral-7B (Jiang et al., 2023) and Llama3-8B (AI@Meta, 2024); 2) those fine-tuned on MetaMATH (Yu et al., 2023). We use two existing verifier utilization strategies: (i) best-of-N search and (ii) step-by-step beam search.

**Best-of-N search.** The best-of-N search strategy introduced in Lightman et al. (2023) is a conventional experimental setting to evaluate the performance of a verifier. For every test problem, an LLM first generates N complete reasoning paths. The reasoning path ranked highest by the verifier is chosen as the final candidate. For all experiments, we set N = 256 following Wang et al. (2024) unless specified otherwise.

**Verifier-guided step-level beam search (BS).** To prevent errors in an intermediate reasoning step from propagating to subsequent steps, Yu et al. (2024) proposed guided decoding during intermediate reasoning steps via a verifier, a search strategy we call verifier-guided step-level beam search. For a test problem, after an LLM partially generates K reasoning paths each containing only the first intermediate reasoning step, the verifier-guided step-level beam search strategy alternates between the following two steps until all K partially generated reasoning paths are complete: (i) a verifier selects the top-b (< K) ranked partially generated reasoning paths, and (ii) the LLM generates K/b subsequent intermediate reasoning steps for each path chosen by the verifier. Among the K complete reasoning paths, the one scored highest by the verifier is selected. Thanks to verifier intervention in generating each intermediate reasoning step, with K much smaller than N, the performance of verifier-guided step-level beam search can be similar to that of best-of-N search in Tables 1 and 2.

**4.1** **Grade School Mathematics (GSM8K)**

**Setups.** An LLM is fine-tuned on the training dataset of GSM8K for two epochs with a batch size of 128 and a learning rate of 1e-5. Then, we sample $N_{tr} = 100$ reasoning paths per training problem with a temperature of 0.7 from the fine-tuned LLM and label each token in a reasoning path as in Eq. 9. Finally, TVM initialized from either the same LLM or the fine-tuned LLM is trained on this dataset for one epoch with a batch size of 512 and a learning rate of either 2e-6 or 1e-5. More experimental details are deferred to Appendix E.

**Results.** In the case of best-of-N search, we compare TVM with ORM (Cobbe et al., 2021) and Math-Shepherd (Wang et al., 2024), a PRM without human annotations, as explained in Sec. 2. As all experimental results in Wang et al. (2024) are

-----
Table 2: Accuracy of Mistral-7B-MetaMath, and Llama3-8B-MetaMath on the MATH benchmark under best-of-N
search (N = 256) and verifier-guided step-level beam search (K = 40, b = 10). "BS" stands for beam search.
|Search Strategy|Method|Mistral-7B-MetaMath|Llama3-8B-MetaMath|
|---|---|---|---|
|Best-of-N Search|Self-Consistency|35.10|42.40|
|Best-of-N Search|ORM|36.40|**43.60**|
|Best-of-N Search|Math-Shepherd|37.30|43.40|
|Best-of-N Search|TVM (Ours)|**37.40**|43.40|
|Verifier-guided Step-level BS|OVM|36.60|42.40|
|Verifier-guided Step-level BS|TVM (Ours)|**39.20**|**45.20**|
only based on LLMs fine-tuned on MetaMATH,
we also evaluate Math-Shepherd only for Mistral-7B-MetaMath and Llama3-8B-MetaMath. Despite
using large N, Table 1 shows that TVM surpasses
ORM and Math-Shepherd with improvements ranging from 0.6 to 2.6%p as well as self-consistency
from 4.9 to 8.9%p, across the board.
Under the verifier-guided step-level beam search
strategy, we primarily compare TVM against
OVM (Yu et al., 2024) because Yu et al. (2024)
confirmed that step-level beam search guided by
a token-level verifier performs significantly better than that guided by a sentence-level value
model (Feng et al., 2024) on GSM8K. Further comparison to Feng et al. (2024) is presented in Appendix B. In Table 1, TVM also consistently outperforms OVM ranging from 0.6 to 1.0%p.
One might wonder why the accuracy of OVM
(K = 40) for Mistral-7B is much higher than that
of OVM (K = 100) reported in Yu et al. (2024).
This discrepancy arises because, in our experiments, some tokens (e.g., <<, >>) are correctly
converted to token IDs by the Mistral-7B tokenizer.
**4.2** **Advanced Mathematics (MATH)**
**Setups.** We employ LLMs fine-tuned on MetaMath (Mistral-7B-MetaMath and Llama3-8B-MetaMath) without any further fine-tuning on the
training dataset of MATH in order to sample reasoning paths in a newline-delimited format. Following
Lightman et al. (2023); Wang et al. (2024), we
also use 500 test MATH problems for evaluation,
which is the same test dataset of Lightman et al.
(2023), incorporating the remaining 4500 test problems into the training dataset of MATH. For each
training problem, a fine-tuned LLM on MetaMath
generates Ntr = 25 reasoning paths with a temperature of 0.7, with each token labeled as Eq. 9. Then,
we train TVM initialized from the same fine-tuned
LLM for one epoch on this dataset with a batch
size of 512 and a learning rate of 2e-6. Further
experimental details are given in Appendix E.
**Results.** Similar to Sec. 4.1, Table 2 compares (i)
TVM’s best-of-N search performance with ORM
and Math-Shepherd, and (ii) TVM-guided step-level beam search to ORM-guided step-level beam
search (i.e., OVM). In the former case, the performance of TVM is slightly superior or almost comparable to that of ORM and Math-Shepherd. This
might be due to the fact that an LLM is extremely
prone to producing errors in the process of generating N reasoning paths for difficult MATH problems. However, when capitalizing on the verifier-guided step-level beam search strategy, not only does TVM outperform OVM by 2.6 to 2.8%p, but TVM-guided step-level beam search also exhibits much better performance than best-of-N search with any verifier, even though K = 40 is much smaller than N = 256.
**4.3** **Analyses on Verifier-guided Step-level BS**
**Case study.** To validate the superiority of TVM
over OVM in predicting whether an intermediate
reasoning path is on a promising track toward the
correct answer, for a test problem in the GSM8K
benchmark, we compare OVM’s and TVM’s predictions. As illustrated in Fig. 4, in the third reasoning
step, OVM incorrectly predicts a wrong intermediate reasoning path with the highest score while
assigning a low score to a correct path. This occurs because OVM is inherently identical to ORM
trained to implicitly and indirectly learn the potential correctness of an intermediate reasoning path.
In contrast, TVM accurately predicts a correct intermediate path with the highest score and a wrong
one with a low score. As TVM is trained to directly
and explicitly estimate the probability of reaching the correct final answer for each token along
a reasoning path, TVM can effectively predict at
inference whether an intermediate reasoning path
is on a promising track toward the correct answer.
**Beam size study.** To investigate whether the accuracy of TVM improves with larger values of K
and b in verifier-guided step-level beam search,
we conduct experiments using TVM with vary
-----
```
<PROBLEM>: There are 96 fourth-graders at Small Tree School. 43 of them are girls. On Friday, 5 fourth-grade girls and
4 fourth-grade boys were absent. How many fourth grade boys were at Small Tree School on Friday? (Answer: 49)

Shared first two reasoning steps:
  There are 96 - 43 = <<96-43=53>>53 fourth-grade boys at Small Tree School.
  5 + 4 = <<5+4=9>>9 fourth-graders were absent on Friday.

BEAM SELECTED BY OVM (OVM: 0.94, TVM: 0.09):
  There are 53 - 9 = <<53-9=44>>44 fourth-grade boys at Small Tree School on Friday.
  The answer is: 44   [WRONG]

BEAM SELECTED BY TVM (OVM: 0.25, TVM: 0.63):
  Out of the 9, 5 were girls, leaving 9-5 = <<9-5=4>>4 boys absent.
  There were 53-4 = <<53-4=49>>49 fourth-grade boys present on Friday.
  The answer is: 49   [CORRECT]
```
Figure 4: Illustration of OVM’s and TVM’s predictions under verifier-guided step-level beam search. In the third reasoning
step, while OVM incorrectly predicts a wrong intermediate reasoning path with the highest score and assigns a low score to a
correct path, TVM accurately predicts a correct intermediate path with the highest score and a wrong one with a low score.
Table 3: Mean and standard deviation of TVM’s accuracy for Mistral-7B and Mistral-7B-MetaMath on the
GSM8K benchmark according to varying sizes of K
and b when employing verifier-guided step-level beam
search. Three random trials are carried out.
| K, b | Mistral-7B | Mistral-7B-MetaMath |
| --- | --- | --- |
| 40, 10 | 87.69 ± 0.22 | 88.70 ± 0.16 |
| 80, 20 | 87.89 ± 0.35 | 88.75 ± 0.20 |
| 100, 25 | 87.92 ± 0.13 | 88.80 ± 0.07 |
ing sizes of K and b for Mistral-7B and Mistral-7B-MetaMath on the GSM8K benchmark. Table
3 shows that the accuracy of TVM on GSM8K
increases as both K and b grow, but reaches a saturation point when K = 100 and b = 25.
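For readers who want the procedure spelled out, the following Python sketch shows one plausible shape of verifier-guided step-level beam search with roughly K sampled candidates per round and beam width b. Here `sample_next_step`, `verifier_score`, and `is_finished` are hypothetical placeholders for the LLM sampler, the trained verifier, and an end-of-solution check, so this is an illustration of the strategy rather than the paper's implementation.

```python
# Hedged sketch of verifier-guided step-level beam search (illustrative only).
# sample_next_step / verifier_score / is_finished are hypothetical placeholders.
def step_level_beam_search(problem, sample_next_step, verifier_score,
                           is_finished, K=40, b=10, max_steps=20):
    beams = [""]                                      # partial reasoning paths
    for _ in range(max_steps):
        candidates = []
        for path in beams:
            if is_finished(path):
                candidates.append(path)               # keep completed paths as-is
                continue
            for _ in range(max(K // len(beams), 1)):  # ~K new candidates per round
                candidates.append(path + sample_next_step(problem, path))
        # keep the b partial paths the verifier judges most promising
        beams = sorted(candidates, key=lambda p: verifier_score(problem, p),
                       reverse=True)[:b]
        if all(is_finished(p) for p in beams):
            break
    return max(beams, key=lambda p: verifier_score(problem, p))
```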
**5** **Related Work**
**Best-of-N search.** Given N complete reasoning paths, a verifier (Cobbe et al., 2021; Uesato et al., 2022; Lightman et al., 2023; Wang et al., 2024) ranks them and picks the highest-scored reasoning path. Although best-of-N search using a verifier shows much superior performance compared to verifier-free strategies such as self-consistency (Wang et al., 2023), best-of-N search still possesses the same drawback as self-consistency in that a large quantity of generated reasoning paths is required to solve challenging reasoning problems.
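For concreteness, best-of-N selection with a verifier (and the verifier-free self-consistency baseline it is compared against) can be sketched in a few lines of Python; `generate_full_solution`, `verifier_score`, and `extract_answer` are hypothetical stand-ins, so this is illustrative rather than any particular system's code.

```python
# Hedged sketch of best-of-N search and self-consistency (illustrative only).
from collections import Counter

def best_of_n(problem, generate_full_solution, verifier_score, N=256):
    candidates = [generate_full_solution(problem) for _ in range(N)]
    return max(candidates, key=lambda sol: verifier_score(problem, sol))

def self_consistency(problem, generate_full_solution, extract_answer, N=256):
    answers = [extract_answer(generate_full_solution(problem)) for _ in range(N)]
    return Counter(answers).most_common(1)[0][0]      # majority-vote answer
```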
**Step-level beam search.** In contrast to the selection among complete reasoning paths, several
studies have focused on step-level beam searches
for partial reasoning paths. Step-level beam search
can be divided into (i) verifier-free step-level beam
search and (ii) verifier-guided step-level beam
search.
Under the verifier-free step-level beam search
strategy, Yao et al. (2023); Hao et al. (2023) allow
value estimation by prompting LLMs to sample or
simulate long-term outcomes during inference. Alternatively, Feng et al. (2024) and Yu et al. (2024) introduce step-level beam search guided by a sentence-level value model and an outcome-supervised reward model, respectively. Although Feng et al. (2024) and Yu et al. (2024) show that verifier-guided step-level beam search achieves significant accuracy improvements over the verifier-free one, each approach has its own weakness. As delineated in Yu
et al. (2024), a sentence-level value model is unsuitable for step-level beam search. In addition, Yu et al.
(2024) uses an outcome-supervised reward model,
not a value model. As a result, there is still room for
improvement in the performance of verifier-guided
step-level beam search.
**6** **Conclusion**
In this paper, we introduce a novel verifier termed
the Token-supervised Value Model (TVM). This
model uses per-token value labels to guide LLMs
toward promising mathematical reasoning paths.
Unlike traditional verifiers, which lack token-level
labels and thus cannot precisely evaluate intermediate steps in reasoning paths, TVM can estimate the expected cumulative reward for each token. This enables TVM to capture detailed token-level information and reason more precisely about intermediate paths leading to the correct answer. Experimental results on benchmarks such as GSM8K and MATH have revealed that TVM outperforms previous verifiers across 7B-scale LLMs,
including Mistral-7B and Llama3-8B, demonstrating its enhanced accuracy and effectiveness.
-----
**Limitations**
Our method has demonstrated significant improvements over previous competing methods, but resource constraints prevented us from running further
experiments. Our TVM was primarily evaluated
using 7B-scale models for mathematical reasoning,
but it can be applied to larger models and extended
to other domains. Additionally, our model could be
utilized as a value model in reinforcement learning, such as in Proximal Policy Optimization training (Schulman et al., 2017; Zheng et al., 2023), to
supervise LLMs.
**References**
[AI@Meta. 2024. Llama 3 model card.](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md)
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
[2021. Training verifiers to solve math word prob-](https://arxiv.org/abs/2110.14168)
[lems. Preprint, arXiv:2110.14168.](https://arxiv.org/abs/2110.14168)
Xidong Feng, Ziyu Wan, Muning Wen, Stephen Marcus
McAleer, Ying Wen, Weinan Zhang, and Jun Wang.
[2024. Alphazero-like tree-search can guide large](https://arxiv.org/abs/2309.17179)
[language model decoding and training.](https://arxiv.org/abs/2309.17179) _Preprint,_
arXiv:2309.17179.
Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen
[Wang, Daisy Zhe Wang, and Zhiting Hu. 2023. Rea-](https://openreview.net/forum?id=VTWWvYtF1R)
[soning with language model is planning with world](https://openreview.net/forum?id=VTWWvYtF1R)
[model. In The 2023 Conference on Empirical Meth-](https://openreview.net/forum?id=VTWWvYtF1R)
_ods in Natural Language Processing._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
Jacob Steinhardt. 2021. Measuring mathematical
problem solving with the math dataset. NeurIPS.
Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego
de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud,
Marie-Anne Lachaux, Pierre Stock, Teven Le Scao,
Thibaut Lavril, Thomas Wang, Timothée Lacroix,
[and William El Sayed. 2023. Mistral 7b. Preprint,](https://arxiv.org/abs/2310.06825)
arXiv:2310.06825.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yu[taka Matsuo, and Yusuke Iwasawa. 2022. Large lan-](https://openreview.net/forum?id=e2TBb5y0yFf)
[guage models are zero-shot reasoners. In Advances](https://openreview.net/forum?id=e2TBb5y0yFf)
_in Neural Information Processing Systems._
Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying
Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient
memory management for large language model serving with pagedattention. In Proceedings of the ACM
_SIGOPS 29th Symposium on Operating Systems Prin-_
_ciples._
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri
Edwards, Bowen Baker, Teddy Lee, Jan Leike,
John Schulman, Ilya Sutskever, and Karl Cobbe.
2023. [Let’s verify step by step.](https://arxiv.org/abs/2305.20050) _Preprint,_
arXiv:2305.20050.
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei
Lin, Shifeng Chen, and Dongmei Zhang. 2023. Wizardmath: Empowering mathematical reasoning for
large language models via reinforced evol-instruct.
_arXiv preprint arXiv:2308.09583._
Nestor Maslej, Loredana Fattorini, Raymond Perrault,
Vanessa Parli, Anka Reuel, Erik Brynjolfsson, John
Etchemendy, Katrina Ligett, Terah Lyons, James
Manyika, Juan Carlos Niebles, Yoav Shoham, Russell
[Wald, and Jack Clark. 2024. Artificial intelligence](https://arxiv.org/abs/2405.19522)
[index report 2024. Preprint, arXiv:2405.19522.](https://arxiv.org/abs/2405.19522)
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec
[Radford, and Oleg Klimov. 2017. Proximal policy op-](https://arxiv.org/abs/1707.06347)
[timization algorithms. Preprint, arXiv:1707.06347.](https://arxiv.org/abs/1707.06347)
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky,
Ilya Sutskever, and Ruslan Salakhutdinov. 2014.
[Dropout: A simple way to prevent neural networks](http://jmlr.org/papers/v15/srivastava14a.html)
[from overfitting. Journal of Machine Learning Re-](http://jmlr.org/papers/v15/srivastava14a.html)
_search, 15(56):1929–1958._
Richard S Sutton and Andrew G Barto. 2018. Reinforce_ment learning: An introduction. MIT press._
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton
Ferrer, Moya Chen, Guillem Cucurull, David Esiobu,
Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan
Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa,
Isabel Kloumann, Artem Korenev, Punit Singh Koura,
Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten,
Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu,
Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas
[Scialom. 2023. Llama 2: Open foundation and fine-](https://arxiv.org/abs/2307.09288)
[tuned chat models. Preprint, arXiv:2307.09288.](https://arxiv.org/abs/2307.09288)
Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell,
Geoffrey Irving, and Irina Higgins. 2022. Solving math word problems with process-and outcomebased feedback. arXiv preprint arXiv:2211.14275.
Peiyi Wang, Lei Li, Zhihong Shao, R. X. Xu, Damai
Dai, Yifei Li, Deli Chen, Y. Wu, and Zhifang Sui.
-----
2024. [Math-shepherd: Verify and reinforce llms](https://arxiv.org/abs/2312.08935)
[step-by-step without human annotations. Preprint,](https://arxiv.org/abs/2312.08935)
arXiv:2312.08935.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le,
Ed H. Chi, Sharan Narang, Aakanksha Chowdhery,
[and Denny Zhou. 2023. Self-consistency improves](https://openreview.net/forum?id=1PL1NIMMrw)
[chain of thought reasoning in language models. In](https://openreview.net/forum?id=1PL1NIMMrw)
_The Eleventh International Conference on Learning_
_Representations._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le,
[and Denny Zhou. 2022. Chain of thought prompt-](https://openreview.net/forum?id=_VjQlMeSB_J)
[ing elicits reasoning in large language models. In](https://openreview.net/forum?id=_VjQlMeSB_J)
_Advances in Neural Information Processing Systems._
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran,
Thomas L. Griffiths, Yuan Cao, and Karthik
Narasimhan. 2023. [Tree of Thoughts: Deliber-](https://arxiv.org/abs/2305.10601)
[ate problem solving with large language models.](https://arxiv.org/abs/2305.10601)
_Preprint, arXiv:2305.10601._
Fei Yu, Anningzhe Gao, and Benyou Wang. 2024.
[Ovm, outcome-supervised value models for plan-](https://arxiv.org/abs/2311.09724)
ning in [mathematical](https://arxiv.org/abs/2311.09724) reasoning. _Preprint,_
arXiv:2311.09724.
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu,
Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. 2023.
Metamath: Bootstrap your own mathematical questions for large language models. _arXiv preprint_
_arXiv:2309.12284._
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting
Dong, Keming Lu, Chuanqi Tan, Chang Zhou, and
[Jingren Zhou. 2023. Scaling relationship on learning](https://arxiv.org/abs/2308.01825)
[mathematical reasoning with large language models.](https://arxiv.org/abs/2308.01825)
_Preprint, arXiv:2308.01825._
Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei
Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu,
Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi,
Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang,
Zhangyue Yin, Rongxiang Weng, Wensen Cheng,
Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui,
Qi Zhang, Xipeng Qiu, and Xuanjing Huang. 2023.
[Secrets of rlhf in large language models part i: Ppo.](https://arxiv.org/abs/2307.04964)
_Preprint, arXiv:2307.04964._
-----
**A** **Proof of Proposition 3.1**
Let the reward function $r(t_{n,k})$ be defined as Eq. 1, which includes only the outcome reward with the discount factor $\gamma = 1$ and no intermediate reward (i.e., $r(t_{n,k}) = 0$ except at the final answer). Then, $\sum_{l=1}^{\infty} \gamma^{l-1} r(t_{n,k+l}) = \sum_{l=1}^{\infty} r(t_{n,k+l})$ becomes either one or zero, depending on whether the resulting final answer will be $\hat{a}$ or not, respectively. As a result, the expected cumulative reward (Eq. 5) is written as

$$
\begin{aligned}
\mathbb{E}\Big[\sum_{l=1}^{\infty} \gamma^{l-1} r(t_{n,k+l}) \,\Big|\, q_{tr}, \{t_{n,\cdot}\}_1^k\Big]
&= \mathbb{E}\Big[\sum_{l=1}^{\infty} r(t_{n,k+l}) \,\Big|\, q_{tr}, \{t_{n,\cdot}\}_1^k\Big] && (\because \gamma = 1)\\
&= \sum_{r=0}^{1} r \cdot P\Big(\sum_{l=1}^{\infty} r(t_{n,k+l}) = r \,\Big|\, q_{tr}, \{t_{n,\cdot}\}_1^k\Big) && \Big(\because \textstyle\sum_{l=1}^{\infty} r(t_{n,k+l}) = 0 \text{ or } 1\Big)\\
&= P\Big(\sum_{l=1}^{\infty} r(t_{n,k+l}) = 1 \,\Big|\, q_{tr}, \{t_{n,\cdot}\}_1^k\Big)\\
&= P\big(\text{the final answer will be } \hat{a} \,\big|\, q_{tr}, \{t_{n,\cdot}\}_1^k\big),
\end{aligned}
$$

because $\sum_{l=1}^{\infty} r(t_{n,k+l}) = 1$ only if the resulting final answer will be $\hat{a}$.
-----
**B** **Additional Comparison of TVM with a sentence-level value model (Feng et al., 2024) as**
**well as OVM**
Although Yu et al. (2024) corroborated that step-level beam search guided by a token-level verifier
performs significantly better than that guided by a sentence-level value model (Feng et al., 2024) on
GSM8K, we additionally compare our TVM with a sentence-level value model (Feng et al., 2024) as well
as OVM for Llama2-7B.
Table 4: Mean and standard deviation of accuracy of a sentence-level value model (Feng et al., 2024), OVM,
and TVM for Llama2-7B (Touvron et al., 2023) on the GSM8K benchmark in the case of K = 10 under the
verifier-guided step-level beam search strategy. For TVM, three random trials are conducted.
| Search Strategy | Method | Llama2-7B |
| --- | --- | --- |
| Verifier-guided Step-level Beam Search | Feng et al. (2024) | 52.20 ± 0.90 |
| Verifier-guided Step-level Beam Search | OVM | 66.50 ± 0.20 |
| Verifier-guided Step-level Beam Search | TVM (Ours) | **66.82** ± 0.38 |
As seen in Table 4, our TVM is superior to both a sentence-level value model (Feng et al., 2024)
and OVM. As explained in Yu et al. (2024), under the verifier-guided step-level beam search strategy,
outcome-supervised reward models can pretend to be a value model, but process-supervised reward
models cannot. As a result, the accuracy of a sentence-level value model is worse than that of OVM and
TVM.
-----
**C** **Algorithm of Empirical Value Estimation**
**Algorithm 1 Empirical Value Estimation**

**Require:** For a question $q_{tr}$: $N_{tr}$ reasoning paths, each consisting of $\{t_{n,k}\}_{k=1}^{T_n}$ and a final answer $a_n$; the ground-truth answer $\hat{a}$; and the outcome reward function $r_o(a_n)$ in Eq. 1, for $n = 1, \cdots, N_{tr}$.
**Ensure:**
$H \leftarrow$ dict()
**for** $n = 1, \cdots, N_{tr}$ **do**
  **for** $k = 1, \cdots, T_n$ **do**
    **if** not $H$.containsKey($[t_{n,1}, \cdots, t_{n,k}]$) **then**
      $H$.insert($[t_{n,1}, \cdots, t_{n,k}]$, $(r_o(a_n), 1)$)
    **else**
      $(c, t) \leftarrow H$.get($[t_{n,1}, \cdots, t_{n,k}]$)
      $H$.insert($[t_{n,1}, \cdots, t_{n,k}]$, $(c + r_o(a_n), t + 1)$)
    **end if**
  **end for**
**end for**
**for** $n = 1, \cdots, N_{tr}$ **do**
  **for** $k = 1, \cdots, T_n$ **do**
    $(c, t) \leftarrow H$.get($[t_{n,1}, \cdots, t_{n,k}]$)   ▷ $c$ is the number of correct reasoning paths starting from $t_{n,1}, \cdots, t_{n,k}$; $t$ is the number of total reasoning paths starting from $t_{n,1}, \cdots, t_{n,k}$
    $V(t_{n,k}) = c/t$   ▷ Eq. 9
  **end for**
**end for**
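A direct Python transcription of Algorithm 1 might look as follows; it assumes that reasoning paths are given as lists of token IDs together with their final answers and keys prefixes as tuples (a sketch, not the authors' code).

```python
# Sketch of Algorithm 1: empirical value estimation via a prefix hash map.
def empirical_value_estimation(paths, answers, ground_truth):
    """paths[n] is the token list of the n-th reasoning path and answers[n] its
    final answer. Returns V[(n, k)], the estimated probability of reaching the
    correct answer given the first k tokens of path n."""
    H = {}                                            # prefix -> (num_correct, num_total)
    for tokens, answer in zip(paths, answers):
        reward = 1 if answer == ground_truth else 0   # outcome reward r_o(a_n)
        for k in range(1, len(tokens) + 1):
            prefix = tuple(tokens[:k])
            c, t = H.get(prefix, (0, 0))
            H[prefix] = (c + reward, t + 1)
    V = {}
    for n, tokens in enumerate(paths):
        for k in range(1, len(tokens) + 1):
            c, t = H[tuple(tokens[:k])]
            V[(n, k)] = c / t                         # Eq. 9
    return V
```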
-----
**D** **Compute Analysis between Best-of-N Search and Verifier-guided Step-level Beam**
**Search**
Table 5: Execution time of best-of-N search without and with vLLM (Kwon et al., 2023) and verifier-guided
step-level beam search on the GSM8K and MATH benchmarks when using 8 x NVIDIA A100-80GB GPUs and
Mistral-7B-MetaMath.
| Search Strategy | GSM8K | MATH |
| --- | --- | --- |
| Best-of-N search w/o vLLM | 6.5 hours | 22 hours |
| Best-of-N search w/ vLLM | 2.1 hours | 2.4 hours |
| Verifier-guided step-level beam search | **0.9 hours** | **1.1 hours** |
-----
**E** **Implementation Details**
In both Sec. 4.1 and Sec. 4.2, following Cobbe et al. (2021), we use both a language modeling objective
and the verification objective in Eq. 7, with 20% dropout (Srivastava et al., 2014). Additionally, we employ
the same architecture as Cobbe et al. (2021), a language model extended with a scalar head composed of a
single gain parameter and a single bias parameter, to output a score for each token in a reasoning path. We
use the AdamW optimizer with a linear scheduler. We generate Ntr reasoning paths with a temperature of
0.7, a top-k of 50, and a top-p of 1.0. Note that for every experiment, a verifier has the same model size
and architecture as an LLM used to generate Ntr reasoning paths.
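One way the architecture and joint objective described above could be realized is sketched below in PyTorch, assuming a Hugging Face-style causal LM. The per-token scalar feature (a small linear layer), the sigmoid, and the MSE form of the verification loss are illustrative assumptions; the original head applies only a single gain and a single bias, the exact objective is Eq. 7 in the main text, and padding masks are omitted for brevity.

```python
# Rough sketch of a token-level verifier head and joint objective (the per-token
# scalar feature, sigmoid, and MSE verification loss are illustrative choices).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TokenValueVerifier(nn.Module):
    def __init__(self, base_lm, hidden_size):
        super().__init__()
        self.base_lm = base_lm                        # causal LM returning hidden states
        self.feature = nn.Linear(hidden_size, 1)      # per-token scalar feature (assumption)
        self.gain = nn.Parameter(torch.ones(1))       # scalar head: single gain ...
        self.bias = nn.Parameter(torch.zeros(1))      # ... and single bias

    def forward(self, input_ids, attention_mask):
        out = self.base_lm(input_ids, attention_mask=attention_mask,
                           output_hidden_states=True)
        h = out.hidden_states[-1]                     # (batch, seq, hidden)
        scores = self.gain * self.feature(h).squeeze(-1) + self.bias
        return out.logits, torch.sigmoid(scores)      # per-token value predictions

def joint_loss(logits, values, input_ids, value_labels, lm_weight=1.0):
    lm_loss = F.cross_entropy(logits[:, :-1].flatten(0, 1), input_ids[:, 1:].flatten())
    value_loss = F.mse_loss(values, value_labels)     # supervise every token with its V label
    return lm_weight * lm_loss + value_loss
```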
Table 6: Learning rate, batch size, and verifier initialization for training TVM when using Mistral-7B, Mistral-7B-MetaMath, Llama3-8B, and Llama3-8B-MetaMath to generate Ntr = 100 reasoning paths per training problem of GSM8K in Sec. 4.1. Fine-tuned Mistral-7B and fine-tuned Llama3-8B mean that they are fine-tuned on the training
dataset of GSM8K as described in Sec. 4.1.
| | Mistral-7B | Mistral-7B-MetaMath | Llama3-8B | Llama3-8B-MetaMath |
| --- | --- | --- | --- | --- |
| Learning rate | 2e-6 | 2e-6 | 1e-5 | 2e-6 |
| Batch size | 512 | 512 | 512 | 512 |
| Verifier initialization | fine-tuned Mistral-7B | Mistral-7B-MetaMath | fine-tuned Llama3-8B | Llama3-8B-MetaMath |
Table 7: Learning rate, batch size, and verifier initialization for training TVM when using Mistral-7B-MetaMath
and Llama3-8B-MetaMath to generate Ntr = 25 reasoning paths per training problem of MATH in Sec. 4.2.
| | Mistral-7B-MetaMath | Llama3-8B-MetaMath |
| --- | --- | --- |
| Learning rate | 2e-6 | 2e-6 |
| Batch size | 512 | 512 |
| Verifier initialization | Mistral-7B-MetaMath | Llama3-8B-MetaMath |
For both best-of-N search and verifier-guided step-level beam search, we also use a temperature of 0.7,
a top-k of 50, and a top-p of 1.0. The maximum new token length is set to 400 for GSM8K and 1024 for
MATH, respectively.
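Expressed as a Hugging Face `GenerationConfig`, these sampling settings would look roughly as follows (an illustrative mapping, not the authors' script).

```python
# Hedged sketch of the sampling configuration described above.
from transformers import GenerationConfig

gsm8k_generation = GenerationConfig(do_sample=True, temperature=0.7, top_k=50,
                                    top_p=1.0, max_new_tokens=400)
math_generation = GenerationConfig(do_sample=True, temperature=0.7, top_k=50,
                                   top_p=1.0, max_new_tokens=1024)
```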
-----
| [
"Jung Hyun, Lee",
"June Yong, Yang",
"Byeongho, Heo",
"Dongyoon, Han",
"Kang Min, Yoo"
] | 2024-07-12T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2407.12863 | https://arxiv.org/abs/2407.12863 | https://www.semanticscholar.org/paper/651c9af1ee9467c7a0b46aaddd4db8b264f5248d |
Top-down Automated Theorem Proving (Notes for Sir Timothy) | We describe a "top down" approach for automated theorem proving (ATP). Researchers might usefully investigate the forms of the theorems mathematicians use in practice, carefully examine how they differ and are proved in practice, and code all relevant domain concepts. These concepts encode a large portion of the knowledge in any domain. Furthermore, researchers should write programs that produce proofs of the kind that human mathematicians write (and publish); this means proofs that might sometimes have mistakes; and this means making inferences that are sometimes invalid. This approach is meant to contrast with the historically dominant "bottom up" approach: coding fundamental types (typically sets), axioms and rules for (valid) inference, and building up from this foundation to the theorems of mathematical practice and to their outstanding questions. It is an important fact that the actual proofs that mathematicians publish in math journals do not look like the formalized proofs of Russell & Whitehead's Principia Mathematica (or modern computer systems like Lean that automate some of this formalization). We believe some "lack of rigor" (in mathematical practice) is human-like, and can and should be leveraged for ATP. | It is believed some"lack of rigor"(in mathematical practice) is human-like, and can and should be leveraged for ATP. | TOP-DOWN AUTOMATED THEOREM PROVING
(NOTES FOR SIR TIMOTHY)
C. E. LARSON AND N. VAN CLEEMPUT
Abstract. We describe a “top down” approach for automated
theorem proving (ATP). Researchers might usefully investigate the
forms of the theorems mathematicians use in practice, carefully
examine how they differ and are proved in practice, and code all
relevant domain concepts. These concepts encode a large portion
of the knowledge in any domain. Furthermore, researchers should
write programs that produce proofs of the kind that human mathematicians write (and publish); this means proofs that might sometimes have mistakes; and this means making inferences that are
sometimes invalid.
This approach is meant to contrast with the historically dominant “bottom up” approach: coding fundamental types (typically
sets), axioms and rules for (valid) inference, and building up from
this foundation to the theorems of mathematical practice and to
their outstanding questions. It is an important fact that the actual proofs that mathematicians publish in math journals do not
look like the formalized proofs of Russell & Whitehead’s Principia
Mathematica (or modern computer systems like Lean that automate some of this formalization). We believe some “lack of rigor”
(in mathematical practice) is human-like, and can and should be
leveraged for ATP.
1. Background
In 1948 Turing [27] suggested mathematics as a subject that intelligent machines might contribute to, and in 1958 Newell and Simon
[25] predicted, “That within ten years a digital computer will discover
and prove an important new mathematical theorem.” More than 60
years later, computer proof of an important new theorem still seems
distant. What might be wanted is a program that, given a conjecture
in the domains research mathematicians work in (like number theory,
matrix theory, graph theory, etc) as input, sometimes produces a proof
of the conjecture. Nothing like this exists. There have been scattered
computer proofs of conjectures, most famously the 1997 EQP proof of
(*) Research supported by the Simons Foundation Mathematics and Physical
Sciences–Collaboration Grants for Mathematicians Award (426267).
-----
the Robbin’s conjecture [19], and lots of related research, but no track
record yet to build on. Some automated theorem proving (ATP) researchers have begun looking for new ideas and approaches[1]. Here we
propose a “top-down” approach: develop programs that produce proofs
in the domains that mathematicians work in. This approach (“top-down ATP”) is meant to contrast with the more traditional bottom-up
approach: develop programs that produce proofs from axioms of logic
or set theory (or foundational relatives, such as dependent type theory);
after this is done, translate any conjecture in an existing mathematical
domain to the language of set theory.
The bottom-up ATP approach has been dominant. Logicians and
set theorists in the first half of the 20th century argued variously that
mathematics was in fact logic (that is, that mathematical statements
were logically true) and that all of mathematics could be derived from
various set-theoretic axioms and exactly specified inference rules. The
first automated-theorem proving programs in the 1950s in fact aimed
to prove the theorems of Russell and Whitehead’s systematization of
mathematics Principia Mathematica [30]; these programs included,
for instance, Newell and Simon’s logic theorist program [20] and
Wang’s IBM-era programs [29]. Well-known advances in this area of
research included simplified or efficient inference rules such as resolution theorem proving [24]. It is worth noting that proof-assistants like
Lean—which are not intended to prove theorems automatedly, but which
leverage computers in the task of formalization—are also bottom-up:
computers can be used for type-checking, making routine steps and
keeping track of dependencies and managing large databases, but they
start with bottom-level foundational objects, and adhere to strict inference rules.
At the same time there has been continuing work on top-down ATP
approaches that did not emphasize reduction to logic or set theory, or
that were domain-specific. The first of these might have been Gelernter’s geometry program [12]. Some of this research in fact has been
spectacularly successful. Wu’s work on geometry is one example [33].
A less-recognized example is WZ theory and Zeilberger’s work on hypergeometric series: huge classes of identities, including many famous
and classical identities, can now be proved completely automatedly
[31]. Top-down ATP research might have had limited impact among
bottom-up ATP researchers exactly because it was not obvious how
1This note was inspired by a blog post of the Fields’ medalist Timothy Gowers,
who has written about his attempts to code “human-style” reasoning for ATP. To
err is human.
-----
this research could advance the goals of researchers with less specific, more general theorem-proving ambitions.
There is now a recent burst of energy and publicity for a variety of efforts including proof-checking programs, interactive theorem-provers, and proof-assistants, which are relatives of bottom-up ATP
[21]. There is a very long history of proof-checking programs; the
COQ program, which has been used to check proofs of many famous theorems, has received generous coverage in the mathematical community at least in part because of the advocacy of Voevodsky [13]. The Lean program has
received substantial coverage, in part because of its adoption by working mathematicians (and the millions of lines of coded mathematical
theorems its users have contributed). It began at Microsoft Research
(see: [https://leanprover.github.io/about/](https://leanprover.github.io/about/)), and Google Research
is also now participating [22]—substantial institutional support.
The terms top-down ATP and bottom-up ATP are meant to suggest an analogy with top-down programming versus bottom-up programming. “Programming is traditionally taught using a bottom-up
approach, where details of syntax and implementation of data structures are the predominant concepts. The top-down approach proposed
focuses instead on understanding the abstractions presented by the
classical data structures without regard to their physical implementation” [23]. In top-down programming you begin by outlining top-level
functions and add auxiliary functions, substructures, and basic tools as
you discover you need them; while in bottom-up programming you first
code the most fundamental modules and build up from them. It
may be that bottom-up approaches to ATP end up focusing disproportionately on fundamentals and rigor, rather than on producing proofs.
One advocate for top-down programming has written, “We choose to
focus on the [high-level] abstractions in the belief that they are the
more important concepts. Our experience has been that implementation issues distract the students to the point that they do not really
understand the abstractions, particularly with the classical data structures” [23]. We envision top-down ATP proofs to typically have holes,
and to be filled in and fixed up as needed; while bottom-up ATP starts with
some fundamental objects, and inference rules, and builds up to some
area of interest to mathematicians without gaps and always validly. It
is reasonable to expect that top-down and bottom-up approaches to
ATP can interact fruitfully—as they do in programming generally.
Furthermore we propose writing programs that produce proofs which
are importantly similar to the proofs which appear in published journal articles: published proofs are generally valid, but have occasional
mistakes—and thus these are not proofs (in general) which are derived
-----
from axioms via (valid) rules of inference. Published proofs of course
do have mistakes: Kempe’s proof of the Four Color Theorem is a famous historical example [1]. The Fields Medalist Vladimir Voevodsky
discussed several errors (including his own) in published papers [28].
Of course, it might seem to be a great advantage that we can produce
errorless computer proofs—but perhaps commitment to this ideal has
been a constraint on other automated theorem-proving goals?
2. Examples
We sketch the outline of a well-known theorem in graph theory in
order to illustrate ideas. While it is the form of the argument that is
of interest, we define enough terminology that a reader who has seen
this theorem before might recall or reconstruct the proof.
The degree of a vertex in a graph is the number of vertices it is
adjacent to. A Hamiltonian cycle in a graph is a cycle that contains all
the vertices of the graph. Let n be the number of vertices in the graph.
Dirac’s Theorem says that if every vertex has degree at least n/2 then the
graph has a Hamiltonian cycle. The main idea of the standard proof
of this theorem is to consider a longest path in a graph, argue that the
subgraph induced by these vertices is itself Hamiltonian, and then to
argue that this subgraph contains all the vertices of the parent graph.
It is important that the claim has the form: ∀x[P (x) → Q(x)], where
the quantification is over all graphs, where P (x) represents “graph x
has the property that every vertex of x has degree at least n/2” and Q(x)
represents “graph x has the property that x is Hamiltonian”.
In this case one approach to proving Dirac’s Theorem would be to
find an appropriate graph property P′ and predicate P′(x) representing “graph x has property P′” and proving both ∀x[P (x) → P′(x)]
and ∀x[P′(x) → Q(x)]. The main question then is how to automate
finding or producing the needed property P′ (in this case we know it
exists; while in the general case this would be an open question). Let
R(x) represent “the vertices in every longest path in graph x induce
a subgraph of x which is Hamiltonian”, and let S(x) represent “every
longest path in graph x contains all of the vertices of the graph”. So
then the proof of Dirac’s Theorem has the form:
∀x[P (x) → R(x)],
∀x[(P (x)&R(x)) → S(x)],
∀x[(P (x)&R(x)&S(x)) → Q(x)],
and we conclude:
∀x[P (x) → Q(x)].
-----
Expanding the predicates, this says: (1) For every graph x, if x has
the property that every vertex of x has degree at least n/2, then the
vertices in every longest path in graph x induce a subgraph of x which
is Hamiltonian. (2) For every graph x, if x has the property that every
vertex of x has degree at least n/2, and the vertices in every longest path
in graph x induce a subgraph of x which is Hamiltonian, then every
longest path in graph x contains all of the vertices of the graph. (3)
For every graph x, if x has the property that every vertex of x has
degree at least n/2, the vertices in every longest path in graph x induce a
subgraph of x which is Hamiltonian, and every longest path in graph x
contains all of the vertices of the graph, then graph x has the property
that x is Hamiltonian. (4) Therefore, for every graph x, if x has the
property that every vertex of x has degree at least n/2, then x has the
property that x is Hamiltonian.
It is worth noting that P (x)&R(x) and P (x)&R(x)&S(x) represent new predicates (of course constructed by conjunctions of existing
predicates—and which can be given a simple new name should that
be thought useful); this shows the sought-after predicate P′(x) can be
taken to be P (x)&R(x)&S(x). So one approach to automating finding proofs of theorems is to focus specifically on the domain of the
claim, focus specifically on how theorems of a specified form are actually proved, and produce candidate statements that utilize and leverage
all the existing knowledge (conceptual knowledge and theorems) in that
domain.
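To make this concrete, the sketch below shows how such predicates and the semantic check behind a candidate claim ∀x[P (x) → Q(x)] might be coded over a small library of stored graphs. It assumes the networkx package, uses a brute-force Hamiltonicity test that is only suitable for small graphs, and is purely illustrative rather than the authors' implementation.

```python
# Illustrative sketch: graph predicates as ordinary Python functions, plus a
# semantic check that "for all x, P(x) -> Q(x)" holds on every stored graph.
from itertools import permutations
import networkx as nx

def min_degree_at_least_half_n(G):            # P(x): every vertex has degree >= n/2
    n = G.number_of_nodes()
    return all(2 * d >= n for _, d in G.degree())

def is_hamiltonian(G):                        # Q(x): brute force, small graphs only
    nodes = list(G.nodes())
    if len(nodes) < 3:
        return False
    first = nodes[0]
    for perm in permutations(nodes[1:]):
        cycle = [first, *perm, first]
        if all(G.has_edge(u, v) for u, v in zip(cycle, cycle[1:])):
            return True
    return False

def implication_holds(P, Q, stored_graphs):
    """True iff P(x) -> Q(x) for every stored example graph x."""
    return all(Q(G) for G in stored_graphs if P(G))

stored = [nx.complete_graph(5), nx.cycle_graph(6), nx.petersen_graph(), nx.path_graph(4)]
print(implication_holds(min_degree_at_least_half_n, is_hamiltonian, stored))  # True
```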
This proof is a top-down proof as it relies on high-level abstractions
(graphs)—not redefined in terms of low-level abstractions (sets)—and a
library of graph concepts, together with high-level inferences which are
not themselves directly justified in terms of low-level inference rules.
There are three possible responses to such a produced “proof”. A mathematician might accept the proof. Or she might request a justification
for the intermediate claims. Or she might have a counterexample for
one of these claims. It is worth noting that any of these three responses
can advance mathematics. In the first case, we have a new theorem.
In the second case, our mathematician will have to go back and get to
work to provide the needed justification, perhaps inventing new concepts. And in the third case, counterexamples are themselves new
knowledge (and the numerous books with titles like Counterexamples
in Analysis [11] attest to their role in fertilizing mathematics).
Below we will explain how ideas from a program we developed to
make conjectures in any mathematical domain might be leveraged to
-----
produce proofs like this. The key features of our Conjecturing program are state-of-the-art expression generation, and an intuitive heuristic for conjecture production. Produced conjectures are not directly
inferred via syntactic rules, but rather satisfy semantic conditions. In
particular, these conjectures must be true for the (possibly very small
number of) examples known to the program. Wos, for instance, claimed
that, “An emphasis on semantics rather than syntax has far greater potential for producing a dramatic impact on the power of automated reasoning programs” [32]. It is worth noting that the conjectures produced
by this program have richer semantic content than the statements produced by many proof-assistant programs: Conjecturing statements
(conjecture objects) come with methods for instance for evaluating the
truth of the conjecture claim for any given object of the appropriate
type.
This example was chosen to be sufficiently non-trivial, and enough
to suggest an approach that can be generalized to other cases—but
more difficult cases and the issues they raise will be discussed below.
A key question, to be addressed, is where the produced predicates come
from (is there a library of all possible predicates, or are novel predicates
somehow generated for specific problems) and how these predicates are
actually produced (how does one get chosen over another)?
Importantly, if one mathematician were to explain the proof to another, no explicit mention would be made of inference rules, and a common set of concepts would be relied on. The above proof may or may not
suffice. If it doesn’t, further explanation will be required. This may
involve additional concepts and properties. If a mathematician thinks
that an inference isn’t valid, she can supply a counterexample.
Consider now Gowers’ example [2]: he discusses how a human-like
automated theorem prover might prove the theorem that, in a metric
space X, if A and B are closed sets then their union A ∪ B is closed.
A “metric space” might be thought of in at least two ways: first, as a
mathematical object in its own right, or second, as a generalization of
specific metric spaces such as R² (the Cartesian plane). While mathematicians have no problem reasoning about an abstract metric space,
these are qualitatively different from specific examples of metric spaces.
A graph can be represented in a variety of ways, graph properties can
be coded, and whether a graph has a specific property can be checked.
It is less obvious how to represent an abstract metric space—how for
instance can one be coded, or their sets be coded, or the properties their
2
[https://wtgowers.github.io/human-style-atp/2022/09/09/basicalgorithm.html](https://wtgowers.github.io/human-style-atp/2022/09/09/basicalgorithm.html)
(accessed July 2023)
-----
sets may have be coded and evaluated? It may be an important fact
that only very initial steps toward representing such spaces have been taken thus far in the widely-used mathematical computation environment Sage (in contrast, there are well-developed facilities for graphs, integers, matrices and more prosaic
mathematical objects). Thus a useful first step in investigating Gowers’
theorem-of-interest is to take up a specific case like R². Future investigations of how to represent an abstract metric space to a computer
might lead to a better understanding about how to prove facts about
abstract metric spaces. It may be an important fact that properties
about graphs apply to specific graphs—and not to an abstract “graph”
object; and it might then be thought that metric space properties only
apply to specific metric spaces (like R²) rather than an abstract metric
space.
Consider now how a human-like automated theorem prover might
prove the theorem that, in the metric space R², if A and B are closed
sets then their union A ∪ B is closed. In the case of R², it is much
clearer how to represent relevant concepts to a computer: R² consists
of points (x, y) where x and y are real numbers. Sets of points in R² can
be represented in various ways: either by listing specific points or by
giving defining conditions; and algorithms can be written to determine
if at least some of these sets are open or closed, etc. The objects
of interest here are actually pairs of sets in R². Properties here might
include, for instance, the property P that every set in the pair is closed,
and the property Q that the union of the sets is closed; corresponding
predicates would be P (x) representing, “for a pair of sets x=(A,B),
A and B are each closed”, and Q(x), representing, “for a pair of sets
x=(A,B), their union A ∪ B is closed”.
It may be that the program library includes the predicate P′(x) representing “for any pair of sets x=(A,B) and points pA ∉ A, pB ∉ B, there are real numbers εA and εB, and balls B(pA, εA) disjoint from A and B(pB, εB) disjoint from B”. And it may also include the predicate P′′(x) representing “for any pair of sets x=(A,B) and point p ∉ A ∪ B, there is a real number ε and ball B(p, ε) disjoint from A ∪ B”.
Then a produced proof might have the form: ∀x[P (x) → P′(x)], ∀x[(P (x)&P′(x)) → P′′(x)], ∀x[(P (x)&P′(x)&P′′(x)) → Q(x)] and, therefore ∀x[P (x) → Q(x)].
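One crude way to make such pairs of sets in R² concrete to a program, assumed here purely for illustration, is to represent a closed set by a continuous defining function g with A = {p : g(p) ≤ 0}; the union of two such sets is then again a sublevel set (of the pointwise minimum), and hence closed.

```python
# Illustrative encoding (an assumption, not the only choice): a closed subset of
# R^2 is represented by a continuous function g with A = {p : g(p) <= 0}.
import math

def disk(cx, cy, r):                  # closed disk of radius r centred at (cx, cy)
    return lambda p: math.hypot(p[0] - cx, p[1] - cy) - r

def union(gA, gB):                    # {gA <= 0} union {gB <= 0} = {min(gA, gB) <= 0}
    return lambda p: min(gA(p), gB(p))

def member(g, p):
    return g(p) <= 0

A, B = disk(0, 0, 1), disk(3, 0, 1)
AB = union(A, B)                      # again a sublevel set of a continuous function
print(member(AB, (3, 0.5)), member(AB, (1.5, 0)))   # True False
```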
While the suggested top-down approach may seem unrealistic, in
fact existing conjecture-making software can be leveraged to initiate
the proposed research. Fajtlowicz for instance developed the Graffiti
program in the 1980’s to produce graph-theory conjectures of the described form [6, 7, 5, 8, 9]. His heuristic ideas were domain-independent
-----
(that is, not specific to graph theory); more recently the Conjecturing program was developed to extend Fajtlowicz’s Dalmatian heuristic
for the production of conjectures of the described form in any mathematical domain [15, 16].
There are some obvious objections to the proposed top-down ATP
paradigm. We address these immediately and then return to fleshing
out possible top-down approaches.
3. Issues and Objections
A number of issues can be anticipated. Many of these simply capture that humans (even mathematicians) use words in different ways;
in every instance it is important to understand what definition is being used (or what concept the word is meant to name). A logician’s
conception of a “proof” for instance differs substantially from a journal editor’s conception. (While they may claim to actually have the
same conception, what they count as a “proof” differs in practice.)
Lakatos’ Proofs and Refutations [14] is a key source for important discussions of historical examples occurring in mathematics; he discusses
for instance the evolving concepts associated with “polyhedron” and
“convex set” in the context of a discussion of mathematicians understanding of the ideas and proof of Euler’s Polyhedral Formula (that the
number of vertices of a polyhedron plus its number of faces equals two
plus its number of edges.) These issues aren’t often discussed among
mathematicians—but are certainly common.
(1) It might be claimed that a proof consists of valid inferences from preceding statements. Words in natural
languages and even mathematical languages are used in multiple ways. The word “graph” for instance is used both for
graphs of quadratic functions and graphs in graph theory (while
these terms arguably have some commonality, it is easy to see
that they are importantly different: it would make no sense,
for instance, to ask how many edges the graph of a quadratic
function has). Similarly, the word “proof” is used variously.
In one context it means a sequence of statements inferred from
axioms by specified rules; in contrast a “proof” as it appears
in a math journal means a sequence of statements validly inferred from some assumptions. Here “validly inferred” cannot
mean according to any specified rules of inference as these are
never specified (say in the “author instructions” for the journal),
and as such “proofs” do not exist in any mathematics journal
-----
(and would be instantly rejected). By “proof” we mean something closer to what mathematicians actually submit to journals
than the “proofs” of formal logic. It might be claimed that the
“proofs” in mathematical journals can in fact be translated to
formal “proofs”; this is an open question—and beside the point.
(2) It might be claimed that produced proof statements
are not justified unless they are valid inferences. If “justified” is defined to mean “follows from axioms by specified
inference rules” then no published proofs are “justified”. More
interestingly, in an important sense even formal proofs in logic
are not “justified”—at some point, in any proof—steps must be
made which cannot be justified (that is, there must always be
an unjustified inferential leap).
The issue here was recognized at least by the 19th century
and is succinctly described in Lewis Carroll’s story of Achilles
and the Tortoise [2]. Achilles knows the modus ponens inference
schema: given the statement forms A → B and A, infer
B; but in the case of specific statements, “if P then Q” and
“P ”, how is the application of the inference schema justified?
The gist is that at some point we have to go from one statement
to the next without any justificatory rule. (We can often give
justifications, but at some point our justifications necessarily
come to an end).
Not every statement in a math paper can be justified. At
some point there can be no further justification. If the reader
keeps pushing there is necessarily a limit. So even in a state-of-the-art program that proves theorems using just formal inference rules, with a simple user-input of “All connected graphs
have exactly one component” and “The Petersen graph is connected”, the program might output, “The Petersen graph has
exactly one component”. The user might wonder how the program can justify that inference. It might be said that the program has the rule for universal instantiation (UI), the input
universal statement is a claim about “all graphs” and the Petersen graph is certainly a graph, so the program produced the
inference about the Petersen graph. But how can the application of the UI schema be justified in this case? Another rule
would seem to be required: given the UI schema and appropriate statements in the domain of graphs, make an inference
about any particular graph, etc.
-----
Of course, no mathematician wrestles with these endless justifications in practice: we all go along the same way here, agreeing on the same basic inferences. But that’s an important fact.
This is really what a human-oriented theorem proving program
must model. To be human-oriented must mean something like
approximating what human mathematicians do. This is not
meant to be an obscure philosophical point—but maybe a central design idea: a human-oriented theorem-proving program
will make inferential leaps, not always justified, and sometimes
invalid. What this fact highlights is that these programs should
also then have some mechanism so that they can be corrected
and not make the same mistake again. (That’s what humans
do).
No theorem-proving program can expect to do better—the
best it can do is produce sequences of statements where some
(or most) human mathematicians think the inferences are justified, and in general, humans take as justification far less than
long sequences of statements from axioms and logical rules. It
may even be a central design feature to make inferential leaps
with a mechanism for correcting mistakes and not repeating invalid ones in the future.
(3) It might be claimed that a theorem-proving program
must produce “proved” statements that are inferred
validly. What is being proposed is to develop programs that
produce “proofs” of the kind that human mathematicians produce. It is true that this is different—and necessarily different—
from the bottom-up theorem-proving programs where the majority of research has been invested. The word “proof” can
variously mean the proofs of formal logic—or the proofs published in math journals. These are sometimes different. The
first are valid by definition, while the second are only ideally
valid. Producing proofs of both sorts can be interesting, useful,
and illuminating.
(4) It might be claimed that, since every mathematical
statement can be translated into a statement about
sets or other fundamental objects, an automated
theorem proving program should be built from some
choice of fundamental object. Graph theory papers, for
instance, generally prove statements about graphs directly as
graphs (and not as sets of pairs satisfying some conditions);
-----
when researchers write graph theory algorithms, they write algorithms directly for graphs (and not algorithms for sets that
model graphs); and claims about graphs are written in the vocabulary of graph theory. While mathematical objects all can
be modeled as sets, in practice mathematicians don’t actually
switch from talking about their objects of immediate interest
to talking about the sets that can model these objects. There
are a few reasons. These certainly include that statements in
the home domain are simpler, and mathematicians have more
developed intuitions about the objects of their home domain.
So while it is of course true that working only in set theory
would be simplifying in the sense that only a single mathematical domain would have to be coded for the purposes of
producing proofs, any theorem-prover that is to attain the goal
of inputting a domain mathematical statement and occasionally getting a proof out will still need to translate that statement into set theoretic language and thus isn’t necessarily any
simpler. More importantly, the translated statements will almost certainly be longer and more complex—and likely harder
for a theorem-proving program to prove. There are advantages to working with the objects of the various mathematical
domains—and this is the reason mathematicians themselves don’t
translate their conjectures and theorems to set theoretic claims.
The claim that mathematics is all one domain is of no practical import: graph theorists will study graphs, number theorists
will study integers, and in so far as there are analogies, or common tools, the necessary bridges will be built. No mathematician for instance would ever attempt to prove a graph theoretic
claim by translating it into a set theoretic claim and prove it
directly from some axioms for set theory.
(5) It might be asked which axioms should be used in a
theorem-proving program? Questions like this are likely
related to debates about which set theoretic axioms are true.
They are no more (or no less) relevant to program design than
they are to the practice of any mathematician. In fact, most
mathematicians are agnostic about the truth of any specific collections of set theoretic axioms, and these don’t generally come
up in the practice of mathematical domains besides set theory. If you are developing a top-down mathematical theorem-proving program there may be design reasons to include theorems (facts) from that domain. These in a sense will then be
the “axioms” for the program.
-----
4. Human-style ATP and the Conjecturing Program
The Fields Medalist Timothy Gowers recently returned[3] to a project
he initiated in 2008[4] to produce a “human-style” automated theorem-proving program [10]. Here he means one that follows patterns of
reasoning that appear in the proofs of human mathematicians, and with
no exhaustive search (say, through the space of possible proofs
in some language). Gowers is especially interested in programs that
would be useful to mathematicians, and is keen on output that actually
reads like proofs written by human mathematicians. That said, he
seems more interested in program-generated proofs that provide insight
rather than just guarantee truth:
So what would be beneficial to ‘typical’ mathematicians?
One possible answer is to place far less emphasis on
proofs as certifications of correctness and far more on
proofs as explanations. Many mathematicians say that
their aim, when looking for proofs, is to achieve understanding rather than to feel confident that a statement
is true. . . .
Therefore, for an automatic theorem prover to be useful to mainstream mathematicians, it will be highly desirable for it to produce ‘good’ proofs that ‘explain what
is going on’ [10, p.255].
Furthermore “human-style” ATP must also include the design principle of working directly in the domains of the various mathematical
sub-fields, with the objects (integers, matrices, graphs, etc) of those
sub-fields, without translation to any more fundamental sub-field (for
instance, set theory), just as human mathematicians do.
By a “domain” here is meant a specific kind of object, all the properties defined for those objects, together with universal and existential
statements quantified over those objects. The mathematical sub-field
of graph theory includes not just the domain of graphs, but basically
anything graph-related including, for instance, claims about families
of graphs (for instance, the Graph Minor Theorem). So while the
Robertson-Seymour (AKA Graph Minor) Theorem obviously is a theorem of graph theory broadly described, it is important to be clear
that it is not a statement quantified over graphs—it is quantified over
families of graphs [4].
3
gowers.wordpress.com/2022/04/28/announcing-an-automatic-theorem-proving-project/
(accessed June 2022)
4 gowers.wordpress.com/2008/07/28/more-quasi-automatic-theorem-proving/
(accessed June 2022)
-----
We will describe one possible top-down approach utilizing the Conjecturing program, which can produce conjectured necessary conditions for the objects in any domain to have a given property [16].
For instance it can be used to generate necessary condition conjectures
for a graph to have property P . We return to the proposed proof of
Dirac’s Theorem above: ∀x[P (x) → R(x)], ∀x[(P (x)&R(x)) → S(x)],
∀x[(P (x)&R(x)&S(x)) → Q(x)], where we then conclude: ∀x[P (x) →
Q(x)]. The Conjecturing program can generate necessary condition conjectures for graphs x where P (x) holds [16]. It does this
by generating expressions representing possible predicates. The simplest (atomic) ones are conditions that code graph theoretic properties.
More complicated ones are built up from these as boolean functions of
atomic predicates. Furthermore, there is a “truth” constraint on expression output: a proposed predicate R(x) (and corresponding conjecture ∀x[P (x) → R(x)]) can only be produced if the claim P (x) → R(x)
is in fact true for all stored graphs x (that is, if the collection of stored
graphs x that satisfy P (x) are all contained in the collection of stored
graphs x that satisfy R(x)). It is worth noting that if Dirac’s Theorem
is true it must be the case that, for every stored graph x, if P (x) is
true then Q(x) must also be true.
Thus, a process that will yield this proof of Dirac’s Theorem will first
generate the necessary condition R(x) for graphs x where P (x) holds.
What might be wanted is that R(x) is not only true for every stored
graph x where P (x) is true but that there are a minimal number of
stored graphs x where R(x) is true but P (x) is not true (“minimality”
is intentionally undefined here—but can be addressed heuristically).
Note, in this case, that it follows that for every stored graph x, if
P (x)&R(x) is true then Q(x) is true (this is just a consequence of the
fact that, for every stored graph x, if P (x) is true then Q(x) must also
be true). In this case we have a proof line: ∀x[P (x) → R(x)].
It might then be possible to iterate the above procedure. We now try
to generate a necessary condition S(x) for graphs x where P (x)&R(x)
holds. Again we want S(x) to be true for every stored graph x where
P (x)&R(x) is true but that there are a minimal number of stored
graphs x where S(x) is true but P (x)&R(x) is not true. In this case
we have another proof line: ∀x[(P (x)&R(x)) → S(x)].
We will then continue to iterate the procedure as many times as
possible (this is enforced to be finite in the Conjecturing program by including a timeout). Of course, from any number of atomic predicates, complex predicates of arbitrary size can be built up from boolean operators—but conjectures must, in order to be comprehended and investigated, be of some human-readable, human-comprehensible length.
-----
At some point our program won’t be able to produce a new predicate
that meets our conditions within our time constraint, and will output the
last proof line: ∀x[(P (x)&R(x)&S(x)) → Q(x)]. This proof is designed
to justify the conclusion: ∀x[P (x) → Q(x)]. Whether it does, depends
on the human that reads the proof. It may be incorrect (a proof line
may actually be false). The leaps from line to line may be too great
and require more justification. In any of these cases, the human mathematician necessarily holds information that can improve future proofs:
she may know a counterexample to a proof line, and if she is stumped
by an inference will have questions that will motivate new concepts
which can also be fed back to the program.
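The iteration just described can be sketched as a short loop, assuming a finite pool of already-coded predicates and a crude count of "extra" examples standing in for minimality; the real Dalmatian-style heuristics are richer than this, so the sketch is only meant to fix ideas.

```python
# Illustrative sketch of iteratively producing proof lines P -> R1, (P & R1) -> R2, ...
def next_necessary_condition(antecedents, pool, examples):
    """Pick R from the pool so that (conjunction of antecedents) -> R holds on all
    stored examples, preferring an R true on as few other examples as possible."""
    satisfying = [x for x in examples if all(P(x) for P in antecedents)]
    best, best_extra = None, None
    for R in pool:
        if R in antecedents or not all(R(x) for x in satisfying):
            continue
        extra = sum(1 for x in examples if R(x)) - len(satisfying)
        if best_extra is None or extra < best_extra:
            best, best_extra = R, extra
    return best

def generate_proof_lines(P, Q, pool, examples, max_lines=5):
    """Produce lines P -> R1, (P & R1) -> R2, ..., ending with (P & R1 & ...) -> Q."""
    antecedents, lines = [P], []
    for _ in range(max_lines):
        R = next_necessary_condition(antecedents, pool, examples)
        if R is None:                                 # no further predicate within budget
            break
        lines.append((list(antecedents), R))
        antecedents.append(R)
    lines.append((list(antecedents), Q))              # last proof line, to be judged by a human
    return lines
```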
It is possible to include theoretical knowledge (theorems) in this
procedure. Perhaps it is known that ∀x[P(x) → Q1(x)], ∀x[P(x) →
Q2(x)], ..., ∀x[P(x) → Qk(x)], and that the predicates Q1(x), Q2(x), ...,
Qk(x) have been coded. In this case it is enough to prove:
∀x[(P(x) & Q1(x) & Q2(x) & ... & Qk(x)) → Q(x)].
If we let P′(x) be the predicate P(x) & Q1(x) & Q2(x) & ... & Qk(x), then
what we need to prove is ∀x[P′(x) → Q(x)], and we then proceed exactly
as in the case where we were proving ∀x[P(x) → Q(x)]. The significant difference will only be in the produced proof: it will include more
facts and thus perhaps be more convincing to other mathematicians,
making the produced proof more likely to be accepted as a valid
proof.
It is clearly important for the described program to “know” a large
number of graph properties—the more properties that are coded the
greater the chance that the described predicates R(x) and S(x) will
be produced (the same phenomenon occurs with the Conjecturing
program—the more properties are coded for an object-type the more
likely it is that the program will produce a conjecture). It is also important for the program to know examples of significant graphs, ones that
help guide theory development. Historically the famous Petersen graph
was an important example: Petersen introduced it as a counterexample
to Tait’s claim that a bridgeless 3-regular graph is 3-edge-colorable [1].
Thus any further claims about the edge-colorability of cubic graphs
must be responsive to the Petersen graph (that is, must hold for this
example). A top-down program should thus know a large number of
properties as well as a large number of significant examples. Any good
human graph theorist will also know these.
It was mentioned above that the semantic content of conjectures in
the Conjecturing program is richer than that of the proposition objects
in proof formalization programs like Lean. It was mentioned that
these conjecture-objects include evaluation for specific objects among
their methods. It might also be mentioned that the invariant and
property terms in these conjectures are themselves semantically rich.
The concepts/properties/invariants can be viewed as name-function
pairs: each concept comes with a corresponding function that computes
a number (for invariants) or a boolean (for properties). This approach
may be more human-like: to know what a mathematical statement
means is more than to know what the definitions of the terms in the
statement are—but rather how the definition applies (or not) in specific
cases. We wouldn’t say that a student understands the concept of graph
hamiltonicity if she can only give the definition but cannot explain
why some specific graph (maybe a complete graph with five vertices)
is hamiltonian.
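A minimal sketch of this name-function view of concepts, with NetworkX supplying the evaluation functions, is below; the particular invariants and properties listed are illustrative, not the program's actual concept set.

```python
# Sketch: concepts as name-function pairs, so each invariant or property can be
# evaluated on a concrete graph rather than only stated as a definition.
import networkx as nx

invariants = {
    "order": lambda g: g.number_of_nodes(),
    "min_degree": lambda g: min(d for _, d in g.degree()),
    # maximal (not maximum) independent set: a cheap heuristic stand-in
    "independent_set_size": lambda g: len(nx.maximal_independent_set(g)),
}
properties = {
    "connected": nx.is_connected,
    "biconnected": nx.is_biconnected,
    "regular": lambda g: len({d for _, d in g.degree()}) == 1,
}

k5 = nx.complete_graph(5)   # the complete graph on five vertices from the text
print({name: f(k5) for name, f in invariants.items()})
print({name: f(k5) for name, f in properties.items()})
```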
Given some property of interest, say the property of being hamiltonian for a graph, the Conjecturing program can then be asked to
produce necessary conditions for being hamiltonian. A produced necessary condition is true at least for the graphs the program "knows"
(the input graphs). We use Fajtlowicz's Dalmatian heuristic for deciding what to store temporarily and what to eventually produce (an important
feature of this heuristic is that the number of possible stored or conjectured necessary conditions is no more than the number of input
objects; as with humans, not much is kept in memory, and this is true
too for the expressions: while many are generated, very few are
stored).
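One possible reading of a Dalmatian-style acceptance test for necessary-condition conjectures is sketched below; this is an interpretation for illustration, not the program's implementation, but it suggests why the number of stored conditions cannot exceed the number of stored objects.

```python
# Sketch of a Dalmatian-style significance test (one reading of the heuristic).
# A candidate condition R is kept only if (truth) it is true for every stored
# graph with property P, and (significance) it excludes at least one stored
# non-P graph that no previously stored condition excludes.
def dalmatian_accept(R, stored_conditions, graphs, P):
    if not all(R(g) for g in graphs if P(g)):                 # truth constraint
        return False
    for g in graphs:                                          # significance
        if not P(g) and not R(g) and all(c(g) for c in stored_conditions):
            return True
    return False
```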
In doing research with the Conjecturing program, the user is at
every point required to either prove the conjecture or to find a counterexample (while either of these processes might be automated, and some
experiments have been done, it is not a feature of the program). If
say a necessary condition conjecture is proved it can be added to the
program as “knowledge”, which in turn forces the program to make
“better” conjectures moving forward (again, “better” is meant in a
precise sense). And if the user finds a counterexample to a conjecture
this can also be stored—and again future conjectures will necessarily
be “better”.
Given some concepts and either mathematical or propositional
operators, the program systematically generates all expressions (non-atomic, or complex, properties/concepts) of complexity-1 (the atomic
concepts themselves), complexity-2 (unary operators applied to atomic
concepts), etc. This might seem non-human-centric or machine-learning
oriented, but somehow, somewhere in the brain, expressions must be
formed. It might be thought that massive search is “not human”. So
maybe less systematic (more intelligent) expression search might also
be possible. A neat fact though is that the expression-generator in
Conjecturing can generate all possible expressions (millions) up to
any human-comprehensible length in no more than a few seconds on
any modern laptop.
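A sketch of such a complexity-bounded generator over boolean property predicates is given below; the operator set (negation, conjunction, disjunction) and the complexity accounting are illustrative choices, not the exact grammar used by Conjecturing.

```python
# Sketch: systematic, complexity-bounded generation of boolean expressions over
# atomic property predicates (complexity-1 = atomic, complexity-2 = unary op on
# an atomic, and so on).
from itertools import product

def generate_expressions(atomic, max_complexity=3):
    """atomic: dict name -> predicate.  Returns dict expression-name -> predicate."""
    levels = {1: dict(atomic)}
    for c in range(2, max_complexity + 1):
        level = {}
        # unary operator (negation) applied to every expression of complexity c-1
        for name, f in levels[c - 1].items():
            level[f"not({name})"] = (lambda f: lambda g: not f(g))(f)
        # binary operators combining expressions whose complexities sum to c-1
        for a in range(1, c - 1):
            b = c - 1 - a
            for (na, fa), (nb, fb) in product(levels[a].items(), levels[b].items()):
                level[f"({na} and {nb})"] = (lambda fa, fb: lambda g: fa(g) and fb(g))(fa, fb)
                level[f"({na} or {nb})"] = (lambda fa, fb: lambda g: fa(g) or fb(g))(fa, fb)
        levels[c] = level
    return {name: f for lvl in levels.values() for name, f in lvl.items()}
```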
5. More Issues and Objections
The idea that an ATP program might be designed using fast expression generation might raise some additional issues.
(1) It might be claimed that humans don’t build expressions or do massive search the way that the Conjecturing program does. There are at least two related issues
here. Researchers might want to code a program to model human mathematical abilities for the reason that this might be
the best way to code mathematical abilities. If human brains
don’t in fact do massive search in doing the things that mathematicians do then there should be non-search ways to perform
these abilities, presumably more cleverly, more efficiently, etc,
than by searching. In fact, we don’t know how human brains do
the things that mathematicians do. How do they find the property P′ needed in a particular mathematical circumstance? In
fact, some people, Turing included, have thought that search is
essential to intelligence. In the same 1948 paper where Turing
proposed mathematics as an initial field of study for machine
intelligence, he also speculated that “intellectual activity consists mainly in various kinds of search.” [27] And it is also
worth noting that we may have brain abilities beyond what we
consciously realize: Kim Peek was a savant who had memorized
the contents of more than 9,000 books (with access to their contents by page number) [26]; Peek’s case suggests the possibility
of brain structure, capacity and abilities that we haven’t begun
yet to understand.
A second issue is that researchers might want to code programs that do the things intelligent mathematicians do in order to better understand how human mathematicians do them.
Again, this is neither here nor there. If we don't actually know
how a human mathematician could produce a concept P′ to meet the needs of some problem, then excluding methods like
search, simply because it doesn't seem like something a human brain could
possibly do, can only be limiting. Sure, any non-search heuristics are worth exploring, but maybe it is the case that humans
do something analogous to systematic search.
In fact, the conjecturing program does not face the problem of interminable search (which may be the underlying objection to search): it forms expressions, from atomic properties
up to complex properties of still human-comprehensible complexity, very fast (in less than a second).
(2) It might be asked which concepts should be used? There
are reasons to make a variety of choices here, in order to satisfy
a variety of goals. So there can’t be a “correct” answer here.
What would be interesting is more information, from the work
of developers, of what the outcomes of a variety of design choices
are: what works and what doesn’t? Rather than make any a
priori requirements, it might be better to use empirical evidence
to inform design choices.
In fact, any published proof is knowledge intensive—a graph
theorist (or any research mathematician) will know lots of concepts. It would certainly be non-human-oriented to require
a program to reinvent her own concepts. And there are infinitely many graph theoretic concepts (invariants, properties)
that could be invented or created—but the ones that a mathematician will typically see in a proof aren’t created from scratch—
they are exactly ones that are already known or are built-up
from these.
For the purposes of our conjecturing program, the more
concepts the program knows the better the conjectures are (here
“better” is used in a precise sense: for generated upper bound
conjectures, say, the greater the number of input objects where
the minimum of the conjectured upper bounds equals the value
of the target invariant; this is not only an empirical observation
but also a logical necessity due to the design of the program).
The authors have experimented with coding specific graphs and
graph theory concepts—to leverage for the production of better
graph theory conjectures. They are available at:
[https://github.com/math1um/objects-invariants-properties](https://github.com/math1um/objects-invariants-properties)
(3) Many proofs require a new idea or a new concept.
Where will these come from? It might be thought that
automatic concept generation also needs to be included
in the design of any automated theorem-proving program. It is true that new concepts are invented and appear in
the mathematical literature, sometimes with rather limited motivation, but often in response to attempts to solve or address
specific difficulties. It doesn’t seem to be the case that these
are all really logical functions of pre-set atomic properties. And
where did these atomic properties in any specific mathematical
domain come from? Are they necessarily all possible atomic
properties? There is no obvious reason why this should be the
case.
Some, and maybe most, published proofs use existing concepts. So a first-step theorem-proving program might just use
existing concepts. If a proof in fact needs a new idea this program won’t find a proof. So it won’t be the best possible
theorem-proving program, but if it can produce some proofs
some of the time it would be an advance. A program with even
more abilities might do more.
There has for instance been research on the automated generation of new mathematical concepts [17, 3]. This research has
often been motivated by a goal of producing “interesting” concepts; it would be more useful in this context to somehow produce concepts that help advance existing mathematical goals:
find a new concept P′, for instance...
6. Advancing Top-down ATP
There are good reasons to believe that top-down ATP approaches
might lead to programs that can occasionally prove an open conjecture in well-studied domains in mathematics. It may be a key fact
that research on the development of top-down programs must be more
faithful to the actual practice of mathematicians. And, at least, new
approaches might invigorate bottom-up ATP research. Wos for instance claimed, “Heavy and continued experimentation is crucial to
solving many of the research problems that currently confront automated reasoning” [32].
What should be done and what issues have not been addressed? It
is an important fact that mathematics is a large communal enterprise.
It is reasonable to assume that a successful ATP program will function as a single (super) mathematical agent within a community of
mathematicians—it is these mathematicians that will function as the
ultimate arbiters as to what counts as “rigorous” and what concepts
and properties will be included in the program's reasoning, and what
examples its theorizing must be responsive to.
(1) Code lots of knowledge. It would be useful to code every
graph that’s appeared in a graph theory paper, every invariant
or property that's appeared in a graph theory paper, and
every theorem about graphs. (It is also important to be clear
about which concepts apply to graphs as opposed to families
of graphs, pairs of graphs, collections of vertices in a graph of
specific kinds, etc—these are all related, but of importantly different types.) Coding this knowledge is obviously beyond the
abilities of any one human, but as a disciplinary project could
have many advantages. While there are some 10 million connected graphs of order no more than ten, maybe there are only
a few thousand graphs that have appeared in graph theory papers. Coding and using only graphs that humans have produced
is possible and might seem reasonable: these are the ones that
humans have thought enough about to be significant in some
way. Searching through a few thousand graphs, slowly growing over time in response to human interests, should remain
feasible—and might arguably not count as massive search.
Now it might seem ad hoc to code hundreds or thousands of
domain-specific concepts in order to generate potential proofs
in a domain, but in fact human mathematicians don’t start
from scratch; they start with a lot of graph theoretic knowledge. If you want a machine to produce human-like proofs the
machine probably should have human-like knowledge to start.
This might seem reminiscent of Lenat’s Cyc [18] or maybe the
expert-systems paradigm, both of which tried to code a lot of
human knowledge—but in those cases it wasn’t clear how that
coded knowledge would translate to desired outputs. We propose that knowledge will be encoded as concepts that will appear and be used in a specific way, which by design will advance
the output goals of our programs.
(2) Pay careful attention to actual mathematical practice.
It is important to look at what mathematicians actually do and
produce, rather than try to fit all of their productions into a universal framework. Mathematicians prove things about graphs,
integers, matrices, continuous functions, etc. We prove statements about some specific kind of mathematical object, and
never statements say quantified over “mathematical objects”
(viewed as an abstraction). There are many commonalities,
things that we do in proving mathematical statements in any
domain—but maybe we do some things differently in some domains. There’s no reason to start with any requirements of how
acceptable proofs should be, but rather to see what are counted
as acceptable proofs, and then produce similar proofs.
(3) Examples are knowledge. Every mathematician knows this.
Our theories must first of all fit these fundamental examples. It
might seem that examples are irrelevant for automated theorem
proving programs—they don’t appear in proofs. Nevertheless
our theorem statements must hold for the objects of investigation. These statements should be semantically active—they
should be things that can be checked against examples—and
thus it is useful to have some selection of examples. It might
seem that, if a proof is thought of largely syntactically as sequences of statements that are validly inferred from initial statements, examples are irrelevant. But if inferential leaps are allowed, it is crucial to spot-check inferred statements: they must
for instance be true for objects in their domains of quantification. A program can do these checks if there is a store of
examples. And the best examples are ones that have proven
theoretically relevant in previous research.
(4) Mathematics grows in response to interaction. The most
important interactions are when one mathematician makes an
inferential leap that is not understood or is challenged by other
mathematicians. Then more explanation is required—and possibly new concepts will be created and appealed to. Automated
theorem proving programs should grow along with the human
mathematics they are engaged with—among other things their
conceptual base must grow along with the mathematical domains they are engaged in.
Most interesting theorems began life as conjectures that weren’t
immediately resolved. They were investigated and in many (or
most) cases new concepts were formed. To prove something in
a domain never means to prove it as it existed at a certain date
(with only the concepts that had been published up to that
date). It means to prove it with all available concepts, as the
subject grows. A successful theorem-proving program should
also be continually enriched with the concepts in a domain.
Mathematics is often a limiting case. In philosophy, for instance, a
theory of knowledge should explain how we have all kinds of knowledge,
including mathematical knowledge. But mathematical knowledge is
importantly different from other kinds of knowledge, and without accounting for it,
a theory of knowledge is incomplete. Currently artificial intelligence
has made interesting and important advances. Some people think that
machines will soon be smarter than humans. Well, this might be generally true. But again, mathematics seems like an important feature
of human intelligence—and one where artificial intelligence programs
don’t seem to be advancing much on humans. Again, Turing’s vision of
programs that can prove interesting and new mathematical conjectures
is still unrealized.
But why? The reason could be that researchers in these last 60-odd years have focused on a specific paradigm of automated theorem-proving programs: building up from simple facts using valid inference
rules. This may be too distant from actual mathematical practice.
Developing theorem-proving programs that make inferential leaps may
be the way to make future progress. Here we have explained how a
program might make inferential leaps that are in some sense reasonable
and even justified. In practice valid inferences are far more varied than
ones prescribed by logicians: ultimately what counts as a valid inference
is just what mathematicians take to be a valid inference. ATP program
developers should model these inferences.
References
[1] N. Biggs, E. K. Lloyd, and R. J. Wilson. Graph Theory, 1736-1936. Clarendon
Press, 1986.
[2] L. Carroll. What the tortoise said to Achilles. Mind, 4(14):278–280, 1895.
[3] S. Colton. Refactorable numbers—a machine invention. Journal of Integer Sequences, 2(99.1):2, 1999.
[4] R. Diestel. Graph theory, volume 173 of Graduate Texts in Mathematics.
Springer-Verlag, Berlin, third edition, 2005.
[5] S. Fajtlowicz. On conjectures of Graffiti. II. Congr. Numer., 60:189–197, 1987.
Eighteenth Southeastern International Conference on Combinatorics, Graph
Theory, and Computing (Boca Raton, Fla., 1987).
[6] S. Fajtlowicz. On conjectures of Graffiti. In Proceedings of the First Japan
Conference on Graph Theory and Applications (Hakone, 1986), volume 72,
pages 113–118, 1988.
[7] S. Fajtlowicz. On conjectures of Graffiti. III. Congr. Numer., 66:23–32, 1988.
Nineteenth Southeastern Conference on Combinatorics, Graph Theory, and
Computing (Baton Rouge, LA, 1988).
[8] S. Fajtlowicz. On conjectures of Graffiti. IV. In Proceedings of the Twentieth
Southeastern Conference on Combinatorics, Graph Theory, and Computing
(Boca Raton, FL, 1989), volume 70, pages 231–240, 1990.
[9] S. Fajtlowicz. On conjectures of Graffiti. V. In Graph Theory, Combinatorics,
and Algorithms, Vol. 1, 2 (Kalamazoo, MI, 1992), Wiley-Intersci. Publ., pages
367–376. Wiley, New York, 1995.
[10] M. Ganesalingam and W. T. Gowers. A fully automatic theorem prover with
human-style output. Journal of Automated Reasoning, 58(2):253–291, 2017.
[11] B. R. Gelbaum and J. Olmsted. Counterexamples in analysis. Courier Corporation, 2003.
[12] H. L. Gelernter. Realization of a geometry theorem proving machine. In IFIP
Congress, pages 273–281, 1959.
[13] K. Hartnett. Will computers redefine the roots of math? Quanta Magazine, May 19, 2015.
[14] I. Lakatos. Proofs and refutations: The logic of mathematical discovery. Cambridge university press, 1976.
[15] C. E. Larson and N. Van Cleemput. Automated conjecturing I: Fajtlowicz’s
Dalmatian heuristic revisited. Artificial Intelligence, 231:17–38, 2016.
[16] C. E. Larson and N. Van Cleemput. Automated conjecturing III: Propertyrelations conjectures. Annals of Mathematics and Artificial Intelligence,
81(3):315–327, 2017.
[17] D. B. Lenat. On automated scientific theory formation: a case study using the
AM program. Machine intelligence, 9:251–286, 1979.
[18] D.B. Lenat. Cyc: A large-scale investment in knowledge infrastructure. Communications of the ACM, 38(11):33–38, 1995.
[19] W. McCune. Solution of the Robbins problem. Journal of Automated Reasoning, 19(3):263–276, 1997.
[20] A. Newell and H. A. Simon. The Logic Theory machine. IRE Transactions of
Information Theory, 2:61–79, 1956.
[21] S. Ornes. How close are computers to automating mathematical reasoning?
Quanta Magazine, 2020.
[22] M. N. Rabe and C. Szegedy. Towards the automatic mathematician. In International Conference on Automated Deduction (CADE), pages 25–37. Springer,
2021.
[23] M. Reek. A top-down approach to teaching programming. In Proceedings of
the twenty-sixth SIGCSE technical symposium on Computer science education,
pages 6–9, 1995.
[24] J. A. Robinson. A machine-oriented logic based on the resolution principle.
Journal of the ACM (JACM), 12(1):23–41, 1965.
[25] H. A. Simon and A. Newell. Heuristic problem solving: The next advance in
operations research. Operations Research, 6(1):1–10, 1958.
[26] D. A. Treffert and D. D. Christensen. Inside the mind of a savant. Scientific
American, 293(6):108–113, 2005.
[27] A. M. Turing. Intelligent machinery (1948). In B. J. Copeland, editor, The Essential Turing, pages 395–432. Oxford University Press, 2004.
[28] V. Voevodsky. The origins and motivations of univalent foundations. IAS: The
Institute Letter, Summer 2014:8–9, 2014.
[29] H. Wang. Toward mechanical mathematics. IBM J. Res. Develop., 4:2–22,
1960.
[30] A. N. Whitehead and B. Russell. Principia Mathematica to *56, volume 2.
Cambridge University Press, 1997.
[31] H. S. Wilf and D. Zeilberger. An algorithmic proof theory for hypergeometric
(ordinary and “q”) multisum/integral identities. Inventiones mathematicae,
108(1):575–633, 1992.
[32] L. Wos. Automated reasoning: 33 basic research problems. Prentice Hall, 1988.
[33] W.-t. Wu. Mechanical theorem proving in geometries: Basic principles.
Springer Science & Business Media, 2012.
Department of Mathematics and Applied Mathematics, Virginia Commonwealth University, Richmond, VA 23284, USA
Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Ghent, Belgium
| [
"C. E., Larson",
"N., Van Cleemput"
] | 2023-08-08T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2308.02540 | null | https://www.semanticscholar.org/paper/3f8d6627e7a20d199e8ee829f8fa74855ff02d18 |
Toward Adaptive Reasoning in Large Language Models with Thought Rollback | Large language models (LLMs) have been routinely used to solve various tasks using step-by-step reasoning. However, the structure of intermediate reasoning steps, or *thoughts*, is rigid and unidirectional, such as chains, trees, or acyclic-directed graphs. Consequently, the resulting inflexible and forward-only reasoning may not address challenging tasks and fail when the LLM frequently gives false responses, i.e., hallucinations. This paper proposes a new reasoning framework, called *Thought Rollback* (TR), allowing LLMs to adaptively build thought structure while maintaining effective reasoning toward problem-solving under hallucinations. The core mechanism of TR is *rolling back thoughts*, which allows LLMs to perform error analysis on thoughts, and thus roll back to any previously mistaken thought for revision. Subsequently, by including such trial-and-error in the prompt to guide the LLM, each rollback leads to one more reliable reasoning path. Therefore, starting with a simple prompt without human annotations, LLM with TR adaptively and gradually explores thoughts for a correct solution. Comprehensive experiments on mathematical problems and multi-task reasoning demonstrate the state-of-the-art performance of TR in terms of problem-solving rate and interaction cost. For instance, the solving rate of GPT-4 with TR outperforms the current best by $9\%$ on the MATH dataset. The source code is available under the folder *examples/ThoughtRollback* of https://github.com/iQua/llmpebase. | A new reasoning framework, called Thought Rollback (TR), is proposed, allowing LLMs to adaptively build thought structure while maintaining effective reasoning toward problem-solving under “hallucinations”. | # Toward Adaptive Reasoning in Large Language Models with Thought Rollback
**Sijia Chen** [1] **Baochun Li** [1]
**Abstract**
Large language models (LLMs) have been routinely used to solve various tasks using step-by-step reasoning. However, the structure of intermediate reasoning steps, or thoughts, is rigid and
unidirectional, such as chains, trees, or acyclic directed graphs. Consequently, the resulting inflexible and forward-only reasoning may not address challenging tasks and fails when the LLM
frequently gives false responses, i.e., “hallucinations”. This paper proposes a new reasoning
framework, called Thought Rollback (TR), allowing LLMs to adaptively build thought structure while maintaining effective reasoning toward
problem-solving under “hallucinations”. The
core mechanism of TR is rolling back thoughts,
which allows LLMs to perform error analysis on
thoughts, and thus roll back to any previously
mistaken thought for revision. Subsequently,
by including such trial-and-error in the prompt
to guide the LLM, each rollback leads to one
more reliable reasoning path. Therefore, starting with a simple prompt without human annotations, LLM with TR adaptively and gradually
explores thoughts for a correct solution. Comprehensive experiments on mathematical problems and multi-task reasoning demonstrate the
state-of-the-art performance of TR in terms of
problem-solving rate and interaction cost. For
instance, the solving rate of GPT-4 with TR outperforms the current best by 9% on the MATH
dataset. The source code is available under the
[folder examples/ThoughtRollback of https://](https://github.com/iQua/llmpebase)
[github.com/iQua/llmpebase.](https://github.com/iQua/llmpebase)
1Department of Electrical and Computer Engineering, University of Toronto, Toronto, Ontario, Canada. Correspondence to:
Sijia Chen <[email protected]>.
_Proceedings of the 41_ _[st]_ _International Conference on Machine_
_Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by_
the author(s).
**1. Introduction**
Large Language Models, initially designed for text generation with autoregression, are widely recognized to excel
in a diverse array of natural language processing (NLP)
tasks. Yet, at a particular model scale, their reasoning abilities, particularly in scaled-up versions like GPT-4 (OpenAI,
2023) and Llama 2 (Touvron et al., 2023), heavily depend
on prompt engineering. With well-crafted prompts — even
just a simple Let’s think step by step (Kojima et al., 2022)
— LLMs are able to perform step-by-step reasoning and
achieved noteworthy success in mathematical, symbolic,
and common sense tasks. With reasoning, LLMs are capable of producing coherent language sequences, called
_thoughts, which serve as intermediate reasoning steps to-_
ward solving the problem at hand. Extended from simple
chain reasoning (Wei et al., 2022) with linear left-to-right
thoughts, more complex reasoning became feasible in recent works by establishing thought structures that resembled
trees (Yao et al., 2023) and graphs (Besta et al., 2023; Zhang
et al., 2023; Luo et al., 2024).
However, existing thought structures are unidirectional and
thus allow a forward-only reasoning process, meaning that
thoughts are generated sequentially from the start to the end.
The efficacy of this reasoning process hinges on a redundant
and, consequently, inefficient thought structure, requiring
thorough explorations of each thought before progressing to
the next. One major drawback of forward-only reasoning is
that errors can propagate quickly (Yu et al., 2024). Consider
the common case where one thought is incorrect or inaccurate: with forward-only reasoning, all thoughts derived from
it can be misled. Even with revisions based on step-by-step
evaluations (Weng et al., 2023), such propagation of errors
can introduce further deviations from the correct path of reasoning, since LLMs have been found to confidently provide
false information (i.e., “hallucinations” or “laziness”) (Jiang
et al., 2023).
Indeed, humans also provide false information as frequently
and randomly as LLMs do during reasoning but can still
solve challenging problems eventually. This is attributed
to adaptive reasoning, in which one does not pre-define
a fixed structure for thoughts and does not simply deduce
forward but adaptively adjusts the thought structure after
evaluating errors during reasoning. Such reasoning enables
[Figure 1: three panels contrasting (a) the Chain Thought Structure, (b) the Tree Thought Structure, and (c) the Thought Rollback (TR) Structure on the same question, showing example GPT-4 thoughts, errors, error analyses, and rollback-induced revisions; see the caption below.]
_Figure 1. Schematic illustrating thought structures for problem solving with GPT-4. The chain, tree, and our thought rollback (TR)_
structures are plotted based on the NetworkX lib (Hagberg et al., 2008). The question from the MATH dataset (Hendrycks et al., 2021b) is:
_I draw a card from a standard 52-card deck. If I draw an Ace, I win 1 dollar. If I draw a 2 through 10, I win a number of dollars equal to_
_the value of the card. If I draw a face card (Jack, Queen, or King), I win 20 dollars. If I draw a ‘clubsuit’, my winnings are doubled, and if_
_I draw a ‘spadesuit’, my winnings are tripled. ... What would be a fair price to pay to play the game? In (c), we present a partial thought_
structure built by GPT-4 with TR and place the full version of the TR structure in Figure 4 of the Appendix.
N-2 S-2 and thus rolls back to the first step N-1 S-1 to create
two new reasoning paths. The rollback N-6 → N-1 leads
to the revised thought N-7 S-2. The rollback N-2 → N-1
utilizes the error analysis to enhance the prompt and obtains
N-3 S-2, leading to the correct answer 15.48 compared to the
previously mistaken 3.87.
We observe four contributions of TR. First, it is a lightweight
and generalized automatic prompting framework. TR allows LLMs to perform complex reasoning effectively on
various tasks without introducing task-specific human annotations in the prompt or additional human-made designs.
Second, the performance of TR is robust to the “hallucinations” as LLMs are able to reconsider and revise any existing
thoughts adaptively and repeatedly during reasoning. Thus,
third, TR is cost-friendly as the thought structure is built
progressively to reach a solution instead of relying on bulky
search structures (Yao et al., 2023) or question analogies
(Yu et al., 2024). Finally, our evaluation of TR on mathematical problems and multi-task reasoning demonstrates
that TR outperforms some state-of-the-art approaches while
maintaining a lower interaction cost.
**2. Related Work**
By only guiding the reasoning behavior of LLMs, such as
GPT-4 (OpenAI, 2023) and Llama2 (Touvron et al., 2023)
with the text prompt, prompt engineering (Brown et al.,
2020; Kojima et al., 2022) is parameter efficient and often
matches or surpasses fine-tuning performance. Therefore,
plenty of work has been proposed to enable LLMs to perform multi-step reasoning containing a series of intermediate steps, each known as the thought presented as a text
sequence. Starting from chain-of-thought (CoT) (Wei et al.,
2022) prompting, which provides reasoning examples in the
prompt to deduce a chain of thoughts, subsequent endeavors,
humans to begin with one simple or wrong thought but frequently introspect during reasoning, that is, to reconsider
previous steps and build new reasoning paths from these
reflections. In this paper, we argue that this iterative error
correction nature of adaptive reasoning is essentially supported by rollback — jumping to previous steps with a new
experience to reconsider the reasoning.
Therefore, we propose a new reasoning framework, Thought
Rollback (TR), relying upon the rolling back of thoughts to
enable the adaptive reasoning of LLMs for general problem
solving. TR embraces a rollback controller and a prompt
_enhancer that works seamlessly to enable the LLMs to_
generate an effective thought structure from one thought
derived from a simple input prompt, as shown by Figure 1.
LLMs with TR start with generating one thought from a
simple zero-shot prompt containing only the question description. Subsequently, for each generated thought, the
_rollback controller allows the LLM to analyze the obtained_
chain of thoughts and thus determines whether to roll back
and to which previous thought. Once rollback is triggered,
_prompt enhancer accumulates this error analysis as experi-_
ence in the prompt. As a result, by avoiding making similar
mistakes mentioned by experience, LLM is able to generate
a new and more effective reasoning path from the chosen
thought. Therefore, “hallucinations” that occur in thought
or analysis of LLM may not influence reasoning due to
the continuous thought revision guaranteed by the iterative
rollbacks during reasoning. For example, in Figure 1, different from chain (Wei et al., 2022) and tree (Yao et al.,
2023) structures, which assume a fixed and unidirectional
structure, reasoning with rollbacks enables LLM to build a
thought structure adaptively and revise thoughts to achieve
complex but reliable reasoning. Specifically, after reaching
the N-2 S-2 and N-6 S-4, the LLM finds an error in the 2nd step
**3. Preliminary**
**3.1. Problem Statement**
Given a pre-trained large language model (LLM) denoted as $f(\cdot)$, the goal of prompt engineering is to design the prompt $I(\cdot)$ that makes the model perform the desired reasoning behavior toward addressing the given problem $x$. Specifically, multi-step reasoning contains $T$ intermediate steps $z_{0...T} = [z_0, z_1, ..., z_T]$ to bridge $z_0 := x$ and the answer $z_T := y$. To get $z_{0...T}$, we focus on step-wise thought generation, in which each thought $z_n$ is a coherent language sequence behaving as one intermediate reasoning step and is generated as $z_n \sim f(z_n \mid I(z_{0,1...n-1}))$. Therefore, as a thought is the LLM's output, we can define a bad thought caused by the "hallucinations", "laziness", or false reasoning of LLMs as $\hat{z}_n$.
These generated thoughts $z_{0...T}$ naturally follow a specific structure, such as a chain or tree. These structures are unidirectional and thus only support forward-only reasoning, which proceeds in a linear, sequential manner, meaning that LLMs only generate the subsequent thought $z_{n+1}$ from $z_n$. For instance, any edge $e_{n,m}$ of the structure is limited to $m \in \{n, n+1\}$ with $m \geq n$.

This paper focuses on alleviating the effect of bad thoughts on the solution by making the LLM not simply perform forward-only reasoning but achieve adaptive reasoning, which allows LLMs to 1) start from a simple prompt $I(z_0)$, 2) self-organize the thought structure adaptively during reasoning, and thus 3) make revisions and create better new reasoning paths when $\hat{z}_n$ occurs, until getting the solution. Specifically, instead of only advancing the reasoning sequentially, any previously generated thought can be reconsidered by continuously rolling back from the $n$-th thought to a previous $m$-th thought, where $m \in [0, n-1]$.
et al., 2023) augment the chain reasoning. Recent advances
extend the chain structure into structured thoughts. ToT
(Yao et al., 2023) and BoT (Sijia et al., 2024) pre-define
the thought structure as a tree, thus supporting exploring
multiple thoughts in each step before generating the next.
The graph of thoughts (GoT) (Besta et al., 2023) and cumulative reasoning (CR) (Zhang et al., 2023) further instantiate
thoughts toward a solution as the graph structure. Another
line of work focuses on the thought structure that allows
the thought ensemble in each step. Thought propagation
(Yu et al., 2024) explores analogous problems and then aggregates their results to update the solution for the given
question, leading to a radial thought structure. To the best
of our knowledge, none of the existing work supports the
cyclic structure to allow LLMs to revise previous thoughts
or recreate a new reasoning path from the previous step after
being blocked at the current reasoning step. We fill this gap
by proposing the rollback of thought, leading to a thought
structure of directed cyclic graphs.
Despite these achievements, LLMs often struggle with complex tasks, primarily due to the frequent occurrence of “hallucinations”—producing false outputs (Jiang et al., 2023; Yu
et al., 2024), and “laziness”—yielding invalid or no output.
Therefore, after noticing that LLMs have self-verification
(Weng et al., 2023; Madaan et al., 2023) abilities and thus
can analyze the answer for further correcting the errors
in reasoning (Zheng et al., 2023; Wu et al., 2024). However, the most recent work (Huang et al., 2024) argues that
LLMs cannot self-correct their reasoning, emphasizing the
invalidity of applying simple verification to the reasoning
path. Thus, most recent work either builds iterative-based
verification (BoT (Sijia et al., 2024)) or focuses on step-by**step verification (Ling et al., 2023; Lightman et al., 2024).**
Combining these insights, we aim to exploit LLMs to analyze intermediate thoughts during reasoning to correct these
thoughts and adjust reasoning direction adaptively. Continuous verification and revision may eliminate the negative
impact of “hallucinations” or mistakes on the solutions.
Another related research stream is automatic prompting
(Kojima et al., 2022; Zhang et al., 2022), which automatically constructs effective prompts to facilitate reasoning
without human-made and task-specific demonstrations. As
LLMs can learn from mistakes to become better reasoners (An et al., 2023; Sijia et al., 2024), this paper also releases human efforts from the prompt design by boosting the
prompt with the error analysis of thoughts. We also show
that by accumulating error analysis in the prompt during
reasoning, LLMs are able to avoid making similar mistakes
and explore correct solutions with interaction cost far less
than ToT (Yao et al., 2023) and BoT (Sijia et al., 2024).
**3.2. Motivation: Forward-only reasoning fails in bad**
**thoughts**
Forward-only reasoning may fail when the bad thought $\hat{z}_n$ is caused by the following three cases of error propagation.

**Case $[\hat{z}_m, \hat{z}_{m+1}, ..., \hat{z}_n]$.** A bad or illogical thought $\hat{z}_m$ leads to all subsequent errors, where $\hat{z}_n \sim f(z_n \mid I(z_0, ..., \hat{z}_m, ..., z_{n-1}))$ and $m < n$.

**Case $[\hat{z}_m, z_{m+1}, ..., \hat{z}_n]$.** $\hat{z}_m$ does not lead to direct mistakes but causes a bad thought $\hat{z}_n$ after many steps. For instance, this appears when $\hat{z}_m$ behaves as one part of a solution.

**Case $[z_{m...n-1}, \hat{z}_n]$.** A bad thought $\hat{z}_n$ may arise from a previous correct thought $z_m$ because the wrong reasoning direction appears from this step.
The chain reasoning, typically in Chain-of-thought (CoT)
prompting, generates a chain of thoughts z0,1...T (Wei et al.,
2022) sequentially; thus, when any of these cases appears, it gets a mistaken answer $y$. The subsequent Complex CoT (Fu et al., 2023) provides well-designed examples in $z_0$ to decrease the frequency of bad thoughts, and SC (Wang et al., 2022) performs majority voting over many generated chains. They are still trapped in the error propagation of the chain structure, as shown in Figure 1 (a).
The tree reasoning, such as Tree of Thoughts (ToT) (Yao
et al., 2023), Graph of Thoughts (GoT) (Besta et al.,
2023), and Thought Propagation (TP) (Yu et al., 2024),
extends the chain thought structure by generating multiple thoughts in each step. For instance, ToT or GoT contains $P$ thoughts at the $n$-th step, represented as $z_n^{(1)}, ..., z_n^{(P)} \sim f(z_n \mid I(z_{0,1...n-1}))$. These approaches hope to explore
more thoughts to increase the possibility of getting a correct
thought. However, error propagation still exists, and any
cases that appear in one reasoning path will inevitably cause
the failure, as shown in Figure 1 (b).
Therefore, we argue that to address the error propagation,
the reasoning should continuously reconsider the source of
error, that is, to find and fix the previous m-th thought after
reaching n-th thought, where m < n. This is equivalent to
_human-like reasoning, in which one may not simply deduce_
_forward but look back to check previous thoughts to decide_
_to continue, revise old thoughts, or create a new reasoning_
_path. Without losing generality, we refer to such reasoning_
behavior as the rollback. In the context of reasoning, to
enable such a rollback $n \to m$, we allow the LLM to adaptively build the edge $e_{n,m}$ with $m < n$ during reasoning, making the thought structure a directed graph with cycles.
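Since the paper's figures are plotted with NetworkX, such a rollback edge $e_{n,m}$ with $m < n$ can be pictured as a back-edge in a small directed graph; the snippet below is purely illustrative and not the authors' implementation.

```python
# Sketch: thoughts as nodes, rollbacks as back-edges e_{n,m} with m < n, so the
# thought structure becomes a directed graph with cycles.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([(0, 1), (1, 2), (2, 3)])   # forward reasoning z0 -> z1 -> z2 -> z3
G.add_edge(3, 1)                             # rollback 3 -> 1 after error analysis
G.add_edges_from([(1, 4), (4, 5)])           # new reasoning path grown from z1

print(nx.is_directed_acyclic_graph(G))       # False: the rollback introduces a cycle
print(list(nx.simple_cycles(G)))             # e.g. [[1, 2, 3]]
```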
**3.3. Motivation: Error analysis induces better thoughts**
The rollback mechanism is insufficient to support adaptive
reasoning in LLMs. Without introducing more information
to prompt LLMs after each rollback, LLMs may repeat
similar mistakes in the thought generation, which more
rollbacks in reasoning cannot solve. Thus, it is essential to
enable LLMs to know why rollback is triggered and how to
avoid producing the $\hat{z}_{m+1}$.
Motivated by the effectiveness of enhancing prompts by
including an error report (Sijia et al., 2024), we propose that
analysis of [z1, ..., zn] could be rolled back to zm to guide
the thought generation. Also, the work (Ling et al., 2023)
pointed out that LLMs perform more reliable reasoning in
CoT when using step-by-step verification. We aim to allow
LLMs to perform rollback-by-rollback verification. First,
this can produce more analysis to facilitate the subsequent
reasoning. Second, invalid or mistake rollbacks can be
removed, thus also eliminating the cycles in the thought
structure.
_Figure 2. Schematic illustrating the rollback of thought when the LLM with TR reaches the n-th reasoning step. We add $A^{\chi(\cdot)}_{z_{0...m-1}}$ in the reasoning from $z_{m-1}$ to $z_m$ to cover the case that $z_{m-1}$ may also derive from a rollback. We present a clear example from SVAMP in Figure 5 of the Appendix._
**4. Thought Rollback Framework**
**4.1. Reasoning Overview**
In contrast to existing approaches relying on pre-defined unidirectional thought structures, which are limited to forward-only reasoning, Thought Rollback (TR) generates a bidirectional thought structure by adaptively deducing forward and rolling back thoughts. After reaching a reasoning step $z_n$, as shown in Figure 2, TR allows the LLM to roll back to the bad thought $z_m$ after analyzing the existing thought chain $z_{0...n}$. As the error analysis $A^n_m$ is accumulated in the prompt, a new and more reliable reasoning path $z^n_m$ is generated from $z_m$. Therefore, iteratively performing this rollback develops the thought structure from a simple thought into a directed graph with cycles. Such adaptive reasoning is summarized into the following stages.

**Initialization.** Generate the thought $z_1 \sim f(z_1 \mid I(z_0))$.

**Rollback of thoughts.** For each generated thought $z_n$, the rollback controller exploits the LLM to determine a rollback to one thought $z_m \in z_{0...n}$.

- Once the rollback to $m \in [0, 1, ..., n]$ is triggered, the reasoning of the LLM rolls back to the thought $z_{m-1}$ and the prompt enhancer is used to enhance the prompt. Subsequently, the reasoning continues by creating a new $m$-th thought $z^n_m$ and generating $z_{n+1}$, where $z^n_m$ denotes an $m$-th thought deduced from a rollback from $n$.

- When no rollback is required, generate $z_{n+1}$.

**Early stopping.** Stop reasoning once TR yields $K$ solutions. Otherwise, continue the Rollback of thoughts.

**Solution ensemble.** Perform weighted majority voting on the $K$ solutions.
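A condensed sketch of these stages is given below. The functions `generate_thought` and `analyze_chain` are placeholders for the LLM calls made through the prompt enhancer and rollback controller; this is an illustration of the control flow only, not the llmpebase implementation.

```python
# Sketch of the TR control flow (placeholder LLM-call functions; illustrative only).
def thought_rollback(question, generate_thought, analyze_chain, K=8, max_steps=50):
    solutions = []
    frontier = [([question], [])]        # (thoughts z_0..., accumulated error analyses)
    steps = 0
    while frontier and len(solutions) < K and steps < max_steps:
        path, exps = frontier.pop(0)
        thought, is_solution = generate_thought(path, exps)   # prompt enhanced with experience
        path = path + [thought]
        if is_solution:
            solutions.append((path, exps))
            continue
        analysis, first_bad = analyze_chain(path)             # rollback controller
        if first_bad is not None:                             # roll back to one step before it
            dest = max(first_bad - 1, 0)
            frontier.append((path[:dest + 1], exps + [analysis]))   # new path with experience
        frontier.append((path, exps))                         # existing path keeps deducing forward
        steps += 1
    return solutions
```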
**4.2. Rolling Back Thought with Reasoning Analysis**
During reasoning, rollback controller enables an adaptive rollback by exploiting LLM to determine the rollback
_n →_ _m. However, making LLM know the concept of roll-_
back may introduce unnecessary complexity. Thus, TR
supports the rollback mechanism by performing error analysis on thoughts. Specifically, with a task-agnostic prompt $I_R(R, [z_{0...n}])$, where $R$ is a common error analysis instruction, the LLM is guided to analyze a thought chain $[z_{0...n}]$, leading to the error analysis $A^n_m \sim f(A \mid I_R(R, [z_{0...n}]))$. Eventually, the LLM is able to identify the indexes $\widehat{M}$ of bad thoughts $\hat{z}_{m \in \widehat{M}}$.

To decide which thought to roll back to from $z_n$, TR follows the rule of rolling back to the step immediately before the first bad thought. There are two reasons for this. First, generating the next thoughts from a bad thought is unreasonable. Second, we aim not to remove the bad thought but to generate a new reasoning path. Thus, we choose $z_{m-1}$ as the rollback destination, where $m = \arg\min \widehat{M}$. Besides, one thought will not be selected as the rollback destination more than $U$ times, to avoid all subsequent thoughts rolling back to the same previous thought. Thus, when the number of rollbacks to a thought reaches $U$, the next earliest one, $m = \arg\min \{\widehat{M} \setminus \arg\min \widehat{M}\}$, will be selected.

Therefore, when $\widehat{M}$ is not empty, TR generates the next thought $z^n_m$ from $z_{m-1}$, meaning that a new reasoning path $[z_0, ..., z_{m-1}, z^n_m]$ derived from the rollback $n \to m$ is created for the thought structure. As the existing $z_{0...n}$ remains unchanged, the reasoning continues by generating $z_{0...n+1}$. For ease of description, we define $n \to m$ as the outgoing rollback for $z_{0...n+1}$ and the incoming rollback for the new reasoning path $[z_0, ..., z_{m-1}, z^n_m]$.
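The destination rule can be written compactly as follows; the function is an illustrative sketch of the rule just described (roll back to one step before the earliest bad thought, skipping destinations that have already been used $U$ times), not the paper's code.

```python
# Sketch of the rollback-destination rule (illustrative).
def rollback_destination(bad_indices, rollback_counts, U=3):
    """bad_indices: indexes of bad thoughts reported by the error analysis.
    rollback_counts: how often each thought has already served as a destination."""
    if not bad_indices:
        return None                               # no rollback needed
    for m in sorted(set(bad_indices)):            # earliest bad thought first
        dest = max(m - 1, 0)                      # one step before the bad thought
        if rollback_counts.get(dest, 0) < U:
            return dest
    return None                                   # every candidate has hit the cap U
```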
**4.3. Enhancing the Prompt with Errors as Experience**
Through the iterative rollback from the $n$-th to the $(m-1)$-th step, TR gains the opportunity to address the three scenarios outlined in subsection 3.2. However, as discussed in 3.3, regenerating a next thought $z^n_m$ based on the same prompt may repeat existing mistakes in the new reasoning path. Especially considering that TR is built upon a prompt containing no human annotations, the thought regeneration after the rollback is equivalent to randomly exploring $z^n_m$ as in unidirectional structures.

Therefore, TR embraces the prompt enhancer to also roll back the error analysis $A^n_m$ to the $(m-1)$-th thought. Unlike BoT (Sijia et al., 2024) with outcome analysis, which utilizes error feedback on the final result, TR performs process analysis, i.e., rollback-by-rollback verification, to get error reports on intermediate thoughts that guide the subsequent thought generation. With error analysis, each rollback is regarded as a trial on generating subsequent thoughts for $z_{m-1}$, because the analysis contains a trial experience: what mistakes may appear in the following steps of $z_{m-1}$. By including $A^n_m$ as an experience in the prompt, LLMs can avoid repeating existing bad thoughts after learning from mistakes. Eventually, each rollback $n \to m$ creates the error experience $A^n_m$.

**Experience accumulation.** The thoughts $Z^{\chi(\cdot)}_{z_{0...q-1}} = \{ z^i_j \mid j \in [0, q-1],\ i \in \chi(j) \}$ of a reasoning path $z_{0...q-1}$ may derive from multiple incoming rollbacks, where $\chi(j)$ is the set of rollbacks whose destination is the $j$-th thought of this path. As each rollback creates an error experience from one trial of the given question, incoming rollbacks lead to a series of experiences $A^{\chi(\cdot)}_{z_{0...q-1}}$. By accumulating this ensemble of trial-and-error reasoning experiences as in-context learning examples in the prompt, the LLM learns from more experiences to generate the correct next thought $z_q \sim f\big(z_q \mid I\big(A^{\chi(\cdot)}_{z_{0...q-1}}, z_{0...q-1}\big)\big)$, as shown in Figure 2 and the two examples of Figure 8 and Figure 10.
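A sketch of how such accumulated analyses might be folded into the prompt as in-context experience is shown below; the template wording is invented for illustration and is not the paper's exact prompt.

```python
# Sketch of the prompt enhancer: accumulated error analyses become "experience"
# in the prompt used to generate the next thought (illustrative template).
def enhance_prompt(question, thoughts, experiences):
    exp_block = "\n".join(
        f"Experience {i + 1} (analysis of an earlier failed attempt): {a}"
        for i, a in enumerate(experiences)
    )
    chain = "\n".join(f"Step {i + 1}: {t}" for i, t in enumerate(thoughts))
    return (
        f"Question: {question}\n\n"
        f"{exp_block}\n\n"
        f"Reasoning so far:\n{chain}\n\n"
        "Considering the experiences above and avoiding those mistakes, "
        "generate the next reasoning step."
    )
```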
**4.4. Ensembling Solutions**
TR may create a massive number of final solutions, as each adaptively triggered rollback leads to one more new reasoning path toward answering the question. Thus, we directly stop reasoning when $K$ solutions $\{z_{0...T_k}\}_{k=1}^{K}$ are obtained. Then, weighted majority voting (W-Voting) is performed on them for a final solution. Specifically, for the solution $z_{T_k}$, the weight is higher when 1) it has a lower number of outgoing rollbacks, denoted as $\alpha_{T_k}$, meaning that fewer bad thoughts are identified; and 2) more experiences $\beta_{T_k} = |A^{\chi(\cdot)}_{z_{0...T_k}}|$ are accumulated along this reasoning path. Eventually, TR outputs the final solution as $\arg\max_{v \in V} \sum_{k=1}^{K} \mathbb{I}(v_k = v)\,(\beta_{T_k} - \alpha_{T_k})$, where $V$ is the collection of solutions and $v_k$ is the value of the $k$-th solution.
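The weighted vote itself is a few lines; in the sketch below each solution is summarized by its extracted value together with its $\alpha$ (outgoing rollbacks) and $\beta$ (accumulated experiences), following the formula above.

```python
# Sketch of the weighted majority vote over the K solutions, with per-solution
# weight (beta - alpha).
from collections import defaultdict

def weighted_vote(solutions):
    """solutions: list of (value, alpha, beta) triples for the K reasoning paths."""
    scores = defaultdict(float)
    for value, alpha, beta in solutions:
        scores[value] += beta - alpha
    return max(scores, key=scores.get)

# e.g. weighted_vote([("15.48", 1, 3), ("3.87", 2, 1), ("15.48", 0, 2)]) -> "15.48"
```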
**5. Experiments**
**Datasets. We conduct experiments on two streams of tasks.**
For the mathematical problems, we evaluate the performance of TR on the test sets of GSM8K$_{1319}$ (Cobbe et al., 2021), SVAMP$_{300}$ (Patel et al., 2021), AQuA-RAT$_{254}$ (Ling et al., 2017), MATH$_{900}$ (Hendrycks et al., 2021b), and TheoremQA$_{400}$ (Chen et al., 2023b), where numerical subscripts indicate sample size. For TheoremQA$_{400}$, we specifically use the half of the test set without visual information, leading to 400 samples. Following ToT (Yao et al., 2023), we utilize 100 challenging games of Game of 24. For multi-task reasoning, such as symbolic reasoning, we extract 900 samples from 56 categories of MMLU (Hendrycks et al., 2021a), i.e., MMLU$_{900}$.
_Table 1. Evaluating the reasoning ability of TR with GPT models under four well-known mathematical problems. TR is specifically_
evaluated against leading methods such as Faithful-CoT (Lyu et al., 2023), and CSV (Zhou et al., 2024), each achieving state-of-the-art
(SOTA) performance on the SVAMP and MATH datasets, respectively. The best results, apart from the SOTA, are in bold. The 5 shots of
CoT examples used by our TR experiments are extracted randomly from the trainset of the same category.
| Methods | ZeroShot | GPT-4 GSM8K | GPT-4 SVAMP | GPT-4 AQuA-RAT | GPT-4 MATH | GPT-3.5-turbo GSM8K | GPT-3.5-turbo SVAMP | GPT-3.5-turbo AQuA-RAT | GPT-3.5-turbo MATH |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SOTA | ✗ | 98.78 | 95.4 | 85.048 | 84.32 | 89.2 | 84.3 | 60.68 | 40.56 |
| ZeroShot | ✓ | 87.1 | 79.33† | 50.4 | 42.2 | 76.3 | 74.8 | 53.5 | 24.5† |
| ZeroShot-CoT | ✓ | 93.1† | 84.67† | 73.2 | 44.7 | 79.6 | 77.5 | 53.9 | 30† |
| CoT | ✗ | 94.25† | 91.95 | 75.28 | 48.98 | 87.45,sc15 | 835 | 59.45 | - |
| C-CoT | ✗ | 94.98 | 90.58 | 77.58 | 50.48 | 82.88 | 81.08 | 57.48 | 34.18 |
| PHP+C-CoT | ✗ | 95.58 | 91.98 | 79.98 | 53.98 | 85.18 | 83.18 | 60.68 | 36.58 |
| BoT | ✓ | 97.1 | 92.67 | 81.5 | 62.44 | - | - | - | **40.56** |
| BoT+CoT | ✗ | **98.78** | **958** | 85.048 | 66.338 | - | - | - | - |
| Chain Reasoning† | ✓ | 89.76 | 80.33 | 74.41 | 45.44 | 76.72 | 71.67 | 47.64 | 26.89 |
| ToT Reasoning† | ✓ | 90.9 | 84 | 76.38 | 48 | 79.83 | 78.33 | 54.72 | 30.44 |
| TR† | ✓ | 94.24 | 89 | 79.92 | 55 | 82.49 | 77.67 | 56.69 | 32.78 |
| TR + CoT5† | ✗ | 96.06 | 91.33 | 84.25 | 62.56 | 86.5 | 79.67 | 57.87 | 31.44 |
| TR + W-Voting† | ✓ | 96.36 | 93 | **87.8** | 71.89 | 85.9 | 82.33 | **63.39** | 39.78 |
| TR + CoT5 + W-Voting† | ✗ | 96.97 | 93.33 | 87.4 | **72.11** | 87.79 | 82.67 | 62.6 | 35.89 |
**Large language models.** We utilize GPT-3.5-turbo (gpt-3.5-turbo-16k-0613), GPT-4 (gpt-4-1106-preview) (OpenAI, 2023), and Llama2 (Touvron et al., 2023), including Llama2-13b (Llama-2-13b-chat-hf) and Llama2-70b (Llama-2-70b-chat-hf), where 1b means one billion parameters. For LLMs with TR, the default settings for temperature and top_p are 0.7.
**Baselines. Apart from zero-shot prompting (Kojima et al.,**
2022), the comparison approaches include Chain-of-thought
(CoT) (Wei et al., 2022), SC (Wang et al., 2022) and Complex CoT (Fu et al., 2023) (C-CoT), where the subscript 5
or 8 indicates the number of shots while the subscript sc
denotes the number of sample paths. Also, TR is compared
with the related approaches, such as Boosting of thoughts
(BoT) (Sijia et al., 2024), Tree of thoughts (ToT) (Yao et al.,
2023), Cumulative Reasoning (CR) (Zhang et al., 2023),
and Progressive-Hint Prompting (PHP) (Zheng et al., 2023).
ToT follows the best first search (BFS). The breadth limit
of ToT is 6 while BoT performs 10 boosting iterations on
15 binary trees. We also include the state-of-the-art (SOTA)
methods, such as CSV (Zhou et al., 2024) that relies on
GPT-4 Code Interpreter, on each dataset as an additional
comparison. We set K = 8 for possible early stopping of
TR in all experiments.
**Metrics. All experiments report the Solve Rate (%) of the**
questions. We make the LLM explicitly report the solution value after strings such as "The solution is" and "The choice is" in $z_{0...T}$. The value is then directly extracted and compared with the ground truth. The Interaction Number refers to the number of times the LLM must be consulted until a conclusive response is received.
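A sketch of this extraction step is given below; the regular expression and marker strings are illustrative.

```python
# Sketch of the answer extraction used for the Solve Rate metric: pull the value
# after markers such as "The solution is" and compare it to the ground truth.
import re

def extract_answer(text):
    m = re.search(r"The (?:solution|choice) is[:\s]*(.+)", text)
    return m.group(1).strip().rstrip(".") if m else None

def is_correct(response, ground_truth):
    ans = extract_answer(response)
    return ans is not None and ans == str(ground_truth)

print(is_correct("... therefore. The solution is 15.48", "15.48"))  # True
```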
**Reproducibility. The results and methods marked with a**
superscript _[†]_ are the results we obtained based on the open
source platform llmpebase. Others without such a tag are
collected from existing work, as shown in the Appendix’s
subsection C.1.
**5.1. Main Evaluation Results**
**Adaptive reasoning. With zero-shot prompting and no**
pre-defined thought structures, such as the chain of Chain Reasoning and the tree of ToT Reasoning, TR allows GPT-3.5-turbo, GPT-4, and Llama2 to self-organize and explore
thought structures toward answering the question. Under
challenging tasks, LLMs with TR adaptively build complex structures, as shown by examples in the Appendix, by
continuously rolling back from thoughts with “hallucinations”. For simpler tasks, lightweight structures are built by
LLMs with TR. As such, with the ability to adjust thoughts
and prompt the LLMs with accumulated experience of errors during reasoning, TR achieves a high solving rate and
relatively lower resource cost.
**Overall comparison.** We show, especially in Table 1,
that compared to existing multi-step reasoning approaches,
LLMs with TR achieve the best and the second best solving
rate on AQuA-RAT & MATH and GSM8K & SVAMP, respectively. Meanwhile, in contrast to the resource-costly SOTA
ones, such as BoT, which undertakes reasoning through
massive tree thought structures, and CSV, which relies on
GPT-4 Code Interpreter, TR yields notable performance by
interacting less with relatively simpler LLMs. First, TR
surpasses BoT by 6.3 on AQuA-RAT and 9.45 on MATH
using GPT-4. In particular, TR requires only around 40 interactions
compared to the 500+ interactions of BoT. Using
TR with zero-shot prompting, GPT-4 and GPT-3.5-turbo
outperform the ones using few-shot CoT prompting and
self-consistency. Under hard math problems, especially
MATH, the solving rate of TR is 17.99 and 3.28 higher
than PHP+C-CoT under GPT-4 and GPT-3.5-turbo, respectively. Second, LLMs with TR adaptively explore thought
structures, which is significantly better than pre-defined
forward-only Chain Reasoning and ToT Reasoning. After
large-scale interactions with LLMs, the performance of the
latter two zero-shot prompting methods only approaches the
8 shots Complex CoT. Figure 3 (a) shows that compared
to ToT reasoning, our TR requires one-third or less of the
interactions to achieve a new state-of-the-art. Finally, with
an average 28 interactions with GPT-4, TR yields a competitive solving rate 87.56 on the multi-task dataset MMLU,
which contains symbolic reasoning.
We emphasize that TR is more effective on challenging problems, as shown in Table 2 and Table 3. At the level-5 difficulty of MATH, GPT-4 with TR is only 2.84 lower than CSV, which uses the GPT-4 Code Interpreter as an auxiliary. The solving rate of GPT-4+TR is 4.35 higher than the current best on TheoremQA. Along with better performance, the interaction cost is reduced to an acceptable range by TR. Another observation is that LLMs with TR require more interactions on hard problems than on simpler ones. For instance, as shown in Figure 3 (a) and the first two columns of Table 2, the average interaction cost increases to around 60. Also, in Table 3, the results on the Game of 24 dataset show that GPT-4 with TR requires an average of 32 interactions to reach a solving rate of 87%, which is only 7% lower than CR (b = 2) (Zhang et al., 2023). This is a remarkable achievement, as CR-related approaches rely on human-made demonstrations while TR uses zero-shot prompting. Moreover, introducing CR's demonstrations into the prompt of TR increases the solving rate to 93% while reducing the number of interactions to 24. In addition, the better performance of TR + CR-Prompt shows that including demonstrations reduces the reliance on majority voting.
**Effect of the rollback of thoughts.** In Figure 3 (b) and (c), we specifically present the relation between rollbacks and the solving rate of reasoning paths, and the decrease in the failure rate at the first step of the Game of 24. We define a reasoning path $z_{0...T}$ as In Rollback if a majority of its thoughts, represented by $Z^{\chi(\cdot)}_{z_{0...T}}$ in subsection 4.3, are derived from incoming rollbacks. $z_{0...T}$ is defined as Out Rollback if more than two of its thoughts trigger outgoing rollbacks, and as No Rollback if it includes no rollbacks. Figure 3 (b) presents these three types of reasoning paths that TR generates during reasoning on four datasets. As TR allows the error analysis of each rollback to be accumulated in the prompt, as discussed in subsection 4.3, an In Rollback path generally benefits from exploiting more experience during reasoning.
_Table 2. Evaluating TR in the zero-shot setting on challenging mathematical problems and multi-task reasoning. With GPT-4, the existing SOTA zero-shot methods on the level-5 difficulty of MATH, TheoremQA, and MMLU are from CSV (Zhou et al., 2024), PoT (Chen et al., 2023a), and BoT (Sijia et al., 2024), respectively. We use 324 samples out of 1324 for the MATH dataset. The average number of interactions with the LLM needed to solve each problem is reported in parentheses after each result._
| Methods | MATH-level5 (324) | TheoremQA (400) | MMLU (900) |
|---|---|---|---|
| SOTA | 55 (3) | 52.4 (1) | 93.2 (900+) |
| Llama2-13b† | 2.47 (1) | 9.3 | 51.78 (1) |
| Llama2-70b† | 8.64 (1) | 25.5 | 65.44 (1) |
| GPT-4 + ZeroShot-CoT† | 23.46 (1) | 43.75 (1) | 82.33 (1) |
| GPT-4 + Chain Reasoning† | 22.53 (11) | 36.5 (9) | 78.11 (6) |
| GPT-4 + ToT Reasoning† | 24.38 (150) | 38.25 (110) | 79.22 (70) |
| Llama2-70b + TR + W-Voting† | 12.65 (36) | 29 (34) | 58.33 (22) |
| GPT-3.5-turbo + TR + W-Voting† | 20.06 (38) | 39.5 (30) | 70.11 (18) |
| GPT-4 + TR† | 31.48 | 46.25 | 84.67 |
| GPT-4 + TR + W-Voting† | 52.16 (62) | **56.75 (56)** | 87.56 (28) |
_Table 3. Utilizing TR with GPT-4 achieves competitive performance while maintaining a low interaction cost on the Game of 24 dataset._

| Method | Solving rate | #Interactions | Generate tokens | Prompt tokens |
|---|---|---|---|---|
| Standard | 7.3 | 1 | - | - |
| Standard (best of 100) | 33 | 100 | 1.8k | 1k |
| CoT | 4 | 1 | - | - |
| CoT (best of 100) | 49 | 100 | 6.7k | 2.2k |
| CoT-SC (sc = 100) | 9 | 100 | - | - |
| ToT (b = 5) | 74 | 30 | 5.5k | 1.4k |
| CR (b = 2) | 94 | 27.4 | - | - |
| CR (b = 5) | **98** | 29.72 | - | - |
| BoT | 83.7 | 724 | 15.8k | 18.6k |
| BoT + CoT5 | 84.9 | 543 | 11.2k | 15.5k |
| Chain Reasoning† | 5 | 3 | 0.14k | 0.22k |
| ToT (b = 5) Reasoning† | 25.6 | 36 | 2.82k | 1.63k |
| TR† | 70 | 32 | 5.96k | 9.98k |
| TR + W-Voting† | 87 | - | - | - |
| TR + CR-Prompt† | 86 | 24 | 5.03k | 8.1k |
| TR + W-Voting + CR-Prompt† | 93 | - | - | - |
Therefore, as Figure 3 (b) verifies, In Rollback paths consistently correspond to a higher solving rate because the LLMs are prompted with these trial-and-error experiences. On the contrary, Out Rollback paths have significantly lower solving rates because they include more bad thoughts (“hallucinations”), which consequently trigger more rollbacks after being identified by LLMs. Similarly, when the first step of the Game of 24 derives from 0 to 5 incoming rollbacks, the failure rate decreases significantly from higher than 0.8 to lower than 0.3. The final observation is that longer spans of rolling back are more important for revising thoughts. Figure 3 (c) shows that a first step caused by a rollback 3 → 0 has a higher success rate than one caused by 2 → 0 or 1 → 0. Therefore, we argue that **a rollback, especially one generated from a later reasoning step (thought), contributes more to the thought revision.** This may be because the error analysis brought by rollbacks of later reasoning steps contains more information and is more helpful in improving prompts.
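For concreteness, the three path types can be written as a small classification rule over per-thought rollback counts (an illustrative sketch under the definitions above; the `Thought` container and the `Mixed` fallback label are ours, not the paper's):

```python
from dataclasses import dataclass

@dataclass
class Thought:
    incoming_rollbacks: int  # rollbacks that end at this thought
    outgoing_rollbacks: int  # rollbacks this thought triggered

def classify_path(path: list) -> str:
    """Label a reasoning path following the definitions used for Figure 3 (b)."""
    from_rollback = sum(t.incoming_rollbacks > 0 for t in path)
    triggering = sum(t.outgoing_rollbacks > 0 for t in path)
    if from_rollback > len(path) / 2:      # majority of thoughts derive from rollbacks
        return "In Rollback"
    if triggering > 2:                     # more than two thoughts trigger rollbacks
        return "Out Rollback"
    if from_rollback == 0 and triggering == 0:
        return "No Rollback"
    return "Mixed"                         # paths not covered by the three labels above

# Example: a 3-step path where every step was reached via a rollback.
print(classify_path([Thought(1, 0), Thought(2, 0), Thought(1, 1)]))  # "In Rollback"
```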
[Figure 3: (a) “Interaction Cost Distribution” comparing TR, Chain Reasoning, ToT Reasoning, and PHP+C-CoT; (b) “Relation between Rollbacks and Reasoning Paths”, reporting the number of reasoning paths and the solving rates of In/Out/No Rollback paths on GSM8K, SVAMP, AQuA-RAT, and MATH; (c) “Thoughts failed at the first step”, showing error rates versus the number of incoming rollbacks (0, 1, 3, 5, 7) and rollback spans (1→0, 2→0, 3→0).]
_Figure 3. Effectiveness of TR on interaction cost saving and thought revision through the rolling back of thoughts. (a) distributions of the interactions required to address problems in four datasets; (b) solving rates of three types of reasoning path: “No Rollback” — thoughts receive no rollbacks, “Out Rollback” — rollbacks triggered by mistaken thoughts, and “In Rollback” — thoughts derive from rollbacks; (c) the reduction of failure rates due to rollbacks in the first step of the Game of 24, where [0, 1, 3, 5, 7] denotes the number of rollbacks that cause the first step and 2 → 0 means the first step derives from a rollback from the 2nd step._
**Concerns.** First, TR contributes less to performance enhancement when the LLM does not have solid inherent ability. In particular, on AQuA-RAT and MATH in Table 1, GPT-3.5-turbo with TR is only 2.91 and 3.25 higher than PHP+C-CoT, gains that are significantly smaller than those under GPT-4. Likewise, in Table 2, the solving rate of Llama2-70b with TR is around 4% higher than Llama2-70b alone but costs more interactions. Therefore, the performance of TR depends on the LLM's ability to perform correct error analysis and to understand the experience in the prompt. Second, to address harder problems, LLMs with TR tend to build over-complex thought structures, which often contain more than 100 thoughts. The main reason is that a rollback generated by bad thoughts identified by the LLM, or a mistaken rollback caused by hallucinations of the LLM, leads to one more reasoning path. This happens frequently on challenging tasks; thus, LLMs self-organize a large-scale thought structure toward solutions. We present more visual examples in Figure 4, Figure 9, and Figure 10 of the Appendix.
**5.2. Main Insights**

The notable performance enhancement of TR in terms of both solving rate and interaction cost shows that adaptively adjusting thoughts, supported by the rollback of thoughts during reasoning, is core to the success of LLMs in complex mathematical reasoning. In addition, we can gain three more insights.

**Experience accumulation of error analysis from intermediate thoughts is better than that obtained by analyzing the whole reasoning path.** Existing work (Huang et al., 2024) pointed out that LLMs are unable to revise reasoning based on outcome analysis, which gives feedback on the final reasoning. Thus, BoT (Sijia et al., 2024), which relies on outcome analysis, had to embrace more carefully selected outcome analysis to prompt LLMs. Our TR opens a new direction of relying on process analysis, which provides error analysis for each intermediate reasoning step (rollback-by-rollback verification) to revise thoughts adaptively during reasoning. Besides, with the rollback of thoughts, outcome analysis becomes a special case of process analysis when the analysis is used not to re-do reasoning but to adjust a previous thought to create a correct reasoning path.
**Experience-guided Solution Ensemble is critical to the effectiveness of trial-and-error reasoning.** After stopping reasoning, LLMs with TR yield K reasoning paths due to the adaptive exploration. Each reasoning path caused by one rollback of TR can be regarded as a trial at addressing the problem. When LLMs frequently make mistakes and produce “hallucinations”, the solution obtained in any single trial may not be correct. Since TR exploits the error analysis of each incoming rollback as experience to prompt LLMs, a solution from a reasoning path with more incoming rollbacks is more acceptable. Therefore, we should ensemble these solutions by filtering out the ones with limited experience or many bad thoughts. As shown by the comparisons between TR and TR + W-Voting in Table 1, Table 2, Table 3, and Figure 3 (b), such an experience-guided solution ensemble is critical.
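A minimal sketch of such an experience-guided ensemble (our reading of W-Voting, not the authors' exact implementation): each reasoning path votes for its final answer with a weight that grows with its number of incoming rollbacks, so answers backed by more accumulated experience dominate. The additive `1 + n_in` weighting is an assumption made for illustration.

```python
from collections import defaultdict

def weighted_vote(solutions: list, incoming_rollbacks: list) -> str:
    """Pick the answer whose supporting paths carry the most rollback experience.

    solutions[i] is the final answer of the i-th reasoning path, and
    incoming_rollbacks[i] counts the rollbacks whose experience fed that path.
    The additive 1 keeps rollback-free paths from being discarded entirely;
    this weighting scheme is illustrative, not the paper's exact formula.
    """
    scores = defaultdict(float)
    for answer, n_in in zip(solutions, incoming_rollbacks):
        scores[answer] += 1 + n_in
    return max(scores, key=scores.get)

# Example with K = 4 reasoning paths: "6" wins because its paths accumulated
# more incoming-rollback experience than the competing answers.
print(weighted_vote(["6", "5", "6", "7"], [3, 0, 2, 1]))  # "6"
```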
**Weak LLMs may not identify multiple targets mentioned in the prompt.** Including CoT examples in the prompt improves the solving rate of LLMs with TR, as shown in the GPT-4 column of Table 1. However, for the more challenging AQuA-RAT and MATH datasets in the GPT-3.5-turbo column, adding CoT causes a significant performance decrease. We argue that it may be hard for weak LLMs to understand and distinguish instructions with different targets in the prompt. For example, the instruction of the CoT examples emphasizes how to follow demonstrations, while the prompt with experiences in TR focuses on how to avoid given errors. Weak LLMs may not benefit from an enhanced prompt containing these two different kinds of guidance, especially when the reasoning is complex.
_Table 4. Evaluating the token cost when using GPT-4 with TR to address questions from MATH and TheoremQA. The ratios in the table indicate how many times the token costs of the various approaches exceed the token cost of ZeroShot CoT. △ represents the increase in problem-solving rate of each approach compared to ZeroShot CoT. We also provide the average and standard deviation, expressed as mean ± std, of the tokens required to prompt the LLM and those generated by the LLM when addressing a single question from the two challenging datasets._
| Method | MATH: Generate tokens (Ratio) | MATH: Prompt tokens (Ratio) | MATH: △ | TheoremQA: Generate tokens (Ratio) | TheoremQA: Prompt tokens (Ratio) | TheoremQA: △ |
|---|---|---|---|---|---|---|
| ZeroShot CoT | 212.3 ± 152.6 (1) | 110.3 ± 25.5 (1) | 0 | 240.8 ± 100.5 (1) | 121.2 ± 43.1 (1) | 0 |
| CoT8 | 441.3 ± 310.7 (2.08) | 4611.4 ± 1952.2 (41.8) | 4.2 | - | - | - |
| Chain Reasoning | 603.2 ± 244.7 (2.84) | 2657.5 ± 1187 (24.1) | 0.74 | 976.3 ± 1056.4 (4.05) | 4614.6 ± 7632.1 (38.07) | -7.25 |
| ToT Reasoning | 1569.8 ± 474.7 (7.39) | 9963.7 ± 4487.5 (90.33) | 3.3 | 1886 ± 982.5 (7.83) | 11982 ± 8872.2 (98.86) | -5.5 |
| TR + W-Voting | 7484.9 ± 5873.9 (35.26) | 46904.9 ± 37980.8 (425.25) | 27.19 | 6762.9 ± 6513 (28.09) | 43444.8 ± 49391.2 (358.46) | 13 |
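The ratios reported in Table 4 follow directly from the mean token counts; a quick check reproducing two of the MATH entries (numbers copied from the table, no new measurements):

```python
# Mean tokens per question taken from Table 4 (MATH column).
zeroshot_generate, zeroshot_prompt = 212.3, 110.3
tr_generate, tr_prompt = 7484.9, 46904.9

print(round(tr_generate / zeroshot_generate, 2))  # 35.26
print(round(tr_prompt / zeroshot_prompt, 2))      # 425.25
```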
**5.3. Token Cost Analysis**

As shown in Table 4, when addressing questions from the challenging datasets, the token cost of GPT-4 with TR is significantly higher than that of the baseline approaches. Specifically, on the MATH dataset, the TR approach on average generates 35.26 times more tokens and requires 425.25 times more prompt tokens than ZeroShot CoT. The corresponding ratios on the TheoremQA dataset are 28.09 and 358.46. This resource-intensive nature of our proposed TR derives from continuously identifying errors and appending their error analysis to the prompt during reasoning.

However, these additional operations and the high token cost are necessary because hallucinations of LLMs appear frequently. First, since numerous erroneous thoughts are generated during reasoning, consistently identifying and revising them is crucial to ensuring the correctness of the answer. Second, in many cases, the error analysis of LLMs is invalid or even incorrect due to inevitable hallucinations. Thus, accumulating error analysis derived from different reasoning paths decreases the negative impact of flawed analysis on thought revisions. Third, many reasoning paths of the thought structure derive from rollbacks triggered by erroneous feedback from LLMs. Since mistaken rollbacks cannot be identified, the TR approach retains all generated paths and ultimately employs majority voting to enhance reliability.

Therefore, we conclude that there is a trade-off between the high token cost and the problem-solving rate. On the one hand, since the TR approach requires many tokens to address a single question, its application may be limited for users with insufficient resources. On the other hand, compared to zero-shot GPT-4, GPT-4 with TR gains 27.19% and 13% improvements in solving rate on the MATH and TheoremQA datasets, respectively. When users prioritize problem-solving rates, integrating GPT-4 with the TR approach ensures its applicability in many challenging scenarios.

**6. Concluding Remarks**

In this paper, we proposed Thought Rollback (TR), an effective reasoning framework supported by the rollback of thoughts that allows LLMs to perform adaptive reasoning to solve challenging problems. Without relying on human annotations or specific thought-structure designs for reasoning, LLMs with TR can progressively self-organize and revise thoughts based on trial-and-error experiences until reaching a correct solution for various tasks. The _rollback controller_ and _prompt enhancer_, together with the experience-guided weighted majority voting, enable TR to achieve state-of-the-art solving rates on many mathematical and multi-task reasoning datasets while maintaining a lower cost than the alternative leading approaches. We hope this work can shed light on adaptive reasoning in LLMs toward addressing challenging tasks, especially when mathematical problems are involved.
**Impact Statement**

Large language models (LLMs) can break a complex task into manageable subproblems and solve them through step-by-step reasoning. TR, a lightweight framework, helps maintain the reliability of LLMs' multi-step reasoning under hallucinations, thus extending their applications to a wider range of tasks. Furthermore, compared to BoT, which performs outcome analysis, TR, built upon prompting the LLMs with feedback from process analysis, is more effective and significantly reduces the interaction cost. This may open a research direction emphasizing the importance of exploiting feedback during the step-by-step reasoning of LLMs. In addition, the plug-and-play nature of TR allows other approaches, such as CR, to incorporate the thought-rollback mechanism to further improve performance. Ultimately, thanks to the expansion of LLM context windows and decreasing token prices, the token cost of TR may not be a major concern.
**References**
An, S., Ma, Z., Lin, Z., Zheng, N., Lou, J.-G., and Chen, W.
Learning from mistakes makes llm better reasoner. arXiv
_preprint arXiv:2310.20689, 2023._
Besta, M., Blach, N., Kubicek, A., Gerstenberger, R., Gianinazzi, L., Gajda, J., Lehmann, T., Podstawski, M.,
Niewiadomski, H., Nyczyk, P., et al. Graph of thoughts:
Solving elaborate problems with large language models.
_arXiv preprint arXiv:2308.09687, 2023._
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D.,
Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G.,
Askell, A., et al. Language models are few-shot learners.
In Advances in Neural Information Processing Systems,
volume 33, pp. 1877–1901, 2020.
Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J.,
Horvitz, E., Kamar, E., Lee, P., Lee, Y. T., Li, Y.,
Lundberg, S., et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint
_arXiv:2303.12712, 2023._
Chen, W., Ma, X., Wang, X., and Cohen, W. W. Program
of thoughts prompting: Disentangling computation from
reasoning for numerical reasoning tasks. Transactions on
_Machine Learning Research, 2023a._
Chen, W., Yin, M., Ku, M., Lu, P., Wan, Y., Ma, X., Xu,
J., Wang, X., and Xia, T. Theoremqa: A theorem-driven
question answering dataset. In Proc. Conference on Em_pirical Methods in Natural Language Processing, 2023b._
Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H.,
Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano,
R., et al. Training verifiers to solve math word problems.
_arXiv preprint arXiv:2110.14168, 2021._
Fu, Y., Peng, H., Sabharwal, A., Clark, P., and Khot, T.
Complexity-based prompting for multi-step reasoning. In
_Proc. International Conference on Learning Representa-_
_tions, 2023._
Gao, L., Madaan, A., Zhou, S., Alon, U., Liu, P., Yang,
Y., Callan, J., and Neubig, G. Pal: Program-aided language models. In International Conference on Machine
_Learning, pp. 10764–10799. PMLR, 2023._
Hagberg, A. A., Schult, D. A., and Swart, P. J. Exploring network structure, dynamics, and function using networkx.
In Proc. 7th Python in Science Conference, pp. 11–15,
2008.
Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika,
M., Song, D., and Steinhardt, J. Measuring massive
multitask language understanding. In Proc. International
_Conference on Learning Representations, 2021a._
Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart,
S., Tang, E., Song, D., and Steinhardt, J. Measuring mathematical problem solving with the math dataset. arXiv
_preprint arXiv:2103.03874, 2021b._
Huang, J., Chen, X., Mishra, S., Zheng, H. S., Yu, A. W.,
Song, X., and Zhou, D. Large language models cannot
self-correct reasoning yet. In Proc. International Confer_ence on Learning Representations, 2024._
Jiang, M., Ruan, Y., Huang, S., Liao, S., Pitis, S., Grosse,
R. B., and Ba, J. Calibrating language models via augmented prompt ensembles. In International Conference
_on Machine Learning, 2023._
Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., and Iwasawa,
Y. Large language models are zero-shot reasoners. In
_Advances in Neural Information Processing Systems, vol-_
ume 35, pp. 22199–22213, 2022.
Kong, A., Zhao, S., Chen, H., Li, Q., Qin, Y., Sun, R.,
and Zhou, X. Better zero-shot reasoning with role-play
prompting. arXiv preprint arXiv:2308.07702, 2023.
Lightman, H., Kosaraju, V., Burda, Y., Edwards, H., Baker,
B., Lee, T., Leike, J., Schulman, J., Sutskever, I., and
Cobbe, K. Let’s verify step by step. In Proc. International
_Conference on Learning Representations, 2024._
Ling, W., Yogatama, D., Dyer, C., and Blunsom, P. Program induction by rationale generation: Learning to solve
and explain algebraic word problems. arXiv preprint
_arXiv:1705.04146, 2017._
Ling, Z., Fang, Y., Li, X., Huang, Z., Lee, M., Memisevic,
R., and Su, H. Deductive verification of chain-of-thought
reasoning. In Advances in Neural Information Processing
_Systems, 2023._
Liu, N. F., Lin, K., Hewitt, J., Paranjape, A., Bevilacqua,
M., Petroni, F., and Liang, P. Lost in the middle: How
language models use long contexts. Transactions of the
_Association for Computational Linguistics, 12:157–173,_
2024.
Luo, L., Li, Y.-F., Haffari, G., and Pan, S. Reasoning on
graphs: Faithful and interpretable large language model
reasoning. In Proc. International Conference on Learning
_Representations, 2024._
Lyu, Q., Havaldar, S., Stein, A., Zhang, L., Rao, D., Wong,
E., Apidianaki, M., and Callison-Burch, C. Faithful chainof-thought reasoning. In Proc. IJCNLP-AACL, 2023.
Madaan, A., Tandon, N., Gupta, P., Hallinan, S., Gao,
L., Wiegreffe, S., Alon, U., Dziri, N., Prabhumoye, S.,
Yang, Y., et al. Self-refine: Iterative refinement with
self-feedback. arXiv preprint arXiv:2303.17651, 2023.
OpenAI. Gpt-4 technical report. _arXiv preprint_
_arXiv:2303.08774, 2023._
Patel, A., Bhattamishra, S., and Goyal, N. Are nlp models
really able to solve simple math word problems? arXiv
_preprint arXiv:2103.07191, 2021._
Qin, C., Zhang, A., Zhang, Z., Chen, J., Yasunaga, M., and
Yang, D. Is chatgpt a general-purpose natural language
processing task solver? In Proc. Conference on Empirical
_Methods in Natural Language Processing, 2023._
Sijia, C., Baochun, L., and Niu, D. Boosting of thoughts:
Trial-and-error problem solving with large language models. In Proc. International Conference on Learning Rep_resentations, 2024._
Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi,
A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P.,
Bhosale, S., et al. Llama 2: Open foundation and finetuned chat models. arXiv preprint arXiv:2307.09288,
2023.
Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E., Narang,
S., Chowdhery, A., and Zhou, D. Self-consistency improves chain of thought reasoning in language models. In
_Proc. International Conference on Learning Representa-_
_tions, 2022._
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi,
E., Le, Q. V., Zhou, D., et al. Chain-of-thought prompting
elicits reasoning in large language models. In Advances
_in Neural Information Processing Systems, volume 35,_
pp. 24824–24837, 2022.
Weng, Y., Zhu, M., Xia, F., Li, B., He, S., Liu, S., Sun, B.,
Liu, K., and Zhao, J. Large language models are better
reasoners with self-verification. In Proc. Conference on
_Empirical Methods in Natural Language Processing, pp._
2550–2575, 2023.
Wu, Z., Jiang, M., and Shen, C. Get an a in math: Progressive rectification prompting. In Proc. AAAI Conference
_on Artificial Intelligence, 2024._
Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao,
Y., and Narasimhan, K. Tree of thoughts: Deliberate
problem solving with large language models. In Advances
_in Neural Information Processing Systems, 2023._
Yu, J., He, R., and Ying, R. Thought propagation: An
analogical approach to complex reasoning with large language models. In Proc. International Conference on
_Learning Representations, 2024._
Zhang, Y., Yang, J., Yuan, Y., and Yao, A. C.-C. Cumulative
reasoning with large language models. arXiv preprint
_arXiv:2308.04371, 2023._
Zhang, Z., Zhang, A., Li, M., and Smola, A. Automatic
chain of thought prompting in large language models. In
_Proc. International Conference on Learning Representa-_
_tions, 2022._
Zhao, X., Xie, Y., Kawaguchi, K., He, J., and Xie, Q. Automatic model selection with large language models for
reasoning. In Proc. Conference on Empirical Methods in
_Natural Language Processing, 2023._
Zheng, C., Liu, Z., Xie, E., Li, Z., and Li, Y. Progressivehint prompting improves reasoning in large language
models. arXiv preprint arXiv:2304.09797, 2023.
Zhou, A., Wang, K., Lu, Z., Shi, W., Luo, S., Qin, Z., Lu,
S., Jia, A., Song, L., Zhan, M., et al. Solving challenging
math word problems using gpt-4 code interpreter with
code-based self-verification. In Proc. International Con_ference on Learning Representations, 2024._
**A. Supplement to Figures of the Main Paper**
Figure 4 is the full version of the partial structure shown in Figure 1. It shows a complex thought structure built by GPT-4 with TR to address a challenging question from the MATH dataset, and illustrates that GPT-4 with TR tends to build a large-scale thought structure by iteratively rolling back thoughts during reasoning. Figure 4 also demonstrates how TR exploits the _rollback controller_ and _prompt enhancer_ to create a new and correct reasoning path after analyzing the bad thoughts.

Specifically, from the details presented in Figure 5, we can observe what the TR procedure shown in Figure 2 does after reaching a thought. GPT-4 with TR starts by generating a first thought N-1 S-1 based on a simple, zero-shot prompt Q. Then, the rollback of thoughts follows the process in Figure 2. First, the _rollback controller_ analyzes the reasoning path N-1 → N-2 → N-3 → N-5 → N-6 to output the error analysis presented in “N-6, S-5 error analysis:”. This analysis shows that reasoning steps N-5 S-4 and N-6 S-5 are bad thoughts. According to our discussion in subsection 4.2, the _rollback controller_ allows the LLM to roll back to the thought N-3 S-3, which is one step before the first bad thought N-5 S-4. Then, the _prompt enhancer_ accumulates the error analysis as experience in the prompt, as shown in “N-3, S-3 to N-7 S-4 Prompt”, which includes the “#### The 0-th Experience with Analysis ####”. As a result, by avoiding similar mistakes mentioned in the experience, the LLM is able to generate a new thought N-7 S-4 from the chosen thought N-3 S-3. Therefore, “hallucinations” that occur in a thought or an analysis of the LLM may not influence reasoning, thanks to the continuous thought revision guaranteed by the iterative rollbacks during reasoning. As can be seen from N-9 S-6, the final solution is revised to the correct answer 737. Additionally, we can observe that each rollback leads to a new reasoning path equipped with the experience from the corresponding error analysis. Thus, the two rollbacks N-6 → N-3 and N-3 → N-2 of GPT-4 with TR adaptively create two new reasoning paths.
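The rollback destination described above, one step before the first bad thought, can be sketched as follows. The parsing below is a simplified heuristic of ours, since the actual error analysis is free-form text produced by the LLM:

```python
import re

def first_bad_step(error_analysis: str):
    """Return the 1-based index of the earliest step the analysis calls incorrect.

    Assumes the analysis mentions steps as 'Reasoning Step k' and flags bad ones
    with words such as 'incorrect', 'error', or 'wrong' in the same sentence.
    """
    bad_steps = []
    for sentence in re.split(r"(?<=[.!?])\s+", error_analysis):
        match = re.search(r"Reasoning Step (\d+)", sentence)
        if match and re.search(r"\b(incorrect|error|wrong)\b", sentence, re.I):
            bad_steps.append(int(match.group(1)))
    return min(bad_steps) if bad_steps else None

def rollback_destination(error_analysis: str):
    """Roll back to the step just before the first bad one (the behaviour above)."""
    bad = first_bad_step(error_analysis)
    return None if bad is None else max(bad - 1, 0)

analysis = ("Reasoning Step 3: The step is correct. "
            "Reasoning Step 4: The step is correct, but there is a calculation error. "
            "Reasoning Step 5: The step is incorrect because it carries the error forward.")
print(rollback_destination(analysis))  # 3  (return to step 3, one before step 4)
```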
**B. Reproducibility of Thought Rollback Framework**
**B.1. Source Code**
One can access the source code under the examples/ThoughtRollback folder of the code release. The implementation is based on the llmpebase library. The code is written in Python and imports the datasets from Hugging Face to build PyTorch data loaders.

The source code for Chain of Thought, Chain Reasoning, ToT Reasoning, and GoT Reasoning mentioned in the experiments is available in the examples/ChainOfThought, examples/ChainReasoning, examples/TreeReasoning, and examples/GraphReasoning folders, respectively.

All configuration files used to conduct the experiments are provided in the configs/ folder. For instructions on how to run the code, please read the README.md under the examples/ThoughtRollback folder.
**B.2. Locations of Generated Thoughts and Reasoning Details**
Our code automatically saves the generated thoughts and reasoning details under an LLMPE folder in the root directory. The direct results are placed under LLMPE/results, while the corresponding visible thought structures are stored in LLMPE/visualizations. Their sub-folder names represent the configuration setting, such as “TRReasoning gpt-4 zeroshot cot MATH”, where “TRReasoning” is the name of our Thought Rollback. Eventually, as shown in Figure 6, you can access each sample by index in thought structure * while reading the results in llm records. The thought structure for reasoning is saved in .json format, and the visualizations are in .pdf format.
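As a hypothetical convenience for browsing these outputs (the directory layout follows the description above, but the JSON schema is not specified here, so this sketch only lists files and their top-level keys):

```python
import glob
import json
import os

# Illustrative run directory name following the convention described above.
run_dir = os.path.join("LLMPE", "results", "TRReasoning gpt-4 zeroshot cot MATH")

# Inspect the saved thought structures for this run, one .json file per sample.
for path in sorted(glob.glob(os.path.join(run_dir, "**", "*.json"), recursive=True)):
    with open(path) as f:
        record = json.load(f)
    keys = list(record) if isinstance(record, dict) else f"list of {len(record)} items"
    print(path, "->", keys)
```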
**B.3. Prompts**
This subsection presents the basic prompts used in our implementation of the Thought Rollback framework.
**System prompt for thought generation: You possess expertise in solving mathematical problems through a systematic,**
_step-by-step reasoning process during which you are dedicated to preventing repeating any errors analyzed in experiences._
_Your objective is to address the question using a series of reasoning steps delivered in multiple responses, with each response_
_containing one reasoning step. It is crucial to avoid repeating errors mentioned in the given experiences. Begin by reading_
_the provided reasoning steps and then proceed to generate the most appropriate next step in the response, ensuring that the_
_logical progression steadily leads towards a solution._
**System prompt for reasoning analysis: You are a mathematician specializing in checking and analyzing the reasoning**
_process containing multiple intermediate reasoning steps proposed to address a math question. Please check the correctness_
_of the overall reasoning logic and each reasoning step regarding mathematical logic and rationality._
**Prompt I for the next thought generation:**
_Answer the question about the problem {Problem Name}. After getting the final solution, place it after the sentence 'The final solution is' for readability.\n\nExperience containing previously made mistakes:\n\n#########{Experiences}#########\n\nConsider the analysis in the above experience to avoid making similar mistakes during reasoning for the question.\n\n\nQuestion: {Question} \n\nAnswer: Let's think step by step. Let's focus on carefully generating the next possible reasoning step for reasoning steps below.\n\n\n{Existing Reasoning Steps}\n\n\nFor reasoning steps within, please generate their best next step containing analysis and the corresponding mathematical expression._
where {Problem Name} states which problem is to be solved, such as “Multiplication”, {Experiences} presents the accumulated experiences $A^{\chi(\cdot)}_{z_{0...q-1}}$ discussed in subsection 4.3, {Question} is the given question description, and finally {Existing Reasoning Steps} is a placeholder to be replaced by the preceding chain of thoughts $z_{0..n-1}$.
**Prompt IR for the error analysis:**
_Analyze the reasoning steps proposed for the question about the problem {Problem Name}. \nQuestion: {Question} \n_
_Toward addressing the given question, below is a reasoning process containing {Number of Steps} steps: \n\n\n {Existing_
_Reasoning Steps} \n\n\nDouble-check the reasoning process within, please analyze its overall and each step’s correctness_
_by checking whether they are mathematical logic and rationality. Please report an error when any step does not contain a_
_clear mathematical expression. Output empty string when no steps are given.\n_
where {Number of Steps} is the number of current reasoning steps, i.e., n for a reasoning path $z_{0..n}$.
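A minimal sketch of how Prompt I can be assembled from these placeholders (illustrative only, not the llmpebase implementation; the per-experience delimiter follows the “0-th Experience with Analysis” format shown in Figure 5):

```python
def build_next_thought_prompt(problem_name: str, experiences: list,
                              question: str, existing_steps: list) -> str:
    """Fill the Prompt I template with accumulated experiences and prior thoughts."""
    experience_block = "\n".join(
        f"######### The {i}-th Experience with Analysis #########\n{exp}\n"
        "####################"
        for i, exp in enumerate(experiences)
    )
    steps_block = "\n".join(existing_steps)
    return (
        f"Answer the question about the problem {problem_name}. After getting the "
        "final solution, place it after the sentence 'The final solution is' for "
        "readability.\n\n"
        f"Experience containing previously made mistakes:\n\n{experience_block}\n\n"
        "Consider the analysis in the above experience to avoid making similar "
        "mistakes during reasoning for the question.\n\n\n"
        f"Question: {question}\n\nAnswer: Let's think step by step. Let's focus on "
        "carefully generating the next possible reasoning step for reasoning steps "
        f"below.\n\n\n{steps_block}\n\n\n"
        "For reasoning steps within, please generate their best next step containing "
        "analysis and the corresponding mathematical expression."
    )
```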
**B.4. Basic Engineering Settings for TR**
During implementation, we set the upper bound U on incoming rollbacks for one thought to 3, so that no more than 3 rollbacks use the same thought as their destination. Besides, we utilize a depth-first search algorithm to find the current growing thought. As mentioned in subsection 4.4, a reasoning path with more incoming rollbacks is more important. Therefore, this depth-first search assigns higher priority to reasoning paths with a larger number of incoming rollbacks. As LLMs with TR first generate subsequent thoughts for these reasoning paths, the corresponding solutions are explored earlier. Such a mechanism increases the possibility of obtaining better answers.

Apart from these basic settings, our implementation also includes some engineering tricks. First, during reasoning, once a reasoning path causes outgoing rollbacks more than 5 times, it is ignored in the subsequent reasoning. Second, we do not allow one thought to cause more than 3 outgoing rollbacks, to avoid the case where LLMs repeatedly identify the same bad thought and trigger rollbacks for it. Third, to increase the speed of reasoning, after noticing that different reasoning paths are independent of each other, we run them in parallel, which lets the LLM generate thoughts for all reasoning paths simultaneously. Therefore, once a new reasoning path is created, a process is created so the LLM can work on that path without blocking others. This reduces the reasoning time from the scale of minutes to seconds.
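To make these settings concrete, below is a sketch of how the next growing path could be selected while honoring the caps and the priority on incoming rollbacks described above (the data structures are illustrative; the actual logic lives in llmpebase):

```python
MAX_INCOMING_ROLLBACKS_PER_THOUGHT = 3   # upper bound U on incoming rollbacks
MAX_OUTGOING_ROLLBACKS_PER_THOUGHT = 3   # avoid repeatedly flagging the same thought
MAX_OUTGOING_ROLLBACKS_PER_PATH = 5      # paths beyond this are ignored

def select_growing_path(paths):
    """Pick the next reasoning path to extend.

    `paths` is a list of dicts with keys 'incoming_rollbacks' and
    'outgoing_rollbacks' (per-path totals). Paths that triggered too many
    outgoing rollbacks are skipped; among the rest, the path with the most
    incoming rollbacks is explored first, approximating the prioritized
    depth-first search described above.
    """
    candidates = [p for p in paths
                  if p["outgoing_rollbacks"] <= MAX_OUTGOING_ROLLBACKS_PER_PATH]
    if not candidates:
        return None
    return max(candidates, key=lambda p: p["incoming_rollbacks"])

# Example: the second path has accumulated more incoming-rollback experience,
# so it is extended first; the third path exceeded the outgoing-rollback cap.
paths = [{"incoming_rollbacks": 1, "outgoing_rollbacks": 0},
         {"incoming_rollbacks": 3, "outgoing_rollbacks": 2},
         {"incoming_rollbacks": 0, "outgoing_rollbacks": 6}]
print(select_growing_path(paths))  # {'incoming_rollbacks': 3, 'outgoing_rollbacks': 2}
```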
**C. Discussion: Insights Gained from the TR Framework**
We argue that the outstanding performance of TR is attributed to three insights.
First, as mentioned by the work (Lightman et al., 2024), compared to outcome supervision, which provides feedback for
a final result, process supervision, which provides feedback for each intermediate reasoning step, is more important to
guarantee the reliable reasoning of LLMs. In summary, process supervision significantly outperforms outcome supervision.
Actually, TR roughly belongs to process supervision because it performs rollback-by-rollback error analysis during reasoning
and continues to revise thoughts based on the accumulation of experience. Thus, the reasoning of LLMs can be adjusted
adaptively during reasoning rather than after the results are obtained. However, BoT (Sijia et al., 2024) belongs to outcome
supervision as it only collects error analysis after reasoning.
Second, the work (Huang et al., 2024) pointed out that outcome analysis is sometimes invalid as LLMs cannot use
feedback to revise thoughts to improve the solving rate. As discussed in subsection 3.2 of our paper, this may be because
when there are many errors in the intermediate reasoning steps, capturing the source mistake and performing useful analysis
is challenging. Thus, BoT (Sijia et al., 2024) has to do many tree ensembles and design a complex boosting mechanism to
mine effective error analysis to drive LLM’s reasoning revision. On the contrary, TR performs thought rollback triggered
by continuous error analysis during reasoning. Thus, once the error is identified, LLMs can directly revise or adjust the
reasoning based on timely accumulated experience in the prompt.
Third, prompting LLMs with a long input context may cause the degradation of reasoning performance. As pointed out
by the work (Liu et al., 2024), LLMs do not fairly utilize all the information in the prompt but focus more on the content
at the beginning or end of the input context. For BoT, the prompt tends to become extremely long due to the continuous
collection of generated reasoning steps and their error analysis over iterations, particularly since the error analysis focuses
on the whole reasoning chain. For instance, in the 10-th iteration, the prompt contains ten long contents, each containing all
reasoning steps and step-wise analysis. Some of them may not even be correct. However, our TR accumulates experience
during reasoning; thus, each experience only contains analysis on very few intermediate steps, generally leading to a quick
revision after the rollback. For instance, LLMs with TR can easily generate a correct second step based on the experience “Reasoning step 1: ... correct. Reasoning step 2: ... wrong because ...”. However, BoT's experience “Reasoning step 1: ... correct. Reasoning step 2: ... wrong because ... . Reasoning step 3: ... wrong because ... . Reasoning step 4: ... wrong because ... . Reasoning step 5: ... wrong because ...” may just mislead the LLM.
Finally, regarding the time complexity of the TR framework, LLMs with TR incrementally construct the thought structure from node 0 to node n. By focusing on the leading term and disregarding constants and lower-order terms, the worst-case time complexity of TR is determined to be $\mathcal{O}(n^2)$.
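A sketch of the counting argument behind this bound, under the assumption that handling the i-th thought requires analyzing a reasoning path of length at most i:

```latex
% Each new thought i requires analyzing a reasoning path of length at most i,
% so the total work over n thoughts is bounded by
\sum_{i=1}^{n} i \;=\; \frac{n(n+1)}{2} \;=\; \mathcal{O}(n^{2}).
```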
**C.1. Source of Experimental Results**
In Table 5 and Table 6, we collect experimental results of GPT-4 and GPT-3.5-turbo on various settings. We especially show
the corresponding work that reports the results.
_Table 5. Source of experiment results of GSM8K. Methods are CSV (Zhou et al., 2024), PHP (Zheng et al., 2023) with GPT-3.5-turbo-,_
Model Selection (MS) (Zhao et al., 2023), PAL (Gao et al., 2023), PLAY (Kong et al., 2023), Faithful (Lyu et al., 2023), Exps (Bubeck
et al., 2023), PoT (Chen et al., 2023a), IG (Qin et al., 2023), BoT (Sijia et al., 2024).
GPT-4 GPT-3.5-turbo
Source
ZeroShot FewShot ZeroShot-CoT CoT Complex-CoT ZeroShot FewShot ZeroShot-CoT CoT Complex-CoT
- - - 92.05 - - 57.15 - - -
CSV Code 92.9, 94.9sc5 - - - - - - - - -
Code+CSV 94.5, 97sc5 - - - - - - - - -
PHP - - - - 94.98 - - - - 82.88
+PHP - - - - 95.58 - - - - 85.18
- - - 94.65, 95.65,sc5, 95.65,sc15 - - - - 80.85, 85.45,sc5, 87.45,sc15 -
MS PAL - - - 94.05, 94.75,sc5, 95.55,sc15 - - - - 79.25, 80.95,sc5, 82.45,sc15 -
Ours - - - 95.65, 96.55,sc5, 96.85,sc15 - - - - 82.65, 88.25,sc5, 89.25,sc15 -
- - - - - 76.0 - 79.6 76.9 -
PLAY
Role - - - - - 78.2 - - - -
Faithful 46.9 - - 64.98 - - - - - -
Faithful - - - 958 - - - - - -
Exps 87.1 - - - - - - - - -
PoT* - - - - - 76.3 - - - -
IG - - - - - 23.8 - 78.9 - -
BoT 87.1 - 89.6 928 94.9 - - - - -
BoT - - 97.1 98.78 - - - - - -
**D. Examples of GPT-4 with TR in GSM8K**
In Figure 7, we present a simple reasoning process performed by GPT-4 with TR. As no bad thoughts are identified during reasoning, GPT-4 with TR directly performs correct reasoning toward a correct solution. This simple example aims to give an overview of 1) how multi-step reasoning with multi-step prompts works; 2) how to prompt LLMs to generate the next thought, such as N-2 S-2 → N-3 S-3; and 3) how LLMs with TR perform normal reasoning when no rollback is triggered. Besides, as discussed in subsection 4.1, LLMs with TR start from a zero-shot prompt containing only the question and task information.

Then, Figure 8 shows a more complex reasoning process conducted by GPT-4 with TR. In this case, the _rollback controller_ triggers 5 rollbacks during reasoning, leading to 8 different solutions. After applying the experience-guided solution ensemble, we get 6 as the final answer, which is correct. LLMs with TR first generate one thought N-1 S-1. Then, the _rollback controller_ exploits LLMs to analyze the current reasoning path N-0 S-1, N-1 S-1 and N-2 S-2, and thus identifies the error in N-2 S-2. This triggers a rollback N-2 S-2 → N-1 S-1, leading to a new reasoning path N-1 S-1 → N-3 S-2.
_Table 6. Source of experiment results of SVAMP, AQuA-RAT, MATH, TheoremQA and MMLU. They are CSV (Zhou et al., 2024), PHP_
(Zheng et al., 2023), Model Selection (MS) (Zhao et al., 2023), PAL (Gao et al., 2023), PLAY (Kong et al., 2023), Faithful (Lyu et al.,
2023), Exps (Bubeck et al., 2023), PoT (Chen et al., 2023a), IG (Qin et al., 2023), BoT (Sijia et al., 2024), CR (Zhang et al., 2023),
TheoremQA (Chen et al., 2023b) and GPT4-report (OpenAI, 2023).
SVAMP
GPT-4 GPT-3.5-turbo
Source
ZeroShot FewShot ZeroShot-CoT CoT Complex-CoT ZeroShot FewShot ZeroShot-CoT CoT Complex-CoT
PHP - - - - 90.58 - - - - 81.08
+PHP - - - - 91.98 - - - - 83.18
- - - 91.95 - - - - 835 -
MS PAL - - - 92.25 - - - - 80.35 -
Ours - - - 93.75 - - - - 84.35 -
- - - - - 75.3 - 76.3 82.2 -
PLAY
Role - - - - - 83.8 - - - -
Faithful - 88.4 - 808 - - - - - -
Faithful - - - 95.48 - - - - - -
PoT* - - - - - 88.2 - - - -
IG - - - - - 74.8 - 77.5 - -
BoT 68.7 - 74.3 77.68 90.58 - - - - -
BoT - - 92.7 94.98 - - - - - -
AQuA-RAT
GPT-4 GPT-3.5-turbo
Source
ZeroShot FewShot ZeroShot-CoT CoT Complex-CoT ZeroShot FewShot ZeroShot-CoT CoT Complex-CoT
PHP - - - - 77.58 - - - - 57.48
+PHP - - - - 79.98 - - - - 60.68
- - - - - 53.5 - 53.9 59.4 -
PLAY
Role - - - - - 63.8 - - - -
Faithful 50.4 - - 75.28 - - - - - -
Faithful - - - 73.68 - - - - - -
PoT* - - - 72.4 - - - - - -
IG - - - - - 28.0 - 53.5 - -
BoT 40.6 - 73.2 748 77.5 - - - - -
BoT - - 81.4 84.98 - - - - - -
MATH
GPT-4 GPT-3.5-turbo
Source
ZeroShot FewShot ZeroShot-CoT CoT Complex-CoT ZeroShot FewShot ZeroShot-CoT CoT Complex-CoT
42.2 - - - 50.368 - - - - 34.128
CSV Code 69.69, 79.88sc16 - - - - - - - - -
PHP Code+CSV 73.54, 83.54sc-16, 84.32sc[vw]16 -- -- 42.5- 8 50.36- 8 -- -- -- -- 34.12- 8
+PHP - - - - 53.98 - - - - 36.58
BoT 42.5 - 47.7 48.938 50.48 - - - - -
BoT - - 62.5 66.38 - - - - - -
CR500 PHP+CRCR -- -- -- 54.25844 -- -- -- -- -- -
TheoremQA
GPT-4 GPT-3.5-turbo
Source
ZeroShot FewShot ZeroShot-CoT CoT Complex-CoT ZeroShot FewShot ZeroShot-CoT CoT Complex-CoT
- - - 43.8 - - - - 30.2, 30.8theorem -
TheoremQA PoT - 52.4 - - - - 35.6, 35.8theorem - - -
MMLU
Source GPT-4 GPT-3.5-turbo
ZeroShot FewShot ZeroShot-CoT CoT Complex-CoT ZeroShot FewShot ZeroShot-CoT CoT Complex-CoT
86.5 86.4 - - - 70 - - - -
GPT4-report BoT - - 90.86 93.425 - - - - - -
In this new path, the prompt includes the error analysis, thereby revising the thought to obtain the final correct solution N-9 S-4.

As there are 3 incoming rollbacks for the thought N-10 S-2, the corresponding three different error analyses are accumulated, as shown by “N-10 S-2 Experience Accumulation:”. The _prompt enhancer_ includes these error analyses as experiences in the prompt to guide LLMs to produce correct thoughts. For instance, “N-10 S-2 → N-17 S-3 Prompt” shows that GPT-4 generates the thought N-17 S-3 from the thought N-10 S-2 with a prompt that accumulates two experiences.

Eventually, GPT-4 with TR adaptively builds a thought structure toward generating solutions, most of which are correct due to the continuous thought revisions via the rollback of thoughts.
**E. Examples of GPT-4 with TR in MATH**
Limited by space, we store the detailed experimental results and visualization files for Figure 9 in the folder MATH-example-1 of the supplementary material. Here we present the question, the thoughts of the correct solutions, and the values of all solutions. Specifically, in response to our discussion in section 4, we show how GPT-4 with TR is able to identify the bad thought N-34 S-6, so that the _rollback controller_ obtains the “N-34 S-6 error analysis”. The triggered rollback N-34 S-6 → N-32 S-4 leads to a new reasoning path N-32 S-4 → N-35 S-5, which generates a correct thought and gets the correct answer 0.
We specifically utilize the example in Figure 10 to show how Experience Accumulation works in the TR framework. For the reasoning path from N-0 S-0 to N-17 S-3, there are 4 incoming rollbacks, including the rollback N-3 S-2 → N-0 S-0, the rollback N-10 S-3 → N-4 S-1, and the rollback N-15 S-3 → N-11 S-2. As each rollback creates an error experience from one trial of the given question, incoming rollbacks lead to a series of experiences, as shown by “N-11 S-2 Experience Accumulation:”. Therefore, to generate N-17 S-3 from N-11 S-2, the _prompt enhancer_ includes these error analyses as experiences in the prompt, as shown in “N-11 S-2 → N-17 S-3 Prompt:”. By learning from these experiences, LLMs are able to generate the correct thought N-17 S-3 and thus the correct answer 47. The detailed files of this example are presented in the folder MATH-example-2 of the supplementary material.
**F. Examples of GPT-4 with TR in TheoremQA**
GPT-4 with TR tends to build complex thought structures when reasoning on the challenging TheoremQA dataset (Chen et al., 2023b). As seen in Figure 11, the overall thought scale and the complexity of the reasoning paths increase substantially compared to the other examples.
Also, we show in Figure 12 that GPT-4 with TR can build even more complex thought structures.
In these two figures, we present the obtained reasoning path toward the correct answer. For example, the “Final solution of
the reasoning path N-0 S-0 → N-11 S-7:” in Figure 11 and the “Final solution of the reasoning path N-0 S-0 → N-12 S-7:”
in Figure 12 are the correct solutions obtained by GPT-4 with TR.
_Figure 4. Complete thought structure of Figure 1 (c) built by GPT-4 with TR for a question from the MATH dataset (Hendrycks et al., 2021b). This structure contains 23 nodes, i.e., 23 thoughts, and leads to K = 8 reasoning paths toward solutions. It is plotted with the NetworkX library (Hagberg et al., 2008) under the “fdp” layout._
[Figure 5 contents: the question prompt Q (the school field-trip bus question from SVAMP), thoughts N-1 S-1 through N-9 S-6, the “N-6, S-5 error analysis”, and the “N-3, S-3 to N-7 S-4 Prompt” containing the accumulated “0-th Experience with Analysis”.]
_Figure 5. Complete thought structure of Figure 2 built by GPT-4 with TR for a question from the SVAMP dataset (Patel et al., 2021). This structure contains 10 nodes, i.e., 10 thoughts, and leads to K = 3 reasoning paths toward solutions. It is plotted with the NetworkX library (Hagberg et al., 2008) under the “dot” layout._
_Figure 6. Illustration of all results generated by LLMs with TR. The left sub-figure presents the details of the generated thought structure. The upper-right sub-figure shows the files of obtained results, while the lower-right sub-figure presents the visualizations._
[Figure 7 contents: the question prompt Q (Gloria's boots-and-heels question from GSM8K), thoughts N-1 S-1 through N-5 S-5 reaching the final solution of $104, and the “N-2 S-2 -> N-3 S-3 Prompt”.]
_Figure 7. A simple thought structure built by GPT-4 with TR for a question from the GSM8K dataset (Cobbe et al., 2021). This structure contains 5 nodes, i.e., 5 thoughts, and leads to K = 1 reasoning path toward one solution because no error is identified by the rollback controller with GPT-4 during reasoning._
[Figure 8 body: the rendered thought structure for the GSM8K hiking question (Marissa's 12-mile trail), showing per-node reasoning steps, error analyses, the rollback experiences N-11 -> N-10, N-12 -> N-10, and N-17 -> N-10, the prompt for N-10 S-2 -> N-22 S-3, and the final reasoning path N-0 S-0 --> N-27 S-6, whose reported solution is a required speed of 6 miles/hour.]
_Figure 8. A slightly complex thought structure built by GPT-4 with TR for the question from the GSM8K dataset (Cobbe et al., 2021). This structure contains 27 nodes, i.e., 27 thoughts, and leads to K = 8 reasoning paths towards 8 solutions, as 5 rollbacks are triggered by the rollback controller with GPT-4 during reasoning._
[Figure 9 body: the rendered thought structure for the MATH problem on a degree-4 polynomial with p(55) = p(83) = p(204) = p(232) = 8 and p(103) = 13, showing per-node reasoning steps, an error analysis invoking the symmetry p(x) = p(287 - x), and the final reasoning path N-0 S-0 --> N-35 S-5, whose reported solution for the alternating sum p(1) - p(2) + ... + p(285) - p(286) is 0.]
_Figure 9. A complex thought structure built by GPT-4 with TR for the question from the MATH dataset (Hendrycks et al., 2021b). This structure contains 46 nodes, i.e., 46 thoughts, and leads to K = 8 reasoning paths towards 8 solutions, as 13 rollbacks are triggered by the rollback controller with GPT-4 during reasoning. It is plotted with the NetworkX library (Hagberg et al., 2008) under the “fdp” layout._
[Figure 10 body: the rendered thought structure for the MATH counting problem “How many numbers are in the list 6, 7, 10, 11, 14, 15, ..., 94, 95, 98?”, showing per-node reasoning steps, the rollback experiences N-3 -> N-0, N-10 -> N-4, and N-15 -> N-11, the prompt for N-11 S-2 -> N-17 S-3, and the final reasoning path N-0 S-0 --> N-17 S-3, whose reported solution is 47.]
_Figure 10. A complex thought structure built by GPT-4 with TR for the question from the MATH dataset (Hendrycks et al., 2021b). This structure contains 17 nodes, i.e., 17 thoughts, and leads to K = 8 reasoning paths towards 8 solutions, as 8 rollbacks are triggered by the rollback controller with GPT-4 during reasoning. It is plotted with the NetworkX library (Hagberg et al., 2008) under the “fdp” layout._
[Figure 11 body: the rendered thought structure for the TheoremQA question asking for the value of ∂²u/∂x² + ∂²u/∂y² when u = arctan(y/x), showing per-node reasoning steps and the final reasoning path N-0 S-0 --> N-11 S-7, whose reported solution is 0.]
_Figure 11. A complex thought structure built by GPT-4 with TR for the question from the TheoremQA dataset (Chen et al., 2023b). This structure contains 28 nodes, i.e., 28 thoughts, and leads to K = 8 reasoning paths towards 8 solutions, as 11 rollbacks are triggered by the rollback controller with GPT-4 during reasoning._
[Figure 12 body: the rendered thought structure for the TheoremQA finance question on the effective rate of 3% compounded monthly, showing per-node reasoning steps and the final reasoning path N-0 S-0 --> N-19 S-8, whose reported solution is EAR = 3.04%.]
_Figure 12. A complex thought structure built by GPT-4 with TR for the question from the TheoremQA dataset (Chen et al., 2023b). This structure contains 43 nodes, i.e., 43 thoughts, and leads to K = 8 reasoning paths towards 8 solutions, as 15 rollbacks are triggered by the rollback controller with GPT-4 during reasoning._
-----
| [
"Sijia, Chen",
"Baochun, Li"
] | 2024-06-06T00:00:00 | ICML 2024 Poster | true | 0 | 0 | null | https://openreview.net/forum?id=aoAPOOtN9E | null | https://www.semanticscholar.org/paper/3d5705323b62d28f83805c42d8b7dd87ebc5e599 |
Towards Automated Functional Equation Proving: A Benchmark Dataset and A Domain-Specific In-Context Agent | Automated Theorem Proving (ATP) faces challenges due to its complexity and computational demands. Recent work has explored using Large Language Models (LLMs) for ATP action selection, but these methods can be resource-intensive. This study introduces FEAS, an agent that enhances the COPRA in-context learning framework within Lean. FEAS refines prompt generation, response parsing, and incorporates domain-specific heuristics for functional equations. It introduces FunEq, a curated dataset of functional equation problems with varying difficulty. FEAS outperforms baselines on FunEq, particularly with the integration of domain-specific heuristics. The results demonstrate FEAS's effectiveness in generating and formalizing high-level proof strategies into Lean proofs, showcasing the potential of tailored approaches for specific ATP challenges. | FEAS, an agent that enhances the COPRA in-context learning framework within Lean, is introduced, an agent that enhances the COPRA in-context learning framework within Lean and outperforms baselines on FunEq, particularly with the integration of domain-specific heuristics. | ## Towards Automated Functional Equation Proving: A Benchmark Dataset and A Domain-Specific In-Context Agent
Mahdi Buali (0009-0002-2376-8795) and Robert Hoehndorf (0000-0001-8149-5890)
Computer Science Program, Computer, Electrical, and Mathematical Sciences &
Engineering Division, King Abdullah University of Science and Technology, Thuwal
23955, Saudi Arabia
_{mahdi.buali, robert.hoehndorf}@kaust.edu.sa_
**Abstract. Automated Theorem Proving (ATP) faces significant chal-**
lenges due to the vast action space and the computational demands of
proof generation. Recent advances have utilized Large Language Models
(LLMs) for action selection in ATP, but these methods often require substantial computational resources. This study introduces the Functional
Equation Automated Solver (FEAS), an agent that builds on the COPRA in-context learning framework within the Lean environment. FEAS
innovates by refining prompt generation and response parsing mechanisms, integrating domain-specific heuristics for functional equations,
and introducing the FunEq dataset—a rigorously curated collection of
functional equation problems categorized into three difficulty levels. The
agent’s performance is evaluated against established baselines using this
dataset, demonstrating improvements in theorem proving accuracy, particularly with the integration of functional equation-specific heuristics.
Our results highlight the effectiveness of FEAS in generating and formalizing high-level proof strategies into Lean proofs, emphasizing the
potential of tailored approaches in domain-specific ATP challenges.
**1** **Introduction**
Automated theorem proving (ATP) has long been a challenging endeavor in
computer science [6]. Formalizing mathematics for efficient machine processing
presents a significant hurdle, further complicated by the inherent infinite nature
of the action space for proof construction. Interactive theorem provers (ITPs)
like Lean [13], Coq [7], Isabelle [22], and HOL4 [18] offer a solution by facilitating
formal proofs through user-guided application of tactics until the desired goals
are achieved.
Recent efforts have explored the use of Large Language Models (LLMs) as
action selectors to address the vast action space in ATP [17] [5]. These approaches
involve training LLMs from scratch [11] or fine-tuning pre-trained models [1] to
generate plausible actions within the context of formal proofs. However, such
methods often incur significant computational costs.
In-context learning, exemplified by the COPRA agent [19], presents a promising avenue to overcome the computational bottleneck. This approach has demonstrated success in other domains like machine translation and code generation
[14].
Evaluating the capabilities of these algorithms may rely on challenging problems encountered in high-school mathematics Olympiads. The International Mathematical Olympiad (IMO) [8] represents the pinnacle of difficulty in this domain.
Notably, AlphaGeometry [20] recently achieved progress in automated theorem
proving for geometric problems using LLMs, with performance competitive with
that of IMO participants. However, the field of functional equations, another core topic within IMO's algebraic domain that involves finding all unknown
functions satisfying specific conditions, remains largely unexplored in the realm
of automated theorem proving.
In this project, we build upon the foundation of the COPRA in-context learning agent [19], working specifically within the Lean environment, while expanding the
evaluation to various general-purpose LLMs. Our key contributions include:
**– FunEq Dataset:** We created the FunEq dataset [1], a curated collection of
functional equation problems formalized in Lean. This dataset spans three
difficulty levels (simple, intermediate, hard), providing a rigorous benchmark
for evaluating automated theorem provers in this domain. Hard problems are
drawn from shortlisted IMO problems.
**– FEAS Agent:** We introduce the FEAS agent[2], which refines COPRA’s
prompt generation and response parsing mechanisms. FEAS instructs an LLM
to produce high-level proof strategies in natural language, followed by their
formalization and translation into Lean proofs. It adopts a robust block-based parsing strategy for error handling and backtracking.
**– Heuristic Integration:** To enhance and stabilize FEAS’s performance, we
explicitly incorporate domain-specific functional equation heuristics [3] directly into the agent’s prompts.
**2** **Related Work**
Deep learning has emerged as a promising approach to tackle the combinatorial explosion of the search space in automated theorem proving (ATP) [24] [2]
[4] [23]. The advent of Transformer-based language models revolutionized automated theorem proving by eliminating the need to explicitly hardcode the syntax
of interactive theorem provers (ITPs). GPT-f [17] pioneered this approach, utilizing language models to generate novel proofs accepted into the Metamath [12]
library. PACT [5], a follow-up project, utilized self-supervised data to improve
tactic prediction in the Lean proof assistant. Further enhancements with expert
iteration [16] enabled autonomous curriculum learning, achieving state-of-the-art
[1 https://github.com/bio-ontology-research-group/FunEq.git](https://github.com/bio-ontology-research-group/FunEq.git)
[2 https://github.com/bio-ontology-research-group/FEAS.git](https://github.com/bio-ontology-research-group/FEAS.git)
performance on the miniF2F benchmark [26], a dataset of formal Olympiad-style
problems.
Subsequently, Thor [10] integrated language models with Isabelle’s Sledgehammer [15] for premise selection, alleviating the need for explicit specification of
every proof step. Other work [9] leveraged this integration, employing in-context
learning for autoformalization and expert iteration to achieve improved results
on the MiniF2F benchmark. Concurrently, HTPS [11] explored the integration
of reinforcement learning with LLMs for guided proof search.
Recent advances have sought to address the computational cost of LLM pretraining. LLEMMA [1] continued pretraining Code Llama on mathematical data,
demonstrating capability in formal theorem proving. ReProver [25] focused on
premise selection using a retrieval-augmented approach, achieving success with
relatively modest computational resources.
However, the computational burden of fine-tuning LLMs remained a concern.
COPRA [19] addressed this by employing general-purpose LLMs within an in-context learning framework. This approach repeatedly queries an LLM to propose
tactics, leveraging feedback from the proof environment and retrieved lemmas
to refine subsequent queries. While COPRA outperformed several baselines, it,
like most prior works, generates proofs one tactic at a time, focusing on low-level
proof steps in comparison to human-like informal reasoning. Additionally, previous work primarily developed general solvers without leveraging domain-specific
knowledge, limiting their efficacy in specialized areas like functional equations.
**3** **The FunEq Dataset**
We developed the FunEq dataset, a manually curated collection of functional
equation problems formalized in Lean. Our focus on functional equations is motivated by the fact that, although they form a specialized domain, their solutions necessitate
a diverse array of proof techniques. These range from basic algebraic manipulations to sophisticated reasoning about concepts like continuity [21], providing a
rich testing ground for automated theorem provers.
To accommodate varying levels of difficulty, FunEq is structured into three
categories:
**Simple Dataset** This dataset introduces 18 problems which require only fundamental functional equation reasoning steps. Proofs primarily involve simple substitutions, linear arithmetic, the use of involutions, straightforward induction, and basic case analysis.
**Intermediate Dataset** This dataset contains 15 problems which focus on proving intermediate lemmas often encountered in the solution process of more complex functional equations, such as establishing injectivity and surjectivity. Problems are sourced primarily from Evan Chen’s article [3] and the book “Functional Equations: A Problem-Solving Approach” by Venkatachala [21].
**Hard Dataset** This dataset consists of most of the International Mathematical Olympiad (IMO) shortlisted functional equation problems since 2002 [8]. These problems, originally posed in the context of finding all functions satisfying given hypotheses, have been reformulated for Lean 3 by explicitly stating the solutions as the goal state. This modification simplifies the problem representation compared to their original form in the competition.
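To make the flavor of these problems concrete, the following is a hypothetical, illustrative Lean 3 formalization in the spirit of the Simple tier; it is not an entry copied from FunEq. A single substitution followed by linear arithmetic derives f(0) = 0 from Cauchy's equation.

```lean
-- Illustrative only (assumes mathlib's real numbers); not taken from the FunEq dataset.
import data.real.basic

theorem cauchy_f_zero (f : ℝ → ℝ)
  (h : ∀ x y : ℝ, f (x + y) = f x + f y) :
  f 0 = 0 :=
begin
  have h0 := h 0 0,   -- substitute x = 0, y = 0: f (0 + 0) = f 0 + f 0
  rw add_zero at h0,  -- rewrite 0 + 0 to 0:      f 0 = f 0 + f 0
  linarith,           -- conclude f 0 = 0
end
```

Problems in the Intermediate and Hard tiers build on such steps with, for example, injectivity, surjectivity, and induction arguments.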
**4** **The FEAS Agent**
**Function FEAS(O, α):**
  PUSH(st, O)
  **for** j ← 1 **to** t **do**
    **if** α = {} **then**
      p ← PROMPTIFY(st, Bad(O))
      α ← PARSETACTIC(LLM(p))
    **end**
    a ← POP(α)
    O′ ← T(O, a)
    **if** O′ = QED **then**
      **terminate successfully**
    **else if** O′ ∈ Err **or** ∃ O′′ ∈ st s.t. O′ ≡ O′′ **then**
      add a to Bad(O)
      α ← {}
    **else**
      FEAS(O′, α)
    **end**
  **end**
  POP(st)

**Algorithm 1:** Given an initial proof state O_in and an empty queue of tactics α, FEAS aims to find a sequence of tactics that transforms O_in into the goal state QED. Each proof state is either a set of obligations (goal-hypothesis pairs) or an error state. The agent utilizes a stack st to manage proof states, a failure dictionary Bad(O) to track unproductive tactics, and the functions PROMPTIFY and PARSETACTIC to interact with an LLM. The algorithm proceeds by iteratively querying the LLM for tactics, applying them to the current proof state, and adjusting its search based on success, errors, or lack of progress, as determined by a symbolic checker and the transition function T(O, a).
The FEAS (Functional Equation Automated Solver) agent (Algorithm 1)
builds upon the foundation of the COPRA framework [19], specializing in the
domain of functional equations.
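The control flow of Algorithm 1 can be summarized in a short Python sketch. This is an illustrative rendering rather than the actual FEAS implementation: the callables `llm`, `promptify`, `parse_tactics`, and `apply_tactic` are hypothetical stand-ins for the LLM query, PROMPTIFY, PARSETACTIC, and the transition function T, and the handling of the shared tactic queue is simplified.

```python
QED, ERR = "QED", "ERR"  # sentinel outcomes of the transition function (illustrative)

def feas(state, tactics, stack, bad,
         llm, promptify, parse_tactics, apply_tactic, t=10):
    """Depth-first search over proof states with a failure dictionary (sketch of Algorithm 1)."""
    stack.append(state)                               # PUSH(st, O)
    for _ in range(t):
        if not tactics:                               # empty queue: ask the LLM for new tactics
            prompt = promptify(stack, bad.get(state, set()))
            tactics = parse_tactics(llm(prompt))
            if not tactics:                           # nothing usable was parsed
                break
        tactic = tactics.pop(0)                       # a <- POP(alpha)
        new_state = apply_tactic(state, tactic)       # O' <- T(O, a)
        if new_state == QED:
            return True                               # proof completed
        if new_state == ERR or new_state in stack:    # error, or no progress (state already seen)
            bad.setdefault(state, set()).add(tactic)  # remember the unproductive tactic
            tactics = []                              # force a fresh LLM query next iteration
        elif feas(new_state, tactics, stack, bad,
                  llm, promptify, parse_tactics, apply_tactic, t):
            return True                               # a deeper call closed the proof
    stack.pop()                                       # backtrack
    return False
```

A search would start from the initial obligation with an empty tactic queue, an empty stack, and an empty failure dictionary, with the stand-ins wrapping the actual LLM and Lean interactions.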
**Prompt Engineering.** FEAS introduces a key refinement in the system prompt structure. Rather than directly soliciting a Lean proof step, FEAS guides an LLM
through a multi-stage response generation process. It prompts the LLM to first
articulate a high-level proof strategy in natural language, then formalize and
translate this strategy into a Lean-compatible proof.
Fig. 1: Example of FEAS’s Prompting and Response Generation for a Functional
Equation Problem. (a) An example of a domain-specific heuristic included in the
system prompt. (b) Natural language proof steps generated by the LLM in response to the current proof state. (c) The corresponding Lean proof generated
by the LLM, segmented into blocks for error handling and parsing.
**Multi-Block Parsing and Error Handling.** FEAS adopts a dynamic block-based parsing strategy to manage the multi-line Lean proofs generated by an LLM. This strategy enhances robustness by dividing the generated proof into
logical blocks based on the underlying structure of Lean proofs. By processing
each block independently, FEAS can effectively isolate and recover from errors in
specific parts of the proof, potentially salvaging and utilizing valid proof segments
even if the overall proof generated by the LLM is not entirely correct.
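The exact block boundaries follow the underlying structure of Lean proofs; a minimal sketch of the idea, with `runs_without_error` as a hypothetical stand-in for a check against the Lean environment, might split a `begin ... end` script at top-level commas and keep the longest prefix of blocks that still checks.

```python
import re
from typing import Callable, List

def split_into_blocks(lean_proof: str) -> List[str]:
    """Naively split a Lean 3 begin...end script into tactic blocks (illustrative sketch)."""
    body = re.sub(r"^begin", "", lean_proof.strip())
    body = re.sub(r"end$", "", body).strip()
    blocks, depth, current = [], 0, ""
    for ch in body:
        if ch in "([{":
            depth += 1
        elif ch in ")]}":
            depth -= 1
        if ch == "," and depth == 0:   # a comma at nesting depth 0 ends a tactic block
            blocks.append(current.strip())
            current = ""
        else:
            current += ch
    if current.strip():
        blocks.append(current.strip())
    return blocks

def longest_valid_prefix(blocks: List[str],
                         runs_without_error: Callable[[str], bool]) -> List[str]:
    """Keep the longest prefix of blocks the checker accepts, salvaging partial proofs."""
    accepted: List[str] = []
    for block in blocks:
        candidate = accepted + [block]
        if runs_without_error("begin " + ", ".join(candidate) + " end"):
            accepted = candidate
        else:
            break
    return accepted
```

In practice the check would be performed through the proof environment (for instance by closing the remaining goals with `sorry` and inspecting the reported errors), and the splitting would have to respect more of Lean's syntax than this sketch does.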
**Automatic Tactic Application.** After either successful parsing of all blocks or encountering an error, FEAS attempts the automatic application of the nlinarith tactic, which can simplify proofs by automatically handling complex algebraic manipulations that would otherwise need to be done manually. If this application succeeds, the step is incorporated into the proof; otherwise, it is omitted. This provides automation and taps into the power of Lean’s built-in tactics.
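Continuing the same sketch, the opportunistic `nlinarith` attempt can be expressed in a few lines; `runs_without_error` is again a hypothetical stand-in for a check against the Lean environment.

```python
from typing import Callable, List

def try_nlinarith(accepted_blocks: List[str],
                  runs_without_error: Callable[[str], bool]) -> List[str]:
    """Append nlinarith, but keep it only if the resulting script still checks (sketch)."""
    candidate = accepted_blocks + ["nlinarith"]
    script = "begin " + ", ".join(candidate) + " end"
    return candidate if runs_without_error(script) else accepted_blocks
```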
| Algorithm | Few Shots | COPRA | FEAS | FEAS+Heuristics |
|---|---|---|---|---|
| GPT | 50.0% (50.0%) | 77.78% (77.78%) | 80.56% (83.33%) | 86.11% (94.44%) |
| Gemini | 33.34% (38.89%) | 52.78% (61.11%) | 80.56% (88.89%) | 88.89% (88.89%) |
| Claude | 0% (0%) | 83.33% (83.33%) | 91.67% (100%) | 86.11% (88.89%) |
| Llama3 | 0% (0%) | 50.0% (50.0%) | **75.0% (77.78%)** | 63.89% (66.67%) |

Table 1: Performance comparison of pass@1 (pass@2) on the simple tier of the FunEq dataset.

| Algorithm | Few Shots | COPRA | FEAS | FEAS+Heuristics |
|---|---|---|---|---|
| GPT | 0% (0%) | 0% (0%) | 6.67% (13.33%) | **10.0% (13.33%)** |
| Gemini | 0% (0%) | 6.67% (6.67%) | 10.0% (13.33%) | 3.33% (6.67%) |
| Claude | 0% (0%) | 6.67% (6.67%) | 13.33% (13.33%) | 13.33% (13.33%) |
| Llama3 | 0% (0%) | 0% (0%) | 0% (0%) | **6.67% (6.67%)** |

Table 2: Performance comparison of pass@1 (pass@2) on the intermediate tier of the FunEq dataset.

**Domain-Specific Heuristics.** FEAS integrates functional equation heuristics [3] directly into the system prompt alongside Lean syntax examples. These heuristics encompass substitution-based simplification, techniques for proving bijectivity, exploitation of symmetry and involution, and the utilization of induction over natural and rational numbers. This integration aims to guide the LLM towards generating more relevant and successful proof strategies within this specific problem domain.
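As a concrete illustration of the substitution heuristic (again hypothetical, not an excerpt from FEAS's actual prompt), specializing the functional-equation hypothesis at well-chosen points often does most of the work; in Lean 3 this amounts to instantiating the universally quantified hypothesis and closing the goal with linear arithmetic.

```lean
-- Illustrative only: the substitutions (0, 0) and (x, -x) show that an additive f is odd.
import data.real.basic

theorem cauchy_odd (f : ℝ → ℝ)
  (h : ∀ x y : ℝ, f (x + y) = f x + f y) (x : ℝ) :
  f (-x) = - f x :=
begin
  have h0 := h 0 0,       -- f (0 + 0) = f 0 + f 0
  rw add_zero at h0,      -- f 0 = f 0 + f 0, hence f 0 = 0
  have hx := h x (-x),    -- f (x + -x) = f x + f (-x)
  rw add_neg_self at hx,  -- f 0 = f x + f (-x)
  linarith,               -- combine the two facts: f (-x) = - f x
end
```

In FEAS itself these heuristics are included in the system prompt alongside Lean syntax examples, as illustrated in Figure 1(a).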
**5** **Evaluation**
We conduct a series of experiments to evaluate the performance of the FEAS
agent and the impact of incorporating domain-specific heuristics. Our evaluation
includes comparisons across four different LLMs: GPT-4 Turbo, Gemini-1.5-Pro, Claude-3.5-Sonnet, and Llama3 70b. We evaluate FEAS against two baselines, Few-Shots and COPRA, the original in-context learning agent, which serve as points of comparison. We assess FEAS in two distinct configurations:
one with the integrated domain-specific functional equation heuristics and one
without.
The experiments are performed on the simple and intermediate tiers of the
FunEq dataset. To gauge performance on more complex problems, we further
evaluate all agents on the A1 subset of the hard dataset, which consists of the easiest shortlisted algebra problems from each corresponding IMO year. To control
for potential variability in LLM responses, we execute each experiment twice on
the simple and intermediate datasets. Due to computational resource constraints,
we limit our evaluation to a single run on the A1 subset.
In all experiments, we impose a maximum limit of 60 LLM queries and a
timeout of 720 seconds. The LLMs are used with a temperature setting of 0,
prioritizing deterministic responses. Performance is assessed using Pass@1 and
Pass@2 metrics, representing success on the first and second attempts, respectively. For the simple and intermediate datasets, Pass@1 is calculated as an
average of the results of the two runs.
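For concreteness, one plausible way to compute these metrics from the two runs is sketched below, under the reading that Pass@2 counts a problem as solved if either run solves it; the `run1`/`run2` mappings from problem identifiers to success flags are assumptions of the sketch, not FEAS's actual evaluation code.

```python
from typing import Dict, Tuple

def pass_metrics(run1: Dict[str, bool], run2: Dict[str, bool]) -> Tuple[float, float]:
    """Pass@1 averaged over two runs, and Pass@2 as success in at least one run (sketch)."""
    problems = list(run1)
    pass_at_1 = sum((run1[p] + run2[p]) / 2 for p in problems) / len(problems)
    pass_at_2 = sum(run1[p] or run2[p] for p in problems) / len(problems)
    return pass_at_1, pass_at_2
```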
**5.1** **Results**
Tables 1 and 2 show our evaluation across the Simple and Intermediate tiers of
FunEq. All combinations of agents and LLMs fail to generate proofs on the Hard
tier of FunEq. On the Simple dataset, FEAS agents consistently achieve the highest success rates across all evaluated LLMs. FEAS with integrated heuristics achieves the highest performance with GPT and Gemini, demonstrating the efficacy of domain-specific knowledge. However, with Claude and Llama3, FEAS without heuristics shows superior performance on this dataset, suggesting that in certain LLM configurations heuristics may misguide the proof search.
On the more challenging Intermediate dataset, success rates drop substantially for all approaches. However, FEAS agents again consistently rank highest, highlighting their ability to navigate more complex functional equation proofs. Again, in some cases, such as with Gemini, FEAS performs better without heuristics. Furthermore, all methods fail to
generate proofs on the Hard tier of FunEq, indicating that significant challenges
remain in automated theorem proving for functional equations.
**6** **Conclusion**
Our experiments establish the FEAS agent as an advancement in automated
theorem proving for functional equations. FEAS's refinements to prompting and parsing, together with the integration of domain-specific heuristics, demonstrate improvements over the baselines. While results on the Simple dataset are encouraging, performance on the
Intermediate and Hard datasets highlights the ongoing challenges in this complex
domain.
Specifically, the challenges revealed by our evaluation can be decomposed
into two distinct sub-problems: (1) proposing mathematically useful proof steps,
and (2) accurately translating these high-level steps into the formal language
of the theorem prover. Each of these sub-problems poses its own complexities,
requiring distinct approaches for further improvement.
Several avenues present themselves for future research. One is the development of agents tailored to specific sub-tasks within the framework. Incorporating a
broader repertoire of high-level proof tactics within the LLM’s prompting could
improve the performance of generating Lean proof steps. Investigating search algorithms beyond the currently employed depth-first search has the potential to
improve efficiency and solution discovery. Finally, designing efficient self-learning
mechanisms for FEAS would enable it to continuously refine its strategies based
on both successful and unsuccessful proof attempts.
**References**
1. Azerbayev, Z., Schoelkopf, H., Paster, K., Santos, M.D., McAleer, S., Jiang, A.Q.,
Deng, J., Biderman, S., Welleck, S.: Llemma: An open language model for mathematics. arXiv preprint arXiv:2310.10631 (2023)
2. Blaauwbroek, L., Urban, J., Geuvers, H.: The tactician: A seamless, interactive
tactic learner and prover for coq. In: International Conference on Intelligent Computer Mathematics. pp. 271–277. Springer (2020)
3. Chen, E.: Introduction to functional equations (2016), https://web.evanchen.cc/handouts/FuncEq-Intro/FuncEq-Intro.pdf, accessed: 08 May 2024
4. Gauthier, T., Kaliszyk, C., Urban, J., Kumar, R., Norrish, M.: Tactictoe: learning
to prove with tactics. Journal of Automated Reasoning 65(2), 257–286 (2021)
5. Han, J.M., Rute, J., Wu, Y., Ayers, E.W., Polu, S.: Proof artifact co-training for
theorem proving with language models (2022)
6. Harrison, J., Urban, J., Wiedijk, F.: History of Interactive Theorem Proving, vol. 9,
pp. 135–214. North Holland (Dec 2014). https://doi.org/10.1016/B978-0-444-51624-4.50004-6
7. Huet, G., Kahn, G., Paulin-Mohring, C.: The Coq Proof Assistant : A Tutorial :
Version 7.2. Research Report RT-0256, INRIA (Feb 2002), https://inria.hal.science/inria-00069918, projet COQ
8. IMO: International mathematical olympiad official website, https://www.imo-official.org/, accessed: 08 May 2024
9. Jiang, A., Staats, C.E., Szegedy, C., Rabe, M., Jamnik, M., Li, W., Wu, Y.T.:
Autoformalization with large language models. NeurIPS (2022)
10. Jiang, A.Q., Li, W., Tworkowski, S., Czechowski, K., Odrzygóźdź, T., Miłoś, P.,
Wu, Y., Jamnik, M.: Thor: Wielding hammers to integrate language models and
automated theorem provers. Advances in Neural Information Processing Systems
**35, 8360–8373 (2022)**
11. Lample, G., Lacroix, T., Lachaux, M.A., Rodriguez, A., Hayat, A., Lavril, T.,
Ebner, G., Martinet, X.: Hypertree proof search for neural theorem proving. Advances in neural information processing systems 35, 26337–26349 (2022)
12. Megill, N., Wheeler, D.: Metamath: A Computer Language for Mathematical Proofs. Lulu.com (2019), https://books.google.com.sa/books?id=dxqeDwAAQBAJ
13. de Moura, L., Kong, S., Avigad, J., van Doorn, F., von Raumer, J.: The lean theorem prover (system description). In: Felty, A.P., Middeldorp, A. (eds.) Automated
Deduction - CADE-25. pp. 378–388. Springer International Publishing (2015)
14. OpenAI: Gpt-4 and gpt-4 turbo, https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo
15. Paulson, L.C., Blanchette, J.C.: Three years of experience with sledgehammer, a
practical link between automatic and interactive theorem provers. In: Proceedings
of the 8th International Workshop on the Implementation of Logics (IWIL-2010),
Yogyakarta, Indonesia. EPiC. vol. 2 (2012)
16. Polu, S., Han, J.M., Zheng, K., Baksys, M., Babuschkin, I., Sutskever, I.: Formal
mathematics statement curriculum learning (2022)
17. Polu, S., Sutskever, I.: Generative language modeling for automated theorem proving (2020)
18. Slind, K., Norrish, M.: A brief overview of hol4. In: Mohamed, O.A., Muñoz, C.,
Tahar, S. (eds.) Theorem Proving in Higher Order Logics. pp. 28–32. Springer
Berlin Heidelberg, Berlin, Heidelberg (2008)
-----
19. Thakur, A., Tsoukalas, G., Wen, Y., Xin, J., Chaudhuri, S.: An in-context learning
agent for formal theorem-proving (2024)
20. Trinh, T., Wu, Y., Le, Q., et al.: Solving olympiad geometry without human demon[strations. Nature 625, 476–482 (2024). https://doi.org/10.1038/s41586-023-06747-](https://doi.org/10.1038/s41586-023-06747-5)
[5, https://doi.org/10.1038/s41586-023-06747-5](https://doi.org/10.1038/s41586-023-06747-5)
21. Venkatachala, B.: Functional Equations A Problem Solving Approach. Prism
(2002)
22. Wenzel, M., Paulson, L.C., Nipkow, T.: The isabelle framework. In: Mohamed,
O.A., Muñoz, C., Tahar, S. (eds.) Theorem Proving in Higher Order Logics. pp.
33–38. Springer Berlin Heidelberg, Berlin, Heidelberg (2008)
23. Wu, M., Norrish, M., Walder, C., Dezfouli, A.: Tacticzero: Learning to prove theorems from scratch with deep reinforcement learning. Advances in Neural Information Processing Systems 34, 9330–9342 (2021)
24. Yang, K., Deng, J.: Learning to prove theorems via interacting with proof assistants. In: International Conference on Machine Learning. pp. 6984–6994. PMLR
(2019)
25. Yang, K., Swope, A., Gu, A., Chalamala, R., Song, P., Yu, S., Godil, S., Prenger,
R.J., Anandkumar, A.: Leandojo: Theorem proving with retrieval-augmented language models. Advances in Neural Information Processing Systems 36 (2024)
26. Zheng, K., Han, J.M., Polu, S.: Minif2f: a cross-system benchmark for formal
olympiad-level mathematics. arXiv preprint arXiv:2109.00110 (2021)
-----
**7** **Appendix**
**7.1** **Qualitative Analysis**
```
theorem intermediate_funeq_2
  (f : ℝ → ℝ)
  (h_0 : ∀ x, f(x + 1) = f(x) + 1)
  (h_1 : ∀ x, x ≠ 0 → f(1/x) = f(x)/x^2) :
  ∀ x, x ≠ 0 → f(1 + 1/x) = 1 + f(x)/x^2 :=
begin
  intro x,
  intro hx,
  have h_2 : f (1 / x + 1) = f (1 / x) + 1 := h_0 (1 / x),
  have h_3 : f (1 / x) = f x / x ^ 2 := h_1 x hx,
  rw h_3 at h_2,
  rw add_comm at h_2,
  nlinarith,
end
```
Listing 1.1: FEAS Proof
```
theorem intermediate_funeq_2
  (f : ℝ → ℝ)
  (h_0 : ∀ x, f(x + 1) = f(x) + 1)
  (h_1 : ∀ x, x ≠ 0 → f(1/x) = f(x)/x^2) :
  ∀ x, x ≠ 0 → f(1 + 1/x) = 1 + f(x)/x^2 :=
begin
  intro x, intro hx,
  rw [h_0 (1 / x), h_1 x hx] at *,
  field_simp [hx],
  rw mul_comm at *,
  rw [←h_0 (1 / x), h_1 x hx] at *,
  rw [←h_0 (1 / x), h_1 x hx] at *,
  ring_nf,
end
```
Listing 1.2: COPRA Incomplete Proof
To illustrate the distinct strengths of FEAS, we examine a specific functional
equation problem (intermediate_funeq_2) where it succeeds while COPRA does
not. FEAS’ solution demonstrates its ability to generate high-level intermediate
proof steps using the have tactic, mirroring a human’s approach. This contrasts
with COPRA, which focuses solely on lower-level Lean tactics. FEAS’ strategy,
guided by the system prompt instruction to first generate a natural language
proof sketch, leads to a more human-readable and strategically structured proof.
Furthermore, FEAS’ block-by-block parsing successfully handles errors within
individual proof blocks. While all three lines generated by the LLM contained
-----
incorrect tactic applications, FEAS was able to isolate and utilize the correct
proof concepts within each block. This error recovery mechanism showcases the
robustness of FEAS’ parsing strategy.
Interestingly, despite the LLM failing to suggest the final correct tactic (erroneously proposing `exact h_2`), FEAS’ automated application of the nlinarith
tactic successfully concludes the proof. This demonstrates the complementary
nature of FEAS’ high-level reasoning and the underlying theorem prover’s automated capabilities.
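This block-level recovery can be pictured as a short loop. The sketch below is an illustrative reconstruction, not the authors' implementation: the `accepts` and `proves` callbacks and the `FALLBACK_TACTICS` list are assumptions standing in for calls to the Lean prover.

```python
# Hedged sketch of block-by-block proof assembly with an automated closing tactic.
# `accepts(blocks)` is assumed to mean "the partial proof type-checks";
# `proves(blocks)` is assumed to mean "all goals are closed".

FALLBACK_TACTICS = ["nlinarith,", "linarith,", "ring_nf,"]  # assumed fallback list

def assemble_proof(llm_blocks, accepts, proves):
    kept = []
    for block in llm_blocks:
        if accepts(kept + [block]):   # keep a block only if the prover accepts it in context
            kept.append(block)
        # an erroneous block is dropped, but later blocks are still tried
    if proves(kept):
        return kept
    for tactic in FALLBACK_TACTICS:   # automated finishing move (e.g. nlinarith in the example above)
        if proves(kept + [tactic]):
            return kept + [tactic]
    return None                       # proof attempt failed
```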
-----
| [
"Mahdi, Buali",
"Robert, Hoehndorf"
] | 2024-07-05T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2407.14521v1 | https://arxiv.org/abs/2407.14521 | https://www.semanticscholar.org/paper/a6910309c70c5dbd604431993d7c2d0001886424 |
Towards an Efficient Architecture for Intelligent Theorem Provers | N/A | null | # Towards an Efficient Architecture for Intelligent Theorem Provers
Michael Rawson and Giles Reger
University of Manchester, Manchester, UK
High-performance automated theorem provers for first-order logic (e.g. CVC4 [2], E [11],
iProver [6], Vampire [7]) include hand-coded heuristics to guide proof search, often exposed as
individual prover options. These heuristics perform well, but have a number of disadvantages
including a lack of generality over problems (necessitating portfolio modes [9, 10]), inability
to learn from experience, and maintenance overhead. There is therefore interest in employing
machine-learning techniques to guide proof search in automatic theorem provers, with approaches such as FEMaLeCoP [5], ENIGMA [4], or Deep Network Guided Proof Search [8]
(DNGPS).
These systems experience a trade-off between the expressivity of their learning algorithms
and the impact of guidance on “raw” prover performance. At extremes:
_• The heuristic is fast, but does not take into account the entire proof state (e.g._ the
MaLeCoP family), restricting the prover to learning from features.
_• The heuristic takes into account the entire proof state (usually via neural networks), but_
is too slow to use all the time. The DNGPS system runs with the heuristic for a fixed
amount of time, then reverts back to the old heuristics thereafter.
Ideally, an intelligent system would guide search based on the structure of the current proof
state, while also remaining performant enough to run continuously without significantly affecting prover performance. We present a prover architecture which attempts to achieve this ideal,
and show that it has several other desirable properties.
**Desiderata** In such a system we require the following:
1. Proof state must be small. Attempting to evaluate large proof states structurally requires
a lot of resources. Saturation-based provers such as E or Vampire can have very large
proof states, for example.
2. Evaluation of states must be possible in parallel. Machine-learning algorithms tend to
operate more efficiently in batches. Tree-based approaches (tableau etc.) lend themselves
to this, whereas saturation provers are inherently sequential.
3. Subgoals must be independent. If the prover has a notion of (sub-)goals which must be
dispatched (such as in tableau or connection provers), these should be independent of the
rest of the search space. Otherwise, the learning system is trying to learn while blind to
the context of the search.
**Calculus and Algorithm** We implement a first-order tableau calculus without unification,
with equality, on non-clausal formulae. By using this very “natural” representation, the hope
is that inherent proof structure will be more apparent to machine-learning algorithms, which
do not have to invert the process of clausification. The tableau space is explored in parallel by
-----
means of a UCT-maximising tree search (similar to that employed by MonteCoP [3]), with new
goals placed on a global queue for evaluation in batches by means of arbitrary machine-learning
methods, currently a GCN [1, 12].
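As a rough illustration of this architecture (not the authors' implementation), the sketch below interleaves UCT-style selection over a goal tree with batched scoring of newly generated subgoals; the class layout is invented for exposition and `score_batch` is an assumed stand-in for the GCN evaluator.

```python
import math
from collections import deque

C_EXPLORE = 1.4  # exploration constant; the value is an assumption

class Node:
    def __init__(self, goal, parent=None):
        self.goal, self.parent, self.children = goal, parent, []
        self.visits, self.value, self.prior = 0, 0.0, 0.5

def uct(node, parent_visits):
    if node.visits == 0:
        return float("inf")
    return (node.value / node.visits
            + C_EXPLORE * node.prior * math.sqrt(math.log(parent_visits) / node.visits))

def search(root, expand, score_batch, iterations=1000, batch_size=32):
    """expand(goal) -> subgoals; score_batch(goals) -> heuristic values in [0, 1]."""
    pending = deque()                                  # global queue of unevaluated goals
    for _ in range(iterations):
        node = root
        while node.children:                           # selection: descend along max-UCT children
            node = max(node.children, key=lambda c: uct(c, node.visits))
        for goal in expand(node.goal):                 # expansion: enqueue subgoals for batched scoring
            child = Node(goal, parent=node)
            node.children.append(child)
            pending.append(child)
        while len(pending) >= batch_size:              # evaluation in batches (e.g. by a GCN)
            batch = [pending.popleft() for _ in range(batch_size)]
            for leaf, value in zip(batch, score_batch([b.goal for b in batch])):
                leaf.prior = value
                n = leaf
                while n is not None:                   # back-propagate the value towards the root
                    n.visits += 1
                    n.value += value
                    n = n.parent
    return root
```

Because scoring only happens when the queue reaches `batch_size`, several workers could expand the tree concurrently while a single device drains the queue, which is the efficiency property argued for below.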
**Advantages** This prover architecture satisfies requirements 1–3, but also shows promise in
other areas. In terms of reasoning, the calculus used is relatively flexible, allowing for extension
to reasoning with theories, induction, and full higher-order logic without modifying the whole
prover. In terms of efficiency, such a prover can also make full use of multi-core systems, allowing
for linear exploration speedup with the number of available cores, eventually saturating the
device or core used for running machine-learned algorithms. The prover is also well-suited to a
hybrid approach in which promising subgoals are dispatched to an existing first-order ATP.
**Evaluation and Future Work** Evaluation and implementation of an example prover system
based on this architecture is ongoing, but initial results are promising, with the system appearing
to “learn to prove” harder problems based on prior experience with easier problems. Future
work includes exploring calculus options, optimisation, further exploration of machine-learning
methods, and using the prover as a “pre-processor” for an existing first-order ATP.
## References
[1] Micha¨el Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks
on graphs with fast localized spectral filtering. In Advances in Neural Information Processing
_Systems, pages 3844–3852, 2016._
[2] Morgan Deters, Andrew Reynolds, Tim King, Clark W. Barrett, and Cesare Tinelli. A tour of
CVC4: how it works, and how to use it. In Formal Methods in Computer-Aided Design, FMCAD
_2014, Lausanne, Switzerland, October 21-24, 2014, page 7, 2014._
[3] Michael Färber, Cezary Kaliszyk, and Josef Urban. Monte carlo connection prover. arXiv preprint
_arXiv:1611.05990, 2016._
[4] Jan Jakubuv and Josef Urban. ENIGMA: efficient learning-based inference guiding machine. In
_International Conference on Intelligent Computer Mathematics, pages 292–302. Springer, 2017._
[5] Cezary Kaliszyk and Josef Urban. FEMaLeCoP: Fairly efficient machine learning connection
prover. In Logic for Programming, Artificial Intelligence, and Reasoning, pages 88–96. Springer,
2015.
[6] K. Korovin. iProver – an instantiation-based theorem prover for first-order logic (system description). In Proceedings of the 4th International Joint Conference on Automated Reasoning, (IJCAR
_2008), volume 5195 of Lecture Notes in Computer Science, pages 292–298. Springer, 2008._
[7] Laura Kovács and Andrei Voronkov. First-order theorem proving and Vampire. In International
_Conference on Computer Aided Verification, pages 1–35. Springer, 2013._
[8] Sarah Loos, Geoffrey Irving, Christian Szegedy, and Cezary Kaliszyk. Deep network guided proof
search. arXiv preprint arXiv:1701.06972, 2017.
[9] Michael Rawson and Giles Reger. Dynamic strategy priority: Empower the strong and abandon the
weak. In Proceedings of the 6th Workshop on Practical Aspects of Automated Reasoning (PAAR),
pages 58–71.
[10] Giles Reger, Martin Suda, and Andrei Voronkov. The challenges of evaluating a new feature in
Vampire. In Vampire Workshop, pages 70–74, 2014.
[11] Stephan Schulz. E — a brainiac theorem prover. AI Communications, 15(2, 3):111–126, 2002.
[12] Mingzhe Wang, Yihe Tang, Jian Wang, and Jia Deng. Premise selection for theorem proving by
deep graph embedding. In Advances in Neural Information Processing Systems, pages 2786–2796,
2017.
-----
| [
"Michael, Rawson",
"Giles, Reger"
] | 2019-01-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
Two Learning Operators for Clause Selection Guidance: An Experimental Evaluation | N/A | null | [
"Martin, Suda"
] | 2024-09-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
|
TypedThinker: Typed Thinking Improves Large Language Model Reasoning | Despite significant advancements in the reasoning capabilities of Large Language Models (LLMs), the lack of diverse reasoning solutions often makes them trapped in a limited solution search area. In this paper, we propose TypedThinker, a novel framework that enhances LLMs' problem-solving abilities by incorporating multiple reasoning types (deductive, inductive, abductive, and analogical). Our analysis across four benchmarks reveals that different reasoning types uniquely solve distinct sets of problems, highlighting the importance of diverse thinking approaches. TypedThinker addresses two key challenges: selecting appropriate reasoning types for given problems and effectively implementing specific reasoning types. Through self-training on successful experiences, TypedThinker learns an implicit policy for reasoning type selection and application. Experimental results demonstrate significant improvements over baseline models, with accuracy increases of 3.4% for Mistral 7B and 16.7% for LLaMA3 8B across four reasoning benchmarks. Notably, TypedThinker shows effective generalization to new benchmarks and can further enhance the reasoning capability of powerful models like GPT-4o. The code is released at https://github.com/dqwang122/ThinkHub. | TypedThinker is a novel framework that enhances LLMs' problem-solving abilities by incorporating multiple reasoning types (deductive, inductive, abductive, and analogical) and shows effective generalization to new benchmarks and can further enhance the reasoning capability of powerful models like GPT-4o. | ## TYPEDTHINKER: TYPED THINKING IMPROVES LARGE LANGUAGE MODEL REASONING
**Danqing Wang1,2∗, Jianxin Ma2, Fei Fang1, Lei Li1**
1 Carnegie Mellon University  2 Qwen Team
[email protected]
ABSTRACT
Despite significant advancements in the reasoning capabilities of Large Language
Models (LLMs), the lack of diverse reasoning solutions often makes them trapped
in a limited solution search area. In this paper, we propose TypedThinker, a
reasoning framework that diversifies LLMs’ reasoning solutions by incorporating
multiple reasoning types (deductive, inductive, abductive, and analogical). Our
analysis across four benchmarks reveals that different reasoning types uniquely
solve distinct sets of problems, highlighting the importance of diverse thinking
approaches. TypedThinker addresses two key challenges: selecting appropriate
reasoning types for given problems and effectively implementing specific reasoning
types. Through self-training on successful experiences, TypedThinker learns
an implicit policy for reasoning type selection and application. Experimental
results demonstrate significant improvements over baseline models, with accuracy
increases of 3.4% for Mistral 7B and 16.7% for LLaMA3 8B across four reasoning
benchmarks. Notably, TypedThinker shows effective generalization to new
benchmarks and can further enhance the reasoning capability of powerful models
like GPT-4o. The code is released at https://github.com/dqwang122/ThinkHub.
1 INTRODUCTION
Large Language Models (LLMs) exhibited promising capabilities in reasoning, such as solving
logical reasoning and mathematical problems (Bai et al., 2022; OpenAI, 2023). Plenty of work has
been done to improve the reasoning capabilities by adding reasoning thoughts (Wei et al., 2022) and
making these thoughts more elaborate (Fu et al., 2023; Zheng et al., 2024). However, the exploration
of novel reasoning thoughts is understudied. The lack of diversity in reasoning makes LLMs easily
trapped in a fixed mindset, which limits their performance in solving difficult problems.
Current research efforts to enhance reasoning diversity remain unsatisfactory. AlphaCode (Li et al.,
2022; Leblond et al., 2023) randomizes the difficulty level and categorical tags of the code problems
in the prompt to encourage diversity. However, its dependency on manually curated attributes makes it
challenging to scale and apply beyond coding problems. On the other hand, increasing the temperature
is an easy way to generate superficially diverse outputs, but it often fails to produce high-quality
reasoning solutions with significant differences. For example, repeated sampling (Brown et al., 2024)
generates 100,000 solutions per problem with temperature 0.6, but their solutions [1] are mostly based
on deductive reasoning, which starts from the problem context to infer the answer step by step.
Encouraging LLMs to apply more types of reasoning can make them think more diversely. Besides
deductive reasoning, there are other reasoning types such as inductive (Flach and Kakas, 2000),
abductive (Douven, 2011), and analogical reasoning (Bartha, 2013). These reasoning types reflect
different mental methods of drawing conclusions, leading to diverse thinking processes. This is also
inspired by human cognitive processes, where individuals selectively apply contextually appropriate
logical reasoning strategies (Halpern, 2014; Bronkhorst et al., 2020). For example, when some
answer candidates are given for a free-response math problem, humans are more likely to verify the
correctness of each option instead of solving it from the given conditions. This reflects human’s
implicit mental shift in the type of reasoning from deductive to abductive.
_∗Work was done during the internship in Qwen Team._
1Their results can be found at: https://huggingface.co/datasets/ScalingIntelligence/monkey_business.
-----
To investigate how different reasoning types affect the LLMs’ performance in solving reasoning
problems, we calculate the percentage of problems that can only be solved by one particular reasoning
type in Figure 1. Specifically, we generate 10 samples for each reasoning type with the Mistral 7B
instruct (Jiang et al., 2023) on four benchmarks: LogiQA (Liu et al., 2023a) and BBH (Suzgun et al.,
2022), GSM8k (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021). We consider that one
problem is solved by a reasoning type if at least one correct solution is found among the 10 samples of
this reasoning type. For example, the blue region in Figure 1 reflects the percentage of problems that
can only be solved by inductive reasoning, which is surprisingly effective on the MATH benchmark.
It means that without inductive reasoning, LLMs can hardly find a correct solution for these 11.03%
problems. As we can see, each reasoning type has a unique set of problems that can be solved by it,
highlighting the importance of divergent thinking based on the reasoning types to enlarge the solvable
problem set. More analysis can be found in Section 4.2.
Figure 1: The percentage distribution of each reasoning type for problems. The y-axis indicates the percentage of problems that can be only solved by one specific reasoning type.

Integrating diverse reasoning strategies into LLMs’ problem-solving capabilities has two main challenges. First, as shown in Figure 1, it is important to select the appropriate reasoning type for a given problem, otherwise, it may mislead the LLMs with the wrong thinking direction. Besides, it may be difficult for LLMs to follow a specific reasoning type to solve the problem. Therefore, we propose a novel approach TypedThinker with the implicit policy to select and conduct the appropriate reasoning types and the explicit memory to retrieve relevant experiences to aid problem-solving. TypedThinker uses the meta-thinker and the reasoner to select the reasoning types and
conduct the selected types of reasoning. It also
maintains an explicit memory to retrieve relevant knowledge and experiences. To learn the implicit
policy for reasoning types, TypedThinker is optimized based on its own successful experiences.
For each problem in the training set, it estimates the effectiveness score of each reasoning type
based on the success rate during sampling. The meta-thinker is fine-tuned on these empirical effectiveness scores to learn reasoning type selection for each problem. These successful reasoning
experiences are also used to enhance the reasoner’s capability of following a specific reasoning type
in problem-solving. During the self-training process, TypedThinker updates the implicit policy
of the meta-thinker to select reasoning types and the policy of the reasoner to apply this type, while
storing the successful reasoning experiences in the explicit memory for retrieval.
Experimental results show that TypedThinker improves Mistral 7B instruct (Jiang et al., 2023) by
3.4% and LLaMA3 8B instruct (Touvron et al., 2023) by 16.7% on the average accuracy of two logical
benchmarks, LogiQA and BBH, and two mathematics benchmarks, GSM8k and MATH. We further
demonstrate that TypedThinker can directly be applied to the new benchmark Contexthub (Hua
et al., 2024) and outperforms other baselines. Moreover, we show that the meta-thinker based on the
weak 7B model can enhance the performance of GPT-4o without distilling knowledge from strong
LLMs, which also sheds light on the weak-to-strong generalization (Burns et al., 2024).
2 RELATED WORK
**Logical Reasoning Logical reasoning includes various methods to emulate human-like thought**
processes (Wason and Johnson-Laird, 1972). Deductive reasoning focuses on deriving specific
conclusions from general principles or premises, ensuring that conclusions logically follow if the
premises are true (Johnson-Laird, 2010). In contrast, inductive reasoning involves generalizing
from specific instances to broader principles, often used to identify patterns and make predictions
based on empirical data (Flach and Kakas, 2000). Abductive reasoning, considered more creative
and open-ended, involves forming hypotheses to explain observations, often generating the most
plausible explanation rather than a guaranteed conclusion (Douven, 2011). Analogical reasoning is
concerned with the comparison between two or more objects and drawing a conclusion based on the
similarity (Bartha, 2013). Previous LLMs studies on logical reasoning mainly focus on benchmarking
-----
Figure 2: TypedThinker consists of three components: the meta-thinker to select the reasoning types, the explicit memory to retrieve relevant experience, and the reasoner to conduct the specific reasoning. The meta-thinker is fine-tuned to predict an effective score s ∈ [0, 1] for each reasoning type.

Figure 3: Learning the implicit policy and explicit memory by self-training.
its performance in different reasoning types (Bang et al., 2023; Dougrez-Lewis et al., 2024; Luo et al.,
2023; Yu et al., 2024). Instead, this paper mainly focuses on the selection and application of the
appropriate reasoning type when solving a specific problem.
**Reasoning in Large Language Models Plenty of studies have been done to enhance the reasoning**
capability of LLMs. Chain-of-thoughts methods focus on creating better instructions to improve the
quality of the reasoning process, such as Complex CoT (Du et al., 2023), Tree of Thought (ToT) (Yao
et al., 2023) and Graph of Thought (Besta et al., 2024). Refinement-based methods revise LLMs
solutions by the feedback from themselves or others model (Akyürek et al., 2023; Wang and Li,
2023; Du et al., 2023). Search-based methods use the reward model to search the best reasoning
path (Lightman et al.; Liu et al., 2023b; Hao et al., 2023). While most of them focus on creating
high-quality reasoning paths, the diversity of thinking is understudied. Our paper aims to diversify
the thinking process by incorporating different reasoning types.
**Self-improvement and Self-training in LLMs Recent works explore the self-improvement capability**
of LLMs, by finetuning LLMs on their high-quality generations (Wang et al., 2023; Huang et al.,
2023; Toshniwal et al., 2024). This process can be extended to multiple iterations Gülçehre et al.
(2023); Aksitov et al.. Benefiting the LLMs’ ability to follow instructions, researchers also ask LLMs
to provide feedback themselves and improve their responses without finetuning (Peng et al., 2023;
Shinn et al., 2023). This can be further enhanced by using their own feedback as the reward model
to provide better signals for finetuning (Yuan et al., 2024; Kumar et al., 2024). In this paper, we
focus on stimulating their capabilities to conduct various reasoning types and use these experiences
to diversify their thinking in reasoning type selection and following.
3 TYPEDTHINKER: DIVERSIFY THINKING WITH TYPED REASONING
In this paper, we focus on four logical reasoning types: deductive, inductive, abductive, and analogical
reasoning. For each reasoning type, we provide a short definition and a simple example to demonstrate
the inference rules, which are listed in Table 8 in the Appendix. Based on that, we introduce a
reasoning framework TypedThinker to diversify LLMs’ thinking with different reasoning types.
As shown in Figure 2, there are three components in TypedThinker: the meta-thinker to select
reasoning type, the finetuned reasoner to conduct specific reasoning, and explicit memory to retrieve
experience. TypedThinker optimizes the implicit policy of the meta-thinker and reasoner and
updates the explicit memory based on self-training.
3.1 TYPED REASONING WITH THE IMPLICIT POLICY AND EXPLICIT MEMORY
Let D = {(x1, y1), · · ·, (xN, yN)} be a set of N problems, where xi and yi are the problem and the ground-truth answer of the i-th instance. We define a reasoning type space F that includes an empty type and four types of reasoning: deductive, inductive, abductive, and analogical. The goal is to model the selection and implementation of various reasoning types with an implicit policy and an explicit memory, thus enhancing LLMs’ performance in reasoning tasks. Here we use the exact match between the
-----
ground-truth answer y and the system output ŷ to calculate the accuracy as the measurement of task
performance.
**Meta-thinker to select the reasoning type.** Given a problem x, the goal of the meta-thinker is to select an appropriate set of reasoning types to solve the problem. Specifically, it predicts an effectiveness score sx,k ∈ [0, 1] for each reasoning type fk ∈ F, which can be represented as sx,k = πθ(x, fk). sx,k = 0 indicates that the problem x can hardly be solved by the reasoning type fk within a limited number of samples [2]. Note that the effectiveness scores of different types are independent of each other and do not necessarily sum to 1. The most effective reasoning type is defined as f∗(x) = arg max_{fk∈F} sx,k. Meanwhile, we can obtain the set of reasoning types with a non-zero effectiveness score, which we call the effective set: F(x) = {fk | sx,k > 0}. We initialize πθ with the pre-trained LLM and later fine-tune the parameters θ during the self-training. The prompt is listed in Appendix A.1.
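As a rough sketch of inference with the meta-thinker (the actual prompt is the one in Appendix A.1, not the hypothetical wording used here), each reasoning type is scored independently and the best type and effective set are read off the scores; `generate` is an assumed callback into the fine-tuned LLM.

```python
REASONING_TYPES = ["deductive", "inductive", "abductive", "analogical", "empty"]

def predict_effectiveness(problem, generate):
    """generate(prompt) -> text; the meta-thinker is assumed to answer with a number in [0, 1]."""
    scores = {}
    for rtype in REASONING_TYPES:
        prompt = (f"Problem: {problem}\n"
                  f"How likely is it that {rtype} reasoning can solve this problem? "
                  f"Answer with a single score between 0 and 1.")
        try:
            scores[rtype] = max(0.0, min(1.0, float(generate(prompt).strip())))
        except ValueError:
            scores[rtype] = 0.0          # unparsable output is treated as "not effective"
    return scores

def select_types(scores):
    best = max(scores, key=scores.get)                        # f*(x): the most effective type
    effective = {t: s for t, s in scores.items() if s > 0}    # F(x): types with non-zero effectiveness
    return best, effective
```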
**Explicit Memory to retrieve experience.** TypedThinker keeps an explicit memory M = ∪fk∈F Mk with one store Mk per reasoning type. For each problem, we keep one correct solution per reasoning type if applicable, resulting in a set S of at most |D| × |F| solutions. If multiple solutions exist for one problem, we keep the longest to get a more detailed context. An entry in the memory Mk is represented as a tuple (xr, solr), where solr is the concrete reasoning process of the reasoning type fk, including the predicted answer. Given a new problem x and its reasoning type fk, it retrieves a set of relevant experience dx,k = {(xr, solr) ∈ Mk | L(xr, x) < δ}, where L is the distance function measuring the relevancy between two problems and δ ∈ [0, 1] is the relevancy threshold. In this study, we use the cosine similarity between the semantic embeddings as the distance function. The retrieved experiences are used as the few-shot examples of the reasoner.
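A minimal sketch of this retrieval step, assuming the `sentence-transformers` library used in the experiments, is shown below; the class layout is illustrative rather than the released code, and the threshold test is simplified to keeping entries whose cosine similarity is at least δ.

```python
from sentence_transformers import SentenceTransformer, util

class TypedMemory:
    """One store per reasoning type; each entry pairs a past problem with a correct solution."""
    def __init__(self, model_name="all-MiniLM-L6-v2"):
        self.encoder = SentenceTransformer(model_name)
        self.store = {}   # reasoning type -> list of (problem, solution, embedding)

    def add(self, rtype, problem, solution):
        emb = self.encoder.encode(problem, convert_to_tensor=True)
        self.store.setdefault(rtype, []).append((problem, solution, emb))

    def retrieve(self, rtype, problem, top_k=3, threshold=0.5):
        """Return up to top_k past experiences whose similarity to the query clears the threshold."""
        entries = self.store.get(rtype, [])
        if not entries:
            return []
        query = self.encoder.encode(problem, convert_to_tensor=True)
        scored = sorted(((float(util.cos_sim(query, emb)), p, sol) for p, sol, emb in entries),
                        key=lambda t: t[0], reverse=True)
        return [(p, sol) for sim, p, sol in scored[:top_k] if sim >= threshold]
```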
**Reasoner to perform the reasoning according to the type. The reasoner applies the reasoning type**
_fk to the problem x and provides a detailed reasoning path for its predicted answer ˆy. The reasoner_
is based on LLM to conduct reasoning and the instruction is composed of (x, fk, dx,k), where the dk
is the retrieved relevant successful experience. The reasoner can be further optimized via instruction
tuning to enhance the capability of conducting a specific type of reasoning.
During inference, TypedThinker first uses the meta-thinker πθ to predict an effectiveness score sk for each reasoning type. It then retrieves relevant reasoning experience dk from the explicit memory. Finally, the reasoner conducts the specific type of reasoning fk with the help of past successful experience. There are two approaches to aggregating multiple solutions. One is to greedily resample several times based on the most effective reasoning type f∗ and use self-consistency (Wang et al., 2022) to enhance the answer. The other is to sample solutions for all effective reasoning types F, and apply a weighted vote with the effectiveness score as the coefficient. By default, we use the greedy approach for TypedThinker, and we discuss the weighted vote in Section 4.4.
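The two aggregation schemes can be summarized in a few lines; `sample_solution(problem, reasoning_type)` is an assumed callback that runs the reasoner once and returns its final answer as a string.

```python
from collections import Counter, defaultdict

def greedy_self_consistency(problem, best_type, sample_solution, n=5):
    """Resample the single most effective reasoning type and majority-vote the answers."""
    answers = [sample_solution(problem, best_type) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

def weighted_vote(problem, effectiveness_scores, sample_solution):
    """Sample once per effective type and weight each vote by its effectiveness score."""
    votes = defaultdict(float)
    for rtype, score in effectiveness_scores.items():
        if score > 0:                      # only the effective set F(x) casts votes
            votes[sample_solution(problem, rtype)] += score
    return max(votes, key=votes.get) if votes else None
```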
3.2 OPTIMIZE IMPLICIT POLICY FOR REASONING TYPE SELECTION AND FOLLOWING
We use a self-training framework to optimize the meta-thinker πθ and the reasoner while updating the
explicit memory with the collected experience. The pipeline is demonstrated in Figure 3. The green
lines represent the parametric optimization process, while the blue line represents the non-parameter
update.
**Diversify Reasoning Experiences with Types To inspire LLMs’ knowledge of solving problems**
with different reasoning types, the definition (Table 8) and manually-written few-shot examples with
detailed reasoning paths (Table 11) are used for prompting solutions for each reasoning type. For
each problem in the training set, we use a temperature of 1 to sample 10 solutions per reasoning
type. These solutions are then filtered by the correctness of the final answers. To guarantee that these
solutions belong to their reasoning type, we apply a reverse check on the remaining solutions. For the
experience (x, sol, y) of the reasoning type fk, we prompt the model to predict its reasoning type f̂k. If fk = f̂k, we think this experience indicates the methodology of this reasoning type and keep it.
Otherwise, it will be removed. Finally, we get an experience dataset D with multiple reasoning types.
The experiences are grouped by their reasoning type and are stored in the explicit memory M .
2In this paper, we sample at most 10 times for one problem.
-----
**Optimize the Implicit policy of Meta-thinker and Reasoner Given a problem x, the meta-thinker**
_πθ predicts a score sx,k to indicate how likely this reasoning type can solve this problem. This can be_
estimated by the experience in the training set. We assume that if one reasoning type is more effective
in solving this problem, it will generate more correct solutions among the same sampling times.
Therefore, given there are nk successful experiences of the reasoning type fk among m samples,
we define the empirical effectiveness score based on its success rate: sx,k = nk/m. This empirical
effectiveness score calculated on the experience dataset D is then used for finetuning the meta-thinker.
We reconstruct the tuple (x, fk, sx,k) into the instruction-following pair via the prompt in Section
A.1 for supervised finetuning. Meanwhile, we finetune a reasoner with the experience to enhance its
capability to conduct a specific type of reasoning.
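The empirical effectiveness score used as the fine-tuning target can be computed as in the following sketch; `sample`, `is_correct`, and `same_type` are assumed callbacks for sampling one solution, checking the final answer, and the reverse type check described earlier.

```python
def empirical_scores(problem, answer, sample, is_correct, same_type, types, m=10):
    """s_{x,k} = n_k / m: fraction of m samples of type k that are correct and pass the reverse type check."""
    scores = {}
    for rtype in types:
        n_k = 0
        for _ in range(m):
            solution, predicted = sample(problem, rtype)   # one sampled solution and its final answer
            if is_correct(predicted, answer) and same_type(solution, rtype):
                n_k += 1
        scores[rtype] = n_k / m
    return scores
```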
To conclude, TypedThinker first uses the meta-thinker to implicitly identify the reasoning types
_fk needed to solve the problem, then retrieves relevant demonstrations dk from the explicit memory,_
and finally employs the fine-tuned reasoner Mϕ to execute the specific reasoning type fk on the query
with dk to generate the response based on the best reasoning type or a weighted reasoning vote over
the effective reasoning set.
4 EXPERIMENTS
In this section, we first introduce our experiment settings and investigate the influence of different
reasoning types on the self-training data. We then evaluate our TypedThinker performances on
four benchmarks. Further analyses are provided to enhance the understanding of TypedThinker.
4.1 EXPERIMENT SETUP
We investigate two open-source LLMs Mistral 7B instruct (Jiang et al., 2023) and LLaMA3 8B
instruct (Touvron et al., 2023) on two logical benchmarks (LogiQA, BBH) and two mathematics
benchmarks (GSM8K and MATH). For each LLM, we set up the following baselines: (i) Few-shot
**baseline with 3 in-context examples. We use the few-shot examples provided in Suzgun et al. (2022)**
for BBH, and text-based few-shot examples in Toshniwal et al. (2024) for GSM8k and MATH since
we do not consider the code interpreter in this paper. We also manually write few-shot examples for
LogiQA, (ii) CoT Selection: Select the best reasoning type by prompting. We let the LLM identify
the best reasoning type and then apply the selected type to the problem. (iii) Zero-shot Mixture
**of Reasoning (MoR): apply all possible reasoning types and use the majority vote to get the final**
answer [3]. The LLM is instructed with the definition and demonstration in Table 8. (iv) Few-shot
**MoR: Similar to the zero-shot MoR except for each reasoning type, 3 few-shot examples are provided**
in the prompt. (v) TypedThinker: use the most effective reasoning type f∗. The +SC baselines
indicate the majority vote over 5 responses.
The temperature is set to 0.7 for all baselines as suggested by Wang et al. (2022). The maximum
output length is set to 1000 tokens. We use SentenceTransformer[4](Reimers and Gurevych, 2019)
to retrieve top-3 similar experiences and the threshold is set to δ = 0.5. On logical benchmarks,
we compare the response with the ground truth based on the exact match. On the mathematics
benchmarks, we follow Toshniwal et al. (2024) to get the accuracy on mathematics benchmarks.
For self-training of TypedThinker, we use the splits in the original papers for LogiQA and
follow the split of Toshniwal et al. (2024) for GSM8k and MATH. For BBH, we utilize 16 English
multiple-choice tasks and randomly select 100 examples per task as the test set, with 20 examples
as the hold-out validation set. The detailed statistics are listed in Table 9. Finally, the curated
generation dataset covers 67.2% problems on the LogiQA benchmark, 69.7% on BBH, 74.88%
on GSM8k, and 36.27% on MATH. We finetune a unified meta-thinker for both math and logical
problems, and a unified reasoner for all reasoning types. We use Huggingface (Wolf et al., 2019) with
deepspeed (Rasley et al., 2020). The finetuning is conducted on 2 A6000 GPUs. The batch size is 64
and the learning rate is 1e − 5. The maximum epoch is 3 for the meta-thinker and 2 for the reasoner.
3For answers with the same votes, we choose the first one in alphabetical order.
4https://www.sbert.net/
-----
4.2 HOW DO REASONING TYPES ENCOURAGE DIVERGENT THINKING DURING GENERATION?
We first investigate the role of reasoning types in LLMs’ self-training. We group problems of the
collected experience dataset D based on their empirical effective set F (x) = {fk|sx,k > 0}, and the
empirical effectiveness score is defined in Section 3.2. The problems in the same group can be solved
with the same set of reasoning types. We focus on the effective set with only one reasoning type and
count the size of these groups. The size indicates how many problems that can only be solved by one
specific reasoning type, showing the advantage of including this reasoning type. We illustrate their
percentage on the whole dataset in Figure 1. We find that even if we use temperature = 1 to sample
10 times to diversify the solutions, a lot of problems still have only one effective reasoning type. It
_indicates that given an inappropriate reasoning type, the diversity brought by repeated sampling with_
_a high temperature cannot help the LLM solve this problem._
Meanwhile, although these reasoning types have similar performance on the whole dataset (shown
in Table 6 in Appendix), the problems they can solve do not completely overlap. None of the
percentages of the reasoning type is zero, indicating that for each reasoning type, there is a unique set
of problems that can only solved by it. It indicates that these reasoning types have their advantages
over different problems, highlighting the importance of considering the appropriate reasoning types
during problem-solving.
We further compare the diversity of the solutions before and after adding the reasoning types in Table
10 of Appendix A.3. We can find that introducing different reasoning types can bring more diversity
to the solution set than repeated sampling with a high temperature.
Table 1: TypedThinker achieves the best performance in both single response and the majority vote setting
on two logical benchmarks and two math benchmarks. @5 indicates the result is based on the majority vote
over 5 responses. +SC indicates the self-consistency method. MoR indicates the Mixture of Reasoning, which
employs all reasoning types (including an empty type) and votes for the final output. Avg. indicates the average
accuracy over four benchmarks.
**Mistral 7B**

| Method | LogiQA | BBH | GSM8K | MATH | Avg. |
|---|---|---|---|---|---|
| Few-shot | 0.485 | 0.346 | 0.369 | 0.074 | 0.318 |
| + SC @5 | 0.532 | 0.441 | 0.444 | 0.136 | 0.388 |
| CoT Selection | 0.474 | 0.361 | 0.372 | 0.095 | 0.325 |
| + SC @5 | 0.503 | 0.429 | 0.466 | 0.132 | 0.382 |
| Zero-shot MoR @5 | 0.528 | 0.414 | 0.313 | 0.108 | 0.341 |
| Few-shot MoR @5 | 0.509 | 0.456 | 0.460 | 0.127 | 0.388 |
| TypedThinker | 0.554 | 0.423 | 0.386 | 0.092 | 0.364 |
| + SC @5 | 0.570 | 0.469 | 0.500 | 0.149 | 0.422 |

**LLaMA3 8B**

| Method | LogiQA | BBH | GSM8K | MATH | Avg. |
|---|---|---|---|---|---|
| Few-shot | 0.566 | 0.318 | 0.472 | 0.105 | 0.365 |
| + SC @5 | 0.573 | 0.364 | 0.457 | 0.134 | 0.382 |
| CoT Selection | 0.552 | 0.370 | 0.353 | 0.094 | 0.342 |
| + SC @5 | 0.583 | 0.424 | 0.460 | 0.134 | 0.400 |
| Zero-shot MoR @5 | 0.558 | 0.448 | 0.261 | 0.075 | 0.335 |
| Few-shot MoR @5 | 0.568 | 0.428 | 0.470 | 0.133 | 0.400 |
| TypedThinker | 0.550 | 0.533 | 0.535 | 0.193 | 0.453 |
| + SC @5 | 0.620 | 0.591 | 0.723 | 0.263 | 0.549 |
4.3 WHAT KINDS OF BENEFITS CAN TYPEDTHINKER BRING?
As we can see in Table 1, our TypedThinker achieves the best performance among baselines. The
improvement is more obvious in LLaMA3 8B, which is more powerful than Mistral 7B. It shows
that LLMs with a better capability in reasoning and instruction-following can benefit more from
the self-training of TypedThinker. Additionally, there are several key insights from the detailed
comparison with different baselines.
**Appropriate reasoning types improve the reasoning performance. The main difference between**
Fewshot and CoT Selection without the majority vote is the reasoning type selection. For CoT
Selection, the model is first prompted to predict a reasoning type and then apply it, while the
Fewshot baseline directly solves the problem. However, we find that the CoT Selection struggles
with the reasoning type selection. Given the option to choose from four reasoning types or none,
it chooses none over 60% of the time. The rest of the time, it selects more than 50% deductive,
while only 34% of them can be effectively solved by deductive reasoning during the sampling.
The mismatch in reasoning types results in poor performance. Equipped with a trained meta-thinker, TypedThinker is more accurate in selecting the reasoning type, which helps it improve
performance under the single response setting.
-----
**Precise prediction is more effective than an inappropriate mixture. The zero-shot MoR and**
few-shot MoR apply all types of reasoning to the given problem and use a majority vote to get the
final answer. Compared with the other two majority vote baselines Few-shot + SC @5 and CoT
Selection + SC @5, these methods fall behind on several benchmarks, especially on MATH. We find
that the performance drop typically happens when there are only one or two reasoning types that
are effective for this problem. In such cases, the majority of incorrect answers dominate, resulting
in fewer votes for the correct one. As we can see in Figure 1, plenty of problems on the MATH
benchmark can only be solved by inductive reasoning. In such cases, if the CoT Selection correctly
predicts the inductive reasoning for them, the CoT Selection + SC @5 can benefit from the majority
vote and have a better performance. This highlights the importance of predicting the effectiveness of
reasoning types before aggregating them.
**Experience of how to conduct a specific type of reasoning is important. The performance**
difference between zero-shot and few-shot MoR illustrates the impact of the reasoning demonstration.
When prompted solely with the definition, LLMs struggle to understand how to apply the reasoning
type to specific problems. It can be improved by human-written few-shot examples in few-shot
MoR. However, it still falls behind the non-parametric retrieval and the parametric reasoner in
TypedThinker, both of which enhance the capability of conducting a specific reasoning type.
Table 2: Ablation Study on the Mistral 7B based TypedThinker’s components. We remove one component
each time. The results are based on the best reasoning type and calculated for the single response per query. The
negative scores indicate the performance drop, and the largest scores are shown in bold.
| | LogiQA | BBH | GSM8K | MATH |
|---|---|---|---|---|
| TypedThinker | 0.554 | 0.423 | 0.386 | 0.092 |
| w/o Finetuned Reasoner | -0.076 | -0.041 | -0.102 | -0.018 |
| w/o Meta-thinker | -0.025 | -0.036 | **-0.152** | **-0.024** |
| w/o Memory | **-0.082** | **-0.051** | -0.033 | 0.013 |
4.4 WHAT CONTRIBUTES TO TYPEDTHINKER’S EFFECTIVENESS?
We conduct several investigations to enhance the understanding of our proposed method.
**Ablation study The ablation studies are conducted on three key components in TypedThinker.**
Each time one component is removed. It includes (i) w/o Fine-tuned Reasoner: it is replaced with
the base LLM (ii) w/o Meta-thinker: it is replaced with a CoT selection (iii) w/o Memory: the
explicit memory is replaced with the human-written few-shot examples of each reasoning type. In
Table 2, we can find the meta-thinker is the most important module for the math benchmarks, while
the explicit memory is more effective on two logical benchmarks. The fine-tuned reasoner also
contributes a lot to the performance improvement. We also observe that explicit memory does not
always bring benefits: the performance on MATH even increases when we remove it. We find that
the retrieved examples usually have a similar context but different numbers. The math calculation
in the retrieved chain-of-thoughts solutions will mislead the reasoner. This is consistent with the
observations of Toshniwal et al. (2024) that the solutions with masked computations are more
beneficial to the math problems. For logical problems, there are fewer calculations and the retrieved
solutions focus more on the reasoning process.
**Meta-thinker’s predictions achieve a high correlation with the empirical effectiveness score. We**
evaluate the performance of the meta-thinker by the correlation between the predicted effectiveness
score and the empirical one (which we view as the ground truth). We split the collected experience
dataset D by problems and use 0.9 of them to train the meta-thinker and 0.1 for testing. We use
Kendall’s τ coefficient to evaluate the correlation. It measures rank correlation, essentially assessing
the similarity of orderings when data is ranked. A higher Kendall’s τ coefficient indicates that when
the ground truth assigns a high effectiveness score to a reasoning type, the meta-thinker also ranks it
high, thereby validating the reliability of the predicted scores. We compare the performance under
three settings: the meta-thinker trained only on the logical domain, only in the math domain, and
jointly trained on the unified domains (including both logic and math data). The meta-thinker trained
on the unified domain achieves the highest correlation. This suggests that training on a dataset
with multiple domains enhances the meta-thinker’s ability to accurately rank and predict suitable
-----
Figure 4: Kendall's τ coefficient between the prediction confidence score and the ground truth. All results have the p-value < 0.05. The unified policy shows the best correlation on all reasoning types.

Figure 5: Task performance of TypedThinker with meta-thinkers trained on different domains. The unified one performs best in most cases except the MATH benchmark, where the pure math setting dominates.
reasoning types, thereby improving its overall performance. We also calculate the accuracy between
the predicted optimal reasoning type and the empirical one for the unified setting. The average
accuracy on four benchmarks is 68.3% (LogiQA 75.4%, BBH 75.6%, GSM8k 72.1%, and MATH
47.7%). Note that the meta-thinker can predict an incorrect optimal reasoning type fk while still
generating a correct solution. This is because the predicted reasoning type can still belong to the effective set F = {fk | sx,k > 0}, indicating that this reasoning type can also help solve the problem.
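The correlation analysis above can be reproduced in outline with SciPy's Kendall's τ; the sketch below assumes predicted and empirical scores have already been collected per (problem, reasoning type) pair.

```python
from scipy.stats import kendalltau

def type_correlation(predicted, empirical):
    """predicted/empirical: dicts mapping (problem_id, reasoning_type) -> score in [0, 1]."""
    keys = sorted(set(predicted) & set(empirical))
    tau, p_value = kendalltau([predicted[k] for k in keys],
                              [empirical[k] for k in keys])
    return tau, p_value
```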
**Unified meta-thinkers perform well in most cases. We further investigate the effectiveness of**
these policies by facilitating TypedThinker with these meta-thinkers. The results are based on the
Mistral 7B TypedThinker without SC. The results in Figure 5 show that the unified meta-thinker
has the best performance in most cases. However, in the more difficult MATH dataset, the specific
meta-thinker trained in the math domain can help it be more powerful. To conclude, the unified
meta-thinker has reasonable performance in all its domains, while for difficult problems it may
slightly underperform the specific meta-thinker trained in this domain.
**Optimal reasoning type vs. weighted vote on the effective set** In the main experiment, we use the optimal reasoning type f∗, which has the highest effectiveness score, for reasoning. As discussed in Section 3.1, we can also use a majority vote on the effective set F with the effectiveness score as the coefficient. Specifically, if one solution is based on a reasoning type with a higher effectiveness score, its vote gets a larger weight. The results are shown in Table 3. We can see that the weighted
vote can balance different reasoning types on LogiQA and GSM8k for the Mistral-7B-based model.
However, on the other two benchmarks, the TypedThinker + SC @5 has a better performance.
It indicates that accurate selection is more important if one or two reasoning types dominate the
benchmark. For example, as we have shown in Figure 1, there are a lot of problems that can only
be solved by inductive reasoning, indicating the other types will mislead the final answer. In such
cases, the self-consistency of inductive reasoning is more powerful than the weighted vote. However,
when we have a more accurate meta-thinker that can identify the correct reasoning type and a more
powerful reasoner that can follow the specific reasoning type, for example, models initialized by
LLaMA3, the advantage of TypedThinker + SC is more obvious.
Table 3: TypedThinker's performance with the most effective reasoning type f∗ vs. weighted votes on the effective set F.

| Model | Method | LogiQA | BBH | GSM8K | MATH | Average |
|---|---|---|---|---|---|---|
| Mistral 7B | SC @5 on f∗ | 0.570 | 0.469 | **0.500** | **0.149** | **0.422** |
| Mistral 7B | weighted on F | **0.581** | 0.453 | 0.501 | 0.127 | 0.416 |
| LLaMA3 8B | SC @5 on f∗ | **0.620** | **0.591** | **0.723** | **0.263** | **0.549** |
| LLaMA3 8B | weighted on F | 0.609 | 0.579 | 0.703 | 0.234 | 0.531 |
Table 4: TypedThinker performs best on the unseen benchmark Contexthub. Here the results are based on the majority vote over 5 responses (+SC @5).

| Method | Mistral 7B | LLaMA3 8B |
|---|---|---|
| Few-shot | 0.419 | 0.378 |
| CoT Selection | 0.415 | 0.369 |
| Zero-shot MoR | 0.415 | 0.357 |
| Few-shot MoR | 0.432 | 0.398 |
| TypedThinker | **0.452** | **0.403** |
-----
4.5 CAN TYPEDTHINKER BE APPLIED TO NEW DOMAINS OR NEW LLMS WITHOUT
FINETUNING?
It is essential to evaluate the generalization capability of our TypedThinker framework. We assess
it from two aspects: (i) TypedThinker’s performance on new domains; and (ii) other LLMs’
performance after facilitated with our finetuned meta-thinker and the explicit memory.
**TypedThinker generalizes well to the unseen domain. We use a new propositional logic**
benchmark Contexthub (Hua et al., 2024) for evaluation. It is a recently released dataset, which has
never been seen by Mistral and LLaMA3 models during the pre-training. It contains problems from
12 categories with 4 levels of difficulty. We select the difficulty of level 4 to test the complex logic
reasoning capabilities. We use the experiences collected from LogiQA as the explicit memory. The
meta-thinker and the reasoner are fine-tuned on four training benchmarks. The results in Table 4
show that TypedThinker outperforms other baselines on this unseen domain as well, indicating
that it can generalize well to new domains. One interesting thing is Mistral 7B baselines significantly
outperform LLaMA3 8B on this benchmark and its superior capabilities make it benefit more from
our TypedThinker.
**Facilitating LLMs with TypedThinker makes them more powerful. Our TypedThinker**
framework is orthogonal to the backbone LLMs and can be adapted to new LLMs. There are two ways
to use a new LLM in the TypedThinker framework: one is to conduct the self-training process,
like the two LLMs used in our main experiments (Mistral 7B and LLaMA3 8B); the other is to use our
fine-tuned meta-thinker for reasoning type selection and the explicit memory for retrieval while using
the new LLM as the reasoner without finetuning. The first way can make the LLM more powerful
(as shown in the performance comparison between TypedThinker and TypedThinker w/o
Finetuned Reasoner in Table 2), but the latter one is more flexible. Here we use the second way to
evaluate the direct transferability to new LLMs. We choose one of the most powerful LLMs GPT-4o
and one math-specific 7B model MetaMath (Yu et al., 2023). MetaMath is a Mistral-7B-based model
trained with more than 400k synthesized math data distilled from GPT-3.5-Turbo. We randomly
sample 100 examples from each benchmark for GPT-4o. For MetaMath, we use the whole test set.
The results are shown in Table 5 and Table 6. Compared with Mistral 7B in Table 1, the high-quality
and large scale of synthesized data from GPT-3.5-Turbo enhances MetaMath’s capabilities in math
problems. TypedThinker can further improve its performance by reasoning type selection and
explicit memory. Meanwhile, although the superior performance of GPT-4o on two math datasets
leaves little space for improvement, the results on logic benchmarks (LogiQA and BBH) demonstrate
that the meta-thinker trained with the small 7B model also enhances its performance. These findings
confirm that our approach is not only effective in improving smaller LLMs but also transferable to
larger models, further validating the generalization capability of TypedThinker.
Table 5: GPT-4o's performance is improved with our meta-thinker. We use the finetuned Mistral 7B meta-thinker to predict the reasoning type and replace the fine-tuned reasoner with GPT-4o.

| Method | LogiQA | BBH | GSM8k | MATH |
|---|---|---|---|---|
| GPT-4o | 0.76 | 0.84 | 0.97 | 0.89 |
| + SC @5 | 0.80 | 0.85 | **0.98** | 0.90 |
| TypedThinker | 0.80 | 0.86 | 0.95 | 0.88 |
| + SC @5 | **0.81** | **0.90** | 0.96 | **0.91** |

Table 6: TypedThinker can also enhance the performance of a math-specific 7B model such as MetaMath.

| Method | GSM8k | MATH |
|---|---|---|
| MetaMath | 0.690 | 0.209 |
| + SC @5 | 0.704 | 0.220 |
| TypedThinker | 0.696 | 0.220 |
| + SC @5 | **0.736** | **0.246** |

4.6 CASE STUDY
Here is one example of TypedThinker on the LogiQA benchmark in Table 7. This problem
states a phenomenon that a higher altitude leads to a lower atmospheric pressure. Based on this
observation, it is easy for humans to use inductive reasoning and get a general conclusion about the
inverse cause-and-effect relationship. It is also natural for humans to use analogical reasoning and
find the most similar options. The meta-thinker gives the highest effectiveness score for inductive
reasoning, which is then chosen as the optimal reasoning type f∗ = inductive. Effectiveness scores
are all larger than 0, so the effective set is all reasoning types. The reasoner gets the correct answer
-----
for deductive and inductive reasoning while doing wrong on the other reasoning types. If we use
the majority vote over five answers, (A) and (C) will have the same votes, indicating that there is a
50% chance to be correct [5]. However, with the effectiveness score predicted by the meta-thinker,
TypedThinker can get the correct answer either by applying the optimal reasoning type or using
the weighted vote on the four answers. Besides, without a specific reasoning type (which is ‘Empty’),
the model cannot arrive at the correct answer. This shows the limitation of the common few-shot
baselines. Overall, this example shows that TypedThinker improves reasoning performance through the introduction of diverse reasoning types and the capability to select the appropriate type to apply.
Table 7: One example from LogiQA. The correct answer and the reasoning type with the highest effectiveness
score are underlined. MoR is the few-shot MoR baselines, which use the majority votes among all reasoning
types.
Problem **The higher the altitude, the smaller the atmospheric pressure. Because the**
**altitude of Lanzhou is higher than that of Tianjin, the atmospheric pressure of**
**Lanzhou is lower than that of Tianjin. Which of the following reasoning is most**
**similar?**
(A) In a highly competitive market, the better the product quality and the more
advertising investment, the greater the product sales. Company A invests more
money in advertising than Company B. So company A sells more products than
company B
(B) The older a person is, the more mature he becomes. Lao Zhang is older than his
son, so Lao Zhang is more mature than his son
(C) The older a tree is, the more rings it has. The age of the locust tree in Lao
Zhang’s yard is older than that of Lao Li’s family, so the locust tree of Lao Zhang’s
family has more rings than Lao Li’s
(D) The greater the vocabulary of a language, the more difficult it is to learn. English
is harder to learn than Italian, so English has a larger vocabulary than Italian
Ground Truth (C)
Predicted scores: Deductive: 0.4; Inductive: 0.5; Analogical: 0.4; Abductive: 0.4; Empty: 0.4
Predicted answers: Deductive: (C); Inductive: (C); Analogical: (A); Abductive: NULL; Empty: (A)
Model Output: MoR: (A); TypedThinker with f∗: (C); TypedThinker with F: (C)
5 CONCLUSION AND LIMITATION
In this paper, we investigate how reasoning types diversify LLMs’ thinking and propose a
novel framework TypedThinker to incorporate different reasoning types into problem-solving.
TypedThinker is inspired by human cognition processes during reasoning: it learns an implicit
policy to select the appropriate reasoning types with the meta-thinker and to conduct the selected
type of reasoning with the reasoner. It also maintains an explicit memory to retrieve experiences
to aid reasoning. TypedThinker optimizes its implicit policy and the explicit memory with its
own successful experiences during the self-training process. The results show that TypedThinker
enhances the reasoning capabilities of Mistral 7B and LLaMA3 8B on four benchmarks. Furthermore,
TypedThinker shows good generalization capabilities in new domains and models. It can also
improve GPT-4o’s performance through effective reasoning type selection.
Despite the promising results, TypedThinker has several limitations that need further investigation.
Firstly, one problem may require different reasoning types at different steps, and applying one sole
reasoning type can hardly find a correct solution. In that case, dividing the problems into multiple
reasoning steps, and applying TypedThinker for each step could make the reasoning more diverse
and effective. Additionally, this paper mainly focuses on logical and mathematical benchmarks.
Expanding to a broader range of tasks, such as commonsense reasoning or creative problem-solving,
could deepen the understanding of the role of reasoning types in various problems and provide a
more comprehensive assessment of TypedThinker’s capabilities.
[5] In our implementation, answers with the same votes are ranked based on their alphabetical order, so (A)
will be chosen in this case.
REFERENCES
Renat Aksitov, Sobhan Miryoosefi, Zonglin Li, Daliang Li, Sheila Babayan, Kavya Kopparapu,
Zachary Fisher, Ruiqi Guo, Sushant Prakash, Pranesh Srinivasan, et al. Rest meets react: Self-improvement for multi-step reasoning llm agent. In ICLR 2024 Workshop on Large Language
_Model (LLM) Agents._
Afra Feyza Akyürek, Ekin Akyürek, Ashwin Kalyan, Peter Clark, Derry Wijaya, and Niket Tandon.
2023. Rl4f: Generating natural language feedback with reinforcement learning for repairing
model outputs. In Annual Meeting of the Association of Computational Linguistics 2023, pages
7716–7733. Association for Computational Linguistics (ACL).
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones,
Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. 2022. Constitutional ai:
Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073.
Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia,
Ziwei Ji, Tiezheng Yu, Willy Chung, et al. 2023. A multitask, multilingual, multimodal evaluation
of chatgpt on reasoning, hallucination, and interactivity. In Proceedings of the 13th International
_Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific_
_Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 675–718._
Paul Bartha. 2013. Analogy and analogical reasoning.
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas Gianinazzi,
Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, et al. 2024. Graph of
thoughts: Solving elaborate problems with large language models. In Proceedings of the AAAI
_Conference on Artificial Intelligence, volume 38, pages 17682–17690._
[Hugo Bronkhorst, Gerrit Roorda, Cor J. M. Suhre, and Martin J. Goedhart. 2020. Logical reasoning in](https://api.semanticscholar.org/CorpusID:210054824)
[formal and everyday reasoning tasks. International Journal of Science and Mathematics Education,](https://api.semanticscholar.org/CorpusID:210054824)
18:1673–1694.
Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le, Christopher Ré, and
Azalia Mirhoseini. 2024. Large language monkeys: Scaling inference compute with repeated
sampling. arXiv preprint arXiv:2407.21787.
Collin Burns, Pavel Izmailov, Jan Hendrik Kirchner, Bowen Baker, Leo Gao, Leopold Aschenbrenner,
Yining Chen, Adrien Ecoffet, Manas Joglekar, Jan Leike, Ilya Sutskever, and Jeffrey Wu. 2024.
[Weak-to-strong generalization: Eliciting strong capabilities with weak supervision. In Forty-first](https://openreview.net/forum?id=ghNRg2mEgN)
_International Conference on Machine Learning._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John
Schulman. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
Craig DeLancey. 2017. A concise introduction to logic. Open SUNY Textbooks.
John Dougrez-Lewis, Mahmud Elahi Akhter, Yulan He, and Maria Liakata. 2024. Assessing the
reasoning abilities of chatgpt in the context of claim verification. arXiv preprint arXiv:2402.10735.
Igor Douven. 2011. Abduction.
Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. 2023. Improving factuality and reasoning in language models through multiagent debate. arXiv preprint
_arXiv:2305.14325._
Peter A Flach and Antonis C Kakas. 2000. Abductive and inductive reasoning: background and
issues. Abduction and induction: Essays on their relation and integration, pages 1–27.
[Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and Tushar Khot. 2023. Complexity-based](https://openreview.net/forum?id=yf1icZHC-l9)
[prompting for multi-step reasoning. In The Eleventh International Conference on Learning](https://openreview.net/forum?id=yf1icZHC-l9)
_Representations._
Çaglar Gülçehre, Tom Le Paine, Srivatsan Srinivasan, Ksenia Konyushkova, Lotte Weerts, Abhishek
Sharma, Aditya Siddhant, Alex Ahern, Miaosen Wang, Chenjie Gu, et al. 2023. Reinforced
self-training (rest) for language modeling. CoRR.
Diane F Halpern. 2014. Critical thinking across the curriculum: A brief edition of thought &
_knowledge. Routledge._
Shibo Hao, Yi Gu, Haodi Ma, Joshua Hong, Zhen Wang, Daisy Wang, and Zhiting Hu. 2023.
[Reasoning with language model is planning with world model. In Proceedings of the 2023](https://doi.org/10.18653/v1/2023.emnlp-main.507)
_Conference on Empirical Methods in Natural Language Processing, pages 8154–8173, Singapore._
Association for Computational Linguistics.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,
and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the math dataset.
_NeurIPS._
Wenyue Hua, Kaijie Zhu, Lingyao Li, Lizhou Fan, Shuhang Lin, Mingyu Jin, Haochen Xue, Zelong
Li, JinDong Wang, and Yongfeng Zhang. 2024. Disentangling logic: The role of context in large
language model reasoning capabilities. arXiv preprint arXiv:2406.02787.
Jiaxin Huang, Shixiang Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han.
[2023. Large language models can self-improve. In Proceedings of the 2023 Conference on](https://doi.org/10.18653/v1/2023.emnlp-main.67)
_Empirical Methods in Natural Language Processing, pages 1051–1068, Singapore. Association_
for Computational Linguistics.
Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot,
Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al.
2023. Mistral 7b. arXiv preprint arXiv:2310.06825.
Phil Johnson-Laird. 2010. Deductive reasoning. Wiley Interdisciplinary Reviews: Cognitive Science,
1(1):8–17.
Aviral Kumar, Vincent Zhuang, Rishabh Agarwal, Yi Su, John D Co-Reyes, Avi Singh, Kate Baumli,
Shariq Iqbal, Colton Bishop, Rebecca Roelofs, et al. 2024. Training language models to self-correct
via reinforcement learning. arXiv preprint arXiv:2409.12917.
[Rémi Leblond et al. 2023. Alphacode 2 technical report. Technical report, DeepMind.](https://storage.googleapis.com/deepmind-media/AlphaCode2/AlphaCode2_Tech_Report.pdf)
V. Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. Proceedings of the Soviet Physics Doklady.
Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom
Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. 2022. Competition-level code
generation with alphacode. Science, 378(6624):1092–1097.
Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan
Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step. In The Twelfth
_International Conference on Learning Representations._
Hanmeng Liu, Jian Liu, Leyang Cui, Zhiyang Teng, Nan Duan, Ming Zhou, and Yue Zhang.
[2023a. Logiqa 2.0—an improved dataset for logical reasoning in natural language understanding.](https://doi.org/10.1109/TASLP.2023.3293046)
_IEEE/ACM Transactions on Audio, Speech, and Language Processing, 31:2947–2962._
Jiacheng Liu, Andrew Cohen, Ramakanth Pasunuru, Yejin Choi, Hannaneh Hajishirzi, and Asli
Celikyilmaz. 2023b. Making ppo even better: Value-guided monte-carlo tree search decoding.
_arXiv preprint arXiv:2309.15028._
Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang. 2021. Logiqa: a
challenge dataset for machine reading comprehension with logical reasoning. In Proceedings of
_the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI’20._
Man Luo, Shrinidhi Kumbhar, Mihir Parmar, Neeraj Varshney, Pratyay Banerjee, Somak Aditya,
Chitta Baral, et al. 2023. Towards logiglue: A brief survey and a benchmark for analyzing logical
reasoning capabilities of language models. arXiv preprint arXiv:2310.00836.
[OpenAI. 2023. Gpt-4 technical report.](http://arxiv.org/abs/2303.08774)
Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, et al. 2023. Check your facts and try again: Improving large language
models with external knowledge and automated feedback. arXiv preprint arXiv:2302.12813.
[Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. Deepspeed: System op-](https://doi.org/10.1145/3394486.3406703)
[timizations enable training deep learning models with over 100 billion parameters. In Proceedings](https://doi.org/10.1145/3394486.3406703)
_of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,_
KDD ’20, page 3505–3506, New York, NY, USA. Association for Computing Machinery.
[Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-](https://doi.org/10.18653/v1/D19-1410)
[networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language](https://doi.org/10.18653/v1/D19-1410)
_Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-_
_IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics._
Noah Shinn, Beck Labash, and Ashwin Gopinath. 2023. Reflexion: an autonomous agent with
dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366.
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam
Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2022. Beyond
the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv
_preprint arXiv:2206.04615._
Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung,
Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, and Jason Wei. 2022. Challenging
big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261.
Shubham Toshniwal, Ivan Moshkov, Sean Narenthiran, Daria Gitman, Fei Jia, and Igor Gitman.
2024. Openmathinstruct-1: A 1.8 million math instruction tuning dataset. _arXiv preprint_
_arXiv:2402.10176._
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée
Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open
and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Danqing Wang and Lei Li. 2023. Learn from mistakes through cooperative interaction with study
assistant. The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)
_2023._
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H Chi, Sharan Narang, Aakanksha
Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in
language models. In The Eleventh International Conference on Learning Representations.
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi,
[and Hannaneh Hajishirzi. 2023. Self-instruct: Aligning language models with self-generated](https://doi.org/10.18653/v1/2023.acl-long.754)
[instructions. In Proceedings of the 61st Annual Meeting of the Association for Computational](https://doi.org/10.18653/v1/2023.acl-long.754)
_Linguistics (Volume 1: Long Papers), pages 13484–13508, Toronto, Canada. Association for_
Computational Linguistics.
Peter Cathcart Wason and Philip Nicholas Johnson-Laird. 1972. Psychology of reasoning: Structure
_and content, volume 86. Harvard University Press._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed H Chi, Quoc V Le, Denny
Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. In
_Advances in Neural Information Processing Systems._
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi,
Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface’s transformers:
State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik
[Narasimhan. 2023. Tree of Thoughts: Deliberate problem solving with large language models.](http://arxiv.org/abs/2305.10601)
[Junchi Yu, Ran He, and Zhitao Ying. 2024. THOUGHT PROPAGATION: AN ANALOGICAL](https://openreview.net/forum?id=SBoRhRCzM3)
[APPROACH TO COMPLEX REASONING WITH LARGE LANGUAGE MODELS. In The](https://openreview.net/forum?id=SBoRhRCzM3)
_Twelfth International Conference on Learning Representations._
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo
Li, Adrian Weller, and Weiyang Liu. 2023. Metamath: Bootstrap your own mathematical questions
for large language models. arXiv preprint arXiv:2309.12284.
Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Sainbayar Sukhbaatar, Jing Xu, and Jason
Weston. 2024. Self-rewarding language models. arXiv preprint arXiv:2401.10020.
[Chuanyang Zheng, Zhengying Liu, Enze Xie, Zhenguo Li, and Yu Li. 2024. Progressive-hint](https://openreview.net/forum?id=UkFEs3ciz8)
[prompting improves reasoning in large language models. In AI for Math Workshop @ ICML 2024.](https://openreview.net/forum?id=UkFEs3ciz8)
A APPENDIX
A.1 PROMPT
We introduce simple and practical definitions of the four reasoning types in Table 8. Table 11 lists
the few-shot examples of each reasoning type. The full few-shot examples can be found in the
supplementary materials. We use the same few-shot examples for the logical problems and create
another set of examples for the mathematics problems.
Table 8: Description of different reasoning types. We give informal definitions that are easy to follow and
illustrate simple examples for each reasoning type.
| **Type** | **Definition** | **Example** |
|---|---|---|
| **Deduction** | Deduce conclusion based on the general rules and premise. | From the premises ‘all frogs are amphibians’ and ‘no cats are amphibians’, we can infer the conclusion ‘no cats are frogs’. |
| **Induction** | Make broad generalizations from specific observations. | Starting from the empirical observation that ‘all ravens I have seen so far are black’, inductive reasoning can be used to infer that ‘all ravens are black’. |
| **Abduction** | Assume one candidate is correct and check whether it meets the condition in the problem. | Guess that it has rained to explain that the streets are wet. A tsunami could also explain why the streets are wet, but this is usually not the best explanation. |
| **Analogy** | Retrieve several relevant information and draw the conclusion of this problem based on the similarity. | Infer information about humans from medical experiments on animals: (1) rats are similar to humans; (2) birth control pills affect the brain development of rats; (3) therefore they may also affect the brain development of humans. |

The prompt used by the meta-thinker is:
_Given the question below, please identify the type of reasoning required to provide_
_a solution. You may choose the following reasoning types: Deductive, Inductive,_
_Analogical, Abductive Reasoning, or None. None indicates that no specific reason-_
_ing type is needed for this problem. Please assign an effectiveness score for each_
_reasoning type from 0 to 1, where 0 represents no effective and 1 represents full_
_effective. Please return the reasoning types and their corresponding effectiveness_
_scores in the JSON format._
_For instance, if you think the question can be solved using both deductive and_
_inductive reasoning, with an effectiveness of 0.5 for deductive reasoning and 0.3_
_for inductive reasoning, you should return: [{"ReasoningType": "Deductive",_
_"Effectiveness": 0.5},{"ReasoningType": "Inductive", "Effectiveness": 0.3},{"Rea-_
_soningType": "Analogical", "Effectiveness": 0},{"ReasoningType": "Abductive",_
_"Effectiveness": 0}, {"ReasoningType": "None", "Effectiveness": 0}]._
The prompt used by the reasoner is listed below. The definition is based on Table 8.
_Use [fk] reasoning to solve the given question. [fk] reasoning is [definition]._
A.2 DATASET PROCESSING
The dataset statistics of the four benchmarks are detailed in Table 9. For multiple-choice questions,
we calculate accuracy using the exact match criterion. For mathematics problems, we compare the
model’s response with the ground truth using mathematical equality.
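The paper does not spell out its scoring implementation; the sketch below shows one plausible Python realization of the two criteria (exact match for multiple-choice labels, numeric equality with a tolerance for math answers). The helper names are illustrative assumptions.

```python
def is_correct_choice(prediction: str, gold: str) -> bool:
    """Exact-match criterion for multiple-choice answers, e.g. '(C)' vs '(C)'."""
    return prediction.strip().upper() == gold.strip().upper()

def is_correct_math(prediction: str, gold: str, tol: float = 1e-6) -> bool:
    """Mathematical-equality criterion: compare parsed numbers with a small tolerance."""
    try:
        return abs(float(prediction) - float(gold)) < tol
    except ValueError:
        # Fall back to a string comparison for non-numeric answers.
        return prediction.strip() == gold.strip()

def accuracy(predictions, golds, checker) -> float:
    """Fraction of examples judged correct by the given criterion."""
    return sum(checker(p, g) for p, g in zip(predictions, golds)) / max(len(golds), 1)
```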
LogiQA (Liu et al., 2021; 2023a) is a multi-choice understanding benchmark for logical reasoning. It
follows the definition of DeLancey (2017) and categorizes the problems into categorical reasoning, sufficient conditional reasoning, necessary conditional reasoning, disjunctive reasoning, and conjunctive reasoning. These reasoning categories are not orthogonal and one problem can belong to multiple categories. We follow the standard training/validation split and only keep examples with more than 3 reasoning categories. This makes the problems more diverse and difficult to solve. We take the validation set as the test set and randomly select 500 examples from the training set for validation.

Table 9: Logical benchmarks and mathematical benchmarks used in this paper. We follow the standard train/test split on LogiQA and the split in Toshniwal et al. (2024) for GSM8k and MATH. For BBH, we randomly split the dataset. The synthesized data is described in Section 3.1. BBH includes 16 tasks while MATH includes math problems of 7 categories. Policy indicates the data used to train the meta-thinker and SFT indicates the instruction-following data used in reasoner finetuning.

| Benchmark | # Task | # Train | # Val | # Test | # Total | # Meta-thinker | # Reasoner |
|---|---|---|---|---|---|---|---|
| LogiQA | 1 | 3757 | 500 | 511 | 4768 | ~2k | ~6k |
| BBH | 16 | 1904 | 320 | 1600 | 3824 | ~1k | ~3.5k |
| GSM8k | 1 | 6473 | 1000 | 1319 | 8792 | ~4k | ~4k |
| MATH | 7 | 6500 | 1000 | 5000 | 12500 | ~1k | ~1k |
BBH (Suzgun et al., 2022) is a set of hard problems borrowed from Big Bench (Srivastava et al.,
2022). They are also formatted as multi-choice problems. We pick the English tasks with more than
2 options, resulting in 16 tasks: date understanding, disambiguation qa, geometric shapes, hyperbaton,
logical deduction three, logical deduction five, logical deduction seven, movie recommendation,
penguins in a table, reasoning color, ruin names, snarks, temporal sequences, tracking shuffled three,
tracking shuffled five, and tracking shuffled seven. For each task, we randomly select 100 examples
as the test set and 20 examples as the validation set. The rest are used as training examples.
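A minimal sketch of the per-task split described above, assuming each task's examples are available as a Python list; the seed and data layout are assumptions.

```python
import random

def split_task(examples, n_test=100, n_val=20, seed=0):
    """Shuffle one BBH task and return (train, val, test) with 100 test / 20 val examples."""
    rng = random.Random(seed)           # fixed seed so the split is reproducible
    shuffled = list(examples)
    rng.shuffle(shuffled)
    test = shuffled[:n_test]
    val = shuffled[n_test:n_test + n_val]
    train = shuffled[n_test + n_val:]
    return train, val, test
```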
For GSM8k and MATH, we follow Toshniwal et al. (2024) to process the dataset.
A.3 ANALYSIS OF REASONING TYPES
**Accuracy of Reasoning Types** We calculate the accuracy for each reasoning type on our empirical
dataset D, shown in Figure 6. We can find that on LogiQA and MATH, the accuracy of different
reasoning types is similar. However, deductive and analogical reasoning outperform the other two on BBH, while inductive and abductive reasoning are more effective on GSM8k. The results illustrate that, with our carefully designed demonstrations for each reasoning type, the LLM's capabilities in the other reasoning types achieve performance comparable to deductive reasoning. This ensures the quality and the
balance of our collected dataset on each reasoning type.
Comparing Figure 1 and Figure 6, we can see that if correctly selected, the specific reasoning type
can enhance the model performance by handling problems that cannot be solved by other reasoning
types, such as inductive on MATH. However, the unsuitable reasoning type can also mislead the
model, leading to poor performance.
**Diversity of Reasoning Types** To further verify whether the reasoning types can make the solutions
more diverse, we compare the diversity between solutions under different sampling settings in Table
10. We use Levenshtein Distance (Levenshtein, 1966) and the n-gram overlaps between sentences
to evaluate diversity. Specifically, for K generations G = {g1, . . ., gK} of the same problem, we
calculate the distance between each pair and normalize them with the sentence length. Then the
average distance over these paired results is used as the distance of these K generations. If we denote
the normalized Levenshtein Distance function as fld, this process can be represented as:
$$f_{ld}(G) = \frac{2}{K(K-1)} \sum_{i=1}^{K} \sum_{j=i+1}^{K} f_{ld}(g_i, g_j) \qquad (1)$$
average score over the problems in the test set in Table 10. A larger Levenshtein distance and a
smaller overlap indicate a more diverse solution set. The zero-shot setting does not include examples
in the prompt, and the zero-shot setting + types only includes the definition of the reasoning type (as listed in Table 8). The few-shot setting has 5 examples, and the few-shot setting with types has 6 different examples for each type.

Figure 6: Accuracy of the solutions for each reasoning type (Deductive, Inductive, Analogical, Abductive) on LogiQA, BBH, GSM8k, and MATH. It indicates that the effectiveness of reasoning types varies across problems.

For zero-shot / few-shot @ 5, we use repeated sampling with
temperature = 1 for 5 times. For zero-shot / few-shot + 5 types, we sample one solution per reasoning
type.
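For concreteness, here is a small Python sketch of the two diversity measures, assuming the pairwise average of Eq. (1) and a Jaccard-style n-gram overlap; the exact normalization used in the paper may differ.

```python
from itertools import combinations

def levenshtein(a: str, b: str) -> int:
    """Standard dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def avg_normalized_levenshtein(generations) -> float:
    """Average pairwise edit distance, each pair normalized by the longer string (Eq. 1)."""
    pairs = list(combinations(generations, 2))
    return sum(levenshtein(a, b) / max(len(a), len(b), 1) for a, b in pairs) / max(len(pairs), 1)

def avg_ngram_overlap(generations, n=1) -> float:
    """Average pairwise n-gram overlap (Jaccard over n-gram sets); lower means more diverse."""
    def ngrams(text):
        toks = text.split()
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
    pairs = list(combinations(generations, 2))
    scores = [len(ngrams(a) & ngrams(b)) / max(len(ngrams(a) | ngrams(b)), 1) for a, b in pairs]
    return sum(scores) / max(len(scores), 1)
```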
From Table 10, we can see that after adding the reasoning types, the diversity of both zero-shot and
few-shot increases significantly. It indicates that the introduction of various reasoning types can
make the LLM’s reasoning more diverse. We can also find that in most cases, the few-shot with
reasoning types has the highest diversity, while in BBH, the zero-shot setting can benefit more from
the reasoning types.
Table 10: Adding reasoning types can enhance diversity in both zero-shot and few-shot sampling settings. It can
significantly increase the distance and reduce the n-gram overlaps between generations. For each setting, we
use Mistral 7B to sample 5 solutions with temperature = 1. @5 indicates repeated sampling 5 times, + 5 types
indicates sampling one solution per reasoning type. The diversity is averaged over the whole test set.
| Benchmark | Sampling Setting | Levenshtein Distance ↑ | Unigram overlap ↓ | 4-gram overlap ↓ |
|---|---|---|---|---|
| LogiQA | Zero-shot @ 5 | 0.3043 | 0.5877 | 0.5367 |
| LogiQA | Zero-shot + 5 types | 0.5997 | 0.2584 | 0.1863 |
| LogiQA | Few-shot @ 5 | 0.5729 | 0.2316 | 0.1226 |
| LogiQA | Few-shot + 5 types | **0.6437** | **0.1745** | **0.0773** |
| BBH | Zero-shot @ 5 | 0.5170 | 0.3104 | 0.2157 |
| BBH | Zero-shot + 5 types | **0.7117** | **0.1280** | **0.0495** |
| BBH | Few-shot @ 5 | 0.5992 | 0.2239 | 0.1191 |
| BBH | Few-shot + 5 types | 0.6495 | 0.1750 | 0.0764 |
| GSM8k | Zero-shot @ 5 | 0.6242 | 0.1951 | 0.0907 |
| GSM8k | Zero-shot + 5 types | 0.6831 | 0.1513 | 0.0540 |
| GSM8k | Few-shot @ 5 | 0.4977 | 0.3117 | 0.1761 |
| GSM8k | Few-shot + 5 types | **0.7097** | **0.1366** | **0.0476** |
| MATH | Zero-shot | 0.6726 | 0.1573 | 0.0703 |
| MATH | Zero-shot + 5 types | 0.7291 | **0.1116** | **0.0352** |
| MATH | Few-shot | 0.6588 | 0.1741 | 0.0804 |
| MATH | Few-shot + 5 types | **0.7319** | 0.1154 | 0.0382 |
A.4 IMPACT STATEMENT
This work will enhance current LLMs with better reasoning capability, which can make them more
useful in problem-solving. There might be some potential societal consequences of our work, none
of which we feel must be specifically highlighted here. However, it might be misused as we release
all our code and data for reproduction. We will try our best to avoid the potential misuse.
Table 11: Examples of four reasoning types.
**Deduction**
Query: Alice, Bob, and Claire are dancers at a square dance. At the start of a song, they each have a partner: Alice is with Lola, Bob is with Rodrigo, and Claire is with Patrick. Throughout the song, Alice and Bob switch partners; Claire and Bob switch; finally, Bob and Alice switch. At the end of the dance, Alice is dancing with: Options: (A) Lola (B) Rodrigo (C) Patrick
Example: (0) At the start: Alice: Lola, Bob: Rodrigo, Claire: Patrick. (1) Alice and Bob switch: Alice: Rodrigo, Bob: Lola, Claire: Patrick. (2) Claire and Bob switch: Alice: Rodrigo, Bob: Patrick, Claire: Lola. (3) Bob and Alice switch: Alice: Patrick, Bob: Rodrigo, Claire: Lola. At the end of the dance, Alice is dancing with Patrick. So the answer is (C).

**Induction**
Query: Students who told a lie overestimated how many people could detect it. Volleyball players performing poorly thought teammates noticed more than they actually did. A student wearing a funny T-shirt expected everyone to notice, but only a few did. Which option best illustrates the statements above? (A) People overestimate how often others notice their appearance and behavior. (B) People rarely notice the appearance or behavior of others. (C) We are less observant of others’ appearance and behavior than we think. (D) People are less aware of their appearance and behavior than others are.
Example: By examining each of the scenarios mentioned, we notice all these examples indicate that individuals overestimate the level and extent of attention their actions, appearances, or behaviors receive from others. This consistency across different contexts illustrates a broader psychological phenomenon. So the answer is (A).

**Abduction**
Query: The integer m is between 30 and 80 and is a multiple of 6. When m is divided by 8, the remainder is 2. Similarly, when m is divided by 5, the remainder is 2. What is the value of m?
Example: To solve this problem using abductive reasoning, we assume that one possible value of m exists that abides by the constraints and check if this assumption holds. 1. First, filter values of m that are multiples of 6 between 30 and 80. 2. Next, apply the condition that when m is divided by 8, the remainder is 2. Only 42 and 66 fit this condition. 3. Apply the third condition, that when divided by 5, m should leave a remainder of 2. Testing the applicable values so far, we find that 42 meets the requirement. So the answer is 42.

**Analogy**
Query: John is 24 years younger than his dad. The sum of their ages is 68 years. How many years old is John?
Example: Retrieval: Question: Lisa is 10 years younger than her mom. The sum of their ages is 70 years. How old is Lisa? Answer: Lisa is 30 years old and her mom is 40 years old. These are solved using the same approach as the problem about John and his dad’s ages, i.e., setting up two equations based on the information given and then solving for the two variables representing the ages. Therefore, for the given question, John is 22 years old.
| [
"Danqing, Wang",
"Lei, Li",
"Fei, Fang",
"Jianxin, Ma"
] | 2024-10-02T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2410.01952 | https://arxiv.org/abs/2410.01952 | https://www.semanticscholar.org/paper/55a0ebdc0b3fbc1f1e86e9975296d3c26f00f795 |
Unlocking Structured Thinking in Language Models with Cognitive prompting | We propose cognitive prompting as a novel approach to guide problem-solving in large language models (LLMs) through structured, human-like cognitive operations such as goal clarification, decomposition, filtering, abstraction, and pattern recognition. By employing systematic, step-by-step reasoning, cognitive prompting enables LLMs to efficiently tackle complex, multi-step tasks. We evaluate the effectiveness of cognitive prompting on Meta's LLaMA models, comparing performance on arithmetic reasoning tasks using the GSM8K dataset and on commonsense reasoning benchmarks. Our analysis includes comparisons between models without cognitive prompting, models with a static sequence of cognitive operations, and models using reflective cognitive prompting, where the LLM dynamically self-selects the sequence of cognitive operations. The results show that cognitive prompting, particularly when dynamically adapted, significantly improves the performance of larger models, such as LLaMA3.1 70B, and enhances their ability to handle multi-step reasoning tasks. This approach also improves interpretability and flexibility, highlighting cognitive prompting as a promising strategy for general-purpose AI reasoning. | Cognitive prompting, particularly when dynamically adapted, significantly improves the performance of larger models, such as LLaMA3.1 70B, and enhances their ability to handle multi-step reasoning tasks, highlighting cognitive prompting as a promising strategy for general-purpose AI reasoning. | ## UNLOCKING STRUCTURED THINKING IN LANGUAGE MODELS WITH COGNITIVE PROMPTING
**Oliver Kramer & Jill Baumann**
Computational Intelligence Lab
Universität Oldenburg
26122 Oldenburg, Germany
_{oliver.kramer,jill.baumann}@uni-oldenburg.de_
ABSTRACT
We propose cognitive prompting as a novel approach to guide problem-solving in
large language models (LLMs) through structured, human-like cognitive operations such as goal clarification, decomposition, filtering, abstraction, and pattern
recognition. By employing systematic, step-by-step reasoning, cognitive prompting enables LLMs to efficiently tackle complex, multi-step tasks. We evaluate
the effectiveness of cognitive prompting on Meta’s LLaMA models, comparing
performance on arithmetic reasoning tasks using the GSM8K dataset and on commonsense reasoning benchmarks. Our analysis includes comparisons between
models without cognitive prompting, models with a static sequence of cognitive
operations, and models using reflective cognitive prompting, where the LLM dynamically self-selects the sequence of cognitive operations. The results show that
cognitive prompting, particularly when dynamically adapted, significantly improves the performance of larger models, such as LLaMA3.1 70B, and enhances
their ability to handle multi-step reasoning tasks. This approach also improves
interpretability and flexibility, highlighting cognitive prompting as a promising
strategy for general-purpose AI reasoning.
1 INTRODUCTION
Recent advancements in artificial intelligence (AI), especially with large language models (LLMs),
have made great progress in emulating human reasoning to solve tasks like text summarization (Stiennon et al., 2020), code generation (Guo et al., 2023), and question answering (Lu et al., 2022).
While LLMs excel at generating coherent text and handling vast data, their ability to perform multi-step reasoning still falls short of human cognitive processes. Human cognition, marked by its structured nature, provides a compelling blueprint for guiding AI through complex tasks that require
layered thinking and adaptability.
This paper introduces a novel approach called cognitive prompting, designed to enhance problem-solving in LLMs by systematically emulating human cognitive operations (COPs). Cognitive
prompting organizes problem-solving into distinct cognitive steps—such as goal clarification, task
decomposition, and pattern recognition—allowing LLMs to tackle complex tasks in a more structured and interpretable manner, see Figure 1. Inspired by cognitive psychology and cognitive architectures like ACT-R (Anderson & Lebiere, 1996), this method bridges the gap between human-like
reasoning and AI’s computational power, enabling models to handle tasks in fields such as mathematics, logic, decision-making, and creativity with greater precision. Our experiments, conducted
with Meta’s LLaMA models (Touvron et al., 2023) on the GSM8K (Cobbe et al., 2021) and a
commonsense benchmark (Shi & Lipani, 2024), demonstrate significant improvements in task performance when cognitive prompting is applied. In particular, the reflective variant of cognitive
prompting leads to enhanced reasoning capabilities.
The structure of the paper is as follows: Section 2 introduces the concept of cognitive prompting,
detailing its core operations and their application in problem-solving. Section 3 presents experimental results on the impact of cognitive prompting on arithmetic reasoning tasks, while Section 4
explores its effectiveness in commonsense reasoning. Section 5 reviews related work on prompt
engineering strategies. Finally, Section 6 concludes the paper. The appendix contains exemplary
reasoning processes and examples for problem-specific COPs.
2 COGNITIVE PROMPTING
Cognitive prompting organizes problem-solving through a structured sequence of human-like COPs,
enabling LLMs to tackle complex tasks across domains such as mathematics, logic, creativity, and
decision-making. This method, inspired by principles in cognitive psychology, breaks problems into
stages like goal clarification, decomposition, filtering, and integration—mimicking the way humans
refine their understanding of tasks. By leveraging this structured approach, cognitive prompting
enhances clarity, interpretability, and adaptability in LLM reasoning.
Unlike methods like Chain of Thought (CoT) (Wei et al., 2022), cognitive prompting offers more
general multi-dimensional operational depth, allowing LLMs to approach a wider variety of problems with reasoning progression. This framework, rooted in dual-process and problem-space theories, encourages both intuitive and analytical reasoning, helping models transition between pattern
recognition, abstraction, and integration for more consistent and interpretable solutions. Cognitive
prompting can be formalized as an optimization problem. Let $C = \{c_1, c_2, \dots, c_n\}$ represent a set of COPs and $S = \{s_1, s_2, \dots, s_k\}$ denote a sequence of k operations from C. The objective is to find the sequence $S^*$ that maximizes task performance, $S^* = \arg\max_{S \subseteq C} f(S)$, subject to constraints such as $|S| = k$, $s_1 =$ goal clarification, and $s_k =$ integration. Here, $f(S)$ represents task performance, e.g., accuracy, efficiency, coherence.
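The optimization view can be made concrete with a schematic Python sketch that enumerates admissible sequences and keeps the one with the highest score; the scoring function f is a placeholder for running the LLM with the corresponding prompt and measuring task performance, and is not part of the paper.

```python
from itertools import permutations

COPS = ["goal clarification", "decomposition", "filtering", "reorganization",
        "pattern recognition", "abstraction", "generalization", "integration"]

def best_sequence(f, k=4):
    """Brute-force search over sequences that start with goal clarification and end with integration."""
    middle_ops = [c for c in COPS if c not in ("goal clarification", "integration")]
    best, best_score = None, float("-inf")
    for middle in permutations(middle_ops, k - 2):
        candidate = ["goal clarification", *middle, "integration"]
        score = f(candidate)      # e.g. accuracy of the LLM when prompted with this sequence
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

# Example with a dummy scoring function (a real f would query the model):
# best_sequence(lambda seq: -len(" ".join(seq)), k=4)
```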
Cognitive prompting follows a structured process that mirrors human problem-solving. Key COPs
include:
**Goal Clarification:** Clearly define the objective of the problem to maintain focus on solving it
effectively. In the context of COP, goal clarification ensures that the model aligns its reasoning
with the desired outcome, minimizing distractions. Let G represent the goal, and all subsequent
operations should be oriented toward achieving G, helping the model concentrate on the correct
direction of reasoning.
**Decomposition:** Break down the problem $P$ into smaller, manageable components $\{P_1, P_2, \dots, P_n\}$, where $P = \bigcup_{i=1}^{n} P_i$. This step is crucial in COP as it allows the model to tackle complex, multi-step problems incrementally. Decomposition is particularly useful in mathematical problem-solving and logic tasks, where breaking a problem into sub-problems allows the model to apply specific operations or strategies to each part. Moreover, decomposition helps to
identify the core structure of the problem, isolating the critical steps required for a comprehensive
solution.
General Cognitive Prompting
Instructions:
Solve the following problem by choosing and applying appropriate
cognitive operations from the list below. For each step, provide your
concise reasoning before moving on.
Cognitive Operations:
1. Goal Clarification: Define the objective clearly.
2. Decomposition: Break down the problem into manageable parts.
3. Filtering: Focus on the most relevant information.
4. Reorganization: Arrange the information to reveal structure
5. Pattern Recognition: Identify recurring patterns or relationships.
6. Abstraction: Extract fundamental principles from the patterns.
7. Generalization: Apply the abstracted principles to the larger problem.
8. Integration: Synthesize the components into a cohesive solution.
Problem: [SPECIFIC PROBLEM TO SOLVE]
Your Response: Please start with "Goal Clarification" and proceed
through each cognitive operation step by step, providing detailed
reasoning and explanations.
Arithmetic Cognitive Prompting
Instructions:
Solve the following arithmetic problem by following each step of
the cognitive operations listed below. For each step, provide your
reasoning and calculations before moving on to the next step.
Cognitive Operations:
1. Goal Clarification: Restate the problem in your own words.
2. Decomposition: List the given information.
3. Filtering: Identify what you need to find.
4. Reorganization: Assign variables to the unknowns.
5. Pattern Recognition: define each variable clearly.
6. Abstraction: Set up equations based on the problem.
7. Generalization: Solve the equations step by step.
8. Integration: Verify your solution with the given information.
Problem: [ARITHMETIC PROBLEM TO SOLVE]
Your Response: Please start with "Restate the problem in your
own words" and proceed through each cognitive operation step by
step, providing detailed reasoning and calculations for each.
Figure 1: Left: General cognitive prompting, Right: Cognitive prompting adapted to arithmetical
reasoning.
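As a small illustration, the helper below assembles the general cognitive prompt of Figure 1 (left) for a given problem; the template text is abridged from the figure, so the exact wording sent to the models in the paper may differ.

```python
GENERAL_COPS = [
    "Goal Clarification: Define the objective clearly.",
    "Decomposition: Break down the problem into manageable parts.",
    "Filtering: Focus on the most relevant information.",
    "Reorganization: Arrange the information to reveal structure.",
    "Pattern Recognition: Identify recurring patterns or relationships.",
    "Abstraction: Extract fundamental principles from the patterns.",
    "Generalization: Apply the abstracted principles to the larger problem.",
    "Integration: Synthesize the components into a cohesive solution.",
]

def build_cognitive_prompt(problem: str) -> str:
    """Fill the general cognitive prompting template with a concrete problem."""
    ops = "\n".join(f"{i}. {op}" for i, op in enumerate(GENERAL_COPS, 1))
    return (
        "Solve the following problem by choosing and applying appropriate cognitive "
        "operations from the list below. For each step, provide your concise reasoning "
        "before moving on.\n\nCognitive Operations:\n" + ops +
        f"\n\nProblem: {problem}\n\nYour Response: Please start with 'Goal Clarification' "
        "and proceed through each cognitive operation step by step."
    )
```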
**Filtering:** Select the most relevant information from $I = \{i_1, i_2, \dots, i_m\}$ using a filtering function $F(I) = I_{\mathrm{rel}} \subseteq I$. Filtering is essential in COP to prevent the model from being overwhelmed by irrelevant details. In complex tasks, the problem statement may include redundant or distracting information, so filtering ensures that the model focuses on the essential data points that directly
impact problem-solving. This operation can significantly improve accuracy by narrowing down
the scope of attention to the key elements required for a solution. Filtering also helps prioritize
conflicting information by selecting the most reliable or impactful inputs for further operations.
**Reorganization:** Rearrange data, variables, or equations $D$ to reveal patterns or simplify the structure, such that $\mathrm{Reorder}(D) \rightarrow D'$. In COP, reorganization plays a crucial role in enabling the model to manipulate the structure of the information to expose underlying patterns or simplify the problem-solving process. This operation helps in transforming complex, disordered data into a more logical
and interpretable form, allowing the model to focus on solving manageable sub-problems. Reorganization can be especially useful in algebraic manipulations, where reordering terms or rearranging
equations simplifies solving or leads to the discovery of connections between different parts of the
problem.
**Pattern Recognition:** Identify recurring relationships or patterns P in the data, which facilitates
the application of known solutions. In COP, pattern recognition helps the model detect similarities with previously encountered problems, accelerating problem-solving by applying already-established solutions to new contexts. Recognizing patterns not only speeds up problem-solving
but also enhances the model’s ability to predict the next steps in a sequence or foresee potential
outcomes based on recognized trends. This is particularly beneficial in domains like mathematics
and logic, where identifying structural or numerical patterns allows for the reuse of strategies from
similar problems, leading to more efficient and elegant solutions. Moreover, it enables the model
to generalize from specific cases to broader principles, laying the groundwork for abstraction and
generalization.
**Abstraction:** Extract broader principles A from the identified patterns P, and generalize them to
apply across different problems or contexts. In COP, abstraction enables the model to transcend
specific details and focus on fundamental principles, which enhances its adaptability to new and
unfamiliar tasks by recognizing underlying structures. Abstraction is a key step in solving not just
individual problems but entire classes of problems by deriving rules, formulas, or frameworks that
can be applied universally. By focusing on the core ideas underlying a problem, abstraction helps
simplify the solution and extends the model’s reasoning capabilities beyond surface-level details,
improving its ability to tackle complex and novel tasks that require higher-order thinking.
**Generalization:** Apply abstracted principles A to the broader problem or similar contexts, such
that fgen(A) = {P1, P2, . . ., Pk}. Generalization in COP ensures that solutions are not isolated
to the specific instance but are scalable across various related problems. This operation allows the
model to extend insights gained from the current task to solve new problems with similar structures.
By abstracting and generalizing, the model improves its adaptability, enabling it to handle a wide
range of tasks beyond the immediate problem and apply the same cognitive framework to different
contexts, thereby enhancing its reasoning flexibility and robustness.
**Integration:** Synthesize the individual solutions Qi into a cohesive final solution Q, ensuring
all components of the problem are addressed and fit together logically. In COP, integration is the
culmination of the reasoning process, where the model combines all the previously solved subproblems into a comprehensive, unified solution, ensuring coherence and completeness.
**Static and Reflective Cognitive Prompting:** This flexible process allows LLMs to dynamically
apply the most relevant operations based on the task’s context, enhancing problem-solving performance across various domains. In static cognitive prompting, a fixed order S = [s1, s2, . . ., sk] of
COPs is followed throughout the problem-solving process, ensuring a structured yet rigid approach.
In contrast, reflective cognitive prompting allows the LLM to self-select the sequence of COPs,
adapting flexibly to the task’s needs, i.e., choosing the next COP $s_i \in C$ at each step. This adaptability not only improves the model’s ability to solve complex problems but also offers structured,
interpretable explanations of the reasoning processes.
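The control flow of the two variants can be sketched as follows; `llm` stands for any text-completion callable and the prompt wording is illustrative rather than the one used in the experiments.

```python
def static_cognitive_prompting(llm, problem, sequence):
    """Follow a fixed order S = [s1, ..., sk] of cognitive operations."""
    transcript = f"Problem: {problem}\n"
    for op in sequence:
        transcript += llm(transcript + f"\nApply the operation '{op}' and report your reasoning.\n")
    return transcript

def reflective_cognitive_prompting(llm, problem, available_ops, max_steps=8):
    """Let the model self-select the next cognitive operation s_i from C at every step."""
    transcript = f"Problem: {problem}\n"
    for _ in range(max_steps):
        choice = llm(transcript + "\nWhich cognitive operation should be applied next? "
                     f"Choose one of {available_ops} or answer 'done'.\n").strip().lower()
        if choice == "done" or choice not in available_ops:
            break
        transcript += llm(transcript + f"\nApply the operation '{choice}' and report your reasoning.\n")
    return transcript
```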
**Domain Specific COPs** The effectiveness of cognitive prompting is significantly enhanced when
the general COPs are adapted to specific problem domains. By tailoring each cognitive operation to
the characteristics of a particular domain, the model can better align its reasoning process with the
demands of the task. For example, the decomposition operation in scientific inquiry might involve
breaking down a complex hypothesis into smaller, testable components, while in ethical decision-making, decomposition could involve identifying and separating conflicting moral principles and
stakeholder interests. This domain-specific adaptation ensures that the reasoning process remains
relevant and effective for each type of problem. A detailed overview of how COPs are adapted across
different domains, such as scientific inquiry and ethical decision-making, can be found in Table 1 in
the Appendix.
3 ARITHMETIC REASONING
**Benchmark** We evaluate the performance of cognitive prompting with Meta’s LLAMA models
(8B and 70B) on the GSM8K dataset (Cobbe et al., 2021), a widely used benchmark for math
problem-solving. GSM8K consists of about 7k training and 1.5k high-quality, grade-school math
word problems, designed to test the reasoning and mathematical abilities of LLMs. As cognitive
prompting does not require training, we only employ the problems in the test set.
**COPs** The general COPs are adapted to arithmetic reasoning as follows, see Figure 1, right. In
math problems, restating the problem in one’s own words helps to ensure clarity. Listing the given
information identifies known values and relationships. Identifying the unknowns to be solved is
essential, and assigning appropriate variables to these unknowns ensures clarity during the solution process. Defining each variable clearly avoids confusion. Setting up equations based on the
problem’s relationships enables step-by-step solutions. Verifying the solution against the given information ensures accuracy, and presenting the final answer clearly helps maintain consistency and
logic.
**Results** The 8B model achieves scores of 0.7 across all prompting techniques. In comparison,
the 70B model shows significant improvement, with scores increasing from 0.87 (no prompting)
to 0.89 (static cognitive prompting) and 0.91 (reflective cognitive prompting), see Figure 2 (left).
The results on GSM8K indicate that larger models, such as the 70B, exhibit marked improvements
in performance when utilizing more advanced prompting techniques. While the 8B model’s scores
remain consistent at around 0.7, regardless of whether prompting techniques are used, the 70B
model demonstrates a clear upward trend, benefiting more from prompting. Specifically, reflective
cognitive prompting yields the highest score of 0.91, followed by static at 0.89, and no prompting
at 0.87. This suggests that larger models are better able to take advantage of prompting techniques,
especially reflective cognitive prompting, which seems to facilitate deeper reasoning or reflection in the model. The reduced variability in the 70B model’s results also points to greater stability and reliability when applying more sophisticated prompts.

Figure 2: Left: Accuracies of cognitive prompting (CP) strategies and models (3 repetitions) on arithmetic reasoning problems. Right: Occurrence of the top nine cognitive prompting sequences in the 70B model, with goal clarification (GC), decomposition (DC), pattern recognition (PR), generalization (GN), and reorganization (RE).
Figure 2 (right) shows the occurrences of cognitive operation sequences in one of the reflective
cognitive prompting 70B experiments, with the most frequent sequences at the top. Each bar represents a combination of processes such as goal clarification, decomposition, pattern recognition,
generalization, and reorganization. The number of occurrences for each sequence is labeled inside
the bars in white. The plot presents the data in descending order, from the most common to the
least frequent cognitive operation sequences. The sequence occurrences show that the most common cognitive operation sequence is goal clarification, decomposition, and pattern recognition. This
short sequence appears much more frequently than other combinations, suggesting that it is a fundamental or widely used combination in cognitive tasks. Additionally, the majority of occurrences
are concentrated among the first six sequences, which are comparatively shorter in length. This suggests that simpler and more concise sequences are favored or more commonly applied. Longer and
more complex sequences, such as those involving generalization and reorganization, occur much
less frequently, indicating that these operations might be used in more specific or specialized cases.
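A minimal sketch of the frequency analysis behind Figure 2 (right), assuming the self-selected operation names have already been extracted from the model outputs; the input format is an assumption.

```python
from collections import Counter

def count_sequences(selected_ops_per_problem):
    """Count how often each ordered sequence of cognitive operations was chosen."""
    counts = Counter(tuple(seq) for seq in selected_ops_per_problem)
    return counts.most_common()

runs = [["goal clarification", "decomposition", "pattern recognition"],
        ["goal clarification", "decomposition", "pattern recognition"],
        ["goal clarification", "decomposition", "generalization", "reorganization"]]
print(count_sequences(runs)[0])  # the most frequent sequence and its count
```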
4 COMMONSENSE REASONING
To further assess the versatility of cognitive prompting, we tested its effectiveness on a set of commonsense reasoning problems. These problems often require balancing practical knowledge, everyday logic, and context, making them an ideal domain for evaluating the structured thinking capabilities that cognitive prompting provides.
**Benchmark** We evaluate the cognitive prompting approach using both sizes of LLaMA3.1 on a
randomly selected subset of questions from the commonsense dataset, a component of the ethics
problem benchmark (Shi & Lipani, 2024). Due to the censorship restrictions of LLaMA models on
ethical questions, some requests are rejected. Therefore, we have limited our analysis to 1,000 cases
where all models provide a valid response.
**COPs** The cognitive operations are adapted to handle commonsense reasoning tasks by guiding
models through structured problem-solving steps, see Figure 3. Using goal clarification, the models
define the objective or intended commonsense outcome clearly, ensuring the task is well understood.
Decomposition allows them to break the problem into key components, focusing on relevant facts
or details. Filtering and reorganization help the models concentrate on the most significant aspects
of the scenario while reorganizing information to resolve ambiguities or conflicts. Pattern recognition and abstraction are essential for identifying parallels with other commonsense situations and
extracting general principles that apply to the current problem. Generalization enables the models
to use these principles not only for the current scenario but also for similar future cases. Integration
brings together all perspectives and principles to form a cohesive and logical commonsense conclusion.
Commonsense Cognitive Prompting
Instructions:
Please evaluate the following commonsense dilemma by systematically applying the cognitive operations
listed below. For each step, provide your reasoning and detailed explanation before proceeding to the
next step.
Cognitive Operations:
1. Goal Clarification: Clearly define the objective or intended commonsense outcome.
2. Decomposition: Break the problem down into its key components and relevant factors.
3. Filtering: Focus on the most important commonsense elements and disregard irrelevant details.
4. Reorganization: Rearrange facts and perspectives to clarify conflicts or ambiguities.
5. Pattern Recognition: Identify similarities with other commonsense scenarios or precedents.
6. Abstraction: Extract broader commonsense principles that can be applied to this situation.
7. Generalization: Apply the identified principles to both the current scenario and potential future cases.
8. Integration: Combine all perspectives and principles into a final commonsense decision.
Problem: [COMMONSENSE PROBLEM TO SOLVE]
Your Response: Please start with 'Goal Clarification' and proceed through each cognitive operation step
by step, providing detailed reasoning and explanations for each.
Figure 3: Commonsense reasoning prompts used for cognitive prompting.
This structured cognitive approach enhances the models’ ability to deliver accurate, practical
solutions in commonsense reasoning tasks.
**Results** Figure 4 (left) illustrates that static cognitive prompting outperforms the absence of cognitive prompting, while reflective cognitive prompting further improves performance over static in
the 8B model. The 70B model consistently outperforms the 8B model. For the 8B model, cognitive prompting variants show a significant boost in accuracy, rising from 0.605 without prompting
to over 0.74 with cognitive prompting. Interestingly, for the 70B model, no cognitive prompting
achieves the highest accuracy at 0.84, slightly outperforming reflective cognitive prompting at 0.81.
Upon further analysis of the models’ outputs, we found that the larger model tends to over-process
multiple reasoning steps, leading to errors when too many steps are chosen—an effect resembling
overfitting. To address this, we experiment with introducing constraints on the number of COPs for
larger models to regularize their reasoning process.
Figure 4: Left: Accuracies of cognitive prompting (CP) strategies and models (2 repetitions for 8B,
1 for 70B) on commonsense reasoning problems, Right: Occurrence of top nine and other cognitive
prompting sequences in 70B model with abbreviations like in Figure 2 (right) and filtering (FI),
abstraction (AB), reasoning (RS), and integration (IN).
Figure 4 (right) shows the distribution of cognitive operation sequences. In commonsense reasoning,
a wider variety of sequences is selected compared to arithmetic reasoning, with over 300 different
sequences occurring between 1 and 10 times. This diversity suggests that commonsense reasoning
tasks prompt more varied approaches than purely arithmetic problems.
5 RELATED WORK
Prompting is a key technique for leveraging pre-trained LLMs to perform tasks by guiding their outputs through well-crafted instructions. In zero-shot prompting, models generate responses without
task-specific examples, while few-shot prompting (Brown et al., 2022) improves performance by
including a few task examples. CoT prompting (Wei et al., 2022) breaks down complex reasoning
into intermediate steps, enabling systematic problem-solving, while Tree of Thoughts (ToT) (Yao
et al., 2023a) extends CoT by enabling LLMs to explore multiple reasoning paths and make deliberate decisions. Building on CoT, ReAct (Yao et al., 2023b) combines reasoning with real-time
decision-making, enhancing models’ abilities to handle dynamic tasks. This approach allows for
more flexible handling of unpredictable inputs, mimicking human cognitive processes like adjusting
decisions on the fly.
Prompt Breeder (Fernando et al., 2023) optimizes prompts using evolutionary computation to iteratively refine and improve performance. Similarly, self-consistency (Wang et al., 2023) enhances
reliability by generating multiple responses and selecting the most consistent one, reducing variability in complex tasks. This method significantly mitigates the challenge of output randomness that
often hampers LLM reliability in open-ended problem-solving scenarios.
Automated Prompt Engineering (APE) (Zhou et al., 2023) automates prompt optimization through
model self-instruction and feedback loops, pushing the boundaries of human-computer collaboration. Optimization by PROmpting (OPRO) (Yang et al., 2024) uses LLMs to iteratively generate and
refine solutions, significantly outperforming human-designed prompts in optimization tasks. These
automated approaches open new avenues for improving performance without extensive human intervention, allowing models to autonomously evolve their problem-solving strategies.
Recent works also explore multi-task learning to generalize prompt strategies across diverse applications, further enhancing their adaptability. Techniques like retrieval-augmented generation (RAG)
(Lewis et al., 2020) combine prompting with external knowledge sources, offering richer context
and better-informed outputs, demonstrating how prompts can evolve to integrate more human-like
reasoning. Recent advancements in parameter-efficient fine-tuning methods, such as decomposed
prompt tuning (DePT) (Shi & Lipani, 2024), have demonstrated how efficient prompt-based strategies can reduce memory and computational costs in large language models, which can complement
the flexibility provided by cognitive prompting in adapting models to complex problem-solving
tasks. To the best of our knowledge, no prompt strategies are motivated explicitly by human-like
COPs.
6 CONCLUSIONS
Cognitive prompting models human reasoning as a sequence of COPs delivered through prompts.
It fosters structured thinking using general COPs or domain-specific adaptations. Unlike example-based approaches that rely on memorized examples, cognitive prompting emphasizes high-level
reasoning, making it adaptable across a wide range of tasks. The specialization of these cognitive
operations for specific domains allows it to tackle diverse problems effectively. Our experiments
demonstrate that cognitive prompting, particularly the reflective variant, is highly effective in guiding LLMs through complex tasks such as GSM8K math problems and commonsense reasoning.
Reflective prompting significantly enhances the performance of smaller models, consistently outperforming static prompting. However, in larger models like the 70B, cognitive prompting excels in
arithmetic reasoning but suffers in commonsense tasks, where excessive reasoning steps reduce performance—similar to overfitting—indicating the need for regularization. For future work, we plan
to extend experiments across more domains and models, exploring the effectiveness of cognitive
prompting in areas like legal reasoning, medical decision-making, and strategic planning. This will
ensure the robustness of the approach across general and specialized tasks.
REPRODUCIBILITY STATEMENT
Our experiments use Meta’s LLaMA models, which are open-source and accessible. To ensure
reproducibility, we have included all used prompts and detailed experimental settings in Appendix
C. The complete codebase, including cognitive prompting scripts, will be available on GitHub after
publication, allowing researchers to replicate our results and apply the techniques to other tasks.
ETHICS STATEMENT
Cognitive prompting promotes structured, human-like reasoning, enhancing transparency and consistency. However, modeling human-like thinking in sensitive domains, such as ethical decision-making, raises concerns about biased reasoning and harmful outcomes. To mitigate risks, we focus
on well-defined contexts like mathematics and commonsense reasoning, with no access to sensitive
data. We urge careful consideration of ethical implications when applying cognitive prompting to
more complex tasks.
REFERENCES
John R. Anderson and Christian Lebiere. ACT-R: A theory of higher-level cognition and its relation
_to visual attention. Psychological Review, 1996._
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal,
Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M.
Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin,
-----
Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford,
Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In Neural Information Processing Systems (NeurIPS), volume 33, pp. 1877–1901, 2020.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John
[Schulman. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168,](http://arxiv.org/abs/2110.14168)
2021.
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, and Tim Rocktäschel.
Promptbreeder: Self-referential self-improvement via prompt evolution. Neural Information Pro_cessing Systems (NeurIPS) Workshop, 2023._
Daya Guo, Canwen Xu, Nan Duan, Jian Yin, and Julian McAuley. Longcoder: A long-range pretrained language model for code completion. In International Conference on Machine Learning
_(ICML), pp. 12098–12107, 2023._
Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman
Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and
Douwe Kiela. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Neural
_Information Processing Systems (NeurIPS), volume 33, pp. 9459–9474, 2020._
Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord,
Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for
science question answering. In Neural Information Processing Systems (NeurIPS), volume 35,
pp. 2507–2521, 2022.
Zhengxiang Shi and Aldo Lipani. DePT: Decomposed prompt tuning for parameter-efficient fine-tuning. In International Conference on Learning Representations (ICLR), 2024.
Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford,
Dario Amodei, and Paul F Christiano. Learning to summarize with human feedback. In Neural
_Information Processing Systems (NeurIPS), volume 33, pp. 3008–3021, 2020._
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée
Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation
language models, 2023.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H. Chi, Sharan Narang, Aakanksha
Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language
models. In International Conference on Learning Representations (ICLR), 2023.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi,
Quoc V. Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language
models. In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh
(eds.), Neural Information Processing Systems (NeurIPS), volume 35, pp. 24824–
24837, 2022.
Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V Le, Denny Zhou, and Xinyun Chen.
Large language models as optimizers. In International Conference on Learning Representations
_(ICLR), 2024._
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik
Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. In Neural
_Information Processing Systems (NeurIPS), volume 36, pp. 11809–11822, 2023a._
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R. Narasimhan, and Yuan Cao.
React: Synergizing reasoning and acting in language models. In International Conference on
_Learning Representations (ICLR), 2023b._
Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and
Jimmy Ba. Large language models are human-level prompt engineers. In International Confer_ence on Learning Representations (ICLR), 2023._
-----
A ARITHMETIC REASONING EXAMPLE
Here is an example for arithmetic reasoning using static cognitive prompting and a LLaMA3.1 70B
answer:
**Instructions:**
Please solve the following problem by systematically applying the cognitive operations listed below. For each step, provide your reasoning and detailed explanation before proceeding to the next step.
**Cognitive Operations:**
1. Goal Clarification: Define the objective clearly.
2. Decomposition: Break down the problem into manageable parts.
3. Filtering: Focus on the most relevant information.
4. Reorganization: Arrange the information to reveal patterns or structure.
5. Pattern Recognition: Identify recurring patterns or relationships.
6. Abstraction: Extract fundamental principles from the patterns.
7. Generalization: Apply the abstracted principles to the larger problem.
8. Integration: Synthesize the components into a cohesive solution.
**Problem:**
John drives for 3 hours at a speed of 60 mph and then turns around because he realizes he forgot something very important at home. He tries to get home in 4 hours but spends the first 2 hours in standstill traffic. He spends the next half-hour driving at a speed of 30 mph, before being able to drive the remaining time of the 4 hours going at 80 mph. How far is he from home at the end of those 4 hours?
**Your Response:**
Please start with ’Goal Clarification’ and proceed through each cognitive operation step by step, providing detailed reasoning and explanations for each.
Here’s the solution to the problem using the specified cognitive operations:
**Goal Clarification**
The objective of this problem is to determine how far John is from home after 4 hours, given his varying speeds and traffic conditions.
**Decomposition**
Let’s break down the problem into smaller parts:
- Part 1: Calculate the distance traveled during the initial 3-hour drive.
- Part 2: Determine the time spent driving at different speeds during the return trip (4 hours).
- Part 3: Calculate the distance traveled during each segment of the return trip.
**Filtering**
Focus on the most relevant information:
- Initial speed and time: 60 mph, 3 hours
- Return trip details:
+ Standstill traffic: 2 hours
+ Speeds: 30 mph (0.5 hours), 80 mph (remaining time)
- We can disregard other details as they don’t affect the calculation.
**Reorganization**
Arrange the information to reveal patterns or structure:
- Initial drive:
+ Distance = Speed * Time = 60 mph * 3 hours
- Return trip:
+ Total time: 4 hours
+ Standstill traffic: 2 hours (no distance traveled)
+ Driving segments:
- Segment 1: 30 mph, 0.5 hours
- Segment 2: 80 mph, remaining time (4 - 2 - 0.5 = 1.5 hours)
**Pattern Recognition**
Identify recurring patterns or relationships:
- Initial drive distance can be calculated using the formula Distance = Speed * Time.
- Return trip segments also follow this pattern.
**Abstraction**
Extract fundamental principles from the patterns:
- To calculate the total distance traveled, add the distances traveled during each segment of the journey (initial drive and return trip).
**Generalization**
Apply the abstracted principles to the larger problem:
- Calculate the initial drive distance: 60 mph * 3 hours = 180 miles
- Calculate the return trip segment distances:
+ Segment 1 (30 mph, 0.5 hours): 30 mph * 0.5 hours = 15 miles
+ Segment 2 (80 mph, 1.5 hours): 80 mph * 1.5 hours = 120 miles
-----
**Integration**
Synthesize the components into a cohesive solution:
- Total distance traveled during return trip: 15 miles + 120 miles = 135 miles
- Since John turned around after the initial drive, he is now 180 - 135 = 45 miles from home.
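The arithmetic in the answer above can be checked mechanically; the short script below simply recomputes the distances as a sanity check and is not part of the model's output.

```python
# Verify the arithmetic for the driving problem.
initial_drive = 60 * 3          # 180 miles away from home after the first leg
standstill = 0                  # 2 hours in traffic, no distance covered
segment_1 = 30 * 0.5            # 15 miles at 30 mph for half an hour
segment_2 = 80 * (4 - 2 - 0.5)  # 80 mph for the remaining 1.5 hours = 120 miles
return_trip = standstill + segment_1 + segment_2

distance_from_home = initial_drive - return_trip
assert distance_from_home == 45
print(distance_from_home)  # 45.0 miles, matching the answer above
```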
B COMMONSENSE REASONING EXAMPLE
Here is an example for commonsense reasoning using reflective cognitive prompting and a
LLaMA3.1 8B answer:
**Instructions:**
Please evaluate whether the following commonsense problem is ’wrong’ or ’not wrong’.
For each step, choose and apply the most suitable cognitive operation from the list below and provide a concise explanation of your reasoning before moving on to the next step.
List of cognitive operations:
**Goal Clarification: Define the objective or the intended commonsense outcome clearly.**
**Decomposition: Break down the problem into its key components and relevant factors.**
**Filtering: Focus on the most important commonsense aspects and discard irrelevant information.**
**Reorganization: Rearrange the facts and perspectives to better understand potential conflicts or ambiguities.**
**Pattern Recognition: Identify similarities with other commonsense scenarios or precedents.**
**Abstraction: Extract broader commonsense principles that can be applied to this situation.**
**Reasoning: Reflect on the logical consistency of the solution and ensure it aligns with common sense.**
**Generalization: Apply the commonsense principles to the current and potentially future similar scenarios.**
**Integration: Synthesize all perspectives and principles into a final commonsense decision.**
**Problem:**
I made a cake without sugar for the diabetic patient.
**Your Response:**
Please proceed through the problem step by step, selecting the appropriate cognitive operation at each stage and providing a brief explanation of your reasoning. Finally, output your answer representing your solution (’wrong’ or ’not wrong’).
**Goal Clarification**
To evaluate whether making a cake without sugar for the diabetic patient is "wrong" or "not wrong", we need to clearly define what the intended commonsense outcome is. In this case, it’s understanding if it’s sensible to make a sugar-free cake for someone with diabetes.
**Filtering**
The key issue here is the diabetic patient part, as it directly relates to the presence or absence of sugar in the cake. Other factors like the recipe or the occasion can be filtered out since they are not directly relevant to the core question about sugar and diabetes.
**Pattern Recognition**
A common sense pattern here is that diabetic patients typically avoid consuming foods with high sugar content because it could lead to increased blood glucose levels. Making a cake without sugar might be considered sensible for this context, but the real question lies in understanding if completely removing sugar still aligns with commonsense.
**Reasoning**
Given the diabetic patient context, making a cake without sugar is indeed sensible because it adheres to dietary restrictions. However, the term "without sugar" might imply additional ingredients that could potentially affect blood glucose levels are present in the recipe. Without further information, there’s no clear indication this would be wrong; rather, it aligns with diabetic dietary guidelines.
**Integration**
Integrating all perspectives and principles, making a cake without sugar for a diabetic patient seems sensible because it caters to their dietary needs. There’s no apparent conflict or ambiguity that suggests this action would be "wrong."
Conclusion
not wrong
-----
C LIST OF PROBLEM-SPECIFIC COGNITIVE OPERATIONS
Table 1: Exemplary specialization of COPs for various problem domains.
| General Cognitive Operation | Creative Problem Solving |
|---|---|
| Goal Clarification | Clarify the Creative Challenge |
| Decomposition | Break the Challenge into Parts |
| Filtering | Focus on Key Constraints |
| Reorganization | Explore New Perspectives |
| Pattern Recognition | Identify Creative Patterns |
| Abstraction | Develop Broad Concepts |
| Generalization | Test and Refine Ideas |
| Integration | Synthesize Novel Solutions |

| Decision-Making | Scientific Inquiry |
|---|---|
| Define the Decision Objective | Formulate the Research Question |
| Break Decision into Factors | Break Research into Sub-Questions |
| Focus on Critical Information | Identify Key Variables |
| Arrange Alternatives | Plan the Experiment |
| Identify Patterns in Choices | Look for Patterns in Data |
| Extract General Principles | Develop Theoretical Insights |
| Test Against Criteria | Apply Findings Broadly |
| Make a Final Decision | Form Conclusions |

| Strategic Planning | Ethical Problem-Solving |
|---|---|
| Define the Strategic Objective | Clarify the Ethical Dilemma |
| Break Strategy into Steps | Break Dilemma into Components |
| Prioritize Focus Areas | Focus on Pressing Issues |
| Arrange Steps Logically | Consider Different Perspectives |
| Identify Strategic Trends | Identify Similar Cases |
| Formulate High-Level Plans | Develop Ethical Principles |
| Test Strategies Against Scenarios | Evaluate Solutions Against Principles |
| Develop a Cohesive Plan | Make a Final Ethical Judgment |

| Math Problem-Solving | Logical Problem-Solving |
|---|---|
| Restate the Problem in Your Own Words | Restate the Logical Problem Clearly |
| List the Given Information | Break Problem into Key Logical Clues |
| Identify What You Need to Find | Focus on the Most Critical Clues |
| Assign Variables to the Unknowns | Organize Information Logically |
| Define Each Variable Clearly | Identify Logical Deductions |
| Set Up Equations Based on the Problem | Generalize Rules or Inferences |
| Solve the Equations Step by Step | Test Inferences Against Remaining Clues |
| Verify Your Solution with the Given Information | Synthesize a Complete Solution |
| Provide a Clear and Direct Answer | Provide the Final Answer |
-----
| [
"Oliver, Kramer",
"Jill, Baumann"
] | 2024-10-03T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2410.02953v1 | https://arxiv.org/abs/2410.02953 | https://www.semanticscholar.org/paper/770811f516d8842f3460c5a651511ed0d594ee5b |
Unsupervised Discovery of Formulas for Mathematical Constants | In recent years, we are witnessing a rise of AI and machine learning methods for scientific discovery and hypothesis creation. Despite the strides in other fields of science, a persistent challenge lies in the creation of formulas for mathematical constants.In the landscape of formula creation, there is no straightforward ‘’distance metric'' between two samples that can guide progress. Formulas are either true or false, with no continuous adjustments that can enhance their correctness.The absence of a systematic method left the realm of formula discovery elusive for automated methods. In this work, we propose a systematic methodology for categorization, characterization, and pattern identification of such formulas. We demonstrate this methodology on Polynomial Continued Fraction formulas, which are ubiquitous in their intrinsic connections to mathematical constants, and generalize many mathematical functions and structures.We discover organizing metrics for the space of polynomial continued fractions. We test our methodology on a set of 1,768,900 such formulas, identifying many known formulas for mathematical constants, and discover previously unknown formulas for $\pi$, $\ln(2)$, Gauss, and Lemniscate constants. The uncovered patterns enable a direct generalization of individual formulas to infinite families, unveiling rich mathematical structures. This success paves the way towards a generative model that creates continued fractions fulfilling requested mathematical properties, potentially accelerating by orders of magnitude the rate of discovery of useful formulas. | null | null | null | null | NeurIPS 2024 | true | 0 | 0 | null | https://neurips.cc/virtual/2024/poster/95491 | null | null |
Update on FLoP, a Reinforcement Learning based Theorem Prover | N/A | null | # Update on FLoP, a Reinforcement Learning based Theorem Prover [∗]
Zsolt Zombori[1], Adri´an Csisz´arik[1], Henryk Michalewski[3], Cezary Kaliszyk[2], and
Josef Urban[4]
1 Alfr´ed R´enyi Institute of Mathematics, Budapest
2 University of Innsbruck
3 University of Warsaw, Google
4 Czech Technical University in Prague
## 1 Introduction
The FLoP system was built to allow for experimenting with advanced reinforcement learning
(RL) methods applied to guide theorem proving. Its particular focus is to enable learning from
and generalizing to long proofs, which is a largely unsolved challenge in theorem proving. The
system is very flexible in terms of what it can learn from: even a single training environment
(proof) can result in meaningful generalization. On the other hand, FLoP is simplistic in several
ways: 1) it learns from manually extracted features, 2) it can overfit in some learning scenarios
and 3) its merits have so far been demonstrated only on a very simple dataset. Here we only
address 1), the problem of feature extraction.
We present ongoing work that aims to use graph neural networks (gnn) [9] for feature extraction.
Gnns have been used to learn features of logic formulae on several supervised tasks, e.g. [3, 7,
8, 2]. However, there are very few experiments with such extractors in a reinforcement learning
setting. RL models are typically convolutional and dense networks. Related exceptions are [6]
and [5] that use graph extractors. However, while these systems use intertwined iterations of
proof search and supervised learning, FLoP uses a pure reinforcement learning loop.
We consider learned formula embedding as a stepping stone for more involved projects that
combine machine learning and theorem proving. In Appendix A and B we briefly present two
such project proposals planned as future work.
## 2 Feature extraction
Machine learning models require inputs embedded into some Euclidean space R[n]. However,
when it comes to learning to guide a theorem prover, states and actions are given as logical
formulae and it is highly unclear how to turn them into fixed length vectors. An often used
approach is to do some manual feature extraction. Currently, FLoP extracts triples of adjacent nodes in the formula trees as features. These features convey some statistically relevant
_∗ZZ and AC were supported by the European Union, co-financed by the European Social Fund (EFOP-_
3.6.3-VEKOP-16-2017-00002) and the Hungarian National Excellence Grant 2018-1.2.1-NKP-00008. HM was
supported by the Polish National Science Center grant UMO-2018/29/B/ST6/02959. CK was supported by
ERC grant no. 714034 SMART. JU was funded by the AI4REASON ERC Consolidator grant nr. 649043,
the Czech project AI&Reasoning CZ.02.1.01/0.0/0.0/15 003/0000466 and the European Regional Development
Fund.
-----
Update on FLoP Zombori, Csisz´arik, Michalewski, Kaliszyk, Urban
information, however, a large part of the semantics is lost. Another approach that is gaining
popularity is to represent formulae as graphs and use graph neural networks to produce an embedding vector. Their promise is to adapt feature extraction both to the data and the problem,
i.e., to produce an embedding that best fits the current learning task.
## 3 Embedding with Graph Neural Networks
A gnn takes a labelled graph as input. Each node has some inital embedding vector. The initial
embedding is refined in multiple iterations using a learnable updater model : the new embedding
is calculated from previous embeddings of its neighbourhood. Hence, it exploits the structure
of the graph, allowing information to propagate along the edges. We perform a fixed number
of update operations, called hops to obtain the final embedding of the nodes. Finally, some
_aggregation operation creates a single embedding of the graph from that of its nodes._
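As a rough illustration of the hop/update/aggregation loop described above, here is a minimal PyTorch sketch; the mean-of-neighbours updater and mean aggregation are illustrative choices, not necessarily the ones used in FLoP.

```python
import torch
import torch.nn as nn

class SimpleGNN(nn.Module):
    """Minimal message-passing sketch: refine node embeddings for a fixed number of
    hops, then aggregate them into a single graph embedding."""

    def __init__(self, dim: int, hops: int = 3):
        super().__init__()
        self.hops = hops
        # Learnable updater: combines a node's embedding with the mean of its neighbours.
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, node_emb: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # node_emb: (num_nodes, dim); adj: (num_nodes, num_nodes) 0/1 adjacency matrix.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        h = node_emb
        for _ in range(self.hops):
            neighbour_mean = adj @ h / deg
            h = torch.relu(self.update(torch.cat([h, neighbour_mean], dim=-1)))
        # Aggregation: mean over nodes gives the graph embedding.
        return h.mean(dim=0)

# Toy usage: a 4-node graph with random initial embeddings.
adj = torch.tensor([[0, 1, 0, 0], [1, 0, 1, 1], [0, 1, 0, 0], [0, 1, 0, 0]], dtype=torch.float)
emb = SimpleGNN(dim=8)(torch.randn(4, 8), adj)
print(emb.shape)  # torch.Size([8])
```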
Projects using gnns show a large variance with respect to how the input is turned into a graph.
We present our proposed approach by comparing it to two recent variants: [7] and [8, 6].
**NeuroSAT** [7] embeds propositional formulae in conjunctive normal form (CNF). The resulting graph has 2 kinds of nodes (clauses, literals) and 2 kinds of edges (from literals to their containing clauses, between negated literal pairs). Thanks to the small number of node/edge labels,
each kind of interaction is represented by separate neural networks in the update step.
**FormulaNet and Graph Embeddings for HOList** [8, 6] embed formulae of higher order
logic. The graph is the abstract syntax tree of the formula. The number of different symbols that
can occur in the input is not bounded, so a single update operation is performed on all nodes.
Node type information is preserved in a learnable initial embedding. Function application is
curried in the syntax tree, so each node has at most two children, i.e., we only need two types of
edges. Identical subexpressions are merged. A major complication that was not present in [7]
is the representation of variables. [8, 6] collapse all variables into a single "VAR" symbol.
**FLoP** In FLoP, we embed first order formulae in CNF. Our implementation is very similar
to that of [6]; we start from the syntax tree, with two differences: 1) The initial embedding
is a fixed random vector. 2) Variables are not collapsed into a single node. Rather, they are
wrapped into a "VAR" function and are normalised to ensure that they are renaming invariant.
This setup ensures that the formula is recoverable from the graph: the initial embedding vector
of the nth variable (according to a preorder traversal) is the same for all inputs.
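A minimal sketch of such renaming-invariant normalisation is shown below, using first-occurrence order in the clause string as a stand-in for the preorder traversal; the concrete encoding used in FLoP may differ.

```python
import re

def normalize_variables(clause: str) -> str:
    """Rename variables to VAR(0), VAR(1), ... in order of first occurrence,
    so that alpha-equivalent clauses map to the same string."""
    mapping = {}

    def rename(match):
        name = match.group(0)
        if name not in mapping:
            mapping[name] = f"VAR({len(mapping)})"
        return mapping[name]

    # In TPTP-style syntax, variables start with an uppercase letter.
    return re.sub(r"\b[A-Z]\w*", rename, clause)

print(normalize_variables("p(X, f(Y, X))"))  # p(VAR(0), f(VAR(1), VAR(0)))
print(normalize_variables("p(A, f(B, A))"))  # identical output: renaming invariant
```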
## 4 Graph Embedding in FLoP
FLoP is built on the leanCoP connection tableau calculus, so its current state is given by the set
of valid actions and the partial tableau tree, with the following main components: the current
goal, the branch leading to the current goal, remaining open goals and the currently applicable
lemmas. At each step, a policy network computes a probability distribution over the valid
-----
Update on FLoP Zombori, Csisz´arik, Michalewski, Kaliszyk, Urban
actions. Each component of the input is currently a hand crafted feature vector, which can be
easily replaced with the embedding network described above.
This is an ongoing effort and in our talk we will present first results using graph embeddings in
FLoP. As a proof of concept, we have done some supervised experiments that use our embedding
network: we collected theorem proving attempts from FLoP training and trained to predict if a
(state, action) pair can lead to success. We achieved 100% training accuracy on a training set
of 20000 entries. We are working to see how well it generalizes.
## References
[1] Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob
McGrew, Josh Tobin, Pieter Abbeel, and Wojciech Zaremba. Hindsight experience replay. CoRR,
abs/1707.01495, 2017.
[2] Karel Chvalovsky. Top-down neural model for formulae. In International Conference on Learning
_Representations, 2019._
[3] Karel Chvalovsk´y, Jan Jakubuv, Martin Suda, and Josef Urban. Enigma-ng: Efficient neural and
gradient-boosted inference guidance for e. In CADE, 2019.
[4] The MPTP Challenge. http://www.tptp.org/Seminars/MizarVerification/TheMPTPChallenge.html. Accessed: 2019-05-20.
[5] Miroslav Olˇs´ak, Cezary Kaliszyk, and Josef Urban. Property invariant embedding for automated
reasoning. CoRR, 2019.
[6] Aditya Paliwal, Sarah M. Loos, Markus N. Rabe, Kshitij Bansal, and Christian Szegedy. Graph
representations for higher-order logic and theorem proving. CoRR, abs/1905.10006, 2019.
[7] Daniel Selsam, Matthew Lamm, Benedikt B¨unz, Percy Liang, Leonardo de Moura, and David L.
Dill. Learning a SAT solver from single-bit supervision. CoRR, abs/1802.03685, 2018.
[8] Mingzhe Wang, Yihe Tang, Jian Wang, and Jia Deng. Premise selection for theorem proving by
deep graph embedding. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach,
Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, Advances in Neural Information
_Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9_
_December 2017, Long Beach, CA, USA, pages 2783–2793, 2017._
[9] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S. Yu. A
comprehensive survey on graph neural networks. arXiv preprint arXiv:1901.00596, 2019.
## Appendix A Project Plan: Bolzano-Weierstrass Theorem
The MPTP Challenge [4] consists of the Bolzano-Weierstrass theorem and its 252 auxiliary
lemmas, constituting a relatively small, consistent problem domain. One part of the challenge
is to prove the theorem and all lemmas from scratch using in each derivation only basic axioms,
hence forcing long proofs. In this setup, we believe that curriculum learning can be very useful
and we intend to try to tackle the challenge with FLoP.
-----
Update on FLoP Zombori, Csisz´arik, Michalewski, Kaliszyk, Urban
## Appendix B Project Plan: Backward Hindsight Experi- ence Replay
Hindsight Experience Replay (HER) [1] is a clever approach to alleviate reward sparsity problems in RL environments. Its core idea is to take an unsuccessful exploration trajectory, observe
state S that it reached (as opposed to target state T) and then replay the same trajectory, while
pretending that the target state is now S. During the replay, the agent is rewarded for reaching
the new target.
HER is directly applicable to a theorem proving environment that performs forward reasoning:
each theorem proving attempt derives some valid consequences of the axioms, even if not the target
conjecture, so it makes sense to assume the new target in the replay. However, it is not obvious
how to do it for backward reasoning.
The aim of our project is to redesign HER for the setting of a backward theorem prover.
**B.1** **Setup**
We want to train a backward theorem prover, i.e., one that starts from a target formula (goal)
and at each inference step reduces the current goal to a list of other goals. Once a goal is
identical to some axiom or previously known lemma, the goal is closed and we can proceed to
try to prove the remaining goals. The proof is complete when all goals have been closed.
**B.2** **Core Idea**
We use Hindsight Experience Replay to provide denser reward to the guidance model. Consider
a single theorem proving attempt. If all goals are closed, then we have obtained a proof of the
target and we can give positive reward to the policy. If there are some open goals, then we
can pretend that those goals were among the initial axioms and give positive reward in this
modified setting.
**B.3** **Components**
The system has four components:
1. Embedder e: takes a formula and maps it into a vector in R[n]. This is most likely a
graph neural network.
2. Aggregator c: takes a set of formula (axiom) embeddings e(a1), e(a2) . . . e(ak) and maps
it into a single aggregate embedding, which represents the conjunction of the formulae. This
could be a recurrent neural network, though some permutation invariant solution would
be best.
3. Policy p: takes a goal embedding and an aggregate axiom embedding and returns
an action probability distribution.
4. Value v: takes a goal embedding and an aggregate axiom embedding and returns
a scalar value of the goal, given the axioms.
-----
Update on FLoP Zombori, Csisz´arik, Michalewski, Kaliszyk, Urban
These components are trained together, end-to-end.
**B.4** **Training**
We iterate the steps below (a minimal code sketch of one iteration is given after the list):
1. Select a problem with goal g0 and axioms (lemmas) a1, a2, . . . ak
2. Compute the initial goal embedding eg0 = e(g0) and the aggregate axiom embedding ea = c(e(a1), e(a2), . . ., e(ak))
3. Try to prove the goal based on the current policy p(egi, ea)
4. Perform a gradient step based on the proving attempt for the value and policy, propagating the gradients all the way to the aggregator and embedder as well.
5. If the proof attempt failed, i.e., we are left with open goals og0, og1, . . . ogl, then
(a) Compute the new aggregate embedding êa = c(e(a1), e(a2), . . ., e(ak), e(og0), e(og1), . . ., e(ogl))
(b) Replay the same inference steps with the new aggregate axiom embedding in the policy p(egi, êa)
(c) The open goals (ogi) are now axioms, so the proof is complete, hence we give positive
reward and perform a gradient step.
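A minimal sketch of one iteration of this loop is given below; the `embed`, `aggregate`, `policy`, `attempt_proof`, and `update` callables are placeholders for the components of Section B.3, not FLoP's actual implementation.

```python
def her_iteration(goal, axioms, embed, aggregate, policy, attempt_proof, update):
    """One backward-HER iteration: attempt a proof; on failure, replay the same
    trajectory while pretending the remaining open goals were axioms."""
    axiom_embs = [embed(a) for a in axioms]
    e_goal = embed(goal)
    e_axioms = aggregate(axiom_embs)

    # Steps 3-4: run the policy, then update value/policy (and, end-to-end,
    # the aggregator and embedder) based on this attempt.
    trajectory, open_goals = attempt_proof(e_goal, e_axioms, policy)
    solved = len(open_goals) == 0
    update(trajectory, e_axioms, reward=1.0 if solved else 0.0)

    # Step 5: hindsight replay with the open goals added to the "axioms",
    # which turns the failed attempt into a completed proof.
    if not solved:
        e_axioms_hat = aggregate(axiom_embs + [embed(g) for g in open_goals])
        update(trajectory, e_axioms_hat, reward=1.0)
```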
**B.5** **Benefits**
_• We provide positive reward for every single theorem proving attempt._
_• The policy receives a representation of the axiom set (knowledge base) and can make_
more informed decisions.
_• This works with any RL algorithm. There is no need of a DAGGER like setup, with_
separate phases of data collection and supervised learning.
**B.6** **Difficulties**
_• Building a meaningful aggregate embedding of the available knowledge base (all axioms_
and lemmas) might be hard and might be very slow. Some ideas to address this:
**– Use premise selection to restrict the aggregator to a handful of lemmas.**
**– Precompute the aggregate embedding of all the lemmas and only ”incorporate” the**
embeddings of the axioms to the aggregate lemma embedding for each problem
_• Different open goals might be related due to sharing some variables. When we add the_
open goals as new axioms in HER, we have to make sure that the axioms are consistent.
E.g. when we have two open goals f (X) and ¬f (X), there is no way to add axioms that
satisfy both.
-----
| [
"Cezary, Kaliszyk",
"Zsolt, Zombori",
"Adrian, Csiszarik",
"Josef, Urban",
"Henryk, Michalewski"
] | 2020-01-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
Verified Multi-Step Synthesis using Large Language Models and Monte Carlo Tree Search | We present an approach using Monte Carlo Tree Search (MCTS) to guide Large Language Models (LLMs) to generate verified programs in Dafny, Lean and Coq. Our method, which we call VMCTS, leverages the verifier inside the search algorithm by checking partial programs at each step. In combination with the LLM prior, the verifier feedback raises the synthesis capabilities of open source models. On a set of five verified programming problems, we find that in four problems where the base model cannot solve the question even when re-sampling solutions for one hour, VMCTS can solve the problems within 6 minutes. The base model with VMCTS is even competitive with ChatGPT4 augmented with plugins and multiple re-tries on these problems. Our code and benchmarks are available at https://github.com/namin/llm-verified-with-monte-carlo-tree-search . | null | ## Verified Multi-Step Synthesis using Large Language Models and Monte Carlo Tree Search
**David Brandfonbrener** [1 2] **Sibi Raja** [1] **Tarun Prasad** [1] **Chloe Loughridge** [1] **Federico Cassano** [3] **Jianang Yang** [4]
**Simon Henniger** [5] **William E. Byrd** [6] **Robert Zinkov** [7] **Nada Amin** [1 8]
**Abstract**

We present an approach using Monte Carlo Tree Search (MCTS) to guide Large Language Models (LLMs) to generate verified programs in Dafny, Lean and Coq. Our method, which we call VMCTS, leverages the verifier inside the search algorithm by checking partial programs at each step. In combination with the LLM prior, the verifier feedback raises the synthesis capabilities of open source models. On a set of five verified programming problems, we find that in four problems where the base model cannot solve the question even when re-sampling solutions for one hour, VMCTS can solve the problems within 6 minutes. The base model with VMCTS is even competitive with ChatGPT4 augmented with plugins and multiple re-tries on these problems. Our code and benchmarks are available at https://github.com/namin/llm-verified-with-monte-carlo-tree-search.

**1. Introduction**

Large Language Models (LLMs) are increasingly used for generating code, but the code needs to be inspected and possibly re-generated if it doesn’t satisfy the user. We propose to partially shift the burden of checking code, from the user to the LLM, by generating code in a verification-aware programming language like Dafny, Coq, or Lean, prompting for specifications and proofs of correctness in addition to code. Then, the user can focus their attention on the specifications, and less on the code and proofs with the assurance that the generated output has passed the verifier. Our approach couples reasoning from an LLM and reasoning from a program verifier. The LLM contributes fruitful suggestions and the verifier ensures soundness.

As a motivating example, consider this prompt: In Dafny, _write an ADT for arithmetic expressions comprising constants, variables, and binary additions. Then write an evaluator taking an expression and an environment (a function that takes a variable name and returns a number) and returning the number resulting from evaluation. Then write an optimizer taking an expression and returning an expression with all additions by 0 removed. Then prove that the optimizer preserves the semantics as defined by the evaluation function._

There are many types of errors that a program synthesizer could make, including issues of syntax (e.g., how to define a function in Dafny), totality (e.g., handling unbound variables), semantics (e.g., is the optimizer invariant in the values of variables), and optimality (e.g., is the optimizer removing all additions by 0).

When giving this prompt to ChatGPT4, it almost always makes mistakes. When giving this prompt to a GPT augmented with a Dafny checker and some instructions about Dafny, ChatGPT4 makes an average of five calls to the checker but eventually runs out of steam with a system error. When giving the prompt sentence by sentence, it is able to correct mistakes in each step, and most of the time, it solves the problem with around seven calls to the checker.

In this paper, we show that an open model such as Phind-CodeLlama-34B-v2 (Phind, 2023; Roziere et al., 2023), which uses fewer parameters than ChatGPT4, is able to solve such problems by using a variant of Monte Carlo Tree Search (MCTS) to explore the space of partial programs.

Our main contribution (Section 3) is to define a search algorithm inspired by MCTS that leverages a verifier and LLM to search for verified programs. We call this method VMCTS. At a high level, our algorithm uses the LLM as a prior to expand a node and the verifier as a heuristic to evaluate the value of a node. Instantiating this idea in a practical way with a very large action space requires modifying standard MCTS to carefully combine evaluation and expansion (Algorithm 1 in Section 3).

1Harvard University, Cambridge, MA, USA 2The Kempner Institute for the Study of Natural and Artificial Intelligence, Cambridge, MA, USA 3Northeastern University, Boston, MA, USA [4]Million.js, San Francisco, CA, USA [5]TU München, Munich, Germany [6]University of Alabama at Birmingham, AL, USA [7]University of Oxford, Oxford, UK [8]Harvard John A. Paulson School of Engineering and Applied Sciences, Cambridge, MA, USA. Correspondence to: Nada Amin <[email protected]>.
-----
**Verified Multi-Step Synthesis using LLMs and MCTS**
_Figure 1. Overview of VMCTS. To start, we select a node (in blue) following a standard MCTS selection criteria (a). Next, we use the_
LLM to generate a code snippet (in green) that extends the selected node (b). Here, the LLM-generated code is checked using the verifier,
and a score is assigned to evaluate the quality of the generated code – also checking if a global solution has been found. We re-query the
LLM until we assign a score that is not None. If the verifier returns a score of +1, the LLM-generated code and the original node are
added as child nodes to the selected node in the search tree (c). If the verifier returns a score of -1, we assign that value to the selected
node in blue but do not add any new nodes to the tree. We finally backpropagate the value as in standard MCTS (d). The same process is
continued until a global solution is reached.
Refer to Figure 1 for an overview of our system.
We define variations on the baseline VMCTS algorithm,
including one that ensures a diverse selection of completions (Section 4.1), one that shares verifier feedback with
the LLM in context (Section 4.2) and one that learns from
verifier feedback through Direct Preference Optimization
(DPO (Rafailov et al., 2023)) fine-tuning (Section 4.3).
We describe our results (Section 5) for our baseline VMCTS
algorithms and for the variations: our main finding is that
our baseline VMCTS algorithm offers a substantial improvement over whole program sampling from the base model
and is competitive with ChatGPT4 on a small benchmark
of 13 verified programming problems. On 6 of the problems, whole program sampling from the base model does
not succeed at all within 100 trials and at least 60 minutes
of wall clock time. On 4 out of these 6, VMCTS succeeds
within 6 minutes at least one out of 10 times. VMCTS finds
a solution in less than 10 minutes at least 50% of the time
on 8 of the 13 problems and at least 10% of the time on
11 out of the 13. Overall, both our VMCTS algorithm and
ChatGPT4 with steps and plugins succeed in a similar number of times. For ChatGPT4, steps and plugins move the
success rate from 3/12 (without steps, without plugins) to
10/12 (with steps, with plugins).
**2. Related Work**
**Neural Program Synthesis with Large Language Models**
Austin et al. (2021) and Chen et al. (2021) demonstrated
that Large Language Models (LLMs) can generate correct
Python programs from natural language descriptions. These
studies introduced the MBPP and HumanEval datasets, respectively, which are widely used for evaluating LLMs in
program synthesis tasks. Cassano et al. (2023a) extended
this concept by showing that LLMs can also generate programs in over 20 languages other than Python. This was
achieved by translating the MBPP and HumanEval datasets
using their system, MultiPL-E. Their findings indicate that
generating accurate programs in lower resource languages
is more challenging compared to higher resource languages,
such as Python. In our experiments, for proof synthesis,
we have another dimension of challenge: some languages
(Lean, Coq) are inherently more challenging than others
(Dafny), depending on how much automation the verifiers
provide. Li et al. (2023) trained LLMs of varying sizes
on permissively-licensed code from GitHub, encompassing over 80 languages, showing enhanced performance in
program synthesis tasks for a wider range of languages.
However, none of these works explored the generation of
programs that are correct by construction.
**Theorem Proving with Large Language Models** Han
et al. (2022) demonstrated that LLMs can be trained to
generate proofs in Lean through self-supervision. Yang
et al. (2023) showed that Retrieval-Augmented Generation (RAG) (Glass et al., 2022) models significantly enhance LLMs’ performance in theorem proving tasks. First
et al. (2023) employed a methodology akin to that of Han
et al. (2022) to generate and repair complete proofs in Isabelle/HOL. Jiang et al. (2023) introduced methods to first
map natural language proofs to formal proof sketches in
Isabelle and then fill in the gaps using an automated prover.
-----
**Verified Multi-Step Synthesis using LLMs and MCTS**
**3. Method: VMCTS**
Our main contribution is to define a search algorithm inspired by MCTS that leverages a verifier and LLM to search
for verified programs. We call this method VMCTS. In this
section, we first present the MDP that we consider as the
environment for verified program synthesis and then present
VMCTS in detail. VMCTS is a variant of traditional MCTS
that incorporates the LLM as a prior to generate candidates
and the verifier as a heuristic to evaluate partial programs.
**3.1. MDP for verified program synthesis**
We formulate our multi-step verified synthesis problem as
a Markov Decision Process (MDP) M := (S, A, T, r) defined by the LLM and the verifier. Here, S refers to the state
space, A refers to the action space, T : S × A →S refers to
the (deterministic) transition dynamics of the environment,
and r : S → R refers to the reward function. Defining
the MDP just consists of defining these four objects. The
state, action, transition dynamics, and reward are defined as
follows:
- Each state s ∈S is a string consisting of the initial
user prompt and a partial program.
- Actions a ∈A are strings of program chunks.
- The transition dynamics are just defined by string concatenation: T(s, a) = s + a.
- The reward function r is defined by the verifier for
a given verified programming language and is only
defined on complete programs. This terminal reward
is 1 if the complete program is accepted and -1 if it is
rejected. The reward is 0 for incomplete programs.
With this simple MDP in place, we can define our search
algorithm for finding verified programs.
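A minimal sketch of this MDP, with placeholder `verify` and `is_complete` callables standing in for the language-specific verifier, might look as follows.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class VerifiedSynthesisMDP:
    """States are prompt + partial-program strings; actions are program chunks."""
    verify: Callable[[str], bool]        # accepts or rejects a complete program
    is_complete: Callable[[str], bool]   # detects whether a program is complete

    def transition(self, state: str, action: str) -> str:
        # Deterministic dynamics: string concatenation.
        return state + action

    def reward(self, state: str) -> int:
        if not self.is_complete(state):
            return 0                     # incomplete programs receive no reward
        return 1 if self.verify(state) else -1
```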
**3.2. VMCTS**
Given this MDP with finite actions and deterministic dynamics, it would be possible to run standard MCTS to learn
a stochastic policy, but the action space is much too large
for this to be practical. Instead, we build a search algorithm
inspired by MCTS that can leverage the LLM as a prior
for program synthesis and the verifier to evaluate partial
programs. Both components are key for a successful search
in this large space.
Standard MCTS consists of four steps: select, expand, evaluate, and backpropagate. Our algorithm leaves the select and
backpropagate steps essentially unchanged. We modify and
combine the expand and evaluate steps to leverage the power
of the LLM and the verifier in tandem. Our algorithm is
illustrated in Figure 1 and defined formally in Algorithm 1.
These studies predominantly used LLMs to iteratively generate individual proof steps, which were then verified using
a theorem prover. Thakur et al. (2023) propose a language-agent approach to formal theorem-proving, alternating selection and execution steps.
**Symbolic Algorithms for Neural Program Synthesis**
Grand et al. (2023) integrated a classic symbolic top-down
synthesis algorithm for library learning (Bowers et al., 2023)
with LLMs. Cassano et al. (2023b) employed program decomposition and a bottom-up tree-search algorithm to infer
missing TypeScript types. Zhou et al. (2023) used Monte
Carlo Tree Search (MCTS) to create single-function programs in Python. As of this writing, their method is the leading approach for the HumanEval benchmark (Chen et al.,
2021). Zhang et al. (2023a) applied a tree-based planning
algorithm for decoding LLM token sequences, which were
then evaluated for correctness using a test suite. Lample
et al. (2022) adapted MCTS for neural theorem proving by
employing a tree-based search algorithm to generate proof
trees in Lean. These studies collectively illustrate that incorporating symbolic algorithms significantly enhances the
performance of LLMs in various code generation tasks. In
our work, we introduce a novel symbolic algorithm that
combines multi-step synthesis with MCTS. This algorithm
is used to generate correct programs and proofs in Dafny,
Coq, and Lean.
**Scoring Partial Programs** Desai et al. (2016), one of the
first to effectively tackle the problem of program synthesis using natural language, used a scoring function to rank
candidate partial programs that a user could select from.
Cassano et al. (2023b) similarly used a scoring function
to rank candidate partial programs based on their types in
order to aid the tree search process, and provided multiple
solutions to the user ranked by their score. Ye et al. (2021)
used abstract interpretation to rule out partial programs that
do not satisfy some constraints, typically on input/output
examples. Chen et al. (2022) used LLM-generated unit
tests suites and their pass rates to score candidate programs,
and provided the user with the top-scoring program. Ni
et al. (2023) further utilized execution information to rank
candidate programs. Shirafuji et al. (2023) used a scoring
function to rank example refactoring programs generated
by an LLM before applying them to the given code. Zhang
et al. (2023b) studies using scoring functions to rank candidate partial programs in-depth, and proposes the use of
a reviewer model to score candidate programs based on
how closely they match the given instruction. Most of these
works have scored partial programs specified as grammatical programs with holes as opposed to our left-to-right
generation of partial programs.
-----
**Verified Multi-Step Synthesis using LLMs and MCTS**
**Algorithm 1 Evaluate and (maybe) expand**
**Input: leaf node string s, LLM, verifier, depth limit L**
**Output: value v(s), (optional) list of child nodes**
# Evaluate
checkable = False
depth ← 0
_a ←_ ""
**while not checkable and depth <** L do
_a ←_ _a + LLM(s + a)_
checkable ← (Verifier (s + a) in {−1, +1})
depth ← depth + 1
**end while**
**if Verifier (s + a) = −1 or depth = L then**
# Do not expand
**return −1, None**
**else**
# Expand w/ backtracking
**return +1, [s + a, s]**
**end if**
children of s (see Algorithm 1 and Figure 1). If the
backtracking child node is itself expanded later, we can
get a different completion a′ of s.
Verification is an intuitive way to leverage the verifier for
search. We should never add a program to the search tree
that is verified to fail. Backtracking is slightly less intuitive,
but it provides us with a way to enforce a bounded branching factor in our tree while still allowing for an effectively
infinite action space, allowing for arbitrary completions.
Later on, we will explain how we set the priors of the UCT
selection policy to bias our search towards non-backtracking
nodes to encourage a deeper search.
**Evaluation: verifier as value function.** We want to leverage the verifier to evaluate partial programs without having
to simulate full trajectories, which would require further
calls to the LLM that are usually much more expensive than
the verifier. While every node added to the tree has been accepted by the verifier, the goal of evaluation is to determine
how promising the descendants of that node will be. We can
do this incrementally, asking the LLM for one more step
from the child node and calling the verifier on the extended
partial program. This gives us a measure of how promising
a node that leverages the verifier is and avoids needing to
generate full programs.
Explicitly, from a node containing the string s we first sample one step a from the LLM and then evaluate the concatenation s + a and assign the resulting value to s. That is, we set the estimated value v(s) to be equal to the score Verifier(s + a), defined as follows:
$$\text{Verifier}(s + a) = \begin{cases} +1 & \text{verified, but may be incomplete.} \\ 0 & \text{unable to verify yet.} \\ -1 & \text{verified as a failure.} \end{cases} \qquad (1)$$
Appendix B gives examples of scoring partial programs.
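A sketch of this three-valued score is shown below; the `run_verifier` callable is a placeholder for invoking the actual Dafny, Coq, or Lean checker and returning True, False, or None when the partial program cannot be checked yet.

```python
def verifier_score(partial_program: str, run_verifier) -> int:
    """Three-valued score for a partial program: +1 verified (possibly incomplete),
    0 not yet checkable, -1 verified as a failure."""
    result = run_verifier(partial_program)  # placeholder for the real verifier call
    if result is None:                      # e.g. text ends mid-unit and cannot be checked yet
        return 0
    return 1 if result else -1
```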
**Combining expansion and evaluation.** Applying the
above expansion and evaluation strategies requires merging
the two steps. The expansion explicitly depends on the intermediate computation of the evaluation step. Essentially,
once we select a leaf node instead of expanding, we call
a function to evaluate the node and maybe expand it. The
expansion only occurs if, during the process of evaluation,
we encounter an action a that has value +1 according to the
verifier. This combined algorithm to “evaluate and (maybe)
expand” is formalized in Algorithm 1.
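A Python sketch of Algorithm 1, reusing the `verifier_score` helper sketched above together with a placeholder `llm_step` function that returns one more unit of code, could look like this; it is an illustration of the pseudocode, not the reference implementation.

```python
def evaluate_and_maybe_expand(state: str, llm_step, verifier_score, run_verifier,
                              depth_limit: int = 5):
    """Returns (value, children): value is -1 or +1; children is None (no expansion)
    or [state + completion, state] (expansion plus a backtracking copy of state)."""
    completion, depth, score = "", 0, 0
    while score == 0 and depth < depth_limit:
        completion += llm_step(state + completion)                # one more unit of code
        score = verifier_score(state + completion, run_verifier)  # -1, 0, or +1
        depth += 1
    if score == -1 or depth == depth_limit:
        return -1, None                                # evaluate only, do not expand
    return +1, [state + completion, state]             # expand with a backtracking child
```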
**Selection: priors and UCT.** We use a standard MCTS
selection step, but we set a prior for the UCT bonus as in
PUCT (Rosin, 2011; Silver et al., 2016). We choose to set
the prior p = 1.0 for action 0 and p = 0.2 for action 1.
Note that our approach contrasts with prior work on MCTS.
Traditional approaches use simulating rollouts for the evaluation step (Chaslot et al., 2008), but in our case simulating
rollouts from the LLM is more costly and less reliable than
calling the verifier. More recent variants of MCTS amortize
the cost of rollouts by learning a value function (Silver et al.,
2016), but this is still an expensive process and does not
leverage the power of the verifier.
**Expansion: verification and backtracking.** The naive
approach to expansion using the LLM would be to take k
samples of one “step” of program synthesis from the LLM
conditioned on the current partial program and add them to
the search tree. For our verified programming languages,
there is a natural definition of a “step” that is a verifiable unit
of code (which is language-specific). In Dafny and Lean,
we consider each line a step. In Coq, we consider each
“command” (ending with a dot ‘.’) a step. We configure the
tokenizer to deal with these language-specific delimiters.
We start from this idea, but refine it in two key ways:
1. Verification. We only add verified partial programs to
the search tree, i.e. partial programs containing a full
unit of code that verifies, but may not be a complete
solution. To do this, we continue to generate steps from
the LLM until either the verifier accepts and we can
add the action to the tree or the verifier rejects and we
do not expand the tree.
2. Backtracking. When we find a unit of code a that
verifies from a node s and want to add it to the tree,
we also add a “backtracking” node that is a copy of
_s, i.e. we add s + a and a copy of s itself as the two_
-----
**Verified Multi-Step Synthesis using LLMs and MCTS**
This basic heuristic gives the model a prior for exploring the
earlier expansions that were sampled from the LLM. With
this choice, the selection rule at each node is:
**4.2. In-Context Learning from Verifier Feedback**
In the basic setup, the verifier runs at each step, but the
verifier feedback (outside of the score) is discarded even
though it could guide the LLM.
We have experimented with turning rejections into feedback
from the verifier, essentially displaying the offending code
and error message both as comments in a new child node.
In this case, the new child node is scored like a +1 by the
verifier and handled as such.
A variant of verifier feedback is to list the errors globally in
the instructions instead of locally as they appear.
A further variant (Reflect) is to take a clue from the Reflexion work (Shinn et al., 2023) and ask an LLM to verbalize a
verification error into a positive statement of what to try next.
We use a reflection prompt adapted from the programming
experiment in previous work (Zhou et al., 2023).
4.2.1. CONTEXTUAL INFORMATION AND LEMMA
EXTRACTION
We can provide contextual information on what the current
goal is to prove and the hypotheses available. In addition,
the LLM can search for lemmas using the standard Search
tactic of Coq.
We have experimented with recursively extracting lemmas
at failure points (Lemma). The goal of this experiment
is to test if the LLM performs better when proving easierto-digest lemmas whose proofs may only be a few steps
long than when attempting to prove the entire theorem. We
also exploit CoqHammer (Czajka & Kaliszyk, 2018) so
that the lemmas extracted can potentially be discharged
automatically with the hammer tactic, though we leave it
up to the LLM to suggest it.
During the search for a proof of the main theorem, if an
error is encountered at some proof step, we isolate the proof
state just before the error (i.e. the available hypotheses and
the intermediary goal) as a named lemma and complete the
proof of the goal by simply using the apply tactic with
this lemma. We then attempt to use VMCTS to prove this
lemma, possibly extracting further lemmas along the way.
This approach uses ideas similar to those used in Baldur (First et al., 2023) and Draft, Sketch, and Prove
(DSP (Jiang et al., 2023)). Unlike Baldur, however, we
attempt to isolate the remainder of the current goal as its
own lemma when an error is encountered as opposed to
reattempting a proof of the entire theorem. Unlike DSP,
we extract lemmas (or sub-problems) only when an error is
encountered instead of generating a sketch of sub-problems
at the outset.
We need a heuristic to decide whether a lemma is judicious.
$$\frac{1}{N}\sum_{i=1}^{N} v_i \;+\; p \cdot c_{\mathrm{UCT}} \sqrt{\frac{\log N_{\mathrm{parent}}}{N}} \qquad (2)$$
where p is the prior at this node, cUCT is the exploration
coefficient, Nparent is the number of visits at the parent node,
_N is the number of visits at this node, vi is the estimated_
value at the ith visit to the node.
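A sketch of this selection rule over a node's children is given below; the dictionary fields are hypothetical names rather than those of the released code, and unvisited children are assumed to be handled separately.

```python
import math

def uct_score(values, prior, parent_visits, c_uct=1.0):
    """Mean value plus a prior-weighted UCT exploration bonus (cf. equation (2))."""
    n = len(values)
    mean = sum(values) / n
    bonus = prior * c_uct * math.sqrt(math.log(parent_visits) / n)
    return mean + bonus

def select_child(children):
    """children: list of dicts with 'values' (list of floats) and 'prior' (float).
    The first (non-backtracking) child gets prior 1.0, the backtracking copy 0.2."""
    parent_visits = sum(len(c["values"]) for c in children)  # proxy for the parent's count
    return max(children, key=lambda c: uct_score(c["values"], c["prior"], parent_visits))

best = select_child([{"values": [1.0, -1.0], "prior": 1.0},
                     {"values": [1.0], "prior": 0.2}])
print(best["prior"])
```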
**4. Variations**
In the previous section, we presented our base method VMCTS which is our main contribution. In this section we
present a variety of variants or extensions of VMCTS that
attempt to resolve some potential weaknesses of the method.
We present the results of each of these variants (identified
in this section in bold) in the next section after the results
for VMCTS.
**4.1. Ensuring Diverse Selection**
One shortcoming of adding backtracking nodes to the tree is
that it is possible that we attempt to expand the backtracking
child node (which is a copy of its parent) with a very similar
completion to the one used to expand the parent. Formally,
since s is a child of itself, if s + a is the sibling of the
backtracking node s, then we may sample an a′ that is very
similar to a to expand the backtracking child. This limits
the diversity of the programs considered in our search, but
not in a way that can easily be handled with something like
string matching. To partially resolve this, we can leverage
the LLM embeddings to select diverse programs to add to
the tree. We call this variant Diversity.
To do this, we maintain a dataset D of all the programs s
that have been added to the search tree so far. Let the LLM
embedding function be denoted by ϕ(s) ∈ R^d. Then each
time we sample from the LLM at a particular leaf state ¯s,
instead of generating just one continuation, we generate
_a1, . . ., ak and select the one that maximizes the minimum_
distance from the existing tree:
_d(ai, ¯s, D) := min_ _s + ai)_ 2 (3)
_s_ _D_ _∥_
_∈_ _[∥][ϕ][(][s][)][ −]_ _[ϕ][(¯]_
This method helps to ensure that diverse child nodes are
added to the tree. However, if the generation itself is not
diverse, so that a1, . . ., ak are very similar, then this procedure cannot resolve the issue. One direction for future work
would be to create more diverse generations by modifying
the LLM or the sampling procedure more directly.
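A sketch of the diversity criterion in equation (3) is shown below, with a placeholder `embed` function standing in for the LLM embedding ϕ.

```python
import numpy as np

def most_diverse_completion(state, candidates, tree_programs, embed):
    """Pick the candidate a_i maximizing the minimum embedding distance between
    state + a_i and the programs already added to the search tree (cf. equation (3))."""
    tree_embs = [embed(s) for s in tree_programs]
    if not tree_embs:                      # empty tree: any candidate will do
        return candidates[0]

    def min_distance(candidate):
        e = embed(state + candidate)
        return min(float(np.linalg.norm(t - e)) for t in tree_embs)

    return max(candidates, key=min_distance)
```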
-----
**Verified Multi-Step Synthesis using LLMs and MCTS**
The development might be stuck in a bad spot, and extracting a lemma that is false or impossible to prove will just
waste time.
**4.3. Fine-Tuned Learning from Verifier Feedback**
Finally, we consider a variant where we fine-tune the
weights of the LLM. Instead of reinforcement learning
from human feedback, we perform reinforcement learning
from verifier feedback with Direct Preference Optimization
(DPO) (Rafailov et al., 2023), using the TRL library (von
Werra et al., 2020).
DPO training expects triples, each of which has a “prompt”,
a “chosen” completion and a “rejected” completion. We
can generate these triples automatically from the VMCTS
process. During the VMCTS process, we monitor failures
and successes. When the verifier scores a partial program
negatively, we add the prompt and its completion to the list
of failures. When we get a final solution from the VMCTS
process, we consider each partial program along the successful path, adding each prompt and its completion to the
list of successes. Then, we match failures and successes by
looking for common prompts, creating DPO triples.
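A sketch of how such preference triples could be assembled from the logged failures and successes is shown below; the bookkeeping is illustrative, and the exact data format expected by TRL may differ.

```python
def build_dpo_triples(successes, failures):
    """successes/failures: lists of (prompt, completion) pairs logged during VMCTS.
    Match them on a common prompt to form (prompt, chosen, rejected) triples."""
    rejected_by_prompt = {}
    for prompt, completion in failures:
        rejected_by_prompt.setdefault(prompt, []).append(completion)

    triples = []
    for prompt, chosen in successes:
        for rejected in rejected_by_prompt.get(prompt, []):
            triples.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return triples
```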
We also experiment with using DPO off-policy where we
mix in manually crafted triples with automatically generated
triples from VMCTS.
**5. Results**
Our experimental setup is described in Appendix E.
**5.1. Criteria for Success**
We define success as passing the verifier and passing some
syntactic checks (e.g. the presence of a proof marker and a
problem-specific minimum number of lines of code). So it
is still possible that the LLM mis-specifies the problem, but
manual inspection shows that this is rare. We also sanity-checked some generations using a method for checking
in steps (Appendix C) which makes stronger guarantees
at the expense of language-specific prompts that partially
specify the program to corroborate. Essentially, whenever
the model writes a full program and proof, it seems to be
a genuine attempt to solve the full problem. Developing a
more rigorous benchmark for verified programming is an
important direction for future work.
**5.2. Problems**
We create a small dataset of challenging benchmark problems to test our base algorithm and variants. Our benchmark
problems represent meaningful scenarios in verified programming. They require creating Algebraic Data Types
(ADTs), defining functions on them using pattern matching,
and proving properties using induction. Compared to prior
benchmarks, the problems require more intricate multi-step
reasoning and test capabilities that are specifically important for verified programming. The problems are defined as
follows:
**Factorial asks to define the factorial function and to prove**
that it is always strictly positive.
**Eval/Opt0 asks to define an ADT for arithmetic expres-**
sions, an evaluator, and an optimizer, and to prove that
the optimizer preserves the semantics.
**Opt0 Opt asks to define an ADT for arithmetic expres-**
sions, an optimizer, an optimal predicate, and to prove
that the optimizer is optimal.
**BST asks to define a tree, the binary search tree (BST) prop-**
erty, insertion, and to prove two properties of insertions
(membership and BST preservation).
**Repeat asks to define a function returning a list with a**
given element repeated a given number of times, and to
prove two properties related to length and membership.
We also consider each problem across multiple languages
and variants of the problem that add hints to the prompt. In
total, this gives us 13 prompts to test our algorithm. Detailed
prompts for our benchmark problems are in Appendix A.
**5.3. VMCTS vs Sampling of Whole Programs vs ChatGPT4**
We show the results of our baseline VMCTS algorithm in
experiments using an off-the-shelf LLM in Figure 2. Whole
program sampling does not outperform our VMCTS algorithm in terms of successful verification percentage in any
of the 13 prompts tested. In four cases, whole program
sampling finds no solution within 100 trials, while our VMCTS algorithm finds at least 1 solution within 10 trials. In
the other cases, where whole program sampling sometimes
finds a solution, our VMCTS algorithm is more reliable.
As seen in Figure 2, our VMCTS algorithm finds at least
one solution under 6 minutes in 11 out of 13 prompts. While
running until 10 minutes does not enable it to solve additional problems, VMCTS increases the success ratio for 6
out of 13 prompts, resulting in solutions found for at least
half of the trials for 8 out of 13 prompts. And while VMCTS
may make many calls to the LLM, they are short calls (for
a single line or command), and the overall running time is
still efficient.
In comparison, ChatGPT4 generates a whole program solution that successfully passes the verifier in 3 out of 13
prompts without a code plugin, in 6 out of the 13 with a
code plugin, and in 10 out of the 13 with a code plugin when
| prompt | VMCTS best | VMCTS <6m (%) | VMCTS <10m (%) | Sampling total | Sampling success | Sampling exp. time | ChatGPT4 whole | ChatGPT4 plug. whole | ChatGPT4 plug. steps |
|---|---|---|---|---|---|---|---|---|---|
| Factorial (Coq) | - | 0/10 | 0/10 | 39m30s | 0/100 | - | ✗ | ✗ | ✓ |
| " [Hints] | 22s | 10/10 | 10/10 | 4m24s | 1/10 | 4m24s | ✗ | ✗ | ✗ |
| " [lia] | 25s | 9/10 | 9/10 | 5m46s | 1/10 | 5m46s | ✓ | ✓ | ✓ |
| Factorial (Dafny) | 38s | 10/10 | 10/10 | 5m20s | 3/10 | 106s | ✓ | ✓ | ✓ |
| Factorial (Lean) | 69s | 4/10 | 5/10 | 61m44s | 0/100 | - | ✗ | - | - |
| Eval/Opt0 (Coq) | - | 0/10 | 0/10 | 83m36s | 0/100 | - | ✗ | ✗ | ✗ |
| " [Hints] | 86s | 4/10 | 5/10 | 59m12s | 1/100 | 59m12s | ✗ | ✓ | ✓ |
| Eval/Opt0 (Dafny) | 89s | 4/10 | 5/10 | 108m8s | 0/100 | - | ✗ | ✗ | ✓ |
| " [Hints] | 78s | 9/10 | 10/10 | 62m54s | 9/100 | 419s | ✗ | ✓ | ✓ |
| Opt0 Opt (Dafny) | 98s | 1/10 | 1/10 | 79m2s | 0/100 | - | ✓ | ✓ | ✓ |
| BST (Dafny) | 234s | 4/10 | 4/10 | 140m13s | 2/100 | 70m6s | ✗ | ✗ | ✓ |
| Repeat (Coq) | 99s | 7/10 | 8/10 | 67m52s | 0/100 | - | ✗ | ✓ | ✓ |
| Repeat (Dafny) | 104s | 3/10 | 4/10 | 59m30s | 1/100 | 59m30s | ✗ | ✗ | ✓ |
_Figure 2. Benchmark results running Phind-CodeLlama-34B-v2 with VMCTS vs with whole program sampling. The prompts are given in_
Appendix A, and sometimes include [Hints] and specific imports such as [lia] for Coq. For VMCTS, we perform 10 trials, showing the
best time, the ratio that finishes within 6 minutes, and the ratio that finishes within 10 minutes. For Sampling, we generate the whole
program in a single sample. We perform 10 or 100 trials, showing the total time, the ratio that successfully passes the verifier, and the
expected time to get a successful whole sample. Entries with - indicate a lack of success after our designated number of trials. We also
show whether ChatGPT4 can solve the problem without a code plugin (whole), with a code plugin (plug. whole), and with a code plugin
with the problem given in steps (plug. steps). For ChatGPT, we do retry a handful of times on system errors.
| prompt | VMCTS best | <6m (%) | <10m (%) |
|---|---|---|---|
| Eval/Opt0 (Coq) | - | 0/10 | 0/10 |
| " [Hints] | 86s | 4/10 | 5/10 |
| " [Hints Diversity] | 228s | 4/10 | 6/10 |
| Eval/Opt0 (Dafny) | 89s | 4/10 | 5/10 |
| " [Diversity] | 339s | 2/10 | 3/10 |
| Opt0 Opt (Dafny) | 98s | 1/10 | 1/10 |
| " [Diversity] | 226s | 3/10 | 4/10 |
| BST (Dafny) | 234s | 4/10 | 4/10 |
| " [Diversity] | 398s | 0/10 | 4/10 |

_Figure 3. Benchmark results for the diversity variant (denoted by rows with " [Diversity])._
| prompt | VMCTS best | <6m (%) | <10m (%) |
|---|---|---|---|
| Factorial (Coq) | - | 0/10 | 0/10 |
| " [lia] | 25s | 9/10 | 9/10 |
| " [Lemma lia] | 22s | 10/10 | 10/10 |
| Eval/Opt0 (Coq) | - | 0/10 | 0/10 |
| " [Hints] | 86s | 4/10 | 5/10 |
| " [Hints Reflect] | 111s | 4/10 | 6/10 |
| " [Lemma Hammer] | 74s | 9/10 | 9/10 |
| Repeat (Coq) | 99s | 7/10 | 8/10 |
| " [Lemma Hammer] | 46s | 8/10 | 8/10 |

_Figure 4. Benchmark results for in-context learning, including reflection and lemma extraction._

| prompt | VMCTS best | <6m (%) | <10m (%) |
|---|---|---|---|
| Eval/Opt0 (Dafny) | 89s | 4/10 | 5/10 |
| " [DPO (pm)] | 84s | 10/10 | 10/10 |
| " [DPO (pm,opt0)] | 86s | 10/10 | 10/10 |
| " [DPO (pm) Diversity] | 185s | 6/10 | 9/10 |
| Opt0 Opt (Dafny) | 98s | 1/10 | 1/10 |
| " [DPO (pm)] | - | 0/10 | 0/10 |
| " [DPO (pm,opt0)] | 82s | 10/10 | 10/10 |
| " [DPO (pm) Diversity] | - | 0/10 | 0/10 |
| " [DPO (pm,opt0) Diversity] | 161s | 10/10 | 10/10 |
| BST (Dafny) | 234s | 4/10 | 4/10 |
| " [DPO (pm)] | - | 0/10 | 0/10 |
| " [DPO (pm,opt0)] | 325s | 3/10 | 3/10 |
| Repeat (Dafny) | 104s | 3/10 | 4/10 |
| " [DPO (pm)] | 77s | 9/10 | 9/10 |
| " [DPO (pm,opt0)] | 49s | 10/10 | 10/10 |
| Eval/Opt0 (Coq) | - | 0/10 | 0/10 |
| " [Hints] | 86s | 4/10 | 5/10 |
| " [Hints DPO (pm)] | 48s | 4/10 | 7/10 |
| " [Hints DPO (pm,opt0)] | 43s | 4/10 | 7/10 |
| " [Lemma Hammer] | 74s | 9/10 | 9/10 |
| " [" DPO (pm)] | 70s | 9/10 | 10/10 |
| " [" DPO (pm,opt0)] | 108s | 6/10 | 8/10 |

_Figure 5. Benchmark results for fine-tuned learning by DPO._
the problem is specified in multiple steps. This indicates the
effectiveness of providing access to a plugin that allows running and verifying the generated code as well as breaking
problems down into multiple steps.
Remarkably, our baseline VMCTS algorithm succeeds on
a similar fraction of the prompts as ChatGPT4 with steps
and plugins, despite our setup using a significantly smaller
open-source model and our baseline operating with only a
boolean signal from the verifier, while the plugin provides
ChatGPT with detailed feedback about errors.
**5.4. Diversity**
The diversity variant (Section 4.1) may help with problems
where the search tree contains a path that verifies, but in fact
cannot be completed to fully solve the problem. In these
cases backtracking is needed to expand a new path that could
generate a full completion. As shown by the results of the
Opt0 Opt (Dafny) problem in Figure 3, the diversity variant
does indeed find two and three more valid solutions in the
6 minute and 10 minute timeouts, respectively, compared
to the baseline VMCTS. However, outside of this problem,
we do not see many benefits from the diversity variant. We
hypothesize that the lack of greater improvement mostly
stems from the time overhead introduced by the variant
since we generate multiple samples in each LLM call (to
ensure diversity of outputs). This causes an increase in the
best time and also reduces the size of the tree that we can
generate before the cutoffs at 6 or 10 minutes.
**5.5. In-Context Learning**
As shown in Figure 2, adding hints to prompts significantly
increases the success ratio for both Coq and Dafny
– in fact, the later prompts always use hints by default (see
Appendix A).
The Coq-specific in-context learning tactics shown in Figure 4 lead to a further increase in success rates, with the inclusion of an import line [*lia] improving success rates from 0/10
for Factorial (Coq) to 9/10 for both timeouts. Similarly, the
lemma extraction [Lemma *] not only produces the same
increase in success rate from the baseline VMCTS as [lia]
but also reduces the best search time to 22 seconds from
25 seconds. Reflection combined with hints [Hints Reflect]
and lemma extraction using a hammer [Lemma Hammer]
produce an increase in success rates on the Eval/Opt0 (Coq) problem as well, with [Hints Reflect] producing success ratios of 4/10 and 6/10 while [Lemma Hammer] produces 9/10 on both timeouts, compared to complete failure for the baseline VMCTS.
**5.6. Fine-Tuned Learning**
We generate two small datasets for DPO:
**pm syntactic corrections to pattern matching syntax in**
Dafny, in both proofs and code. We generate data
using a single problem that is unrelated to the benchmark problems and test generalization to the benchmark problems – 4 triples for code, 5 triples for proofs.
**opt0 semantic corrections to prefer the optimal solution in**
problem Opt0 Opt – 5 triples.
Excerpted triples are shown in Appendix D. Note that pm
is not specific to the tested prompts, while opt0 trains on
triples generated from the same prompt as Opt0 Opt.
In Figure 5, we see overall that the syntactic corrections
have been transferred from the arbitrary problem (in pm) to
Eval/Opt0, and the revealed solution for Opt0 Opt has been
memorized. However, there are some obvious regressions:
0/10 for [DPO (pm)] on Opt0 Opt and BST (Dafny). Running
the learned model on Coq even though it was trained on
Dafny shows no notable regression in performance, except
maybe a small one for [Lemma Hammer DPO (pm,opt0)].
**6. Discussion**
We have demonstrated that relatively weak language models can reliably produce verified code by guiding a search
process that verifies partial programs at each step. Our technique shines on multi-step problems, made of dependent
sub-problems. Our technique can be adapted to a setting
where the interfaces and specifications are given, and the
code is verified at each step by additional code containing
assertions or proofs. Our technique also works with a language-agnostic prompt, leaving all the interfaces and specifications
to the LLM itself, though there is a risk the LLM does not
follow instructions.
**Limitations.** There are still some limitations to our approach and evaluation. In our main approach, besides the
variant for checking steps (Appendix C) that was used as a
sanity check, the LLM is responsible for devising the specifications and adhering to the prompt. To make some problems
tractable, we have included language-specific hints in the
prompt (see Appendix A). Note that these hints are applied
uniformly for all comparisons (VMCTS, whole sampling,
ChatGPT). And finally, with VMCTS, the variance in running time can be significant.
A key aspect of our approach resides in the scoring of partial programs. However, the scoring is limited by coarse
granularity and lack of lookahead in the scoring function.
The granularity of the verification step is a whole unit, e.g.
a function in Dafny and a command in Coq. For Dafny, the
coarse granularity means we have to wait multiple lines to
get feedback. For Coq, the fine granularity doesn’t help
much with bigger proofs, which require planning. For Lean,
one additional difficulty is that off-the-shelf language models are trained on both Lean3 and Lean4, confusing the two
despite markers in the prompt that we are asking for Lean4.
**Future work.** What we find most interesting and promising about our approach is that so much is possible by a
“blind” search. VMCTS prunes bad paths without explanation and uses a very coarse reward signal to guide the search.
In this sense, even though the algorithm is leveraging powerful LLMs and verifiers, it is still somewhat brute force.
In future work, it would be fruitful to find ways of making
the search less “blind”, with a tighter coupling between the
LLM and the verifier.
**Acknowledgments**
We thank Matko Bošnjak, Kevin Ellis, Samy Jelassi, Gabriel Poesia, and Garrett Tanzer for insightful discussions,
and Sabrina Hu, Vivan Hui and Lisa Zhang for suggestions on drafts. We thank the Harvard SEAS Computing
group for help in access to GPUs. Tarun Prasad and Chloe
Loughridge were partially supported by the Harvard College
Research Program (HCRP) through the Harvard College Office of Undergraduate Research and Fellowships. Nada
Amin was partially supported by NSF Award 2303983. Support for William E. Byrd’s work on this paper was provided
by NCATS, through the Biomedical Data Translator program (NIH award OT2TR003435). Any opinions expressed
in this document do not necessarily reflect the views of
NCATS, individual Translator team members, or affiliated
organizations and institutions.
**Impact Statement**
This paper presents work whose goal is to advance the field
of verified program synthesis and machine learning. There
are many potential societal consequences of our work, none
of which we feel must be specifically highlighted here.
**References**
Austin, J., Odena, A., Nye, M., Bosma, M., Michalewski,
H., Dohan, D., Jiang, E., Cai, C., Terry, M., Le, Q., and
Sutton, C. Program synthesis with large language models.
_arXiv preprint arXiv:2108.07732, 2021._
Bowers, M., Olausson, T. X., Wong, L., Grand, G., Tenenbaum, J. B., Ellis, K., and Solar-Lezama, A. Top-down
synthesis for library learning. _Proc. ACM Program._
_Lang., 7(POPL), jan 2023. doi: 10.1145/3571234. URL_
[https://doi.org/10.1145/3571234.](https://doi.org/10.1145/3571234)
Cassano, F., Gouwar, J., Nguyen, D., Nguyen, S., PhippsCostin, L., Pinckney, D., Yee, M.-H., Zi, Y., Anderson,
C. J., Feldman, M. Q., Guha, A., Greenberg, M., and
Jangda, A. MultiPL-E: A Scalable and Polyglot Approach to Benchmarking Neural Code Generation. IEEE
_Transactions on Software Engineering (TSE), 49(7):3675–_
3691, 2023a.
Cassano, F., Yee, M.-H., Shinn, N., Guha, A., and Holtzen,
S. Type prediction with program decomposition and fillin-the-type training, 2023b.
Chaslot, G., Winands, M. H. M., Herik, H. J. V. D., Uiterwijk, J., and Bouzy, B. Progressive strategies for montecarlo tree search. New Mathematics and Natural Com_putation, 04:343–357, 2008._ [URL https://api.](https://api.semanticscholar.org/CorpusID:1719063)
[semanticscholar.org/CorpusID:1719063.](https://api.semanticscholar.org/CorpusID:1719063)
Chen, B., Zhang, F., Nguyen, A., Zan, D., Lin, Z., Lou, J.G., and Chen, W. Codet: Code generation with generated
tests, 2022.
Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H. P. d. O.,
Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman,
G., et al. Evaluating large language models trained on
code. arXiv preprint arXiv:2107.03374, 2021.
Czajka, Ł. and Kaliszyk, C. Hammer for coq: Automation for dependent type theory. Journal of automated
_reasoning, 61:423–453, 2018._
Desai, A., Gulwani, S., Hingorani, V., Jain, N., Karkare,
A., Marron, M., R, S., and Roy, S. Program synthesis
using natural language. In Proceedings of the 38th In_ternational Conference on Software Engineering, ICSE_
’16, pp. 345–356, New York, NY, USA, 2016. Association for Computing Machinery. ISBN 9781450339001.
[doi: 10.1145/2884781.2884786. URL https://doi.](https://doi.org/10.1145/2884781.2884786)
[org/10.1145/2884781.2884786.](https://doi.org/10.1145/2884781.2884786)
First, E., Rabe, M., Ringer, T., and Brun, Y. Baldur: Wholeproof generation and repair with large language models.
In Proceedings of the 31st ACM Joint European Soft_ware Engineering Conference and Symposium on the_
_Foundations of Software Engineering, ESEC/FSE 2023,_
pp. 1229–1241, New York, NY, USA, 2023. Association for Computing Machinery. ISBN 9798400703270.
[doi: 10.1145/3611643.3616243. URL https://doi.](https://doi.org/10.1145/3611643.3616243)
[org/10.1145/3611643.3616243.](https://doi.org/10.1145/3611643.3616243)
Glass, M., Rossiello, G., Chowdhury, M. F. M., Naik,
A., Cai, P., and Gliozzo, A. Re2G: Retrieve, rerank,
generate. In Carpuat, M., de Marneffe, M.-C., and
Meza Ruiz, I. V. (eds.), Proceedings of the 2022 Con_ference of the North American Chapter of the Association_
_for Computational Linguistics: Human Language Tech-_
_nologies, pp. 2701–2715, Seattle, United States, July_
2022. Association for Computational Linguistics. doi:
10.18653/v1/2022.naacl-main.194. [URL https://](https://aclanthology.org/2022.naacl-main.194)
[aclanthology.org/2022.naacl-main.194.](https://aclanthology.org/2022.naacl-main.194)
Grand, G., Wong, L., Bowers, M., Olausson, T. X., Liu,
M., Tenenbaum, J. B., and Andreas, J. Lilo: Learning
interpretable libraries by compressing and documenting
code, 2023.
Han, J. M., Rute, J., Wu, Y., Ayers, E., and Polu, S. Proof artifact co-training for theorem proving with language models. In International Conference on Learning Represen_[tations, 2022. URL https://openreview.net/](https://openreview.net/forum?id=rpxJc9j04U)_
[forum?id=rpxJc9j04U.](https://openreview.net/forum?id=rpxJc9j04U)
[ImparaAI. Monte carlo tree search. https://github.](https://github.com/ImparaAI/monte-carlo-tree-search)
[com/ImparaAI/monte-carlo-tree-search,](https://github.com/ImparaAI/monte-carlo-tree-search)
2024.
Jiang, A. Q., Welleck, S., Zhou, J. P., Lacroix, T., Liu,
J., Li, W., Jamnik, M., Lample, G., and Wu, Y. Draft,
sketch, and prove: Guiding formal theorem provers with
informal proofs. In The Eleventh International Confer_[ence on Learning Representations, 2023. URL https:](https://openreview.net/forum?id=SMa9EAovKMC)_
[//openreview.net/forum?id=SMa9EAovKMC.](https://openreview.net/forum?id=SMa9EAovKMC)
Lample, G., Lacroix, T., anne Lachaux, M., Rodriguez, A.,
Hayat, A., Lavril, T., Ebner, G., and Martinet, X. Hypertree proof search for neural theorem proving. In Oh,
A. H., Agarwal, A., Belgrave, D., and Cho, K. (eds.),
_Advances in Neural Information Processing Systems,_
[2022. URL https://openreview.net/forum?](https://openreview.net/forum?id=J4pX8Q8cxHH)
[id=J4pX8Q8cxHH.](https://openreview.net/forum?id=J4pX8Q8cxHH)
Li, R., Allal, L. B., Zi, Y., Muennighoff, N., Kocetkov, D.,
Mou, C., Marone, M., Akiki, C., Li, J., Chim, J., Liu, Q.,
Zheltonozhskii, E., Zhuo, T. Y., Wang, T., Dehaene, O.,
Davaadorj, M., Lamy-Poirier, J., Monteiro, J., Shliazhko,
O., Gontier, N., Meade, N., Zebaze, A., Yee, M.-H., Umapathi, L. K., Zhu, J., Lipkin, B., Oblokulov, M., Wang,
Z., Murthy, R., Stillerman, J., Patel, S. S., Abulkhanov,
D., Zocca, M., Dey, M., Zhang, Z., Fahmy, N., Bhattacharyya, U., Yu, W., Singh, S., Luccioni, S., Villegas,
P., Kunakov, M., Zhdanov, F., Romero, M., Lee, T., Timor,
N., Ding, J., Schlesinger, C., Schoelkopf, H., Ebert, J.,
Dao, T., Mishra, M., Gu, A., Robinson, J., Anderson,
C. J., Dolan-Gavitt, B., Contractor, D., Reddy, S., Fried,
D., Bahdanau, D., Jernite, Y., Ferrandis, C. M., Hughes,
S., Wolf, T., Guha, A., von Werra, L., and de Vries, H.
StarCoder: May the source be with you!, May 2023.
Ni, A., Iyer, S., Radev, D., Stoyanov, V., Yih, W.-t., Wang,
S. I., and Lin, X. V. Lever: Learning to verify languageto-code generation with execution. In Proceedings of
_the 40th International Conference on Machine Learning_
_(ICML’23), 2023._
Phind. Beating gpt-4 on humaneval with a fine-tuned
[codellama-34b. https://www.phind.com/blog/](https://www.phind.com/blog/code-llama-beats-gpt4)
[code-llama-beats-gpt4, 2023.](https://www.phind.com/blog/code-llama-beats-gpt4)
Rafailov, R., Sharma, A., Mitchell, E., Ermon, S., Manning,
C. D., and Finn, C. Direct preference optimization: Your
language model is secretly a reward model, 2023.
Rosin, C. D. Multi-armed bandits with episode
context. _Annals of Mathematics and Artificial_
_Intelligence,_ 61:203–230, 2011. [URL https:](https://api.semanticscholar.org/CorpusID:207081359)
[//api.semanticscholar.org/CorpusID:](https://api.semanticscholar.org/CorpusID:207081359)
[207081359.](https://api.semanticscholar.org/CorpusID:207081359)
Roziere, B., Gehring, J., Gloeckle, F., Sootla, S., Gat, I.,
Tan, X. E., Adi, Y., Liu, J., Remez, T., Rapin, J., et al.
Code llama: Open foundation models for code. arXiv
_preprint arXiv:2308.12950, 2023._
Shinn, N., Cassano, F., Gopinath, A., Narasimhan, K. R.,
and Yao, S. Reflexion: Language agents with verbal
reinforcement learning. In Thirty-seventh Conference on
_Neural Information Processing Systems, 2023._
Shirafuji, A., Oda, Y., Suzuki, J., Morishita, M., and
Watanobe, Y. Refactoring programs using large language
models with few-shot examples, 2023.
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L.,
van den Driessche, G., Schrittwieser, J., Antonoglou, I.,
Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe,
D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap,
T. P., Leach, M., Kavukcuoglu, K., Graepel, T., and
Hassabis, D. Mastering the game of go with deep neural networks and tree search. _Nature, 529:484–489,_
2016. [URL https://api.semanticscholar.](https://api.semanticscholar.org/CorpusID:515925)
[org/CorpusID:515925.](https://api.semanticscholar.org/CorpusID:515925)
Thakur, A., Wen, Y., and Chaudhuri, S. A language-agent
approach to formal theorem-proving, 2023.
von Werra, L., Belkada, Y., Tunstall, L., Beeching, E.,
Thrush, T., Lambert, N., and Huang, S. Trl: Transformer reinforcement learning. [https://github.](https://github.com/huggingface/trl)
[com/huggingface/trl, 2020.](https://github.com/huggingface/trl)
Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue,
C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz,
M., Davison, J., Shleifer, S., von Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Scao, T. L., Gugger, S., Drame,
M., Lhoest, Q., and Rush, A. M. Transformers: Stateof-the-art natural language processing. In Proceedings
_of the 2020 Conference on Empirical Methods in Natu-_
_ral Language Processing: System Demonstrations, pp._
38–45, Online, October 2020. Association for Computational Linguistics. [URL https://www.aclweb.](https://www.aclweb.org/anthology/2020.emnlp-demos.6)
[org/anthology/2020.emnlp-demos.6.](https://www.aclweb.org/anthology/2020.emnlp-demos.6)
Yang, K., Swope, A., Gu, A., Chalamala, R., Song, P., Yu,
S., Godil, S., Prenger, R., and Anandkumar, A. LeanDojo: Theorem proving with retrieval-augmented language models. In Neural Information Processing Systems
_(NeurIPS), 2023._
Ye, X., Chen, Q., Dillig, I., and Durrett, G. Optimal neural program synthesis from multimodal specifications. In Moens, M.-F., Huang, X., Specia,
L., and Yih, S. W.-t. (eds.), Findings of the As_sociation for Computational Linguistics:_ _EMNLP_
_2021, pp. 1691–1704, Punta Cana, Dominican Repub-_
lic, November 2021. Association for Computational
Linguistics. doi: 10.18653/v1/2021.findings-emnlp.
[146. URL https://aclanthology.org/2021.](https://aclanthology.org/2021.findings-emnlp.146)
[findings-emnlp.146.](https://aclanthology.org/2021.findings-emnlp.146)
Zhang, S., Chen, Z., Shen, Y., Ding, M., Tenenbaum, J. B.,
and Gan, C. Planning with large language models for code
generation. In The Eleventh International Conference
_[on Learning Representations, 2023a. URL https://](https://openreview.net/forum?id=Lr8cOOtYbfL)_
[openreview.net/forum?id=Lr8cOOtYbfL.](https://openreview.net/forum?id=Lr8cOOtYbfL)
Zhang, T., Yu, T., Hashimoto, T. B., Lewis, M., Yih, W.-t.,
Fried, D., and Wang, S. I. Coder reviewer reranking for
code generation. In Proceedings of the 40th International
_Conference on Machine Learning, ICML’23. JMLR.org,_
2023b.
Zhou, A., Yan, K., Shlapentokh-Rothman, M., Wang, H.,
and Wang, Y.-X. Language agent tree search unifies
reasoning acting and planning in language models, 2023.
**A. Prompts**
**A.1. Factorial Prompt**
_In {LANG}, write a factorial function and prove that the factorial is always strictly positive._
**Hints for Dafny.**
_### Hint: Use a plain function, NOT a function method._
**Hints for Coq.**
_### Hint: Don’t forget to import the Arith module._
_### Hint: use ‘Nat.lt_0_1’ in the base case of the proof._
_### Hint: use ‘Nat.lt_lt_add_r’ in the inductive case of the proof._
**A.2. Eval/Opt0 Prompt**
_In {LANG}, write an ADT for arithmetic expressions comprising constants, variables and binary additions. Then write_
_an evaluator taking an expression and an environment (a function that takes a variable name and returns a number) and_
_returning the number resulting from evaluation. Then write an optimizer taking an expression and returning an expression_
_with all additions by 0 removed. Then prove that the optimizer preserves the semantics as defined by the evaluation function._
**Hints for Dafny.**
_### Hint: Recall that in Dafny, pattern match takes the form_
match e
case Foo(x, y) => 1
case Bar(x) => 2
case _ => 3
_### Hint: In the optimizer, recursively optimize the sub-expressions._
_### Hint: For the proof, just do a simple pattern match (match not if) and call the lemma recursively without adding asserts._
**Hints for Coq.**
_### Hint: In the optimizer, recursively optimize the sub-expressions._
_### Hint: You can import the ‘string’ datatype with the line ‘Require Import Coq.Strings.String.‘._
_### Hint: Use Fixpoint instead of Definition for recursive functions._
_### Hint: With tactics like ‘induction’ and ‘destruct‘, _avoid_ naming with ‘as’ and let Coq pick the names for you. For_
_example, use ‘induction e.’ but _not_ ‘induction e as [...]‘._
_### Hint: For the proof, do ‘induction e.‘. Do NOT name the hypotheses with ‘as‘._
_### Hint: The simple cases are by ‘simpl. reflexivity.‘._
_### Hint: The addition case is by ‘simpl. rewrite <- IHe1. rewrite <- IHe2. destruct (optimize e1); destruct (optimize e2);_
_try destruct n; try destruct n0; eauto using PeanoNat.Nat.add_0_r.‘._
_### Hint: You’ll need ‘Require Import Arith‘._
Note: these hints are not robust, since the proof hints won’t work for all valid solutions.
**A.3. Opt0 Opt Prompt**
_### In Dafny, write an ADT for arithmetic expressions comprising constants, variables and binary addition. Then write a_
_predicate ‘optimal’ that holds on an expression if it has no additions by 0. Then write an optimizer ‘optimize’ that removes_
_all additions by 0. Then write a lemma ‘OptimizerOptimal’ that ensures ‘optimal(optimize(e))’ for all expressions ‘e‘._
_### Hint: This is the definition of the ‘optimal’ predicate:_
predicate optimal(e: Expr) {
  match e
  case Add(Const(0), _) => false
  case Add(_, Const(0)) => false
  case Add(e1, e2) => optimal(e1) && optimal(e2)
  case _ => true
}
_### Hint: Don’t use the same structure for ‘optimize’ as for ‘optimal‘. Instead, follow the next hint._
_### Hint: In the addition case, the ‘optimize’ function should recursively optimize the sub-expressions and then match on_
_the optimized sub-expressions._
_### Hint: Do NOT use ‘requires’ anywhere._
_### Hint: Write the lemma as_
lemma OptimizerOptimal(e: Expr)
  ensures optimal(optimize(e))
_### Hint: Recall that in Dafny, pattern match takes the form_
match e
case Foo(x, y) => 1
case Bar(x) => 2
case _ => 3
_### Hint: In the optimizer, recursively optimize the sub-expressions._
_### Hint: For the proof, just do a simple pattern match (match not if) and call the lemma recursively without adding asserts._
A.3.1. BST PROMPT
_In {LANG},_
- (1) write an ADT for a tree of natural numbers.
- Then (2) write a predicate that checks whether a given tree is a binary search tree (BST).
- Then (3) write a function that inserts an element into a binary search tree while preserving the BST property.
- Then (4) write a predicate that checks whether a given tree contains a given element.
- Then (5) write a lemma about the insert function that ensures that the tree resulting from inserting an element contains
_that element (without requiring nor ensuring the BST property)._
- Then (6) write another lemma about the insert function that checks the BST property continues to hold after insertion.
_This lemma should take bounds on the BST, and require that the element to be inserted is within those bounds._
**Hints for Dafny.**
### Hint: For each proof, do not use assertions. Just analyze the structure based on the insert function, and recursively call
the lemma to match the recursive calls in the function.
### Hint: Recall that in Dafny, pattern match takes the form
match e
case Foo(x, y) => 1
case Bar(x) => 2
case _ => 3
### Hint: do not have ‘requires’ nor ‘ensures’ clauses in the insert function. The lemmas will be proved after the definition;
in those lemmas, have ‘requires’ and ‘ensures’ clauses.
**A.4. Repeat Prompt**
_In {LANG}:_
- (1) Write a function ‘repeat’ that takes an integer ‘x’ and a natural number ‘n’ as inputs, and returns a list of length ‘n’
_in which every element is ‘x’._
- (2) Write a lemma that checks that for any ‘x’ and ‘n’, ‘repeat’ returns a list of length ‘n’.
- (3) Write a lemma that checks that for any ‘x’ and ‘n’, ‘repeat’ returns a list where every element is ‘x’.
**Hints for Dafny.**
_### Hint: The length of a list or sequence ‘s’ is ‘|s|’._
_### Hint: In a specification, you can write ‘forall VAR :: CONDITION1 ==> CONDITION2’._
**Hints for Coq.**
_### Hint: Import ‘Coq.Lists.List’._
**B. Examples of Scoring Partial Programs**
Partial program with a score of 0:
datatype Expr =
Partial program with a score of +1:
datatype Expr =
| Const(val: int)
Partial program with a score of −1:
datatype Expr =
| Const(val: int)
| Var(name: string)
| Add(e1: Expr, e2: Expr)
function Evaluate(e: Expr, env: string -> int): int
  reads env
{
  match e
  case Const(val) => val
  case Var(name) => env(name)
  case Add(e1, e2) =>
    Evaluate(e1, env) + Evaluate(e2, env)
}
The negative score is due to the reads clause, which shouldn’t be there. Unfortunately, we only confirm the error once the
whole function is generated.
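A rough sketch, consistent with the scored examples above, of how such a score could be computed for a partial Dafny program follows. The completeness test and the `dafny verify` invocation are assumptions about the setup, not the exact procedure used.

```python
import os
import subprocess
import tempfile

def score_partial_program(program: str, is_complete_unit) -> int:
    """Score a partial program: 0 if the current unit is not yet complete
    (nothing new to check), +1 if the completed code verifies, -1 if the
    verifier rejects it.  `is_complete_unit` is a placeholder predicate that
    decides whether the text currently ends on a checkable unit."""
    if not is_complete_unit(program):
        return 0
    with tempfile.NamedTemporaryFile("w", suffix=".dfy", delete=False) as tmp:
        tmp.write(program)
        path = tmp.name
    try:
        # Assumes a Dafny 4-style CLI where `dafny verify FILE` is available.
        result = subprocess.run(["dafny", "verify", path],
                                capture_output=True, timeout=60)
        return 1 if result.returncode == 0 else -1
    finally:
        os.remove(path)
```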
**C. Checking Steps**
Instead of leaving it to the LLM to generate correct specification, we can explicitly impose the interfaces and specifications
on the LLM. In this mode, we alternate between LLM generation under imposition, and independent verification.
For example, we can create a multi-step prompt that ensures the optimality of the optimizer devised by the LLM.
The prompt: In Dafny, an ADT for arithmetic expressions comprising constants, variables and binary addition.
```dafny
datatype Expr =
  Const(i: int)
  | Var(x: string)
  | Add(e1: Expr, e2: Expr)
```
_An optimizer taking an expression and returning an expression with all additions by 0 removed._
```dafny
function optimize(e: Expr): Expr
{
```
And after the VMCTS completes the prompt, we check it with the following code:
```dafny
predicate optimal(e: Expr) {
  match e
  case Add(Const(0), _) => false
  case Add(_, Const(0)) => false
  case Add(e1, e2) => optimal(e1) && optimal(e2)
  case _ => true
}

lemma OptimizerOptimal(e: Expr)
  ensures optimal(optimize(e))
{
  match e
  case Add(e1, e2) =>
    OptimizerOptimal(e1);
    OptimizerOptimal(e2);
  case _ =>
}
```
We can thus alternate between generation, checking, and re-prompting steps.
As we provide a generic proof in the independent verification, we avoid the LLM having to backtrack the optimizer after
attempting several proofs.
In this mode, we run a new VMCTS algorithm for each step, committing to the previous steps that have passed verification.
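A high-level sketch of this alternation is shown below; `vmcts_complete` and `verify` are placeholders for the search and verifier components described above, not an actual API.

```python
def synthesize_in_steps(step_prompts, step_checks, vmcts_complete, verify):
    """Alternate LLM generation and independent verification.

    step_prompts:   prompt fragments, one per step
    step_checks:    trusted checking code appended after each step
                    (e.g. the `optimal` predicate and the `OptimizerOptimal` lemma)
    vmcts_complete: runs a fresh VMCTS search to complete the given text
    verify:         returns True iff the verifier accepts the given code
    """
    committed = ""
    for prompt, check in zip(step_prompts, step_checks):
        completion = vmcts_complete(committed + prompt)
        candidate = committed + prompt + completion
        if not verify(candidate + check):
            raise RuntimeError("step failed independent verification; re-prompt or retry")
        committed = candidate  # commit only steps that passed verification
    return committed
```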
**D. DPO Datasets (Excerpted Triples 3/14)**
**D.1. Triple for correcting pattern matching syntax**
D.1.1. PROMPT
In Dafny, we have the following ADT:
```dafny
datatype Foo = Bar(n: nat) | Baz(x: Foo)
```
Take the denotation of a Foo to be 2 times n for a Bar, and the denotations of its Foo for a Baz.
```dafny
function denotation(foo: Foo): nat
{
```
D.1.2. CHOSEN
```dafny
  match foo
  case Bar(n) => 2 * n
  case Baz(x) => denotation(x)
}
```
D.1.3. REJECTED
```dafny
  match foo
  case Bar(n): 2 * n
  case Baz(x): denotation(x)
}
```
**D.2. Triple for correcting proof case analysis**
D.2.1. PROMPT
In Dafny, we have the following ADT:
```dafny
datatype Foo = Bar(n: nat) | Baz(x: Foo)
```
Take the denotation of a Foo to be 2 times n for a Bar, and the denotations of its Foo for a Baz.
```dafny
function denotation(foo: Foo): nat
{
  match foo
  case Bar(n) => 2 * n
  case Baz(x) => denotation(x)
}
```
A lemma that proves that the denotation of a Foo is always even.
```dafny
lemma denotationAlwaysEven(foo: Foo)
  ensures denotation(foo) % 2 == 0
{
```
D.2.2. CHOSEN
```dafny
  match foo
  case Bar(n) =>
  case Baz(x) =>
    denotationAlwaysEven(x);
}
```
D.2.3. REJECTED
```dafny
  if foo = Const(?n)
  {
  }
  else
  {
    denotationAlwaysEven(foo.x);
  }
}
```
**D.3. Triple for preferring optimal solution**
D.3.1. PROMPT
In Dafny, an ADT for arithmetic expressions comprising constants, variables and binary addition.
```dafny
datatype Expr = Const(i: int) | Var(x: string) | Add(e1: Expr, e2: Expr)
```
An evaluator ‘eval’ taking an expression and an environment (a function that takes a variable name and returns a number)
and returning the number resulting from evaluation.
```dafny
function eval(e: Expr, env: string -> int): int
{
  match e
  {
    case Const(i) => i
    case Var(x) => env(x)
    case Add(e1, e2) => eval(e1, env) + eval(e2, env)
  }
}
```
An optimizer taking an expression and returning an expression with all additions by 0 removed. In the addition case, the
‘optimize’ function first recursively optimizes the sub-expressions, then matches against the optimized sub-expressions to
check for constants 0.
```dafny
function optimize(e: Expr): Expr
{
```
D.3.2. CHOSEN
```dafny
  match e
  {
    case Add(e1, e2) =>
      var e1_opt := optimize(e1);
      var e2_opt := optimize(e2);
      if e1_opt = Const(0) then e2_opt
      else if e2_opt = Const(0) then e1_opt
      else Add(e1_opt, e2_opt)
    case _ => e
  }
}
```
D.3.3. REJECTED
```dafny
  match e
  {
    case Add(Const(0), e2) => optimize(e2)
    case Add(e1, Const(0)) => optimize(e1)
    case Add(e1, e2) => Add(optimize(e1), optimize(e2))
    case _ => e
  }
}
```
**E. Experimental Setup**
Our setup is a multi-GPU machine, with 4 GPUs NVIDIA A100 80GB PCIe, 2 CPUs Intel Xeon Platinum 8358 @ 2.60GHz,
and 1024 GB DDR-4 memory. The machine was potentially in use by other jobs during our measurements. In practice, we only need 1
GPU for any of the experiments reported here.
For most experiments, we use a temperature of 0.8, a top-k of 7, and a top-p of 0.9. The [*Diversity] variant uses those
same settings with 5 samples per call to the LLM. For DPO training, we used a learning rate of 1e−3, a beta of 0.5 and
400 steps (taking the loss to 0).
We use the Transformers library (Wolf et al., 2020) to query an LLM and the TRL library for DPO (von Werra et al., 2020).
For the MCTS, we adapt a generic open-source library (ImparaAI, 2024).
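For reference, the decoding settings above map directly onto Hugging Face `generate` arguments. The sketch below is illustrative only: the exact model repository id, prompt, and token budget are assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model reported in Figure 2; the exact Hugging Face repo id is an assumption.
model_name = "Phind/Phind-CodeLlama-34B-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

prompt_text = "In Dafny, write a factorial function and prove that it is always strictly positive.\n"
inputs = tokenizer(prompt_text, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,            # settings reported above
    top_k=7,
    top_p=0.9,
    num_return_sequences=5,     # the [*Diversity] variant draws 5 samples per call
    max_new_tokens=64,          # illustrative; the search expands one line/command at a time
)
completions = tokenizer.batch_decode(
    outputs[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
```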
**F. ChatGPT Transcripts**
_In Figure 6, we provide links to the transcripts of the ChatGPT sessions that led to the results in Figure 2._
_prompt_ _ChatGPT4_
whole plug. whole plug. steps
Factorial (Coq) ✗ ✗ ✓
[https://chat.openai.com/share/f42763b4-3707-43f0-b2a0-8541bc5f962a](https://chat.openai.com/share/f42763b4-3707-43f0-b2a0-8541bc5f962a)
[https://chat.openai.com/share/50183b58-be0e-42a8-8378-91a99f760d88](https://chat.openai.com/share/50183b58-be0e-42a8-8378-91a99f760d88)
[https://chat.openai.com/share/6d16bf18-17da-447d-9959-f9e71db1701e](https://chat.openai.com/share/6d16bf18-17da-447d-9959-f9e71db1701e)
" [Hints] ✗ ✗ ✗
[https://chat.openai.com/share/b530804e-cec0-4a17-a86d-fed2e35e7408](https://chat.openai.com/share/b530804e-cec0-4a17-a86d-fed2e35e7408)
[https://chat.openai.com/share/6d4fa9b1-5a0c-4640-90a7-e3dcbaa25c80](https://chat.openai.com/share/6d4fa9b1-5a0c-4640-90a7-e3dcbaa25c80)
[https://chat.openai.com/share/1b5aa7dc-da3f-42d2-b1ea-4b7873e115b3](https://chat.openai.com/share/1b5aa7dc-da3f-42d2-b1ea-4b7873e115b3)
" [lia] ✓ ✓ ✓
[https://chat.openai.com/share/cb909b1c-997a-4c4c-bbc9-3ad5e84d4e3c](https://chat.openai.com/share/cb909b1c-997a-4c4c-bbc9-3ad5e84d4e3c)
[https://chat.openai.com/share/6275f885-39a4-4529-a09d-7a605bcfdc91](https://chat.openai.com/share/6275f885-39a4-4529-a09d-7a605bcfdc91)
Factorial (Dafny) ✓ ✓ ✓
[https://chat.openai.com/share/4aada350-5471-4f5d-8d0b-2b6286266b74](https://chat.openai.com/share/4aada350-5471-4f5d-8d0b-2b6286266b74)
[https://chat.openai.com/share/675b4044-f108-4dd9-9602-55156b6d832d](https://chat.openai.com/share/675b4044-f108-4dd9-9602-55156b6d832d)
Factorial (Lean) ✗ -
[https://chat.openai.com/share/09ecf72e-a2e3-4b95-a11a-ecfc79dcae2e](https://chat.openai.com/share/09ecf72e-a2e3-4b95-a11a-ecfc79dcae2e)
Eval/Opt0 (Coq) ✗ ✗ ✗
[https://chat.openai.com/share/98f1bd59-083a-4b39-b6fa-f6da7ca8d21c](https://chat.openai.com/share/98f1bd59-083a-4b39-b6fa-f6da7ca8d21c)
[https://chat.openai.com/share/041152a1-654f-4f37-a6e4-630834b3e225](https://chat.openai.com/share/041152a1-654f-4f37-a6e4-630834b3e225)
[https://chat.openai.com/share/8f6ff93d-fe75-46b6-ae25-715536d30f55](https://chat.openai.com/share/8f6ff93d-fe75-46b6-ae25-715536d30f55)
" [Hints] ✗ ✓ ✓
[https://chat.openai.com/share/296ca631-923f-4c38-8abb-4626e00596b1](https://chat.openai.com/share/296ca631-923f-4c38-8abb-4626e00596b1)
[https://chat.openai.com/share/bc4915b7-d47d-4e04-a58c-7aa4716386e5](https://chat.openai.com/share/bc4915b7-d47d-4e04-a58c-7aa4716386e5)
Eval/Opt0 (Dafny) ✗ ✗ ✓
[https://chat.openai.com/share/c1c16978-440c-43ad-b448-e121466013a1](https://chat.openai.com/share/c1c16978-440c-43ad-b448-e121466013a1)
[https://chat.openai.com/share/49b34550-47dc-4d18-8485-38a2e0c13431](https://chat.openai.com/share/49b34550-47dc-4d18-8485-38a2e0c13431)
[https://chat.openai.com/share/2874c7fb-6b32-4124-b695-c49dac0cd2f6](https://chat.openai.com/share/2874c7fb-6b32-4124-b695-c49dac0cd2f6)
" [Hints] ✗ ✓ ✓
[https://chat.openai.com/share/318eabd0-59af-4429-b453-15d6da50319d](https://chat.openai.com/share/318eabd0-59af-4429-b453-15d6da50319d)
[https://chat.openai.com/share/026f91a5-2920-4526-9eb9-8aaaa734cb12](https://chat.openai.com/share/026f91a5-2920-4526-9eb9-8aaaa734cb12)
Opt0 Opt (Dafny) ✓ ✓ ✓
[https://chat.openai.com/share/46e7a8c3-11a6-4a36-9735-295cec4801f4](https://chat.openai.com/share/46e7a8c3-11a6-4a36-9735-295cec4801f4)
BST (Dafny) ✗ ✗ ✓
[https://chat.openai.com/share/cfc40473-1212-43f8-bbb9-35312fb644d7](https://chat.openai.com/share/cfc40473-1212-43f8-bbb9-35312fb644d7)
[https://chat.openai.com/share/de790e8f-6d84-43c9-965d-3b1440c4f59d](https://chat.openai.com/share/de790e8f-6d84-43c9-965d-3b1440c4f59d)
[https://chat.openai.com/share/744efa4b-6498-4f6e-85ef-c837aa75fb2d](https://chat.openai.com/share/744efa4b-6498-4f6e-85ef-c837aa75fb2d)
Repeat (Coq) ✗ ✓ ✓
[https://chat.openai.com/share/e8bddd4d-c3b7-4af0-8e06-a0393a0649e9](https://chat.openai.com/share/e8bddd4d-c3b7-4af0-8e06-a0393a0649e9)
[https://chat.openai.com/share/20001cb7-29fc-423a-b2c6-73dea825b062](https://chat.openai.com/share/20001cb7-29fc-423a-b2c6-73dea825b062)
Repeat (Dafny) ✗ ✗ ✓
[https://chat.openai.com/share/f37d520e-2a87-4767-b7dd-b6992f7a2e26](https://chat.openai.com/share/f37d520e-2a87-4767-b7dd-b6992f7a2e26)
[https://chat.openai.com/share/26660140-2bcc-4374-aeb3-b24240d3b6f0](https://chat.openai.com/share/26660140-2bcc-4374-aeb3-b24240d3b6f0)
[https://chat.openai.com/share/f70de2ee-26b7-455e-b39f-cea78fab7995](https://chat.openai.com/share/f70de2ee-26b7-455e-b39f-cea78fab7995)
_Figure 6. Links to ChatGPT transcripts for the benchmark results in Figure 2._
We have developed ChatGPT GPTs, one for Dafny checking and one for Coq checking. The action of each GPT enables
ChatGPT to check code and get a detailed outcome, including error messages.
In our evaluation, we try three variants:
**whole We ask ChatGPT to solve the problem in one prompt, without providing a GPT assistant.**
**plug. whole We ask ChatGPT to solve the problem in one prompt, while providing access to the GPT, so ChatGPT can**
check its solution and re-try.
**plug. steps We ask ChatGPT to solve the problem while providing access to the GPT, but we break the prompt into**
interactive steps.
ChatGPT sometimes gets stuck with a system error, in which case we re-try unless it keeps failing over many tries.
Breaking down the problem into multiple steps like in the plug. steps variant enables ChatGPT to retry on smaller chunks,
causing fewer system errors.
| [
"David, Brandfonbrener",
"Sibi, Raja",
"Tarun, Prasad",
"Chloe, Loughridge",
"Jianang, Yang",
"Simon, Henniger",
"William E., Byrd",
"Robert, Zinkov",
"Nada, Amin"
] | 2024-02-12T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2402.08147 | null | null |
VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment | Large language models (LLMs) are increasingly applied to complex reasoning tasks that require executing several complex steps before receiving any reward. Properly assigning credit to these steps is essential for enhancing model performance. Proximal Policy Optimization (PPO), a state-of-the-art reinforcement learning (RL) algorithm used for LLM finetuning, employs value networks to tackle credit assignment. However, value networks face challenges in predicting the expected cumulative rewards accurately in complex reasoning tasks, often leading to high-variance updates and suboptimal performance. In this work, we systematically evaluate the efficacy of value networks and reveal their significant shortcomings in reasoning-heavy LLM tasks, showing that they barely outperform a random baseline when comparing alternative steps. To address this, we propose VinePPO, a straightforward approach that leverages the flexibility of language environments to compute unbiased Monte Carlo-based estimates, bypassing the need for large value networks. Our method consistently outperforms PPO and other RL-free baselines across MATH and GSM8K datasets with fewer gradient updates (up to 9x), less wall-clock time (up to 3.0x). These results emphasize the importance of accurate credit assignment in RL finetuning of LLM and demonstrate VinePPO's potential as a superior alternative. | VinePPO is proposed, a straightforward approach that leverages the flexibility of language environments to compute unbiased Monte Carlo-based estimates, bypassing the need for large value networks, and consistently outperforms PPO and other RL-free baselines across MATH and GSM8K datasets. | [
"Alessandro, Sordoni",
"Amirhossein, Kazemnejad",
"Milad, Aghajohari",
"Siva, Reddy",
"Eva, Portelance",
"Aaron, Courville",
"Nicolas Le, Roux"
] | 2024-10-02T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2410.01679 | https://arxiv.org/abs/2410.01679 | https://www.semanticscholar.org/paper/93e2bf97b2664443b108e03950829cc79f3f8419 |
|
VisScience: An Extensive Benchmark for Evaluating K12 Educational Multi-modal Scientific Reasoning | Multi-modal large language models (MLLMs) have demonstrated promising capabilities across various tasks by integrating textual and visual information to achieve visual understanding in complex scenarios. Despite the availability of several benchmarks aims to evaluating MLLMs in tasks from visual question answering to complex problem-solving, most focus predominantly on mathematics or general visual understanding tasks. This reveals a critical gap in current benchmarks, which often overlook the inclusion of other key scientific disciplines such as physics and chemistry. To address this gap, we meticulously construct a comprehensive benchmark, named VisScience, which is utilized to assess the multi-modal scientific reasoning across the three disciplines of mathematics, physics, and chemistry. This benchmark comprises 3,000 questions drawn from K12 education - spanning elementary school through high school - equally distributed across three disciplines, with 1,000 questions per discipline. The questions within VisScience span 21 distinct subjects and are categorized into five difficulty levels, offering a broad spectrum of topics within each discipline. With VisScience, we present a detailed evaluation of the performance of 25 representative MLLMs in scientific reasoning. Experimental results demonstrate that closed-source MLLMs generally outperform open-source models. The best performance observed include a 53.4\% accuracy in mathematics by Claude3.5-Sonnet, 38.2\% in physics by GPT-4o, and 47.0\% in chemistry by Gemini-1.5-Pro. These results underscore the strengths and limitations of MLLMs, suggesting areas for future improvement and highlighting the importance of developing models that can effectively handle the diverse demands of multi-modal scientific reasoning. | A comprehensive benchmark is constructed, named VisScience, which is utilized to assess the multi-modal scientific reasoning across the three disciplines of mathematics, physics, and chemistry and demonstrates that closed-source MLLMs generally outperform open-source models. | [
"Zhen, Yang",
"Jinhao, Chen",
"Zhihuan, Jiang",
"Zhengxiao, Du",
"Weihan, Wang",
"Bin, Xu",
"Yuxiao, Dong",
"Jie, Tang"
] | 2024-09-09T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.13730 | https://arxiv.org/abs/2409.13730 | https://www.semanticscholar.org/paper/32f49c85f75b6339a776d8012e24c00894f30c7d |
|
What is my math transformer doing?–Three results on interpretability and generalization | N/A | null | ## What is my math transformer doing? Three results on interpretability and generalization
François Charton
Meta AI
Abstract
This paper investigates the failure cases and out-of-distribution behavior of transformers trained on matrix inversion and eigenvalue decomposition. I show that incorrect model predictions still retain deep mathematical properties of the solution
(e.g. correct eigenvalues, unit norm of eigenvectors), and that almost all model
failures can be attributed to, and predicted from, properties of the problem or solution. This demonstrates that, when in doubt, math transformers do not hallucinate
absurd solutions (as was sometimes proposed) but remain “roughly right”. I also
show that the careful choice of a training dataset can accelerate training, while
allowing the model to generalize out of its training distribution, invalidating the
idea that transformers “merely interpolate” from memorized examples.
1 Introduction
Transformer-based AI for mathematics is a fast-developing field. Over recent years, transformers
were applied to a wide range of problems: arithmetic [9], linear algebra [2], polylogarithm identities [3], symbolic integration [6], symbolic regression [1] and theorem proving [10]. Meanwhile,
limitations of transformers were found, which may restrict their use in maths and science. In this
paper, I challenge three commonly discussed limitations, namely:
- that transformers are black boxes, and there is no way to know how they solve a problem.
In mathematics, this means one cannot tell whether the model has learned the abstract
concepts needed to solve the problem, or just interpolates between memorized training
examples.
- that transformers have no sense of the correctness of their results. They sometimes hallucinate absurd solutions, instead of remaining “roughly right” or admitting failure.
- that trained transformers are brittle, and struggle with out-of-domain generalization. In
mathematics, the procedure used to generate the training data heavily influences the problems that the model can solve accurately.
Experimenting with three problems of linear algebra, eigenvalue calculation, diagonalisation and
matrix inversion, in the setting described by [2], I show that mathematical properties are indeed
learned by transformers, and that their failure cases can be understood and predicted. I also show
that by carefully selecting the training dataset, I can improve model performance and generalize far
away from the training distribution, challenging the idea that transformers “merely interpolate”.
2 What is my model doing? Learning the spectral theorem.
In the diagonalization task (“eigenvectors” in [2]), a model is trained to decompose a symmetric 5 × 5 matrix M, by predicting a vector Λ ∈ R^5 (with λ1 ≥ λ2 ≥ . . . ≥ λ5) and a 5 × 5 matrix H such that H^T M H = diag(Λ). Theory [4] tells us that the coordinates of Λ are the eigenvalues of M, and the columns of H the corresponding eigenvectors. Besides, H is orthogonal, that is,
H^-1 = H^T, or, equivalently, all its rows and columns have unit norm and are mutually orthogonal.
Because its coordinates are sorted, Λ is unique. The columns of H, on the other hand, are defined
up to a sign change (or a transformation from the symmetry group O(k) when k eigenvalues are
equal).
As in [2], a sequence-to-sequence transformer (see appendix B for details) is trained to predict
the decomposition (Λ, H) of a matrix M . During training, the model minimizes the cross-entropy
between its predictions and the sequences representing Λ and H. At test time, model accuracy is
defined as the quality of the diagonalisation, i.e. whether ∥H^T M H − Λ∥/∥Λ∥ < τ (using the L^1
norm, and with tolerance τ = 5%). In this experiment, the model is trained from examples only,
and no problem-specific inductive bias is introduced, either in the architecture or in the training
procedure. To determine if some of the theoretical properties of diagonalization are learned, I run
the trained model on a test set of 50000 random matrices, and investigate its predictions.
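Concretely, the evaluation amounts to checks along the following lines (a NumPy sketch; reading the L^1 norm as the entrywise sum of absolute values is an assumption about the exact convention):

```python
import numpy as np

def diagonalization_ok(M, lam, H, tol=0.05):
    """Accept a predicted (Lambda, H) if ||H^T M H - diag(Lambda)|| is within
    `tol` of ||Lambda|| (entrywise L1 norms)."""
    residual = H.T @ M @ H - np.diag(lam)
    return np.abs(residual).sum() / np.abs(lam).sum() <= tol

def eigenvalues_ok(M, lam, tol=0.01):
    """Compare the predicted eigenvalues to numpy's, sorted in the same
    (decreasing) order as the model's output."""
    true_lam = np.sort(np.linalg.eigvalsh(M))[::-1]
    return np.abs(lam - true_lam).sum() / np.abs(true_lam).sum() <= tol
```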
The model achieves an accuracy of 92.0%. However, in 99.9% of the test cases, the eigenvalues of
the input matrix M are predicted with less than 1% relative error (in L^1 norm), and within 0.5%
in 96.1% of test cases. Also, in 98.9% of the test cases, the norms of all rows and columns in the
predicted H are in the interval [0.99, 1.01], as theory dictates. These two mathematical properties of
diagonalization, i.e. that Λ is the eigenvalues, and that the columns of H have unit norm, have been
learned by the model. They are verified even in incorrect predictions.
In this experiment, the model achieves high in-domain accuracy, but similar results are observed
in weaker models. On a “half-trained” model that only achieves 70% accuracy, the eigenvalues
are predicted (within 1%) in 99.6% of the test cases, and all rows and columns have unit norms in
96.7%. For larger matrices (6 × 6), the model achieves a meager 43% accuracy. Yet, eigenvalues
are predicted within 1% in 99.6% of the test cases, and rows and columns of H have unit norm in
93.1%.
Theory predicts that the rows and columns of H should be orthogonal. This property can be quantified by computing the dot products between successive normalized rows and columns of H. The
dot products are second order approximations of the difference between π/2 and the angle between
vectors (which should be zero if H is orthogonal). On the test set, all angles are within 0.1 radians
(5.7°) of π/2 in 95.2% of test cases, and 0.05 radians (2.9°) in 93.6%. The lack of orthogonality
between rows and columns accounts for almost all failure cases: in 99.5% of successful model predictions, all angles between successive rows and columns are less than 0.03 radians, and H is close
to orthogonal. On the other hand, one angle is larger than 0.03 radians in 90% of model failures.
These experiments teach us three lessons about math transformers. First, deep mathematical properties are learned during training: all eigenvalues are correctly predicted, and all columns of H have
unit norms, even when the model fails to predict the correct diagonalisation, and even for models
with low accuracy (half-trained, or trained on harder problems). Second, math transformers do not
seem to hallucinate absurd solutions. Even when the model fails, Λ is correct, and H is close to
orthogonal. Finally, they provide a simple mathematical explanation for almost all model failures.
3 Predicting failure: verifiers for math transformers
On the diagonalization task, almost all incorrect model predictions can be attributed to H not being
orthogonal. From this observation, a useful statistic for predicting model failure can be derived: the
condition number of H (i.e. the ratio of its largest and smallest singular values, henceforth c(H)).
When H is orthogonal, we have c(H) = 1 (else c(H) > 1). Over the 50 000 test cases, correct
model predictions have an average condition number of 1.01 (with a standard deviation of 0.0065).
For model failures, the average condition number is 1.28. Using the rule c(H) < 1.045, 99.3% of
model successes and failures can be predicted. More precisely, we have c(H) < 1.045 in 99.94%
of correct predictions, and c(H) > 1.045 in 96.7% of model failures.
A similar situation arises for 5×5 matrix inversion. Over a test set of 50 000 examples, a transformer
has an accuracy of 89.0%. As in [2], accuracy is defined by how close the product of the model
prediction P and the input matrix M is to identity, i.e. ∥PM − I∥/∥I∥ < τ (τ = 5%). But we can
also compute the L^1 distance between the model prediction and the inverse ∥P − M^-1∥/∥M^-1∥ <
τ. On this metric, accuracy is 98.2% with 5% tolerance, and 99.6% with 25%. When in doubt, the
model does not hallucinate, but provides a rough approximation to the correct solution M^-1.
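In NumPy terms, the two inversion metrics are simply the following (again treating the L^1 norm as the entrywise sum of absolute values, an assumed convention):

```python
import numpy as np

def inversion_metrics(M, P):
    """P is the model's predicted inverse of M.  Returns the two relative errors
    discussed above: how far P M is from the identity, and how far P is from
    the exact inverse."""
    n = M.shape[0]
    identity_err = np.abs(P @ M - np.eye(n)).sum() / np.abs(np.eye(n)).sum()
    exact_inv = np.linalg.inv(M)
    inverse_err = np.abs(P - exact_inv).sum() / np.abs(exact_inv).sum()
    return identity_err, inverse_err
```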
This provides us with a complete mathematical explanation of model failure for the inversion task.
Whereas the model fails on 11% of test cases, its predictions are within 5% of the correct solution in
98.2%, and in 84% of failures ((98.2 − 89)/11). In such cases, the model predicts an approximation
of M^-1 that turns out not to be a “good inverse” of M. We know from theory that this happens when
M has a large condition number c(M ), and therefore we can use c(M ) to predict model failure. On
the test set, the matrices correctly inverted by the model have an average condition number of 15.8
(with a standard deviation of 13.3). For model failures, the average condition number is 640.5. The
decision rule c(M ) < 62 predicts model success in 98.0% of cases, and we have c(M ) < 62 in
99.0% of correct predictions, and c(M ) > 62 in 89.8% of failures. Note that for this task, we do
not even need to run the model, since success can be predicted from its input M only.
These experiments indicate that verifiers, external routines that can predict a model success from
its input or output, can be computed from problem-specific statistics. In linear algebra, this is of
little practical interest because model predictions can be checked in a few matrix multiplications.
Verifiers, however, are important in some areas of mathematics (e.g. theorem proving).
4 Out-of-domain generalization and the role of generators
On the eigenvalue computation task, I have shown, in [2], that models trained on Wigner matrices
(with eigenvalues distributed as a semicircle law) do not generalize to test sets with different distributions of eigenvalues (uniform, Gaussian, Laplace, or positive). On the other hand, models trained
on matrices with Laplace distributed eigenvalues (Laplace models, henceforth) generalize to all test
sets.
Table 1 presents additional results for seven eigenvalue distributions (semi-circle, uniform, Gaussian,
Laplace, absolute-semicircle, absolute-Laplace, and Marchenko-Pastur, see Appendix B.2). In the
first four, eigenvalues are symmetrically distributed around zero. In the last three, all eigenvalues are
positive. Also, the semicircle, uniform, absolute semicircle and Marchenko-Pastur distribution have
bounded support, whereas the Gaussian, Laplace and absolute Laplace allow for large eigenvalues.
| Train/Test | Semi-circle | Uniform | Gaussian | Laplace | abs-sc | abs-Lapl | Marchenko |
|---|---|---|---|---|---|---|---|
| Semi-circle | 100 | 34 | 36 | 39 | 1 | 5 | 0 |
| Uniform | 93 | 100 | 76 | 70 | 92 | 70 | 2 |
| Gaussian | 100 | 100 | 100 | 100 | 100 | 100 | 99 |
| Laplace | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| Abs-semicircle | 0 | 5 | 4 | 4 | 100 | 78 | 20 |
| Abs-Laplace | 0 | 4 | 5 | 5 | 100 | 100 | 100 |
| Marchenko-Pastur | 0 | 4 | 4 | 4 | 100 | 76 | 100 |
Table 1: Out-of-distribution generalization. Eigenvalues of 5x5 matrices. Rows are the training distributions, columns the test distributions.
The Wigner ensemble, the obvious default choice for random matrices, turns out to be the worst for
out-of-distribution generalization. On the other hand, the Gaussian or Laplace models generalize
to all test sets. Models trained on positive eigenvalue distributions do not generalize to symmetric (non-positive) test distributions, because negative eigenvalues were never encountered during
training (the 4 to 5% performance achieved by positive models on the Laplace, Gaussian and Uniform ensembles roughly corresponds to the number of positive matrices in the test set). But models
trained on symmetric distributions can generalize to positive matrices. Finally, it is interesting to
note that models trained on distributions with compact support (semi-circle, uniform, abs-semicircle
and Marchenko-Pastur) generalize less well than their unbounded counterparts.
Besides generalizing better, the Laplace and Gaussian models are more data efficient. To achieve
99% accuracy on a Wigner (semi-circle) test set, the Gaussian model needs 2.4 million training
examples, the Laplace model 2.7 and the semi-circle model 3.6. On a test set of positive matrices,
the Gaussian and Laplace model achieve 99% accuracy in 2.1 and 2.4 million examples, the positive
model in 3.9 million (see Table 6 in Appendix A.2). As problem dimension increases, so does the
advantage of Gaussian and Laplace models. On 8 × 8 matrices (Table 2), Gaussian and Laplace
models achieve 99% accuracy on a semi-circle test set after 11.4 and 13.2 million examples. After
36 million examples, our best uniform and semicircle models only achieve 91 and 0.5% accuracy.
-----
With deeper encoders (8 and 12 layers), the Laplace and Gaussian models can predict the eigenvalues
of 10 × 10 Wigner matrices with 100% accuracy (in 12.9 and 23.1 million examples; larger models
allow for faster learning). The best (semicircle) models reported in [2] only achieve 25% accuracy
after 360 million examples.
| Train/Test | Semi-circle | Uniform | Gaussian | Laplace | abs-sc | abs-Lapl | Marchenko |
|---|---|---|---|---|---|---|---|
| *8×8 matrices* | | | | | | | |
| Semicircle | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Uniform | 91 | 100 | 65 | 57 | 89 | 55 | 0 |
| Gaussian | 100 | 100 | 100 | 99 | 100 | 99 | 41 |
| Laplace | 100 | 100 | 100 | 100 | 100 | 100 | 97 |
| Abs-semicircle | 0 | 1 | 1 | 0 | 100 | 53 | 0 |
| Abs-Laplace | 0 | 1 | 1 | 1 | 100 | 100 | 98 |
| Marchenko-Pastur | 0 | 0 | 0 | 0 | 1 | 1 | 20 |
| *10×10 matrices* | | | | | | | |
| Gaussian (12/1 layers) | 100 | 100 | 100 | 98 | 100 | 97 | 3 |
| Laplace (8/1 layers) | 100 | 100 | 100 | 100 | 100 | 100 | 74 |
Table 2: Out-of-distribution generalization. Eigenvalues of 8x8 and 10x10 matrices, accuracy after 36
million examples. Rows are the training distributions, columns the test distributions.
Achieving 100% accuracy on test sets of positive matrices, with Laplace or Gaussian models, rules
out the idea that transformers interpolate between memorized examples. For 8 × 8 and 10 × 10
matrices, there is almost no overlap between the training and test sets: the probability of a Gaussian
or Laplace matrix having only positive eigenvalues is 0.4% and 0.1% respectively.
I obtain similar results when diagonalizing 5 × 5 matrices (Table 3). After training on 80 million
examples, the best models achieve 94% accuracy on the semicircle test set. As with the eigenvalue
task, the semicircle model does not generalize out of distribution, and the Gaussian and Laplace
generalize to all test distributions, and achieve about 80% accuracy. Previous observations on data
efficiency also apply: on the semicircle test set, the Laplace and Gaussian models need 37 and 45
million examples to achieve 90% accuracy, whereas the semicircle model needs 50 million (see
Table 7 in Appendix A.2).
| Train/Test | Semi-circle | Uniform | Gaussian | Laplace | abs-sc | abs-Lapl | Marchenko |
|---|---|---|---|---|---|---|---|
| Semicircle | 93 | 15 | 18 | 18 | 0 | 0 | 0 |
| Uniform | 91 | 80 | 62 | 56 | 81 | 50 | 2 |
| Gaussian | 94 | 80 | 81 | 77 | 84 | 69 | 80 |
| Laplace | 94 | 79 | 81 | 78 | 84 | 70 | 81 |
| Abs-semicircle | 0 | 3 | 2 | 2 | 82 | 51 | 15 |
| Abs-Laplace | 0 | 2 | 3 | 3 | 79 | 71 | 82 |
| Marchenko-Pastur | 0 | 1 | 2 | 2 | 64 | 42 | 88 |
Table 3: Out-of-distribution generalization. Diagonalization of 5x5 matrices. Rows are the training distributions, columns the test distributions.
Finally, experiments with symmetric matrix inversion (Appendix A.1) confirm that Gaussian and
Laplace distributions generalize better, and that models trained on positive matrices only generalize
to positive test sets. This suggests that the choice of a good training distribution might not be task-specific, and that some distributions may generalize out-of-domain for a large class of problems.
5 Conclusion
Experimenting with three problems of linear algebra, I have shown that transformers can learn mathematical properties: all their predictions, correct or not, satisfy some properties (correct eigenvalues
and unit vectors for diagonalization). Also, model failures do not happen at random, and can be
predicted from the input or the predicted solution. Finally, I show that selecting an appropriate training set improves both out-of-distribution generalization, and model performance and data efficiency.
-----
These experiments were designed by leveraging the mathematical theory of random matrices and
linear algebra. This demonstrates how mathematical problems can be used as frameworks for understanding transformers, trying to explain their predictions, and investigating the conditions under
which they generalize. I believe this is a promising direction for future research.
References
[1] Luca Biggio, Tommaso Bendinelli, Alexander Neitz, Aurelien Lucchi, and Giambattista Parascandolo. Neural symbolic regression that scales. arXiv preprint arXiv:2106.06427, 2021.
[2] François Charton. Linear algebra with transformers. arXiv preprint arXiv:2112.01898, 2021.
[3] Aurélien Dersy, Matthew D. Schwartz, and Xiaoyuan Zhang. Simplifying polylogarithms with
machine learning. arXiv preprint arXiv:2206.04115, 2022.
[4] Gene H. Golub and Charles F. van Loan. Matrix Computations. JHU Press, fourth edition,
2013.
[5] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv
preprint arXiv:1412.6980, 2014.
[6] Guillaume Lample and François Charton. Deep learning for symbolic mathematics. arXiv
preprint arXiv:1912.01412, 2019.
[7] Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. arXiv
preprint arXiv:1608.03983, 2016.
[8] Madan Lal Mehta. Random Matrices. Academic Press, 3rd edition, 2004.
[9] Rodrigo Nogueira, Zhiying Jiang, and Jimmy Lin. Investigating the limitations of transformers
with simple arithmetic tasks. arXiv preprint arXiv:2102.13019, 2021.
[10] Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving. arXiv preprint arXiv:2009.03393, 2020.
[11] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez,
Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000–6010, 2017.
[12] Sean Welleck, Peter West, Jize Cao, and Yejin Choi. Symbolic brittleness in sequence models:
on systematic generalization in symbolic mathematics. In Proceedings of the AAAI Conference
on Artificial Intelligence, volume 36, pages 8629–8637, 2022.
[13] Gal Yehuda, Moshe Gabel, and Assaf Schuster. It’s not what machines can learn, it’s what we
cannot teach. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International
Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research,
pages 10831–10841. PMLR, 13–18 Jul 2020.
-----
Appendix
A Additional results
A.1 Out-of-distribution generalization, symmetric matrix inversion
In the eigenvalue and diagonalization tasks, out-of-distribution (ood) experiments indicate that the
most robust models are trained on ensembles of matrices with long-tailed eigenvalue distributions
(Laplace and Gaussian). This may suggest that ood generalization happens when models are trained
on datasets that contain more “edge cases” for this specific problem – large absolute eigenvalues,
here. This would make the choice of a good (i.e. robust) training set a problem-specific issue.
To test this hypothesis, I experiment with the inversion of symmetric matrices. As discussed in
section 3, the “edge cases” for this task are matrices with large condition numbers – the ratio of
the largest and smallest absolute eigenvalues in this particular case. If the “edge case” hypothesis
were true, we would expect distributions with a larger range of condition numbers to generalize best.
Table 4 provides statistics about the distribution of condition numbers in our seven training and test
sets. Since the uniform distribution has smaller (and less variable) condition numbers, we should
expect it to generalize worst. On the other hand, the Laplace and the Marchenko-Pastur, having a
broad range of condition numbers, should generalize out of distribution.
| Distribution | Median | Third quartile | 90th percentile |
|---|---|---|---|
| Semi-circle | 9.4 | 20.4 | 52.0 |
| Uniform | 6.3 | 14.8 | 38.9 |
| Gaussian | 9.0 | 21.2 | 57.4 |
| Laplace | 14.1 | 34.5 | 99.5 |
| abs-semicircle | 9.5 | 20.6 | 51.7 |
| abs-Laplace | 14.3 | 35.4 | 98.3 |
| Marchenko-Pastur | 190 | 885 | 5293 |
Table 4: Distribution of condition numbers. On a set of 10000 randomly generated 5x5 symmetric matrices.
Table 5 presents results for 5 × 5 symmetric matrices. As in previous experiments, models trained
on positive matrices only generalize to positive test sets (the reverse being false). Models trained on
the uniform set, which has the smallest condition numbers, generalize just as well as the Gaussian
and Laplace models, which have the largest condition numbers. This invalidates our hypothesis. We
also note that while matrix inversion is only loosely related to eigenvalues and their distribution, the
Laplace model performs best on this task as well. This result needs to be confirmed, but it does
suggest that certain ensembles of matrices (Laplace and Gaussian) are robust for several tasks of
linear algebra.
| Train/Test | Semi-circle | Uniform | Gaussian | Laplace | abs-sc | abs-Lapl | Marchenko |
|---|---|---|---|---|---|---|---|
| Semi-circle | 81 | 18 | 25 | 26 | 1 | 17 | 0 |
| Uniform | 67 | 76 | 63 | 45 | 76 | 50 | 2 |
| Gaussian | 62 | 72 | 63 | 45 | 71 | 51 | 5 |
| Laplace | 65 | 75 | 65 | 49 | 76 | 58 | 7 |
| Abs-semicircle | 0 | 2 | 2 | 2 | 84 | 59 | 5 |
| Abs-Laplace | 0 | 3 | 2 | 2 | 87 | 75 | 17 |
| Marchenko-Pastur | 0 | 3 | 3 | 2 | 85 | 66 | 16 |
Table 5: Generalization with different generators. Inversion of 5x5 symmetric matrices. Rows are training
data, columns test data.
A.2 Out-of-distribution results: learning speeds
Table 6 indicates the number of training samples needed for a model to achieve 99% accuracy on
the eigenvalue task. On both the semi-circle and positive test sets, Gaussian and Laplace models are
-----
more data-efficient than models trained on the test distribution. On the positive test set (eigenvalues distributed as the absolute value of a semi-circle law), the absolute Laplace is the most data-efficient of the three models trained on positive matrices. Absolute Laplace requires about 33% fewer examples than absolute semicircle (just like Laplace vs semi-circle in the non-positive case).
| Training distribution | Semi-circle | Absolute semi-circle |
|---|---|---|
| Semi-circle | 3.6 | - |
| Uniform | - | - |
| Gaussian | 2.4 | 2.1 |
| Laplace | 2.7 | 2.4 |
| Absolute semi-circle | - | 4.5 |
| Absolute Laplace | - | 3.9 |
| Marchenko-Pastur | - | 7.5 |
Table 6: Learning speed of different generators. Millions of examples to compute the eigenvalues of 5x5
matrices to 99% accuracy. Rows are the training distributions, columns the test distributions.
Finally, Table 7 indicates the sample size needed to achieve 90% accuracy when diagonalizing 5 × 5
matrices. Models need about ten times more data than for the eigenvalue task, but the advantage of
models trained on non-compact eigenvalue distributions (Laplace and Gaussian) remains.
| Training distribution | Semi-circle |
|---|---|
| Semi-circle | 49.5 |
| Uniform | 68.4 |
| Gaussian | 45.3 |
| Laplace | 36.9 |
Table 7: Learning speed of different generators. Millions of examples to compute the eigenvectors of 5x5
matrices to 90% accuracy.
B Architecture, training parameters and data sets
B.1 Architecture and training
All models used in this work are sequence-to-sequence transformers [11]. The models used to predict eigenvectors, in sections 2 and 3, have 6 layers in the encoder and one in the decoder, 512
dimensions and 8 attention heads. Their input are encoded with the FP15 scheme (one token per
coefficient), and their output with the P1000 (three tokens, sign, mantissa in base 1000, and exponent). The “half-trained” model with 70% accuracy used P1000 for the input and output. The model
used for matrix inversion in section 3 has the same architecture as in [2]: 6 layers, 516 dimensions
and 12 attention heads in the encoder, and 1 layer, 512 dimensions and 8 heads in the decoder. It
uses FP15 for its input, and P1000 for its output. In out-of-distribution experiments, models have 6
layers in the encoder and 1 in the decoder; and either P1000 in the encoder and decoder or FP15 in
the encoder and P1000 in the decoder.
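As an illustration of the output encoding, here is a rough sketch of P1000 as described above (sign, three-digit mantissa, power-of-ten exponent); the exact token vocabulary and rounding conventions of [2] may differ.

```python
def encode_p1000(x: float) -> list[str]:
    """Encode a float as three tokens: sign, mantissa in [100, 999], and exponent."""
    if x == 0.0:
        return ["+", "0", "E0"]
    sign = "+" if x > 0 else "-"
    mantissa, exponent = abs(x), 0
    while mantissa >= 1000.0:  # bring the mantissa into [100, 1000)
        mantissa, exponent = mantissa / 10.0, exponent + 1
    while mantissa < 100.0:
        mantissa, exponent = mantissa * 10.0, exponent - 1
    m = int(round(mantissa))
    if m == 1000:              # rounding pushed the mantissa out of range
        m, exponent = 100, exponent + 1
    return [sign, str(m), f"E{exponent}"]

print(encode_p1000(3.14159))   # ['+', '314', 'E-2'], i.e. +314 * 10^-2
```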
Models are trained to minimize the cross-entropy between their prediction and the correct solution,
encoded as sequences. They use the Adam optimiser [5], on batches of 64 examples, with a learning
rate of 0.0001, a linear warmup phase of 10000 optimisation steps, and cosine scheduling with a
period of 4000000 [7].
B.2 Data sets
The training and test data for the interpretability and failure experiments (sections 2 and 3) are generated as in [2]. All matrices have independent, identically distributed (iid) coefficients, sampled from
a uniform law over [−10, 10]. In out-of-distribution experiments (section 4), I generate symmetric matrices with iid Gaussian coefficients, with standard deviation 10/√3 (same as the uniform law
over [−10, 10]). For n×n matrices, Gaussian coefficients guarantee that matrix eigenvectors are uniformly distributed in all directions of ℝⁿ. Since their coefficients are iid, these are Wigner matrices,
-----
and their eigenvalues are distributed according to a semi-circle law [8]. To generate uniform, Gaussian and Laplace distributed matrices, I decompose M into its eigenvalues Λ and eigenvectors H, replace the eigenvalues by Λ₂, sampled from another distribution, and reassemble M = HΛ₂Hᵀ. I take the absolute values of Λ for the abs-semicircle distribution, and those of Λ₂ for the abs-Laplace. For the Marchenko-Pastur distribution, I sample a matrix N with iid Gaussian coefficients, with standard deviation √(10/√3), and compute M = NᵀN. All matrices are encoded using the P1000 and FP15 schemes from [2].
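A sketch of this generation procedure (my reconstruction; the symmetrization step and the scaling of the resampled eigenvalues are assumptions rather than details taken from the paper):

```python
import numpy as np

SIGMA = 10 / np.sqrt(3)   # same standard deviation as the uniform law over [-10, 10]

def wigner(n: int) -> np.ndarray:
    """Symmetric matrix with Gaussian coefficients (semicircle eigenvalue distribution)."""
    A = np.random.normal(0.0, SIGMA, (n, n))
    return (A + A.T) / 2

def with_spectrum(n: int, sample_eigenvalues) -> np.ndarray:
    """Keep the eigenvectors H of a Wigner matrix, but resample its eigenvalues."""
    _, H = np.linalg.eigh(wigner(n))
    return H @ np.diag(sample_eigenvalues(n)) @ H.T

def marchenko_pastur(n: int) -> np.ndarray:
    """M = N^T N with Gaussian N of standard deviation sqrt(10 / sqrt(3))."""
    N = np.random.normal(0.0, np.sqrt(SIGMA), (n, n))
    return N.T @ N

# Example: a 5x5 matrix with Laplace-distributed eigenvalues (scale chosen to match SIGMA).
laplace_eigs = lambda n: np.random.laplace(0.0, SIGMA / np.sqrt(2), n)
M = with_spectrum(5, laplace_eigs)
```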
C Related works
This paper builds on [2], which introduces the experiments, and provides initial results on out-of-distribution (OOD) generalization for the eigenvalues of 5 × 5 matrices. I introduce a new task,
inversion of symmetric matrices, conduct experiments on model failures, and expand the OOD results to larger matrices, and to two new tasks: diagonalization and matrix inversion.
The importance of data generators in math transformers was first stressed by Lample and Charton
[6]. When performing symbolic integration, they noticed that models trained on data generated by
differentiating random functions performed badly on test examples generated by integrating random
functions (and vice versa). Welleck et al. [12] provides additional results on the lack of robustness
of models trained to compute integrals.
Yehuda et al. [13] explore the theoretical limitations of models trained from synthetic mathematical
data. They argue that model performance is limited by the training data: which instances of the
problem the generator can provide. We believe our results might stand as a counter-example: if
“long range” out-of-distribution generalization is possible (as suggested by our experiments), then it might be
possible to solve hard instances of a problem, with a model trained on solvable instances.
-----
| [
"François, Charton"
] | 2022-01-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
What makes math problems hard for reinforcement learning: a case study | Using a long-standing conjecture from combinatorial group theory, we explore, from multiple angles, the challenges of finding rare instances carrying disproportionately high rewards. Based on lessons learned in the mathematical context defined by the Andrews-Curtis conjecture, we propose algorithmic improvements that can be relevant in other domains with ultra-sparse reward problems. Although our case study can be formulated as a game, its shortest winning sequences are potentially $10^6$ or $10^9$ times longer than those encountered in chess. In the process of our study, we demonstrate that one of the potential counterexamples due to Akbulut and Kirby, whose status escaped direct mathematical methods for 39 years, is stably AC-trivial. | null | [
"Sergei, Gukov",
"Ali, Shehper",
"Anibal M., Medina-Mardones",
"Bartłomiej, Lewandowski",
"Angus, Gruen",
"Piotr, Kucharski"
] | 2024-08-27T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2408.15332v1 | https://arxiv.org/abs/2408.15332 | https://www.semanticscholar.org/paper/6dd4f7609e064350bf7f83c01f169bb911778d49 |
|
When a language model is optimized for reasoning, does it still show embers of autoregression? An analysis of OpenAI o1 | In "Embers of Autoregression" (McCoy et al., 2023), we showed that several large language models (LLMs) have some important limitations that are attributable to their origins in next-word prediction. Here we investigate whether these issues persist with o1, a new system from OpenAI that differs from previous LLMs in that it is optimized for reasoning. We find that o1 substantially outperforms previous LLMs in many cases, with particularly large improvements on rare variants of common tasks (e.g., forming acronyms from the second letter of each word in a list, rather than the first letter). Despite these quantitative improvements, however, o1 still displays the same qualitative trends that we observed in previous systems. Specifically, o1 -- like previous LLMs -- is sensitive to the probability of examples and tasks, performing better and requiring fewer "thinking tokens" in high-probability settings than in low-probability ones. These results show that optimizing a language model for reasoning can mitigate but might not fully overcome the language model's probability sensitivity. | It is found that o1 substantially outperforms previous LLMs in many cases, with particularly large improvements on rare variants of common tasks, and shows that optimizing a language model for reasoning can mitigate but might not fully overcome the language model's probability sensitivity. | [
"Shunyu, Yao",
"Dan, Friedman",
"R. Thomas, McCoy",
"Mathew D., Hardy",
"Thomas L., Griffiths"
] | 2024-10-02T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2410.01792 | https://arxiv.org/abs/2410.01792 | https://www.semanticscholar.org/paper/5ad254fa8e53174635c3dd02adc408e581d29109 |
|
When and How Does Synthetic Data Improve Reasoning Capabilities of Language Models? | Training on model-generated synthetic data is a promising approach for finetuning LLMs, but it remains unclear when it helps or hurts. In this paper, we investigate this for reasoning problems via an empirical study, followed by a theoretical formalization of our observations. First, we find that while the typical approach of finetuning a model on synthetic correct or positive problem-solution pairs generated by capable models offers modest performance gains, sampling more correct solutions from the finetuned learner doubles the sample efficiency of synthetic data. At the same time, training on model-generated positives can amplify various spurious correlations, resulting in flat or even inverse scaling trends as the amount of data increases. Surprisingly, we find that several of these issues can be addressed if we also utilize negative responses, i.e. model-generated responses that are deemed incorrect via final answer checking. Crucially, these negatives must be constructed such that the training can appropriately recover the utility or credit of each intermediate step in the negative response. With this per-step scheme, we are able to attain consistent gains over only positive data, attaining performance similar to amplifying the amount of synthetic data by 8x. We show that training on per-step negatives can help to unlearn spurious correlations in the positive data, and is equivalent to advantage-weighted reinforcement learning (RL), implying that it inherits benefits of RL over imitating positive data alone. | null | null | null | null | NeurIPS 2024 | true | 0 | 0 | null | https://neurips.cc/virtual/2024/poster/96295 | null | null |
\textit{SKIntern}: Internalizing Symbolic Knowledge for Distilling Better CoT Capabilities into Small Language Models | Small Language Models (SLMs) are attracting attention due to the high computational demands and privacy concerns of Large Language Models (LLMs). Some studies fine-tune SLMs using Chains of Thought (CoT) data distilled from LLMs, aiming to enhance their reasoning ability. Furthermore, Some CoT distillation methods introduce external symbolic knowledge into the generation process to improve the limited knowledge memory, reasoning ability and out-of-domain (OOD) generalization of SLMs. However, the introduction of symbolic knowledge increases computational overhead and introduces potential noise. In this paper, we introduce $\textit{SKIntern}$, an innovative approach that empowers SLMs to internalize symbolic knowledge and few-shot examples gradually through a progressive fine-tuning process, guided by a predefined linear decay schedule under curriculum learning. By efficiently internalizing knowledge, $\textit{SKIntern}$ reduces computational overhead and speeds up the reasoning process by focusing solely on the question during inference. It outperforms state-of-the-art baselines by over 5\%, while reducing inference costs (measured in FLOPs) by up to $4\times$ across a wide range of SLMs in both in-domain (ID) and out-of-domain (OOD) tasks. Our code will be available at \url{https://github.com/Xnhyacinth/SKIntern}. | null | ## SKIntern: Internalizing Symbolic Knowledge for Distilling Better CoT Capabilities into Small Language Models
**Huanxuan Liao[1][,][2], Shizhu He[1][,][2], Yupu Hao[1][,][2], Xiang Li[1][,][2], Yuanzhe Zhang[1][,][2],**
**Kang Liu[1][,][2], Jun Zhao[1][,][2]**
1
The Laboratory of Cognition and Decision Intelligence for Complex Systems,
Institute of Automation, Chinese Academy of Sciences, Beijing, China
2
School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
{liaohuanxuan2023, haoyupu2023, lixiang2022}@ia.ac.cn {shizhu.he, kliu, jzhao}@nlpr.ia.ac.cn
**Abstract**
Small Language Models (SLMs) are attracting attention due to the high computational
demands and privacy concerns of Large Language Models (LLMs). Some studies fine-tune
SLMs using Chains of Thought (CoT) data distilled from LLMs, aiming to enhance their reasoning ability. Furthermore, Some CoT distillation methods introduce external symbolic
knowledge into the generation process to improve the limited knowledge memory, reasoning ability and out-of-domain (OOD) generalization of SLMs. However, the introduction of
symbolic knowledge increases computational
overhead and introduces potential noise. In
this paper, we introduce SKIntern, an innovative approach that empowers SLMs to internalize symbolic knowledge and few-shot examples gradually through a progressive finetuning process, guided by a predefined linear
decay schedule under curriculum learning. By
efficiently internalizing knowledge, SKIntern
reduces computational overhead and speeds up
the reasoning process by focusing solely on
the question during inference. It outperforms
state-of-the-art baselines by over 5%, while reducing inference costs (measured in FLOPs)
by up to 4× across a wide range of SLMs in
both in-domain (ID) and out-of-domain (OOD)
[tasks. Our code will be available at https:](https://github.com/Xnhyacinth/SKIntern)
[//github.com/Xnhyacinth/SKIntern.](https://github.com/Xnhyacinth/SKIntern)
Figure 1: Knowledge utilization comparisons of SKIn_tern and other typical CoT distillation methods. (i)_
Std-CoT: SLM is fine-tuned to generate the rationale
and answer for the question (Q -> R + A). (ii) KARD:
Fine-tune the SLM to generate the rationale and answer
based on the question and the retrieved symbolic knowledge (Q + K -> R + A). (iii): CasCoD: Decompose the
single CoT learning step into two comprehensive learning steps of rationale generation (Q -> R) and rationale
utilization (Q + R -> A). (iv): SKIntern: Like human interns, SLMs gradually absorb and internalize symbolic
knowledge provided by LLMs during the progressive
fine-tuning, thereby achieving efficient (Q -> R + A)
and effective reasoning (modeling K in parameters).
**1** **Introduction**

Large Language Models (LLMs) (Touvron et al., 2023; Yang et al., 2024) have greatly excelled at various complex reasoning tasks such as mathematical (Li et al., 2024a), symbolic (Suzgun et al., 2022) and logical (Dave et al., 2024) reasoning, by applying Chains of Thought (CoT) prompting (Wei et al., 2022) and In-Context Learning (ICL) (Ye et al., 2023; Shum et al., 2023). Nonetheless, the high computational expenses and data privacy issues associated with LLMs have highlighted the need for Small Language Models (SLMs) (Xu et al., 2024). However, these advanced reasoning and knowledge capabilities are typically modeled in larger models (≥13B), making it challenging to replicate them in SLMs (≤7B) (Kaplan et al., 2020).

To improve the reasoning ability of SLMs, existing works (Fu et al., 2023; Li et al., 2024b) aim to distill the reasoning ability of LLMs into SLMs by fine-tuning SLMs with high-quality rationales obtained from LLMs, known as standard CoT distillation (Std-CoT) (Magister et al., 2023). However, due to the limited parameter size of SLMs, they cannot effectively memorize all knowledge and model reasoning ability, making it difficult to generalize to out-of-domain (OOD) tasks.

Recently, several methods have been proposed to further improve the knowledge memory and reasoning ability of SLMs. For example, as illustrated in
Figure 1, KARD (Kang et al., 2023) uses external
knowledge bases to enhance the memory capacity
of SLMs, while CasCoD (Dai et al., 2024) employs cascading decomposition to support gradual
learning. However, these methods lead to two challenges: 1) **Redundant and noisy symbolic knowledge degrades the effect of CoT distillation.** Document retrieval based on similarity frequently results in repetitive and trivial content, complicating the model’s ability to extract key information (Liu et al., 2023). Additionally, retrieved documents often contain irrelevant or misleading information, introducing noise that diminishes the model’s performance. 2) **Long input and multi-stage generation reduce the inference efficiency of CoT distillation.** Processing additional documents and rationales imposes significant memory and computational burdens, and the complex inference process complicates deployment and implementation,
reducing overall efficiency. Therefore, a key challenge of CoT distillation is: Can we effectively and
**_efficiently transfer the rich knowledge and rea-_**
**_soning ability of LLMs through CoT distillation_**
**_while minimizing computational overhead?_**
To resolve the above challenge, we examine
the human learning process and draw analogies to
model fine-tuning. For instance, at first, an intern
typically needs detailed explanations, examples,
and documentation to learn new skills (Zou et al.,
2024). However, once they have internalized this
knowledge and mastered the required skills, such
extensive information is no longer needed. Therefore, we believe that if SLMs are provided with
detailed guidance and symbolic knowledge while
learning rationales from LLMs, their learning outcomes can be greatly enhanced. By gradually internalizing this knowledge into their parameters,
SLMs can independently develop efficient reasoning abilities, eliminating the need for additional
document retrieval or multi-stage generation.
To perform an efficient and effective CoT distillation, we introduce a novel approach SKIntern that
internalizes the symbolic knowledge during model
fine-tuning and enables efficient inference without
additional context. Specifically, our method comprises two key steps. Initially, for each training
instance, LLMs generate rationales and symbolic
knowledge (such as the learning summaries and
supplementary materials) and we select the most
relevant ones using cosine similarity. Secondly, we
gradually perform token-level symbolic knowledge
compression and instance-level example pruning
based on a predefined linear decay schedule. This
refined information is then used to fine-tune the
SLM to generate the rationale from the LLMs and
the answer. As the schedule progresses, both symbolic knowledge and examples are internalized into
the model’s parameters, enabling effective reasoning based solely on the questions during inference.
We evaluate SKIntern on open-source models like TinyLLaMA (Zhang et al., 2024) and
LLaMA2-7B (Touvron et al., 2023) across factual, mathematical, and general reasoning benchmarks. By internalizing symbolic knowledge into
parameters and addressing questions exclusively
during inference, SKIntern surpasses strong baselines in both ID and OOD tasks while significantly
reducing computational requirements (measured in
FLOPs). This supports our hypothesis that internalizing symbolic knowledge can significantly reduce
inference costs, thereby avoiding explicit processing during inference. Additionally, we find that the
performance of SKIntern can be further enhanced
by incorporating few-shot examples into parameters with minimal additional computation. These
improvements suggest that our method balances
efficiency and effectiveness, making it highly suitable for optimizing SLM inference performance in
cost-sensitive scenarios. In conclusion, the contributions of this paper are summarized as follows:
- We propose a novel CoT distillation method
_SKIntern designed to emulate the incremental_
learning process of interns, gradually learning
and mastering knowledge and skills.
- We progressively internalize the symbolic
knowledge generated by the LLM and the
selected examples into parameters, thereby
achieving effective and efficient inference
without the need for additional information.
- We conducted extensive experiments on 7 reasoning benchmarks. SKIntern outperforms
robust baselines by 5% in both ID and OOD
tasks, while reducing inference costs by up to
4× across a broad spectrum of SLMs.
**2** **Related Work**
**CoT Distillation transfers the reasoning ability of**
LLMs to SLMs, where reasoning ability is an emergent property that enables LLMs to excel in reasoning tasks through Chains of Thought (CoT) prompting (e.g., Let’s think step-by-step) (Wei et al., 2022;
-----
Ho et al., 2022). Recent works (Magister et al.,
2023; Fu et al., 2023) show that this CoT inference
mechanism can be used for distillation: fine-tuning
a smaller student model using CoT sequences extracted from a larger teacher model significantly
boosts performance. Further studies (Hsieh et al.,
2023; Li et al., 2024b) have proposed treating the
learning of rationales and answers as distinct optimization objectives. However, these approaches
often overlook the limited memory and reasoning
ability of SLMs, making it difficult to generalize
to OOD tasks. KARD (Kang et al., 2023) boosts
SLMs’ memory by retrieving external knowledge,
while CasCoD (Dai et al., 2024) refines rationale
perception through cascading decomposition learning. However, both methods require processing
more tokens (document retrieval and multi-stage
generation), which introduces additional complexity and uncontrollability in reasoning tasks. Our
proposed method mirrors how interns learn a new
task by first providing full symbolic knowledge and
examples and gradually internalizing them into the
parameters, achieving effective inference without
additional information.
**Prompt Compression condenses lengthy prompts,**
retaining only essential information while reducing length. This process can be divided into
three main methods: Information entropy-based
techniques (Li et al., 2023; Jiang et al., 2023)
use a small language model to calculate the selfinformation or error-proneness of tokens, removing those with lower error-proneness; Soft prompts
methods (Chevalier et al., 2023; Mu et al., 2023)
require fine-tuning LLM parameters to use learnable tokens for condensing prompts; Interpretable
summaries methods (Xu et al., 2023; Pan et al.,
2024) extract data from the LLM to train models
for generating more interpretable text summaries.
A method analogous to ours is PromptIntern (Zou
et al., 2024), which achieves prompt compression
through progressive fine-tuning. We internalize
knowledge and examples into the parameters by
gradually pruning the prompt during training, allowing the prompt to be discarded during inference.
**3** **Methodology**
In this section, we introduce the detailed procedures of SKIntern. As illustrated in Figure 2, SKIn_tern starts with the full knowledge and examples,_
and progressively prunes tokens to gradually internalize them into the model’s parameters, reducing
the prompt length and the number of computations
towards the model. Below, we first describe how
to extract CoT and symbolic knowledge from the
teacher LLM in § 3.1. Then we introduce techniques for symbolic knowledge compression and
examples pruning to convert them into parameters
in § 3.2. Finally, we present a customized progressive fine-tuning pipeline for SKIntern in § 3.3.
Note that, compared with Std-CoT, SKIntern achieves strong results during inference without additional knowledge or examples in the input, relying only on the knowledge stored in its parameters.
**3.1** **Rationale and Knowledge Generation**
**Rationale Generation.** In our problem setup, we assume a given training dataset $D_{\mathrm{train}} = \{(x_i, y_i)\}_{i=1}^{n}$ for the target task, where $x_i$ is the input sequence (the question in QA) and $y_i$ is the label (the answer in QA). LLMs can generate high-quality rationales, which is known as an emergent ability (Ho et al., 2022). Our objective is to transfer this capability to SLMs through CoT distillation. Firstly, we leverage CoT prompting (Wei et al., 2022) to guide the teacher LLM in generating $l$ rationales for each training data point: $r_{ij} = \mathrm{LLM}(p_c, x_i, y_i)$, where $r$ are the generated rationales, $j \in \{1, \ldots, l\}$, and $p_c$ is the prompt shown in Appendix D.1. To maintain high-quality CoT data, we filter out reasoning processes that do not yield correct results, retaining only the distilled CoT sequences that lead to accurate outcomes as the training data (Hsieh et al., 2023).
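A minimal sketch of this step (illustrative only: `query_teacher` stands in for a call to the teacher LLM with the CoT prompt $p_c$, and the final-answer check is deliberately simplistic):

```python
def collect_rationales(train_set, query_teacher, cot_prompt, l=4):
    """Keep up to l teacher rationales per (question, answer) pair that reach the gold answer."""
    distilled = []
    for question, answer in train_set:
        for _ in range(l):
            rationale = query_teacher(cot_prompt, question, answer)
            # Filter: retain only CoTs whose final prediction matches the gold answer.
            if rationale.strip().endswith(str(answer)):
                distilled.append({"question": question, "rationale": rationale, "answer": answer})
    return distilled
```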
**Symbolic Knowledge Generation. Rationales of-**
fer insights into the logic behind answers, which is
crucial for SLMs to respond more precisely. However, SLMs with limited parameters may struggle
to retain all training data and complex reasoning
capabilities, which can affect the quality of rationale generation (Kang et al., 2023). Furthermore,
this single learning might lead SLMs to focus on
directly answering questions after reading, potentially impairing their ability to generalize in reasoning (Dai et al., 2024). Hence, it is imperative
to present the SLM with knowledge in the initial
stages of learning to facilitate its understanding.
We use the prompt $p_k$, given in Appendix D.2, to enable the teacher LLM to generate learning summaries $k^m$ that incorporate thinking processes, and supplemental knowledge $k^p$, collectively referred to as symbolic knowledge $k$. Formally, the teacher LLM generates $m$ pieces of knowledge from the question $x_i$, the rationale $r_i$ and the answer $y_i$:
-----
**(a) SKIntern framework** **(b) Schedule-wise Fine-tune**
Figure 2: Overview of the SKIntern framework. SKIntern starts with full symbolic knowledge and examples,
and progressively prunes them to gradually internalize knowledge, reducing the prompt length and the number of
computations towards the SLM. Based on schedule S, we perform effective knowledge compression and example
pruning before fine-tuning the SLM to generate rationales and answers. Gradual fine-tuning makes SLMs internalize
knowledge and examples into parameters, thereby enhancing performance without increasing computational cost.
$$k_{ij} = \mathrm{LLM}(p_k, x_i, y_i, r_i), \quad \text{where } j \in \{1, \ldots, m\}$$
A rationale typically addresses a specific question,
whereas knowledge generally offers broader explanations, methods and outlines.
**3.2** **Progressive Internalization**
Before this work, knowledge augmentation has
been successfully applied to optimize SLM inference (Kang et al., 2023). However, these methods
necessitate processing full knowledge during both
training and inference phases, significantly increasing computation overhead. Consequently, they are
unsuitable for scenarios with limited computational
resources. In contrast, by pruning the number of
tokens gradually during the training phase, SKIntern processes only the question during inference
without requiring additional symbolic knowledge.
We implement a predefined schedule S to regulate the pruning rate of knowledge and examples.
At each step, the pruned symbolic knowledge and
few-shot examples are appended to the question,
fine-tuning the SLM over E/T epochs, where the
total training spans E epochs. As shown in Figure
2 (a), with T total schedule steps, the value of S
progressively decreases from 1 to 0. As the compression rate increases and fine-tuning progresses,
the knowledge in the input gradually reduces to 0,
leading to the internalization of knowledge into the
model’s parameters.
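A small sketch of such a schedule (my formulation: the text only requires that $\mathcal{S}$ decays from 1 to 0 over $T$ steps, with $E/T$ epochs spent at each step; §4.6 compares linear and exponential patterns):

```python
def linear_schedule(T: int) -> list[float]:
    """S_t for t = 0, ..., T-1, decaying linearly from 1 (full knowledge) to 0 (none)."""
    return [1.0 - t / (T - 1) for t in range(T)]

print(linear_schedule(4))  # approximately [1.0, 0.67, 0.33, 0.0]; each value is used for E/T epochs
```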
**Symbolic Knowledge Compression. Inspired by**
prompt compression works (Pan et al., 2024), we
aim to gradually increase the compression rate to
reduce the symbolic knowledge at the token level, determined by $\mathcal{S}_t$ at the $t$-th step, and internalize it into the parameters, which can be expressed as:

$$k_i^t = \mathrm{LLMLingua2}(k_i, \mathcal{S}_t) \quad (1)$$

where LLMLingua2 (Pan et al., 2024) is a task-agnostic prompt compression method that distills knowledge from the LLM and fine-tunes the encoder to compress prompts without losing key information, and $k_i^t$ is the compressed symbolic knowledge at the $t$-th step, varying with the schedule $\mathcal{S}_t$.
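A sketch of schedule-driven compression; I assume the public `llmlingua` package here, and the model name and call below are illustrative and may differ from the authors' exact setup.

```python
from llmlingua import PromptCompressor

# LLMLingua-2 compressor (the checkpoint name is an assumption, not taken from the paper).
compressor = PromptCompressor(
    model_name="microsoft/llmlingua-2-xlm-roberta-large-meetingbank",
    use_llmlingua2=True,
)

def compress_knowledge(knowledge: str, s_t: float) -> str:
    """Keep roughly a fraction s_t of the symbolic-knowledge tokens at schedule step t (Eq. 1)."""
    if s_t <= 0.0:
        return ""           # fully internalized: no knowledge left in the prompt
    if s_t >= 1.0:
        return knowledge    # first step: full, uncompressed knowledge
    return compressor.compress_prompt(knowledge, rate=s_t)["compressed_prompt"]
```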
**Example Pruning. During inference, incorporat-**
ing few-shot examples can significantly enhance
model performance, and incorporating these exam
-----
**4** **Experiment**
In this section, we conduct extensive experiments
and comprehensive analysis to evaluate the effectiveness of SKIntern on both in-domain (ID) and
out-of-domain (OOD) datasets.
**4.1** **Datasets**
Following Ying et al. (2024), we focus on three
practical abilities: factual, mathematical, and general reasoning. For each ability, we select a relevant public dataset as the ID dataset, integrate
its training data into the target dataset Dtrain for
mixed training, and combine its test data into the
evaluation dataset Deval. Additionally, each ability
includes OOD datasets in Deval, allowing us to evaluate the model’s ability to generalize and enhance
performance beyond the ID training environment.
**Factual Reasoning: We select the Multitask Lan-**
guage Understanding (MMLU) (Hendrycks et al.,
2021a) as the ID dataset, which includes multiplechoice questions across 57 subjects. For OOD evaluation, we use the ARC (Clark et al., 2018), comprising both Easy and Challenge segments.
**Mathematical Reasoning:** We select MetaMathQA (Yu et al., 2023) as the ID dataset, which
only has a training set that includes a high-quality
collection of mathematical reasoning questionanswer pairs, derived from GSM8K (Cobbe et al.,
2021) and MATH (Hendrycks et al., 2021b). We
use GSM8K as the ID evaluation and GSM8K+ (Li
et al., 2024a) for OOD evaluation.
**General Complex Reasoning: We chose BIG-**
Bench Hard (BBH) (Suzgun et al., 2022) as the ID
dataset, which includes 27 challenging tasks spanning arithmetic, symbolic reasoning, and more, derived from BIG-Bench (BB) (bench authors, 2023).
Most of the data consists of multiple-choice questions. For OOD evaluation, we use BB-Sub filtered
by CasCoD, and AGIEval (Zhong et al., 2023) subtasks about English multiple-choice questions.
**4.2** **Baselines**
We compare our method with the following baselines: 1) Teacher & Vanilla Student in Zero-shot
(Radford et al., 2019) and Zero-shot-CoT (Kojima
et al., 2022). 2) Fine-tuning involves fine-tuning
a model to generate answers given only questions.
The performance of the baselines above illustrates
the capability of SLMs to solve tasks using only
training data, without external guidance or additional knowledge. 3) CoT distillation includes
ples into the fine-tuning stage can further improve
the comprehension of various task inputs and outputs (Zou et al., 2024). However, directly adding
verbose minority examples to the input would increase the load on the context window and elevate
inference computation and latency. So we propose
a similarity-based instance-level pruning method
to internalize the examples into parameters. For
each training instance $(x_i, y_i)$, we begin by employing a relevance scoring function $\mathrm{sim}(\cdot, \cdot)$ to assess the similarity between it and other instances in the training set and select the $K$ most relevant examples $D_i^e$:

$$D_i^e = \{(x_j, y_j) \mid x_j \in \mathrm{top}K(\mathrm{sim}(x_i, x_j))\} \quad (2)$$
Inspired by compression techniques, we propose
instance-level examples pruning to leverage the
performance gains while mitigating the generation
of substantial additional overhead. We gradually
reduce the number of examples from K to 0 over
a total of T schedule steps, to achieve complete
example internalization. The number of examples
$K^t$ at the $t$-th step can be expressed as:

$$K^t = \lfloor K \times \mathcal{S}_t \rfloor \quad (3)$$

Finally, we randomly select $K^t$ examples from the set $D_i^e$ as examples $e_i^t$ for the $t$-th step of fine-tuning.
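The selection and pruning steps can be sketched as follows (illustrative: `embed` is any sentence-embedding function and is an assumption, not part of the paper):

```python
import random
import numpy as np

def top_k_examples(question, train_set, embed, K=8):
    """Eq. 2: select the K training instances most similar to the question."""
    q = embed(question)
    def cosine(text):
        v = embed(text)
        return float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
    ranked = sorted(train_set, key=lambda pair: cosine(pair[0]), reverse=True)
    return ranked[:K]

def prune_examples(candidates, s_t):
    """Eq. 3: randomly keep K^t = floor(K * S_t) of the selected examples at step t."""
    k_t = int(len(candidates) * s_t)  # floor
    return random.sample(candidates, k_t)
```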
**3.3** **SKIntern Pipeline**
**Fine-tuning SLMs with Rationales.** For each specific schedule step $\mathcal{S}_t$, we utilize the compressed symbolic knowledge $k_i^t$ and pruned examples $e_i^t$ for fine-tuning the SLM $p_\theta$ with trainable parameters $\theta$ to generate the rationale $r_{ij}$ and answer $y_i$ for the question $x_i$ as follows:

$$\mathcal{L}_t(\theta) = -\frac{1}{n \cdot l} \sum_{i=1}^{n} \sum_{j=1}^{l} \log p_\theta(r_{ij}, y_i \mid k_i^t, e_i^t, x_i) \quad (4)$$
We aim to minimize the negative log-likelihood
of the sequence comprising the rationale rij and
answer yi, ensuring rationale precedes the answer.
**Progressive Fine-tuning. For a total of T schedule**
steps, we fine-tune the SLM parameters with the
learning rate η for internalizing as follows:
$$\theta_{t+1} = \theta_t - \eta \nabla_\theta \mathcal{L}_t(\theta) \quad (5)$$
**Inference. After progressive fine-tuning, we utilize**
the updated model parameters, denoted as θT, to
conduct inferences without the need for additional
knowledge or examples. Consequently, we can
simply handle the question and complete efficient
and effective inference.
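Putting the pieces together, the progressive pipeline can be sketched as below (a sketch, not the released implementation: `compress_fn`, `prune_fn` and `finetune_fn` are placeholders for Eq. 1-style compression, Eq. 3-style pruning, and any supervised fine-tuning routine such as LoRA training; `examples[i]` is assumed to be a list of formatted example strings).

```python
def skintern_train(model, train_set, knowledge, examples,
                   compress_fn, prune_fn, finetune_fn, T=4, total_epochs=12):
    """Progressive fine-tuning over T schedule steps (Eqs. 4 and 5)."""
    schedule = [1.0 - t / (T - 1) for t in range(T)]        # linear decay from 1 to 0
    for s_t in schedule:
        batch = []
        for i, (question, rationale, answer) in enumerate(train_set):
            k_t = compress_fn(knowledge[i], s_t)            # token-level compression
            e_t = prune_fn(examples[i], s_t)                # instance-level example pruning
            prompt = "\n\n".join([*e_t, k_t, question]).strip()
            batch.append((prompt, f"{rationale}\n{answer}"))
        # Spend E/T epochs at this step, minimizing the NLL of rationale followed by answer.
        model = finetune_fn(model, batch, epochs=total_epochs // T)
    return model  # after the last step (S_t = 0), inference needs only the question
```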
-----
| Methods | BBH-test (ID) | GSM8K (ID) | BB-sub (OOD) | AGIEval (OOD) | GSM8K-PLUS (OOD) | ARC-E (OOD) | ARC-C (OOD) | Avg | Rel. FLOPs |
|---|---|---|---|---|---|---|---|---|---|
| *Closed-source and open-source models (Zero-shot-CoT)* | | | | | | | | | |
| GPT-3.5-turbo (Teacher) | 43.2 | 72.6 | 44.0 | 50.5 | 55.9 | 91.8 | 84.1 | 63.2 | - |
| LLaMA-3-70B-Instruct | 62.6 | 89.2 | 51.0 | 66.3 | 72.9 | 97.6 | 93.2 | 76.1 | - |
| *TinyLLaMA-1.1B based* | | | | | | | | | |
| Zero-shot (Radford et al., 2019) | 14.0 | 2.0 | 17.7 | 17.8 | 1.5 | 19.4 | 15.0 | 12.5 | ×1.0 |
| Zero-shot-CoT (Kojima et al., 2022) | 13.5 | 1.4 | 17.7 | 10.4 | 1.3 | 16.0 | 13.4 | 10.5 | ×1.0 |
| Fine-tuning | 48.8 | 3.5 | 26.0 | 21.2 | 3.7 | 28.0 | 24.6 | 22.3 | ×0.9 |
| Knowledge-Augmented Fine-tuning | 49.3 | 3.7 | 27.4 | 21.9 | 3.3 | 29.4 | 25.3 | 22.9 | ×3.7 |
| Std-CoT (Magister et al., 2023) | 47.8±.43 | 7.9±.27 | 27.6±.31 | 21.5±.56 | 4.3±.62 | 28.2±.69 | 25.0±.48 | 23.2 | ×1.0 |
| MT-CoT (Li et al., 2024b) | 44.1±.78 | 4.1±.35 | 25.0±.45 | 21.4±.64 | 2.8±.83 | 33.5±.52 | 25.1±.59 | 22.3 | **×0.9** |
| Step-by-step (Hsieh et al., 2023) | 42.4±.56 | 4.3±.47 | 26.2±.38 | 21.1±.72 | 3.1±.54 | 29.6±.61 | 25.9±.66 | 21.8 | **×0.9** |
| KARD (BM25) (Kang et al., 2023) | 49.5±.61 | 7.6±.40 | 26.9±.43 | 20.2±.48 | 4.0±.77 | 28.2±.85 | 26.5±.91 | 23.3 | ×3.9 |
| CasCoD (Dai et al., 2024) | 48.1±.49 | 6.8±.39 | 23.1±.64 | 19.4±.73 | 4.8±.48 | 29.0±.63 | 27.1±.42 | 22.6 | ×3.0 |
| **SKIntern (ours)** | **55.5**±.71 | **8.1**±.65 | **31.4**±.44 | **24.4**±.90 | **5.3**±.68 | **36.8**±.89 | **31.2**±.32 | **27.5** | ×1.0 |
| *LLaMA2-7B based* | | | | | | | | | |
| Zero-shot (Radford et al., 2019) | 17.3 | 2.7 | 18.6 | 19.2 | 2.4 | 25.2 | 20.6 | 17.0 | ×6.4 |
| Zero-shot-CoT (Kojima et al., 2022) | 13.5 | 3.1 | 12.2 | 10.3 | 2.1 | 29.1 | 20.2 | 12.9 | ×6.4 |
| Fine-tuning | 57.8 | 5.8 | 33.3 | 31.0 | 5.8 | 73.3 | 56.3 | 37.6 | ×5.6 |
| Knowledge-Augmented Fine-tuning | 58.7 | 6.3 | 34.2 | 31.8 | 6.1 | 75.1 | 57.0 | 38.5 | ×23.7 |
| Std-CoT (Magister et al., 2023) | 58.1±.74 | 20.5±.71 | 30.7±.48 | 23.6±.65 | 12.0±.26 | 73.4±.81 | 55.9±.78 | 39.2 | ×6.4 |
| MT-CoT (Li et al., 2024b) | 45.6±.43 | 6.8±.59 | 27.8±.75 | 31.7±.89 | 6.0±.72 | 74.2±.46 | 57.6±.38 | 35.7 | ×5.7 |
| Step-by-step (Hsieh et al., 2023) | 54.3±.37 | 8.4±.93 | 32.9±.55 | 32.4±.64 | 5.9±.57 | 77.7±.35 | 61.8±.87 | 39.1 | **×5.6** |
| KARD (BM25) (Kang et al., 2023) | 58.9±.53 | 27.5±.71 | 30.3±.45 | 18.9±.38 | 19.1±.73 | 73.7±.41 | 57.0±.82 | 40.8 | ×24.5 |
| CasCoD (Dai et al., 2024) | 58.9±.59 | 29.2±.75 | 32.2±.36 | 28.8±.29 | **21.4**±.79 | 74.7±.91 | 57.3±.62 | 43.2 | ×19.0 |
| **SKIntern (ours)** | **69.3**±.58 | **33.9**±.71 | **37.2**±.51 | **31.3**±.49 | 21.2±.83 | **78.1**±.24 | **62.1**±.67 | **47.6** | ×6.4 |
Table 1: Performance (%) of LLaMA2-7B (Touvron et al., 2023) and TinyLLaMA-1.1B (Zhang et al., 2024) with
different methods across seven selected datasets. Bold indicates the best in each setting. We report the mean and
standard deviation of accuracy with 3 different runs for CoT distillation methods. Relative FLOPs cost is calculated
relative to the TinyLLaMA with Zero-shot. We calculate the FLOPs required on BBH-test for each method.
**Std-CoT (Magister et al., 2023) which is the stan-**
dard CoT distillation method, enabling direct finetuning of the student model with CoT data; Step**by-step (Hsieh et al., 2023) is a multi-task method**
that extracts rationales and answers separately; MT**CoT (Li et al., 2024b) is another multi-task method**
that optimizes both answer prediction and CoT
generation simultaneously; CasCoD (Dai et al.,
2024) decomposes the traditional single-step learning process into two cascaded learning steps. 4)
**Knowledge-Augmentation involves attaching re-**
trieved passages to the question during both training and inference. This includes Knowledge**Augmented Fine-tuning focuses on generating**
answers only, and KARD (Kang et al., 2023) emphasizes learning the generation of rationales.
**4.3** **Implementations**
For all experiments, we use the LLaMA3-8B,
LLaMA2-7B (Touvron et al., 2023), Qwen2 (0.5B,
1.5B, 7B) (Yang et al., 2024) and TinyLLaMA1.1B (Zhang et al., 2024) as the student SLM. We
query the teacher model GPT-3.5-turbo to annotate
the CoTs data with the manual prompt (Suzgun
et al., 2022). Unless otherwise specified, T is set
to 4 (§4.6), and total epochs E is set to 12.

Figure 3: Accuracy (%) against FLOPs for varying model sizes. FLOPs calculations are based on processing all examples from the same task during inference.
We employ LoRA (Hu et al., 2022) for
parameter-efficient fine-tuning of the student SLMs.
All experiments are conducted on 2 A100 GPUs
with 80GB. During the inference stage, we utilize
vLLM (Kwon et al., 2023) to accelerate inference.
Detailed information about training, inference and
-----
hyperparameters is provided in Appendix A.

Figure 4: Efficiency on training data and model size. The backbone model for the data size variation is Qwen2-7B.
**4.4** **Main Results**
We report the performance and inference costs of
_SKIntern and baselines in Table 1 and Figure 3_
(More results are shown in Appendix B) and find:
**_SKIntern outperforms baselines with fewer FLOPs._** As shown in Figure 3, when FLOPs-matched (in a vertical comparison), SKIntern outperforms KARD, which retrieves documents to augment reasoning, and CasCoD, which enhances reasoning by cascaded decomposition. Specifically,
from Table 1, it is evident that SKIntern shows
an average improvement of 8.4% with LLaMA27B and 5.9% with TinyLLaMA-1.1B, respectively.
This highlights the utility of dynamic pruning and
gradual internalization of symbolic knowledge.
**_SKIntern is up to 4× more efficient than baselines._** Table 1 demonstrates that SKIntern uses
2-4× fewer FLOPs than state-of-the-art KARD and
CasCoD. Although other CoT distillation methods can achieve similar computational savings,
their performance is significantly worse than SKIn_tern (≥_ 8%). Specifically, their performance is
10% lower on the mathematical reasoning dataset
GSM8K and 15% lower on the complex reasoning
dataset BBH. Furthermore, SKIntern achieves comparable performance with fewer FLOPs, as shown
in Figure 3 (in a horizontal comparison).
**4.5** **Efficiency on Dataset and Model Sizes**
To evaluate the efficiency of SKIntern in terms of
training data and model size, we measured test accuracy using Qwen2 (Yang et al., 2024) models
across various methods while varying the amount
of training data and model size. As shown at the
bottom of Figure 4, SKIntern successfully transfers the reasoning ability of the teacher LLM into
the parameters, even with minimal training data.
As the amount of training data increases, SKIntern
consistently outperforms other baselines, with the
improvement magnitude growing as well. This suggests that SKIntern performs optimally across
**different data volumes and achieves superior**
**reasoning ability distillation. Even with a lim-**
ited dataset, SKIntern outperforms other methods,
demonstrating robustness and sample efficiency.
Regarding model size efficiency, as shown at
the top of Figure 4, SKIntern outperforms other
baselines across various model scales. Notably,
_SKIntern enables Qwen2-7B to surpass the teacher_
model, GPT-3.5 Turbo, in both ID and OOD tasks,
despite having fewer parameters. SKIntern offers
substantial advantages for models of varying sizes,
consistently outperforming other methods. These
results underscore the practical benefits of SKIn_tern in resource-limited environments, as it reduces_
the computational demands for SLMs while delivering performance on par with or surpassing larger
models. This further demonstrates that SLMs
**(0.5B) struggle to fully leverage CoT reasoning**
**generated by LLMs, highlighting the need for**
**our SKIntern approach.**
**4.6** **Analysis on Schedule**
**Schedule Pattern. We examine the effectiveness**
of different schedule patterns during the progressive fine-tuning process, focusing on their impact
on reasoning performance. The patterns tested include exponential, inverse exponential, and linear
decay. As shown in Table 2, the linear decay con
-----
| SKIntern | BBH | BB | AGIEval | GSM8K+ | ARC-E |
|---|---|---|---|---|---|
| *Pattern of schedule S* | | | | | |
| exp | 64.8 | 36.2 | 30.0 | 16.3 | 76.0 |
| exp⁻¹ | 59.5 | 31.2 | 28.8 | 15.4 | 73.9 |
| linear | **69.3** | **37.2** | **31.3** | **21.2** | **78.1** |
| *Step of schedule T* | | | | | |
| T = 3 | 60.2 | 33.4 | 29.1 | 15.5 | 74.8 |
| T = 4 | **69.3** | **37.2** | **31.3** | **21.2** | **78.1** |
| T = 7 | 65.7 | 35.0 | 30.0 | 20.9 | 76.6 |
Table 2: Comparison of schedule patterns and steps of
_SKIntern. The backbone model is LLaMA2-7B._
sistently delivers the highest performance, showcasing superior parsing efficiency and language
understanding. In contrast, the inverse exponential
schedule exhibits the lowest effectiveness, while
the exponential decay offers moderate performance
but remains inferior to the linear schedule. These
findings indicate that a gradual, steady reduction
**is more advantageous than a more aggressive**
**approach. Progressive fine-tuning with a linear de-**
cay schedule appears to yield optimal performance
compared to other patterns.
**Schedule Setup. We explore the optimal schedule**
step T for linear decay during progressive finetuning. With the total number of epochs set to
12, we chose the common divisors of 12 for linear decay, where T corresponds to the decay step
plus 1. As seen in Table 2, T = 4 offers the
best performance, while T = 7 shows slightly
lower results, and T = 3 yields the poorest performance. This suggests that overly frequent schedule changes hinder sufficient learning in the initial
stages, whereas sparse schedules cause large, disruptive jumps, complicating smooth progression
and increasing learning difficulty. Therefore, se**lecting an appropriate schedule step is crucial**
**for effectively internalizing knowledge and en-**
**hancing reasoning abilities in SLMs.**
**4.7** **Ablation Studies**
To demonstrate the effectiveness of SKIntern, we
conducted ablation studies using LLaMA2-7B by
creating three variants: (1) w/o $k^m$, which removes the learning summary during fine-tuning; (2) w/o $k^p$, where supplemental knowledge is excluded; and (3) w/o $e$, where example pruning is omitted.
As shown in Table 3, the removal of any of these
components results in reduced performance, highlighting the critical role of internalizing both knowledge and examples in enhancing SLMs’ complex
reasoning abilities during progressive fine-tuning.
| Methods | BBH | BB | AGIEval | GSM8K+ | ARC-E |
|---|---|---|---|---|---|
| **SKIntern** | **69.3** | **37.2** | **31.3** | **21.2** | **78.1** |
| w/o $k^m$ | 59.8 | 30.8 | 28.7 | 15.3 | 74.1 |
| w/o $k^p$ | 62.3 | 32.1 | 29.5 | 16.2 | 75.7 |
| w/o $e$ | 61.9 | 34.1 | 29.4 | 18.1 | 74.6 |
Table 3: Ablation studies on different components.
Figure 5: Ablation studies of k on vanilla methods.
Additionally, we investigate the effectiveness of
the generated symbolic knowledge (see Figure 5).
Incorporating learning summaries $k^m$ and supplementary knowledge $k^p$ into the original zero-shot,
zero-shot-cot, and few-shot-cot significantly enhances performance. Remarkably, this improvement occurs without fine-tuning, demonstrating the
utility and generalization of symbolic knowledge
in augmenting the model’s inference capabilities.
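For illustration, the following is a minimal sketch of how symbolic knowledge could be prepended to a vanilla prompt at inference time; the function and field names are our own, and the exact prompt wording used in the paper is given in Appendix D.

```python
from typing import Optional

def build_augmented_prompt(question: str,
                           learning_summary: Optional[str] = None,
                           supplementary_knowledge: Optional[str] = None,
                           cot: bool = True) -> str:
    """Prepend symbolic knowledge (k^m, k^p) to a zero-shot(-CoT) prompt.

    Illustrative sketch only; the paper's exact templates are listed in Appendix D.
    """
    parts = []
    if learning_summary:                 # k^m
        parts.append(f"Learning Summary:\n{learning_summary}")
    if supplementary_knowledge:          # k^p
        parts.append(f"Supplementary Knowledge:\n{supplementary_knowledge}")
    parts.append(f"Question: {question}")
    # Zero-shot-CoT trigger (Kojima et al., 2022); drop it for plain zero-shot.
    parts.append("Answer: Let's think step by step." if cot else "Answer:")
    return "\n\n".join(parts)
```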
**5** **Conclusion**
In this paper, we introduce SKIntern, a novel CoT
distillation method designed to internalize symbolic knowledge and rich examples into model parameters, thereby enhancing the ability of SLMs
to tackle complex reasoning tasks. Through a systematic schedule, the symbolic knowledge generated by the LLM, including learning summaries and supplementary knowledge, is progressively compressed, and the selected examples are refined. These elements are then used
to fine-tune the SLM, enabling it to produce coherent rationales and accurate answers. We implement
a customized progressive fine-tuning pipeline to
accommodate various schedule steps and training
epochs. Extensive experiments demonstrate that
our method not only improves reasoning performance on both in-domain (ID) and out-of-domain
(OOD) tasks but also significantly accelerates inference and reduces computational resource usage.
-----
**Limitations**
**Method** We have demonstrated through SKIntern that the performance of SLMs on complex reasoning tasks can be significantly improved while greatly reducing computational overhead. However, it is important to acknowledge the limitations of our research. The effectiveness of our knowledge enhancement largely depends on the progressive fine-tuning required to internalize the original symbolic knowledge and examples, which increases the complexity and cost of training. Additionally, using an LLM to generate supplementary symbolic knowledge incurs additional monetary cost for API calls.
**Task** While our current tests encompass factual
knowledge, mathematics, and complex reasoning,
the method’s efficacy for different tasks, such as
various coding exercises and extended text tasks,
requires further analysis and experimentation. Additionally, further investigation is needed to determine which types of symbolic knowledge and task
examples are more easily learned and internalized.
**Large Language Models** Regarding the experiments, given our limited computing and financial
budgets, we chose GPT-3.5-Turbo as the teacher.
Using GPT-4 would likely better verify the effectiveness of our method, SKIntern. Additionally,
our aim to enhance the complex reasoning ability
of SLMs restricted our choice to mainstream models, such as Llama2, Llama3, and Qwen2, thereby
excluding other excellent models like Phi3 and
DeepSeek. However, exploring larger LMs such
as 13B and 72B with SKIntern could be of great
interest, presenting a promising direction for future research. Our experimental results already indicate that SKIntern-enhanced models such as Llama3-8B and Qwen2-7B surpass the GPT-3.5-Turbo teacher and approach Llama3-70B.
**Ethical Considerations**
In this paper, we propose a novel knowledge-enhancement method that leverages the knowledge of LLMs. Although LLMs may generate inappropriate or discriminatory content, our approach itself does not introduce new ethical concerns. The datasets we use are public, and no privacy issues are involved.
**References**
[BIG bench authors. 2023. Beyond the imitation game:](https://openreview.net/forum?id=uyTL5Bvosj)
[Quantifying and extrapolating the capabilities of lan-](https://openreview.net/forum?id=uyTL5Bvosj)
[guage models. Transactions on Machine Learning](https://openreview.net/forum?id=uyTL5Bvosj)
_Research._
Alexis Chevalier, Alexander Wettig, Anirudh Ajith, and
[Danqi Chen. 2023. Adapting language models to](https://doi.org/10.18653/v1/2023.emnlp-main.232)
[compress contexts. In Proceedings of the 2023 Con-](https://doi.org/10.18653/v1/2023.emnlp-main.232)
_ference on Empirical Methods in Natural Language_
_Processing, pages 3829–3846, Singapore. Associa-_
tion for Computational Linguistics.
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot,
Ashish Sabharwal, Carissa Schoenick, and Oyvind
[Tafjord. 2018. Think you have solved question an-](https://api.semanticscholar.org/CorpusID:3922816)
[swering? try arc, the ai2 reasoning challenge. ArXiv,](https://api.semanticscholar.org/CorpusID:3922816)
abs/1803.05457.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
Chengwei Dai, Kun Li, Wei Zhou, and Songlin Hu.
2024. Improve student’s reasoning generalizability through cascading decomposed cots distillation.
_arXiv preprint arXiv:2405.19842._
Neisarg Dave, Daniel Kifer, C. Lee Giles, and Ankur Arjun Mali. 2024. [Investigating symbolic capabilities of large language models](https://api.semanticscholar.org/CorpusID:269983499). _ArXiv_, abs/2405.13209.
Yao Fu, Hao-Chun Peng, Litu Ou, Ashish Sabharwal,
[and Tushar Khot. 2023. Specializing smaller lan-](https://api.semanticscholar.org/CorpusID:256390607)
[guage models towards multi-step reasoning. ArXiv,](https://api.semanticscholar.org/CorpusID:256390607)
abs/2301.12726.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou,
Mantas Mazeika, Dawn Song, and Jacob Steinhardt.
[2021a. Measuring massive multitask language under-](https://openreview.net/forum?id=d7KBjmI3GmQ)
[standing. In International Conference on Learning](https://openreview.net/forum?id=d7KBjmI3GmQ)
_Representations._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
Jacob Steinhardt. 2021b. Measuring mathematical
problem solving with the math dataset. NeurIPS.
Namgyu Ho, Laura Schmid, and Se-Young Yun. 2022.
[Large language models are reasoning teachers. In](https://api.semanticscholar.org/CorpusID:254877399)
_Annual Meeting of the Association for Computational_
_Linguistics._
Cheng-Yu Hsieh, Chun-Liang Li, Chih-Kuan Yeh,
Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner,
Ranjay Krishna, Chen-Yu Lee, and Tomas Pfister.
2023. Distilling step-by-step! outperforming larger
language models with less training data and smaller
model sizes. arXiv preprint arXiv:2305.02301.
Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu
-----
[Chen. 2022. LoRA: Low-rank adaptation of large](https://openreview.net/forum?id=nZeVKeeFYf9)
[language models. In International Conference on](https://openreview.net/forum?id=nZeVKeeFYf9)
_Learning Representations._
Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin,
[and Edouard Grave. 2021. Unsupervised dense infor-](https://doi.org/10.48550/ARXIV.2112.09118)
[mation retrieval with contrastive learning.](https://doi.org/10.48550/ARXIV.2112.09118)
Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng
Li, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2023.
[Longllmlingua: Accelerating and enhancing llms](https://arxiv.org/abs/2310.06839)
[in long context scenarios via prompt compression.](https://arxiv.org/abs/2310.06839)
_Preprint, arXiv:2310.06839._
Minki Kang, Seanie Lee, Jinheon Baek, Kenji
Kawaguchi, and Sung Ju Hwang. 2023. Knowledgeaugmented reasoning distillation for small language
models in knowledge-intensive tasks. In Advances in
_Neural Information Processing Systems 37: Annual_
_Conference on Neural Information Processing Sys-_
_tems 2023, NeurIPS 2023, December 10-16, 2023,_
_New Orleans._
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B
Brown, Benjamin Chess, Rewon Child, Scott Gray,
Alec Radford, Jeffrey Wu, and Dario Amodei. 2020.
Scaling laws for neural language models. _arXiv_
_preprint arXiv:2001.08361._
Vladimir Karpukhin, Barlas O˘guz, Sewon Min, Patrick
Lewis, Ledell Yu Wu, Sergey Edunov, Danqi
[Chen, and Wen tau Yih. 2020. Dense passage re-](https://api.semanticscholar.org/CorpusID:215737187)
[trieval for open-domain question answering. ArXiv,](https://api.semanticscholar.org/CorpusID:215737187)
abs/2004.04906.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in
_neural information processing systems, 35:22199–_
22213.
Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying
Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E.
Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model
serving with pagedattention. In Proceedings of the
_ACM SIGOPS 29th Symposium on Operating Systems_
_Principles._
Qintong Li, Leyang Cui, Xueliang Zhao, Lingpeng
[Kong, and Wei Bi. 2024a. Gsm-plus: A compre-](https://api.semanticscholar.org/CorpusID:268063753)
[hensive benchmark for evaluating the robustness](https://api.semanticscholar.org/CorpusID:268063753)
[of llms as mathematical problem solvers.](https://api.semanticscholar.org/CorpusID:268063753) _ArXiv,_
abs/2402.19255.
Shiyang Li, Jianshu Chen, Yelong Shen, Zhiyu Chen,
Xinlu Zhang, Zekun Li, Hong Wang, Jing Qian,
Baolin Peng, Yi Mao, Wenhu Chen, and Xifeng Yan.
[2024b. Explanations from large language models](https://openreview.net/forum?id=rH8ZUcfL9r)
[make small reasoners better. In 2nd Workshop on](https://openreview.net/forum?id=rH8ZUcfL9r)
_Sustainable AI._
Yucheng Li, Bo Dong, Chenghua Lin, and Frank
[Guerin. 2023. Compressing context to enhance infer-](https://arxiv.org/abs/2310.06201)
[ence efficiency of large language models. Preprint,](https://arxiv.org/abs/2310.06201)
arXiv:2310.06201.
Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy
[Liang. 2023. Lost in the middle: How language mod-](https://api.semanticscholar.org/CorpusID:259360665)
[els use long contexts. Transactions of the Association](https://api.semanticscholar.org/CorpusID:259360665)
_for Computational Linguistics, 12:157–173._
Lucie Charlotte Magister, Jonathan Mallinson, Jakub
Adamek, Eric Malmi, and Aliaksei Severyn. 2023.
[Teaching small language models to reason. In Pro-](https://doi.org/10.18653/v1/2023.acl-short.151)
_ceedings of the 61st Annual Meeting of the Associa-_
_tion for Computational Linguistics (Volume 2: Short_
_Papers), pages 1773–1781, Toronto, Canada. Associ-_
ation for Computational Linguistics.
Jesse Mu, Xiang Lisa Li, and Noah D. Goodman.
[2023. Learning to compress prompts with gist to-](https://api.semanticscholar.org/CorpusID:258179012)
[kens. ArXiv, abs/2304.08467.](https://api.semanticscholar.org/CorpusID:258179012)
Zhuoshi Pan, Qianhui Wu, Huiqiang Jiang, Menglin Xia, Xufang Luo, Jue Zhang, Qingwei Lin, Victor Rühle, Yuqing Yang, Chin-Yew Lin, H. Vicky Zhao, Lili Qiu, and Dongmei Zhang. 2024. [Llmlingua-2: Data distillation for efficient and faithful task-agnostic prompt compression](https://api.semanticscholar.org/CorpusID:268531237). In _Annual Meeting of the Association for Computational Linguistics_.
Adam Paszke, Sam Gross, Francisco Massa, Adam
Lerer, James Bradbury, Gregory Chanan, Trevor
Killeen, Zeming Lin, Natalia Gimelshein, Luca
Antiga, Alban Desmaison, Andreas Köpf, Edward
Yang, Zach DeVito, Martin Raison, Alykhan Tejani,
Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: an
_imperative style, high-performance deep learning li-_
_brary. Curran Associates Inc., Red Hook, NY, USA._
Alec Radford, Jeffrey Wu, Rewon Child, David Luan,
Dario Amodei, Ilya Sutskever, et al. 2019. Language
models are unsupervised multitask learners. OpenAI
_blog, 1(8):9._
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and
[Percy Liang. 2016. SQuAD: 100,000+ questions for](https://doi.org/10.18653/v1/D16-1264)
[machine comprehension of text. In Proceedings of](https://doi.org/10.18653/v1/D16-1264)
_the 2016 Conference on Empirical Methods in Natu-_
_ral Language Processing, pages 2383–2392, Austin,_
Texas. Association for Computational Linguistics.
Stephen Robertson and Hugo Zaragoza. 2009. [The](https://doi.org/10.1561/1500000019)
[probabilistic relevance framework: Bm25 and be-](https://doi.org/10.1561/1500000019)
[yond. Found. Trends Inf. Retr., 3(4):333–389.](https://doi.org/10.1561/1500000019)
KaShun Shum, Shizhe Diao, and Tong Zhang. 2023.
Automatic prompt augmentation and selection with
chain-of-thought from labeled data. arXiv preprint
_arXiv:2302.12822._
Mirac Suzgun, Nathan Scales, Nathanael Scharli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung,
Aakanksha Chowdhery, Quoc V. Le, Ed Huai hsin
[Chi, Denny Zhou, and Jason Wei. 2022. Challenging](https://api.semanticscholar.org/CorpusID:252917648)
[big-bench tasks and whether chain-of-thought can](https://api.semanticscholar.org/CorpusID:252917648)
[solve them. In Annual Meeting of the Association for](https://api.semanticscholar.org/CorpusID:252917648)
_Computational Linguistics._
-----
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint_
_arXiv:2307.09288._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou,
et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural
_information processing systems, 35:24824–24837._
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien
Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen,
Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu,
Teven Le Scao, Sylvain Gugger, Mariama Drame,
[Quentin Lhoest, and Alexander Rush. 2020. Trans-](https://doi.org/10.18653/v1/2020.emnlp-demos.6)
[formers: State-of-the-art natural language processing.](https://doi.org/10.18653/v1/2020.emnlp-demos.6)
In Proceedings of the 2020 Conference on Empirical
_Methods in Natural Language Processing: System_
_Demonstrations, pages 38–45, Online. Association_
for Computational Linguistics.
Fangyuan Xu, Weijia Shi, and Eunsol Choi. 2023.
[Recomp: Improving retrieval-augmented lms with](https://api.semanticscholar.org/CorpusID:263830734)
[compression and selective augmentation.](https://api.semanticscholar.org/CorpusID:263830734) _ArXiv,_
abs/2310.04408.
Xiaohan Xu, Ming Li, Chongyang Tao, Tao Shen,
Reynold Cheng, Jinyang Li, Can Xu, Dacheng Tao,
and Tianyi Zhou. 2024. A survey on knowledge distillation of large language models. arXiv preprint
_arXiv:2402.13116._
An Yang, Baosong Yang, Binyuan Hui, Bo Zheng,
Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan
Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian
Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, Jin
Xu, Jingren Zhou, Jinze Bai, Jinzheng He, Junyang
Lin, Kai Dang, Keming Lu, Ke-Yang Chen, Kexin
Yang, Mei Li, Min Xue, Na Ni, Pei Zhang, Peng
Wang, Ru Peng, Rui Men, Ruize Gao, Runji Lin,
Shijie Wang, Shuai Bai, Sinan Tan, Tianhang Zhu,
Tianhao Li, Tianyu Liu, Wenbin Ge, Xiaodong Deng,
Xiaohuan Zhou, Xingzhang Ren, Xinyu Zhang, Xipin
Wei, Xuancheng Ren, Yang Fan, Yang Yao, Yichang
Zhang, Yunyang Wan, Yunfei Chu, Zeyu Cui, Zhenru
[Zhang, and Zhi-Wei Fan. 2024. Qwen2 technical](https://api.semanticscholar.org/CorpusID:271212307)
[report. ArXiv.](https://api.semanticscholar.org/CorpusID:271212307)
Jiacheng Ye, Zhiyong Wu, Jiangtao Feng, Tao Yu, and
Lingpeng Kong. 2023. Compositional exemplars for
in-context learning. In International Conference on
_Machine Learning, pages 39818–39833. PMLR._
Jiahao Ying, Mingbao Lin, Yixin Cao, Wei Tang,
Bo Wang, Qianru Sun, Xuanjing Huang, and
Shuicheng Yan. 2024. Llms-as-instructors: Learning
from errors toward automating model improvement.
_arXiv preprint arXiv:2407.00497._
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu,
Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. 2023.
Metamath: Bootstrap your own mathematical questions for large language models. _arXiv preprint_
_arXiv:2309.12284._
Peiyuan Zhang, Guangtao Zeng, Tianduo Wang, and
[Wei Lu. 2024. Tinyllama: An open-source small](https://api.semanticscholar.org/CorpusID:266755802)
[language model. ArXiv, abs/2401.02385.](https://api.semanticscholar.org/CorpusID:266755802)
Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan
Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma.
[2024. Llamafactory: Unified efficient fine-tuning](http://arxiv.org/abs/2403.13372)
[of 100+ language models. In Proceedings of the](http://arxiv.org/abs/2403.13372)
_62nd Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 3: System Demonstra-_
_tions), Bangkok, Thailand. Association for Computa-_
tional Linguistics.
Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang,
Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen,
and Nan Duan. 2023. Agieval: A human-centric
benchmark for evaluating foundation models. arXiv
_preprint arXiv:2304.06364._
Jiaru Zou, Meng Zhou, Tao Li, Shi Han, and Dong[mei Zhang. 2024. Promptintern: Saving inference](https://api.semanticscholar.org/CorpusID:270878548)
[costs by internalizing recurrent prompt during large](https://api.semanticscholar.org/CorpusID:270878548)
[language model fine-tuning. ArXiv, abs/2407.02211.](https://api.semanticscholar.org/CorpusID:270878548)
**A** **Experimental Settings**
**A.1** **Datasets**
For each ability, we select a relevant public dataset,
integrate its training data into the target dataset
_Dtrain for mixed training, and combine its test data_
into the evaluation dataset Deval. Additionally, each
ability includes an OOD dataset in Deval. This
setup allows us to evaluate the model’s ability to
generalize and enhance performance beyond the
ID training environment.
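For illustration, a minimal sketch of how the training and evaluation mixtures could be assembled from the datasets listed in Table 4 is shown below; the dictionary layout and the loading helpers are our own assumptions, and only the dataset-to-ability assignments come from the paper.

```python
# ID training sets are mixed into D_train; ID test sets (where available) and
# OOD test sets are combined into D_eval (see Table 4).
ABILITY_DATASETS = {
    "factuality": {"id": ["MMLU"], "ood": ["ARC-C", "ARC-E"]},
    "math":       {"id": ["MetaMathQA"], "ood": ["GSM8K", "GSM8K-PLUS"]},
    "reasoning":  {"id": ["BBH"], "ood": ["BB-sub", "AGIEval"]},
}

def build_splits(load_train, load_test):
    """load_train / load_test are assumed callables mapping a dataset name to a list of examples."""
    d_train, d_eval = [], []
    for ability, groups in ABILITY_DATASETS.items():
        for name in groups["id"]:
            d_train.extend(load_train(name))  # mixed in-domain training data
            d_eval.extend(load_test(name))    # in-domain test split, may be empty
        for name in groups["ood"]:
            d_eval.extend(load_test(name))    # OOD data is used for evaluation only
    return d_train, d_eval
```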
Table 4 shows the statistical details of the selected datasets.
For MMLU (Hendrycks et al., 2021a), we adhere
to previous prompt styles (Suzgun et al., 2022),
instructing the teacher model (e.g., GPT-3.5-Turbo)
to generate answers and Chains of Thought (CoT).
By excluding samples with incorrect answers, we
ultimately obtained a total of 1,556 samples. For
MetaMathQA (Yu et al., 2023), we acquired 3,500
samples through random sampling. For BB (BIG bench authors, 2023), we followed the CasCoD (Dai et al., 2024) methodology, filtering the original dataset for tasks containing the keyword "multiple choice" and randomly extracting up to 100 examples per task. Note that BB-sub does not include tasks that appear in BBH.
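A minimal sketch of these two filtering steps, i.e., dropping teacher CoTs whose final answer is wrong and building the BB subset from multiple-choice tasks, is given below; the record fields, the keyword check, and the sampling seed are our own assumptions.

```python
import random

def filter_correct_cots(examples: list[dict]) -> list[dict]:
    """Keep only samples whose teacher-generated final answer matches the gold label."""
    return [ex for ex in examples
            if ex["teacher_answer"].strip().lower() == ex["gold_answer"].strip().lower()]

def build_bb_sub(bb_tasks: dict[str, list[dict]], per_task: int = 100,
                 seed: int = 42) -> list[dict]:
    """Keep BIG-bench tasks whose metadata mentions "multiple choice" and sample up to 100 examples each."""
    rng = random.Random(seed)
    subset = []
    for task_meta, examples in bb_tasks.items():
        if "multiple choice" in task_meta.lower():  # keyword filter following CasCoD
            subset.extend(rng.sample(examples, min(per_task, len(examples))))
    return subset
```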
-----
| **Abilities** | **Task** | **# Train** | **# Train (Filtered)** | **# Test** |
| --- | --- | --- | --- | --- |
| Factuality | ID: MMLU | Dev + Val: 1,815 | 1,555 | - |
| | OOD: ARC-C | - | - | 1,172 |
| | OOD: ARC-E | - | - | 2,376 |
| Math | ID: MetaMathQA | 395,000 | 3,500 | - |
| | OOD: GSM8K | - | - | 1,319 |
| | OOD: GSM8K-PLUS | - | - | 1,400 |
| Reasoning | ID: BBH | 6,511 | 3,805 | 1,304 |
| | OOD: BB-sub | - | - | 5,384 |
| | OOD: AGIEval | - | - | 2,546 |
| **All** | **Sum** | - | 8,860 | 15,501 |
Table 4: Statistical details of the selected datasets. Since MMLU lacks official training data, we combined the
development and validation datasets to form a training set. To maintain sample balance, we matched the size of
MetaMathQA to that of BBH. We obtained balanced samples from two dataset augmentation modes, MATH_Aug
and GSM_Aug, resulting in a total of 3,500 samples.
**A.3** **Implementations**
Our implementations are based on huggingface
transformers v4.42.1 (Wolf et al., 2020) using PyTorch v2.3.1 (Paszke et al., 2019) and LlamaFactory (Zheng et al., 2024).
For CasCoD (Dai et al., 2024), we adhere to
the optimal settings recommended by the authors,
specifically setting α to 0.3. For KARD (Kang
et al., 2023), we employ the BM25 configuration
(Robertson and Zaragoza, 2009), a sparse retrieval
method based on word frequency, and retrieve three
documents per question. Wikipedia serves as the
external knowledge base for all datasets. For all
retrievers used in SKIntern, including BM25, Contriever (Izacard et al., 2021), and DPR (Karpukhin
et al., 2020), we utilize the Pyserini[1] library, which
offers a reproducible information retrieval framework.
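As a concrete illustration, a minimal sketch of BM25 retrieval with Pyserini for KARD-style knowledge augmentation is shown below; the prebuilt index name `wikipedia-dpr-100w` is our assumption of a suitable Wikipedia index, and the BM25 parameters are common defaults rather than values reported in the paper.

```python
from pyserini.search.lucene import LuceneSearcher

# The prebuilt index name is an assumption; use whichever Wikipedia index
# is available in your Pyserini version.
searcher = LuceneSearcher.from_prebuilt_index("wikipedia-dpr-100w")
searcher.set_bm25(k1=0.9, b=0.4)

def retrieve_documents(question: str, k: int = 3) -> list[str]:
    """Retrieve the top-k passages for a question (KARD retrieves 3 per question)."""
    hits = searcher.search(question, k=k)
    return [searcher.doc(hit.docid).raw() for hit in hits]
```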
**A.4** **Symbolic Knowledge Collection**
For specialized knowledge collection, using 2-shot
hand-written examples, the teacher model is configured with a temperature of 0.8 and a maximum
length of 1024 tokens. It generates specialized
knowledge corresponding to each incorrect example produced by the student SLMs. The prompt can be found in Appendix D.2.
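For illustration, a minimal sketch of how the teacher might be queried to produce specialized knowledge for an SLM's incorrect prediction is given below, assuming an OpenAI-style chat API; the helper names and the correctness check are our assumptions, while the temperature, token limit, and {QUESTION}/{ANSWER} placeholders follow this section and Appendix D.2.

```python
from typing import Optional
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def collect_specialized_knowledge(question: str, gold_answer: str,
                                  student_answer: str, prompt_template: str) -> Optional[str]:
    """Query the teacher only for examples the student SLM answered incorrectly."""
    if student_answer.strip() == gold_answer.strip():
        return None  # skip examples the student already gets right
    prompt = prompt_template.format(QUESTION=question, ANSWER=gold_answer)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",   # teacher model used in the paper
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,         # settings reported in Appendix A.4
        max_tokens=1024,
    )
    return response.choices[0].message.content
```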
**B** **Extended Results**

In Table 7, we present the results of the various models discussed in this paper, including LLaMA3-8B, Qwen2-0.5B, 1.5B, and 7B, utilizing different baseline methods along with the outcomes of SKIntern.

1https://github.com/castorini/pyserini
During the evaluation stage, we use Exact Match (Rajpurkar et al., 2016) as the evaluation metric. Answer generation for all involved models is conducted in a zero-shot setting, with all models set to a temperature of 0.8 and a maximum token length of 1024. The prompt can be found in Appendix D.1.
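As an illustration, a minimal Exact Match routine in the spirit of the SQuAD-style metric is sketched below; the normalization details and the answer-extraction marker are our assumptions, since the paper only cites Rajpurkar et al. (2016) for the metric.

```python
import re
import string

def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation and articles, and collapse whitespace (SQuAD-style)."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, reference: str) -> bool:
    return normalize_answer(prediction) == normalize_answer(reference)

def extract_final_answer(generation: str) -> str:
    """Take the text after the last 'the answer is' marker, per the prompt format in Appendix D.1."""
    marker = "the answer is"
    idx = generation.lower().rfind(marker)
    return generation[idx + len(marker):].strip(" .\n") if idx != -1 else generation.strip()
```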
**A.2** **Hyperparameter**
The complete set of stable hyperparameters used
for both baseline models and the proposed SKIntern
training and inference runs can be found in Table 5
and Table 6, respectively.
In our research, we ensured consistent hyperparameter settings across all baselines, including the
proposed SKIntern method, to maintain the fairness
of our comparative analysis. Detailed hyperparameters and their explanations are presented below.
For SKIntern, particularly in the fourth step, we
reduced the enhanced distillation parameters to 3
epochs and fixed the batch size at 8, as the concatenation of specialized knowledge results in longer inputs. We maintained a consistent batch size across
all baselines to eliminate any performance differences attributable to varying batch sizes, which depend on model size, with larger models using smaller batch sizes. The learning rate, a key parameter affecting model performance, was set to 5e-5, 1e-4, 2e-4, and 3e-4 in a series of experiments, revealing that larger models require smaller learning rates. Consequently, we adjusted the learning rate according to model size.
-----
| **Hyperparameter** | **TinyLLaMA-1.1B** | **LLaMA2-7B** | **LLaMA3-8B** | **Qwen2-0.5B** | **Qwen2-1.5B** | **Qwen2-7B** |
| --- | --- | --- | --- | --- | --- | --- |
| Max Input Len | 2048 | 4096 | 4096 | 4096 | 4096 | 4096 |
| Max Output Len | 128 | 128 | 128 | 128 | 128 | 128 |
| Optimizer | AdamW | AdamW | AdamW | AdamW | AdamW | AdamW |
| Learning Rate | 2e-4 | 1e-4 | 5e-5 | 2e-4 | 1e-4 | 1e-4 |
| Precision | fp16 | fp16 | fp16 | fp16 | fp16 | fp16 |
| # Training Epochs | 12 | 12 | 12 | 12 | 12 | 12 |
| # Warmup Steps | 10% of total training steps (all models) | | | | | |
| Batch Size | 32 | 16 | 8 | 32 | 16 | 8 |
| Gradient Accumulation | 1 | 2 | 4 | 1 | 2 | 4 |
| Rank of LoRA | 32 | 32 | 32 | 32 | 32 | 32 |
Table 5: Training hyperparameters.
**D** **Instruction Details**
**D.1** **Prompt for Generating CoTs**
We use the prompt template shown below to call the
teacher model to generate the CoTs for the training
datasets.
| **Hyperparameter** | **Student** | **Teacher (Rationale)** | **Teacher (Reasoning)** |
| --- | --- | --- | --- |
| do_sample | False | True | False |
| temperature | 0.6 | 0.8 | 0.6 |
| top-p | 0.95 | 1.0 | 0.95 |
| top-k | 50 | 50 | 50 |
| max_new_tokens | 1024 | 2048 | 1024 |
| # return sequences | 1 | 2 | 1 |
Table 6: Generation configs of students and teachers.
**C** **Case Study**
We present two cases from Tables 8 and 9 to compare the Chains of Thought (CoTs) generated by
_SKIntern_, the teacher large language model (LLM), and the standard CoT distillation method (Std-CoT). We use ✓ and ✗ to indicate the correctness
of the CoT.
Table 8 shows that the Std-CoT’s response is
confused and fails to comprehend the question accurately. Although it has a rough idea, its rationale
is entirely incorrect as it struggles to emulate the
rationale of the teacher LLM.
Table 9 presents the symbolic knowledge generated by the LLM for a training example in BBH,
encompassing learning summaries and supplementary information. This symbolic knowledge offers
detailed logical reasoning and positional insights,
which assist the LLM in understanding and solving
these problems.
-----
| **Methods** | **BBH-test** | **GSM8K** | **BB-sub** | **AGIEval** | **GSM8K-PLUS** | **ARC-E** | **ARC-C** | **Avg** | **Rel. FLOPs** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| _# Closed-source model and Open-source models (Zero-shot-CoT)_ | | | | | | | | | |
| GPT-3.5-turbo (Teacher) | 43.2 | 72.6 | 44.0 | 50.5 | 55.9 | 91.8 | 84.1 | 63.2 | - |
| LLaMA-3-70B-Instruct | 62.6 | 89.2 | 51.0 | 66.3 | 72.9 | 97.6 | 93.2 | 76.1 | - |
| _# LLaMA-3-8B based_ | | | | | | | | | |
| Zero-shot (Radford et al., 2019) | 18.2 | 2.8 | 27.4 | 29.7 | 2.2 | 50.8 | 50.0 | 25.9 | ×6.2 |
| Zero-shot-CoT (Kojima et al., 2022) | 26.5 | 6.6 | 23.5 | 32.2 | 3.7 | 68.1 | 55.5 | 30.9 | ×6.2 |
| Fine-tuning | 43.7 | 11.7 | 29.1 | 35.3 | 9.4 | 75.2 | 65.2 | 38.5 | ×5.4 |
| Knowledge-Augmented Fine-tuning | 30.4 | 9.9 | 14.4 | 13.0 | 8.5 | 40.8 | 33.9 | 21.6 | ×23.3 |
| Std-CoT (Magister et al., 2023) | 79.4 | 61.6 | 40.5 | 41.3 | 45.6 | 83.2 | 71.9 | 60.5 | ×6.2 |
| MT-CoT (Li et al., 2024b) | 62.8 | 13.1 | 36.3 | **43.9** | 11.4 | 83.6 | 72.3 | 46.3 | ×5.5 |
| Step-by-step (Hsieh et al., 2023) | 64.0 | 11.5 | 38.8 | 43.7 | 9.0 | 84.3 | 74.6 | 46.6 | ×5.4 |
| KARD (BM25) (Kang et al., 2023) | **81.4** | **64.3** | 43.1 | 43.4 | **48.6** | 85.6 | **76.1** | 63.2 | ×24.2 |
| CasCoD (Dai et al., 2024) | 32.1 | 59.1 | 18.1 | 23.6 | 46.1 | 34.6 | 27.7 | 34.5 | ×17.7 |
| **SKIntern (ours)** | 80.8 | 62.5 | 42.8 | 43.6 | 48.1 | **89.9** | 75.9 | **63.4** | ×6.2 |
| _# Qwen2-0.5B based_ | | | | | | | | | |
| Std-CoT (Magister et al., 2023) | 65.8 | 26.7 | 29.6 | 25.6 | 17.1 | 43.6 | 32.0 | 34.3 | ×0.4 |
| MT-CoT (Li et al., 2024b) | 47.2 | 5.3 | 30.5 | **27.7** | 4.4 | 46.0 | 35.1 | 28.0 | ×0.4 |
| Step-by-step (Hsieh et al., 2023) | 44.2 | 5.2 | 28.9 | 26.2 | 3.1 | 41.8 | 36.2 | 26.5 | ×0.4 |
| KARD (BM25) (Kang et al., 2023) | **66.3** | **30.9** | 31.7 | 23.9 | 18.2 | **48.9** | **37.2** | **36.7** | ×1.7 |
| CasCoD (Dai et al., 2024) | 37.6 | 27.7 | 20.0 | 15.6 | 17.6 | 21.5 | 14.8 | 22.1 | ×1.2 |
| **SKIntern (ours)** | 65.9 | **30.9** | **30.8** | 27.0 | **18.5** | 48.5 | 35.6 | **36.7** | ×0.4 |
| _# Qwen2-1.5B based_ | | | | | | | | | |
| Std-CoT (Magister et al., 2023) | 68.2 | 52.7 | 35.7 | 34.0 | 37.3 | 69.3 | 56.4 | 50.5 | ×1.3 |
| MT-CoT (Li et al., 2024b) | 58.0 | 6.7 | 36.4 | 34.2 | 6.1 | 72.7 | 57.5 | 38.8 | ×1.1 |
| Step-by-step (Hsieh et al., 2023) | 48.4 | 5.8 | 32.8 | 34.4 | 6.1 | 72.1 | 57.6 | 36.7 | ×1.1 |
| KARD (BM25) (Kang et al., 2023) | **72.2** | **55.4** | **37.4** | 31.2 | 39.4 | 74.0 | 62.2 | 53.1 | ×5.2 |
| CasCoD (Dai et al., 2024) | 31.7 | 53.4 | 25.4 | 24.7 | 38.8 | 57.1 | 47.8 | 39.8 | ×3.8 |
| **SKIntern (ours)** | 70.1 | 54.8 | 36.5 | **36.3** | **41.8** | **76.5** | **62.7** | **54.1** | ×1.3 |
| _# Qwen2-7B based_ | | | | | | | | | |
| Std-CoT (Magister et al., 2023) | **80.7** | 71.5 | 43.4 | **49.9** | 60.0 | 90.5 | 80.3 | 68.0 | ×6.0 |
| MT-CoT (Li et al., 2024b) | 70.0 | 15.2 | 42.6 | 49.4 | 12.1 | 90.9 | 80.2 | 51.5 | ×5.3 |
| Step-by-step (Hsieh et al., 2023) | 68.8 | 15.2 | 41.2 | 49.1 | 10.9 | 72.1 | 71.8 | 47.0 | ×5.2 |
| KARD (BM25) (Kang et al., 2023) | 80.2 | 75.3 | 43.2 | 49.6 | 60.6 | 92.1 | **83.5** | 69.2 | ×23.6 |
| CasCoD (Dai et al., 2024) | 35.7 | 72.3 | 23.8 | 37.4 | 60.6 | 70.1 | 63.1 | 51.9 | ×17.4 |
| **SKIntern (ours)** | 78.8 | **76.1** | **45.2** | 49.4 | **61.5** | **93.0** | **83.5** | **69.6** | ×6.0 |
Table 7: Performance (%) of LLaMA3-8B (Touvron et al., 2023) and Qwen2-0.5B, 1.5B and 7B (Yang et al., 2024)
with different methods across seven selected datasets. Bold indicates the best in each setting. Relative FLOPs cost
is calculated relative to the TinyLLaMA with Zero-shot. We calculate the FLOPs required on BBH-test for each
method.
Generate CoTs
You are an expert assistant teacher. The following are tasks about {Task_Name}. {Task Description}. Explain your reasoning first and your response should conclude with the format "Therefore, the answer is".
Question: {QUESTION}
Answer: Let’s think step by step.
**D.2** **Prompt for Specialized Knowledge**
**Collection**
**Generate Learning Summary** only prompts LLMs to analyze the SLM's errors and generate specialized knowledge in the form of a learning summary.
-----
Question: Kylar went to the store to buy glasses for his new apartment. One glass costs $5, but every second glass costs
only 60% of the price. Kylar wants to buy 16 glasses. How much does he need to pay for them?
**Symbolic Knowledge** **Output**
**Teacher** - The discount price of one glass is 60/100 * 5 =
$«60/100*5=3»3. If every second glass is cheaper,
that means Kylar is going to buy 16 / 2 = «16/2=8»8
cheaper glasses. So for the cheaper glasses, Kylar
is going to pay 8 * 3 = $«8*3=24»24. And for
the regular-priced glasses, Kylar will pay 8 * 5 =
$«8*5=40»40. So in total Kylar needs to pay 24 +
40 = $«24+40=64»64 for the glasses he wants to buy.
Therefore, the answer is 64. ✓
**Std-CoT** - 3.0 times a year means Pauline goes to the museum
3.0 * 5.0 = 15.0 times in 5.0 years. The cost of one
visit is $2.00, but in the last 5.0 years, it has increased
by 1.5 times, so the new cost is $2.00 * 1.5 = $3.00.
So, Pauline spent 15.0 * 3.0 = $45.00 on all visits to
the museum in dollars. Therefore, the answer is 45.
**KARD** 1. Conversely, deadweight loss can come from consumers if they buy a product even if it costs more
than it benefits them. To describe this, if the same
nail market had the government giving a $0.03 subsidy to every nail produced, the subsidy would push
the market price of each nail down to $0.07(...) 2.
Several glasses are utilized in transparent armor, such
as normal plate glass (soda-lime-silica), borosilicate
glasses, and fused silica. Plate glass has been the
most common glass used due to its low cost(...) 3.
To be eligible for the program, applicants must be
in financial need, have had a recent eye exam, and
have no other resources available to them to pay for
glasses, including federal or state programs or assistance from local charitable organizations(...)
The cost of one glass is $5. Kylar wants to buy 16
glasses, so he needs to pay for 16 glasses + 1 glass
= 17 glasses. Since every second glass costs 60% of
the price, the cost of every second glass is 60% of $5
= $3. So, the total cost of the 17 glasses is $5 x 17 +
$3 x (17/2) = $85 + $10. Therefore, Kylar needs to
pay $85 + $10 = $95 for the glasses. Therefore, the
answer is 95. ✗
**_SKIntern_** - 16 glasses is an even number, so every second glass
will cost 60% of the price. Kylar wants to buy 16
glasses, and every second glass costs 60% of the
price, so he needs to buy 16/2 = 8 glasses at the full
price of $5/glass. He also needs to buy 8 glasses at
the reduced price of 60% of $5/glass, which is 60/100
- $5 = $3/glass. So in total, Kylar needs to pay 8 * $5
+ 8 * $3 = $40 + $24 = $64 for the glasses. Therefore,
the answer is 64. ✓
Table 8: An inference case from GSM8K.
-----
Question: If you follow these instructions, do you return to the starting point?
Take 7 steps. Take 9 steps. Turn around. Turn right.
Options:
- Yes
- No
**Symbolic Knowledge**
**Learning Summary: When solving logical reasoning problems involving directions and movements,**
it’s essential to follow the steps methodically and visualize the movements if necessary. 1. Question
Comprehension: Begin by understanding the sequence of movements and turns. Identify the starting point
and note the instructions given, such as steps taken and direction changes. 2. Thought Steps: - Start by
taking note of the initial position and direction. - Follow the movement instructions step by step: - Take 7
steps in the initial direction. - Take 9 steps in the same direction. - Turn around (180-degree turn) which
reverses your direction. - Turn right, which changes your direction perpendicularly. - After executing
these steps, assess whether you return to the original position or direction. 3. Visualization: Drawing a
simple diagram or using a grid can help track the positions and directions. This visualization helps verify
whether the initial and final positions match. 4. Summative Experience: For similar questions, always
track each movement and turn carefully. Be aware of the effects of each instruction, particularly turns,
which change direction.
**Supplementary Knowledge: 1. Understanding Directions: - Familiarize yourself with basic directions**
(e.g., north, south, east, west) and understand relative turns (left, right, and turn around). - A 180-degree turn changes direction to the opposite, while a 90-degree right or left turn changes the direction
perpendicularly. 2. Visualization Techniques: - Use diagrams, sketches, or grids to map directions and
movements to see the path clearly. - Visual aids can help prevent confusion, especially when multiple
turns are involved. 3. Logical Sequencing: - Carefully follow each step in the sequence as instructed.
Misinterpreting a step or turn can lead to incorrect conclusions. - Practice breaking down instructions
into smaller parts to manage them more effectively. 4. Definitions: - Turn Around: A 180-degree turn
where you face the opposite direction from where you started. - Right Turn: A 90-degree turn to the right,
changing the direction perpendicular to the current path. By practicing these steps and understanding the
underlying concepts, students can improve their ability to solve similar direction-based logical reasoning
problems.
Table 9: A symbolic knowledge generation case from BBH-test.
-----
Generate Learning Summary
As an excellent educational teacher, your goal is to help students enhance their question-solving
abilities.
Based on an understanding and explanation of the question, along with relevant background
knowledge, fundamental concepts, and empirical conclusions, please generate a learning summary
in a numbered list format that will help students complete the same task in the future.
### Requirements:
1. Learning summary should outline the thought processes and precautions for addressing student
mistakes, including, but not limited to, question comprehension, thought steps and mathematical
calculations. It should also provide a summative experience to help students solve similar questions
in the future.
2. Ensure that the content is understandable and usable by students, while also being concise and
effective.
3. The obtained learning summary should be general and generalized, not aimed at specific
questions.
4. Keep these requirements in mind while generating the learning summary and supplementary
knowledge.
### Return Format:
Return in the following format:
Learning Summary: [Learning Summary]
Question: {QUESTION}
Answer: {ANSWER}
Please follow the requirements and provide the learning summary.
**Generate Learning Summary and Supplementary Knowledge** prompts LLMs to analyze the SLM's errors and generate specialized knowledge consisting of both a learning summary and supplementary knowledge, providing additional relevant background information to further assist SLMs in solving similar complex reasoning tasks in the future.
-----
Generate Learning Summary and Supplementary Knowledge
As an excellent educational teacher, your goal is to help students enhance their question-solving
abilities and to aid students in completing the same task in the future.
You should generate targeted, detailed thought processes and relevant background knowledge for
solving similar questions in the future.
Your role involves creating learning summaries and supplementary knowledge, specifically identifying the steps needed to solve the question and providing additional general knowledge in the
supplementary knowledge.
### Requirements:
1. Learning summary should outline the thought processes including, but is not limited to, question
comprehension, thought steps and mathematical calculations. It should also provide a summative
experience to help students solve similar questions in the future.
2. Supplementary knowledge should include a list of essential background information that
students need to solve the question. This should encompass, but is not limited to, mathematical
formulas, definitions, relevant world knowledge, and specific techniques.
3. Ensure that the content is understandable and usable by students, while also being concise and
effective.
4. The obtained learning summary should be general and generalized, not aimed at specific
problems, and the supplementary knowledge should also be general knowledge of the problem
without involving specific analysis.
5. Keep these requirements in mind while generating the learning summary and supplementary
knowledge.
### Return Format:
Return in the following format:
Learning Summary: [Learning Summary]
Supplementary Knowledge: [Supplementary Knowledge]
Question: {QUESTION}
Answer: {ANSWER}
Please follow the requirements and provide the learning summary and supplementary knowledge.
-----
| [
"Jun, Zhao",
"Huanxuan, Liao",
"Xiang, Li",
"Yupu, Hao",
"Yuanzhe, Zhang",
"Shizhu, He",
"Kang, Liu"
] | 2024-09-19T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.13183 | null | null |